US6128398A - System, method and application for the recognition, verification and similarity ranking of facial or other object patterns - Google Patents

System, method and application for the recognition, verification and similarity ranking of facial or other object patterns

Info

Publication number
US6128398A
Authority
US
United States
Prior art keywords
weight
weights
feature
value
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
US09/020,443
Inventor
Michael Kuperstein
James A. Kottas
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Miros Inc
Idemia Identity and Security USA LLC
Original Assignee
Miros Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Miros Inc filed Critical Miros Inc
Priority to US09/020,443 priority Critical patent/US6128398A/en
Application granted granted Critical
Publication of US6128398A publication Critical patent/US6128398A/en
Assigned to U.S. VENTURES L.P. reassignment U.S. VENTURES L.P. SECURITY AGREEMENT Assignors: ETRUE.COM, INC.
Assigned to VIISAGE TECHNOLOGY, INC. reassignment VIISAGE TECHNOLOGY, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ETRUE.COM, INC. F/K/A MIROS, INC.
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification
    • G - PHYSICS
    • G07 - CHECKING-DEVICES
    • G07C - TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00 - Individual registration on entry or exit
    • G07C9/20 - Individual registration on entry or exit involving the use of a pass
    • G07C9/22 - Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder
    • G07C9/25 - Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder using biometric data, e.g. fingerprints, iris scans or voice recognition
    • G07C9/253 - Individual registration on entry or exit involving the use of a pass in combination with an identity check of the pass holder using biometric data, e.g. fingerprints, iris scans or voice recognition visually

Definitions

  • the present invention relates to object recognition systems and more particularly to a neural network based system and method for verifying a match between two object patterns.
  • the task of automatic object recognition represents one of the major challenges to modern computational systems.
  • One frequently encountered problem in object recognition is the task of recognizing a match between a known and an unknown object.
  • One example of this problem occurs in the field of face recognition.
  • This task of facial matching is fraught with difficulties due to the many unpredictable differences which may occur between the previously stored facial image and the live image.
  • these differences may include one or more of the following: mis-registration between the two images, resulting from differences in the height of the face or the tilt of the head etc.; different lighting conditions which will result in different shadows which greatly affects the contrast distribution of the pattern of the eyes and the face; changes in the individual's appearance due to different hairstyles, make-up, jewelry, facial hair, facial expressions, etc.; different background clutter in the image; the facial images may be turned to the side which greatly affects the appearance of facial features.
  • the system will find features, such as eyes, nose, mouth, etc. and then determine facial recognition using the ratios of those features in a neural network.
  • these ratios also change with the different orientations and expressions.
  • systems of this sort are not always reliable when confronted with different orientations and expressions.
  • Eigenfaces are facial features used to discriminate features of one face from those of another.
  • an average face is derived and the differences from a target face are determined in terms of the eigenfaces.
  • a related problem is the excessive computational time required to perform an analysis of facial images to determine whether a match is present.
  • Many conventional techniques require massive computational capabilities and/or long computational time to perform the required analysis of the images.
  • Through-put is a problem in many applications where access to a system is required very rapidly and long delays for analysis of facial images cannot be tolerated.
  • a system and method for locating an object pattern and performing matching of object patterns is provided.
  • the present invention is adapted to operate on object patterns which consist of facial images.
  • the system is able to analyze reference and live facial images and determine if the two images are from the same person or not. It is able to do this without having previously seen either of the facial images.
  • the system utilizes a neural network which is trained to recognize matching facial images using a large number of example faces in different views and orientations. Once trained, the system is able to recognize when two facial images are from the same person without ever having been previously trained with the facial image of that particular person.
  • the system does not require storage of any information about the particular individual's face. This greatly minimizes the storage requirements of the system.
  • through-put is maximized because there is no need for accessing a database of previously stored features from an individual's face, and also because of the parallelism of the neural network approach of the present invention.
  • a method for verifying that two facial images are from the same person.
  • the method includes the steps of receiving reference and test facial images, the reference facial image being from a known face and the test facial image being from an unknown face.
  • Reference and test feature sets are then derived from the corresponding facial images.
  • Each element in each of the feature sets is then assigned to a look-up table of weights that is quantized by the amplitude of the feature element in a corresponding reference and test weight set in a neural network, wherein each weight set comprises a plurality of weights which correspond to each element in the feature set.
  • An output of the neural network is then determined by calculating the dot product normalized by weight vector length of the assigned weights in the two respective weight sets.
  • the system compares the results of the dot product output to a threshold, wherein an above threshold output indicates that the facial images are from the same person, and a below threshold output indicates that the two facial images are from different persons.
  • the method in accordance with the present invention undergoes a prior learning process which includes the steps of generating a set of training test feature sets and training reference sets.
  • the training test and reference feature sets are derived from different images of a plurality of known facial images.
  • the feature sets are processed as described above for the training test and reference feature sets.
  • when the output is correct the weights remain unchanged.
  • the value of the assigned weights of the test and reference weight vectors are adjusted to be farther apart from each other. If, on the other hand, the results are below threshold for facial images which originate from the same face the weights are adjusted to be closer to each other.
  • a system for verifying that two facial images are from the same person.
  • the system is adapted to implement the above-discussed method.
  • the present invention provides a system and method for verifying a match between two facial images.
  • the system is able to verify facial matches to a high degree of accuracy while tolerating a wide range of variations in the images including modifications of hair, make-up, facial expression and orientation.
  • the present invention accomplishes all this with very minimal storage requirements and with rapid throughput, thus making it practical for many applications where a large number of different faces must be verified quickly.
  • FIG. 1 is a diagram of a preferred embodiment of the system of the present invention which determines whether the face of a user of an access card matches a previously recorded image of the known user's face.
  • FIG. 2 is a diagram of the major steps performed by the face verification system in accordance with a preferred embodiment of the present invention.
  • FIG. 3 is a diagram of the major steps involved in deriving a facial image bounding box in accordance with the present invention.
  • FIG. 4 is a diagram of the major steps involved in locating the eyes in a facial image in accordance with the present invention.
  • FIG. 5 is a diagram showing additional details of the process of selecting weights in accordance with the present invention.
  • FIG. 6 is a diagram of an alternative embodiment of the present invention.
  • FIG. 7 is an alternative embodiment for using the present invention with a computer network.
  • the present invention is a system, method and application for the recognition, verification and similarity ranking of facial or other object patterns. While the techniques of the present invention have many applications in the general area of object recognition, they are particularly well suited to the task of face recognition. This task poses numerous difficulties particularly where many variations can be expected in the appearance of the facial image to be recognized. The techniques of the present invention result in a system which can verify a match between two facial images where significant differences may occur between the images.
  • referring to FIG. 1, an overall functional diagram of a preferred embodiment of the present invention is shown.
  • a user desires access to some entity.
  • This entity may comprise a computer network, an automated teller machine (ATM), access to a building, etc.
  • the user of this face verification system 10 in accordance with the present invention enters an access card 12 into a card reader 14.
  • the card reader generates an output comprising a previously stored reference image of the user's face 16. It should be appreciated that this previously stored image may be imprinted on the access card and/or may be stored in a database (not shown) accessible to the card reader 14.
  • the resulting reference image is shown in FIG. 1 as image 16.
  • This image may comprise, for example, a 100 pixel high by 80 pixel wide digitized image of the known user's face.
  • This digitized image is then input into the automated face verifier 18 of the present invention. It will be appreciated that in the preferred embodiment shown in FIG. 1, utilizing the access card 12, the digitized reference image 16 will be input into the automated face verifier 18 each time access is required. In another embodiment a database containing all of the possible known reference facial images may be stored and will be accessed by the automated face verifier by an index code each time verification is required.
  • a camera 20 acquires an image of the individual desiring access. This person will either present an access card 12 with his/her image on it or will have had his/her image previously stored in the database accessible to the verifier 18.
  • the camera 20 produces an image 22 which includes, for example, the entire head and shoulders of the individual. In accordance with a process described in more detail below, this image is adaptively clipped to include just the immediate area of the individual's face to yield a clip 24 which is the same size as the reference image 16.
  • This clip image 24 is then transferred to an automated face locator 26 which performs the function of registering the position and orientation of the face in the image 24.
  • the location of the face is determined in two phases. First, the clip image 24 is found by defining a bounding box at its perimeter. The location of the bounding box is based on a number of features. Second, the location of the individual's eyes is determined. Once the location of the eyes is determined, the face is rotated about an axis located at the midpoint (gaze point) between the eyes to achieve a precise vertical alignment of the two eyes. The purpose of the automated face locator 26 is to achieve a relatively precise alignment of the test image 24 with the reference image 16.
  • an automated face locator 26 will also be used to locate the face in the reference image 16. It should be noted that the adaptive automated face locator 26 is needed to locate the face in the test and reference images, because with standard (nonadaptive) image processing techniques, the derived outline of the face will necessarily include the outline of the hair. However in accordance with the present invention the clip image 24 defined by the bounding box will not include the hair.
  • the resulting test image 28 be accurately registered with respect to the reference image 16. That is, in accordance with the preferred embodiment described in more detail below an accurate location of the eyes is determined for the reference image 16 and an accurate location for the eyes is determined for the test image 24.
  • the two images are then registered so that the location of the midpoint between both eyes is registered in both images. This is important because the automated face verifier 18 will be attempting to determine whether the two images are those of the same person. If the two images are misregistered, the system is more likely to incorrectly determine that two images of the same person are from different persons, because corresponding features will not be aligned.
  • the automated face verifier 18 receives the clipped and registered reference image 16 and test image 28 and makes a determination of whether the persons depicted in the two images are the same or are different. This determination is made using a neural network which has been previously trained on numerous faces to make this determination. However, once trained, the automated face verifier is able to make the verification determination without having actually been exposed to the face of the individual.
  • referring to FIG. 2, a diagram of the generalized process of face verification in accordance with the present invention is shown. Initially a test image 22 and a reference image 30 are acquired. These images are then both processed by a clip processor 32 which defines the bounding box containing predetermined portions of each face.
  • the reference prerecorded image may be stored in various ways. The entire image of the previous facial image may be recorded as shown in the image 30 in FIG. 2, or only a previously derived clip 16 may be stored. Also, the clip may be stored in compressed form and then decompressed from storage for use. In addition, some other parameterization of the clip 16 may be stored and accessed later to reduce the amount of storage capacity required.
  • the prerecorded image could be stored on an access card as shown in FIG. 1 or on a smartcard consisting of magnetic media, optical media, two dimensional barcode media or active chip media. Alternatively, the prerecorded image may be stored in a database as discussed above.
  • the reference and test images 22, 30 are then clipped. This occurs in two stages. First, a coarse location is found in step 33. This yields the coarse location of the image shown in Blocks 23 and 24. Next, a first neural network 26 is used to find a precise bounding box shown in Blocks 28 and 29. In a preferred embodiment the region of this bounding box 28 is defined vertically to be from just below the chin to just above the natural hair line (or implied natural hair line if the person is bald or wearing a hat). The horizontal region of the face in this clipping region is defined to be between the beginning of the ears at the back of the cheek on both sides of the face. If one ear is not visible because the face is turned at an angle, the clipping region is defined to be the edge of the cheek or nose, whichever is more extreme. This process performed by clip processor 32 will be described in more detail below in connection with FIG. 3.
  • a second neural network 30 is used to locate the eyes.
  • the image is then rotated in step 34 about a gaze point as described in more detail in FIG. 4.
  • the above steps are repeated for both the reference and the test images.
  • the two images are then registered in step 88, using the position of the eyes as reference points.
  • the registered images are normalized in step 90. This includes normalizing each feature value by the mean of all the feature values. It should be noted that the components of the input image vectors represent a measure of a feature at a certain location, and these components comprise continuous-valued numbers.
  • a third neural network 38 is used to perform the verification of the match or mismatch between the two faces 22, 30.
  • weights are assigned in block 36, as described in more detail in connection with FIG. 5. It should be noted that the locations of the weights and features are registered. Once the weight assignments are made the appropriate weights in the neural network 38 are selected.
  • the assigned reference weights comprise a first weight vector 40 and the assigned test weights comprise a second weight vector 42.
  • the neural network 38 determines a normalized dot product of the first weight vector and the second weight vector in block 44. This is a dot product of vectors on the unit circle in N-dimensional space, wherein each weight vector is first normalized relative to its length.
  • the result is a number which is the output 46 of the neural network 38. This output is then compared to a threshold in decision step 48. Above threshold outputs indicate a match 50 and below threshold outputs indicate a mismatch 52.
  • An acquired image 54 may comprise either the test or the reference image.
  • This image includes the face of the subject as well as additional portions such as the neck and the shoulders and also will include background clutter.
  • An image subtraction process is performed in accordance with conventional techniques to subtract the background. For example, an image of the background without the face 56 is acquired. The background is then subtracted from the image of the face and background (block 58). The result is the facial image without the background 60.
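  • As a rough illustration of this kind of background subtraction, the sketch below assumes grayscale NumPy arrays and a simple per-pixel difference threshold; the threshold value and masking strategy are assumptions, not details taken from the patent.

      import numpy as np

      def remove_background(image_with_face, background, threshold=15):
          # Pixels that differ little from the stored background image are
          # treated as background and zeroed; the rest is kept as the subject.
          diff = np.abs(image_with_face.astype(np.int16) - background.astype(np.int16))
          return np.where(diff > threshold, image_with_face, 0).astype(np.uint8)

      # Example with synthetic 100 x 80 grayscale images.
      background = np.full((100, 80), 120, dtype=np.uint8)
      acquired = background.copy()
      acquired[20:80, 20:60] = 200            # a bright stand-in "face" region
      face_only = remove_background(acquired, background)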
  • in step 61, standard non-adaptive edge detection image processing techniques are used to determine a very coarse location of the silhouette of the face. It is coarse because this outline is affected by hair, clothing, etc.
  • the image is scaled down, for example, by a factor of 20 (block 62). This would reduce a 100 pixel by 80 pixel image down to 5×5.
  • the image is then scaled down to a number of different sizes.
  • the total resulting set of images may include the following scales: 5×5, 6×6, 7×7, 10×10, 12×12, 16×16 and 18×18. This results in a hierarchy of resolutions.
  • with regard to scaling, it should be noted that the convolution types and sizes are identical for all images at all scales; because they are identical, if the images are first scaled down to give coarsely scaled inputs, the convolutions will yield a measure of coarser features.
  • conversely, at finer scales the convolution will yield finer resolution features.
  • the scaling process results in a plurality of features at different sizes.
  • the next step is to perform a convolution on the scaled image in block 64.
  • this may be a 3×3 convolution.
  • the convolutions used have zero-sum kernel coefficients.
  • a plurality of distributions of coefficients are used in order to achieve a plurality of different feature types. These may include, for example, a center surround, or vertical or horizontal bars, etc. This results in different feature types at each different scale. Steps 62 and 64 are then repeated for a plurality of scales and convolution kernels.
  • the result is a feature space set 66 composed of a number of scales ("S") and a number of features ("F") based on a number of kernels ("K").
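  • The following sketch illustrates how such a multi-scale feature space might be assembled. The particular zero-sum kernels, the square scale sizes, and the SciPy routines used for resizing and convolution are all assumptions made for the example; the patent does not specify them.

      import numpy as np
      from scipy.ndimage import zoom
      from scipy.signal import convolve2d

      # Zero-sum 3x3 kernels: each sums to zero, so uniform regions give no response.
      KERNELS = [
          np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], dtype=float),  # center-surround
          np.array([[-1, 2, -1], [-1, 2, -1], [-1, 2, -1]], dtype=float),    # vertical bar
          np.array([[-1, -1, -1], [2, 2, 2], [-1, -1, -1]], dtype=float),    # horizontal bar
      ]
      SCALES = [5, 6, 7, 10, 12, 16, 18]        # hierarchy of resolutions

      def build_feature_space(image):
          # Returns a flat feature vector of S scales x K kernels responses.
          features = []
          for s in SCALES:
              factors = (s / image.shape[0], s / image.shape[1])
              scaled = zoom(image.astype(float), factors)
              for kernel in KERNELS:
                  features.append(convolve2d(scaled, kernel, mode="same").ravel())
          return np.concatenate(features)

      clip = np.random.rand(100, 80)            # stand-in for a clipped face image
      feature_vector = build_feature_space(clip)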
  • This feature space then becomes the input to a neural network 68.
  • this comprises a conventional single layer linear proportional neural network which has been trained to produce as output the coordinates of the four corners of the desired bounding box 72 when given the facial outline image as input.
  • a description of a neural network suitable for this purpose may be found in the article, M. Kuperstein, "Neural Model of Adaptive Hand-eye Coordination For Single Postures", SCIENCE Vol. 239 pp. 1308-1311 (1988), which is herein incorporated by reference.
  • a hierarchical approach may be employed in which the feature space is transformed by a series of neural networks into bounding boxes that are increasingly closer to the desired bounding box. That is, the first time through the first neural network the output is a bounding box which is slightly smaller than the perimeter of the image and that box is clipped out and the features redefined and put into another neural network that has an output which is a bounding box that is a little closer to the desired bounding box.
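  • Purely as an illustration of a single-layer linear mapping from the feature space to four bounding-box corners, the sketch below fits the mapping with ordinary least squares. The patent's network 68 is trained with the weight-assignment technique of FIG. 5, so treat this only as a stand-in showing the shape of the computation.

      import numpy as np

      class LinearBoxLocator:
          # Single-layer linear map: feature vector -> 8 numbers
          # (x, y of the four bounding-box corners).
          def __init__(self, n_features):
              self.W = np.zeros((8, n_features))

          def fit(self, X, Y):
              # X: (n_examples, n_features); Y: (n_examples, 8) corner coordinates.
              # A least-squares fit stands in for the patent's adaptation rule.
              self.W = np.linalg.lstsq(X, Y, rcond=None)[0].T

          def predict(self, features):
              return self.W @ features

      # Toy usage with synthetic data.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 50))
      Y = X @ rng.normal(size=(50, 8))          # synthetic linear relationship
      locator = LinearBoxLocator(n_features=50)
      locator.fit(X, Y)
      corners = locator.predict(X[0])           # eight corner coordinates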
  • weights in the neural network 33 are assigned according to the techniques shown in FIG. 5 and discussed below.
  • the process of locating the face 26 within the bounding box is shown.
  • the general approach of the present invention is to locate with some precision a given feature on the face and register the corresponding features in the reference and test images before performing the comparison process.
  • the feature used is the eyes. It will be appreciated that the eyes can be difficult to locate because of various factors such as reflections of light from glasses, from the eyes themselves, variations in shadows, etc. Further, the size of the eyes, their height, and other factors are all unknown. Because of this, an adaptive neural network is used to find the location of each of the eyes.
  • This feature space 72 (shown in FIG. 4) is input into a neural network 74 which has been trained to generate the x coordinate point of a single point, referred to as the "mean gaze".
  • the mean gaze is defined as the mean position along the horizontal axis between the two eyes. That is, the x positions of the left and right eyes are added together and divided by two to derive the mean gaze position.
  • the neural network 74 may comprise one similar to the neural network 68 shown in FIG. 3. This neural net 74 is trained with known faces in various orientations to generate as output the location of the mean gaze. In the preferred embodiment weights in the neural network 74 are assigned according to the technique shown in FIG. 5 and discussed below.
  • once the mean gaze is determined (step 76), a determination is made of which of five bands along the horizontal axis the gaze falls into. That is, a number of categories of where the gaze occurs are created. For example, these categories may determine whether the gaze occurred relatively within the middle, relatively in the next outer band, or in a third outer band of the total width of the face. These bands are not necessarily of equal width. For example, the center band may be the thinnest, the next outer ones a little wider and the final ones the widest. Wherever the computed mean gaze is located on the x coordinate will determine which band it falls into (step 78). Further, this will determine which of five neural networks will be used to find the location of the eyes (step 80). Next, the feature set is input to the selected neural network in step 82. This neural network has been trained to determine the x and y coordinates of eyes having the mean gaze in the selected band 84.
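  • A sketch of the band-selection step follows. The five band boundaries and the placeholder list of eye-locating networks are assumptions; the patent states only that the bands are unequal in width, with the center band thinnest.

      def select_band(mean_gaze_x, face_width):
          # Five bands along the horizontal axis: narrow in the center and
          # widening outward.  The fractions below are illustrative only.
          offset = (mean_gaze_x - face_width / 2.0) / face_width   # about -0.5 .. +0.5
          if offset < -0.25:
              return 0          # far left band
          elif offset < -0.08:
              return 1          # inner left band
          elif offset <= 0.08:
              return 2          # center band (thinnest)
          elif offset <= 0.25:
              return 3          # inner right band
          else:
              return 4          # far right band

      # One separately trained eye-locating network per band (placeholders here).
      eye_locators = [None] * 5
      band = select_band(mean_gaze_x=35.0, face_width=80)
      chosen_network = eye_locators[band]       # this network outputs the eye x, y coordinates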
  • the use of a plurality of neural networks for the different bands has the effect of making the inputs to each of the networks with respect to themselves much more similar. This is important because of the highly variable appearance of faces depending on whether the gaze is forward, leftward or rightward.
  • by using a hierarchy of neural networks which each correspond to a certain range of the gaze of the face, the inputs to each of the networks are, with respect to themselves, much more similar.
  • the entire face is rotated (in two dimensions) about the gaze point until the x and y position of the eyes are level on the horizontal axis in step 86.
  • the gaze point becomes a reference point for registration of the test and reference images as indicated in step 88 in FIG. 2.
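  • The eye-levelling rotation about the gaze point could be sketched as follows; SciPy's shift and rotate routines are used here as stand-ins, and the sign convention for the angle depends on the image coordinate system.

      import numpy as np
      from scipy import ndimage

      def level_eyes(image, left_eye, right_eye):
          # left_eye, right_eye: (x, y) pixel coordinates from the eye-locating network.
          (lx, ly), (rx, ry) = left_eye, right_eye
          gaze = np.array([(ly + ry) / 2.0, (lx + rx) / 2.0])   # gaze point as (row, col)
          angle = np.degrees(np.arctan2(ry - ly, rx - lx))      # tilt of the line joining the eyes
          center = (np.array(image.shape, dtype=float) - 1) / 2.0
          # Rotate about the gaze point: shift it to the image center, rotate so
          # the eye line becomes horizontal, then shift back.  The sign of the
          # angle may need flipping depending on whether y increases downward.
          shifted = ndimage.shift(image, center - gaze, order=1)
          rotated = ndimage.rotate(shifted, angle, reshape=False, order=1)
          return ndimage.shift(rotated, gaze - center, order=1), gaze

      image = np.random.rand(100, 80)
      levelled, gaze_point = level_eyes(image, left_eye=(30, 42), right_eye=(52, 46))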
  • the feature sets are normalized 90 (shown in FIG. 2). This is accomplished by, for example, normalizing each feature value by the mean amplitude of all feature values. This normalization process normalizes against variations such as lighting conditions so that the feature set used in the neural network can withstand varying contrast or lighting conditions.
  • in step 36, the feature values are assigned to weights in the neural network 38.
  • the preferred approach for neural network 38, as well as for neural networks 26 and 30, will be to quantize the feature values from an analog number to a quantum number that is positive or negative. This is done by taking the whole range of values of all sets and quantizing the range by certain ratios of twice the mean (positive and negative).
  • the positive feature values are ranked and the negative feature values are ranked with respect to their values.
  • a set of positive ranks and a set of negative ranks are thereby defined. For a given feature value it can be assigned to a bin that is quantized by ranking the values. In the preferred embodiment this is done by defining the ranks by the fractions 1/3 and 1/2.
  • all of the elements in the input vector are used to determine their positive mean and their negative mean.
  • twice the positive mean may be 1000 and twice the negative mean may be 1500.
  • Applying the fractions of 1/3 and 1/2 to 1000 would equal 333 and 500.
  • the first rank would contain components from 0-333, the second rank components from 334-500, and the third rank components greater than 500.
  • all the individual components of the input vector are placed in one of the three ranks based on their value. The same process is also performed for the negative components in the feature vectors.
  • each ranked component value is assigned a weight based on its rank. This process of assigning weights is described in more detail in FIG. 5.
  • a four by four feature vector 92 is shown which contains 16 components. For example, one of these components is 600, another is 400, another is -100.
  • a four by four weight vector 94 is depicted. Each of the 16 weight locations in the four by four weight vector 94 correspond to one of the 16 components of the feature vector. Each location in the weight vector has six different weights corresponding to six different ranks.
  • each component in the feature vector is ranked. For example, 400 is determined to be of rank five, thus this component is mapped to the 5th of six weights within the corresponding location in the four by four weight vector 94. Similarly, the component having a value of 600 is put into the 6th rank and accordingly this component is assigned to the weight value which exists in the 6th rank of its corresponding location of weight vector 94. The component having a value of -100 is assigned to the 2nd rank.
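  • A hedged sketch of the ranking and weight-lookup steps follows. The bin edges at 1/3 and 1/2 of twice the positive (or negative) mean come from the text above, but the numbering of the six ranks, the treatment of zero-valued features, and the table layout are assumptions.

      import numpy as np

      def rank_features(features):
          # Quantize each feature value into one of six ranks: three negative
          # ranks (1-3) and three positive ranks (4-6), with bin edges at the
          # fractions 1/3 and 1/2 of twice the positive (or negative) mean.
          pos = features[features > 0]
          neg = -features[features < 0]
          pos_edges = 2.0 * pos.mean() * np.array([1 / 3, 1 / 2]) if pos.size else np.zeros(2)
          neg_edges = 2.0 * neg.mean() * np.array([1 / 3, 1 / 2]) if neg.size else np.zeros(2)
          ranks = np.zeros(len(features), dtype=int)
          for i, v in enumerate(features):
              if v > 0:
                  ranks[i] = 4 + np.searchsorted(pos_edges, v)    # 4, 5 or 6
              elif v < 0:
                  ranks[i] = 3 - np.searchsorted(neg_edges, -v)   # 3, 2 or 1
              else:
                  ranks[i] = 0   # zero-valued feature: used or skipped per network (see text)
          return ranks

      def select_weights(weight_table, ranks):
          # weight_table: one row per feature location, one column per rank.  A
          # feature location always maps to the same row; its rank picks the
          # column.  Rank 0 reuses column 0 here purely as an illustrative choice.
          columns = np.where(ranks == 0, 0, ranks - 1)
          return weight_table[np.arange(len(ranks)), columns]

      features = np.array([600.0, 400.0, 500.0, -100.0, 0.0])
      ranks = rank_features(features)              # here 600 -> rank 6 and 400 -> rank 5
      weight_table = np.zeros((len(features), 6))  # weights start at zero before training
      selected = select_weights(weight_table, ranks)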
  • the feature vector may have many more components. There may be, for example, 10,000 components in the feature vector.
  • a component of the feature vector may have a value of zero.
  • when feature values equal zero the system can decide whether or not to put these values in a bin. This decision is made differently for different neural networks. For the networks used to locate the bounding box 26 and the eyes 30, feature values of zero are not used. However, for the matching neural network 38 feature values of zero are used for weight associations. This is because with the bounding box or the eyes the output of the neural net is a coordinate value and it is not desirable to have a non-feature contribute to the location of an x,y point. However, when the feature value for the face verification neural network 38 is zero, it is desirable to have that contribute to the result.
  • a non-zero value for a feature vector component means that a feature has been measured at that location while a zero indicates that no feature has been measured at that location.
  • the exact weight chosen in the weight vector will depend on the preexisting value of that weight vector component. However, there is a fixed relationship between each location in the feature vector and the corresponding location in the weight vector (each of which has multiple weights, one for each rank).
  • the neural network 38 computes the normalized dot product of the two weight vectors. In essence, this operation computes the sum of the products of corresponding elements of the two weight vectors. This is operation 44 shown in FIG. 2. It will be appreciated that the dot product output will be a number which is proportional to the similarity between the two weight vectors. That is, highly similar weight vectors are more parallel and will yield higher dot product outputs indicating that the faces are similar. Dissimilar weight vectors will yield lower valued dot product outputs indicating that the faces are less similar.
  • the fact that the dot product operation is a "normalized" dot product means that the dot product of the output 46 is normalized to the unit circle in N dimensional space.
  • the normalization process is performed by dividing the dot product by the product of each of the vector lengths.
  • the normalized dot product results in a confidence level and that confidence level is normalized by a linear transformation constant to get the range needed, i.e., 0-10 or 0-100. If the confidence measure is above a preset verification threshold then the result is "positive". This means that the face in the test clip 32 depicts a face belonging to the same person as that in the reference clip 33. If the value is not above the predetermined threshold the result is "negative", which means that the test clip 32 and reference clip 33 depict faces of different people.
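  • The match score reduces to a cosine similarity between the two selected weight vectors; the sketch below maps it linearly onto a 0-10 confidence range. The specific linear transformation is an assumption, since the patent states only that a linear constant is used to reach the desired range.

      import numpy as np

      def match_confidence(weights_ref, weights_test, scale=10.0):
          # Normalized dot product: project both weight vectors onto the unit
          # sphere (cosine similarity), then map the result linearly onto a
          # 0..scale confidence range.
          denom = np.linalg.norm(weights_ref) * np.linalg.norm(weights_test)
          if denom == 0.0:
              return 0.0
          cosine = float(np.dot(weights_ref, weights_test)) / denom   # -1 .. +1
          return scale * (cosine + 1.0) / 2.0

      THRESHOLD = 5.0
      confidence = match_confidence(np.array([1.0, 2.0, 0.5]), np.array([0.9, 2.1, 0.4]))
      same_person = confidence > THRESHOLD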
  • when the weights are initially all zero, the resulting dot product of the two weight vectors will also be zero. Because this is training data, however, it is known whether the two faces are from the same person or not. If they are from the same person then it is desired to have the result be a relatively high valued positive number. This is because matching feature vectors should produce above threshold outputs.
  • the threshold may be selected arbitrarily to be at the midrange. When the two faces are from the same person, a starting positive value is selected and the two weight vectors are made to be the same positive value. If the two faces are from different people then each weight value is given opposite assigned values: one starting value is positive and the other is negative but of equal magnitude.
  • the neural network will be trained on many examples of pairs of faces, some of which match, and some of which do not match.
  • a variety of faces in a variety of orientations and lighting conditions will be used to allow the neural network to generalize all of this information. As a result it will be able to recognize when two different views of the same person are actually the same person, and when two images of different people are in fact faces of different people.
  • a correct result means that two faces that are the same generate an output which is above threshold, and two faces which are from different persons generate an output that is below threshold.
  • the selected weights in weight vectors 1 and 2 are adjusted to be closer to each other.
  • the amount of adjustment is preferably a percentage of the difference between the two weights. This percentage is the learning rate for the adaptation. It should be noted that only weights which are selected by the feature sets 1 and 2 are adapted; non-selected weights are not. As discussed above, if both weight values are zero, (as in the initial condition) both weight values are changed to be a preset constant value.
  • weight value of weight set 1 is set to the same preset constant value used in training step 2 above. However, the weight value from weight set 2 is set to the negative of this value.
  • the test images should comprise pairs of randomly selected images of faces. Also, images of the same person should be used approximately half the time and images of different persons should be used about half the time.
  • the objective of training is to give the system enough training with different orientations and different head postures etc. so it will be able to generalize across different head orientations and postures.
  • the training example will include examples of a head looking straight, to the left, to the right, up and down.
  • the system may be trained with images of 300 different people in different orientations. It is being trained not to recognize any specific face but instead it is being trained to recognize what is similar about different images of the same face. It is being trained to be a generalized face recognizer as opposed to being able to recognize any specific face.
  • hysteresis is used during learning. This means that to avoid learning the result must be above or below the threshold by a given amount. For example, if two test images are from the same face, and the threshold is defined as an output of 5 on a scale of 0 to 10, then to avoid learning the output must be above 5+delta. Thus any output less than 5+delta will cause the system to adapt the weights to be closer to each other. In this way, only results which are unambiguously correct will avoid learning. Results which are correct, but only slightly above threshold, will be further refined by additional training.
  • similarly, when the system is trained with two training images of different faces, in order to avoid adaptation of the weights the result must be below threshold by a given amount, for example below 5 minus delta. As a result, any output above 5 minus delta will result in adaptation of the weights to produce less ambiguous results.
  • the delta amount used for the learning hysteresis may be 0.5. It should be remembered that this hysteresis is only used during the training procedure and not during actual use of the system on unknown faces. Thus, in actual use, where it is not known beforehand whether the faces match or not, any above threshold output will be considered to be a match and any result which is at or below threshold will be considered to be no match.
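  • One way to sketch the training update described above, including the hysteresis band, the preset starting constants, and adjustment by a fraction of the weight difference, is shown below. The learning rate and the starting constant are placeholder values, not figures from the patent.

      import numpy as np

      THRESHOLD, DELTA = 5.0, 0.5   # decision threshold and hysteresis band
      LEARN_RATE = 0.1              # fraction of the weight difference (assumed value)
      INIT_VALUE = 1.0              # preset starting constant for untrained weights (assumed)

      def train_pair(w_ref, w_test, confidence, same_person):
          # w_ref, w_test: only the weights actually selected by the two feature
          # sets; weights that were not selected are never touched.
          untrained = (w_ref == 0) & (w_test == 0)
          if same_person and confidence < THRESHOLD + DELTA:
              # Matching faces scored too low: seed untrained weights with the same
              # positive constant and pull already-trained weights closer together.
              w_ref[untrained] = INIT_VALUE
              w_test[untrained] = INIT_VALUE
              diff = w_test - w_ref
              w_ref += LEARN_RATE * diff
              w_test -= LEARN_RATE * diff
          elif (not same_person) and confidence > THRESHOLD - DELTA:
              # Non-matching faces scored too high: seed untrained weights with
              # opposite constants and push already-trained weights farther apart.
              w_ref[untrained] = INIT_VALUE
              w_test[untrained] = -INIT_VALUE
              diff = w_test - w_ref
              w_ref -= LEARN_RATE * diff
              w_test += LEARN_RATE * diff
          return w_ref, w_test

      w1, w2 = np.zeros(4), np.zeros(4)
      w1, w2 = train_pair(w1, w2, confidence=4.0, same_person=True)   # both seeded to INIT_VALUE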
  • weights are always associated with a certain location in the neural network 38 and a certain feature of the neural network.
  • every face is different so every image that comes from a different face will pick up different weights.
  • the weights themselves are always associated with a certain location and with a certain feature even though which weights are actually picked up depends on which face is being processed. As a result, the entire neural network will begin to average over all faces it has ever seen in its experience.
  • the operations of the neural network 38 in accordance with the present invention are quite different from prior techniques, such as the self-organizing maps of Kohonen as described, for example, in the article R. Lippmann, "An Introduction to Computing with Neural Nets", IEEE ASSP Magazine, April 1987, pp. 4-22, which is incorporated by reference.
  • Those skilled in the art will appreciate that with the Kohonen method a dot product is taken between a single input and the weight vector in the neural network. The weight vector which generates the highest dot product is designated the "winner" and that weight vector is modified during training to be even closer to the input vector.
  • each input vector selects weights in the neural network and the dot product between each of the two selected weight vectors is determined.
  • both sets of weight vectors are adapted to be closer to each other or farther apart from each other.
  • the actual feature vector is never used in the dot product as it is in Kohonen networks.
  • only the selected weights are used in the dot product operation. Also, in the Kohonen system the weights are initially set to random values; in the present invention the weights are initially set to zero.
  • Another advantage of the present invention is that it can be trained to generate a high matching value for incompatible looking objects. This is a major advantage over prior art approaches to face recognition. For example, suppose input vectors one and two representing facial images were identical. If a dot product is performed on the two images and they are identical, the result would be very high. However, if the images are offset by even one or two pixels then the dot product will be very low because everything is misregistered. In contrast, with the technique of the present invention the system can be trained to generate a matching output for different appearing objects. For example, if the input images were of an apple and an orange each image would select weight vectors and those weight vectors would be trained on various images of apples and oranges to generate a high dot product value. Yet a dot product between the raw image of the apple and orange would yield a very low value.
  • This malleable nature of the present invention is important because the human face varies tremendously whenever the orientation and lighting etc. of the face is changed.
  • the present invention achieves the goal of being able to match images that are in some ways incompatible. This approach works because it defers the dot product operation to a reference of the inputs (the weight vectors) and does not perform the dot product on the raw image.
  • a simple way to circumvent this problem is shown in FIG. 6.
  • this approach utilizes the facial image as a personal identification number (PIN number). This may be accomplished by taking a predetermined random sample of values of the picture that was taken of the correct user during his enrollment. This random sample of values from the image is then stored in a database.
  • the access card 12 is entered into the card reader 14 which reads and digitizes the image 16 on the card.
  • a random sample of this image is then taken by the module 94.
  • a random sample of a reference image of the known user's face has been acquired and stored in memory unit 96. This may be done by entering the reference image into the card reader and transmitting the digitized image to the random sample unit 94, which then transmits the random sample to the memory unit 96.
  • the two corresponding random samples (each associated with a common I.D. No.) are then compared in comparison unit 98. A match indicates that the faces are the same and the card is valid. No match indicates that the face on the card is not the same as the one previously stored and access will be denied.
  • the system may sample 24 locations in the image at random and each location may comprise a byte. Thus the total sample is 24 bytes. For each byte the system may sample a random bit position (yielding a zero or one) which would yield 24 bits of information, or three bytes.
  • the ID number may also comprise about three bytes of information. This would yield a total of six bytes; three for the ID number, and three for the face pin number.
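  • A sketch of the face-image PIN idea follows: the sample positions and bit positions are fixed pseudo-randomly at enrollment (a fixed seed stands in for however the patent's system records them) and the same positions are re-sampled at verification time, packing 24 bits into three bytes.

      import numpy as np

      def face_pin(image, seed=1234, n_samples=24):
          # Sample 24 fixed pseudo-random pixel locations, take one fixed
          # pseudo-random bit from each byte, and pack the 24 bits into 3 bytes.
          rng = np.random.default_rng(seed)            # same seed -> same positions
          rows = rng.integers(0, image.shape[0], n_samples)
          cols = rng.integers(0, image.shape[1], n_samples)
          bit_positions = rng.integers(0, 8, n_samples)
          bits = (image[rows, cols].astype(int) >> bit_positions.astype(int)) & 1
          return np.packbits(bits).tobytes()           # 3 bytes for 24 bits

      image = np.random.randint(0, 256, size=(100, 80), dtype=np.uint8)
      enrolled_pin = face_pin(image)                   # stored at enrollment
      assert face_pin(image) == enrolled_pin           # the same image yields the same 3-byte PIN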
  • the image on the card itself is used as a PIN number.
  • Two checks will be made before access is allowed. The first check occurs when a random sampling of the image on the card alone is taken and checked with the previously stored random sampling of the same image on the card. If there is no match access is denied because the card is invalid. The second check is when the live face of the person desiring access is compared to the image on the card and access is only allowed if a match is found.
  • the technique for using a face image on a card as a PIN number as described above and shown in FIG. 6 can also be used alone, without the face matching system for checking the live image of the user's face.
  • the system shown in FIG. 6 will insure that the face on the card is the same one previously stored and that the card is not fraudulent.
  • the face recognition system 10 shown in FIG. 1 could be tampered with by a fraudulent user.
  • for example, a VCR could be connected to the frame grabber 21 in place of the camera 20.
  • an image of the authorized user could then be inserted in place of the image of the actual fraudulent user.
  • Other ways are also possible for a fraudulent user to insert the authorized user's image into the system, for example, by inserting an image directly into the automated face locator 26.
  • when the present invention is utilized by persons at remote locations, it is desirable to insure that fraudulent users do not tamper with the signal which is transmitted remotely to the automated face verifier of the present invention.
  • the camera 20 and frame grabber 102 are enclosed in a tamperproof box 104.
  • the tamperproof box 104 shown in FIG. 7 will comprise a conventional tamperproof box well-known in the art in which the components inside are disabled by an EPROM which erases if the box is tampered with.
  • a special hardware device 106 takes the live image and samples it in a manner similar to the technique disclosed above for using the facial image as a PIN number. That is, the image is sampled at, for example, 24 locations and these samples are stored in three bytes.
  • the system insures that the live image is actually the image of the person who is actually there since the camera and frame grabber cannot be tampered with. It also insures that a fraudulent user has not inserted a fraudulent image of an authorized user at a point in the system downstream of the camera and frame grabber.
  • the automated face verifier of the present invention described above can then compare the live image with the image on the access card in accordance with the techniques discussed above.
  • a technique for minimizing the variations in facial orientation when the image is acquired.
  • when the camera 20 takes the picture of the person desiring access, one approach would be to prompt the person (for example, by recorded voice command) to look into the camera while the picture is taken.
  • people frequently will do things such as pose, assume an unnatural expression, adjust their hair, clothing, etc.
  • the result is an unnatural and less consistent image.
  • the step of prompting and taking the picture takes time which slows down the process.
  • the camera is disposed to acquire the image of the person automatically when they initiate a certain action.
  • This action may comprise, for example, the pressing of a key on the keyboard, inserting a card, or entry into a certain proximity of the access unit.
  • while the feature sets are derived in the above discussion using convolutions and scaling, other known techniques may also be employed with the teachings of the present invention.
  • the feature sets may be derived using morphological operations, image transforms, pattern blob analysis, and eigenvalue approaches.
  • adaptive processors could be used including, but not limited to, genetic algorithms, fuzzy logic, etc.
  • adaptive processors are those which can perform facial verification for faces that vary by head orientation, lighting conditions, facial expression, etc., without having explicit knowledge of these variations.

Abstract

A system and method is disclosed for determining the likelihood that two object patterns arise from the same object source. It is able to do this without having previously been exposed to either of the object patterns. This system utilizes an adaptive processor trained to make this determination using a large number of example patterns in different views and orientations. It accommodates large variations in how an object source is presented in an image. The system also does not require the storage of any information about any particular pattern, which greatly minimizes the storage requirements and improves the throughput of the system. Further, there is no need for accessing a database of previously stored features from a given pattern source. The system is particularly useful for object patterns which consist of facial images. The system employs a new technique for locating an object of interest within a pattern using an adaptive processor to determine a region of interest. The technique is resistant to irrelevant overlapping patterns. The system and method of the present invention employ a technique for performing authentication of the validity of a card or a user's image on a computer network system which has a user's face image stored within. Finally, this invention provides a new technique for naturally aligning the face. This technique, which is convenient to use and yields candid shots, is useful in facial verification.

Description

This application is a continuation of Ser. No. 08/382,229, now abandoned.
BACKGROUND OF THE INVENTION
1. Technical Field
The present invention relates to object recognition systems and more particularly to a neural network based system and method for verifying a match between two object patterns.
2. Discussion
The task of automatic object recognition represents one of the major challenges to modern computational systems. One frequently encountered problem in object recognition is the task of recognizing a match between a known and an unknown object. One example of this problem occurs in the field of face recognition. In many applications it would be desirable to have a system which can compare a previously acquired image of a face with a "live" image to determine if the two facial images are those of the same person or not.
This task of facial matching is fraught with difficulties due to the many unpredictable differences which may occur between the previously stored facial image and the live image. For example, these differences may include one or more of the following: mis-registration between the two images, resulting from differences in the height of the face or the tilt of the head etc.; different lighting conditions which will result in different shadows which greatly affects the contrast distribution of the pattern of the eyes and the face; changes in the individual's appearance due to different hairstyles, make-up, jewelry, facial hair, facial expressions, etc.; different background clutter in the image; the facial images may be turned to the side which greatly affects the appearance of facial features.
Because of these and other variations it is very difficult for existing computational systems to recognize when two facial images are from the same person. Some progress has been made in this area by adaptive systems, such as neural networks which have demonstrated an ability to generalize facial features based on training examples despite the above-described kinds of variations. One example of a neural network system of this sort is French Patent No. 2,688,329 issued to B. Anjeniol. Even so, there has still not been satisfactory performance by neural network systems for facial recognition where the variations in the images are as large as those encountered in real-life applications.
For example, in some approaches the system will find features, such as eyes, nose, mouth, etc. and then determine facial recognition using the ratios of those features in a neural network. However, since not all features are well-defined and since the features change with different facial orientations and different facial expressions, these ratios also change with the different orientations and expressions. As a result, systems of this sort are not always reliable when confronted with different orientations and expressions.
Another approach utilizes eigenfaces. Eigenfaces are facial features used to discriminate features of one face from those of another. In this approach, an average face is derived and the differences from a target face are determined in terms of the eigenfaces. For additional information about this and other techniques see the article "Face Value", Byte, February 1995, pages 85-89, which is herein incorporated by reference.
However the eigenfaces approach is sensitive to changes in head orientation and lighting conditions because it uses the difference between the target face and the average of all faces as its primary means of comparison.
An additional problem with prior face recognition systems has been one of storage capacity and through-put. With regard to storage capacity, where it is desired to recognize a large number of different faces, the volume of information that needs to be stored can be very large. Even with the use of data compression techniques, facial recognition systems which rely on stored information about known faces when making comparisons with the live face can require an impractical amount of storage space for applications where a reasonably large number of faces need to be recognized.
A related problem is the excessive computational time required to perform an analysis of facial images to determine whether a match is present. Many conventional techniques require massive computational capabilities and/or long computational time to perform the required analysis of the images. Through-put is a problem in many applications where access to a system is required very rapidly and long delays for analysis of facial images cannot be tolerated.
Furthermore, even before a match between test and reference facial images can be attempted, an accurate location of the face must be determined. This task is problematic due to the aforementioned variations in the image. Particularly difficult is the task of locating the face amid background clutter and a variable amount of hair on the head.
Thus it would be desirable to provide a system and method for accurately determining the location of a face in an image having background clutter.
It would also be desirable to provide a system and method for accurately performing facial recognition which does not require the storage of a large database and which also does not require excessive computational time.
SUMMARY OF THE INVENTION
Pursuant to the present invention, a system and method for locating an object pattern and performing matching of object patterns is provided. In the preferred embodiment the present invention is adapted to operate on object patterns which consist of facial images. The system is able to analyze reference and live facial images and determine if the two images are from the same person or not. It is able to do this without having previously seen either of the facial images. The system utilizes a neural network which is trained to recognize matching facial images using a large number of example faces in different views and orientations. Once trained, the system is able to recognize when two facial images are from the same person without ever having been previously trained with the facial image of that particular person. Thus, the system does not require storage of any information about the particular individual's face. This greatly minimizes the storage requirements of the system. Furthermore, through-put is maximized because there is no need for accessing a database of previously stored features from an individual's face, and also because of the parallelism of the neural network approach of the present invention.
In accordance with one aspect of the present invention, a method is provided for verifying that two facial images are from the same person. The method includes the steps of receiving reference and test facial images, the reference facial image being from a known face and the test facial image being from an unknown face. Reference and test feature sets are then derived from the corresponding facial images. Each element in each of the feature sets is then assigned to a look-up table of weights that is quantized by the amplitude of the feature element in a corresponding reference and test weight set in a neural network, wherein each weight set comprises a plurality of weights which correspond to each element in the feature set. An output of the neural network is then determined by calculating the dot product normalized by weight vector length of the assigned weights in the two respective weight sets. The system then compares the results of the dot product output to a threshold, wherein an above threshold output indicates that the facial images are from the same person, and a below threshold output indicates that the two facial images are from different persons.
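Purely as an illustration of the sequence of steps just summarized, the sketch below stubs out the feature extraction and amplitude quantization and shows the weight lookup, normalized dot product, and threshold comparison; the stub implementations and the 0-10 scaling are assumptions rather than the patent's actual procedures.

    import numpy as np

    def derive_features(image):
        # Placeholder: the multi-scale convolution features would go here.
        return image.ravel().astype(float)

    def quantize(features):
        # Placeholder: amplitude ranking into one of six bins would go here;
        # this stub simply splits values at the mean.
        return (features > features.mean()).astype(int)

    def verify(reference_image, test_image, weight_table, threshold=5.0):
        # 1. Derive a feature set from each facial image.
        f_ref, f_test = derive_features(reference_image), derive_features(test_image)
        # 2. Each feature element selects one weight from its lookup-table row,
        #    quantized by the element's amplitude, giving two weight vectors.
        w_ref = weight_table[np.arange(len(f_ref)), quantize(f_ref)]
        w_test = weight_table[np.arange(len(f_test)), quantize(f_test)]
        # 3. Normalized dot product of the two weight vectors, scaled to 0..10.
        cosine = np.dot(w_ref, w_test) / (np.linalg.norm(w_ref) * np.linalg.norm(w_test) + 1e-12)
        confidence = 10.0 * (cosine + 1.0) / 2.0
        # 4. Compare the confidence to the verification threshold.
        return confidence > threshold

    weight_table = np.random.rand(8, 6)               # one row per feature, one column per rank
    ref = np.arange(8.0).reshape(2, 4)
    same = verify(ref, ref, weight_table)             # identical inputs -> above threshold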
In order to derive the correct weights for performing this method, the method in accordance with the present invention undergoes a prior learning process which includes the steps of generating a set of training test feature sets and training reference sets. The training test and reference feature sets are derived from different images of a plurality of known facial images. The feature sets are processed as described above for the training test and reference feature sets. When the output is correct the weights remain unchanged. When the output is above threshold and the test feature sets are from different faces, the value of the assigned weights of the test and reference weight vectors are adjusted to be farther apart from each other. If, on the other hand, the results are below threshold for facial images which originate from the same face the weights are adjusted to be closer to each other.
In accordance with another aspect of the present invention, a system is provided for verifying that two facial images are from the same person. The system is adapted to implement the above-discussed method.
As a result, the present invention provides a system and method for verifying a match between two facial images. The system is able to verify facial matches to a high degree of accuracy while tolerating a wide range of variations in the images including modifications of hair, make-up, facial expression and orientation. The present invention accomplishes all this with very minimal storage requirements and with rapid throughput, thus making it practical for many applications where a large number of different faces must be verified quickly.
BRIEF DESCRIPTION OF THE DRAWINGS
The various advantages of the present invention will become apparent to one skilled in the art by reading the following specification and by reference to the following drawings in which:
FIG. 1 is a diagram of a preferred embodiment of the system of the present invention which determines whether the face of a user of an access card matches a previously recorded image of the known user's face.
FIG. 2 is a diagram of the major steps performed by the face verification system in accordance with a preferred embodiment of the present invention.
FIG. 3 is a diagram of the major steps involved in deriving a facial image bounding box in accordance with the present invention.
FIG. 4 is a diagram of the major steps involved in locating the eyes in a facial image in accordance with the present invention.
FIG. 5 is a diagram showing additional details of the process of selecting weights in accordance with the present invention.
FIG. 6 is a diagram of an alternative embodiment of the present invention.
FIG. 7 is a diagram of an alternative embodiment for using the present invention with a computer network.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
The present invention is a system, method and application for the recognition, verification and similarity ranking of facial or other object patterns. While the techniques of the present invention have many applications in the general area of object recognition, they are particularly well suited to the task of face recognition. This task poses numerous difficulties particularly where many variations can be expected in the appearance of the facial image to be recognized. The techniques of the present invention result in a system which can verify a match between two facial images where significant differences may occur between the images.
Referring now to FIG. 1, an overall functional diagram of a preferred embodiment of the present invention is shown. In this system, a user desires access to some entity. This entity may comprise a computer network, an automated teller machine (ATM), access to a building, etc. Initially the user of this face verification system 10 in accordance with the present invention enters an access card 12 into a card reader 14. The card reader generates an output comprising a previously stored reference image of the user's face 16. It should be appreciated that this previously stored image may be imprinted on the access card and/or may be stored in a database (not shown) accessible to the card reader 14.
The resulting reference image is shown in FIG. 1 as image 16. This image may comprise, for example, a 100 pixel high by 80 pixel wide digitized image of the known user's face. This digitized image is then input into the automated face verifier 18 of the present invention. It will be appreciated that in the preferred embodiment shown in FIG. 1, utilizing the access card 12, the digitized reference image 16 will be input into the automated face verifier 18 each time access is required. In another embodiment a database containing all of the possible known reference facial images may be stored and will be accessed by the automated face verifier by an index code each time verification is required.
A camera 20 acquires an image of the individual desiring access. This person will either present an access card 12 with his/her image on it or will have had his/her image previously stored in the database accessible to the verifier 18. The camera 20 produces an image 22 which includes, for example, the entire head and shoulders of the individual. In accordance with a process described in more detail below, this image is adaptively clipped to include just the immediate area of the individual's face to yield a clip 24 which is the same size as the reference image 16.
This clip image 24 is then transferred to an automated face locator 26 which performs the function of registering the position and orientation of the face in the image 24. In accordance with a technique which will be described in more detail below, in the preferred embodiment of the present invention the location of the face is determined in two phases. First, the clip image 24 is found by defining a bounding box at its perimeter. The location of the bounding box is based on a number of features. Second, the location of the individual's eyes is determined. Once the location of the eyes is determined, the face is rotated about an axis located at the midpoint (gaze point) between the eyes to achieve a precise vertical alignment of the two eyes. The purpose of the automated face locator 26 is to achieve a relatively precise alignment of the test image 24 with the reference image 16. It will be appreciated that an automated face locator 26 will also be used to locate the face in the reference image 16. It should be noted that the adaptive automated face locator 26 is needed to locate the face in the test and reference images because, with standard (nonadaptive) image processing techniques, the derived outline of the face will necessarily include the outline of the hair. However, in accordance with the present invention the clip image 24 defined by the bounding box will not include the hair.
In any event, it is important that the resulting test image 28 be accurately registered with respect to the reference image 16. That is, in accordance with the preferred embodiment described in more detail below, an accurate location of the eyes is determined for the reference image 16 and an accurate location of the eyes is determined for the test image 24. The two images are then registered so that the location of the midpoint between the eyes is registered in both images. This is important because the automated face verifier 18 will be attempting to determine whether the two images are those of the same person. If the two images are misregistered, the verifier is more likely to incorrectly determine that two images of the same person are from different persons, because similar features will not be aligned with one another.
The automated face verifier 18 receives the clipped and registered reference image 16 and test image 28 and makes a determination of whether the persons depicted in the two images are the same or are different. This determination is made using a neural network which has been previously trained on numerous faces to make this determination. However, once trained, the automated face verifier is able to make the verification determination without having actually been exposed to the face of the individual.
Referring now to FIG. 2, a diagram of the generalized process of face verification in accordance with the present invention is shown. Initially a test image 22 and a reference image 30 are acquired. These images are then both processed by a clip processor 32 which defines the bounding box containing predetermined portions of each face. It will be appreciated that, in general, the prerecorded reference image may be stored in various ways. The entire previous facial image may be recorded as shown in the image 30 in FIG. 2, or only a previously derived clip 16 may be stored. Also, a clip that has been compressed for storage may be stored and then decompressed for use. In addition, some other parameterization of the clip 16 may be stored and accessed later to reduce the amount of storage capacity required. The prerecorded image could be stored on an access card as shown in FIG. 1 or on a smartcard consisting of magnetic media, optical media, two-dimensional barcode media or active chip media. Alternatively, the prerecorded image may be stored in a database as discussed above.
The reference and test images 22, 30 are then clipped. This occurs in two stages. First, a coarse location is found in step 33. This yields the coarse location of the image shown in Blocks 23 and 24. Next, a first neural network 26 is used to find a precise bounding box shown in Blocks 28 and 29. In a preferred embodiment the region of this bounding box 28 is defined vertically to be from just below the chin to just above the natural hair line (or implied natural hair line if the person is bald or wearing a hat). The horizontal region of the face in this clipping region is defined to be between the beginnings of the ears at the back of the cheek on both sides of the face. If one ear is not visible because the face is turned at an angle, the clipping region is defined to be the edge of the cheek or nose, whichever is more extreme. This process performed by the clip processor 32 will be described in more detail below in connection with FIG. 3.
Next, a second neural network 30 is used to locate the eyes. The image is then rotated in step 34 about a gaze point as described in more detail in FIG. 4. The above steps are repeated for both the reference and the test images. The two images are then registered in step 88, using the position of the eyes as reference points.
Next, the registered images are normalized in step 90. This includes normalizing each feature value by the mean of all the feature values. It should be noted that the components of the input image vectors represent a measure of a feature at a certain location, and these components comprise continuous-valued numbers.
Next, a third neural network 38 is used to perform the verification of the match or mismatch between the two faces 22, 30. First, weights are assigned in block 36, as described in more detail in connection with FIG. 5. It should be noted that the locations of the weights and features are registered. Once the weight assignments are made the appropriate weights in the neural network 38 are selected. The assigned reference weights comprise a first weight vector 40 and the assigned test weights comprise a second weight vector 42. The neural network 38 then determines a normalized dot product of the first weight vector and the second weight vector in block 44. This is a dot product of vectors on the unit circle in N-dimensional space, wherein each weight vector is first normalized relative to its length. A well-known technique for normalizing such vectors is used in vector quantization, which is commonly used in connection with Kohonen neural networks. For further details with respect to normalization and related Kohonen neural networks see Wasserman, Neural Computing Theory and Practice, Van Nostrand Reinhold (1989), pp. 63-71 and pp. 201-209, which is incorporated in its entirety herein by reference.
The result is a number which is the output 46 of the neural network 38. This output is then compared to a threshold in decision step 48. Above threshold outputs indicate a match 50 and below threshold outputs indicate a mismatch 52.
The above process will now be described in more detail. Referring to FIG. 3, the clip process 32 is shown. An acquired image 54 may comprise either the test or the reference image. This image includes the face of the subject as well as additional portions such as the neck and the shoulders, and will also include background clutter. An image subtraction process is performed in accordance with conventional techniques to subtract the background. For example, an image of the background without the face 56 is acquired. The background is then subtracted from the image of the face and background (block 58). The result is the facial image without the background 60. In step 61 standard, non-adaptive edge detection image processing techniques are used to determine a very coarse location of the silhouette of the face. It is coarse because this outline is affected by hair, clothing, etc.
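As a rough illustration of this background subtraction step, the sketch below (in Python with NumPy, which the patent does not specify) suppresses pixels that do not differ between the background-only frame and the frame containing the subject; the difference threshold is a hypothetical value.

```python
import numpy as np

def remove_background(image: np.ndarray, background: np.ndarray,
                      diff_threshold: int = 20) -> np.ndarray:
    """Keep only pixels that differ noticeably from the stored background frame."""
    diff = np.abs(image.astype(np.int16) - background.astype(np.int16))
    subject_mask = diff > diff_threshold     # pixels that changed when the subject appeared
    return np.where(subject_mask, image, 0).astype(np.uint8)
```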
Next the image is scaled down, for example, by a factor of 20 (block 62). This would reduce a 100 pixel by 80 pixel image down to approximately 5×5. The image is then scaled to a series of additional sizes. For example, the total resulting set of images may include the following scales: 5×5, 6×6, 7×7, 10×10, 12×12, 16×16 and 18×18. This results in a hierarchy of resolutions. With regard to scaling it should be noted that the convolution types and sizes are identical for all images at all scales; and because they are identical, if the images are first scaled down to have coarsely scaled inputs then the convolutions will yield a measure of coarser features. Conversely, if higher resolution inputs are used (with the same size and type of convolution kernel) then the convolution will yield finer resolution features. Thus, the scaling process results in a plurality of features at different sizes. Accordingly, the next step is to perform a convolution on the scaled image in block 64. For example this may be a 3×3 convolution. In the preferred embodiment the convolutions used have zero-sum kernel coefficients. Also, a plurality of distributions of coefficients are used in order to achieve a plurality of different feature types. These may include, for example, a center surround, or vertical or horizontal bars, etc. This results in different feature types at each different scale. Steps 62 and 64 are then repeated for a plurality of scales and convolution kernels. This results in a feature space set 66 composed of a number of scales ("S") and a number of features ("F") based on a number of kernels ("K"). This feature space then becomes the input to a neural network 68. In the preferred embodiment this comprises a conventional single layer linear proportional neural network which has been trained to produce as output the coordinates of the four corners of the desired bounding box 72 when given the facial outline image as input.
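One way to picture this multi-scale, multi-kernel feature extraction is the sketch below; it is only an assumption of how the feature space 66 might be assembled, using SciPy for scaling and convolution, and the three zero-sum kernels are illustrative stand-ins for the center-surround and bar feature types mentioned above.

```python
import numpy as np
from scipy.ndimage import convolve, zoom

# Illustrative zero-sum 3x3 kernels for a few feature types.
KERNELS = {
    "center_surround": np.array([[-1, -1, -1],
                                 [-1,  8, -1],
                                 [-1, -1, -1]], dtype=float),
    "vertical_bar":    np.array([[-1,  2, -1],
                                 [-1,  2, -1],
                                 [-1,  2, -1]], dtype=float),
    "horizontal_bar":  np.array([[-1, -1, -1],
                                 [ 2,  2,  2],
                                 [-1, -1, -1]], dtype=float),
}

SCALES = [5, 6, 7, 10, 12, 16, 18]   # resolutions listed in the text

def build_feature_space(face: np.ndarray) -> np.ndarray:
    """Convolve the face at several resolutions with the same small zero-sum kernels."""
    features = []
    for size in SCALES:
        scaled = zoom(face.astype(float),
                      (size / face.shape[0], size / face.shape[1]), order=1)
        for kernel in KERNELS.values():
            # Same kernel at every scale: coarse inputs yield coarse features,
            # finer inputs yield finer features.
            features.append(convolve(scaled, kernel, mode="constant").ravel())
    return np.concatenate(features)   # scales x kernels x positions, flattened
```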
A description of a neural network suitable for this purpose may be found in the article, M. Kuperstein, "Neural Model of Adaptive Hand-eye Coordination For Single Postures", SCIENCE Vol. 239, pp. 1308-1311 (1988), which is herein incorporated by reference. Optionally, a hierarchical approach may be employed in which the feature space is transformed by a series of neural networks into bounding boxes that are increasingly closer to the desired bounding box. That is, the first time through the first neural network the output is a bounding box which is slightly smaller than the perimeter of the image; that box is clipped out, the features are redefined and put into another neural network whose output is a bounding box that is a little closer to the desired bounding box. By repeating this process iteratively until the final desired bounding box is achieved, it has been found that the amount of noise is reduced with each iteration and the result is a more stable convergence to the desired bounding box with each neural network. Adequate results have been achieved in this manner with a hierarchy of two neural networks. In the preferred embodiment weights in the neural network 68 are assigned according to the techniques shown in FIG. 5 and discussed below.
Referring now to FIG. 4, the process of locating the face 26 within the bounding box is shown. The general approach of the present invention is to locate with some precision a given feature on the face and register the corresponding features in the reference and test images before performing the comparison process. In the preferred embodiment the feature used is the eyes. It will be appreciated that the eyes can be difficult to locate because of various factors such as reflections of light from glasses or from the eyes themselves, variations in shadows, etc. Further, the size of the eyes, their height, and other factors are all unknown. Because of this, an adaptive neural network is used to find the location of each of the eyes.
In more detail, first, the data outside the bounding box in feature space 66 (shown in FIG. 3) is eliminated. This feature space 72 (shown in FIG. 4) is input into a neural network 74 which has been trained to generate the x coordinate of a single point, referred to as the "mean gaze". The mean gaze is defined as the mean position along the horizontal axis between the two eyes. That is, the x positions of the left and right eyes are added together and divided by two to derive the mean gaze position. The neural network 74 may comprise one similar to the neural network 68 shown in FIG. 3. This neural net 74 is trained with known faces in various orientations to generate as output the location of the mean gaze. In the preferred embodiment weights in the neural network 74 are assigned according to the technique shown in FIG. 5 and discussed below.
Once the mean gaze is determined 76, a determination is made of which of five bands along the horizontal axis the gaze falls into. That is, a number of categories of where the gaze occurs are created. For example, these categories may determine whether the gaze occurred relatively within the middle band, relatively in the next outer band, or in a third outer band of the total width of the face. These bands are not necessarily of equal width. For example, the center band may be the thinnest, the next outer ones a little wider and the final ones the widest. Wherever the computed mean gaze is located on the x coordinate determines which band it falls into (step 78). Further, this determines which of five neural networks will be used to find the location of the eyes (step 80). Next, the feature set is input to the selected neural network in step 82. This neural network has been trained to determine the x and y coordinates of eyes having the mean gaze in the selected band 84.
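A minimal sketch of this band selection follows; the band edges are hypothetical fractions of the face width (the text only requires that the center band be the thinnest and the outer bands the widest), and `networks` stands in for the five trained eye-locating networks.

```python
def select_eye_network(mean_gaze_x: float, face_width: float, networks: list):
    """Pick one of five eye-locating networks from the horizontal mean-gaze position."""
    # Position of the gaze relative to the face center, in [-0.5, 0.5].
    offset = mean_gaze_x / face_width - 0.5
    band_edges = [-0.25, -0.08, 0.08, 0.25]             # hypothetical, unequal band widths
    band = sum(offset > edge for edge in band_edges)    # index 0..4
    return networks[band]
```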
The use of a plurality of neural networks for the different bands has the effect of making the inputs to each of the networks much more similar with respect to themselves. This is important because of the highly variable appearance of faces depending on whether the gaze is forward, leftward or rightward. A hierarchy of neural networks, each corresponding to a certain range of the gaze of the face, therefore keeps the inputs to each network relatively homogeneous.
Next, the entire face is rotated (in two dimensions) about the gaze point until the positions of the eyes are level on the horizontal axis in step 86. The gaze point becomes a reference point for registration of the test and reference images as indicated in step 88 in FIG. 2.
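The rotation about the gaze point can be sketched as below, here using OpenCV for the affine warp (the patent does not name a library); the eye coordinates are assumed to come from the selected eye-locating network.

```python
import numpy as np
import cv2

def level_eyes(face: np.ndarray, left_eye: tuple, right_eye: tuple) -> np.ndarray:
    """Rotate the face about the gaze point so the two eyes lie on a horizontal line."""
    lx, ly = left_eye
    rx, ry = right_eye
    gaze_point = ((lx + rx) / 2.0, (ly + ry) / 2.0)      # midpoint between the eyes
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))     # tilt of the inter-eye line
    rotation = cv2.getRotationMatrix2D(gaze_point, angle, 1.0)
    height, width = face.shape[:2]
    return cv2.warpAffine(face, rotation, (width, height))
```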
Next, the feature sets are normalized 90 (shown in FIG. 2). This is accomplished by, for example, normalizing each feature value by the mean amplitude of all feature values. This normalization process normalizes against variations such as lighting conditions so that the feature set used in the neural network can withstand varying contrast or lighting conditions.
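A sketch of this normalization, assuming the feature set is held as a single NumPy vector:

```python
import numpy as np

def normalize_features(features: np.ndarray) -> np.ndarray:
    """Divide every feature value by the mean amplitude of the whole feature set."""
    mean_amplitude = np.mean(np.abs(features))
    return features if mean_amplitude == 0 else features / mean_amplitude
```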
Next, in step 36 (in FIG. 2) the feature values are assigned to weights in the neural network 38. The preferred approach (for neural network 38, as well as for neural networks 26 and 30) is to quantize the feature values from an analog number to a quantum number that is positive or negative. This is done by taking the whole range of values of all sets and quantizing the range by certain ratios of twice the mean (positive and negative). Next, the positive feature values are ranked and the negative feature values are ranked with respect to their values. A set of positive ranks and a set of negative ranks are thereby defined. A given feature value can then be assigned to a bin that is quantized by ranking the values. In the preferred embodiment this is done by defining the ranks by the fractions 1/3 and 1/2. In particular, all of the elements in the input vector (which comprises both positive and negative numbers) are used to determine their positive mean and their negative mean. For example, twice the positive mean may be 1000 and twice the negative mean may be 1500. Applying the fractions of 1/3 and 1/2 to 1000 would yield 333 and 500. Thus the first rank would contain components from 0 to 333, the second rank components between 334 and 500, and the third rank components greater than 500. Next, all the individual components of the input vector are placed in one of the three ranks based on their values. The same process is also performed for the negative components in the feature vectors.
Next, each ranked component value is assigned a weight based on its rank. This process of assigning weights is described in more detail in FIG. 5. There are 6 bins, each bin corresponding to a weight. There are 3 negative and 3 positive bins throughout the total range of component values of -800 through +800. A four by four weight lookup table vector 92 is shown which contains the 16 components of the feature vector. For example, one of these components is 600, another is 400, another is -100. Also, a four by four weight vector 94 is depicted. Each of the 16 weight locations in the four by four weight vector 94 corresponds to one of the 16 components of the feature vector. Each location in the weight vector has six different weights corresponding to six different ranks.
In this example, there are three positive ranks and three negative ranks. As described above, each component in the feature vector is ranked. For example, 400 is determined to be of rank five; thus this component is mapped to the 5th of the six weights within the corresponding location in the four by four weight vector 94. Similarly, the component having a value of 600 is put into the 6th rank (the third positive rank), and accordingly this component is assigned to the weight value which exists in that rank of its corresponding location of weight vector 94. The component having a value of -100 is assigned to the 2nd rank.
This process is repeated for all of the components of the feature vector. In an actual image, however, the feature vector may have many more components. There may be, for example, 10,000 components in the feature vector.
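The quantization and weight selection described above might look like the following sketch; the rank boundaries follow the 1/3 and 1/2 fractions of twice the positive and negative means given in the text, while the 0-to-5 rank indexing and the shape of the lookup table are assumptions made for illustration.

```python
import numpy as np

RANK_FRACTIONS = (1.0 / 3.0, 1.0 / 2.0)    # rank boundaries from the text
NUM_RANKS = 6                              # three negative bins plus three positive bins

def rank_of(value: float, positive_mean: float, negative_mean: float) -> int:
    """Map one feature value to a rank 0..5 (0-2 negative, 3-5 positive)."""
    if value >= 0:   # zeros fall in the lowest positive bin here (assumed for the matching net)
        cut1 = 2 * positive_mean * RANK_FRACTIONS[0]
        cut2 = 2 * positive_mean * RANK_FRACTIONS[1]
        return 3 if value <= cut1 else (4 if value <= cut2 else 5)
    cut1 = 2 * abs(negative_mean) * RANK_FRACTIONS[0]
    cut2 = 2 * abs(negative_mean) * RANK_FRACTIONS[1]
    magnitude = abs(value)
    return 2 if magnitude <= cut1 else (1 if magnitude <= cut2 else 0)

def select_weights(features: np.ndarray, weight_table: np.ndarray) -> np.ndarray:
    """weight_table has shape (len(features), NUM_RANKS): six stored weights per location."""
    positives = features[features > 0]
    negatives = features[features < 0]
    positive_mean = positives.mean() if positives.size else 1.0
    negative_mean = negatives.mean() if negatives.size else -1.0
    ranks = np.array([rank_of(v, positive_mean, negative_mean) for v in features])
    return weight_table[np.arange(len(features)), ranks]
```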
It should be noted that some components of the feature vector may have a value of zero. When feature values equal zero the system can decide whether or not to put these values in a bin. This decision is made differently for different neural networks. For the networks used to locate the bounding box 26 and the eyes 30, feature values of zero are not used. However, for the matching neural network 38, feature values of zero are used for weight associations. This is because with the bounding box or the eyes the output of the neural net is a coordinate value and it is not desirable to have a non-feature contribute to the location of an x,y point. However, when a feature value for the face verification neural network 38 is zero, it is desirable to have that contribute to the result. For example, in a face, the absence of a feature (zero feature value) is an important indicator of a mismatch, whereas the absence of a feature is not important for locating the bounding box or the eyes. A non-zero value for a feature vector component means that a feature has been measured at that location, while a zero indicates that no feature has been measured at that location.
It should also be noted that the actual values of the selected weights in the vector are adaptive and will be modified during training as described in more detail below.
Also, the exact weight chosen in the weight vector will depend on the preexisting value of that weight vector component. However, there is a fixed relationship between each location in the feature vector and the corresponding location in the weight vector (each of which has multiple weights, one for each rank).
Once the weight vector 94 has been determined for both the reference set and test feature set the neural network 38 computes the normalized dot product of the two weight vectors. In essence, this operation computes the sum of the products of corresponding elements of the two weight vectors. This is operation 44 shown in FIG. 2. It will be appreciated that the dot product output will be a number which is proportional to the similarity between the two weight vectors. That is, highly similar weight vectors are more parallel and will yield higher dot product outputs indicating that the faces are similar. Dissimilar weight vectors will yield lower valued dot product outputs indicating that the faces are less similar.
The fact that the dot product operation is a "normalized" dot product means that the dot product output 46 is normalized to the unit circle in N-dimensional space. The normalization process is performed by dividing the dot product by the product of the two vector lengths. The normalized dot product results in a confidence level, and that confidence level is scaled by a linear transformation constant to get the range needed, i.e., 0-10 or 0-100. If the confidence measure is above a preset verification threshold then the result is "positive". This means that the face in the test clip depicts a face belonging to the same person as that in the reference clip. If the value is not above the predetermined threshold the result is "negative," which means that the test clip and reference clip depict faces of different people.
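A minimal sketch of this confidence computation (the clamp to a non-negative value and the 0-10 scale are assumptions about how the linear transformation might be applied):

```python
import numpy as np

def match_confidence(reference_weights: np.ndarray, test_weights: np.ndarray,
                     scale: float = 10.0) -> float:
    """Length-normalized dot product of the two selected weight vectors, mapped to 0-10."""
    norm = np.linalg.norm(reference_weights) * np.linalg.norm(test_weights)
    if norm == 0:
        return 0.0
    cosine = float(np.dot(reference_weights, test_weights)) / norm   # in [-1, 1]
    return scale * max(cosine, 0.0)

# Usage: a confidence above the verification threshold (e.g. 5.0) is a "positive" result.
```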
The procedure for training the neural network 38 to correctly perform the face matching procedure will now be described. Initially all of the weights are set to zero.
When two training facial images are input into the system, since all the weight values are zero the resulting dot product of the two weight vectors will also be zero. Because this is training data, however, it is known whether the two faces are from the same person or not. If they are from the same person then it is desired to have the result be a relatively high valued positive number. This is because matching feature vectors should produce above-threshold outputs. The threshold may be selected arbitrarily to be at the midrange. When the two faces are from the same person, a starting positive value is selected and the two weight vectors are made to be the same positive value. If the two faces are from different people then each weight value is given an opposite assigned value: one starting value is positive and the other is negative but of equal magnitude.
Subsequently the neural network will be trained on many examples of pairs of faces, some of which match, and some of which do not match. A variety of faces in a variety of orientations and lighting conditions will be used to allow the neural network to generalize all of this information. As a result it will be able to recognize when two different views of the same person are actually the same person, and when two images of different people are in fact faces of different people.
The learning algorithm used in the preferred embodiment is as follows:
1. If the output 46 is correct make no changes to the weights. That is, a correct result means that two faces that are the same generate an output which is above threshold, and two faces which are from different persons generate an output that is below threshold.
2. If the result is negative (below threshold) and incorrect, adapt corresponding weights in weight vectors 1 and 2 to be closer to each other. The amount of adjustment is preferably a percentage of the difference between the two weights. This percentage is the learning rate for the adaptation. It should be noted that only weights which are selected by the feature sets 1 and 2 are adapted; non-selected weights are not. As discussed above, if both weight values are zero (as in the initial condition), both weight values are changed to be a preset constant value.
3. If the output 46 is positive (above threshold) and incorrect, adapt the corresponding weights in weight vectors 1 and 2 to be farther from each other. Again, the amount of adjustment is a percentage of their difference. Only weights which are selected by the feature sets are adapted. If both the weight values are zero, the weight value of weight set 1 is set to the same preset constant value used in training step 2 above. However, the weight value from weight set 2 is set to the negative of this value.
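These three rules could be sketched as the in-place update below, operating only on the weights already selected by the two feature sets; START_VALUE and LEARNING_RATE are illustrative values, not figures taken from the patent.

```python
import numpy as np

START_VALUE = 0.1      # hypothetical preset constant for still-zero weights
LEARNING_RATE = 0.05   # hypothetical fraction of the weight difference to move

def adapt_selected_weights(w1: np.ndarray, w2: np.ndarray,
                           same_person: bool, output: float, threshold: float) -> None:
    """Apply training rules 1-3 in place to the weights selected by the two feature sets."""
    correct = (output > threshold) == same_person
    if correct:
        return                                   # rule 1: correct result, no change
    both_zero = (w1 == 0) & (w2 == 0)
    step = LEARNING_RATE * (w2 - w1)
    if same_person:                              # rule 2: pull the selected weights together
        w1 += step
        w2 -= step
        w1[both_zero] = START_VALUE              # initial condition: same positive constant
        w2[both_zero] = START_VALUE
    else:                                        # rule 3: push the selected weights apart
        w1 -= step
        w2 += step
        w1[both_zero] = START_VALUE              # initial condition: equal and opposite values
        w2[both_zero] = -START_VALUE
```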
The training images should comprise pairs of randomly selected images of faces. Also, images of the same person should be used approximately half the time and images of different persons should be used about half the time. The objective of training is to give the system enough exposure to different orientations, different head postures, etc. so that it will be able to generalize across different head orientations and head postures. Thus, the training examples will include examples of a head looking straight, to the left, to the right, up and down.
For example, the system may be trained with images of 300 different people in different orientations. It is being trained not to recognize any specific face but instead it is being trained to recognize what is similar about different images of the same face. It is being trained to be a generalized face recognizer as opposed to being able to recognize any specific face.
In a preferred embodiment, hysteresis is used during learning. This means that to avoid learning, the result must be above or below the threshold by a given amount. For example, if two training images are from the same face, and the threshold is defined as an output of 5 on a scale of 0 to 10, then to avoid learning the output must be at least 5 plus delta. Thus any output less than 5 plus delta will cause the system to adapt the weights to be closer to each other. In this way, only results which are unambiguously correct avoid further learning. Results which are correct, but only slightly above threshold, will be further refined by additional training.
Likewise, when the system is trained with two training images of different faces, in order to avoid adaptation of the weights, the result must be below threshold by a given amount, for example below 5 minus delta. As a result, any output above 5 minus delta will result in adaptation of the weights to produce less ambiguous results. In a preferred embodiment the delta amount used for the learning hysteresis may be 0.5. It should be remembered that this hysteresis is only used during the training procedure and not during actual use of the system on unknown faces. Thus, in actual use, where it is not known beforehand whether the faces match or not, any above-threshold output will be considered to be a match and any result which is at or below threshold will be considered to be no match. It should be noted that the weights are always associated with a certain location in the neural network 38 and a certain feature of the neural network. However, every face is different, so every image that comes from a different face will pick up different weights. But the weights themselves are always associated with a certain location and with a certain feature, even though which weights are actually picked up depends on which face is being processed. As a result, the entire neural network will begin to average over all the faces it has ever seen in its experience.
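The training-time hysteresis test might be expressed as follows, with the threshold and delta taken from the example above:

```python
def needs_adaptation(output: float, same_person: bool,
                     threshold: float = 5.0, delta: float = 0.5) -> bool:
    """During training, only results that clear the threshold by delta escape adaptation."""
    if same_person:
        return output < threshold + delta    # must exceed 5 + delta to stand unchanged
    return output > threshold - delta        # must fall below 5 - delta to stand unchanged
```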
It should also be noted that the operation of the neural network 38 in accordance with the present invention is quite different from prior techniques, such as the self-organizing maps of Kohonen as described, for example, in the article R. Lippmann, "An Introduction to Computing with Neural Nets," IEEE ASSP Magazine, April 1987, pp. 4-22, which is incorporated by reference. Those skilled in the art will appreciate that with the Kohonen method a dot product is taken between a single input and the weight vectors in the neural network. The weight vector which generates the highest dot product is designated the "winner" and that weight vector is modified during training to be even closer to the input vector.
In contrast, in the present invention two inputs operate on the neural network simultaneously instead of just one. Further, in the present invention, each input vector selects weights in the neural network and the dot product between the two selected weight vectors is determined. During learning, in the present invention, both sets of weight vectors are adapted to be closer to each other or farther apart from each other. Thus it is important to recognize that the architecture and learning algorithm of the present invention are specifically adapted to perform a comparison between two inputs, unlike the Kohonen network, which is adapted to classify an input into one of several outputs or to associate an input with an output. The Kohonen network does not perform the function of comparing the similarity between two inputs. Also, in the present invention the actual feature vector is never used in the dot product as it is in Kohonen networks. In the present invention only weights are used in the dot product operation. Also, in the Kohonen system the weights are initially set to random values; in the present invention the weights are initially set to zero.
Another advantage of the present invention is that it can be trained to generate a high matching value for incompatible looking objects. This is a major advantage over prior art approaches to face recognition. For example, suppose input vectors one and two representing facial images were identical. If a dot product is performed on the two images and they are identical, the result would be very high. However, if the images are offset by even one or two pixels then the dot product will be very low because everything is misregistered. In contrast, with the technique of the present invention the system can be trained to generate a matching output for different appearing objects. For example, if the input images were of an apple and an orange each image would select weight vectors and those weight vectors would be trained on various images of apples and oranges to generate a high dot product value. Yet a dot product between the raw image of the apple and orange would yield a very low value.
This malleable nature of the present invention is important because the human face varies tremendously whenever the orientation and lighting etc. of the face is changed. The present invention achieves the goal of being able to match images that are in some ways incompatible. This approach works because it defers the dot product operation to a reference of the inputs (the weight vectors) and does not perform the dot product on the raw image.
Of course, there are limits as to how variable the inputs can be, even with the present invention. If input images vary too widely the training process will average weights across too wide a variability and the results will be unsatisfactory. This is why it is important to reliably produce the registration of the images, for example by achieving a very good location of a particular feature (for example, the eyes). If this feature is instead mislocated the faces will be misregistered and the results will be less reliable.
It should be noted that a system such as the one depicted in FIG. 1 can be tampered with if a fraudulent user is able to substitute his own image for the correct image on the access card. In this situation (assuming no prior storage of the facial image in a database), the system will compare the image on the card to the fraudulent user and will verify a match. A simple way to circumvent this problem is shown in FIG. 6. In effect, this approach utilizes the facial image as a personal identification number (PIN). This may be accomplished by taking a predetermined random sample of values of the picture that was taken of the correct user during his enrollment. This random sample of values from the image is then stored in a database. If a fraudulent user then puts his picture on the card, the random sample of the image will be scanned, it will be noted that the sampled values differ from those in the database, and the card will be rejected. For example, as shown in FIG. 6 the access card 12 is entered into the card reader 14 which reads and digitizes the image 16 on the card. A random sample of this image is then taken by the module 94. Previously, a random sample of a reference image of the known user's face has been acquired and stored in memory unit 96. This may be done by entering the reference image into the card reader and transmitting the digitized image to the random sample unit 94, which then transmits the random sample to the memory unit 96. The two corresponding random samples (each associated with a common I.D. No.) are then compared in comparison unit 98. A match indicates that the faces are the same and the card is valid. No match indicates that the face on the card is not the same as the one previously stored and access will be denied.
It should be noted that while the positions of the samples are determined randomly, the samples will always be taken from the same repeatable locations. For example, out of the entire image on the card the system may sample 24 locations in the image at random, and each location may comprise a byte. Thus the total sample is 24 bytes. For each byte the system may sample a random bit position (yielding a zero or one), which would yield 24 bits of information, or three bytes. The ID number may also comprise about three bytes of information. This would yield a total of six bytes: three for the ID number and three for the face PIN. As a result, with such minimal storage requirements it would be possible to put all the conceivable users of a given system on a single disk. For example, six bytes for forty million users would fit on a single disk. This disk could then be distributed in read-only form to all the locations that required access checking.
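A sketch of this face-PIN sampling is shown below; the fixed seed stands in for the system's repeatable (but randomly chosen) sample positions, and the positions and bit indices themselves are hypothetical.

```python
import random

def face_pin(image_bytes: bytes, num_samples: int = 24, seed: int = 1234) -> bytes:
    """Sample one bit at each of 24 repeatable positions and pack them into three bytes."""
    rng = random.Random(seed)            # same seed -> same sample positions every time
    value = 0
    for _ in range(num_samples):
        byte_index = rng.randrange(len(image_bytes))
        bit_index = rng.randrange(8)
        bit = (image_bytes[byte_index] >> bit_index) & 1
        value = (value << 1) | bit
    return value.to_bytes(num_samples // 8, "big")

# Stored alongside a roughly three-byte ID, the record per user is about six bytes.
```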
In this way the image on the card itself is used as a PIN number. Two checks will be made before access is allowed. The first check occurs when a random sampling of the image on the card alone is taken and checked with the previously stored random sampling of the same image on the card. If there is no match access is denied because the card is invalid. The second check is when the live face of the person desiring access is compared to the image on the card and access is only allowed if a match is found.
It will be appreciated that the technique for using a face image on a card as a PIN as described above and shown in FIG. 6 can also be used alone, without the face matching system for checking the live image of the user's face. For example, in applications where it is desirable to simply check the validity of a card having the user's face imprinted on it, the system shown in FIG. 6 will ensure that the face on the card is the same one previously stored and that the card is not fraudulent.
It should be noted that the face recognition system 10 shown in FIG. 1 could be tampered with by a fraudulent user. For example, in some situations it may be possible to connect a VCR to the frame grabber 21 in place of the camera 20. Thus, an image of the authorized user could be inserted in place of that of the actual fraudulent user. Other ways are also possible for a fraudulent user to insert the authorized user's image into the system, for example, by inserting an image directly into the automated face locator 26. For example, in systems such as those shown in FIG. 7, where the present invention is utilized by persons at remote locations, it is desirable to ensure that fraudulent users do not tamper with the signal which is transmitted remotely to the automated face verifier of the present invention. In accordance with an alternative embodiment of the present invention, the camera 20 and frame grabber 102 are enclosed in a tamperproof box 104. The tamperproof box 104 shown in FIG. 7 comprises a conventional tamperproof box well known in the art in which the components inside are disabled by an EPROM which erases if the box is tampered with. Also within the tamperproof box is a special hardware device 106 which takes the live image and samples it in a manner similar to the technique disclosed above for using the facial image as a PIN. That is, the image is sampled at, for example, 24 locations and these samples are stored in three bytes. These three bytes are then encrypted by the sample unit 106 and the system, using I/O unit 108, sends these encrypted samples along transmission line 110 to a remote server 112 which decrypts the sample. The server also receives the live image from the camera and rechecks the live image against the decrypted sample in comparator 114 to make sure that the two are identical.
In this way, the system ensures that the live image is actually the image of the person who is actually there, since the camera and frame grabber cannot be tampered with. It also ensures that a fraudulent user has not inserted a fraudulent image of an authorized user at a point in the system downstream of the camera and frame grabber. Once this authentication process takes place, the automated face verifier of the present invention described above can then compare the live image with the image on the access card in accordance with the techniques discussed above.
In another alternative embodiment of the present invention a technique is employed for minimizing the variations in facial orientation when the image is acquired. When the camera 20 takes the picture of the person desiring access, one approach would be to prompt the person (for example, by recorded voice command) to look into the camera while the picture is taken. However, it has been found that when prompted, people frequently will do things such as pose, assume an unnatural expression, adjust their hair or clothing, etc. The result is an unnatural and less consistent image. Further, the step of prompting and taking the picture takes time, which slows down the process.
Also, there is added cost in the prompting mechanism.
To overcome these problems, in the preferred embodiment the camera is disposed to acquire the image of the person automatically when the person initiates a certain action. This action may comprise, for example, the pressing of a key on the keyboard, inserting a card, or entry into a certain proximity of the access unit. By taking the picture in a candid way a more natural and consistent expression and pose will result. For example, when a person is pressing keys on a keyboard they are very likely to be looking directly at the keyboard with a neutral expression. This reduces variations in the facial expression and orientation. Also, this approach eliminates the cost of the prompting mechanism and increases the speed of the recognition.
It should be noted that the above described techniques have been described in connection with the problem of matching facial images. However, the same techniques may easily be adapted to other patterned images and object patterns. These may include, for example, voice prints, finger prints, chromosomal and DNA patterns, etc. That is, these techniques are useful for any application where there is something about a pattern that is identifiable with a person (or other entity) and it is desirable to easily and automatically determine whether a test pattern matches that of a reference patterned image.
Also, it should be noted that while the feature sets are derived in the above discussion using convolutions and scaling, other known techniques may also be employed with the teachings of the present invention. For example, the feature sets may be derived using morphological operations, image transforms, pattern blob analysis, and eigenvalue approaches.
Further, while the preferred embodiment employs neural networks to perform verification, other adaptive processors could be used including, but not limited to, genetic algorithms, fuzzy logic, etc. In general adaptive processors are those which can perform facial verification for faces that vary by head orientation, lighting conditions, facial expression, etc., without having explicit knowledge of these variations. Thus, one could substitute another type of adaptive processor for the neural network in the present invention.
It will also be appreciated by those skilled in the art that all of the functions of the present invention can be implemented by suitable computer programming techniques. Also, it will be appreciated that the techniques discussed above have applications outside facial recognition and matching.
Those skilled in the art can appreciate that other advantages can be obtained from the use of this invention and that modifications may be made without departing from the true spirit of the invention after studying the specification, drawings and following claims.

Claims (12)

What is claimed is:
1. A system for determining the likelihood that two object patterns arise from the same object source, said system comprising:
a. means for receiving first and second object patterns each including a plurality of feature components having a plurality of values;
b. means for assigning a predetermined weight to each said feature component in each of said object patterns, wherein the assigned weights are one of a plurality of weights in corresponding first and second weight sets in a neural network, wherein each weight set includes a plurality of weight subsets, each weight subset corresponding to the location of one of said feature components, wherein particular weights in each subset are assigned according to the value of the component; and
c. means for determining an output of said neural network by calculating a comparison function of the assigned weights in the two respective weight sets, wherein the values of the weights are predetermined such that the output is a measure of the likelihood that two object patterns arise from the same object source, wherein the values of said predetermined weights are derived from a training procedure and are independent of the value of the feature component.
2. The system of claim 1 further comprising a learning unit which comprises:
means for generating sets of training pairs of first and second training object patterns derived from known object sources, the training pairs comprising first and second training object patterns that vary from each other, some of which derive from the same object source;
means for adjusting the value of said assigned weights of the first and second weight sets to have a greater difference for said training pairs arising from two different object sources that generate an output incorrectly indicating the same source; and
means for adjusting the value of said assigned weights of the first and second weight sets to have a smaller difference measure for said training pairs arising from the same object source that generate an output incorrectly indicating different sources.
3. The system of claim 1 further comprising:
means for defining a bounding region around said object pattern within a larger pattern, wherein portions outside of the bounding region are discarded.
4. The system of claim 3 further comprising a neural network for deriving the location of said bounding region adaptively.
5. The system of claim 3 wherein said object patterns each comprise human facial images and further comprising
means for locating the position of the eyes in said human face; and
means for rotating the facial image about a fixed point until the eyes are level horizontally, and registering the two object patterns based on said eye positions.
6. The system of claim 1 further comprising:
means for discretizing each feature value in discrete bins as a function of the value's proportion of the mean of the feature values; and
means for assigning each feature value to one of a set of possible weight values in said weight set based on said discretizing.
7. A method for determining the likelihood that two object patterns arise from the same object source, said method comprising:
a. receiving first and second object patterns each including a plurality of feature components having different values;
b. assigning a predetermined weight to each said feature component in each of said object patterns, wherein the assigned weights are one of a plurality of weights in corresponding first or second weight sets in a neural network, wherein each weight set includes a plurality of weight subsets, each weight subset corresponding to the location of one of said feature components, wherein particular weights in each subset are assigned according to the value of the component; and
c. determining an output of said neural network by calculating a comparison function of the assigned weights in the two respective weight sets, wherein the values of the weights are predetermined such that the output is a measure of the likelihood that two object patterns arise from the same object source, and wherein the values of said predetermined weights are derived from a training procedure and are independent of the value of the feature component.
8. The method of claim 7 further comprising performing a training procedure to determine said weight values, the training procedure comprising the steps of:
generating sets of training pairs of first and second training object patterns derived from known object sources, the training pairs comprising first and second training object patterns that vary from each other, some of which derive from the same object source;
adjusting the value of said assigned weights of the first and second weight sets to have a greater difference measure for said training pairs arising from two different object sources that generate an output incorrectly indicating the same source; and
adjusting the value of said assigned weights of the first and second weight sets to have a smaller difference measure for said training pairs arising from the same object source that generate an output incorrectly indicating different sources.
9. The method of claim 7 further comprising:
defining a bounding region around said object pattern within a larger pattern, wherein portions outside of the bounding region are discarded.
10. The method of claim 9 further comprising a neural network for deriving the location of said bounding region adaptively.
11. The method of claim 9 further comprising:
locating the position of the eyes, by determining the average eye position; and
rotating the facial image about the average eye position until the eyes are level horizontally, and registering the two object patterns based on said eye positions.
12. The method of claim 7 further comprising:
ranking each feature value as a function of the value's proportion of the mean of the feature values; and
assigning each feature value to one of a set of possible weight values in said weight set based on said ranking.
US09/020,443 1995-01-31 1998-02-09 System, method and application for the recognition, verification and similarity ranking of facial or other object patterns Expired - Fee Related US6128398A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/020,443 US6128398A (en) 1995-01-31 1998-02-09 System, method and application for the recognition, verification and similarity ranking of facial or other object patterns

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US38222995A 1995-01-31 1995-01-31
US09/020,443 US6128398A (en) 1995-01-31 1998-02-09 System, method and application for the recognition, verification and similarity ranking of facial or other object patterns

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US38222995A Continuation 1995-01-31 1995-01-31

Publications (1)

Publication Number Publication Date
US6128398A true US6128398A (en) 2000-10-03

Family

ID=23508048

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/020,443 Expired - Fee Related US6128398A (en) 1995-01-31 1998-02-09 System, method and application for the recognition, verification and similarity ranking of facial or other object patterns

Country Status (1)

Country Link
US (1) US6128398A (en)

Cited By (112)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001027763A1 (en) * 1999-10-08 2001-04-19 Ivex Corporation Networked digital security system and methods
US20010005222A1 (en) * 1999-12-24 2001-06-28 Yoshihiro Yamaguchi Identification photo system and image processing method
US20020101619A1 (en) * 2001-01-31 2002-08-01 Hisayoshi Tsubaki Image recording method and system, image transmitting method, and image recording apparatus
US20020129251A1 (en) * 2001-03-01 2002-09-12 Yukio Itakura Method and system for individual authentication and digital signature utilizing article having DNA based ID information mark
US20020154793A1 (en) * 2001-03-05 2002-10-24 Robert Hillhouse Method and system for adaptively varying templates to accommodate changes in biometric information
US20030005296A1 (en) * 2001-06-15 2003-01-02 Eastman Kodak Company Method for authenticating animation
US6504942B1 (en) * 1998-01-23 2003-01-07 Sharp Kabushiki Kaisha Method of and apparatus for detecting a face-like region and observer tracking display
US6549913B1 (en) * 1998-02-26 2003-04-15 Minolta Co., Ltd. Method for compiling an image database, an image database system, and an image data storage medium
US20030072489A1 (en) * 2001-08-28 2003-04-17 Sick Ag Method of recognizing a code
US6628821B1 (en) * 1996-05-21 2003-09-30 Interval Research Corporation Canonical correlation analysis of image/control-point location coupling for the automatic location of control points
US20030185423A1 (en) * 2001-07-27 2003-10-02 Hironori Dobashi Face image recognition apparatus
US20030190076A1 (en) * 2002-04-05 2003-10-09 Bruno Delean Vision-based operating method and system
US20030198368A1 (en) * 2002-04-23 2003-10-23 Samsung Electronics Co., Ltd. Method for verifying users and updating database, and face verification system using the same
US20040052418A1 (en) * 2002-04-05 2004-03-18 Bruno Delean Method and apparatus for probabilistic image analysis
US20040151347A1 (en) * 2002-07-19 2004-08-05 Helena Wisniewski Face recognition system and method therefor
US6793128B2 (en) * 2001-06-18 2004-09-21 Hewlett-Packard Development Company, L.P. Face photo storage system
US20040202385A1 (en) * 2003-04-09 2004-10-14 Min Cheng Image retrieval
US20050030151A1 (en) * 2003-08-07 2005-02-10 Abhishek Singh Secure authentication of a user to a system and secure operation thereafter
US20050129306A1 (en) * 2003-12-12 2005-06-16 Xianglin Wang Method and apparatus for image deinterlacing using neural networks
US20050213796A1 (en) * 2004-03-12 2005-09-29 Matsushita Electric Industrial Co., Ltd. Multi-identification method and multi-identification apparatus
US20050262067A1 (en) * 1999-02-01 2005-11-24 Lg Electronics Inc. Method of searching multimedia data
US20050270948A1 (en) * 2004-06-02 2005-12-08 Funai Electric Co., Ltd. DVD recorder and recording and reproducing device
US20060074986A1 (en) * 2004-08-20 2006-04-06 Viisage Technology, Inc. Method and system to authenticate an object
US20060147093A1 (en) * 2003-03-03 2006-07-06 Takashi Sanse ID card generating apparatus, ID card, facial recognition terminal apparatus, facial recognition apparatus and system
US20060167833A1 (en) * 2004-10-13 2006-07-27 Kurt Wallerstorfer Access control system
US20060193520A1 (en) * 2005-02-28 2006-08-31 Takeshi Mita Object detection apparatus, learning apparatus, object detection system, object detection method and object detection program
US7130454B1 (en) * 1998-07-20 2006-10-31 Viisage Technology, Inc. Real-time facial recognition and verification system
US20060251327A1 (en) * 2002-12-20 2006-11-09 Miroslav Trajkovic Light invariant face recognition
US20070014430A1 (en) * 2002-01-30 2007-01-18 Samsung Electronics Co., Ltd. Apparatus and method for providing security in a base or mobile station by using detection of face information
US7206748B1 (en) * 1998-08-13 2007-04-17 International Business Machines Corporation Multimedia player toolkit for electronic content delivery
US20070247526A1 (en) * 2004-04-30 2007-10-25 Flook Ronald A Camera Tamper Detection
US20080037839A1 (en) * 2006-08-11 2008-02-14 Fotonation Vision Limited Real-Time Face Tracking in a Digital Image Acquisition Device
US20080037838A1 (en) * 2006-08-11 2008-02-14 Fotonation Vision Limited Real-Time Face Tracking in a Digital Image Acquisition Device
US20080062278A1 (en) * 2001-05-09 2008-03-13 Sal Khan Secure Access Camera and Method for Camera Control
US20080181508A1 (en) * 2007-01-30 2008-07-31 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and program
US7440593B1 (en) 2003-06-26 2008-10-21 Fotonation Vision Limited Method of improving orientation and color balance of digital images using face detection information
US20090003652A1 (en) * 2006-08-11 2009-01-01 Fotonation Ireland Limited Real-time face tracking with reference images
US7533805B1 (en) * 1998-10-09 2009-05-19 Diebold, Incorporated Data bearing record based capture and correlation of user image data at a card reading banking system machine
US7565030B2 (en) 2003-06-26 2009-07-21 Fotonation Vision Limited Detecting orientation of digital images using face detection information
US7574016B2 (en) 2003-06-26 2009-08-11 Fotonation Vision Limited Digital image processing using face detection information
US20090238472A1 (en) * 2008-03-18 2009-09-24 Kabushiki Kaisha Toshiba Image recognition device, image recognition method, and image scanning apparatus having image recognition device
US7616233B2 (en) 2003-06-26 2009-11-10 Fotonation Vision Limited Perfecting of digital image capture parameters within acquisition devices using face detection
US20090278655A1 (en) * 2008-05-06 2009-11-12 The Abraham Joshua Heschel School Method for inhibiting egress from a chamber containing contaminants
US20100002912A1 (en) * 2005-01-10 2010-01-07 Solinsky James C Facial feature evaluation based on eye location
US20100021008A1 (en) * 2008-07-23 2010-01-28 Zoran Corporation System and Method for Face Tracking
US20100026802A1 (en) * 2000-10-24 2010-02-04 Object Video, Inc. Video analytic rule detection system and method
US7684630B2 (en) 2003-06-26 2010-03-23 Fotonation Vision Limited Digital image adjustable compression and resolution using face detection information
US7693311B2 (en) 2003-06-26 2010-04-06 Fotonation Vision Limited Perfecting the effect of flash within an image acquisition devices using face detection
US7844076B2 (en) 2003-06-26 2010-11-30 Fotonation Vision Limited Digital image processing using face detection and skin tone information
US7855737B2 (en) 2008-03-26 2010-12-21 Fotonation Ireland Limited Method of making a digital camera image of a scene including the camera user
US7900823B1 (en) 1998-10-09 2011-03-08 Diebold, Incorporated Banking system controlled by data bearing records
US7916971B2 (en) 2007-05-24 2011-03-29 Tessera Technologies Ireland Limited Image processing method and apparatus
US7916897B2 (en) 2006-08-11 2011-03-29 Tessera Technologies Ireland Limited Face tracking for controlling imaging parameters
US7953251B1 (en) 2004-10-28 2011-05-31 Tessera Technologies Ireland Limited Method and apparatus for detection and correction of flash-induced eye defects within digital images using preview or other reference images
US7962629B2 (en) 2005-06-17 2011-06-14 Tessera Technologies Ireland Limited Method for establishing a paired connection between media devices
US7965875B2 (en) 2006-06-12 2011-06-21 Tessera Technologies Ireland Limited Advances in extending the AAM techniques from grayscale to color images
US7974714B2 (en) 1999-10-05 2011-07-05 Steven Mark Hoffberg Intelligent electronic appliance system and method
US8046313B2 (en) 1991-12-23 2011-10-25 Hoffberg Steven M Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
US8055067B2 (en) 2007-01-18 2011-11-08 DigitalOptics Corporation Europe Limited Color segmentation
US8064645B1 (en) 2011-01-20 2011-11-22 Daon Holdings Limited Methods and systems for authenticating users
US20110285504A1 (en) * 2008-11-28 2011-11-24 Sergio Grau Puerto Biometric identity verification
US8085992B1 (en) 2011-01-20 2011-12-27 Daon Holdings Limited Methods and systems for capturing biometric data
US8155397B2 (en) 2007-09-26 2012-04-10 DigitalOptics Corporation Europe Limited Face tracking in a camera processor
US8213737B2 (en) 2007-06-21 2012-07-03 DigitalOptics Corporation Europe Limited Digital image enhancement with reference images
US8224039B2 (en) 2007-02-28 2012-07-17 DigitalOptics Corporation Europe Limited Separating a directional lighting variability in statistical face modelling based on texture space decomposition
US20120308124A1 (en) * 2011-06-02 2012-12-06 Kriegman-Belhumeur Vision Technologies, Llc Method and System For Localizing Parts of an Object in an Image For Computer Vision Applications
US8330831B2 (en) 2003-08-05 2012-12-11 DigitalOptics Corporation Europe Limited Method of gathering visual meta data using a reference image
US8345114B2 (en) 2008-07-30 2013-01-01 DigitalOptics Corporation Europe Limited Automatic face and skin beautification using face detection
US8351662B2 (en) 2010-09-16 2013-01-08 Seiko Epson Corporation System and method for face verification using video sequence
US8379917B2 (en) 2009-10-02 2013-02-19 DigitalOptics Corporation Europe Limited Face recognition performance using additional image features
US20130156276A1 (en) * 2011-12-14 2013-06-20 Hon Hai Precision Industry Co., Ltd. Electronic device with a function of searching images based on facial feature and method
US8494286B2 (en) 2008-02-05 2013-07-23 DigitalOptics Corporation Europe Limited Face detection in mid-shot digital images
US8498452B2 (en) 2003-06-26 2013-07-30 DigitalOptics Corporation Europe Limited Digital image processing using face detection information
US20130198079A1 (en) * 2012-01-27 2013-08-01 Daniel Mattes Verification of Online Transactions
US8503800B2 (en) 2007-03-05 2013-08-06 DigitalOptics Corporation Europe Limited Illumination detection using classifier chains
US8593542B2 (en) 2005-12-27 2013-11-26 DigitalOptics Corporation Europe Limited Foreground/background separation using reference images
US20130329971A1 (en) * 2010-12-10 2013-12-12 Nagravision S.A. Method and device to speed up face recognition
US8649604B2 (en) 2007-03-05 2014-02-11 DigitalOptics Corporation Europe Limited Face searching and detection in a digital image acquisition device
US8675991B2 (en) 2003-06-26 2014-03-18 DigitalOptics Corporation Europe Limited Modification of post-viewing parameters for digital images using region or feature information
US8682097B2 (en) 2006-02-14 2014-03-25 DigitalOptics Corporation Europe Limited Digital image enhancement with reference images
US20140093156A1 (en) * 2012-09-28 2014-04-03 Ncr Corporation Methods of processing data from multiple image sources to provide normalized confidence levels for use in improving performance of a recognition processor
US8711217B2 (en) 2000-10-24 2014-04-29 Objectvideo, Inc. Video surveillance system employing video primitives
US8965046B2 (en) 2012-03-16 2015-02-24 Qualcomm Technologies, Inc. Method, apparatus, and manufacture for smiling face detection
US8989453B2 (en) 2003-06-26 2015-03-24 Fotonation Limited Digital image processing using face detection information
US9129381B2 (en) 2003-06-26 2015-09-08 Fotonation Limited Modification of post-viewing parameters for digital images using image region or feature information
US9269010B2 (en) 2008-07-14 2016-02-23 Jumio Inc. Mobile phone payment system using integrated camera credit card reader
US9305230B2 (en) 2008-07-14 2016-04-05 Jumio Inc. Internet payment system using credit card imaging
US20160373437A1 (en) * 2015-02-15 2016-12-22 Beijing Kuangshi Technology Co., Ltd. Method and system for authenticating liveness face, and computer program product thereof
US9530047B1 (en) * 2013-11-30 2016-12-27 Beijing Sensetime Technology Development Co., Ltd. Method and system for face image recognition
EP1552464B1 (en) 2002-07-09 2017-01-11 Neology, Inc. System and method for providing secure identification solutions
USRE46310E1 (en) 1991-12-23 2017-02-14 Blanding Hovenweep, Llc Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
US9641752B2 (en) 2015-02-03 2017-05-02 Jumio Corporation Systems and methods for imaging identification information
US9692964B2 (en) 2003-06-26 2017-06-27 Fotonation Limited Modification of post-viewing parameters for digital images using image region or feature information
US20180096212A1 (en) * 2016-09-30 2018-04-05 Alibaba Group Holding Limited Facial recognition-based authentication
US20180365512A1 (en) * 2017-06-20 2018-12-20 Nvidia Corporation Equivariant landmark transformation for landmark localization
CN109118621A (en) * 2018-07-24 2019-01-01 石数字技术成都有限公司 Face registration system for face recognition access control and its application in access control
US20190095700A1 (en) * 2017-09-28 2019-03-28 Nec Laboratories America, Inc. Long-tail large scale face recognition by non-linear feature level domain adaption
US10257191B2 (en) 2008-11-28 2019-04-09 Nottingham Trent University Biometric identity verification
CN109712104A (en) * 2018-11-26 2019-05-03 深圳艺达文化传媒有限公司 Method for displaying a cartoon avatar in a selfie video, and related product
US10339367B2 (en) 2016-03-29 2019-07-02 Microsoft Technology Licensing, Llc Recognizing a face and providing feedback on the face-recognition process
US10361802B1 (en) 1999-02-01 2019-07-23 Blanding Hovenweep, Llc Adaptive pattern recognition based control system and method
US20190377409A1 (en) * 2018-06-11 2019-12-12 Fotonation Limited Neural network image processing apparatus
WO2020018416A1 (en) * 2018-07-16 2020-01-23 Alibaba Group Holding Limited Payment method, apparatus, and system
US10552697B2 (en) 2012-02-03 2020-02-04 Jumio Corporation Systems, devices, and methods for identifying user data
US10579785B2 (en) * 2017-09-29 2020-03-03 General Electric Company Automatic authentification for MES system using facial recognition
USRE47908E1 (en) 1991-12-23 2020-03-17 Blanding Hovenweep, Llc Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
EP3614300A4 (en) * 2017-04-20 2020-04-22 Hangzhou Hikvision Digital Technology Co., Ltd. People-credentials comparison authentication method, system and camera
USRE48056E1 (en) 1991-12-23 2020-06-16 Blanding Hovenweep, Llc Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
US20200202515A1 (en) * 2018-12-21 2020-06-25 General Electric Company Systems and methods for deep learning based automated spine registration and label propagation
EP3819812A1 (en) * 2019-11-08 2021-05-12 Axis AB A method of object re-identification
US20220004758A1 (en) * 2015-10-16 2022-01-06 Magic Leap, Inc. Eye pose identification using eye features
US11861937B2 (en) * 2017-03-23 2024-01-02 Samsung Electronics Co., Ltd. Facial verification method and apparatus

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4975969A (en) * 1987-10-22 1990-12-04 Peter Tal Method and apparatus for uniquely identifying individuals by particular physical characteristics and security system utilizing the same
US5056147A (en) * 1989-05-16 1991-10-08 Products From Ideas Ltd. Recognition procedure and an apparatus for carrying out the recognition procedure
US5161204A (en) * 1990-06-04 1992-11-03 Neuristics, Inc. Apparatus for generating a feature matrix based on normalized out-class and in-class variation matrices
US5199081A (en) * 1989-12-15 1993-03-30 Kabushiki Kaisha Toshiba System for recording an image having a facial image and id information
US5432864A (en) * 1992-10-05 1995-07-11 Daozheng Lu Identification card verification system

Cited By (249)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USRE46310E1 (en) 1991-12-23 2017-02-14 Blanding Hovenweep, Llc Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
USRE47908E1 (en) 1991-12-23 2020-03-17 Blanding Hovenweep, Llc Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
US8046313B2 (en) 1991-12-23 2011-10-25 Hoffberg Steven M Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
USRE48056E1 (en) 1991-12-23 2020-06-16 Blanding Hovenweep, Llc Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
USRE49387E1 (en) 1991-12-23 2023-01-24 Blanding Hovenweep, Llc Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
US6628821B1 (en) * 1996-05-21 2003-09-30 Interval Research Corporation Canonical correlation analysis of image/control-point location coupling for the automatic location of control points
US6504942B1 (en) * 1998-01-23 2003-01-07 Sharp Kabushiki Kaisha Method of and apparatus for detecting a face-like region and observer tracking display
US6549913B1 (en) * 1998-02-26 2003-04-15 Minolta Co., Ltd. Method for compiling an image database, an image database system, and an image data storage medium
US7130454B1 (en) * 1998-07-20 2006-10-31 Viisage Technology, Inc. Real-time facial recognition and verification system
US7206748B1 (en) * 1998-08-13 2007-04-17 International Business Machines Corporation Multimedia player toolkit for electronic content delivery
US7533806B1 (en) 1998-10-09 2009-05-19 Diebold, Incorporated Reading of image data bearing record for comparison with stored user image in authorizing automated banking machine access
US7900823B1 (en) 1998-10-09 2011-03-08 Diebold, Incorporated Banking system controlled by data bearing records
US7533805B1 (en) * 1998-10-09 2009-05-19 Diebold, Incorporated Data bearing record based capture and correlation of user image data at a card reading banking system machine
US10361802B1 (en) 1999-02-01 2019-07-23 Blanding Hovenweep, Llc Adaptive pattern recognition based control system and method
US7016916B1 (en) * 1999-02-01 2006-03-21 Lg Electronics Inc. Method of searching multimedia data
US20100318523A1 (en) * 1999-02-01 2010-12-16 Lg Electronics Inc. Method of searching multimedia data
US20050262067A1 (en) * 1999-02-01 2005-11-24 Lg Electronics Inc. Method of searching multimedia data
US20100318522A1 (en) * 1999-02-01 2010-12-16 Lg Electronics Inc. Method of searching multimedia data
US7974714B2 (en) 1999-10-05 2011-07-05 Steven Mark Hoffberg Intelligent electronic appliance system and method
WO2001027763A1 (en) * 1999-10-08 2001-04-19 Ivex Corporation Networked digital security system and methods
US7952609B2 (en) 1999-10-08 2011-05-31 Axcess International, Inc. Networked digital security system and methods
US6954859B1 (en) 1999-10-08 2005-10-11 Axcess, Inc. Networked digital security system and methods
US20010005222A1 (en) * 1999-12-24 2001-06-28 Yoshihiro Yamaguchi Identification photo system and image processing method
US7548260B2 (en) * 1999-12-24 2009-06-16 Fujifilm Corporation Identification photo system and image processing method which automatically corrects image data of a person in an identification photo
US8711217B2 (en) 2000-10-24 2014-04-29 Objectvideo, Inc. Video surveillance system employing video primitives
US8564661B2 (en) * 2000-10-24 2013-10-22 Objectvideo, Inc. Video analytic rule detection system and method
US10347101B2 (en) 2000-10-24 2019-07-09 Avigilon Fortress Corporation Video surveillance system employing video primitives
US20100026802A1 (en) * 2000-10-24 2010-02-04 Object Video, Inc. Video analytic rule detection system and method
US9378632B2 (en) 2000-10-24 2016-06-28 Avigilon Fortress Corporation Video surveillance system employing video primitives
US10645350B2 (en) 2000-10-24 2020-05-05 Avigilon Fortress Corporation Video analytic rule detection system and method
US10026285B2 (en) 2000-10-24 2018-07-17 Avigilon Fortress Corporation Video surveillance system employing video primitives
US7664296B2 (en) * 2001-01-31 2010-02-16 Fujifilm Corporation Image recording method and system, image transmitting method, and image recording apparatus
US20020101619A1 (en) * 2001-01-31 2002-08-01 Hisayoshi Tsubaki Image recording method and system, image transmitting method, and image recording apparatus
US20020129251A1 (en) * 2001-03-01 2002-09-12 Yukio Itakura Method and system for individual authentication and digital signature utilizing article having DNA based ID information mark
EP1237327A3 (en) * 2001-03-01 2003-07-02 NTT Data Technology Corporation Method and system for individual authentication and digital signature utilizing article having DNA based ID information mark
US7103200B2 (en) * 2001-03-05 2006-09-05 Robert Hillhouse Method and system for adaptively varying templates to accommodate changes in biometric information
US7372979B2 (en) 2001-03-05 2008-05-13 Activcard Ireland Limited Method and system for adaptively varying templates to accommodate changes in biometric information
US20070110283A1 (en) * 2001-03-05 2007-05-17 Activcard Ireland Limited Method and system for adaptively varying templates to accommodate changes in biometric information
US20020154793A1 (en) * 2001-03-05 2002-10-24 Robert Hillhouse Method and system for adaptively varying templates to accommodate changes in biometric information
US20080062278A1 (en) * 2001-05-09 2008-03-13 Sal Khan Secure Access Camera and Method for Camera Control
US7800687B2 (en) * 2001-05-09 2010-09-21 Sal Khan Secure access camera and method for camera control
US6993162B2 (en) * 2001-06-15 2006-01-31 Eastman Kodak Company Method for authenticating animation
US20030005296A1 (en) * 2001-06-15 2003-01-02 Eastman Kodak Company Method for authenticating animation
US6793128B2 (en) * 2001-06-18 2004-09-21 Hewlett-Packard Development Company, L.P. Face photo storage system
US7239725B2 (en) * 2001-07-27 2007-07-03 Kabushiki Kaisha Toshiba Face image recognition apparatus
US20030185423A1 (en) * 2001-07-27 2003-10-02 Hironori Dobashi Face image recognition apparatus
US20030072489A1 (en) * 2001-08-28 2003-04-17 Sick Ag Method of recognizing a code
US7388984B2 (en) * 2001-08-28 2008-06-17 Sick Ag Method of recognizing a code
US7864988B2 (en) * 2002-01-30 2011-01-04 Samsung Electronics Co., Ltd. Apparatus and method for providing security in a base or mobile station by using detection of face information
US20070014430A1 (en) * 2002-01-30 2007-01-18 Samsung Electronics Co., Ltd. Apparatus and method for providing security in a base or mobile station by using detection of face information
US20030190076A1 (en) * 2002-04-05 2003-10-09 Bruno Delean Vision-based operating method and system
US7945076B2 (en) 2002-04-05 2011-05-17 Identix Incorporated Vision-based operating method and system
US7369685B2 (en) 2002-04-05 2008-05-06 Identix Corporation Vision-based operating method and system
US20090097713A1 (en) * 2002-04-05 2009-04-16 Identix Incorporated Vision-based operating method and system
US20040052418A1 (en) * 2002-04-05 2004-03-18 Bruno Delean Method and apparatus for probabilistic image analysis
US7187786B2 (en) * 2002-04-23 2007-03-06 Samsung Electronics Co., Ltd. Method for verifying users and updating database, and face verification system using the same
US20030198368A1 (en) * 2002-04-23 2003-10-23 Samsung Electronics Co., Ltd. Method for verifying users and updating database, and face verification system using the same
EP1552464B1 (en) 2002-07-09 2017-01-11 Neology, Inc. System and method for providing secure identification solutions
US20040151347A1 (en) * 2002-07-19 2004-08-05 Helena Wisniewski Face recognition system and method therefor
US20060251327A1 (en) * 2002-12-20 2006-11-09 Miroslav Trajkovic Light invariant face recognition
US20060147093A1 (en) * 2003-03-03 2006-07-06 Takashi Sanse ID card generating apparatus, ID card, facial recognition terminal apparatus, facial recognition apparatus and system
US7519236B2 (en) * 2003-04-09 2009-04-14 Arcsoft, Inc. Image retrieval
US20040202385A1 (en) * 2003-04-09 2004-10-14 Min Cheng Image retrieval
US7844076B2 (en) 2003-06-26 2010-11-30 Fotonation Vision Limited Digital image processing using face detection and skin tone information
US7809162B2 (en) 2003-06-26 2010-10-05 Fotonation Vision Limited Digital image processing using face detection information
US8131016B2 (en) 2003-06-26 2012-03-06 DigitalOptics Corporation Europe Limited Digital image processing using face detection information
US7574016B2 (en) 2003-06-26 2009-08-11 Fotonation Vision Limited Digital image processing using face detection information
US7630527B2 (en) 2003-06-26 2009-12-08 Fotonation Ireland Limited Method of improving orientation and color balance of digital images using face detection information
US7634109B2 (en) 2003-06-26 2009-12-15 Fotonation Ireland Limited Digital image processing using face detection information
US8675991B2 (en) 2003-06-26 2014-03-18 DigitalOptics Corporation Europe Limited Modification of post-viewing parameters for digital images using region or feature information
US8989453B2 (en) 2003-06-26 2015-03-24 Fotonation Limited Digital image processing using face detection information
US7565030B2 (en) 2003-06-26 2009-07-21 Fotonation Vision Limited Detecting orientation of digital images using face detection information
US8005265B2 (en) 2003-06-26 2011-08-23 Tessera Technologies Ireland Limited Digital image processing using face detection information
US7684630B2 (en) 2003-06-26 2010-03-23 Fotonation Vision Limited Digital image adjustable compression and resolution using face detection information
US7693311B2 (en) 2003-06-26 2010-04-06 Fotonation Vision Limited Perfecting the effect of flash within an image acquisition devices using face detection
US7702136B2 (en) 2003-06-26 2010-04-20 Fotonation Vision Limited Perfecting the effect of flash within an image acquisition devices using face detection
US8498452B2 (en) 2003-06-26 2013-07-30 DigitalOptics Corporation Europe Limited Digital image processing using face detection information
US9692964B2 (en) 2003-06-26 2017-06-27 Fotonation Limited Modification of post-viewing parameters for digital images using image region or feature information
US8126208B2 (en) 2003-06-26 2012-02-28 DigitalOptics Corporation Europe Limited Digital image processing using face detection information
US8948468B2 (en) 2003-06-26 2015-02-03 Fotonation Limited Modification of viewing parameters for digital images using face detection information
US7844135B2 (en) 2003-06-26 2010-11-30 Tessera Technologies Ireland Limited Detecting orientation of digital images using face detection information
US7616233B2 (en) 2003-06-26 2009-11-10 Fotonation Vision Limited Perfecting of digital image capture parameters within acquisition devices using face detection
US7848549B2 (en) 2003-06-26 2010-12-07 Fotonation Vision Limited Digital image processing using face detection information
US7853043B2 (en) 2003-06-26 2010-12-14 Tessera Technologies Ireland Limited Digital image processing using face detection information
US8055090B2 (en) 2003-06-26 2011-11-08 DigitalOptics Corporation Europe Limited Digital image processing using face detection information
US7440593B1 (en) 2003-06-26 2008-10-21 Fotonation Vision Limited Method of improving orientation and color balance of digital images using face detection information
US8326066B2 (en) 2003-06-26 2012-12-04 DigitalOptics Corporation Europe Limited Digital image adjustable compression and resolution using face detection information
US7860274B2 (en) 2003-06-26 2010-12-28 Fotonation Vision Limited Digital image processing using face detection information
US9053545B2 (en) 2003-06-26 2015-06-09 Fotonation Limited Modification of viewing parameters for digital images using face detection information
US8224108B2 (en) 2003-06-26 2012-07-17 DigitalOptics Corporation Europe Limited Digital image processing using face detection information
US9129381B2 (en) 2003-06-26 2015-09-08 Fotonation Limited Modification of post-viewing parameters for digital images using image region or feature information
US7912245B2 (en) 2003-06-26 2011-03-22 Tessera Technologies Ireland Limited Method of improving orientation and color balance of digital images using face detection information
US8330831B2 (en) 2003-08-05 2012-12-11 DigitalOptics Corporation Europe Limited Method of gathering visual meta data using a reference image
US20050030151A1 (en) * 2003-08-07 2005-02-10 Abhishek Singh Secure authentication of a user to a system and secure operation thereafter
US7084734B2 (en) 2003-08-07 2006-08-01 Georgia Tech Research Corporation Secure authentication of a user to a system and secure operation thereafter
US20050129306A1 (en) * 2003-12-12 2005-06-16 Xianglin Wang Method and apparatus for image deinterlacing using neural networks
US20050213796A1 (en) * 2004-03-12 2005-09-29 Matsushita Electric Industrial Co., Ltd. Multi-identification method and multi-identification apparatus
US20070247526A1 (en) * 2004-04-30 2007-10-25 Flook Ronald A Camera Tamper Detection
US20050270948A1 (en) * 2004-06-02 2005-12-08 Funai Electric Co., Ltd. DVD recorder and recording and reproducing device
US9569678B2 (en) * 2004-08-20 2017-02-14 Morphotrust Usa, Llc Method and system to authenticate an object
WO2006039003A2 (en) * 2004-08-20 2006-04-13 Viisage Technology, Inc. Method and system to authenticate an object
US20140226874A1 (en) * 2004-08-20 2014-08-14 Morphotrust Usa, Inc. Method And System To Authenticate An Object
US8402040B2 (en) * 2004-08-20 2013-03-19 Morphotrust Usa, Inc. Method and system to authenticate an object
WO2006039003A3 (en) * 2004-08-20 2008-10-09 Viisage Technology Inc Method and system to authenticate an object
US20060074986A1 (en) * 2004-08-20 2006-04-06 Viisage Technology, Inc. Method and system to authenticate an object
US7735728B2 (en) * 2004-10-13 2010-06-15 Skidata Ag Access control system
US20060167833A1 (en) * 2004-10-13 2006-07-27 Kurt Wallerstorfer Access control system
US8135184B2 (en) 2004-10-28 2012-03-13 DigitalOptics Corporation Europe Limited Method and apparatus for detection and correction of multiple image defects within digital images using preview or other reference images
US7953251B1 (en) 2004-10-28 2011-05-31 Tessera Technologies Ireland Limited Method and apparatus for detection and correction of flash-induced eye defects within digital images using preview or other reference images
US8320641B2 (en) 2004-10-28 2012-11-27 DigitalOptics Corporation Europe Limited Method and apparatus for red-eye detection using preview or other reference images
US20100002912A1 (en) * 2005-01-10 2010-01-07 Solinsky James C Facial feature evaluation based on eye location
US7809171B2 (en) 2005-01-10 2010-10-05 Battelle Memorial Institute Facial feature evaluation based on eye location
US20060193520A1 (en) * 2005-02-28 2006-08-31 Takeshi Mita Object detection apparatus, learning apparatus, object detection system, object detection method and object detection program
US7962629B2 (en) 2005-06-17 2011-06-14 Tessera Technologies Ireland Limited Method for establishing a paired connection between media devices
US8593542B2 (en) 2005-12-27 2013-11-26 DigitalOptics Corporation Europe Limited Foreground/background separation using reference images
US8682097B2 (en) 2006-02-14 2014-03-25 DigitalOptics Corporation Europe Limited Digital image enhancement with reference images
US7965875B2 (en) 2006-06-12 2011-06-21 Tessera Technologies Ireland Limited Advances in extending the AAM techniques from grayscale to color images
US8055029B2 (en) 2006-08-11 2011-11-08 DigitalOptics Corporation Europe Limited Real-time face tracking in a digital image acquisition device
US7469055B2 (en) 2006-08-11 2008-12-23 Fotonation Vision Limited Real-time face tracking in a digital image acquisition device
US8270674B2 (en) 2006-08-11 2012-09-18 DigitalOptics Corporation Europe Limited Real-time face tracking in a digital image acquisition device
US7864990B2 (en) 2006-08-11 2011-01-04 Tessera Technologies Ireland Limited Real-time face tracking in a digital image acquisition device
US7620218B2 (en) 2006-08-11 2009-11-17 Fotonation Ireland Limited Real-time face tracking with reference images
US7916897B2 (en) 2006-08-11 2011-03-29 Tessera Technologies Ireland Limited Face tracking for controlling imaging parameters
US20080037840A1 (en) * 2006-08-11 2008-02-14 Fotonation Vision Limited Real-Time Face Tracking in a Digital Image Acquisition Device
US7460694B2 (en) 2006-08-11 2008-12-02 Fotonation Vision Limited Real-time face tracking in a digital image acquisition device
US7460695B2 (en) 2006-08-11 2008-12-02 Fotonation Vision Limited Real-time face tracking in a digital image acquisition device
US20080037838A1 (en) * 2006-08-11 2008-02-14 Fotonation Vision Limited Real-Time Face Tracking in a Digital Image Acquisition Device
US8385610B2 (en) 2006-08-11 2013-02-26 DigitalOptics Corporation Europe Limited Face tracking for controlling imaging parameters
US7403643B2 (en) 2006-08-11 2008-07-22 Fotonation Vision Limited Real-time face tracking in a digital image acquisition device
US8509496B2 (en) 2006-08-11 2013-08-13 DigitalOptics Corporation Europe Limited Real-time face tracking with reference images
US20080037839A1 (en) * 2006-08-11 2008-02-14 Fotonation Vision Limited Real-Time Face Tracking in a Digital Image Acquisition Device
US8422739B2 (en) 2006-08-11 2013-04-16 DigitalOptics Corporation Europe Limited Real-time face tracking in a digital image acquisition device
US20090003652A1 (en) * 2006-08-11 2009-01-01 Fotonation Ireland Limited Real-time face tracking with reference images
US8050465B2 (en) 2006-08-11 2011-11-01 DigitalOptics Corporation Europe Limited Real-time face tracking in a digital image acquisition device
US8055067B2 (en) 2007-01-18 2011-11-08 DigitalOptics Corporation Europe Limited Color segmentation
US8494233B2 (en) 2007-01-30 2013-07-23 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and program
US8165399B2 (en) * 2007-01-30 2012-04-24 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and program
CN101236600B (en) * 2007-01-30 2013-04-03 佳能株式会社 Image processing apparatus and image processing method
US20080181508A1 (en) * 2007-01-30 2008-07-31 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and program
US8224039B2 (en) 2007-02-28 2012-07-17 DigitalOptics Corporation Europe Limited Separating a directional lighting variability in statistical face modelling based on texture space decomposition
US8509561B2 (en) 2007-02-28 2013-08-13 DigitalOptics Corporation Europe Limited Separating directional lighting variability in statistical face modelling based on texture space decomposition
US8503800B2 (en) 2007-03-05 2013-08-06 DigitalOptics Corporation Europe Limited Illumination detection using classifier chains
US8923564B2 (en) 2007-03-05 2014-12-30 DigitalOptics Corporation Europe Limited Face searching and detection in a digital image acquisition device
US9224034B2 (en) 2007-03-05 2015-12-29 Fotonation Limited Face searching and detection in a digital image acquisition device
US8649604B2 (en) 2007-03-05 2014-02-11 DigitalOptics Corporation Europe Limited Face searching and detection in a digital image acquisition device
US8494232B2 (en) 2007-05-24 2013-07-23 DigitalOptics Corporation Europe Limited Image processing method and apparatus
US7916971B2 (en) 2007-05-24 2011-03-29 Tessera Technologies Ireland Limited Image processing method and apparatus
US8515138B2 (en) 2007-05-24 2013-08-20 DigitalOptics Corporation Europe Limited Image processing method and apparatus
US9767539B2 (en) 2007-06-21 2017-09-19 Fotonation Limited Image capture device with contemporaneous image correction mechanism
US8213737B2 (en) 2007-06-21 2012-07-03 DigitalOptics Corporation Europe Limited Digital image enhancement with reference images
US10733472B2 (en) 2007-06-21 2020-08-04 Fotonation Limited Image capture device with contemporaneous image correction mechanism
US8896725B2 (en) 2007-06-21 2014-11-25 Fotonation Limited Image capture device with contemporaneous reference image capture mechanism
US8155397B2 (en) 2007-09-26 2012-04-10 DigitalOptics Corporation Europe Limited Face tracking in a camera processor
US8494286B2 (en) 2008-02-05 2013-07-23 DigitalOptics Corporation Europe Limited Face detection in mid-shot digital images
US8184910B2 (en) * 2008-03-18 2012-05-22 Toshiba Tec Kabushiki Kaisha Image recognition device, image recognition method, and image scanning apparatus having image recognition device
US20090238472A1 (en) * 2008-03-18 2009-09-24 Kabushiki Kaisha Toshiba Image recognition device, image recognition method, and image scanning apparatus having image recognition device
US8243182B2 (en) 2008-03-26 2012-08-14 DigitalOptics Corporation Europe Limited Method of making a digital camera image of a scene including the camera user
US7855737B2 (en) 2008-03-26 2010-12-21 Fotonation Ireland Limited Method of making a digital camera image of a scene including the camera user
US20090278655A1 (en) * 2008-05-06 2009-11-12 The Abraham Joshua Heschel School Method for inhibiting egress from a chamber containing contaminants
US9836726B2 (en) 2008-07-14 2017-12-05 Jumio Corporation Internet payment system using credit card imaging
US10558967B2 (en) 2008-07-14 2020-02-11 Jumio Corporation Mobile phone payment system using integrated camera credit card reader
US9269010B2 (en) 2008-07-14 2016-02-23 Jumio Inc. Mobile phone payment system using integrated camera credit card reader
US9305230B2 (en) 2008-07-14 2016-04-05 Jumio Inc. Internet payment system using credit card imaging
US9053355B2 (en) 2008-07-23 2015-06-09 Qualcomm Technologies, Inc. System and method for face tracking
US8855360B2 (en) 2008-07-23 2014-10-07 Qualcomm Technologies, Inc. System and method for face tracking
US20100021008A1 (en) * 2008-07-23 2010-01-28 Zoran Corporation System and Method for Face Tracking
US9007480B2 (en) 2008-07-30 2015-04-14 Fotonation Limited Automatic face and skin beautification using face detection
US8345114B2 (en) 2008-07-30 2013-01-01 DigitalOptics Corporation Europe Limited Automatic face and skin beautification using face detection
US8384793B2 (en) 2008-07-30 2013-02-26 DigitalOptics Corporation Europe Limited Automatic face and skin beautification using face detection
US9311546B2 (en) * 2008-11-28 2016-04-12 Nottingham Trent University Biometric identity verification for access control using a trained statistical classifier
GB2465782B (en) * 2008-11-28 2016-04-13 Univ Nottingham Trent Biometric identity verification
US10257191B2 (en) 2008-11-28 2019-04-09 Nottingham Trent University Biometric identity verification
US20110285504A1 (en) * 2008-11-28 2011-11-24 Sergio Grau Puerto Biometric identity verification
US8379917B2 (en) 2009-10-02 2013-02-19 DigitalOptics Corporation Europe Limited Face recognition performance using additional image features
US10032068B2 (en) 2009-10-02 2018-07-24 Fotonation Limited Method of making a digital camera image of a first scene with a superimposed second scene
US8351662B2 (en) 2010-09-16 2013-01-08 Seiko Epson Corporation System and method for face verification using video sequence
US20130329971A1 (en) * 2010-12-10 2013-12-12 Nagravision S.A. Method and device to speed up face recognition
US10192101B2 (en) 2010-12-10 2019-01-29 Nagravision S.A. Method and device to speed up face recognition
US11783561B2 (en) 2010-12-10 2023-10-10 Nagravision S.A. Method and device to speed up face recognition
US9740913B2 (en) * 2010-12-10 2017-08-22 Nagravision S.A. Method and device to speed up face recognition
US10909350B2 (en) 2010-12-10 2021-02-02 Nagravision S.A. Method and device to speed up face recognition
US9990528B2 (en) 2011-01-20 2018-06-05 Daon Holdings Limited Methods and systems for capturing biometric data
US9400915B2 (en) 2011-01-20 2016-07-26 Daon Holdings Limited Methods and systems for capturing biometric data
US10235550B2 (en) 2011-01-20 2019-03-19 Daon Holdings Limited Methods and systems for capturing biometric data
US9519818B2 (en) 2011-01-20 2016-12-13 Daon Holdings Limited Methods and systems for capturing biometric data
US8064645B1 (en) 2011-01-20 2011-11-22 Daon Holdings Limited Methods and systems for authenticating users
US9519820B2 (en) 2011-01-20 2016-12-13 Daon Holdings Limited Methods and systems for authenticating users
US8548206B2 (en) 2011-01-20 2013-10-01 Daon Holdings Limited Methods and systems for capturing biometric data
US9679193B2 (en) 2011-01-20 2017-06-13 Daon Holdings Limited Methods and systems for capturing biometric data
US9519821B2 (en) 2011-01-20 2016-12-13 Daon Holdings Limited Methods and systems for capturing biometric data
US9202102B1 (en) 2011-01-20 2015-12-01 Daon Holdings Limited Methods and systems for capturing biometric data
US8085992B1 (en) 2011-01-20 2011-12-27 Daon Holdings Limited Methods and systems for capturing biometric data
US10607054B2 (en) 2011-01-20 2020-03-31 Daon Holdings Limited Methods and systems for capturing biometric data
US9112858B2 (en) 2011-01-20 2015-08-18 Daon Holdings Limited Methods and systems for capturing biometric data
US9298999B2 (en) 2011-01-20 2016-03-29 Daon Holdings Limited Methods and systems for capturing biometric data
US8457370B2 (en) 2011-01-20 2013-06-04 Daon Holdings Limited Methods and systems for authenticating users with captured palm biometric data
US8811726B2 (en) * 2011-06-02 2014-08-19 Kriegman-Belhumeur Vision Technologies, Llc Method and system for localizing parts of an object in an image for computer vision applications
US20120308124A1 (en) * 2011-06-02 2012-12-06 Kriegman-Belhumeur Vision Technologies, Llc Method and System For Localizing Parts of an Object in an Image For Computer Vision Applications
US20130156276A1 (en) * 2011-12-14 2013-06-20 Hon Hai Precision Industry Co., Ltd. Electronic device with a function of searching images based on facial feature and method
US8634602B2 (en) * 2011-12-14 2014-01-21 Fu Tai Hua Industry (Shenzhen) Co., Ltd. Electronic device with a function of searching images based on facial feature and method
US20130198079A1 (en) * 2012-01-27 2013-08-01 Daniel Mattes Verification of Online Transactions
US10552697B2 (en) 2012-02-03 2020-02-04 Jumio Corporation Systems, devices, and methods for identifying user data
US8965046B2 (en) 2012-03-16 2015-02-24 Qualcomm Technologies, Inc. Method, apparatus, and manufacture for smiling face detection
US9195884B2 (en) 2012-03-16 2015-11-24 Qualcomm Technologies, Inc. Method, apparatus, and manufacture for smiling face detection
US20160117549A1 (en) * 2012-09-28 2016-04-28 Ncr Corporation Methods of processing data from multiple image sources to provide normalized confidence levels for use in improving performance of a recognition processor
US10152629B2 (en) * 2012-09-28 2018-12-11 Ncr Corporation Methods of processing data from multiple image sources to provide normalized confidence levels for use in improving performance of a recognition processor
US20140093156A1 (en) * 2012-09-28 2014-04-03 Ncr Corporation Methods of processing data from multiple image sources to provide normalized confidence levels for use in improving performance of a recognition processor
US9208378B2 (en) * 2012-09-28 2015-12-08 Ncr Corporation Methods of processing data from multiple image sources to provide normalized confidence levels for use in improving performance of a recognition processor
US9760769B2 (en) * 2012-09-28 2017-09-12 Ncr Corporation Methods of processing data from multiple image sources to provide normalized confidence levels for use in improving performance of a recognition processor
US20160110596A1 (en) * 2012-09-28 2016-04-21 Ncr Corporation Methods of processing data from multiple image sources to provide normalized confidence levels for use in improving performance of a recognition processor
US9530047B1 (en) * 2013-11-30 2016-12-27 Beijing Sensetime Technology Development Co., Ltd. Method and system for face image recognition
US10572729B2 (en) 2015-02-03 2020-02-25 Jumio Corporation Systems and methods for imaging identification information
US11468696B2 (en) 2015-02-03 2022-10-11 Jumio Corporation Systems and methods for imaging identification information
US9641752B2 (en) 2015-02-03 2017-05-02 Jumio Corporation Systems and methods for imaging identification information
US10176371B2 (en) 2015-02-03 2019-01-08 Jumio Corporation Systems and methods for imaging identification information
US10776620B2 (en) 2015-02-03 2020-09-15 Jumio Corporation Systems and methods for imaging identification information
US20160373437A1 (en) * 2015-02-15 2016-12-22 Beijing Kuangshi Technology Co., Ltd. Method and system for authenticating liveness face, and computer program product thereof
US9985963B2 (en) * 2015-02-15 2018-05-29 Beijing Kuangshi Technology Co., Ltd. Method and system for authenticating liveness face, and computer program product thereof
US11749025B2 (en) * 2015-10-16 2023-09-05 Magic Leap, Inc. Eye pose identification using eye features
US20220004758A1 (en) * 2015-10-16 2022-01-06 Magic Leap, Inc. Eye pose identification using eye features
US10339367B2 (en) 2016-03-29 2019-07-02 Microsoft Technology Licensing, Llc Recognizing a face and providing feedback on the face-recognition process
US10997445B2 (en) 2016-09-30 2021-05-04 Alibaba Group Holding Limited Facial recognition-based authentication
US20180096212A1 (en) * 2016-09-30 2018-04-05 Alibaba Group Holding Limited Facial recognition-based authentication
US10762368B2 (en) * 2016-09-30 2020-09-01 Alibaba Group Holding Limited Facial recognition-based authentication
US11551482B2 (en) * 2016-09-30 2023-01-10 Alibaba Group Holding Limited Facial recognition-based authentication
US11861937B2 (en) * 2017-03-23 2024-01-02 Samsung Electronics Co., Ltd. Facial verification method and apparatus
US11256902B2 (en) * 2017-04-20 2022-02-22 Hangzhou Hikvision Digital Technology Co., Ltd. People-credentials comparison authentication method, system and camera
EP3614300A4 (en) * 2017-04-20 2020-04-22 Hangzhou Hikvision Digital Technology Co., Ltd. People-credentials comparison authentication method, system and camera
US10783394B2 (en) * 2017-06-20 2020-09-22 Nvidia Corporation Equivariant landmark transformation for landmark localization
US20180365512A1 (en) * 2017-06-20 2018-12-20 Nvidia Corporation Equivariant landmark transformation for landmark localization
US20190095704A1 (en) * 2017-09-28 2019-03-28 Nec Laboratories America, Inc. Long-tail large scale face recognition by non-linear feature level domain adaption
US10796135B2 (en) * 2017-09-28 2020-10-06 Nec Corporation Long-tail large scale face recognition by non-linear feature level domain adaptation
US20190095700A1 (en) * 2017-09-28 2019-03-28 Nec Laboratories America, Inc. Long-tail large scale face recognition by non-linear feature level domain adaption
US10796134B2 (en) * 2017-09-28 2020-10-06 Nec Corporation Long-tail large scale face recognition by non-linear feature level domain adaptation
US10853627B2 (en) * 2017-09-28 2020-12-01 Nec Corporation Long-tail large scale face recognition by non-linear feature level domain adaptation
US20190095705A1 (en) * 2017-09-28 2019-03-28 Nec Laboratories America, Inc. Long-tail large scale face recognition by non-linear feature level domain adaption
US10579785B2 (en) * 2017-09-29 2020-03-03 General Electric Company Automatic authentification for MES system using facial recognition
US10684681B2 (en) * 2018-06-11 2020-06-16 Fotonation Limited Neural network image processing apparatus
US11314324B2 (en) 2018-06-11 2022-04-26 Fotonation Limited Neural network image processing apparatus
US20190377409A1 (en) * 2018-06-11 2019-12-12 Fotonation Limited Neural network image processing apparatus
US11699293B2 (en) 2018-06-11 2023-07-11 Fotonation Limited Neural network image processing apparatus
US10747990B2 (en) 2018-07-16 2020-08-18 Alibaba Group Holding Limited Payment method, apparatus, and system
US10769417B2 (en) 2018-07-16 2020-09-08 Alibaba Group Holding Limited Payment method, apparatus, and system
WO2020018416A1 (en) * 2018-07-16 2020-01-23 Alibaba Group Holding Limited Payment method, apparatus, and system
CN109118621A (en) * 2018-07-24 2019-01-01 石数字技术成都有限公司 Face registration system for face recognition access control and its application in access control
CN109712104A (en) * 2018-11-26 2019-05-03 深圳艺达文化传媒有限公司 Method for displaying a cartoon avatar in a selfie video, and related product
US20200202515A1 (en) * 2018-12-21 2020-06-25 General Electric Company Systems and methods for deep learning based automated spine registration and label propagation
US11080849B2 (en) * 2018-12-21 2021-08-03 General Electric Company Systems and methods for deep learning based automated spine registration and label propagation
EP3819812A1 (en) * 2019-11-08 2021-05-12 Axis AB A method of object re-identification

Similar Documents

Publication Publication Date Title
US6128398A (en) System, method and application for the recognition, verification and similarity ranking of facial or other object patterns
JP4543423B2 (en) Method and apparatus for automatic object recognition and collation
US7130454B1 (en) Real-time facial recognition and verification system
Gao et al. Face recognition using line edge map
US5901244A (en) Feature extraction system and face image recognition system
EP1460580A1 (en) Face meta-data creation and face similarity calculation
US5450504A (en) Method for finding a most likely matching of a target facial image in a data base of facial images
US7031499B2 (en) Object recognition system
US20020136448A1 (en) Real-time facial recognition and verification system
Lu et al. A survey of face detection, extraction and recognition
US20190205608A1 (en) Method and apparatus for safety monitoring of a body of water
CN113449704B (en) Face recognition model training method and device, electronic equipment and storage medium
KR100445800B1 (en) Face-image recognition method of similarity measure using correlation
Lee et al. A new face authentication system for memory-constrained devices
US20060056667A1 (en) Identifying faces from multiple images acquired from widely separated viewpoints
Kekre et al. Face and gender recognition using principal component analysis
Monwar et al. A real-time face recognition approach from video sequence using skin color model and eigenface method
Orts Face Recognition Techniques
Zhou et al. Eye localization based on face alignment
Baker et al. A theory of pattern rejection
Gaharwar et al. Face detection and real time system using DRLBP
Siregar et al. Identity recognition of people through face image using principal component analysis
Mastronardi et al. Geodesic Distances and Hidden Markov Models for the 3D Face Recognition
Bayana Gender classification using facial components.
Mohamed Product of likelihood ratio scores fusion of dynamic face and on-line signature based biometrics verification application systems

Legal Events

Date Code Title Description
AS Assignment
Owner name: U.S. VENTURES L.P., CAYMAN ISLANDS
Free format text: SECURITY AGREEMENT;ASSIGNOR:ETRUE.COM, INC.;REEL/FRAME:012110/0859
Effective date: 20011029

AS Assignment
Owner name: VIISAGE TECHNOLOGY, INC., MASSACHUSETTS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ETRUE.COM, INC. F/K/A MIROS, INC.;REEL/FRAME:012991/0170
Effective date: 20020531

REMI Maintenance fee reminder mailed

LAPS Lapse for failure to pay maintenance fees

STCH Information on status: patent discontinuation
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee
Effective date: 20041003