CA2199040A1 - Automated, non-invasive iris recognition system and method - Google Patents
Automated, non-invasive iris recognition system and method
Info
- Publication number
- CA2199040A1
- Authority
- CA
- Canada
- Prior art keywords
- eye
- iris
- image
- user
- spatial
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Links
- 238000000034 method Methods 0.000 title claims description 35
- 210000000554 iris Anatomy 0.000 claims abstract description 143
- 210000000744 eyelid Anatomy 0.000 claims description 28
- 230000002197 limbic effect Effects 0.000 claims description 25
- 238000012545 processing Methods 0.000 claims description 18
- 238000001914 filtration Methods 0.000 claims description 10
- 230000008859 change Effects 0.000 claims description 6
- 230000001815 facial effect Effects 0.000 claims description 5
- 238000012886 linear function Methods 0.000 claims description 4
- 238000013507 mapping Methods 0.000 claims description 4
- 230000009466 transformation Effects 0.000 claims description 4
- 238000011084 recovery Methods 0.000 claims description 3
- 238000003384 imaging method Methods 0.000 claims 2
- 238000003672 processing method Methods 0.000 claims 2
- 238000013459 approach Methods 0.000 description 16
- 210000003128 head Anatomy 0.000 description 12
- 230000004807 localization Effects 0.000 description 10
- 238000012986 modification Methods 0.000 description 9
- 230000004048 modification Effects 0.000 description 9
- 238000000354 decomposition reaction Methods 0.000 description 5
- 230000033001 locomotion Effects 0.000 description 4
- 230000008569 process Effects 0.000 description 4
- 210000001747 pupil Anatomy 0.000 description 4
- 230000003595 spectral effect Effects 0.000 description 4
- 238000003708 edge detection Methods 0.000 description 3
- 230000003287 optical effect Effects 0.000 description 3
- 230000000007 visual effect Effects 0.000 description 3
- 238000012935 Averaging Methods 0.000 description 2
- 238000004458 analytical method Methods 0.000 description 2
- 230000008901 benefit Effects 0.000 description 2
- 238000007906 compression Methods 0.000 description 2
- 230000006835 compression Effects 0.000 description 2
- 210000004087 cornea Anatomy 0.000 description 2
- 238000010586 diagram Methods 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 230000000877 morphologic effect Effects 0.000 description 2
- 230000010287 polarization Effects 0.000 description 2
- XUIMIQQOPSSXEZ-UHFFFAOYSA-N Silicon Chemical compound [Si] XUIMIQQOPSSXEZ-UHFFFAOYSA-N 0.000 description 1
- 238000013528 artificial neural network Methods 0.000 description 1
- 230000015572 biosynthetic process Effects 0.000 description 1
- 230000000903 blocking effect Effects 0.000 description 1
- 238000013144 data compression Methods 0.000 description 1
- 230000003111 delayed effect Effects 0.000 description 1
- 230000002708 enhancing effect Effects 0.000 description 1
- 210000000720 eyelash Anatomy 0.000 description 1
- 210000004209 hair Anatomy 0.000 description 1
- 238000005286 illumination Methods 0.000 description 1
- 238000013450 outlier detection Methods 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 229910052710 silicon Inorganic materials 0.000 description 1
- 239000010703 silicon Substances 0.000 description 1
- 230000002123 temporal effect Effects 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
- 238000012549 training Methods 0.000 description 1
- 238000000844 transformation Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C9/00—Individual registration on entry or exit
- G07C9/30—Individual registration on entry or exit not involving the use of a pass
- G07C9/32—Individual registration on entry or exit not involving the use of a pass in combination with an identity check
- G07C9/37—Individual registration on entry or exit not involving the use of a pass in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/245—Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
Abstract
Iris recognition is achieved by iris acquisition that permits a user to self-position his or her eye (216) into an imager's (200) field of view without the need for any physical contact, spatially locating the data defining that portion of a digitized video image of the user's eye that defines solely the iris thereof without any initial spatial condition of the iris being provided, and pattern matching the spatially located data defining the iris of the user's eye with stored data defining a model iris by employing normalized spatial correlation for first comparing, at each of a plurality of spatial scales, each of distinctive spatial characteristics of the respective irises that are spatially registered with one another to quantitatively determine, at each of the plurality of spatial scales, a goodness value of match at that spatial scale, and then judging whether or not the pattern which manifests solely the iris of the user's eye matches the digital data which manifests solely the model iris in accordance with a certain combination of the quantitatively-determined goodness values of match at each of said plurality of spatial scales.
Description
AUTOMATED, NON-INVASIVE IRIS RECOGNITION SYSTEM AND METHOD
The United States Government has rights in this invention under a government contract.
The prior art includes various technologies for uniquely identifying an individual person in accordance with an examination of particular attributes of either the person's interior or exterior eye. The prior art also includes a technology for an eye-tracking image pickup apparatus for separating noise from feature portions, such as that disclosed in U.S. patent 5,016,282, issued to Tomono et al. on May 14, 1991. One of these prior-art technologies involves the visual examination of the particular attributes of the exterior of the iris of at least one of the person's eyes. In this regard, reference is made to U.S. patent 4,641,349, issued to Flom et al. on February 3, 1987, U.S. patent 5,291,560, issued to Daugman on March 1, 1994, and to Daugman's article "High Confidence Visual Recognition of Persons by a Test of Statistical Independence", which appears on pages 1148-1161 of the IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 15, No. 11, November 1993. As made clear by the aforesaid patents and article, the visible texture of a person's iris can be used to distinguish one person from another with great accuracy. Thus, iris recognition may be used for such purposes as controlling access to a secure facility or to an Automated Transaction Machine (ATM) for dispensing cash, by way of examples. An iris recognition system involves the use of an imager to video image the iris of each person attempting access and computer-vision image processing means for comparing this iris video image with a reference iris image on file in a database. For instance, the person attempting access may first enter a personal identification number (PIN), thereby permitting the video image of the iris of that person to be associated with his or her reference iris image on file. In addition, an iris recognition system is useful for such purposes as medical diagnostics in the examination of the exterior eye.
From a practical point of view, there are problems with prior-art iris recognition systems and methods.
First, previous approaches to acquiring high quality images of the iris of the eye have: (i) an invasive positioning device (e.g., a head rest or bite bar) serving to bring the subject of interest into a known standard configuration; (ii) a controlled light source providing standardized illumination of the eye; and (iii) an imager serving to capture the positioned and illuminated eye. There are a number of limitations with this standard setup, including: (a) users find the physical contact required for positioning to be unappealing, and (b) the illumination level required by these previous approaches for the capture of good quality, high contrast images can be annoying to the user.
Second, previous approaches to localizing the iris in images of the eye have employed parameterized models of the iris. The parameters of these models are iteratively fit to an image of the eye that has been enhanced so as to highlight regions corresponding to the iris boundary. The complexity of the model varies from concentric circles that delimit the inner and outer boundaries of the iris to more elaborate models involving the effects of partially occluding eyelids. The methods used to enhance the iris boundaries include gradient-based edge detection as well as morphological filtering. The chief limitations of these approaches include their need for good initial conditions that serve as seeds for the iterative fitting process, as well as extensive computational expense.
Third, previous approaches to pattern match a localized iris data image derived from the video image of a person attempting to gain access with that of one or more reference localized iris data images on file in a database provide reasonable discrimination between these iris data images, but require extensive computational expense.
The invention is directed to an improved system and method that provides a solution to the disadvantages associated with one or more of the aforesaid three approaches of prior-art iris recognition systems and methods.
The solution to the first of the aforesaid three approaches comprises a non-invasive alignment mechanism that may be implemented by a larger first edge and a smaller second edge having geometrically similar shapes that are substantially centered about and spaced at different distances from an imager lens, to permit a user to self-position his or her eye into the imager's field of view without the need for any physical contact with the system, by maneuvering his or her eye to that point in space where, due to perspective, the smaller edge substantially totally occludes the larger edge.
The solution to the second of the aforesaid three approaches comprises delimiting digital data to that portion of a digitized image of the eye of an individual that defines solely the iris of the eye of the individual by image-filtering at least one of the limbic boundary of the iris, the pupillary boundary of said iris, and the boundaries of said eye's upper and lower eyelids to derive an enhanced image thereof, and then histogramming the enhanced image by means that embody a voting scheme. This results in the recovery of the iris boundaries without requiring knowledge of any initial conditions other than the digital data representative of the individual's eye.
The solution to the third of the aforesaid three approaches comprises a pattern-matching technique for use in providing automated iris recognition for security access control. The pattern-matching technique, which is responsive to first digital data defining a digitized image of solely the iris of the eye of a certain individual attempting access and previously stored second digital data of a digitized image that defines solely the iris of the eye of a specified individual, employs normalized spatial correlation for first comparing, at each of a plurality of spatial scales, each of distinctive spatial characteristics of the respective irises of the given individual and the specified individual that are spatially registered with one another, to quantitatively determine, at each of the plurality of spatial scales, a goodness value of match at that spatial scale. Whether or not the pattern of the digital data which manifests solely the iris of said eye of the given individual matches the digital data which manifests solely the iris of an eye of the specified individual is judged in accordance with a certain combination of the quantitatively-determined goodness values of match at each of the plurality of spatial scales.
The teachings of the invention can be readily understood by considering the following detailed description in conjunction with the accompanying drawings, in which:
Fig. 1 is a functional block diagram of an automated, non-invasive iris recognition system incorporating the principles of the invention;
Fig. 2 illustrates an embodiment of iris acquisition means incorporating principles of the invention;
Figs. 2a and 2b together illustrate a modification of the iris acquisition means of Fig. 2 for enhancing the embodiment thereof; and
Fig. 3 illustrates the flow of computational steps employed by the invention for automatically processing an input image of an iris to provide complete iris localization.
In Fig. 1, an automated, non-invasive iris recognition system comprises iris acquisition means 100 (shown in more detail in Fig. 2) for deriving an input image, typically a video image, of the iris of a person (hereafter referred to as the "user") attempting to be recognized by the system as being a certain predetermined person; iris localization means 102 (employing the computational steps shown in Fig. 3) for automatically processing an input image of an iris to provide complete localization of the video input image of the iris applied thereto from acquisition means 100; and pattern matching means 104 for automatically comparing the pattern of the localized iris information applied thereto from means 102 with the pattern of a stored model iris 106 of the certain predetermined person, and concluding with high accuracy whether the user is, in fact, the certain predetermined person.
Acquisition means 100, as shown in Fig. 2, comprises imager 200, such as a video camera, an array of light sources 202, diffuser 204, circular polarizer 206, larger square edge 208, smaller square edge 210, and image frame grabber 212.
Imager 200 is typically a low light level video camera, such as a silicon intensified target (SIT) camera, having an optical component comprising a telephoto/macro lens 214, which points through a hole in the center of diffuser 204 so that lens 214 does not interfere with imager 200 obtaining a clear image. Lens 214 permits a high resolution image to be obtained of an eye 216 of the user, who is positioned a substantial distance in front of lens 214, so that extreme proximity between eye 216 and imager 200 is not required.
Light from the array of light sources 202, which surround imager 200, passes through diffuser 204 and polarizer 206 to illuminate an eye 216 of the user, who is positioned in front of polarizer 206. Diffuser 204 is a diffusing panel that operates as a first filter which serves the purposes of both providing uniform illumination of eye 216 and integrating radiant energy over a wide region at eye 216, in order to allow for an amount of light intensity to be distributed across the user's view that would be annoying if the same energy was concentrated in a single point source. Polarizer 206, which is situated in front of lens 214, operates as a second filter which ameliorates the effects of specular reflection at the cornea that would otherwise obfuscate the underlying structure of eye 216. More specifically, light emerging from polarizer 206 will have a particular sense of rotation. When this light hits a specularly reflecting surface (e.g., the cornea), the light that is reflected back will still be polarized, but with a reversed sense. This reversed-sense light will not be passed back through polarizer 206 and is thereby blocked from the view of imager 200. However, light hitting diffusely reflecting parts of the eye (e.g., the iris) will scatter the impinging light, and this light will be passed back through polarizer 206 and subsequently be available for image formation. It should be noted that, strictly speaking, circular polarization is accomplished via linear polarization followed by a quarter wave retarder; therefore, it is necessarily tuned for only a particular wavelength range.
As shown in Fig. 2, both larger and smaller square edges 208 and 210 are centered in position with respect to the axis of lens 214, with larger square edge 208 being displaced a relatively shorter distance in front of polarizer 206 and smaller square edge 210 being displaced a relatively longer distance in front of polarizer 206. These edges 208 and 210 are useful as an alignment mechanism for the purpose of permitting the user to self-position his or her eye 216 into the field of view of imager 200 without the need for any physical contact with the system. The goal for positioning is to constrain the three translational degrees of freedom of the object to be imaged (i.e., eye 216) so that it is centered on the sensor array (not shown) of imager 200 and at a distance that lies in the focal plane of lens 214. This is accomplished by simple perspective geometry to provide cues to the user so that he or she can maneuver to the point in space that satisfies these conditions. In particular, as shown by dashed lines 220, due to perspective, there is only one spatial position of eye 216 in which the square outline contour of smaller square edge 210 will totally occlude the square outline contour of larger square edge 208. This spatial position is a substantially longer distance in front of polarizer 206 than is smaller square edge 210. The relative sizes and distances between square edges 208 and 210 are chosen so that, when the eye is appropriately positioned, their square contours overlap, and misalignment of the smaller and larger square edges 208 and 210 provides continuous feedback for the user regarding the accuracy of the current position of alignment of his or her eye. This alignment procedure may be referred to as Vernier alignment, in analogy with a human's Vernier acuity, the ability to align thin lines and other small targets with hyper-precision. Further, while both larger and smaller edges 208 and 210 of the embodiment of Fig. 2 have square outline contour shapes, it should be understood that the outline contour of these larger and smaller edges may have geometrically similar shapes other than square, such that, when the eye is appropriately positioned, their geometrically similar contours overlap and misalignment of the smaller and larger edges provides continuous feedback for the user regarding the accuracy of the current position of alignment of his or her eye.
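By way of illustration, the unique self-positioning point follows from similar triangles: the smaller, more distant edge subtends the same visual angle as the larger, nearer edge at exactly one distance along the optical axis. The following is a minimal sketch of that calculation; the edge sizes and spacings in the example are hypothetical, since the patent does not give numeric values.

```python
def vernier_alignment_distance(w_large, z_large, w_small, z_small):
    """Distance from the lens at which a small edge of width w_small,
    placed z_small in front of the lens, exactly occludes a larger
    coaxial edge of width w_large placed z_large in front of the lens
    (z_small > z_large, w_large > w_small).  By similar triangles the
    two edges subtend equal angles at the eye when
        w_small / (z_eye - z_small) == w_large / (z_eye - z_large),
    which solves to the expression below."""
    return (w_large * z_small - w_small * z_large) / (w_large - w_small)

# Hypothetical dimensions (meters): a 4 cm edge 10 cm from the lens and
# a 2 cm edge 25 cm from the lens put the alignment point at 0.40 m.
z_eye = vernier_alignment_distance(0.04, 0.10, 0.02, 0.25)
```

At any other distance the two contours visibly separate, which is what gives the user continuous feedback about his or her position.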
In any case, imager 200, which receives a precisely focused light-intensity image (having negligible specular-reflection noise) of the user's eye 216, derives successive video frames of this eye image. Frame grabber 212 (which is a standard digital frame grabber) stores the eye image defined by a selected one of the video frames. This stored eye image from frame grabber 212 is then forwarded to means 102 (shown in Fig. 1) for iris localization.
For illustrative purposes, assume that the user is either attempting access to a secure facility or, alternatively, attempting access to an ATM. In either case, the user, after first employing square edges 208 and 210 in the manner described above to self-position his or her eye 216 into the field of view of imager 200 without the need for any physical contact with the system, then may push a button (not shown) causing frame grabber 212 to store the eye image defined by the currently-occurring video frame derived from imager 200. Thus, the operation of pushing the button by the user is similar to that of a user operating the shutter of a still camera to record a snapshot of a scene on the film of the still camera.
The structure shown in Fig. 2 and described above constitutes a basic embodiment of acquisition means 100. However, because different users vary in size and facial features from one another, it is desirable to enhance the structure of acquisition means 100 so that the position of the image of any user's eye viewed by the imager and stored by the frame grabber is independent of that user's particular size and facial features, for ease of use and to provide for the possibility of covert image capture. Further, in controlling access to a secure facility, it is desirable to provide video camera surveillance of the area in the general vicinity that a user employs to self-position his or her eye into the field of view of the imager, as well as to provide additional visual information that can be used to identify a user attempting access. Figs. 2a and 2b together illustrate a modification of the structure of means 100 that provides such enhancements.
As shown in Fig. 2a, the modification of the structure of acquisition means 100 includes low-resolution imager 222 having a relatively wide field of view for deriving image 224 of at least the head of user 226 then attempting access. The modification also includes high-resolution imager 228 having a relatively narrow field of view that is controlled by the position of active mirror 230 for deriving image 232 of an eye of user 226 (where imager 228 corresponds to imager 200 of Fig. 2). Image processing means of the type shown in Fig. 2b, described below, uses information contained in successive video frames of imager 222 to control the adjustment of the position of active mirror 230 in accordance with prior-art teachings disclosed in one or more of U.S. patents 4,692,806; 5,063,603; and 5,067,014, all of which are incorporated herein by reference.
More specifically, the modification of acquisition means 100 involves active image acquisition and tracking of the human head, face and eye for recognizing the initial position of an operator's head (as well as its component facial features, e.g., eyes and iris) and subsequent tracking. The approach utilized by the modification, which makes use of image information derived by imager 222, decomposes the matter into three parts. The first part is concerned with crude localization and tracking of the head and its component features. The second part is concerned with using the crude localization and tracking information to zoom in on and refine the positional and temporal estimates of the eye region, especially the iris. The third part is concerned with motion tracking.
The first part of eye localization is a mechanism for alerting the system that a potential user is present, and also for choosing candidate locations where the user might be. Such an alerting mechanism is the change-energy pyramid, shown in Fig. 2b (discussed in more detail below), where images recorded at a time interval are differenced and squared.
Change energy at different resolutions is produced using a Gaussian pyramid on the differenced, squared images. Change is analyzed at coarse resolution and, if present, can alert the system that a potential user is entering the imager's field of view. Other alerting mechanisms include stereo, where the proximity of the user is detected by computing disparity between two images recorded from two positions, alerting the system to objects that are nearby.
The second part of eye localization is a mechanism for initially localizing the head and eyes of the user. Localization is performed using a pattern-tree which comprises a model of a generic user, for example, a template of a head at a coarse resolution, and templates for the eyes, nose and mouth. The alerting mechanism gives candidate positions for a template matching process that matches the image with the model. Initially, matching is done at a coarse resolution to locate coarse features such as the head, and subsequently fine resolution features, such as the eyes, nose and mouth, are located using information from the coarse resolution match. The third part of eye localization is to track the head and eyes once in view. This is done using a motion tracker which performs a correlation match between a previous image frame and the current frame. The correlation match is done on the features used for eye localization, but can also be performed on other features, such as hair, that are useful for tracking over short time intervals but vary from person to person.
The result of the three previous parts provides the location of the eye in image 224 from imager 222 and, if stereo is used, the approximate range of the eye. This information is used by active mirror 230 to point imager 228 toward the eye to capture an image. Given the position of the eye in image 224, its approximate range, and a known geometry between imager 222 and imager 228, the pointing direction to capture the eye using imager 228 can be easily computed. If the range of the eye is unknown, then imager 228 is pointed to a position corresponding to the approximate expected range, from which it points to positions corresponding to ranges surrounding the expected range. If imager 228 and imager 222 are configured to be optically aligned, then only the image location of the eye in image 224 is necessary to point imager 228. Once imager 228 has been initially pointed to the eye, images from imager 228 are used to keep the eye in the field of view.
This is to compensate for eye saccades and normal movement of the user. Such movements will appear insignificant in images, such as image 224, from imager 222, but will appear significant in images, such as image 232, from imager 228. The tracking procedure is the same as that described for tracking the head and eyes, except the features used in images, such as image 232, of the user's eye are the eye's pupil, limbal boundary, and texture corresponding to the eyelid.
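The pointing computation referred to above is elementary once the wide-field camera is modeled. The sketch below assumes a pinhole model for imager 222 and a pan/tilt parameterization for active mirror 230; the function and parameter names are illustrative, as the patent does not spell out this step.

```python
import numpy as np

def mirror_pan_tilt(eye_px, eye_py, eye_range, fx, fy, cx, cy, R, t):
    """Back-project the eye's pixel location in image 224 to a 3D point
    using the estimated range, transform it into the frame of imager 228
    via the known rigid geometry (rotation R, translation t), and return
    pan/tilt angles for pointing."""
    # Ray through the pixel, scaled to the estimated range along the axis.
    x = (eye_px - cx) / fx * eye_range
    y = (eye_py - cy) / fy * eye_range
    p = np.array([x, y, eye_range])
    q = R @ p + t                       # point in imager 228 coordinates
    pan = np.arctan2(q[0], q[2])        # rotation about the vertical axis
    tilt = np.arctan2(q[1], np.hypot(q[0], q[2]))   # elevation angle
    return np.degrees(pan), np.degrees(tilt)
```

When the two imagers are optically aligned, only the image location matters for the pointing direction, as noted above.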
Referring to Fig. 2b, there is shown a functional block diagram of an image processor, responsive to images from imager 222, for controlling the position of active mirror 230 so that image 232 of the eye of user 226 is in the view of imager 228.
Specifically, the video signal output from imager 222, representing successive frames of image 224, is applied, after being digitized, as an input G0 to Gaussian pyramid 234. Input G0 is forwarded, with suitable delay, to an output of Gaussian pyramid 234 to provide a G0 image 236 of an image pyramid at the same resolution and sampling density as image 224. Further, as known in the pyramid art, Gaussian pyramid 234 includes cascaded convolution and subsampling stages for deriving reduced-resolution G1 output image 238 and G2 output image 240 of the image pyramid as outputs from Gaussian pyramid 234.
The respective G0, G1, and G2 outputs of Gaussian pyramid 234 are delayed a given number of one or more frame periods by frame delay 242. Subtractor 244 provides the difference between the polarized amplitude of corresponding pixels of the current and frame-delayed frames of each of G0, G1, and G2 as an output therefrom, thereby minimizing the amplitude of stationary image objects with respect to the amplitude of moving object images. This minimization is magnified, and polarity is eliminated, by squaring the output from subtractor 244 (as indicated by block 246) to provide a G0, G1, and G2 change energy pyramid (as indicated by respective blocks 248, 250 and 252). The change energy pyramid information, in a coarse-to-fine process known in the art, may then be used to control the position of active mirror 230 of Fig. 2a.
1 0 Alternatively, crude loç~li7.~tion and tr~cking could be based on a feature-based algorithm, such as disclosed in aforesaid U.S. patent 4,692,806, rather than template matching to provide simil~r information. Further, the modification could operate in an opportunistic fashion by acquiring a sequence of images until one with quality adequate for subsequent operations 1 5 has been obtained. Alternatively, from such a sequence, pieces of the region of interest could be acquired across frames and subsequently mosaiced together to yield a single image of adequate quality. Also, any of these modification approaches could be used to zoom in on and acquire h~igh resolution images of facial features other than the eye and iris. For example, 2 0 high resolution images of the lips of an operator could be obtained in an analogous f~.~hi-n The system shown in Fig. 2, either with or without the enhancement provided by the modification of Figs. 2a and 2b, could be generalized in a number of ways. First, the system could operate in spectral bands other 2 5 than the visible (e.g., near infrared). Thus, the term "light", as used herein, includes light r~ t.ion in both the visible and non-visible spectral bands. In order to ~t complich this, the spectral distribution of the illllmin~nt as well as the wavelength tuning of the quarter wave retarder must be matched to the desired spectral band. Second, the system could make use of a standard 3 0 video camera (repl~.ing the low light level camera), although a more intense illllmin~nt would need to be employed. Third, other ~hoices could be made for the lens system, including the use of an auto-focus zoom lens. This addition would place less of a premium on the accuracy with which the user deploys the Vernier ~ nm~nt procedure. Fourth, other instantiations of the Vernier 3 5 ~lignment procedure could be used. For example, pairs of lights could be projected in such a fashion that they would be seen as a single spot if the useris in the correct position and double otherwise. Fifth, in place of (or in addition to) the passive Vernier ~ nment meçh~ni~m, the system could be coupled wo 96/07978 rcrlusss/logss with an active tr~cking imager and associated software (such as that described above in connection with Figs. 2a and 2b) that automatically locates and tracks the eye of the user. This generalization would place less of a ~iUlll on having a cooperative user.
The output from acquisition means 100, which is applied as an input to localization means 102, comprises data in digital form that defines a relatively high-resolution eye image that corresponds to the particular video frame stored in frame grabber 212. Fig. 3 diagrammatically shows the sequence of the s-lcceæ,qive data procçs.qing steps performed by locP.li~t.io~
l 0 means 102 on the eye image data applied as an input thereto.
More specifically, input image 300 represents the relatively high-resolution eye image data that is applied as an input to localization means 102 from acquisition means 100. The first data processing step 302 is to average and reduce input image 300. This is accomplished by convolving the l 5 data dçfining input image 300 with a low-pass Gaussian filter that serves to spatially average and thereby reduce high frequency noise. Since spatial averaging introduces redundancy in the spatial domain, the filtered image is next sllhs~mple-l without any additional loss of information. The sllhs~mple-3 image serves as the basis for subsequent proc~sqin~ with the advantage that 2 0 its smaller dimen.qion.q and lower resolution leads to fewer computational lçm~n~.q comp~red to the original, full size, input image 300.
The next data procesqing steps involved in loc~ ing the iris consist of the sequential location of various components of the iris boundary. In sequence, step 304 locates the limbic (or outer) boundary of the iris, step 306 2 5 locates the pupilary (or inner) boundary of the iris, and step 308 locates the boundaries of the eyelids (which might be occluding a portion of the iris). Thisordering has been chosen based on the relative salience of the involved image features as well as on the ability of located components to constrain the location of additional components. The lo~ tion step of each component is 3 0 performed in two sub-steps. The first sub-step consists of an edge detection operation that is tuned to the expected configuration of high contrast image locations. This tuning is based on generic properties of the boundary component of interest (e.g., orientation) as well as on specific constraints that are provided by previously isolated boundary components. The second sub-3 5 step consists of a scheme where the detected edge pixels vote to instantiate particular values for a parameterized model of the boundary component of interest. Most simply, this step can be thought of in terms of a generalized WO 9G~'u791~ PCr/US95/10985 Hough transform as disclosed in U.S. patent 3,069,654, incorporated by reference.
In more detail, for the limbic boundary in step 304, the image is filtered with a gradient-based edge detector that is tuned in orientation so as to favor near verticality. This directional selectivity is motivated by the fact that, even in the face of occluding eyelids, the left and right portions of the limbus should be clearly visible and oriented near the vertical. (This assumes that the head is in an upright position.) The limbic boundary is modeled as a circle parameterized by its two center coordinates, xc and yc, and its radius, r. The detected edge pixels are thinned and then histogrammed into a three-dimensional (xc, yc, r)-space, according to permissible (xc, yc, r) values for a given (x, y) image location. The (xc, yc, r) point with the maximal number of votes is taken to represent the limbic boundary. The only additional constraint imposed on this boundary is that it lies within the given image of the eye.
In more detail, for the pupillary boundary in step 306, the image is filtered with a gradient-based edge detector that is not directionally tuned. The pupillary boundary is modeled as a circle, similar to the limbic boundary. The parameters of the circle again are instantiated in terms of the maximal number of votes received as the edge pixels are thinned and then histogrammed into permissible (xc, yc, r) values. For the case of the pupil, the permissible parameter values are constrained to lie within the circle that describes the limbic boundary.
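Both circular boundaries are thus recovered with the same thin-and-vote machinery. The following is a minimal sketch of that generalized Hough step for a circle, assuming a thinned binary edge map as input; the radius range and angular sampling density are illustrative parameters.

```python
import numpy as np

def hough_circle(edge_map, r_min, r_max, n_theta=64):
    """Each edge pixel (x, y) votes for all circle centers (xc, yc) lying
    at each permissible radius r around it; the (xc, yc, r) bin with the
    maximal number of votes is taken to represent the boundary."""
    h, w = edge_map.shape
    radii = np.arange(r_min, r_max + 1)
    acc = np.zeros((h, w, len(radii)), dtype=np.int32)
    ys, xs = np.nonzero(edge_map)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    for x, y in zip(xs, ys):
        for k, r in enumerate(radii):
            xc = np.rint(x - r * np.cos(thetas)).astype(int)
            yc = np.rint(y - r * np.sin(thetas)).astype(int)
            ok = (xc >= 0) & (xc < w) & (yc >= 0) & (yc < h)
            np.add.at(acc, (yc[ok], xc[ok], k), 1)
    yc_best, xc_best, k_best = np.unravel_index(np.argmax(acc), acc.shape)
    return xc_best, yc_best, radii[k_best]
```

For the limbic case, the edge map would first be restricted to near-vertical structure (e.g., keeping pixels where the horizontal gradient dominates); for the pupil, the accumulator would additionally be masked to centers and radii falling inside the previously found limbic circle.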
In more detail, for the eyelid boundaries in step 308, the image is filtered with a gradient-based edge detector that is tuned in orientation so as to favor the horizontal. This directional selectivity is motivated by the fact that the portion of the eyelid (if any) that is within the limbic boundary should be nearly horizontal. (Again, this assumes that the head is upright.) The upper and lower eyelids are modeled as (two separate) parabolic, i.e., second-order, arcs. Particular values for the parameterization are instantiated as the detected edge pixels are thinned and then histogrammed according to their permissible values. For the eyelids case, the detected boundaries are additionally constrained to be within the circle that specifies the limbic boundary and above or below the pupil for the upper and lower eyelids, respectively.
Finally, with the various components of the iris boundary isolated, the final processing step 310 consists of combining these components so as to delimit the iris, per se. This is accomplished by taking the iris as that portion of the image that is outside the pupil boundary, inside the limbic boundary, below the upper eyelid and above the lower eyelid.
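Step 310 reduces to a few boolean set operations once the component boundaries are in hand. A minimal sketch, assuming the circles and parabolic arcs recovered above and remembering that the image y-axis points downward:

```python
import numpy as np

def iris_mask(shape, limbic, pupil, upper_lid, lower_lid):
    """Delimit the iris as the region inside the limbic circle, outside
    the pupil circle, below the upper eyelid arc and above the lower one.
    Circles are (xc, yc, r); each eyelid is a parabola y = a*(x-b)**2 + c
    given as (a, b, c)."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    inside = lambda c: (xx - c[0]) ** 2 + (yy - c[1]) ** 2 <= c[2] ** 2
    lid_y = lambda p: p[0] * (xx - p[1]) ** 2 + p[2]
    # With y increasing downward, "below the upper eyelid" means yy >= curve.
    return (inside(limbic) & ~inside(pupil)
            & (yy >= lid_y(upper_lid)) & (yy <= lid_y(lower_lid)))
```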
The above-described approach to iris localization could be generalized in a number of ways. First, image representations other than oriented gradient-based edge detection could be used for enhancing iris boundaries. Second, alternative parameterizations for the iris boundary could be employed. Third, localization of various components of the iris boundary (limbic, pupillary and eyelid boundaries) could be performed in different orders, or in parallel. Fourth, alternative constraints, including absence of constraints, could be enforced in specifying the relative configuration of the components of the iris boundary. Fifth, the fit of the parameterized models of the iris boundary could be performed across multiple resolutions, e.g., in an iterative coarse-to-fine fashion. Sixth, iris boundary localization could be performed without the initial steps of spatial averaging and subsampling.
The benefit of the above-described approach to iris localization of an input eye image (particularly, as exemplified by the sequential data processing steps shown in Fig. 3) is that it requires no additional initial conditions and that it can be implemented employing simple filtering operations (that enhance relevant image structures) and histogramming operations (that embody a voting scheme for recovering the iris boundaries from the enhanced image) that incur little computational expense.
In Fig. 1, the processed data output from localization means 102, representing the image of solely the localized iris of the user, is applied as a first input to matching means 104, while selected data, previously stored in a database, that represents a model of the image of solely the localized iris 106 of the person whom the user purports to be is applied as a second input to matching means 104. Means 104 employs principles of the invention to efficiently process the first and second input data thereto to determine whether or not there is a match sufficient to indicate that the user is, in fact, the person whom he or she purports to be.
More specifically, the distinctive spatial characteristics of the human iris are manifest at a variety of scales. For example, distinguishing structures range from the overall shape of the iris to the distribution of tiny crypts and detailed texture. To capture this range of spatial structures, the iris image is represented in terms of a 2D bandpass signal decomposition.
Preliminary empirical studies lead to the conclusion that acceptable discrimination between iris images could be based on octave-wide bands computed at four different resolutions, implemented by means of Laplacian pyramids, to capture this information. This makes for efficient storage and processing, as lower frequency bands are subsampled successively without loss of information.
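A Laplacian pyramid of the kind referred to here can be sketched as repeated blur, subsample, upsample and subtract stages, each level retaining roughly an octave-wide band; four levels would correspond to the four resolutions mentioned. The helper below is illustrative, not the patent's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def laplacian_pyramid(image, levels=4, sigma=1.0):
    """Octave-wide bandpass decomposition: each band is the difference
    between the current Gaussian level and an upsampled version of its
    subsampled successor; lower-frequency bands carry fewer samples."""
    bands = []
    g = image.astype(np.float64)
    for _ in range(levels):
        low = gaussian_filter(g, sigma)
        down = low[::2, ::2]
        # Block-upsample back to the current size, then smooth.
        up = np.repeat(np.repeat(down, 2, axis=0), 2, axis=1)
        up = gaussian_filter(up[: g.shape[0], : g.shape[1]], sigma)
        bands.append(g - up)    # bandpass residue at this scale
        g = down                # recurse on the subsampled low-pass image
    return bands
```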
In order to make a detailed comparison between two images, it is advantageous to establish a precise correspondence between characteristic structures across the pair. An area-based image registration technique is used for this purpose. This technique seeks the mapping function (u(x,y), v(x,y)), such that, for all (x,y), the pixel value at (x,y) - (u(x,y), v(x,y)) in the data image is close to that at (x,y) in the model image. Here, (x,y) are taken over the image regions that are localized as the iris by the iris localization technique described herein. Further, the mapping function is constrained to be a similarity transformation, i.e., translational shift, scale and rotation. This allows the observed degrees of freedom between various imaged instances of the same iris to be compensated for. Shift accounts for offsets in the plane parallel to the imager's sensor array. Scale accounts for offsets along the camera's optical axis. Rotation accounts for deviation in rotation about the optical axis beyond that naturally compensated for by cyclotorsion of the eye. Given the ability to accurately position the person attempting access, as described above in connection with image acquisition, these prove to be the only degrees of freedom that need to be addressed in establishing correspondence. This approach has been implemented in terms of a hierarchical gradient-based image registration algorithm employing model-based motion estimation known in the art. Initial conditions for the algorithm are derived from the relative offset of iris boundaries located by the iris localization technique described above.
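The similarity-constrained mapping can be written down explicitly: with scale s, rotation angle φ and shift (tx, ty), each model location maps to s·R(φ)·(x, y) + (tx, ty). The sketch below only applies a known transform by resampling; the hierarchical gradient-based estimation of the four parameters is not shown, and the function name is illustrative.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def similarity_warp(image, scale, angle, tx, ty):
    """Resample `image` under the similarity transform
    (x, y) -> scale * R(angle) @ (x, y) + (tx, ty), bilinearly.
    Registration would adjust (scale, angle, tx, ty) so that warped data
    pixels agree with the model over the localized iris region."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    c, s = np.cos(angle), np.sin(angle)
    # Inverse mapping: for each output pixel, locate its source coordinate.
    xs = ( c * (xx - tx) + s * (yy - ty)) / scale
    ys = (-s * (xx - tx) + c * (yy - ty)) / scale
    return map_coordinates(image, [ys, xs], order=1, mode="nearest")
```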
With the model and data images accurately and precisely registered, the next task is to assign a goodness of match to quantify the comparison.
Given the system's ability to bring model and data images into fine registration, an appropriate match metric can be based on integrating pixel differences over spatial position within each frequency band of the image representation. Spatial correlation captures this notion. More specifically, normalized correlation is made use of. Normalized correlation captures the same type of information as standard correlation; however, it also accounts for local variations in image intensity that corrupt standard correlation, as known in the art. The correlations are performed over small blocks of pixels (8 x 8) in each spatial frequency band. A goodness of match subsequently is derived for each band by combining the block correlation values via the median statistic. Blocking combined with the median operation allows for local adjustments of matching and a degree of outlier detection, and thereby provides robustness against mismatches due to noise, misregistration and occlusion (e.g., a stray eyelash).
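The per-band metric just described — normalized correlation over 8 x 8 blocks combined by the median — can be sketched as follows; the block size is the one stated above, while the rest of the scaffolding is illustrative.

```python
import numpy as np

def band_goodness(model_band, data_band, block=8):
    """Normalized correlation over small pixel blocks, combined via the
    median: local mean/variance normalization discounts local intensity
    variation, and the median tolerates outlier blocks caused by noise,
    residual misregistration or occlusion such as a stray eyelash."""
    h, w = model_band.shape
    scores = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            m = model_band[i:i + block, j:j + block].ravel()
            d = data_band[i:i + block, j:j + block].ravel()
            m, d = m - m.mean(), d - d.mean()
            denom = np.sqrt((m @ m) * (d @ d))
            if denom > 0.0:
                scores.append((m @ d) / denom)
    return float(np.median(scores))
```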
The final task that must be performed is to combine the four goodness of match values that have been computed (one for each spatial frequency band) into a final judgment as to whether the data image comes from the same iris as does the model image. A reasonable approach to this matter is to combine the values in a fashion so that the variance within a class of iris images (i.e., various instances of the same iris) is minimized, while the variance between different classes of iris images (i.e., instances of different irises) is maximized. A linear function that provides such a solution is well known and is given by Fisher's Linear Discriminant. This technique has been disclosed, among others, by Duda and Hart in "Pattern Classification And Scene Analysis", John Wiley & Sons, 1973, pages 114-118. While it is not a foregone conclusion that any linear function can adequately distinguish different classes of arbitrary data sets, it has been found that, in practice, it works quite well in the case of iris images. Further, in practice, Fisher's Linear Discriminant has been defined based on a small set of iris image training data (comprising 5 images of 10 irises). Subsequently, in practice, this function has made for excellent discrimination between incoming data images that have a corresponding database entry and those that do not.
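Combining the four band scores with Fisher's Linear Discriminant is a single linear-algebra step: the weight vector is the inverse within-class scatter applied to the difference of the class means. A minimal sketch, assuming training score vectors gathered from same-iris and different-iris comparisons:

```python
import numpy as np

def fisher_discriminant(same_scores, diff_scores):
    """same_scores and diff_scores have shape (n_samples, 4), one
    goodness-of-match value per spatial frequency band.  Returns the
    weight vector w = Sw^{-1} (mu_same - mu_diff), which maximizes
    between-class separation relative to within-class variance."""
    mu_s, mu_d = same_scores.mean(axis=0), diff_scores.mean(axis=0)
    sw = ((len(same_scores) - 1) * np.cov(same_scores, rowvar=False)
          + (len(diff_scores) - 1) * np.cov(diff_scores, rowvar=False))
    return np.linalg.solve(sw, mu_s - mu_d)

# Judgment on a new comparison: project its four band scores and threshold.
# accept = band_scores @ w > threshold
```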
It is to be understood that the apparatus and method of operation taught herein are illustrative of the invention. Modifications may readily be devised by those skilled in the art without departing from the spirit or scope of the invention. In particular, methods of registration other than similarity may be used. Image representations other than those derived via application of isotropic bandpass filtering could serve as the basis for correlation. For example, oriented bandpass filtering, such as that disclosed by Burt et al. in U.S. Patent No. 5,325,449, issued June 28, 1994, incorporated herein by reference, or morphological filtering could be used. Other signal decomposition methods than bandpass, such as wavelet decomposition, can be used. A wavelet decomposition is a specific type of multiresolution pyramid that uses quadrature mirror filters (QMF) to produce subband decompositions of an original image-representative video signal. A signal processor of this type is described by Pentland et al. in "A Practical Approach to Fractal-Based Image Compression", Proceedings of the DCC '91 Data Compression Conference, Apr. 8-11, 1991, IEEE Computer Society Press, Los Alamitos, Calif. The Pentland et al. compression system attempts to use low frequency coarse scale information to predict significant information at high frequency finer scales. QMF subband pyramid processing also is described in the book "Subband Image Coding", J.W. Woods, ed., Kluwer Academic Publishers, 1991. Alternatively, an oriented bandpass such as that disclosed by Burt et al. in U.S. Patent No. 5,325,449, issued June 28, 1994, could be used.
Image matching could be performed in a more symbolic fashion. For example, multiple derived match values could be combined in manners other than those given by Fisher's Linear Discriminant. For example, a non-linear combination (e.g., derived with a neural network) could be used. Other comparison methods than correlation, and other decision criteria than Fisher's Linear Discriminant, can also be used.
Alternative methods could be used for aligning the irises that are being compared. For example, the images can be aligned subject to either simpler or more complex image transformations. Prior to the actual matching procedure, the annular iris images could be converted to a rectangular format, e.g., with radial and angular position converted to vertical and horizontal. Such manipulation would serve to simplify certain subsequent operations.
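Converting the annular iris to a rectangular format is a polar resampling: radial position maps to one axis and angular position to the other, so that residual eye rotation becomes a simple horizontal shift. A minimal sketch, assuming for simplicity that the pupillary and limbic circles share a center:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def unwrap_iris(image, xc, yc, r_pupil, r_limbus, n_r=32, n_theta=256):
    """Sample the annulus between the pupillary and limbic boundaries
    onto a rectangular grid: rows run radially, columns angularly."""
    r = np.linspace(r_pupil, r_limbus, n_r)
    t = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, t, indexing="ij")
    xs = xc + rr * np.cos(tt)
    ys = yc + rr * np.sin(tt)
    return map_coordinates(image, [ys, xs], order=1, mode="nearest")
```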
Prior to the actual matching procedure, the iris images could be projected along some direction to yield a one-dimensional signal. For example, the images could be projected along the radial direction.
The invention can be used to control access to an area, facility or a device, such as a computer or an ATM, or in biometric assessment.
METHOD
The United States Government has rights in this invention under a 5 government contract.
The prior art includes various te~hnologies for uniquely identifying an individual person in accordance with an ~ min~t.ion of particular attributes of either the person's interior or exterior eye. The prior art also includes a technology for eye tr~cking image pickup apparatus for separating noise 1 0 from feature portions, such as that disclosed in U.S. patent 5,016,282, issued to Tomono et al. on May 14, 1991. One of these prior-art technologies involves the visual ç~min~tion of the particular attributes of the exterior of the iris of at least one of the person's eyes. In this regard, reference is madeto U.S. patent 4,641,349 issued to Flom et al. on February 3, 1987, U.S.
1 5 patent 5,291,560, issued to Dallgm~n on March 1, 1994, and to Dallgm~n's article "High Confidence Visual Recognition of Persons by a Test of Statistical Independence", which appears on pages 1148-1161 of the IEEE
Tr~n.q~qct;ons on Pattern Analysis and M~rhine Intelligence, Volume 15, No.
11, November 1993. As made clear by the aforesaid patents and article, the 2 0 visible texture of a person's iris can be used to distinguish one person from another with great accuracy. Thus, iris recognition may be used for such purposes as controlling access to a secure facility or an Automated Tr~n.q~ction M~hine (ATM) for dispensing cash, by way of e~mples. An iris recognition system involves the use of an imager to video image the iris of 2 5 each person ~ttempting access and computer-vision image procesqing means for comparing this iris video image with a reference iris image on file in a database. For instance, the person attempting access may first enter a personal identification number (PIN), thereby permitting the video image of the iris of that person to be associated with his or her reference iris image on3 0 file. In addition, an iris recognition system is useful for such purposes as me~ gnostics in the medical ç~minAtiQn of the exterior eye.
From a practical point of view, there are problems with prior-art iris recognition systems and methods.
First, previous approaches to acquiring high quality images of the iris 3 5 of the eye have: (i) an invasive positioning device (e.g., a head rest or bite bar) serving to bring the subject of interest into a known standard configuration;
(ii) a controlled light source providing standardized illllmin~tion of the eye, and (iii) an imager serving to capture the positioned and illllmin~ted eye. There WO 96/07978 PCI/US95/lOg85 are a nllmber of limit~tions with this standard setup, including: (a) users findthe physical contact required for positioning to be unappealing, and (b) the illumination level required by these previous approaches for the capture of good quality, high contrast images can be annoying to the user.
Second, previous approaches to loc~ ing the iris in images of the eye have employed parameterized models of the iris. The parameters of these models are iteratively fit to an image of the eye that has been enh~nce-l so as to highlight regions corresponding to the iris boundary. The compl~ity of the model varies from concentric circles that delimit the inner and outer boundaries of the iris to more elaborate models involving the effects of partially occluding eyelids. The methods used to ~nh~nçe the iris boundaries include gradient based edge detection as well as morphological filtering. The chief limitations of these approaches include their need for good initial conditions that serve as seeds for the iterative fitting process as well as 1 5 extensive computational expense.
Third, previous approaches to pattern match a localized iris data image derived from the video image of a person attempting to gain access with that of one or more reference localized iris data images on file in a database provide reasonable discrimin~;on between these iris data images., 2 0 but require ~ncive computational expense The invention is directed to an improved system and method that provides a solution to disadvantages associated one or more of the aforesaid three approaches with prior-art iris recognition systems and methods.
The solution to the first of the aforesaid three approaches comprises a 2 5 non-invasive ~li nment mech~ni~m that may be implemented by a larger first edge and a smaller second edge having geometrically .simil~r shapes that are subst~nti~lly centered about and spaced at different distances from an imager lens to permit a user to self-position his or her eye into an imager's field of view without the need for any physical contact with the system by 3 0 maneuv~ g his or her eye to that point in space where, due to perspective, the sm~ller edge subst~nti~lly totally occludes the larger edge.
The solution to the second of the aforesaid three approaches comprises delimiting digital data to that portion of a digitized image of the eye of an individual that defines solely the iris of the eye of the individual by image-3 5 filtering at least one of the limbic boundary of the iris, the pupilary boundary of said iris, and the boundaries of said eye's upper and lower eyelids to derivean enh~nce~ image thereof, and then histogr~mming the enh~nced image by means that embody a voting scheme. This results in the recovery of the iris WO 96/07g78 1 ~ g5llo98s 21 99040 3;
boundaries without requiring knowledge of any initial conditions other than the digital data repres~ltalive of the individual's eye.
The solution to the third of the aforesaid three approaches comprises a pattern-m~tching te-~hnique for use in providing ~lltomP~te~l iris recognition for security access control. The pattern-matching technique, which is responsive to first digital data tl~fining a ~igiti7.e~ image of solely the iris of the eye of a certain individual attempting access and previously stored second digital data of a digitized image that defines solely the iris of the eye of a specified individual, employs normalized spatial correlation for first l 0 comparing, at each of a plurality of spatial scales, each of distinctive spatial characteristics of the respective irises of the given individual and the specified individual that are spatially registered with one another to quantitatively determine, at each of the plurality of spatial scales, a goodness value of match at that spatial scale. Whether or not the pattern of the digital data l S which m~nifests solely the iris of said eye of the given individual m~t~hes the digital data which m~nifests solely the iris of an eye of the specified individual is judged in accordance with a certain combination of the quantitatively-determined goodness values of match at each of the plurality of spatial scales.
2 0 The te~chings of the invention can be readily understood by con~i~lçring the following detailed description in conjunction with the accompanying d~dwillgs, in which:
Fig. 1 is a functio~l block diagram of an al1tom~ted, non-invasive iris recognition system incorporating the principles of the invention;
2 5 Fig. 2 illustrates an embo-lim~nt of iris acquisition means incorporating prinfiples ofthe invention;
Figs. 2a and 2b together illustrate a modification of the iris acquisition means of Fig. 2 for enh~nring the embo-liment thereof; and Fig. 3 illustrates the flow of computational steps employed by the 3 0 invention for automatically proces.cing an input image of an iris to provide complete iris loc~ tinn In Fig. 1, an automated, non-invasive iris recognition system comprises iris acquisition means 100 (shown in more detail in Fig. 2) for deriving an input image, typically a video image, of the iris of a person 3 5 (hereafter referred to as the "user") attempting to be recognized by the system as being a certain predetermined person; iris loc~ tion means 102 (employing the computational steps shown in Fig. 3) for automatically proces.sing an input image of an iris to provide complete loc~li7~tion of the W096/07978 PCT~S9511~5 21 99040 4 ~
video input image of the iris applied thereto from acquisition means 100; and pattern m?~t~hing means 104 for automatically comparing the pattern of the localized iris information applied thereto from means 102 with the pattern of a stored model iris 106 of the certain predetermined person, and concluding 5 with high accuracy whether the user is, in fact, the certain predetermined person.
Acquisition means 100, as shown in Fig. 2, comprises imager 200, such as a video camera, an array of light sources 202, diffuser 204, circular polarizer 206, larger square edge 208, smaller square edge 210, and image l 0 frame grabber 212.
Imager 200 is typically a low light level video camera, such as a silicon intensified target (SIT) camera having an optical component comprising a telephoto/macro lens 214, which points through a hole in the center of diffuser 204 so that lens 214 does not interfere with imager 200 obt~ining a clear 1 5 image. Lens 214 permits a high resolution image to be obt~ine-l of an eye 216 of the user, who is positioned a substantial distance in front of lens 214, so that e~ me ~U~ lity between eye 216 and imager 200 is not lC~lllile;l.
Light from the array of light sources 202, which sul,oulld imager 200, passes through diffuser 204 and polarizer 206 to illl1min~te an eye 216 of the 2 0 user who is positioned in front of polarizer 206. Diffuser 204 is a diffusing panel that operates as a first filter which serves the purposes of both providing uniform illllmin~tion of eye 216 and integrating radiant energy over a wide region at eye 216 in order to allow for an amount of light int~n.~ity to be distributed across the user's view that would be annoying if the same energy 2 5 was concçntrated in a single point source. Polarizer 206, which is situated in front of lens 214, operates as a second filter which ameliorates the effects of specular reflection at the cornea that would otherwise obfuscate the underlying structure of eye 216. More specifically, light emerging from polarizer 206 will have a particular sense of rotation. When this light hits a 3 0 specularly reflecting surface (e.g., the cornea) the light that is reflected back will still be polarized, but have a reversed sense. This revcl;.cd sense light will not be passed back through polarizer 206 and is thereby blocked to the view of imager 200. Huwcver, light hitting diffusely reflecting parts of the eye (e.g., the iris) will scatter the impinEin~ light and this light will be passed back 3 5 through polarizer 206 and subsequently be available for image formation. It should be noted that, strictly spe~king, circular polarization is accomplished via linear polarization followed by a quarter wave retarder; therefore, it is necess7~rily tuned for only a particular wavelength range.
As shown in Fig. 2, both larger and smaller square edges 208 and 210 are centered in position with respect to the axis of lens 214, with larger square edge 208 being displaced a relatively shorter distance in front of polarizer 206 and smaller square edge 210 being displaced a relatively longer distance in front of polarizer 206. These edges 208 and 210 are useful as an alignment mechanism for the purpose of permitting the user to self-position his or her eye 216 into the field of view of imager 200 without the need for any physical contact with the system. The goal for positioning is to constrain the three translational degrees of freedom of the object to be imaged (i.e., eye 216) so that it is centered on the sensor array (not shown) of imager 200 and at a distance that lies in the focal plane of lens 214. This is accomplished by simple perspective geometry to provide cues to the user so that he or she can maneuver to the point in space that satisfies these conditions. In particular, as shown by dashed lines 220, due to perspective, there is only one spatial position of eye 216 in which the square outline contour of smaller square edge 210 will totally occlude the square outline contour of larger square edge 208.
This spatial position is a substantially longer distance in front of polarizer 206 than is smaller square edge 210. The relative sizes and distances between square edges 208 and 210 are chosen so that when the eye is appropriately positioned, their square contours overlap, and misalignment of the smaller and larger square edges 208 and 210 provides continuous feedback for the user regarding the accuracy of the current position of alignment of his or her eye. This alignment procedure may be referred to as Vernier alignment, in analogy with the human's Vernier acuity, the ability to align thin lines and other small targets with hyper-precision. Further, while both larger and smaller edges 208 and 210 of the embodiment of Fig. 2 have square outline contour shapes, it should be understood that the outline contour of these larger and smaller edges may have geometrically similar shapes other than square, such that, when the eye is appropriately positioned, their geometrically similar contours overlap and misalignment of the smaller and larger edges provides continuous feedback for the user regarding the accuracy of the current position of alignment of his or her eye.
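The unique in-focus position implied by this occlusion cue follows directly from similar triangles. As a rough illustration (not taken from the patent; all dimensions below are invented for the example), the following sketch solves for the one axial eye distance at which both square contours subtend the same visual angle:

```python
# Sketch of the Vernier alignment geometry (illustrative dimensions only).
# The eye sees the small square exactly occlude the large square when both
# subtend the same angle: s_small/(z_eye - z_small) = s_large/(z_eye - z_large).

def occlusion_distance(s_large, z_large, s_small, z_small):
    """Axial eye distance (same units as z) at which the smaller, farther
    square edge exactly occludes the larger, nearer one."""
    # Solve s_small*(z_eye - z_large) = s_large*(z_eye - z_small) for z_eye.
    return (s_large * z_small - s_small * z_large) / (s_large - s_small)

# Hypothetical dimensions: an 8 cm square at 5 cm and a 4 cm square at 15 cm
# in front of the polarizer.
z_eye = occlusion_distance(s_large=0.08, z_large=0.05, s_small=0.04, z_small=0.15)
print(z_eye)  # 0.25 m: the one distance where the contours coincide

# Sanity check: equal angular size at the solution.
assert abs(0.04 / (z_eye - 0.15) - 0.08 / (z_eye - 0.05)) < 1e-12
```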
In any case, imager 200, which receives a precisely focused light-intensity image (having negligible specular-reflection noise) of the user's eye 216, derives successive video frames of this eye image. Frame grabber 212 (which is a standard digital frame grabber) stores the eye image defined by a selected one of the video frames. This stored eye image from frame grabber 212 is then forwarded to means 102 (shown in Fig. 1) for iris localization. For illustrative purposes, assume that the user is either attempting access to a secure facility or, alternatively, attempting access to an ATM. In either case, the user, after first employing square edges 208 and 210 in the manner described above to self-position his or her eye 216 into the field of view of imager 200 without the need for any physical contact with the system, then may push a button (not shown) causing frame grabber 212 to store the eye image defined by the currently-occurring video frame derived from imager 200. Thus, the operation of pushing the button by the user is similar to that of a user operating the shutter of a still camera to record a snapshot of a scene on the film of the still camera.
The structure shown in Fig. 2 and described above constitutes a basic embodiment of acquisition means 100. However, because different users vary in size and facial features from one another, it is desirable to enhance the structure of acquisition means 100 so that the position of the image of any user's eye viewed by the imager and stored by the frame grabber is independent of that user's particular size and facial features, for ease of use and to provide for the possibility of covert image capture. Further, in controlling access to a secure facility, it is desirable to provide video camera surveillance of the area in the general vicinity that a user employs to self-position his or her eye into the field of view of the imager, as well as to provide additional visual information that can be used to identify a user attempting access. Figs. 2a and 2b together illustrate a modification of the structure of means 100 that provides such enhancements.
As shown in Fig. 2a, the modification of the structure of acquisition means 100 includes low-resolution imager 222 having a relatively wide field of view for deriving image 224 of at least the head of user 226 then attempting access. The modification also includes high-resolution imager 228 having a relatively narrow field of view that is controlled by the position of active mirror 230 for deriving image 232 of an eye of user 226 (where imager 228 corresponds to imager 200 of Fig. 2). Image processing means of the type shown in Fig. 2b, described below, uses information contained in successive video frames of imager 222 to control the adjustment of the position of active mirror 230 in accordance with prior-art teachings disclosed in one or more of U.S. patents 4,692,806; 5,063,603; and 5,067,014, all of which are incorporated herein by reference.
More specifically, the modification of acquisition means 100 involves active image acquisition and tracking of the human head, face and eye for recognizing the initial position of an operator's head (as well as its component facial features, e.g., eyes and iris) and subsequent tracking. The approach utilized by the modification, which makes use of image information derived by imager 222, decomposes the matter into three parts. The first part is concerned with crude localization and tracking of the head and its component features. The second part is concerned with using the crude localization and tracking information to zoom in on and refine the positional and temporal estimates of the eye region, especially the iris. The third part is concerned with motion tracking.
The first part of eye localization is a mechanism for alerting the system that a potential user is present, and also for choosing candidate locations where the user might be. Such an alerting mechanism is the change-energy pyramid, shown in Fig. 2b (discussed in more detail below), where images recorded at a time interval are differenced and squared.
Change energy at different resolutions is produced using a Gaussian pyramid on the differenced, squared images. Change is analyzed at coarse resolution and, if present, can alert the system that a potential user is entering the imager's field of view. Other alerting mechanisms include stereo, where the proximity of the user is detected by computing disparity between two images recorded from two positions, alerting the system to objects that are nearby.
The second part of eye localization is a mechanism for initially localizing the head and eyes of the user. Localization is performed using a pattern-tree which comprises a model of a generic user, for example, a template of a head at a coarse resolution, and templates for the eyes, nose and mouth. The alerting mechanism gives candidate positions for a template matching process that matches the image with the model. Initially, matching is done at a coarse resolution to locate coarse features such as the head, and subsequently fine resolution features, such as the eyes, nose and mouth, are located using information from the coarse resolution match, as illustrated in the sketch below. The third part of eye localization is to track the head and eyes once in view. This is done using a motion tracker which performs a correlation match between a previous image frame and the current frame. The correlation match is done on the features used for eye localization, but can also be performed on other features, such as hair, that are useful for tracking over short time intervals, but vary from person to person.
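One plausible reading of this coarse-to-fine pattern-tree search is sketched below under assumptions of our own: a two-level Gaussian pyramid and plain normalized cross-correlation (the patent does not fix these details). The head template is matched at the coarse level, and the eye template is then searched only inside the region that the coarse match predicts:

```python
import numpy as np

def reduce2(img):
    """One Gaussian-pyramid REDUCE step: separable 5-tap blur, 2x subsampling."""
    k = np.array([1., 4., 6., 4., 1.]) / 16.
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, img)
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return img[::2, ::2]

def ncc_search(img, tmpl):
    """Exhaustive normalized cross-correlation; returns the best (row, col)."""
    th, tw = tmpl.shape
    t = (tmpl - tmpl.mean()) / (tmpl.std() + 1e-8)
    best, best_rc = -2.0, (0, 0)
    for r in range(img.shape[0] - th + 1):
        for c in range(img.shape[1] - tw + 1):
            w = img[r:r+th, c:c+tw]
            score = float(np.mean(((w - w.mean()) / (w.std() + 1e-8)) * t))
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc

def locate_eye(frame, head_tmpl_coarse, eye_tmpl_fine, search=24):
    """Coarse head match, then fine eye match inside the predicted region."""
    coarse = reduce2(frame)
    hr, hc = ncc_search(coarse, head_tmpl_coarse)   # head at half resolution
    r0 = max(0, 2 * hr - search)                    # map back to full resolution
    c0 = max(0, 2 * hc - search)
    r1 = min(frame.shape[0], 2 * hr + 2 * head_tmpl_coarse.shape[0] + search)
    c1 = min(frame.shape[1], 2 * hc + 2 * head_tmpl_coarse.shape[1] + search)
    er, ec = ncc_search(frame[r0:r1, c0:c1], eye_tmpl_fine)
    return r0 + er, c0 + ec   # eye position in the full-resolution frame
```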
The result of the three previous parts provides the location of the eye in image 224 from imager 222 and, if stereo is used, the approximate range of the eye. This information is used by active mirror 230 to point imager 228 toward the eye to capture an image. Given the position of the eye in image 224, its approximate range, and a known geometry between imager 222 and imager 228, the pointing direction to capture the eye using imager 228 can be easily computed. If the range of the eye is unknown, then imager 228 is pointed to a position corresponding to the approximate expected range, from which it points to positions corresponding to ranges surrounding the expected range. If imager 228 and imager 222 are configured to be optically aligned, then only the image location of the eye in image 224 is necessary to point imager 228. Once imager 228 has been initially pointed to the eye, images from imager 228 are used to keep the eye in the field of view.
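As a minimal sketch of that pointing computation, assuming a pinhole model for imager 222 and a pan/tilt mirror co-located with it (the patent leaves the rig geometry unspecified), the eye's pixel location and range estimate can be back-projected to a 3D point and converted to pointing angles:

```python
import numpy as np

def eye_direction(u, v, range_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) at an assumed range to a 3D point in the
    wide-field camera frame (pinhole model), then return pan/tilt angles."""
    # Ray through the pixel, scaled so its depth equals the estimated range.
    x = (u - cx) / fx * range_m
    y = (v - cy) / fy * range_m
    z = range_m
    pan = np.arctan2(x, z)                  # rotation about the vertical axis
    tilt = np.arctan2(-y, np.hypot(x, z))   # elevation (image v grows downward)
    return np.degrees(pan), np.degrees(tilt)

# Hypothetical calibration: 640x480 imager with a ~600-pixel focal length,
# eye detected at pixel (420, 200), stereo range estimate of 0.6 m.
print(eye_direction(420, 200, 0.6, fx=600, fy=600, cx=320, cy=240))
```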
This is to compensate for eye saccades and normal movement of the user.
Such movements will appear insignificant in images, such as image 224, from imager 222, but will appear significant in images, such as image 232, from imager 228. The tracking procedure is the same as that described for tracking the head and eyes, except the features used in images, such as image 232, of the user's eye are the eye's pupil, limbal boundary, and texture corresponding to the eyelid.
Referring to Fig. 2b, there is shown a functional block diagram of an image processor responsive to images from imager 222 for controlling the position of active mirror 230 so that image 232 of the eye of user 226 is in the view of imager 228.
Specifically, the video signal output from imager 222, representing successive frames of image 224, is applied, after being digitized, as an input G0 to Gaussian pyramid 234. Input G0 is forwarded, with suitable delay, to an output of Gaussian pyramid 234 to provide a G0 image 236 of an image pyramid at the same resolution and sampling density as image 224. Further, as known in the pyramid art, Gaussian pyramid 234 includes cascaded convolution and subsampling stages for deriving reduced-resolution G1 output image 238 and G2 output image 240 of the image pyramid as outputs from Gaussian pyramid 234.
The respective G0, G1, and G2 outputs of Gaussian pyramid 234 are delayed a given number of one or more frame periods by frame delay 242.
Subtractor 244 provides the difference between the polarized amplitude of corresponding pixels of the current and frame-delayed frames of each of G0, G1, and G2 as an output therefrom, thereby minimizing the amplitude of stationary image objects with respect to the amplitude of moving object images. This minimization is magnified and polarity is eliminated by squaring the output from subtractor 244 (as indicated by block 246) to provide a G0, G1, and G2 change energy pyramid (as indicated by respective blocks 248, 250 and 252). The change energy pyramid information, in a coarse-to-fine process known in the art, may then be used to control the position of active mirror 230 of Fig. 2a.
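A compact sketch of that difference-square pipeline follows. The pyramid depth, the 5-tap kernel, and the coarse-level alerting threshold are our own illustrative choices rather than values from the patent, and the per-frame pyramids are differenced directly, which is equivalent to delaying the pyramid outputs:

```python
import numpy as np

def reduce2(img):
    """One Gaussian-pyramid stage: separable 5-tap blur, then 2x subsampling."""
    k = np.array([1., 4., 6., 4., 1.]) / 16.
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, img)
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return img[::2, ::2]

def gaussian_pyramid(img, levels=3):
    pyr = [img.astype(float)]          # G0 at full resolution
    for _ in range(levels - 1):
        pyr.append(reduce2(pyr[-1]))   # G1, G2, ...
    return pyr

def change_energy_pyramid(frame_now, frame_delayed):
    """Difference corresponding pixels of the current and frame-delayed
    pyramids, then square to discard polarity and emphasize moving structure."""
    return [(a - b) ** 2
            for a, b in zip(gaussian_pyramid(frame_now),
                            gaussian_pyramid(frame_delayed))]

def user_alert(energy_pyr, threshold=25.0):
    """Alert on mean change energy at the coarsest level (assumed threshold)."""
    return float(energy_pyr[-1].mean()) > threshold
```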
In addition, the modification may employ template matching, such as taught in aforesaid U.S. patent 5,063,603, for object recognition.
Alternatively, crude localization and tracking could be based on a feature-based algorithm, such as disclosed in aforesaid U.S. patent 4,692,806, rather than template matching, to provide similar information. Further, the modification could operate in an opportunistic fashion by acquiring a sequence of images until one with quality adequate for subsequent operations has been obtained. Alternatively, from such a sequence, pieces of the region of interest could be acquired across frames and subsequently mosaiced together to yield a single image of adequate quality. Also, any of these modification approaches could be used to zoom in on and acquire high resolution images of facial features other than the eye and iris. For example, high resolution images of the lips of an operator could be obtained in an analogous fashion. The system shown in Fig. 2, either with or without the enhancement provided by the modification of Figs. 2a and 2b, could be generalized in a number of ways. First, the system could operate in spectral bands other than the visible (e.g., near infrared). Thus, the term "light", as used herein, includes light radiation in both the visible and non-visible spectral bands. In order to accomplish this, the spectral distribution of the illuminant as well as the wavelength tuning of the quarter wave retarder must be matched to the desired spectral band. Second, the system could make use of a standard video camera (replacing the low light level camera), although a more intense illuminant would need to be employed. Third, other choices could be made for the lens system, including the use of an auto-focus zoom lens. This addition would place less of a premium on the accuracy with which the user deploys the Vernier alignment procedure. Fourth, other instantiations of the Vernier alignment procedure could be used. For example, pairs of lights could be projected in such a fashion that they would be seen as a single spot if the user is in the correct position and double otherwise. Fifth, in place of (or in addition to) the passive Vernier alignment mechanism, the system could be coupled
The output from acquisition means 100, which is applied as an input to localization means 102, comprises data in digital form that defines a relatively high-resolution eye image that corresponds to the particular video frame stored in frame grabber 212. Fig. 3 diagrammatically shows the sequence of the s-lcceæ,qive data procçs.qing steps performed by locP.li~t.io~
l 0 means 102 on the eye image data applied as an input thereto.
More specifically, input image 300 represents the relatively high-resolution eye image data that is applied as an input to localization means 102 from acquisition means 100. The first data processing step 302 is to average and reduce input image 300. This is accomplished by convolving the data defining input image 300 with a low-pass Gaussian filter that serves to spatially average and thereby reduce high frequency noise. Since spatial averaging introduces redundancy in the spatial domain, the filtered image is next subsampled without any additional loss of information. The subsampled image serves as the basis for subsequent processing, with the advantage that its smaller dimensions and lower resolution lead to fewer computational demands compared to the original, full size, input image 300.
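A minimal sketch of this average-and-reduce step follows; the Gaussian standard deviation and the 2x subsampling factor are our own choices, since the patent specifies neither:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def average_and_reduce(image, sigma=1.0, factor=2):
    """Low-pass Gaussian filtering (spatial averaging) followed by
    subsampling; the blur removes the high frequencies that subsampling
    would otherwise alias."""
    smoothed = gaussian_filter(image.astype(float), sigma=sigma)
    return smoothed[::factor, ::factor]

# Example: a 480x640 eye image becomes 240x320 for the boundary-finding steps.
eye = np.random.rand(480, 640)  # stand-in for the digitized eye image
print(average_and_reduce(eye).shape)  # (240, 320)
```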
The next data processing steps involved in localizing the iris consist of the sequential location of various components of the iris boundary. In sequence, step 304 locates the limbic (or outer) boundary of the iris, step 306 locates the pupilary (or inner) boundary of the iris, and step 308 locates the boundaries of the eyelids (which might be occluding a portion of the iris). This ordering has been chosen based on the relative salience of the involved image features as well as on the ability of located components to constrain the location of additional components. The localization step of each component is performed in two sub-steps. The first sub-step consists of an edge detection operation that is tuned to the expected configuration of high contrast image locations. This tuning is based on generic properties of the boundary component of interest (e.g., orientation) as well as on specific constraints that are provided by previously isolated boundary components. The second sub-step consists of a scheme where the detected edge pixels vote to instantiate particular values for a parameterized model of the boundary component of interest. Most simply, this step can be thought of in terms of a generalized Hough transform as disclosed in U.S. patent 3,069,654, incorporated by reference.
In more detail, for the limbic boundary in step 304, the image is filtered with a gradient-based edge detector that is tuned in orientation so as to favor near verticality. This directional selectivity is motivated by the fact that, even in the face of occluding eyelids, the left and right portions of the limbus should be clearly visible and oriented near the vertical. (This assumes that the head is in an upright position.) The limbic boundary is modeled as a circle parameterized by its two center coordinates, xc and yc, and its radius, r. The detected edge pixels are thinned and then histogrammed into a three-dimensional (xc, yc, r)-space, according to permissible (xc, yc, r) values for a given (x, y) image location. The (xc, yc, r) point with the maximal number of votes is taken to represent the limbic boundary. The only additional constraint imposed on this boundary is that it lies within the given image of the eye.
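The following sketch conveys the flavor of that voting step for the limbic boundary. The gradient operator, the verticality test, and the accumulator ranges are illustrative assumptions, and a real implementation would thin the edge map before voting:

```python
import numpy as np

def limbic_hough(image, r_min=40, r_max=90, grad_thresh=20.0):
    """Vote edge pixels into an (xc, yc, r) accumulator for a circle model.
    Orientation tuning: keep only edges whose gradient is near horizontal,
    i.e., whose contour runs near vertical (the left/right limbus)."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    # Favor near-vertical contours: |gx| dominates |gy|.
    edges = (mag > grad_thresh) & (np.abs(gx) > 2.0 * np.abs(gy))
    h, w = image.shape
    radii = np.arange(r_min, r_max)
    acc = np.zeros((h, w, radii.size), dtype=np.int32)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        # The circle center lies along the gradient direction at distance r.
        ux, uy = gx[y, x] / mag[y, x], gy[y, x] / mag[y, x]
        for i, r in enumerate(radii):
            for s in (+1, -1):  # contrast sign is unknown; vote both ways
                xc, yc = int(round(x - s * r * ux)), int(round(y - s * r * uy))
                if 0 <= xc < w and 0 <= yc < h:
                    acc[yc, xc, i] += 1
    yc, xc, i = np.unravel_index(np.argmax(acc), acc.shape)
    return xc, yc, radii[i]  # the circle with the maximal number of votes
```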
In more detail, for the pupilary boundary in step 306, the image is filtered with a gradient-based edge detector that is not directionally tuned.
The pupilary boundary is modeled as a circle, similar to the limbic boundary.
The parameters of the circle again are instantiated in terms of the maximal number of votes received as the edge pixels are thinned and then histogrammed into permissible (xc, yc, r) values. For the case of the pupil, the permissible parameter values are constrained to lie within the circle that describes the limbic boundary.
In more detail, for the eyelid boundaries in step 308, the image is filtered with a gradient-based edge detector that is tuned in orientation so as to favor the horizontal. This directional selectivity is motivated by the fact that the portion of the eyelid (if any) that is within the limbic boundary should be nearly horizontal. (Again, this assumes that the head is upright.) The upper and lower eyelids are modeled as (two separate) parabolic, i.e., second-order, arcs. Particular values for the parameterization are instantiated as the detected edge pixels are thinned and then histogrammed according to their permissible values. For the eyelids case, the detected boundaries are additionally constrained to be within the circle that specifies the limbic boundary and above or below the pupil for the upper and lower eyelids, respectively.
Finally, with the various components of the iris boundary isolated, the final processing step 310 consists of combining these components so as to delimit the iris, per se. This is accomplished by taking the iris as that portion of the image that is outside the pupil boundary, inside the limbic boundary, below the upper eyelid and above the lower eyelid.
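Once the circles and eyelid arcs are recovered, this delimiting step reduces to a few boolean masks. A minimal sketch follows; the coordinate conventions and the parabola parameterization y = a(x - b)^2 + c are our assumptions:

```python
import numpy as np

def iris_mask(shape, pupil, limbus, upper_lid, lower_lid):
    """Boolean mask of iris pixels: inside the limbic circle, outside the
    pupil circle, below the upper eyelid arc and above the lower one.
    pupil/limbus are (xc, yc, r); each eyelid is (a, b, c) in y = a(x-b)**2 + c."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    def inside(circle):
        xc, yc, r = circle
        return (xs - xc) ** 2 + (ys - yc) ** 2 <= r ** 2
    def lid_y(lid):
        a, b, c = lid
        return a * (xs - b) ** 2 + c
    return (inside(limbus) & ~inside(pupil)
            & (ys >= lid_y(upper_lid))      # below the upper lid (y grows down)
            & (ys <= lid_y(lower_lid)))     # above the lower lid

# Hypothetical parameters on a 240x320 image.
mask = iris_mask((240, 320), pupil=(160, 120, 25), limbus=(160, 120, 70),
                 upper_lid=(0.004, 160, 60), lower_lid=(-0.004, 160, 185))
print(mask.sum(), "iris pixels retained")
```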
The above-described approach to iris localization could be generalized in a number of ways. First, image representations other than oriented gradient-based edge detection could be used for enhancing iris boundaries.
Second, alternative parameterizations for the iris boundary could be employed. Third, localization of various components of the iris boundary (limbic, pupilary and eyelid boundaries) could be performed in different orders, or in parallel. Fourth, alternative constraints, including absence of constraints, could be enforced in specifying the relative configuration of the components of the iris boundary. Fifth, the fit of the parameterized models of the iris boundary could be performed across multiple resolutions, e.g., in an iterative coarse-to-fine fashion. Sixth, iris boundary localization could be performed without the initial steps of spatial averaging and subsampling.
The benefit of the above-described approach to iris localization of an input eye image (particularly, as exemplified by the sequential data processing steps shown in Fig. 3) is that it requires no additional initial conditions and that it can be implemented employing simple filtering operations (that enhance relevant image structures) and histogramming operations (that embody a voting scheme for recovering the iris boundaries from the enhanced image) that incur little computational expense.
In Fig. 1, the processed data output from localization means 102, representing the image of solely the localized iris of the user, is applied as a first input to matching means 104, while selected data, previously stored in a database, that represents a model of the image of solely the localized iris 106 of the person whom the user purports to be is applied as a second input to matching means 104. Means 104 employs principles of the invention to efficiently process the first and second input data thereto to determine whether or not there is a match sufficient to indicate that the user is, in fact, the person whom he or she purports to be.
More specifically, the distinctive spatial characteristics of the human iris are manifest at a variety of scales. For example, distinguishing structures range from the overall shape of the iris to the distribution of tiny crypts and detailed texture. To capture this range of spatial structures, the iris image is represented in terms of a 2D bandpass signal decomposition.
Preliminary empirical studies lead to the conclusion that acceptable discrimination between iris images could be based on octave-wide bands computed at four different resolutions that are implemented by means of Laplacian pyramids to capture this information. This makes for efficient storage and processing, as lower frequency bands are subsampled successively without loss of information.
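A minimal Laplacian-pyramid sketch follows. The four-level depth matches the description above, while the 5-tap kernel and the nearest-neighbor EXPAND step are simplifications of our own:

```python
import numpy as np

K = np.array([1., 4., 6., 4., 1.]) / 16.  # standard 5-tap generating kernel

def blur(img):
    img = np.apply_along_axis(lambda r: np.convolve(r, K, mode="same"), 0, img)
    return np.apply_along_axis(lambda r: np.convolve(r, K, mode="same"), 1, img)

def reduce2(img):
    return blur(img)[::2, ::2]

def expand2(img, shape):
    """Upsample 2x (nearest neighbor, for brevity) and smooth."""
    up = np.kron(img, np.ones((2, 2)))[:shape[0], :shape[1]]
    return blur(up)

def laplacian_pyramid(img, levels=4):
    """Octave-wide bandpass bands: L_k = G_k - EXPAND(G_{k+1})."""
    g = [img.astype(float)]
    for _ in range(levels):
        g.append(reduce2(g[-1]))
    return [g[k] - expand2(g[k + 1], g[k].shape) for k in range(levels)]

bands = laplacian_pyramid(np.random.rand(256, 256))
print([b.shape for b in bands])  # (256,256), (128,128), (64,64), (32,32)
```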
In order to make a detailed comparison between two images, it is advantageous to establish a precise correspondence between characteristic structures across the pair. An area-based image registration technique is used for this purpose. This technique seeks the mapping function (u(x,y),v(x,y)), such that, for all (x,y), the pixel value at (x,y)-(u(x,y),v(x,y)) in the data image is close to that at (x,y) in the model image. Here, (x,y) are taken over the image regions that are localized as the iris by the iris localization technique described herein. Further, the mapping function is constrained to be a similarity transformation, i.e., translational shift, scale and rotation. This allows the observed degrees of freedom between various imaged instances of the same iris to be compensated for. Shift accounts for offsets in the plane parallel to the imager's sensor array. Scale accounts for offsets along the camera's optical axis. Rotation accounts for deviation in rotation about the optical axis that is not naturally compensated for by cyclotorsion of the eye. Given the ability to accurately position the person attempting access, as described above in connection with image acquisition, these prove to be the only degrees of freedom that need to be addressed in establishing correspondence. This approach has been implemented in terms of a hierarchical gradient-based image registration algorithm employing model-based motion estimation known in the art. Initial conditions for the algorithm are derived from the relative offset of iris boundaries located by the iris localization technique described above.
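The patent's hierarchical gradient-based estimator is more involved than space allows here, but the similarity constraint itself is easy to illustrate. The sketch below scores candidate (shift, scale, rotation) triples by warping the data image and correlating it against the model over the iris mask; the brute-force grid search and its ranges are a simplified stand-in of our own, not the patent's algorithm:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_similarity(img, dx, dy, scale, theta):
    """Resample img under a similarity transform (shift, scale, rotation)
    about the image center, using bilinear interpolation."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xs0, ys0 = xs - w / 2.0, ys - h / 2.0
    c, s = np.cos(theta), np.sin(theta)
    # Inverse map: where each output pixel comes from in the input image.
    src_x = (c * xs0 + s * ys0) / scale + w / 2.0 - dx
    src_y = (-s * xs0 + c * ys0) / scale + h / 2.0 - dy
    return map_coordinates(img, [src_y, src_x], order=1, mode="nearest")

def register(data, model, mask):
    """Exhaustive search over a small similarity grid (illustrative ranges)."""
    best, best_p = -2.0, None
    for dx in range(-4, 5, 2):
        for dy in range(-4, 5, 2):
            for scale in (0.95, 1.0, 1.05):
                for theta in np.radians((-4.0, 0.0, 4.0)):
                    warped = warp_similarity(data, dx, dy, scale, theta)
                    score = np.corrcoef(warped[mask], model[mask])[0, 1]
                    if score > best:
                        best, best_p = score, (dx, dy, scale, theta)
    return best_p, best
```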
With the model and data images accurately and precisely registered, the next task is to assign a goodness of match to quantify the comparison.
Given the system's ability to bring model and data images into fine registration, an appropriate match metric can be based on integrating pixel differences over spatial position within each frequency band of the image representation. Spatial correlation captures this notion. More specifically, normalized correlation is made use of. Normalized correlation captures the same type of information as standard correlation; however, it also accounts for local variations in image intensity that corrupt standard correlation, as known in the art. The correlations are performed over small blocks of pixels (8 x 8) in each spatial frequency band. A goodness of match subsequently is derived for each band by combining the block correlation values via the median statistic. Blocking combined with the median operation allows for local adjustments of matching and a degree of outlier detection, and thereby provides robustness against mismatches due to noise, misregistration and occlusion (e.g., a stray eyelash).
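A sketch of this block-wise metric follows, using the 8 x 8 block size stated above; the skipping of flat blocks is a detail of our own choosing:

```python
import numpy as np

def band_goodness(data_band, model_band, block=8):
    """Normalized correlation over non-overlapping 8x8 blocks of one bandpass
    band; the per-band goodness of match is the median block score, which
    tolerates a few outlier blocks (noise, misregistration, eyelashes)."""
    h, w = data_band.shape
    scores = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            a = data_band[r:r+block, c:c+block].ravel()
            b = model_band[r:r+block, c:c+block].ravel()
            a = a - a.mean()
            b = b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            if denom > 1e-12:            # skip featureless blocks
                scores.append(float((a * b).sum() / denom))
    return float(np.median(scores))

# One goodness value per spatial frequency band, e.g. for four bands:
# goodness = [band_goodness(d, m) for d, m in zip(data_bands, model_bands)]
```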
The final task that must be performed is to combine the four goodness of match values that have been computed (one for each spatial frequency band) into a final judgment as to whether the data image comes from the same iris as does the model image. A reasonable approach to this matter is to combine the values in a fashion so that the variance within a class of iris images (i.e., various instances of the same iris) is minimized, while the variance between different classes of iris images (i.e., instances of different irises) is maximized. A linear function that provides such a solution is well known and is given by Fisher's Linear Discriminant. This technique has been disclosed, among others, by Duda and Hart in "Pattern Classification And Scene Analysis", John Wiley & Sons, 1973, pages 114-118. While it is not a foregone conclusion that any linear function can adequately distinguish different classes of arbitrary data sets, it has been found that, in practice, it works quite well in the case of iris images. Further, in practice, Fisher's Linear Discriminant has been defined based on a small set of iris image training data (comprising 5 images of 10 irises). Subsequently, in practice, this function has made for excellent discrimination between incoming data images that have a corresponding database entry and those that do not.
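For concreteness, here is a minimal two-class Fisher discriminant over the four-dimensional goodness vectors; the training arrays and the midpoint decision threshold are illustrative assumptions, not the patent's training data:

```python
import numpy as np

def fisher_discriminant(same_iris, different_iris):
    """Weight vector w maximizing between-class separation relative to
    within-class scatter: w = Sw^-1 (m1 - m2). Each row is a 4-vector of
    per-band goodness-of-match values."""
    m1, m2 = same_iris.mean(axis=0), different_iris.mean(axis=0)
    sw = np.cov(same_iris, rowvar=False) + np.cov(different_iris, rowvar=False)
    w = np.linalg.solve(sw, m1 - m2)
    threshold = 0.5 * (w @ m1 + w @ m2)   # midpoint decision rule (assumed)
    return w, threshold

# Hypothetical training scores: authentic comparisons score higher per band.
rng = np.random.default_rng(0)
authentic = 0.8 + 0.05 * rng.standard_normal((50, 4))
impostor = 0.2 + 0.10 * rng.standard_normal((50, 4))
w, t = fisher_discriminant(authentic, impostor)
probe = np.array([0.75, 0.8, 0.7, 0.85])   # four goodness values for a probe
print("accept" if w @ probe > t else "reject")
```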
It is to be understood that the apparatus and method of operation taught herein are illustrative of the invention. Modifications may readily be devised by those skilled in the art without departing from the spirit or scope of the invention. In particular, methods of registration other than similarity may be used. Image representations other than those derived via application of isotropic bandpass filtering could serve as the basis for correlation. For example, oriented bandpass filtering, such as that disclosed by Burt et al in U.S. Patent No. 5,325,449 issued June 28, 1994, incorporated herein by reference, or morphological filtering could be used. Other signal decomposition methods than bandpass, such as wavelet decomposition, can be used. A
wavelet decomposition is a specific type of multiresolution pyramid that uses quadrature mirror filters (QMF) to produce subband decompositions of an original image representative video signal. A signal processor of this type is described by Pentland et al. in "A Practical Approach to Fractal-Based Image Compression", Proceedings of the DCC '91 Data Compression Conference, Apr. 8-11, 1991, IEEE Computer Society Press, Los Alamitos, Calif. The Pentland et al. compression system attempts to use low frequency coarse scale information to predict significant information at high frequency finer scales. QMF subband pyramid processing also is described in the book "Subband Image Coding", J.W. Woods, ed., Kluwer Academic Publishers, 1991. Alternatively, an oriented bandpass such as that disclosed by Burt et al in U.S. Patent No. 5,325,449 issued June 28, 1994, could be used.
Image matching could be performed in a more symbolic fashion. For example, multiple derived match values could be combined in manners other than those given by Fisher's Linear Discriminant. For example, a non-linear combination (e.g., derived with a neural network) could be used. Other comparison methods than correlation and other decision criteria than Fisher's Linear Discriminant can also be used.
Alternative methods could be used for aligning the irises that are being compared. For example, the images can be aligned subject to either simpler or more complex image transformations. Prior to the actual matching procedure, the annular iris images could be converted to a rectangular format, e.g., with radial and angular position converted to vertical and horizontal.
Such manipulation would serve to simplify certain subsequent operations.
Prior to the actual matching procedure, the iris images could be projected along some direction to yield a one-dimensional signal. For example, the images could be projected along the radial direction.
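A sketch of both manipulations follows, assuming known pupil and limbic circles; the radial and angular sampling resolutions are our own choices:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def unwrap_iris(image, xc, yc, r_pupil, r_limbus, n_radial=32, n_angular=256):
    """Convert the annular iris to a rectangular image: rows index radial
    position (pupil to limbus), columns index angular position."""
    radii = np.linspace(r_pupil, r_limbus, n_radial)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angular, endpoint=False)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    xs = xc + rr * np.cos(aa)
    ys = yc + rr * np.sin(aa)
    return map_coordinates(image, [ys, xs], order=1, mode="nearest")

def radial_projection(rect_iris):
    """Project along the radial direction, yielding a 1-D angular signal."""
    return rect_iris.mean(axis=0)

eye = np.random.rand(240, 320)               # stand-in eye image
rect = unwrap_iris(eye, xc=160, yc=120, r_pupil=25, r_limbus=70)
print(rect.shape, radial_projection(rect).shape)  # (32, 256) (256,)
```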
The invention can be used to control access to an area, facility or a device, such as a computer or an ATM, or in biometric assessment.
Claims (36)
1. In a system including an imager having a lens for deriving a focused image of an eye of a user of the system; the improvement comprising:
alignment means for permitting said user to self-position his or her eye into said imager's field of view without the need for any physical contact with said system.
2. The system of Claim 1, wherein said alignment means comprises:
first edge means having a given outline contour shape of a first given size that is substantially centered with respect to said lens, said first edge means being disposed a first distance in front of said lens; and second edge means having an outline contour shape of a second given size smaller than said first given size that is geometrically similar in shape to said given outline contour shape and is substantially centered with respect to said lens, said second edge means being disposed a second distance in front of said lens that is longer than said first distance by a specified amount, said specified amount being such that said lens forms a focused image of said user's eye when said user's eye is maneuvered to that point in space further in front of said lens than said second edge means at which said outline contour of said first edge means is substantially totally occluded by said outline contour of said second edge means.
3. The system of Claim 2, wherein:
said given outline contour shape of said first edge means is square;
whereby said geometrically similar outline contour shape of said second edge means is also square.
4. The system of Claim 2, wherein said system further comprises:
light means for illuminating said user's eye with diffuse light which is circularly polarized in a given sense of rotation to derive diffusely reflected light from said user's eye that is incident on said lens.
5. The system of Claim 4, wherein said light means comprises:
an array of light sources which surround said imager for deriving illuminating light;
a diffuser panel through which said illuminating light from said array of light sources passes, said diffuser panel having a hole therein situated with respect to said lens for permitting said reflected light to reach said lens without passing through said diffuser panel; and a circular polarizer situated in front of said diffuser panel and said lens for circularly polarizing that illuminating light reaching said user's eye in said given sense of rotation.
6. The system of Claim 4, further comprising:
a digital frame grabber coupled to said imager for deriving digital data representative of said focused image of said eye of a user.
7. The system of Claim 2, wherein said imager is a relatively high-resolution, narrow-field imager and said alignment means further comprises: a relatively low-resolution, wide field imager for deriving successive video frames of image information that includes predetermined facial features of said user;
image processing means responsive to said predetermined-facial-feature image information for locating the position of said user's eye in said video frames thereof; and means associated with said image processing means which is responsive to the location of the position of said user's eye in said video frames of said low-resolution, wide field imager for permitting said high-resolution, narrow field imager to be provided with a focused image of said user's eye.
8. The system of Claim 7, wherein:
said means associated with said image processing means includes an active mirror positioned in accordance with said located position of said user's eye in said video frames of said relatively low-resolution, wide field imager.
9. The system of Claim 7, wherein said image processing means includes:
a Gaussian pyramid for deriving a multistage image pyramid of at least one of said successive video frames of image information; and means responsive to said image-pyramid stages for deriving a change energy pyramid.
10. The system of Claim 1, wherein said imager is a relatively high-resolution, narrow-field imager and said improvement further comprises:
a relatively low-resolution, wide field imager for deriving successive video frames of image information that includes predetermined facial features of said user;
image processing means responsive to said predetermined-facial-feature image information for locating the position of said user's eye in said video frames thereof; and means associated with said image processing means which is responsive to the location of the position of said user's eye in said video frames of said low-resolution, wide field imager for permitting said high-resolution, narrow field imager to be provided with a focused image of said user's eye.
11. The system of Claim 7, wherein:
said means associated with said image processing means includes an active mirror positioned in accordance with said located position of said user's eye in said video frames of said relatively low-resolution, wide field imager.
12. The system of Claim 7, wherein said image processing means includes:
a Gaussian pyramid for deriving a multistage image pyramid of at least one of said successive video frames of image information; and means responsive to said image-pyramid stages for deriving a change energy pyramid.
13. The system of Claim 1, wherein said system is directed to examining the iris of said user's eye; wherein said system includes a digital frame grabber coupled to said imager for deriving digital data representative of said focused image of said user's eye; and wherein said improvement further comprises:
image processing means responsive to digital data from said frame grabber that manifests said user's eye for localizing said eye's iris by, in sequential order, (1) locating that data which is within the image of the user's eye that defines the limbic boundary of said iris, (2) locating that data which is within said limbic boundary that defines the pupilary boundary of said iris, (3) locating that data which is within said limbic boundary that defines the boundaries of said eye's upper and lower eyelids, and (4) then employing only that data that is outside of said pupilary boundary, inside said limbic boundary, and below the upper eyelid and above the lower eyelid, thereby to delimit said data to that portion thereof which manifests said eye's iris.
14. The system of Claim 13, wherein:
said image processing means includes means for low-pass filtering and then subsampling said digital data from said frame grabber prior to localizing said eye's iris in said sequential order.
15. The system of Claim 13, wherein:
said image processing means employs image-filtering means to derive enhanced images and histogramming means that embody a voting scheme for recovering said iris boundaries from said enhanced images;
whereby said recovery of said iris boundaries does not require knowledge of any initial conditions other than the digital data representative of said focused image of said user's eye.
16. The system of Claim 13, wherein said image processing means for locating that data which is within the image of the user's eye that defines the limbic boundary of said iris includes:
gradient-based edge detector filter means tuned in orientation to favor near verticality for deriving detected edge data; and means, responsive to said limbic boundary being modeled as a circle parameterized by its two center coordinates, xc and yc, and its radius, r, for thinning and then histogramming said detected edge data into a three-dimensional (xc, yc, r)-space, according to permissible (xc, yc, r) values for a given (x, y) image location;
whereby the (xc, yc, r) point with the maximal number of votes is taken to represent the limbic boundary.
17. The system of Claim 13, wherein said image processing means for locating that data which is within the image of the user's eye that defines the pupilary boundary of said iris includes:
gradient-based edge detector filter means that is directionally untuned in orientation for deriving detected edge data; and means, responsive to said pupilary boundary being modeled as a circle parameterized by its two center coordinates, xc and yc, and its radius, r, for thinning and then histogramming said detected edge data into a three-dimensional (xc, yc, r)-space, according to permissible (xc, yc, r) values for a given (x, y) image location;
whereby the (xc, yc, r) point with the maximal number of votes is taken to represent the pupilary boundary.
18. The system of Claim 13, wherein said image processing means for locating that data which is within the image of the user's eye that defines the boundaries of said eye's upper and lower eyelids of said iris includes:
gradient-based edge detector filter means tuned in orientation to favor the horizontal for deriving detected edge data; and means, responsive to each of said eye's upper eyelid boundary and lower eyelid boundary being modeled as a parabolic parameterized by second-order arcs, in which particular values for the parameterization are instantiated by thinning and then histogramming said detected edge data into a three-dimensional space, according to permissible values for a given image location;
whereby the spatial point with the maximal number of votes is taken to represent that eyelid boundary.
19. The system of Claim 1, wherein said system is directed to automated iris recognition for security access control; wherein said system includes a digital frame grabber coupled to said imager for deriving digital data representative of said focused image of said user's eye; and wherein said improvement further comprises image processing means including:
iris-localizing means responsive to digital data from said frame grabber that manifests said user's eye for substantially delimiting said digital data to only that portion thereof which manifests solely said iris of said user's eye;
and pattern-matching means responsive to said portion of said digital data and previously stored digital data which manifests solely the iris of an eye of a given individual, said pattern-matching means employing normalized spatial correlation for first comparing, at each of a plurality of spatial scales, each of distinctive spatial characteristics of the respective irises of said user and said given individual that are spatially registered with one another to quantitatively determine, at each of said plurality of spatial scales, a goodness value of match at that spatial scale, and then judging whether or not the pattern of said delimited digital data which manifests solely said iris of said user's eye matches said digital data which manifests solely the iris of an eye of said given individual in accordance with a certain combination of the quantitatively-determined goodness values of match at each of said plurality of spatial scales.
20. The system of Claim 19, wherein:
said pattern-matching means includes area-based image registration means utilizing a mapping function (u(x,y),v(x,y)) constrained to be a similarity transformation of translational shift, scale and rotation, such that, for all (x,y), the data value at (x,y)-(u(x,y),v(x,y)) in the delimited digital data which manifests solely said iris of said user's eye is close to that at (x,y) of said digital data which manifests solely the iris of an eye of said given individual.
21. The system of Claim 19, wherein:
said pattern-matching means includes means for performing normalized spatial correlation over given spatial blocks made up of a first plurality of data points in a first spatial dimension of each of said plurality of scales and a second plurality of data points in a second spatial dimension of each of said plurality of scales.
22. The system of Claim 21, wherein:
said pattern-matching means includes means combining said normalized spatial correlation over said given spatial blocks via a median statistic at each of said plurality of spatial scales for quantitatively determining said goodness value of match at that spatial scale.
23. The system of Claim 19, wherein:
said pattern-matching means includes means for combining the quantitatively-determined goodness values of match at each of said plurality of spatial scales so that the variance within various instances of the same iris is minimized and the variance within various instances of different irises is maximized.
24. The system of Claim 23, wherein:
said means for combining the quantitatively-determined goodness values of match at each of said plurality of spatial scales employs Fisher's Linear Discriminant as a linear function that minimizes the variance among various instances of the same iris and maximizes the variance among various instances of different irises.
25. In an image-processing method responsive to digital data defining a digitized image of the eye of an individual for delimiting said digital data to that portion thereof that defines solely the iris of said eye of said individual;
wherein said method includes a delimiting step of locating that data which is within the image of said individual's eye that defines at least one of the limbic boundary of said iris, the pupilary boundary of said iris, and the boundaries of said eye's upper and lower eyelids; the improvement wherein said delimiting step comprises the steps of:
a) image-filtering said one of the limbic boundary of said iris, the pupilary boundary of said iris, and the boundaries of said eye's upper and lower eyelids to derive an enhanced image thereof; and b) histogramming said enhanced image, in which said histogramming step embodies a voting scheme for recovering said one of said iris boundaries from said enhanced image;
whereby said recovery of said one of said iris boundaries does not require knowledge of any initial conditions other than the digital data defining said digitized image of said eye of said individual.
26. The method of Claim 25, wherein said delimiting step includes the sequential steps of:
c) first, locating that portion of said digital data that defines said limbic boundary of said iris;
d) second, locating that portion of said digital data which is within said limbic boundary that defines said pupilary boundary of said iris;
e) third, locating that portion of said digital data which is within said limbic boundary that defines said boundaries of said eye's upper and lower eyelids; and f) fourth, employing only that portion of said digital data that is outside of said pupilary boundary, inside said limbic boundary, and below the upper eyelid and above the lower eyelid thereby to delimit said digital data to that portion thereof which manifests said eye's iris.
27. The method of Claim 26, wherein step (c) includes the steps of:
g) employing gradient-based edge detector filter means tuned in orientation to favor near verticality for detecting edge data; and h) thinning and then histogramming said detected edge data into a three-dimensional (xc, yc, r)-space, according to permissible (xc, yc, r) values for a given (x, y) image location in accordance with a model of said limbic boundary as a circle parameterized by its two center coordinates, xc and yc, and its radius r;
whereby the (xc, yc, r) point with the maximal number of votes is taken to represent the limbic boundary.
28. The method of Claim 26, wherein step (d) includes the steps of:
g) employing gradient-based edge detector filter means that is directionally untuned in orientation for detecting edge data; and h) thinning and then histogramming said detected edge data into a three-dimensional (xc, yc, r)-space, according to permissible (xc, yc, r) values for a given (x, y) image location in accordance with a model of said pupilary boundary as a circle parameterized by its two center coordinates, xc and yc, and its radius r;
whereby the (xc, yc, r) point with the maximal number of votes is taken to represent the pupilary boundary.
29. The method of Claim 26, wherein step (e) includes the steps of: g) employing gradient-based edge detector filter means tuned in orientation to favor the horizontal for detecting edge data; and h) thinning and then histogramming said detected edge data into a three-dimensional space, according to permissible values for a given image location in accordance with a model of each of said eye's upper eyelid boundary and lower eyelid boundary as a parabola parameterized by a second-order arc;
whereby the spatial point with the maximal number of votes is taken to represent that eyelid boundary.
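The eyelid case of claim 29 differs only in the model: a second-order arc instead of a circle. A sketch, assuming the parameterization y = a(x - h)^2 + k (the claim does not spell one out): for each candidate (h, a), an edge point fixes k and that cell receives the vote; the maximal cell is read out exactly as in the circular cases.

```python
import numpy as np

def hough_parabola_votes(xs, ys, h_grid, a_grid, k_grid):
    """Claim 29's voting for an eyelid modelled as the second-order arc
    y = a*(x - h)**2 + k.  For each candidate (h, a), an edge point fixes
    k, and that (h, k, a) cell receives the vote.  k_grid must be uniform."""
    acc = np.zeros((len(h_grid), len(k_grid), len(a_grid)), dtype=np.int32)
    k_lo, k_step = k_grid[0], k_grid[1] - k_grid[0]
    for hi, h in enumerate(h_grid):
        for ai, a in enumerate(a_grid):
            k = ys - a * (xs - h) ** 2          # implied apex height
            ki = np.round((k - k_lo) / k_step).astype(int)
            ok = (ki >= 0) & (ki < len(k_grid))
            np.add.at(acc[hi, :, ai], ki[ok], 1)
    return acc
```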
30. The method of Claim 25, further including the step of:
low-pass filtering and then subsampling said digital data defining said digitized image of said eye of said individual prior to performing said delimiting step.
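Claim 30's preprocessing in two lines of SciPy, as a sketch: a Gaussian low-pass guards against aliasing, then subsampling shrinks the image before the delimiting search. The sigma and the factor of 2 are illustrative choices.

```python
from scipy import ndimage

def reduce_image(image, sigma=1.0, factor=2):
    """Claim 30: Gaussian low-pass filtering (to avoid aliasing) followed
    by subsampling, shrinking the data before the delimiting step."""
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma)
    return smoothed[::factor, ::factor]
```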
31. In an image-processing method for use in providing automated iris recognition for security access control; said method being responsive to first digital data defining a digitized image of solely the iris of the eye of a given individual attempting access and previously stored second digital data of a digitized image that defines solely the iris of the eye of a specified individual;
wherein said method includes a pattern-matching step that comprises the steps of:
a) employing normalized spatial correlation for first comparing, at each of a plurality of spatial scales, each of distinctive spatial characteristics of the respective irises of said given individual and said specified individual that are spatially registered with one another to quantitatively determine, at each of said plurality of spatial scales, a goodness value of match at that spatial scale; and b) judging whether or not the pattern of said digital data which manifests solely said iris of said eye of said given individual matches said digital data which manifests solely the iris of an eye of said specified individual in accordance with a certain combination of the quantitatively-determined goodness values of match at each of said plurality of spatial scales.
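A sketch of step (a) of claim 31, assuming a Gaussian pyramid supplies the plurality of spatial scales (the patent's multiscale decomposition may differ): both registered iris images are decomposed, and normalized spatial correlation at each level yields one goodness-of-match value per scale.

```python
import numpy as np
from scipy import ndimage

def pyramid(image, levels=4, sigma=1.0):
    """Gaussian pyramid standing in for the patent's plurality of scales."""
    out = [np.asarray(image, dtype=float)]
    for _ in range(levels - 1):
        out.append(ndimage.gaussian_filter(out[-1], sigma)[::2, ::2])
    return out

def normalized_correlation(a, b):
    """Normalized spatial correlation of two equal-sized arrays."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def per_scale_goodness(iris_a, iris_b, levels=4):
    """One goodness-of-match value per spatial scale (claim 31, step a)."""
    return [normalized_correlation(a, b)
            for a, b in zip(pyramid(iris_a, levels), pyramid(iris_b, levels))]
```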
32. The method of Claim 31, wherein step (a) comprises the step of:
c) providing area-based image registration utilizing a mapping function (u(x,y),v(x,y)) constrained to be a similarity transformation of translational shift, scale and rotation, such that, for all (x,y), the data value at (x,y)-(u(x,y),v(x,y)) in the first digital data which manifests solely said iris of said eye of said given individual is close to that at (x,y) of said second digital data which manifests solely the iris of an eye of said specified individual.
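A sketch of the similarity-constrained mapping of claim 32: four parameters (scale, rotation, and a translational shift) define (u(x,y), v(x,y)) implicitly, and the candidate image is resampled accordingly; a registration loop would search these parameters for the closest match. The transform convention (rotation and scale about the shifted origin) is an assumption.

```python
import numpy as np
from scipy import ndimage

def similarity_warp(image, scale, angle, tx, ty):
    """Resample `image` under a similarity transformation (claim 32): the
    shift (tx, ty), scale and rotation define the mapping (u(x,y), v(x,y))
    implicitly.  A registration loop would search these four parameters so
    the warped candidate best matches the reference iris."""
    rows, cols = image.shape
    yy, xx = np.mgrid[0:rows, 0:cols].astype(float)
    c, s = np.cos(angle), np.sin(angle)
    # Source coordinates for each destination pixel (convention assumed).
    xs = scale * (c * (xx - tx) - s * (yy - ty))
    ys = scale * (s * (xx - tx) + c * (yy - ty))
    return ndimage.map_coordinates(np.asarray(image, dtype=float),
                                   [ys, xs], order=1, mode='nearest')
```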
33. The method of Claim 31, wherein step (a) comprises the step of:
c) performing normalized spatial correlation over given spatial blocks made up of a first plurality of data points in a first spatial dimension of each of said plurality of scales and a second plurality of data points in a second spatial dimension of each of said plurality of scales.
34. The method of Claim 33, wherein step (c) comprises the step of:
d) combining said normalized spatial correlation over said given spatial blocks via a median statistic at each of said plurality of spatial scales for quantitatively determining said goodness value of match at that spatial scale.
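Claims 33 and 34 together, as a sketch: normalized spatial correlation is computed over fixed-size spatial blocks at one scale, and the per-block scores are combined by a median statistic into that scale's goodness value. The 8x8 block size is illustrative only.

```python
import numpy as np

def block_goodness(a, b, block=(8, 8)):
    """Claims 33-34: normalized spatial correlation over spatial blocks,
    combined by a median statistic into one goodness value for the scale.
    The median resists outlier blocks (e.g. local occlusion by lashes)."""
    bh, bw = block
    scores = []
    for y in range(0, a.shape[0] - bh + 1, bh):
        for x in range(0, a.shape[1] - bw + 1, bw):
            pa = a[y:y + bh, x:x + bw].astype(float)
            pb = b[y:y + bh, x:x + bw].astype(float)
            pa -= pa.mean()
            pb -= pb.mean()
            denom = np.sqrt((pa * pa).sum() * (pb * pb).sum())
            scores.append(float((pa * pb).sum() / denom) if denom else 0.0)
    return float(np.median(scores))
```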
35. The method of Claim 31, wherein step (b) comprises the step of:
c) combining the quantitatively-determined goodness values of match at each of said plurality of spatial scales so that the variance within various instances of the same iris is minimized and the variance within various instances of different irises is maximized.
36. The method of Claim 35, wherein step (c) comprises the step of:
d) employing Fisher's Linear Discriminant as a linear function that minimizes the variance among various instances of the same iris and maximizes the variance among various instances of different irises.
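A sketch of claims 35-36: given training comparisons of the same iris and of different irises, Fisher's Linear Discriminant yields the linear combining function whose weights minimize the variance of the per-scale goodness vectors within a class and maximize it between classes. The regularization term is an added numerical safeguard, not from the patent.

```python
import numpy as np

def fisher_weights(same_scores, diff_scores, eps=1e-6):
    """Fisher's Linear Discriminant over per-scale goodness vectors.

    same_scores, diff_scores: (n, n_scales) arrays of match values from
    same-iris and different-iris comparisons.  Returns the weight vector
    of the linear combining function of claims 35-36."""
    m_same = same_scores.mean(axis=0)
    m_diff = diff_scores.mean(axis=0)
    # Pooled within-class scatter; eps*I keeps the solve well-posed.
    s_w = (np.cov(same_scores, rowvar=False)
           + np.cov(diff_scores, rowvar=False)
           + eps * np.eye(same_scores.shape[1]))
    return np.linalg.solve(s_w, m_same - m_diff)

def combined_score(goodness_per_scale, weights):
    """Final match score: weighted sum of the per-scale goodness values."""
    return float(np.dot(weights, goodness_per_scale))
```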
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US08/300,678 US5572596A (en) | 1994-09-02 | 1994-09-02 | Automated, non-invasive iris recognition system and method |
US08/300,678 | 1994-09-02 |
Publications (1)
Publication Number | Publication Date |
---|---|
CA2199040A1 true CA2199040A1 (en) | 1996-03-14 |
Family
ID=23160145
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA002199040A Abandoned CA2199040A1 (en) | 1994-09-02 | 1995-09-05 | Automated, non-invasive iris recognition system and method |
Country Status (11)
Country | Link |
---|---|
US (2) | US5572596A (en) |
EP (2) | EP1126403A2 (en) |
JP (3) | JP3943591B2 (en) |
KR (1) | KR970705798A (en) |
CN (1) | CN1160446A (en) |
AU (1) | AU702883B2 (en) |
BR (1) | BR9508691A (en) |
CA (1) | CA2199040A1 (en) |
HU (1) | HUT76950A (en) |
MX (1) | MX9701624A (en) |
WO (1) | WO1996007978A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103440483A (en) * | 2013-09-03 | 2013-12-11 | 吉林大学 | Active auto-focus type iris image capturing device |
Families Citing this family (449)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10361802B1 (en) | 1999-02-01 | 2019-07-23 | Blanding Hovenweep, Llc | Adaptive pattern recognition based control system and method |
EP0654765B1 (en) * | 1993-11-23 | 2002-10-09 | Hewlett-Packard Company, A Delaware Corporation | Ink rendering |
EP0664642B1 (en) | 1994-01-20 | 2002-07-24 | Omron Corporation | Image processing device for identifying an input image, and copier including same |
US6714665B1 (en) | 1994-09-02 | 2004-03-30 | Sarnoff Corporation | Fully automated iris recognition system utilizing wide and narrow fields of view |
JP2630923B2 (en) * | 1994-12-05 | 1997-07-16 | 日本アイ・ビー・エム株式会社 | Image recognition method and apparatus |
EP0865637A4 (en) * | 1995-12-04 | 1999-08-18 | Sarnoff David Res Center | Wide field of view/narrow field of view recognition system and method |
JP3625941B2 (en) * | 1996-01-30 | 2005-03-02 | 沖電気工業株式会社 | Iris recognition system |
EP0892956A4 (en) * | 1996-02-09 | 2002-07-24 | Sarnoff Corp | Method and apparatus for training a neural network to detect and classify objects with uncertain training data |
JP2907120B2 (en) * | 1996-05-29 | 1999-06-21 | 日本電気株式会社 | Red-eye detection correction device |
DE69717826T2 (en) | 1996-06-06 | 2003-09-04 | British Telecomm Public Ltd Co | IDENTIFICATION OF PERSONS |
JP3751368B2 (en) * | 1996-06-28 | 2006-03-01 | 沖電気工業株式会社 | Iris recognition system and iris recognition device |
JP3436293B2 (en) * | 1996-07-25 | 2003-08-11 | 沖電気工業株式会社 | Animal individual identification device and individual identification system |
EP0959769A1 (en) * | 1996-08-25 | 1999-12-01 | Sensar, Inc. | Apparatus for the iris acquiring images |
US6819783B2 (en) | 1996-09-04 | 2004-11-16 | Centerframe, Llc | Obtaining person-specific images in a public venue |
WO1998024006A2 (en) * | 1996-11-12 | 1998-06-04 | Express Key, Inc. | Automated check-in/check-out system |
JP3587635B2 (en) * | 1996-11-15 | 2004-11-10 | 沖電気工業株式会社 | Personal recognition device using iris and automatic transaction system using this personal recognition device |
EP0953188A2 (en) * | 1996-11-19 | 1999-11-03 | Fingerpin AG | Process and device for identifying and recognizing living beings and/or objects |
US6324532B1 (en) | 1997-02-07 | 2001-11-27 | Sarnoff Corporation | Method and apparatus for training a neural network to detect objects in an image |
US6786420B1 (en) | 1997-07-15 | 2004-09-07 | Silverbrook Research Pty. Ltd. | Data distribution mechanism in the form of ink dots on cards |
US6381345B1 (en) * | 1997-06-03 | 2002-04-30 | At&T Corp. | Method and apparatus for detecting eye location in an image |
US6618117B2 (en) | 1997-07-12 | 2003-09-09 | Silverbrook Research Pty Ltd | Image sensing apparatus including a microcontroller |
US7110024B1 (en) | 1997-07-15 | 2006-09-19 | Silverbrook Research Pty Ltd | Digital camera system having motion deblurring means |
US6985207B2 (en) | 1997-07-15 | 2006-01-10 | Silverbrook Research Pty Ltd | Photographic prints having magnetically recordable media |
AUPO850597A0 (en) * | 1997-08-11 | 1997-09-04 | Silverbrook Research Pty Ltd | Image processing method and apparatus (art01a) |
US20040160524A1 (en) * | 1997-07-15 | 2004-08-19 | Kia Silverbrook | Utilising exposure information for image processing in a digital image camera |
US7551201B2 (en) | 1997-07-15 | 2009-06-23 | Silverbrook Research Pty Ltd | Image capture and processing device for a print on demand digital camera system |
US7593058B2 (en) * | 1997-07-15 | 2009-09-22 | Silverbrook Research Pty Ltd | Digital camera with integrated inkjet printer having removable cartridge containing ink and media substrate |
US7551202B2 (en) * | 1997-07-15 | 2009-06-23 | Silverbrook Research Pty Ltd | Digital camera with integrated inkjet printer |
US6624848B1 (en) | 1997-07-15 | 2003-09-23 | Silverbrook Research Pty Ltd | Cascading image modification using multiple digital cameras incorporating image processing |
US7246897B2 (en) * | 1997-07-15 | 2007-07-24 | Silverbrook Research Pty Ltd | Media cartridge for inkjet printhead |
US7714889B2 (en) * | 1997-07-15 | 2010-05-11 | Silverbrook Research Pty Ltd | Digital camera using exposure information for image processing |
US7077515B2 (en) * | 1997-07-15 | 2006-07-18 | Silverbrook Research Pty Ltd | Media cartridge for inkjet printhead |
US6690419B1 (en) | 1997-07-15 | 2004-02-10 | Silverbrook Research Pty Ltd | Utilising eye detection methods for image processing in a digital image camera |
US7705891B2 (en) * | 1997-07-15 | 2010-04-27 | Silverbrook Research Pty Ltd | Correction of distortions in digital images |
AUPO799997A0 (en) * | 1997-07-15 | 1997-08-07 | Silverbrook Research Pty Ltd | Image processing method and apparatus (ART10) |
AUPO802797A0 (en) | 1997-07-15 | 1997-08-07 | Silverbrook Research Pty Ltd | Image processing method and apparatus (ART54) |
US6879341B1 (en) | 1997-07-15 | 2005-04-12 | Silverbrook Research Pty Ltd | Digital camera system containing a VLIW vector processor |
US6119096A (en) * | 1997-07-31 | 2000-09-12 | Eyeticket Corporation | System and method for aircraft passenger check-in and boarding using iris recognition |
US6252976B1 (en) * | 1997-08-29 | 2001-06-26 | Eastman Kodak Company | Computer program product for redeye detection |
US6151403A (en) * | 1997-08-29 | 2000-11-21 | Eastman Kodak Company | Method for automatic detection of human eyes in digital images |
US7042505B1 (en) | 1997-10-09 | 2006-05-09 | Fotonation Ireland Ltd. | Red-eye filter method and apparatus |
US7630006B2 (en) | 1997-10-09 | 2009-12-08 | Fotonation Ireland Limited | Detecting red eye filter and apparatus using meta-data |
US7738015B2 (en) * | 1997-10-09 | 2010-06-15 | Fotonation Vision Limited | Red-eye filter method and apparatus |
US7352394B1 (en) | 1997-10-09 | 2008-04-01 | Fotonation Vision Limited | Image modification based on red-eye filter analysis |
US6299307B1 (en) | 1997-10-10 | 2001-10-09 | Visx, Incorporated | Eye tracking device for laser eye surgery using corneal margin detection |
EP0910986A1 (en) * | 1997-10-24 | 1999-04-28 | BRITISH TELECOMMUNICATIONS public limited company | Imaging apparatus |
US6055322 (en) | 1997-12-01 | 2000-04-25 | Sensar, Inc. | Method and apparatus for illuminating and imaging eyes through eyeglasses using multiple sources of illumination |
US6850631B1 (en) | 1998-02-20 | 2005-02-01 | Oki Electric Industry Co., Ltd. | Photographing device, iris input device and iris image input method |
JP3271750B2 (en) | 1998-03-05 | 2002-04-08 | 沖電気工業株式会社 | Iris identification code extraction method and device, iris recognition method and device, data encryption device |
US5966197A (en) * | 1998-04-21 | 1999-10-12 | Visx, Incorporated | Linear array eye tracker |
US6283954B1 (en) | 1998-04-21 | 2001-09-04 | Visx, Incorporated | Linear array eye tracker |
KR100575472B1 (en) * | 1998-05-19 | 2006-05-03 | 가부시키가이샤 소니 컴퓨터 엔터테인먼트 | Image processing apparatus and method |
US5956122A (en) * | 1998-06-26 | 1999-09-21 | Litton Systems, Inc | Iris recognition apparatus and method |
US7398119B2 (en) * | 1998-07-13 | 2008-07-08 | Childrens Hospital Los Angeles | Assessing blood brain barrier dynamics or identifying or measuring selected substances, including ethanol or toxins, in a subject by analyzing Raman spectrum signals |
JP3315648B2 (en) | 1998-07-17 | 2002-08-19 | 沖電気工業株式会社 | Iris code generation device and iris recognition system |
JP3610234B2 (en) * | 1998-07-17 | 2005-01-12 | 株式会社メディア・テクノロジー | Iris information acquisition device and iris identification device |
AUPP702098A0 (en) | 1998-11-09 | 1998-12-03 | Silverbrook Research Pty Ltd | Image creation method and apparatus (ART73) |
US6377699B1 (en) | 1998-11-25 | 2002-04-23 | Iridian Technologies, Inc. | Iris imaging telephone security module and method |
US6532298B1 (en) | 1998-11-25 | 2003-03-11 | Iridian Technologies, Inc. | Portable authentication device and method using iris patterns |
ATE475260T1 (en) * | 1998-11-25 | 2010-08-15 | Iridian Technologies Inc | RAPID FOCUS ASSESSMENT SYSTEM AND METHOD FOR IMAGE CAPTURE |
US6289113B1 (en) | 1998-11-25 | 2001-09-11 | Iridian Technologies, Inc. | Handheld iris imaging apparatus and method |
US6424727B1 (en) * | 1998-11-25 | 2002-07-23 | Iridian Technologies, Inc. | System and method of animal identification and animal transaction authorization using iris patterns |
US6753919B1 (en) | 1998-11-25 | 2004-06-22 | Iridian Technologies, Inc. | Fast focus assessment system and method for imaging |
US6538649B2 (en) | 1998-12-01 | 2003-03-25 | Intel Corporation | Computer vision control variable transformation |
US6396476B1 (en) | 1998-12-01 | 2002-05-28 | Intel Corporation | Synthesizing computer input events |
KR100320465B1 (en) | 1999-01-11 | 2002-01-16 | 구자홍 | Iris recognition system |
US6944318B1 (en) | 1999-01-15 | 2005-09-13 | Citicorp Development Center, Inc. | Fast matching systems and methods for personal identification |
US6363160B1 (en) * | 1999-01-22 | 2002-03-26 | Intel Corporation | Interface using pattern recognition and tracking |
US6577329B1 (en) | 1999-02-25 | 2003-06-10 | International Business Machines Corporation | Method and system for relevance feedback through gaze tracking and ticker interfaces |
KR100320188B1 (en) * | 1999-03-23 | 2002-01-10 | 구자홍 | Forgery judgment method for iris recognition system |
GB9907515D0 (en) | 1999-04-01 | 1999-05-26 | Ncr Int Inc | Self service terminal |
KR100356600B1 (en) * | 1999-04-09 | 2002-10-19 | 아이리텍 잉크 | A Method For Identifying The Iris Of Persons Based On The Shape Of Lacuna And/Or Autonomous Nervous Wreath |
US6247813B1 (en) * | 1999-04-09 | 2001-06-19 | Iritech, Inc. | Iris identification system and method of identifying a person through iris recognition |
US6700998B1 (en) * | 1999-04-23 | 2004-03-02 | Oki Electric Industry Co, Ltd. | Iris registration unit |
US7711152B1 (en) * | 1999-04-30 | 2010-05-04 | Davida George I | System and method for authenticated and privacy preserving biometric identification systems |
US8325994B2 (en) | 1999-04-30 | 2012-12-04 | Davida George I | System and method for authenticated and privacy preserving biometric identification systems |
AUPQ056099A0 (en) | 1999-05-25 | 1999-06-17 | Silverbrook Research Pty Ltd | A method and apparatus (pprint01) |
JP2001034754A (en) * | 1999-07-19 | 2001-02-09 | Sony Corp | Iris authentication device |
US6553494B1 (en) | 1999-07-21 | 2003-04-22 | Sensar, Inc. | Method and apparatus for applying and verifying a biometric-based digital signature to an electronic document |
US6647131B1 (en) | 1999-08-27 | 2003-11-11 | Intel Corporation | Motion detection using normal optical flow |
US6322216B1 (en) | 1999-10-07 | 2001-11-27 | Visx, Inc | Two camera off-axis eye tracker for laser eye surgery |
US7020351B1 (en) | 1999-10-08 | 2006-03-28 | Sarnoff Corporation | Method and apparatus for enhancing and indexing video and audio signals |
WO2001028238A2 (en) * | 1999-10-08 | 2001-04-19 | Sarnoff Corporation | Method and apparatus for enhancing and indexing video and audio signals |
JP2003511183A (en) | 1999-10-21 | 2003-03-25 | テクノラス ゲーエムベーハー オフタルモロギッシェ システム | Personalized corneal profile |
CN100362975C (en) | 1999-10-21 | 2008-01-23 | 泰思诺拉斯眼科系统公司 | Iris recognition and tracking for optical treatment |
WO2001035349A1 (en) | 1999-11-09 | 2001-05-17 | Iridian Technologies, Inc. | System and method of biometric authentication of electronic signatures using iris patterns |
US6505193B1 (en) | 1999-12-01 | 2003-01-07 | Iridian Technologies, Inc. | System and method of fast biometric database searching using digital certificates |
US6654483B1 (en) | 1999-12-22 | 2003-11-25 | Intel Corporation | Motion detection using normal optical flow |
US6494363B1 (en) * | 2000-01-13 | 2002-12-17 | Ncr Corporation | Self-service terminal |
US7565671B1 (en) * | 2000-02-01 | 2009-07-21 | Swisscom Mobile Ag | System and method for diffusing image objects |
DE60119418T2 (en) * | 2000-03-22 | 2007-05-24 | Kabushiki Kaisha Toshiba, Kawasaki | Face-capturing recognition device and passport verification device |
US6540392B1 (en) | 2000-03-31 | 2003-04-01 | Sensar, Inc. | Micro-illuminator for use with image recognition system |
US7044602B2 (en) | 2002-05-30 | 2006-05-16 | Visx, Incorporated | Methods and systems for tracking a torsional orientation and position of an eye |
US7587368B2 (en) | 2000-07-06 | 2009-09-08 | David Paul Felsher | Information record infrastructure, system and method |
JP2002101322A (en) * | 2000-07-10 | 2002-04-05 | Matsushita Electric Ind Co Ltd | Iris camera module |
JP3401502B2 (en) | 2000-07-13 | 2003-04-28 | 松下電器産業株式会社 | Eye imaging device |
US7277561B2 (en) * | 2000-10-07 | 2007-10-02 | Qritek Co., Ltd. | Iris identification |
DE10052201B8 (en) * | 2000-10-20 | 2005-06-30 | Carl Zeiss Meditec Ag | Method and device for identifying a patient and an operating area |
US6453057B1 (en) | 2000-11-02 | 2002-09-17 | Retinal Technologies, L.L.C. | Method for generating a unique consistent signal pattern for identification of an individual |
US7224822B2 (en) * | 2000-11-02 | 2007-05-29 | Retinal Technologies, L.L.C. | System for capturing an image of the retina for identification |
US6697504B2 (en) * | 2000-12-15 | 2004-02-24 | Institute For Information Industry | Method of multi-level facial image recognition and system using the same |
KR100620628B1 (en) * | 2000-12-30 | 2006-09-13 | 주식회사 비즈모델라인 | System for recognizing and verifying the Iris by using wireless telecommunication devices |
KR100620627B1 (en) * | 2000-12-30 | 2006-09-13 | 주식회사 비즈모델라인 | System for Security and Control of Wireless Telecommunication Devices by using the Iris-Recognition System |
US6961599B2 (en) * | 2001-01-09 | 2005-11-01 | Childrens Hospital Los Angeles | Identifying or measuring selected substances or toxins in a subject using resonant raman signals |
US20020091937A1 (en) * | 2001-01-10 | 2002-07-11 | Ortiz Luis M. | Random biometric authentication methods and systems |
US7921297B2 (en) * | 2001-01-10 | 2011-04-05 | Luis Melisendro Ortiz | Random biometric authentication utilizing unique biometric signatures |
US8462994B2 (en) * | 2001-01-10 | 2013-06-11 | Random Biometrics, Llc | Methods and systems for providing enhanced security over, while also facilitating access through, secured points of entry |
KR100374708B1 (en) * | 2001-03-06 | 2003-03-04 | 에버미디어 주식회사 | Non-contact type human iris recognition method by correction of rotated iris image |
KR100374707B1 (en) * | 2001-03-06 | 2003-03-04 | 에버미디어 주식회사 | Method of recognizing human iris using daubechies wavelet transform |
US20020162031A1 (en) * | 2001-03-08 | 2002-10-31 | Shmuel Levin | Method and apparatus for automatic control of access |
US7095901B2 (en) * | 2001-03-15 | 2006-08-22 | Lg Electronics, Inc. | Apparatus and method for adjusting focus position in iris recognition system |
US7181017B1 (en) | 2001-03-23 | 2007-02-20 | David Felsher | System and method for secure three-party communications |
JP2002330318A (en) * | 2001-04-27 | 2002-11-15 | Matsushita Electric Ind Co Ltd | Mobile terminal |
US20040193893A1 (en) * | 2001-05-18 | 2004-09-30 | Michael Braithwaite | Application-specific biometric templates |
US6937135B2 (en) * | 2001-05-30 | 2005-08-30 | Hewlett-Packard Development Company, L.P. | Face and environment sensing watch |
CA2359269A1 (en) * | 2001-10-17 | 2003-04-17 | Biodentity Systems Corporation | Face imaging system for recordal and automated identity confirmation |
US20030076981A1 (en) * | 2001-10-18 | 2003-04-24 | Smith Gregory Hugh | Method for operating a pre-crash sensing system in a vehicle having a counter-measure system |
FR2831351A1 (en) * | 2001-10-19 | 2003-04-25 | St Microelectronics Sa | Bandpass filter for detecting texture of iris, has passband which is real, bi-dimensional, oriented along phase axis and resulting from product of pair of identical one-dimensional Hamming windows having specified transfer function |
JP4172930B2 (en) | 2001-10-31 | 2008-10-29 | 松下電器産業株式会社 | Eye imaging device and entrance / exit management system |
US7226166B2 (en) * | 2001-11-13 | 2007-06-05 | Philadelphia Retina Endowment Fund | Optimizing the properties of electromagnetic energy in a medium using stochastic parallel perturbation gradient descent optimization adaptive optics |
US20040165147A1 (en) * | 2001-11-13 | 2004-08-26 | Della Vecchia Michael A. | Determining iris biometric and spatial orientation of an iris in accordance with same |
US7775665B2 (en) * | 2001-11-13 | 2010-08-17 | Dellavecchia Michael A | Method for optically scanning objects |
US6775605B2 (en) | 2001-11-29 | 2004-08-10 | Ford Global Technologies, Llc | Remote sensing based pre-crash threat assessment system |
US6819991B2 (en) * | 2001-11-29 | 2004-11-16 | Ford Global Technologies, Llc | Vehicle sensing based pre-crash threat assessment system |
US7158870B2 (en) * | 2002-01-24 | 2007-01-02 | Ford Global Technologies, Llc | Post collision restraints control module |
US6665426B1 (en) * | 2002-01-29 | 2003-12-16 | West Virginia University Research Corporation | Method of biometric identification of an individual and associated apparatus |
US6831572B2 (en) | 2002-01-29 | 2004-12-14 | Ford Global Technologies, Llc | Rear collision warning system |
US6519519B1 (en) | 2002-02-01 | 2003-02-11 | Ford Global Technologies, Inc. | Passive countermeasure methods |
US6721659B2 (en) | 2002-02-01 | 2004-04-13 | Ford Global Technologies, Llc | Collision warning and safety countermeasure system |
US6498972B1 (en) | 2002-02-13 | 2002-12-24 | Ford Global Technologies, Inc. | Method for operating a pre-crash sensing system in a vehicle having a countermeasure system |
US7009500B2 (en) | 2002-02-13 | 2006-03-07 | Ford Global Technologies, Llc | Method for operating a pre-crash sensing system in a vehicle having a countermeasure system using stereo cameras |
US7088872B1 (en) * | 2002-02-14 | 2006-08-08 | Cogent Systems, Inc. | Method and apparatus for two dimensional image processing |
KR100842501B1 (en) | 2002-02-21 | 2008-07-01 | 엘지전자 주식회사 | Indicated device of iris location for iris recognition system |
KR20030077126A (en) * | 2002-03-25 | 2003-10-01 | 엘지전자 주식회사 | System for removing of optical reflection in iris recognition of pc and the method |
JP4148700B2 (en) * | 2002-05-30 | 2008-09-10 | 松下電器産業株式会社 | Eye imaging device |
KR100885576B1 (en) * | 2002-06-07 | 2009-02-24 | 엘지전자 주식회사 | System of multiple focus using of slit in iris recognition |
US7850526B2 (en) * | 2002-07-27 | 2010-12-14 | Sony Computer Entertainment America Inc. | System for tracking user manipulations within an environment |
US9818136B1 (en) | 2003-02-05 | 2017-11-14 | Steven M. Hoffberg | System and method for determining contingent relevance |
US7436986B2 (en) * | 2003-03-25 | 2008-10-14 | Bausch & Lomb Incorporated | Positive patient identification |
JP2004295572A (en) * | 2003-03-27 | 2004-10-21 | Matsushita Electric Ind Co Ltd | Imaging apparatus of certification object image and imaging method therefor |
US7751594B2 (en) | 2003-04-04 | 2010-07-06 | Lumidigm, Inc. | White-light spectral biometric sensors |
US7460696B2 (en) | 2004-06-01 | 2008-12-02 | Lumidigm, Inc. | Multispectral imaging biometrics |
JP4347599B2 (en) * | 2003-04-10 | 2009-10-21 | オリンパス株式会社 | Personal authentication device |
JP4584912B2 (en) * | 2003-04-11 | 2010-11-24 | ボシュ・アンド・ロム・インコーポレイテッド | System and method for eye data acquisition and alignment and tracking |
US8571902B1 (en) | 2003-04-18 | 2013-10-29 | Unisys Corporation | Remote biometric verification |
US7458683B2 (en) * | 2003-06-16 | 2008-12-02 | Amo Manufacturing Usa, Llc | Methods and devices for registering optical measurement datasets of an optical system |
US7536036B2 (en) | 2004-10-28 | 2009-05-19 | Fotonation Vision Limited | Method and apparatus for red-eye detection in an acquired digital image |
US7636486B2 (en) | 2004-11-10 | 2009-12-22 | Fotonation Ireland Ltd. | Method of determining PSF using multiple instances of a nominally similar scene |
US8417055B2 (en) * | 2007-03-05 | 2013-04-09 | DigitalOptics Corporation Europe Limited | Image processing method and apparatus |
US8170294B2 (en) | 2006-11-10 | 2012-05-01 | DigitalOptics Corporation Europe Limited | Method of detecting redeye in a digital image |
US7792335B2 (en) * | 2006-02-24 | 2010-09-07 | Fotonation Vision Limited | Method and apparatus for selective disqualification of digital images |
US7587085B2 (en) * | 2004-10-28 | 2009-09-08 | Fotonation Vision Limited | Method and apparatus for red-eye detection in an acquired digital image |
US8264576B2 (en) | 2007-03-05 | 2012-09-11 | DigitalOptics Corporation Europe Limited | RGBW sensor array |
US7970182B2 (en) | 2005-11-18 | 2011-06-28 | Tessera Technologies Ireland Limited | Two stage detection for photographic eye artifacts |
US8199222B2 (en) * | 2007-03-05 | 2012-06-12 | DigitalOptics Corporation Europe Limited | Low-light video frame enhancement |
US9160897B2 (en) * | 2007-06-14 | 2015-10-13 | Fotonation Limited | Fast motion estimation method |
US8036458B2 (en) | 2007-11-08 | 2011-10-11 | DigitalOptics Corporation Europe Limited | Detecting redeye defects in digital images |
US7574016B2 (en) | 2003-06-26 | 2009-08-11 | Fotonation Vision Limited | Digital image processing using face detection information |
US7920723B2 (en) | 2005-11-18 | 2011-04-05 | Tessera Technologies Ireland Limited | Two stage detection for photographic eye artifacts |
US7689009B2 (en) | 2005-11-18 | 2010-03-30 | Fotonation Vision Ltd. | Two stage detection for photographic eye artifacts |
US7639889B2 (en) * | 2004-11-10 | 2009-12-29 | Fotonation Ireland Ltd. | Method of notifying users regarding motion artifacts based on image analysis |
US7792970B2 (en) | 2005-06-17 | 2010-09-07 | Fotonation Vision Limited | Method for establishing a paired connection between media devices |
US8254674B2 (en) | 2004-10-28 | 2012-08-28 | DigitalOptics Corporation Europe Limited | Analyzing partial face regions for red-eye detection in acquired digital images |
US8180173B2 (en) | 2007-09-21 | 2012-05-15 | DigitalOptics Corporation Europe Limited | Flash artifact eye defect correction in blurred images using anisotropic blurring |
US8989516B2 (en) * | 2007-09-18 | 2015-03-24 | Fotonation Limited | Image processing method and apparatus |
KR20030066512A (en) * | 2003-07-04 | 2003-08-09 | 김재민 | Iris Recognition System Robust to noises |
WO2005006732A1 (en) * | 2003-07-11 | 2005-01-20 | Yoshiaki Takida | Next-generation facsimile machine of internet terminal type |
WO2005008567A1 (en) * | 2003-07-18 | 2005-01-27 | Yonsei University | Apparatus and method for iris recognition from all direction of view |
US20050024516A1 (en) * | 2003-07-31 | 2005-02-03 | Robert Fish | Digital camera |
US20050031224A1 (en) * | 2003-08-05 | 2005-02-10 | Yury Prilutsky | Detecting red eye filter and apparatus using meta-data |
US9412007B2 (en) | 2003-08-05 | 2016-08-09 | Fotonation Limited | Partial face detector red-eye filter method and apparatus |
US20050140801A1 (en) * | 2003-08-05 | 2005-06-30 | Yury Prilutsky | Optimized performance and performance for red-eye filter method and apparatus |
US8520093B2 (en) | 2003-08-05 | 2013-08-27 | DigitalOptics Corporation Europe Limited | Face tracker and partial face tracker for red-eye filter method and apparatus |
WO2005024698A2 (en) * | 2003-09-04 | 2005-03-17 | Sarnoff Corporation | Method and apparatus for performing iris recognition from an image |
US8090157B2 (en) | 2005-01-26 | 2012-01-03 | Honeywell International Inc. | Approaches and apparatus for eye detection in a digital image |
US8442276B2 (en) | 2006-03-03 | 2013-05-14 | Honeywell International Inc. | Invariant radial iris segmentation |
US8085993B2 (en) | 2006-03-03 | 2011-12-27 | Honeywell International Inc. | Modular biometrics collection system architecture |
US8064647B2 (en) * | 2006-03-03 | 2011-11-22 | Honeywell International Inc. | System for iris detection tracking and recognition at a distance |
US7593550B2 (en) | 2005-01-26 | 2009-09-22 | Honeywell International Inc. | Distance iris recognition |
US7756301B2 (en) * | 2005-01-26 | 2010-07-13 | Honeywell International Inc. | Iris recognition system and method |
US8049812B2 (en) | 2006-03-03 | 2011-11-01 | Honeywell International Inc. | Camera with auto focus capability |
US8098901B2 (en) | 2005-01-26 | 2012-01-17 | Honeywell International Inc. | Standoff iris recognition system |
US8705808B2 (en) | 2003-09-05 | 2014-04-22 | Honeywell International Inc. | Combined face and iris recognition system |
JP3945474B2 (en) * | 2003-11-28 | 2007-07-18 | 松下電器産業株式会社 | Eye image input device, authentication device, and image processing method |
FR2864290B1 (en) * | 2003-12-18 | 2006-05-26 | Sagem | METHOD AND DEVICE FOR RECOGNIZING IRIS |
CN100342390C (en) * | 2004-04-16 | 2007-10-10 | 中国科学院自动化研究所 | Identity identifying method based on iris plaque |
FR2870948B1 (en) * | 2004-05-25 | 2006-09-01 | Sagem | DEVICE FOR POSITIONING A USER BY DISPLAYING ITS MIRROR IMAGE, IMAGE CAPTURE DEVICE AND CORRESPONDING POSITIONING METHOD |
US8229185B2 (en) | 2004-06-01 | 2012-07-24 | Lumidigm, Inc. | Hygienic biometric sensors |
GB0412175D0 (en) * | 2004-06-01 | 2004-06-30 | Smart Sensors Ltd | Identification of image characteristics |
CN1299231C (en) * | 2004-06-11 | 2007-02-07 | 清华大学 | Living body iris patterns collecting method and collector |
JP4610614B2 (en) * | 2004-06-21 | 2011-01-12 | グーグル インク. | Multi-biometric system and method based on a single image |
US8787630B2 (en) | 2004-08-11 | 2014-07-22 | Lumidigm, Inc. | Multispectral barcode imaging |
KR100601963B1 (en) * | 2004-08-23 | 2006-07-14 | 삼성전자주식회사 | Authentication apparatus and method using eye gaze |
US7248720B2 (en) * | 2004-10-21 | 2007-07-24 | Retica Systems, Inc. | Method and system for generating a combined retina/iris pattern biometric |
US7167736B2 (en) * | 2004-11-04 | 2007-01-23 | Q Step Technologies, Inc. | Non-invasive measurement system and method for measuring the concentration of an optically-active substance |
US7639888B2 (en) * | 2004-11-10 | 2009-12-29 | Fotonation Ireland Ltd. | Method and apparatus for initiating subsequent exposures based on determination of motion blurring artifacts |
US7616788B2 (en) * | 2004-11-12 | 2009-11-10 | Cogent Systems, Inc. | System and method for fast biometric pattern matching |
US20060123240A1 (en) * | 2004-12-08 | 2006-06-08 | Alison Chaiken | Secure biometric authentication system and method of implementation thereof |
US20110102553A1 (en) * | 2007-02-28 | 2011-05-05 | Tessera Technologies Ireland Limited | Enhanced real-time face models from stereo imaging |
US8488023B2 (en) | 2009-05-20 | 2013-07-16 | DigitalOptics Corporation Europe Limited | Identifying facial expressions in acquired digital images |
US20060147095A1 (en) * | 2005-01-03 | 2006-07-06 | Usher David B | Method and system for automatically capturing an image of a retina |
US7809171B2 (en) * | 2005-01-10 | 2010-10-05 | Battelle Memorial Institute | Facial feature evaluation based on eye location |
WO2006081505A1 (en) * | 2005-01-26 | 2006-08-03 | Honeywell International Inc. | A distance iris recognition system |
JP4767971B2 (en) * | 2005-01-26 | 2011-09-07 | ハネウェル・インターナショナル・インコーポレーテッド | Distance iris recognition system |
WO2006091869A2 (en) * | 2005-02-25 | 2006-08-31 | Youfinder Intellectual Property Licensing Limited Liability Company | Automated indexing for distributing event photography |
US7151332B2 (en) * | 2005-04-27 | 2006-12-19 | Stephen Kundel | Motor having reciprocating and rotating permanent magnets |
ES2399030T3 (en) | 2005-07-18 | 2013-03-25 | Hysterical Sunset Limited | Automated indexing manually assisted image using facial recognition |
US8874477B2 (en) | 2005-10-04 | 2014-10-28 | Steven Mark Hoffberg | Multifactorial optimization system and method |
US7801335B2 (en) * | 2005-11-11 | 2010-09-21 | Global Rainmakers Inc. | Apparatus and methods for detecting the presence of a human eye |
US8260008B2 (en) | 2005-11-11 | 2012-09-04 | Eyelock, Inc. | Methods for performing biometric recognition of a human eye and corroboration of same |
US8131477B2 (en) * | 2005-11-16 | 2012-03-06 | 3M Cogent, Inc. | Method and device for image-based biological data quantification |
US7599577B2 (en) | 2005-11-18 | 2009-10-06 | Fotonation Vision Limited | Method and apparatus of correcting hybrid flash artifacts in digital images |
US7583823B2 (en) * | 2006-01-11 | 2009-09-01 | Mitsubishi Electric Research Laboratories, Inc. | Method for localizing irises in images using gradients and textures |
JP4643715B2 (en) | 2006-02-14 | 2011-03-02 | テセラ テクノロジーズ アイルランド リミテッド | Automatic detection and correction of defects caused by non-red eye flash |
GB0603411D0 (en) * | 2006-02-21 | 2006-03-29 | Xvista Ltd | Method of processing an image of an eye |
US7804983B2 (en) * | 2006-02-24 | 2010-09-28 | Fotonation Vision Limited | Digital image acquisition control and correction method and apparatus |
US7844084B2 (en) * | 2006-02-27 | 2010-11-30 | Donald Martin Monro | Rotation compensated iris comparison |
US20070206372A1 (en) * | 2006-03-02 | 2007-09-06 | Casillas Robert J | Illuminated container |
EP1991947B1 (en) | 2006-03-03 | 2020-04-29 | Gentex Corporation | Indexing and database search system |
KR101299074B1 (en) | 2006-03-03 | 2013-08-30 | 허니웰 인터내셔널 인코포레이티드 | Iris encoding system |
AU2007220010B2 (en) | 2006-03-03 | 2011-02-17 | Gentex Corporation | Single lens splitter camera |
US8364646B2 (en) * | 2006-03-03 | 2013-01-29 | Eyelock, Inc. | Scalable searching of biometric databases using dynamic selection of data subsets |
DE602007007062D1 (en) | 2006-03-03 | 2010-07-22 | Honeywell Int Inc | IRISER IDENTIFICATION SYSTEM WITH IMAGE QUALITY METERING |
WO2008039252A2 (en) * | 2006-05-15 | 2008-04-03 | Retica Systems, Inc. | Multimodal ocular biometric system |
IES20070229A2 (en) * | 2006-06-05 | 2007-10-03 | Fotonation Vision Ltd | Image acquisition method and apparatus |
EP2033142B1 (en) | 2006-06-12 | 2011-01-26 | Tessera Technologies Ireland Limited | Advances in extending the aam techniques from grayscale to color images |
US8604901B2 (en) | 2006-06-27 | 2013-12-10 | Eyelock, Inc. | Ensuring the provenance of passengers at a transportation facility |
US8175346B2 (en) | 2006-07-19 | 2012-05-08 | Lumidigm, Inc. | Whole-hand multispectral biometric imaging |
US8355545B2 (en) | 2007-04-10 | 2013-01-15 | Lumidigm, Inc. | Biometric detection using spatial, temporal, and/or spectral techniques |
US7995808B2 (en) | 2006-07-19 | 2011-08-09 | Lumidigm, Inc. | Contactless multispectral biometric capture |
US7801339B2 (en) * | 2006-07-31 | 2010-09-21 | Lumidigm, Inc. | Biometrics with spatiospectral spoof detection |
US7804984B2 (en) * | 2006-07-31 | 2010-09-28 | Lumidigm, Inc. | Spatial-spectral fingerprint spoof detection |
JP2009545822A (en) * | 2006-07-31 | 2009-12-24 | ルミダイム インコーポレイテッド | Spatial-spectral fingerprint spoof detection |
WO2008033784A2 (en) * | 2006-09-15 | 2008-03-20 | Retica Systems, Inc. | Long distance multimodal biometric system and method |
US8170293B2 (en) * | 2006-09-15 | 2012-05-01 | Identix Incorporated | Multimodal ocular biometric system and methods |
US8121356B2 (en) | 2006-09-15 | 2012-02-21 | Identix Incorporated | Long distance multimodal biometric system and method |
WO2008036897A1 (en) | 2006-09-22 | 2008-03-27 | Global Rainmakers, Inc. | Compact biometric acquisition system and method |
US7970179B2 (en) * | 2006-09-25 | 2011-06-28 | Identix Incorporated | Iris data extraction |
US8280120B2 (en) | 2006-10-02 | 2012-10-02 | Eyelock Inc. | Fraud resistant biometric financial transaction system and method |
US9846739B2 (en) | 2006-10-23 | 2017-12-19 | Fotonation Limited | Fast database matching |
US7809747B2 (en) * | 2006-10-23 | 2010-10-05 | Donald Martin Monro | Fuzzy database matching |
US8055067B2 (en) | 2007-01-18 | 2011-11-08 | DigitalOptics Corporation Europe Limited | Color segmentation |
EP2115662B1 (en) | 2007-02-28 | 2010-06-23 | Fotonation Vision Limited | Separating directional lighting variability in statistical face modelling based on texture space decomposition |
JP2010520567A (en) | 2007-03-05 | 2010-06-10 | フォトネーション ビジョン リミテッド | Red-eye false detection filtering using face position and orientation |
JP4309927B2 (en) * | 2007-03-14 | 2009-08-05 | 株式会社豊田中央研究所 | Eyelid detection device and program |
US8285010B2 (en) | 2007-03-21 | 2012-10-09 | Lumidigm, Inc. | Biometrics based on locally consistent features |
US7773118B2 (en) * | 2007-03-25 | 2010-08-10 | Fotonation Vision Limited | Handheld article with movement discrimination |
WO2008131201A1 (en) | 2007-04-19 | 2008-10-30 | Global Rainmakers, Inc. | Method and system for biometric recognition |
US8953849B2 (en) | 2007-04-19 | 2015-02-10 | Eyelock, Inc. | Method and system for biometric recognition |
US8063889B2 (en) | 2007-04-25 | 2011-11-22 | Honeywell International Inc. | Biometric data collection system |
US8275179B2 (en) * | 2007-05-01 | 2012-09-25 | 3M Cogent, Inc. | Apparatus for capturing a high quality image of a moist finger |
US20120239458A9 (en) * | 2007-05-18 | 2012-09-20 | Global Rainmakers, Inc. | Measuring Effectiveness of Advertisements and Linking Certain Consumer Activities Including Purchases to Other Activities of the Consumer |
US8411916B2 (en) * | 2007-06-11 | 2013-04-02 | 3M Cogent, Inc. | Bio-reader device with ticket identification |
US20090060348A1 (en) * | 2007-08-28 | 2009-03-05 | Donald Martin Monro | Determination of Image Similarity |
US9002073B2 (en) | 2007-09-01 | 2015-04-07 | Eyelock, Inc. | Mobile identity platform |
US9036871B2 (en) | 2007-09-01 | 2015-05-19 | Eyelock, Inc. | Mobility identity platform |
US8212870B2 (en) | 2007-09-01 | 2012-07-03 | Hanna Keith J | Mirror system and method for acquiring biometric data |
WO2009029765A1 (en) | 2007-09-01 | 2009-03-05 | Global Rainmakers, Inc. | Mirror system and method for acquiring biometric data |
US9117119B2 (en) | 2007-09-01 | 2015-08-25 | Eyelock, Inc. | Mobile identity platform |
KR100936880B1 (en) * | 2007-09-07 | 2010-01-14 | 아이리텍 잉크 | An Iris Image Storing Method and An Iris Image Restored Method |
US7824034B2 (en) * | 2007-09-19 | 2010-11-02 | Utc Fire & Security Americas Corporation, Inc. | Iris imaging system and method for the same |
US8503818B2 (en) | 2007-09-25 | 2013-08-06 | DigitalOptics Corporation Europe Limited | Eye defect detection in international standards organization images |
US20090252382A1 (en) * | 2007-12-06 | 2009-10-08 | University Of Notre Dame Du Lac | Segmentation of iris images using active contour processing |
US8750578B2 (en) | 2008-01-29 | 2014-06-10 | DigitalOptics Corporation Europe Limited | Detecting facial expressions in digital images |
US8212864B2 (en) | 2008-01-30 | 2012-07-03 | DigitalOptics Corporation Europe Limited | Methods and apparatuses for using image acquisition data to detect and correct image defects |
US8189879B2 (en) * | 2008-02-14 | 2012-05-29 | Iristrac, Llc | System and method for animal identification using IRIS images |
JP2009222853A (en) * | 2008-03-14 | 2009-10-01 | Fujitsu Ltd | Polarizing optical element and manufacturing method therefor |
US8436907B2 (en) | 2008-05-09 | 2013-05-07 | Honeywell International Inc. | Heterogeneous video capturing system |
US10380603B2 (en) * | 2008-05-31 | 2019-08-13 | International Business Machines Corporation | Assessing personality and mood characteristics of a customer to enhance customer satisfaction and improve chances of a sale |
US8005264B2 (en) * | 2008-06-09 | 2011-08-23 | Arcsoft, Inc. | Method of automatically detecting and tracking successive frames in a region of interesting by an electronic imaging device |
DE602008003019D1 (en) | 2008-06-25 | 2010-11-25 | Deutsche Telekom Ag | System for extraction, identification and verification of iris features based on directionlets |
WO2009158662A2 (en) | 2008-06-26 | 2009-12-30 | Global Rainmakers, Inc. | Method of reducing visibility of illumination while acquiring high quality imagery |
US20100014755A1 (en) * | 2008-07-21 | 2010-01-21 | Charles Lee Wilson | System and method for grid-based image segmentation and matching |
US8644565B2 (en) * | 2008-07-23 | 2014-02-04 | Indiana University Research And Technology Corp. | System and method for non-cooperative iris image acquisition |
US8213782B2 (en) | 2008-08-07 | 2012-07-03 | Honeywell International Inc. | Predictive autofocusing system |
US8090246B2 (en) | 2008-08-08 | 2012-01-03 | Honeywell International Inc. | Image acquisition system |
US8081254B2 (en) | 2008-08-14 | 2011-12-20 | DigitalOptics Corporation Europe Limited | In-camera based method of detecting defect eye with high accuracy |
US8306279B2 (en) * | 2008-09-15 | 2012-11-06 | Eyelock, Inc. | Operator interface for face and iris recognition devices |
US20100278394A1 (en) * | 2008-10-29 | 2010-11-04 | Raguin Daniel H | Apparatus for Iris Capture |
US8317325B2 (en) | 2008-10-31 | 2012-11-27 | Cross Match Technologies, Inc. | Apparatus and method for two eye imaging for iris identification |
US8280119B2 (en) | 2008-12-05 | 2012-10-02 | Honeywell International Inc. | Iris recognition system using quality metrics |
US20100232654A1 (en) * | 2009-03-11 | 2010-09-16 | Harris Corporation | Method for reconstructing iris scans through novel inpainting techniques and mosaicing of partial collections |
US20100232659A1 (en) * | 2009-03-12 | 2010-09-16 | Harris Corporation | Method for fingerprint template synthesis and fingerprint mosaicing using a point matching algorithm |
US8195044B2 (en) | 2009-03-30 | 2012-06-05 | Eyelock Inc. | Biometric camera mount system |
JP5456159B2 (en) * | 2009-05-29 | 2014-03-26 | デジタルオプティックス・コーポレイション・ヨーロッパ・リミテッド | Method and apparatus for separating the top of the foreground from the background |
US8472681B2 (en) | 2009-06-15 | 2013-06-25 | Honeywell International Inc. | Iris and ocular recognition system using trace transforms |
US8630464B2 (en) | 2009-06-15 | 2014-01-14 | Honeywell International Inc. | Adaptive iris matching using database indexing |
US8306288B2 (en) * | 2009-08-19 | 2012-11-06 | Harris Corporation | Automatic identification of fingerprint inpainting target areas |
US20110044513A1 (en) * | 2009-08-19 | 2011-02-24 | Harris Corporation | Method for n-wise registration and mosaicing of partial prints |
US8872908B2 (en) | 2009-08-26 | 2014-10-28 | Lumidigm, Inc | Dual-imager biometric sensor |
US8392268B2 (en) | 2009-09-02 | 2013-03-05 | Image Holdings | Method and system of displaying, managing and selling images in an event photography environment |
WO2011027227A1 (en) | 2009-09-02 | 2011-03-10 | Image Holdings | Method and system for displaying, managing and selling digital images |
US20110119141A1 (en) * | 2009-11-16 | 2011-05-19 | Hoyos Corporation | Siccolla Identity Verification Architecture and Tool |
US9134534B2 (en) | 2010-02-28 | 2015-09-15 | Microsoft Technology Licensing, Llc | See-through near-eye display glasses including a modular image source |
US8488246B2 (en) | 2010-02-28 | 2013-07-16 | Osterhout Group, Inc. | See-through near-eye display glasses including a curved polarizing film in the image source, a partially reflective, partially transmitting optical element and an optically flat film |
US9129295B2 (en) | 2010-02-28 | 2015-09-08 | Microsoft Technology Licensing, Llc | See-through near-eye display glasses with a fast response photochromic film system for quick transition from dark to clear |
US9759917B2 (en) | 2010-02-28 | 2017-09-12 | Microsoft Technology Licensing, Llc | AR glasses with event and sensor triggered AR eyepiece interface to external devices |
US8472120B2 (en) | 2010-02-28 | 2013-06-25 | Osterhout Group, Inc. | See-through near-eye display glasses with a small scale image source |
US9182596B2 (en) | 2010-02-28 | 2015-11-10 | Microsoft Technology Licensing, Llc | See-through near-eye display glasses with the optical assembly including absorptive polarizers or anti-reflective coatings to reduce stray light |
US10180572B2 (en) | 2010-02-28 | 2019-01-15 | Microsoft Technology Licensing, Llc | AR glasses with event and user action control of external applications |
US9223134B2 (en) | 2010-02-28 | 2015-12-29 | Microsoft Technology Licensing, Llc | Optical imperfections in a light transmissive illumination system for see-through near-eye display glasses |
US8482859B2 (en) | 2010-02-28 | 2013-07-09 | Osterhout Group, Inc. | See-through near-eye display glasses wherein image light is transmitted to and reflected from an optically flat film |
US9366862B2 (en) | 2010-02-28 | 2016-06-14 | Microsoft Technology Licensing, Llc | System and method for delivering content to a group of see-through near eye display eyepieces |
US8477425B2 (en) | 2010-02-28 | 2013-07-02 | Osterhout Group, Inc. | See-through near-eye display glasses including a partially reflective, partially transmitting optical element |
US9097891B2 (en) | 2010-02-28 | 2015-08-04 | Microsoft Technology Licensing, Llc | See-through near-eye display glasses including an auto-brightness control for the display brightness based on the brightness in the environment |
US9229227B2 (en) | 2010-02-28 | 2016-01-05 | Microsoft Technology Licensing, Llc | See-through near-eye display glasses with a light transmissive wedge shaped illumination system |
US20120249797A1 (en) | 2010-02-28 | 2012-10-04 | Osterhout Group, Inc. | Head-worn adaptive display |
WO2011106798A1 (en) * | 2010-02-28 | 2011-09-01 | Osterhout Group, Inc. | Local advertising content on an interactive head-mounted eyepiece |
US9341843B2 (en) | 2010-02-28 | 2016-05-17 | Microsoft Technology Licensing, Llc | See-through near-eye display glasses with a small scale image source |
US8467133B2 (en) | 2010-02-28 | 2013-06-18 | Osterhout Group, Inc. | See-through display with an optical assembly including a wedge-shaped illumination system |
US9097890B2 (en) | 2010-02-28 | 2015-08-04 | Microsoft Technology Licensing, Llc | Grating in a light transmissive illumination system for see-through near-eye display glasses |
US20150309316A1 (en) | 2011-04-06 | 2015-10-29 | Microsoft Technology Licensing, Llc | Ar glasses with predictive control of external device based on event input |
US9091851B2 (en) | 2010-02-28 | 2015-07-28 | Microsoft Technology Licensing, Llc | Light control in head mounted displays |
US9285589B2 (en) | 2010-02-28 | 2016-03-15 | Microsoft Technology Licensing, Llc | AR glasses with event and sensor triggered control of AR eyepiece applications |
US9128281B2 (en) | 2010-09-14 | 2015-09-08 | Microsoft Technology Licensing, Llc | Eyepiece with uniformly illuminated reflective display |
US8570149B2 (en) | 2010-03-16 | 2013-10-29 | Lumidigm, Inc. | Biometric imaging using an optical adaptive interface |
US8577094B2 (en) | 2010-04-09 | 2013-11-05 | Donald Martin Monro | Image template masking |
US10922567B2 (en) | 2010-06-07 | 2021-02-16 | Affectiva, Inc. | Cognitive state based vehicle manipulation using near-infrared image processing |
US11430260B2 (en) | 2010-06-07 | 2022-08-30 | Affectiva, Inc. | Electronic display viewing verification |
US10799168B2 (en) | 2010-06-07 | 2020-10-13 | Affectiva, Inc. | Individual data sharing across a social network |
US11073899B2 (en) | 2010-06-07 | 2021-07-27 | Affectiva, Inc. | Multidevice multimodal emotion services monitoring |
US11393133B2 (en) | 2010-06-07 | 2022-07-19 | Affectiva, Inc. | Emoji manipulation using machine learning |
US11465640B2 (en) | 2010-06-07 | 2022-10-11 | Affectiva, Inc. | Directed control transfer for autonomous vehicles |
US10843078B2 (en) | 2010-06-07 | 2020-11-24 | Affectiva, Inc. | Affect usage within a gaming context |
US9723992B2 (en) | 2010-06-07 | 2017-08-08 | Affectiva, Inc. | Mental state analysis using blink rate |
US10779761B2 (en) | 2010-06-07 | 2020-09-22 | Affectiva, Inc. | Sporadic collection of affect data within a vehicle |
US11704574B2 (en) | 2010-06-07 | 2023-07-18 | Affectiva, Inc. | Multimodal machine learning for vehicle manipulation |
US9204836B2 (en) | 2010-06-07 | 2015-12-08 | Affectiva, Inc. | Sporadic collection of mobile affect data |
US10143414B2 (en) | 2010-06-07 | 2018-12-04 | Affectiva, Inc. | Sporadic collection with mobile affect data |
US11318949B2 (en) | 2010-06-07 | 2022-05-03 | Affectiva, Inc. | In-vehicle drowsiness analysis using blink rate |
US10911829B2 (en) | 2010-06-07 | 2021-02-02 | Affectiva, Inc. | Vehicle video recommendation via affect |
US9959549B2 (en) | 2010-06-07 | 2018-05-01 | Affectiva, Inc. | Mental state analysis for norm generation |
US11657288B2 (en) | 2010-06-07 | 2023-05-23 | Affectiva, Inc. | Convolutional computing using multilayered analysis engine |
US11067405B2 (en) | 2010-06-07 | 2021-07-20 | Affectiva, Inc. | Cognitive state vehicle navigation based on image processing |
US9934425B2 (en) | 2010-06-07 | 2018-04-03 | Affectiva, Inc. | Collection of affect data from multiple mobile devices |
US10592757B2 (en) | 2010-06-07 | 2020-03-17 | Affectiva, Inc. | Vehicular cognitive data collection using multiple devices |
US9247903B2 (en) | 2010-06-07 | 2016-02-02 | Affectiva, Inc. | Using affect within a gaming context |
US11232290B2 (en) | 2010-06-07 | 2022-01-25 | Affectiva, Inc. | Image analysis using sub-sectional component evaluation to augment classifier usage |
US11410438B2 (en) | 2010-06-07 | 2022-08-09 | Affectiva, Inc. | Image analysis using a semiconductor processor for facial evaluation in vehicles |
US11430561B2 (en) | 2010-06-07 | 2022-08-30 | Affectiva, Inc. | Remote computing analysis for cognitive state data metrics |
US10796176B2 (en) | 2010-06-07 | 2020-10-06 | Affectiva, Inc. | Personal emotional profile generation for vehicle manipulation |
US10074024B2 (en) | 2010-06-07 | 2018-09-11 | Affectiva, Inc. | Mental state analysis using blink rate for vehicles |
US10482333B1 (en) | 2017-01-04 | 2019-11-19 | Affectiva, Inc. | Mental state analysis using blink rate within vehicles |
US9646046B2 (en) | 2010-06-07 | 2017-05-09 | Affectiva, Inc. | Mental state data tagging for data collected from multiple sources |
US9642536B2 (en) | 2010-06-07 | 2017-05-09 | Affectiva, Inc. | Mental state analysis using heart rate collection based on video imagery |
US10204625B2 (en) | 2010-06-07 | 2019-02-12 | Affectiva, Inc. | Audio analysis learning using video data |
US10517521B2 (en) | 2010-06-07 | 2019-12-31 | Affectiva, Inc. | Mental state mood analysis using heart rate collection based on video imagery |
US11484685B2 (en) | 2010-06-07 | 2022-11-01 | Affectiva, Inc. | Robotic control using profiles |
US11935281B2 (en) | 2010-06-07 | 2024-03-19 | Affectiva, Inc. | Vehicular in-cabin facial tracking using machine learning |
US11151610B2 (en) | 2010-06-07 | 2021-10-19 | Affectiva, Inc. | Autonomous vehicle control using heart rate collection based on video imagery |
US11511757B2 (en) | 2010-06-07 | 2022-11-29 | Affectiva, Inc. | Vehicle manipulation with crowdsourcing |
US10474875B2 (en) | 2010-06-07 | 2019-11-12 | Affectiva, Inc. | Image analysis using a semiconductor processor for facial evaluation |
US11587357B2 (en) | 2010-06-07 | 2023-02-21 | Affectiva, Inc. | Vehicular cognitive data collection with multiple devices |
US10897650B2 (en) | 2010-06-07 | 2021-01-19 | Affectiva, Inc. | Vehicle content recommendation using cognitive states |
US10401860B2 (en) | 2010-06-07 | 2019-09-03 | Affectiva, Inc. | Image analysis for two-sided data hub |
US11017250B2 (en) | 2010-06-07 | 2021-05-25 | Affectiva, Inc. | Vehicle manipulation using convolutional image processing |
US10108852B2 (en) * | 2010-06-07 | 2018-10-23 | Affectiva, Inc. | Facial analysis to detect asymmetric expressions |
US10614289B2 (en) | 2010-06-07 | 2020-04-07 | Affectiva, Inc. | Facial tracking with classifiers |
US11887352B2 (en) | 2010-06-07 | 2024-01-30 | Affectiva, Inc. | Live streaming analytics within a shared digital environment |
US10627817B2 (en) | 2010-06-07 | 2020-04-21 | Affectiva, Inc. | Vehicle manipulation using occupant image analysis |
US11292477B2 (en) | 2010-06-07 | 2022-04-05 | Affectiva, Inc. | Vehicle manipulation using cognitive state engineering |
US9503786B2 (en) | 2010-06-07 | 2016-11-22 | Affectiva, Inc. | Video recommendation using affect |
US11700420B2 (en) | 2010-06-07 | 2023-07-11 | Affectiva, Inc. | Media manipulation using cognitive state metric analysis |
US10869626B2 (en) | 2010-06-07 | 2020-12-22 | Affectiva, Inc. | Image analysis for emotional metric evaluation |
US10628741B2 (en) | 2010-06-07 | 2020-04-21 | Affectiva, Inc. | Multimodal machine learning for emotion metrics |
US11823055B2 (en) | 2019-03-31 | 2023-11-21 | Affectiva, Inc. | Vehicular in-cabin sensing using machine learning |
US10111611B2 (en) | 2010-06-07 | 2018-10-30 | Affectiva, Inc. | Personal emotional profile generation |
US10289898B2 (en) | 2010-06-07 | 2019-05-14 | Affectiva, Inc. | Video recommendation via affect |
US11056225B2 (en) | 2010-06-07 | 2021-07-06 | Affectiva, Inc. | Analytics for livestreaming based on image analysis within a shared digital environment |
US8971628B2 (en) | 2010-07-26 | 2015-03-03 | Fotonation Limited | Face detection using division-generated haar-like features for illumination invariance |
US8742887B2 (en) | 2010-09-03 | 2014-06-03 | Honeywell International Inc. | Biometric visitor check system |
US8254768B2 (en) * | 2010-12-22 | 2012-08-28 | Michael Braithwaite | System and method for illuminating and imaging the iris of a person |
US8831416B2 (en) * | 2010-12-22 | 2014-09-09 | Michael Braithwaite | System and method for illuminating and identifying a person |
US10043229B2 (en) | 2011-01-26 | 2018-08-07 | Eyelock Llc | Method for confirming the identity of an individual while shielding that individual's personal data |
RU2589859C2 (en) | 2011-02-17 | 2016-07-10 | Eyelock LLC | Efficient method and system for acquiring image data of a scene and an iris image using a single sensor |
US8836777B2 (en) | 2011-02-25 | 2014-09-16 | DigitalOptics Corporation Europe Limited | Automatic detection of vertical gaze using an embedded imaging device |
EP2678820A4 (en) | 2011-02-27 | 2014-12-03 | Affectiva Inc | Video recommendation based on affect |
CN102129685B (en) * | 2011-03-24 | 2012-08-29 | 杭州电子科技大学 | Method for detecting irregular circles based on Gaussian pyramid decomposition |
CN103797495A (en) | 2011-04-19 | 2014-05-14 | Eyelock Inc. | Biometric chain of provenance |
US8854446B2 (en) | 2011-04-28 | 2014-10-07 | Iristrac, Llc | Method of capturing image data for iris code based identification of vertebrates |
US8639058B2 (en) * | 2011-04-28 | 2014-01-28 | Sri International | Method of generating a normalized digital image of an iris of an eye |
US8755607B2 (en) | 2011-04-28 | 2014-06-17 | Sri International | Method of normalizing a digital image of an iris of an eye |
US8682073B2 (en) | 2011-04-28 | 2014-03-25 | Sri International | Method of pupil segmentation |
US9124798B2 (en) | 2011-05-17 | 2015-09-01 | Eyelock Inc. | Systems and methods for illuminating an iris with visible light for biometric acquisition |
US9400806B2 (en) | 2011-06-08 | 2016-07-26 | Hewlett-Packard Development Company, L.P. | Image triggered transactions |
EP2748768A4 (en) | 2011-08-22 | 2016-05-11 | Eyelock Llc | Systems and methods for capturing artifact free images |
US9373023B2 (en) * | 2012-02-22 | 2016-06-21 | Sri International | Method and apparatus for robustly collecting facial, ocular, and iris images using a single sensor |
US20130226655A1 (en) * | 2012-02-29 | 2013-08-29 | BVI Networks, Inc. | Method and system for statistical analysis of customer movement and integration with other data |
WO2013150549A2 (en) * | 2012-04-04 | 2013-10-10 | Saxena Sulakshna | System and method for locating blood vessels and analysing blood |
US8798332B2 (en) | 2012-05-15 | 2014-08-05 | Google Inc. | Contact lenses |
US9265458B2 (en) | 2012-12-04 | 2016-02-23 | Sync-Think, Inc. | Application of smooth pursuit cognitive testing paradigms to clinical drug development |
US9039179B2 (en) | 2012-12-11 | 2015-05-26 | Elwha Llc | Unobtrusive active eye interrogation |
US9101297B2 (en) | 2012-12-11 | 2015-08-11 | Elwha Llc | Time-based unobtrusive active eye interrogation |
US9039180B2 (en) | 2012-12-11 | 2015-05-26 | Elwha LLC | Self-aligning unobtrusive active eye interrogation |
CN105022880B (en) * | 2013-01-31 | 2017-02-22 | 贵阳科安科技有限公司 | System-level photoelectric optimization design method for iris imaging apparatus |
US9380976B2 (en) | 2013-03-11 | 2016-07-05 | Sync-Think, Inc. | Optical neuroinformatics |
US9495526B2 (en) | 2013-03-15 | 2016-11-15 | Eyelock Llc | Efficient prevention of fraud |
US20140327753A1 (en) * | 2013-05-06 | 2014-11-06 | Delta ID Inc. | Apparatus and method for positioning an iris for iris image capture |
US9582716B2 (en) * | 2013-09-09 | 2017-02-28 | Delta ID Inc. | Apparatuses and methods for iris based biometric recognition |
CN103530617B (en) * | 2013-10-22 | 2017-05-31 | 南京中科神光科技有限公司 | Iris alignment and acquisition device |
US10032075B2 (en) | 2013-12-23 | 2018-07-24 | Eyelock Llc | Methods and apparatus for power-efficient iris recognition |
WO2015103595A1 (en) | 2014-01-06 | 2015-07-09 | Eyelock, Inc. | Methods and apparatus for repetitive iris recognition |
EP3117265B1 (en) * | 2014-03-11 | 2021-09-29 | Verily Life Sciences LLC | Contact lenses |
CN106796655A (en) * | 2014-09-12 | 2017-05-31 | Eyelock LLC | Method and apparatus for guiding a user's line of sight in an iris authentication system |
US10425814B2 (en) | 2014-09-24 | 2019-09-24 | Princeton Identity, Inc. | Control of wireless communication device capability in a mobile device with a biometric key |
US9767358B2 (en) * | 2014-10-22 | 2017-09-19 | Veridium Ip Limited | Systems and methods for performing iris identification and verification using mobile devices |
US10430557B2 (en) | 2014-11-17 | 2019-10-01 | Elwha Llc | Monitoring treatment compliance using patient activity patterns |
US9589107B2 (en) | 2014-11-17 | 2017-03-07 | Elwha Llc | Monitoring treatment compliance using speech patterns passively captured from a patient environment |
US9585616B2 (en) | 2014-11-17 | 2017-03-07 | Elwha Llc | Determining treatment compliance using speech patterns passively captured from a patient environment |
WO2016081609A1 (en) | 2014-11-19 | 2016-05-26 | Eyelock Llc | Model-based prediction of an optimal convenience metric for authorizing transactions |
CN104484649B (en) * | 2014-11-27 | 2018-09-11 | 北京天诚盛业科技有限公司 | Method and apparatus for iris recognition |
CA2969331A1 (en) | 2014-12-03 | 2016-06-09 | Princeton Identity, Inc. | System and method for mobile device biometric add-on |
WO2016118473A1 (en) | 2015-01-20 | 2016-07-28 | Eyelock Llc | Lens system for high quality visible image acquisition and infra-red iris image acquisition |
FR3032539B1 (en) * | 2015-02-10 | 2018-03-02 | Morpho | Method for acquiring biometric data according to a verified sequence |
CN104680139B (en) * | 2015-02-11 | 2018-09-11 | 北京天诚盛业科技有限公司 | Iris acquisition device with positioning based on afocal-lens self-feedback |
US9509690B2 (en) | 2015-03-12 | 2016-11-29 | Eyelock Llc | Methods and systems for managing network activity using biometrics |
US10039928B2 (en) | 2015-03-27 | 2018-08-07 | Equility Llc | Ear stimulation with neural feedback sensing |
US10589105B2 (en) | 2015-03-27 | 2020-03-17 | The Invention Science Fund Ii, Llc | Method and system for controlling ear stimulation |
US10327984B2 (en) | 2015-03-27 | 2019-06-25 | Equility Llc | Controlling ear stimulation in response to image analysis |
US11364380B2 (en) | 2015-03-27 | 2022-06-21 | Elwha Llc | Nerve stimulation system, subsystem, headset, and earpiece |
US10512783B2 (en) | 2015-03-27 | 2019-12-24 | Equility Llc | User interface method and system for ear stimulation |
US9987489B2 (en) | 2015-03-27 | 2018-06-05 | Elwha Llc | Controlling ear stimulation in response to electrical contact sensing |
US10398902B2 (en) | 2015-03-27 | 2019-09-03 | Equility Llc | Neural stimulation method and system with audio output |
US10406376B2 (en) | 2015-03-27 | 2019-09-10 | Equility Llc | Multi-factor control of ear stimulation |
CA2983749C (en) | 2015-05-11 | 2021-12-28 | Magic Leap, Inc. | Devices, methods and systems for biometric user recognition utilizing neural networks |
US10311299B2 (en) | 2015-12-21 | 2019-06-04 | Eyelock Llc | Reflected optic camera module for iris recognition in a computing device |
EP3403217A4 (en) | 2016-01-12 | 2019-08-21 | Princeton Identity, Inc. | Systems and methods of biometric analysis |
AU2017230184B2 (en) | 2016-03-11 | 2021-10-07 | Magic Leap, Inc. | Structure learning in convolutional neural networks |
US10373008B2 (en) | 2016-03-31 | 2019-08-06 | Princeton Identity, Inc. | Systems and methods of biometric analysis with adaptive trigger |
US10366296B2 (en) | 2016-03-31 | 2019-07-30 | Princeton Identity, Inc. | Biometric enrollment systems and methods |
EP3458997A2 (en) | 2016-05-18 | 2019-03-27 | Eyelock, LLC | Iris recognition methods and systems based on an iris stochastic texture model |
US11531756B1 (en) | 2017-03-20 | 2022-12-20 | Hid Global Corporation | Apparatus for directing presentation attack detection in biometric scanners |
US10817722B1 (en) | 2017-03-20 | 2020-10-27 | Cross Match Technologies, Inc. | System for presentation attack detection in an iris or face scanner |
US10607096B2 (en) | 2017-04-04 | 2020-03-31 | Princeton Identity, Inc. | Z-dimension user feedback biometric system |
US10922566B2 (en) | 2017-05-09 | 2021-02-16 | Affectiva, Inc. | Cognitive state evaluation for vehicle navigation |
DE102017115136A1 (en) * | 2017-07-06 | 2019-01-10 | Bundesdruckerei Gmbh | Apparatus and method for detecting biometric features of a person's face |
US10902104B2 (en) | 2017-07-26 | 2021-01-26 | Princeton Identity, Inc. | Biometric security systems and methods |
US11017226B2 (en) | 2017-08-30 | 2021-05-25 | Nec Corporation | Image processing system, image processing method, and storage medium |
US20190172458A1 (en) | 2017-12-01 | 2019-06-06 | Affectiva, Inc. | Speech analysis for cross-language mental state identification |
US10713483B2 (en) * | 2018-03-20 | 2020-07-14 | Welch Allyn, Inc. | Pupil edge detection in digital imaging |
DE102018121256A1 (en) * | 2018-08-30 | 2020-03-05 | Bundesdruckerei Gmbh | Access control system for capturing a facial image of a person |
US11887383B2 (en) | 2019-03-31 | 2024-01-30 | Affectiva, Inc. | Vehicle interior object management |
US10948986B2 (en) | 2019-04-09 | 2021-03-16 | Fotonation Limited | System for performing eye detection and/or tracking |
US11046327B2 (en) | 2019-04-09 | 2021-06-29 | Fotonation Limited | System for performing eye detection and/or tracking |
CN113874883A (en) | 2019-05-21 | 2021-12-31 | 奇跃公司 | Hand pose estimation |
US11770598B1 (en) * | 2019-12-06 | 2023-09-26 | Amazon Technologies, Inc. | Sensor assembly for acquiring images |
US11769056B2 (en) | 2019-12-30 | 2023-09-26 | Affectiva, Inc. | Synthetic data for neural network training using vectors |
CN113706469B (en) * | 2021-07-29 | 2024-04-05 | 天津中科智能识别产业技术研究院有限公司 | Automatic iris segmentation method and system based on a multi-model voting mechanism |
WO2023079862A1 (en) * | 2021-11-05 | 2023-05-11 | Panasonic IP Management Co., Ltd. | Imaging system, processing device, and method executed by a computer in the imaging system |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3069654A (en) * | 1960-03-25 | 1962-12-18 | Paul V C Hough | Method and means for recognizing complex patterns |
US4109237A (en) * | 1977-01-17 | 1978-08-22 | Hill Robert B | Apparatus and method for identifying individuals through their retinal vasculature patterns |
US4620318A (en) * | 1983-04-18 | 1986-10-28 | Eye-D Development Ii Ltd. | Fovea-centered eye fundus scanner |
GB8518803D0 (en) * | 1985-07-25 | 1985-08-29 | Rca Corp | Locating target patterns within images |
US4641349A (en) * | 1985-02-20 | 1987-02-03 | Leonard Flom | Iris recognition system |
JPH01158579A (en) * | 1987-09-09 | 1989-06-21 | Aisin Seiki Co Ltd | Image recognition device |
US4975969A (en) * | 1987-10-22 | 1990-12-04 | Peter Tal | Method and apparatus for uniquely identifying individuals by particular physical characteristics and security system utilizing the same |
US5016282A (en) * | 1988-07-14 | 1991-05-14 | Atr Communication Systems Research Laboratories | Eye tracking image pickup apparatus for separating noise from feature portions |
US5063603A (en) * | 1989-11-06 | 1991-11-05 | David Sarnoff Research Center, Inc. | Dynamic method for recognizing objects and image processing system therefor |
GB9001468D0 (en) * | 1990-01-23 | 1990-03-21 | Sarnoff David Res Center | Computing multiple motions within an image region |
US5291560A (en) * | 1991-07-15 | 1994-03-01 | Iri Scan Incorporated | Biometric personal identification system based on iris analysis |
US5179441A (en) * | 1991-12-18 | 1993-01-12 | The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration | Near real-time stereo vision system |
US5325449A (en) * | 1992-05-15 | 1994-06-28 | David Sarnoff Research Center, Inc. | Method for fusing images and apparatus therefor |
- 1994
  - 1994-09-02 US US08/300,678 patent/US5572596A/en not_active Expired - Lifetime
- 1995
  - 1995-09-05 MX MX9701624A patent/MX9701624A/en unknown
  - 1995-09-05 BR BR9508691A patent/BR9508691A/en not_active Application Discontinuation
  - 1995-09-05 CA CA002199040A patent/CA2199040A1/en not_active Abandoned
  - 1995-09-05 AU AU34198/95A patent/AU702883B2/en not_active Ceased
  - 1995-09-05 EP EP01201449A patent/EP1126403A2/en not_active Withdrawn
  - 1995-09-05 JP JP50955696A patent/JP3943591B2/en not_active Expired - Lifetime
  - 1995-09-05 HU HU9701760A patent/HUT76950A/en unknown
  - 1995-09-05 CN CN95195628A patent/CN1160446A/en active Pending
  - 1995-09-05 WO PCT/US1995/010985 patent/WO1996007978A1/en active IP Right Grant
  - 1995-09-05 EP EP95931012A patent/EP0793833A4/en not_active Withdrawn
  - 1995-09-05 KR KR1019970701378A patent/KR970705798A/en active IP Right Grant
- 1996
  - 1996-09-27 US US08/727,366 patent/US5751836A/en not_active Expired - Fee Related
- 2006
  - 2006-04-10 JP JP2006107872A patent/JP2006260583A/en active Pending
  - 2006-04-10 JP JP2006107873A patent/JP2006302276A/en active Pending
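The oldest of the family citations above, Hough's US3069654 ("Method and means for recognizing complex patterns"), introduced the accumulator-voting idea that iris systems in this family commonly apply to locate the circular limbic and pupillary boundaries from an edge map. As a purely illustrative sketch, not text from any of the patents listed, a minimal circular Hough transform might look like the following; the function name, accumulator sizes, and radius range are assumptions chosen for the example.

```python
# Illustrative sketch of a circular Hough transform (all names and
# parameter values are assumptions for this example, not patent text).
import numpy as np

def hough_circles(edge_map: np.ndarray, radii: range) -> tuple:
    """Return the (cx, cy, r) circle with the most votes from a binary edge map.

    edge_map : 2-D array, nonzero where an edge pixel was detected
    radii    : candidate radii, e.g. range(20, 60)
    """
    h, w = edge_map.shape
    acc = np.zeros((len(radii), h, w), dtype=np.int32)   # vote accumulator
    ys, xs = np.nonzero(edge_map)                        # edge pixel coordinates
    thetas = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    for ri, r in enumerate(radii):
        # Each edge pixel votes for all centres lying at distance r from it.
        cx = (xs[:, None] - r * np.cos(thetas)[None, :]).round().astype(int)
        cy = (ys[:, None] - r * np.sin(thetas)[None, :]).round().astype(int)
        ok = (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
        np.add.at(acc[ri], (cy[ok], cx[ok]), 1)
    ri, by, bx = np.unravel_index(np.argmax(acc), acc.shape)
    return bx, by, radii[ri]

if __name__ == "__main__":
    # Synthetic test: a single circle of radius 30 centred at (50, 40).
    edge = np.zeros((100, 100), dtype=np.uint8)
    t = np.linspace(0, 2 * np.pi, 200)
    edge[(40 + 30 * np.sin(t)).astype(int), (50 + 30 * np.cos(t)).astype(int)] = 1
    print(hough_circles(edge, range(25, 36)))  # expect approximately (50, 40, 30)
```

In an iris-localization pipeline of the kind these documents describe, `edge_map` would come from a gradient-based edge detector applied to the eye image, and the highest-voted circle would serve as the initial estimate of the limbic or pupillary boundary.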
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103440483A (en) * | 2013-09-03 | 2013-12-11 | 吉林大学 | Active auto-focus iris image capture device |
Also Published As
Publication number | Publication date |
---|---|
BR9508691A (en) | 1998-01-06 |
JP2006302276A (en) | 2006-11-02 |
EP1126403A2 (en) | 2001-08-22 |
US5572596A (en) | 1996-11-05 |
JPH10505180A (en) | 1998-05-19 |
CN1160446A (en) | 1997-09-24 |
HUT76950A (en) | 1998-01-28 |
AU3419895A (en) | 1996-03-27 |
US5751836A (en) | 1998-05-12 |
KR970705798A (en) | 1997-10-09 |
EP0793833A1 (en) | 1997-09-10 |
MX9701624A (en) | 1998-05-31 |
WO1996007978A1 (en) | 1996-03-14 |
EP0793833A4 (en) | 1998-06-10 |
JP2006260583A (en) | 2006-09-28 |
AU702883B2 (en) | 1999-03-11 |
JP3943591B2 (en) | 2007-07-11 |
Similar Documents
Publication | Title |
---|---|
US5572596A (en) | Automated, non-invasive iris recognition system and method |
Wildes et al. | A system for automated iris recognition |
Wildes et al. | A machine-vision system for iris recognition |
US9836648B2 (en) | Iris biometric recognition module and access control assembly |
US9195890B2 (en) | Iris biometric matching system |
Beymer | Face recognition under varying pose |
EP0499627B1 (en) | Dynamic method for recognizing objects and image processing system therefor |
WO1997021188A1 (en) | Wide field of view/narrow field of view recognition system and method |
JPH10295674A (en) | Individual identification device, individual identification method, and individual identification system |
CN106529436B (en) | Identity consistency authentication method, device, and mobile terminal |
JP2004192378A (en) | Face image processing device and method |
US20120274756A1 (en) | Method of capturing image data for iris code based identification of vertebrates |
KR101821144B1 (en) | Access control system using depth-information-based face recognition |
KR100347058B1 (en) | Method for photographing and recognizing a face |
US11354924B1 (en) | Hand recognition system that compares narrow band ultraviolet-absorbing skin chromophores |
AU719428B2 (en) | Automated, non-invasive iris recognition system and method |
Yoo et al. | A simply integrated dual-sensor based non-intrusive iris image acquisition system |
Bobis et al. | Face recognition using binary thresholding for features extraction |
CN111695437A (en) | Information processing method based on face recognition |
KR20040098134A (en) | Embedded method of intelligent automated teller machine by digital image processing |
Fan et al. | A spatial feature enhanced MMI algorithm for multi-modal wild-fire image registration |
Sambharwal et al. | A Novel Approach to Increase the Performance of Biometrics Iris Sensors |
Li et al. | James L. Cambier |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FZDE | Discontinued |