US20010000025A1 - Method and apparatus for personnel detection and tracking - Google Patents

Method and apparatus for personnel detection and tracking

Info

Publication number
US20010000025A1
Authority
US
United States
Prior art keywords
image
tracking
detecting
face
target
Prior art date
Legal status
Granted
Application number
US09/726,425
Other versions
US6445810B2 (en)
Inventor
Trevor Darrell
Gaile Gordon
Michael Harville
John Woodfill
Harlyn Baker
Current Assignee
Intel Corp
Original Assignee
Trevor Darrell
Gaile Gordon
Michael Harville
John Woodfill
Harlyn Baker
Application filed by Trevor Darrell, Gaile Gordon, Michael Harville, John Woodfill, Harlyn Baker
Priority to US09/726,425
Publication of US20010000025A1
Application granted
Publication of US6445810B2
Assigned to VULCAN PATENTS LLC (assignment of assignors interest; assignor: INTERVAL RESEARCH CORPORATION)
Assigned to INTERVAL LICENSING LLC (merger; assignor: VULCAN PATENTS LLC)
Assigned to TYZX, INC. (assignment of assignors interest; assignor: INTERVAL LICENSING, LLC)
Assigned to INTEL CORPORATION (assignment of assignors interest; assignor: TYZX, INC.)
Legal status: Expired - Lifetime

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Definitions

  • pattern recognition methods which directly model statistical appearance are used in the face pattern classification module 240 .
  • the appearance of “faces” vs. “non-faces” is modeled via a neural network or Gaussian mixture-model.
  • This module reports the bounding box of the face region in the input image, masked by the foreground depth region, as illustrated in FIG. 5.
  • Face detection per se is reliable across many different users and imaging conditions, but is relatively slow, and requires that a frontal view of the face be present.
  • tracking via the face pattern classification module 240 alone can be error-prone.
  • when combined with the color tracking module 230 and the range computation module 210, however, much more robust performance is obtained.
  • face detection is initially applied over the entire image. If a region corresponding to a face is detected, it is passed on to the integration module 255 as a candidate head location. Short term tracking is performed in the module 240 for subsequent frames by searching within windows around the detected locations in the previous frame. If a face is detected in a window, it is considered to be in short-term correspondence with a previously detected face. If no face is detected in the new frame, but the face detected in a previous frame overlapped a color or range region, the face detection module is updated by the integration module 255 to move with that region. Thus, faces can be discriminated in successive frames even when another positive face detection may not occur for several frames.
  • the results obtained by the face pattern classification module 240 identify which regions correspond to the head.
  • the overlapping color or range region is marked, and the relative offset of the face detection result to the bounding box of the color or range region is recorded in the integration module 255 .
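
A minimal sketch of the short-term face-tracking logic described in the preceding items (in Python; the box format, dictionary fields, and search margin are hypothetical choices, not the patent's):

    def search_window(box, margin=20):
        x, y, w, h = box
        return (x - margin, y - margin, w + 2 * margin, h + 2 * margin)

    def boxes_overlap(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

    def update_face_track(track, detections, regions):
        # A detection inside the window around last frame's face continues the track.
        win = search_window(track["box"])
        for det in detections:
            if boxes_overlap(det, win):
                track["box"] = det                  # short-term correspondence
                for reg in regions:                 # record offset to overlapping region
                    if boxes_overlap(det, reg["box"]):
                        track["region"] = reg
                        track["offset"] = (det[0] - reg["box"][0],
                                           det[1] - reg["box"][1])
                return track
        if track.get("region"):                     # no detection this frame: the face
            rx, ry, _, _ = track["region"]["box"]   # moves with the color/range region
            ox, oy = track["offset"]
            x, y, w, h = track["box"]
            track["box"] = (rx + ox, ry + oy, w, h)
        return track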
  • Regions are tracked from frame to frame as in the range case, with the additional constraint that a size constancy requirement is enforced: temporal correspondence is not assumed between regions if their three-dimensional size is considerably smaller or larger.
  • this information is fed to an application program 260 which manipulates the display itself.
  • the application may use video texture mapping techniques to apply a distortion and morphing algorithm to the user's face.
  • texture and position coordinates are both normalized to be over a range from 0 to 1.
  • a vertex is defined to be in “canonical coordinates” when position and texture coordinates are identical.
  • a background rectangle to cover the display (from 0,0 to 1,1) in canonical coordinates is generated. This creates a display which is equivalent to a non-distorted, pass-through, video window.
  • a mesh is defined over the region of the user's head. Within the external contour of the head region, vertices are placed optionally at the contour boundary as well as at evenly sampled interior points. Initially all vertices are placed in canonical coordinates, and set to have neutral base color.
  • Color distortions may be effected by manipulating the base color of each vertex.
  • Shape distortions are applied in one of two modes: parametric or physically-based.
  • in parametric mode, distortions are performed by adding a deformation vector to each vertex position, expressed as a weighted sum of fixed basis deformations. These bases can be constructed so as to keep the borders of the distortion region in approximately canonical coordinates, so that there will be no apparent seams in the video effect.
  • in physically-based mode, forces can be applied to each vertex and position changes are computed using an approximation to an elastic surface. As a result, a vertex can be "pulled" in a given direction, and the entire mesh will deform as if it were a rubber sheet, as sketched below.
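
As an illustration of the parametric mode, a sketch in Python (the radial basis shown is one hypothetical choice of fixed basis deformation; spherical expansion and shrinking would correspond to positive and negative weights on it):

    import numpy as np

    def deform(vertices, bases, weights):
        # vertices: (N, 2) positions in canonical [0, 1] coordinates.
        # bases: (K, N, 2) fixed basis deformations; weights: (K,).
        return vertices + np.tensordot(weights, bases, axes=1)

    def radial_basis(vertices, center, radius):
        # Push vertices away from the center, tapering to zero at the region
        # border so the distorted patch blends seamlessly into the video.
        v = vertices - center
        r = np.linalg.norm(v, axis=1, keepdims=True)
        return v * np.clip(1.0 - r / radius, 0.0, 1.0)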
  • FIGS. 6(a)-6(d) illustrate four examples of basis deformations: FIG. 6(a) shows spherical expansion, FIG. 6(b) shows spherical shrinking, FIG. 6(c) illustrates a swirl effect, and FIG. 6(d) shows lateral expansion. FIG. 6(e) depicts a physically-based distortion effect, a vertical sliding effect, applied to the face of the user shown in FIG. 5.
  • the weight parameters associated with parametric basis deformations can vary over time, and can be expressed as a function of several relevant variables describing the state of the user: the distance of the user to the screen, their position on the floor in front of the display, or their overall body pose.
  • the weight parameters can vary randomly, or according to a script or external control. Forces for the physically-based model can be input either with an external interface, randomly, or directly in the image as the user's face touches other objects or body parts.
  • the face pattern (a grayscale sub-image) in the target region can be normalized and passed to the personnel classification system 250 .
  • the scale, alignment, and view of detected faces should be comparable.
  • however, faces are often detected which exhibit a substantial out-of-plane rotation. This is a good property for a detection system, but in the context of identification it makes the problem more difficult.
  • This process provides enough normalization to demonstrate the value of face patterns in a multi-modal person identification system.
  • All the target regions are scaled to a common size.
  • Each identified face target is compared with an example face at a canonical scale and view (e.g., upright and frontal), and face targets which vary radically from this model are discarded.
  • the comparison is performed using simple normalized correlation.
  • the location of the maximum correlation score is recorded and the face pattern is translated to this alignment. While the face identification algorithm discussed above can be used to identify a face, other more powerful identification algorithms could also be employed such as an eigenface technique.
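
One plausible rendering of this normalization step in Python (the 24×24 target size, correlation threshold, and SciPy helpers are assumptions; a more powerful backend such as an eigenface technique would operate on the aligned output):

    import numpy as np
    from scipy.ndimage import zoom
    from scipy.signal import correlate2d

    def normalize_face(patch, template, size=(24, 24), min_score=0.5):
        # Scale to the common size, then align to the canonical (upright,
        # frontal) example by maximizing normalized correlation.
        p = zoom(patch.astype(float), (size[0] / patch.shape[0],
                                       size[1] / patch.shape[1]))
        pz = (p - p.mean()) / (p.std() + 1e-6)
        tz = (template - template.mean()) / (template.std() + 1e-6)
        corr = correlate2d(pz, tz, mode="same") / tz.size
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        if corr[peak] < min_score:
            return None                    # varies radically from the model: discard
        dy, dx = peak[0] - size[0] // 2, peak[1] - size[1] // 2
        return np.roll(np.roll(p, -dy, axis=0), -dx, axis=1)  # translate to alignment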
  • each module computes an estimate of certain user attributes, as discussed above with respect to FIG. 2. If a target is occluded for a medium amount of time, attributes such as body metrics, skin, hair and clothing are used to determine the identity of a target. However, if an object is occluded or missing for a long amount of time (i.e., more than one day), attributes that vary with time or on a day-to-day basis cannot be utilized for identification purposes.
  • the identity of the user is estimated as

        u* = argmax_j P(O_t | U_j)

    where O_t is the cumulative user observation through time t; F_t, H_t, and C_t are the face pattern, height and color observations at time t; and U_j are the saved statistics for person j. Treating the modalities as independent, this becomes

        u* = argmax_j P(F_0, . . . , F_t | U_j) P(H_0, . . . , H_t | U_j) P(C_0, . . . , C_t | U_j)

  • mean and covariance data for the observed user color data is collected, as is the mean and variance of user height, and the corresponding likelihoods P(C_t | U_j) and P(H_t | U_j) are computed assuming a Gaussian density model. For the face pattern data, the size-normalized and position-normalized mean pattern from each user is stored, and P(F_t | U_j) is computed from this stored mean pattern.
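
Under these assumptions, the classifier reduces to a few lines (a sketch; the correlation-based face term is an illustrative stand-in for the stored-mean-pattern likelihood):

    import numpy as np
    from scipy.stats import norm, multivariate_normal

    def identify(face, height, color, users):
        # users[j] holds the saved statistics U_j for person j.
        best_j, best_logp = None, -np.inf
        for j, u in enumerate(users):
            log_p = norm(u["height_mean"], u["height_std"]).logpdf(height)
            log_p += multivariate_normal(u["color_mean"], u["color_cov"]).logpdf(color)
            f = (face - face.mean()) / (face.std() + 1e-6)
            m = (u["face_mean"] - u["face_mean"].mean()) / (u["face_mean"].std() + 1e-6)
            log_p += (f * m).mean()        # monotone in normalized correlation
            if log_p > best_logp:          # u* = argmax_j of the combined likelihood
                best_j, best_logp = j, log_p
        return best_j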
  • multi-modal person identification is more robust than identification systems based on a single data modality.
  • Body metrics, color, and face pattern each present independent classification data and are accompanied by similarly independent failure modes.
  • while face patterns are perhaps the most common data source for current passive person classification methods, body metrics and color information are not normally incorporated in identification systems because they do not provide sufficient discrimination to justify their use alone.
  • nevertheless, these other modalities can provide important clues to discriminate otherwise similar people, or help classify people when only degraded data is available in other modes.
  • a kiosk could be set up to run different applications for different viewers. For example, a kiosk for selling items could present items more likely to appeal to a male or female depending on the person standing before the kiosk.
  • the above-described interactive display can be implemented using three computer systems, e.g., one personal computer and two workstations, an NTSC video monitor, stereo video cameras, a dedicated stereo computation PC board, and an optical half-mirror. Depth estimates are computed on the stereo PC board based on input from the stereo cameras, and are sent over a network from the PC to the first workstation at approximately 20 Hz for 128×128 range maps. On this workstation, color video is digitized at 640×480, color lookup and connected components analysis are performed at 10-20 Hz, and the output image is constructed by applying the acquired video as a texture source for the background rectangle and the face mesh (at 10-20 Hz).
  • a second workstation performs face detection routines at 128×128 resolution at approximately 2-3 Hz, using either its own digitized copy of the color video signal or a sub-sampled source image sent over the network. It should also be understood that while the above-mentioned hardware implementation can be used with the present embodiments of the invention, other less expensive basic hardware could also be used.
  • while the present invention has been described with respect to its preferred embodiments, those skilled in the art will recognize that the present invention is not limited to the specific embodiment described and illustrated herein. Different embodiments and adaptations besides those shown and described herein, as well as many variations, modifications and equivalent arrangements, will be apparent or will be reasonably suggested by the foregoing specification and drawings without departing from the substance or scope of the invention.
  • the disclosed system achieves its robust performance in detection, tracking, and identification through the combination of three specific visual modalities: range, color, and pattern. Additional independent modalities could serve to further increase robustness and performance.
  • for example, the computation of optical flow or visual motion fields could assist in short-term tracking by providing estimates of object trajectory, as well as improve figure/ground segmentation.

Abstract

Techniques from computer vision and computer graphics are combined to robustly track a target (e.g., a user) and perform a function based upon the image and/or the identity attributed to the target's face. Three primary modules are used to track a user's head: depth estimation, color segmentation, and pattern classification. The combination of these three techniques allows for robust performance despite unknown background, crowded conditions, and rapidly changing pose or expression of the user. Each of the modules can also provide an identity classification module with valuable information so that the identity of a user can be estimated. With an estimate of the position of a target in 3-D and the target's identity, applications such as individualized computer programs or graphics techniques to distort and/or morph the shape or apparent material properties of the user's face can be performed. The system can track and respond to a user's face in real-time using completely passive and non-invasive techniques.

Description

    BACKGROUND OF THE INVENTION
  • 1. The present invention generally relates to an image detection and identification system, and more specifically to an apparatus and method for personnel detection, background separation and identification. Based upon the detection and/or identification of a person, applications can perform customized information manipulation that is relevant to such information.
  • 2. The creation of computing environments which passively react to their observers, particularly displays and user interfaces, has become an exciting challenge for computer vision. Systems of this type can be employed in a variety of different applications. In an interactive game or kiosk, for example, the system is typically required to detect and track a single person. Other types of applications, such as general surveillance and monitoring, require the system to be capable of separately recognizing and tracking multiple people at once. To date, research in such systems has largely focused on exploiting a single visual processing technique to locate and track features of a user in front of an image sensor. These systems have often been non-robust to real-world conditions and fail in complicated, unpredictable visual environments and/or where no prior information about the user population was available.
  • 3. For example, U.S. Pat. No. 5,642,431 discloses a face detection system that uses an image classifier and an output display device. A training process is employed which uses both face and non-face objects stored in a database to determine whether a face is detected. This system, however, is unable to continuously track the user's face and adjust for real-time movements of the physical objects being detected. U.S. Pat. No. 5,532,741 discloses a camera and video system which are integrally combined. A mirror image of a user is displayed back to the user on a CRT. However, this system is merely a passive video playback system which is superimposed on a video screen. There is no visual interactive system which processes displayed images or presents specific information on the basis of detected features of a person who is looking at the system.
  • 4. In addition to detecting and tracking a person in a scene, various types of image processing, or manipulation, can also be employed in the context of the present invention. One possible type of manipulation that can be employed in this regard is the distortion of the image of the person, in particular the person's face, for amusement purposes. This effect has been explored before on static imagery (such as personal computer imaging tools), but has not previously been applied to live video. For instance, U.S. Pat. No. 4,276,570 discloses a method and associated apparatus for producing an image of a person's face at different ages. Images of old and young faces are mapped to one another, and image transformations are determined. Once these results are stored, a camera receives an image of a user's face (possibly a photograph). The data of the person's face is processed with the previously determined image transformations. Based upon the stored data, an “older face” is then digitally superimposed on areas of the younger face to produce an aged face of the user. This system is unable to perform processing in a real-time fashion, for instance on active video signals. Furthermore, this system does not involve any recognition of the person whose image is being shown, or automated face detection.
  • 5. Thus, a robust system is still needed to perform accurate image processing, personnel recognition and manipulations in a real-time fashion.
  • 6. A further complicating factor lies in the time frame over which a person is recognized and tracked. At one extreme, short-term tracking of a person is desirable, e.g. the ability to recognize the person from frame to frame as he or she moves within the scene being viewed. At the other extreme, long term tracking, i.e. the ability to recognize the same person over a hiatus of several days, is desirable in certain applications, particularly where interactivity is dependent upon characteristics of individuals. To be complete, the system should also be capable of mid-term tracking, to recognize when a given individual has momentarily left a scene being viewed and then returned.
  • 7. It is further desirable, therefore, to provide a tracking and identification system which is capable of providing robust performance over each of these possible tracking periods.
  • SUMMARY OF THE INVENTION
  • 8. The present invention provides a multi-modal visual person detection and tracking framework which also has the capability to identify persons over various periods of time. Through the use of depth, color and pattern tracking, images of one or more people in a scene can be tracked in real time in a variety of general conditions, with good results. A first module receives stereo image data from cameras and generates a disparity image, preferably through the use of the census algorithm, and locates one or more target regions in the disparity image by a connected components grouping analysis. A second module classifies and tracks each target region through color segmentation. A third module distinguishes and tracks individual facial features located within the target regions, based on grayscale patterns. Each module is able to be utilized individually or in combination with one or more of the other individual modules to locate and track the targets.
  • 9. In a particular embodiment of the present invention, each module also computes a mode specific description of a user. The mode specific information is combined in a fourth module which estimates the identity of a person whose image has been detected, based upon a database of previously recognized targets. Once the identity of a person is estimated, real-time applications specific to the identified target can be implemented. This feature is also used to increase the robustness of the short-term tracking of an individual.
  • 10. Another exemplary embodiment of the present invention provides an intelligent monitoring system which discriminates between faces and the background scene, and then tracks the faces in real-time. In addition to the determination of actual facial characteristics, the individual face is able to be identified. The identification of the face allows for execution of an application (i.e., a computer program) according to the identification of an individual from among a set of recent users.
  • 11. Another exemplary embodiment of the present invention provides a real time virtual mirror comprising a detector which detects, tracks, and identifies faces in real time. The processor then creates a virtual mirror image for display in which the facial features are selectively distorted.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • 12. The features and advantages of the instant invention will be better understood from the description of embodiments illustrated in the following drawings, in which:
  • 13.FIG. 1 is a block diagram of a hardware implementation of an interactive display embodiment of the invention;
  • 14.FIG. 2 is a block diagram of the manner in which the different image tracking elements of an exemplary embodiment of the invention are employed;
  • 15.FIG. 3 is a flow chart of the depth estimation module of the present invention;
  • 16.FIG. 4 is a flow chart of the color segmentation module of the present invention;
  • 17.FIG. 5 illustrates exemplary pictorial images of the state of the video signal as it is being processed; and
  • 18. FIGS. 6(a), 6(b), 6(c), 6(d) and 6(e) illustrate several exemplary ways in which human faces can be distorted in the virtual mirror embodiment of the present invention.
  • DETAILED DESCRIPTION
  • 19. In its more general aspects, the present invention is directed to the interactive display of information which is based upon computer vision. To achieve this objective, the invention processes image data representative of a viewed scene to detect target regions corresponding to people or other objects of interest in the scene, track those target regions over time, and, in a preferred implementation, identify each person in a target region from among a database of recent users of the system. Based upon this information, appropriate applications are executed. The tracking is carried out over several different time intervals. Short-term tracking takes place over small quantities of time, for example from frame to frame in a video image. Mid-term tracking takes place over slightly longer periods of time, in which a person might momentarily leave the scene being viewed and then return, e.g. up to the length of a full day. Long-term tracking, which is primarily based upon person identification, occurs over longer periods, e.g. weeks, months and the like, in which conditions such as lighting, clothing, etc. might change.
  • 20. To facilitate an understanding of the invention, it will be initially described with particular reference to its implementation in a virtual mirror which processes the image of a single person. It will be appreciated, however, that the practical applications of the principles which underlie the invention are not limited to entertainment devices of this type. Rather, the invention will find utility in any environment in which it is desirable to distinguish and, optionally, identify one or more faces in a scene being viewed and perform an operation that is based upon identified features, such as image manipulation. This invention, for example, also has application in other forms of interactive entertainment, telepresence/virtual environments, and intelligent terminals which respond selectively according to the presence, pose and identity of a target.
  • 21. Referring to an embodiment of the invention depicted in FIG. 1, a virtual mirror system is implemented by locating cameras 20 and 25 along the same optical axis as a video display 30, using a half-silvered mirror 35 to split the optical paths of the cameras from that of the display. For stereo processing, multiple cameras are employed to observe a user 40 through a viewing aperture 60: a primary color camera 20 is mounted in the center of the imaging frame and at least one additional camera 25 is mounted off-axis. While two cameras are shown in exemplary embodiments of the invention, it will be appreciated that additional cameras can be added to provide different perspective views, as needed. The cameras 20 and 25 sense the image of a scene through the half mirror 35, so that the user 40 can view a video monitor 30 while also looking straight into (but not seeing) the cameras. In this particular embodiment, a video image from the primary camera 20 is displayed on the monitor located on a base 50, to create a virtual mirror effect. When used as an entertainment device, the video image can be selectively distorted as it is being displayed on the monitor.
  • 22. The system for processing the video signals from the cameras and generating the display is shown in FIG. 2. Referring thereto, four primary modules are used to track a user's position and estimate the identity of the user from among previous users: a range computation module 210, a color detection and segmentation module 230, a face pattern classification module 240, and a personnel classification module 250. Classification, grouping and tracking of image pixels are carried out independently in each of the three modules 210, 230 and 240, and the results obtained by one module are used to refine or validate decisions made in another module.
  • 23. In the operation of the system, the video signals from the cameras 20 and 25 undergo dense real-time stereo processing to estimate a user's silhouette, as defined by a region of slowly varying range, or depth. Each region in the image that is estimated to correspond to an individual in the scene is identified as a target region. The use of multiple fixed cameras allows for easy segmentation of an image of a target 40 from other people and background objects. Additionally, the range computation module 210 can be used to estimate metric descriptions of the object before the cameras, e.g. an individual's height. The color detection and segmentation module 230 detects regions of flesh tone in a target region. The color detection and segmentation module 230 can also estimate the color of the skin, clothes and hair of a person in the scene. The face pattern classification module 240 is used to discriminate head regions from hands, legs, and other body parts. The results of these three modules are integrated in a further module 255 to provide an estimate of one or more face regions in the image. With continual knowledge of the location of the target's head in 3-D, application programs 260 which employ this type of information can be executed. For instance, graphics techniques to distort and/or morph the shape or other visual properties of the user's face can be applied. As a further feature of the invention, the personnel identification module 250 can store face patterns and, based upon the observed body metrics and color information, estimate the identity of the user. On the basis of the personnel identification and the tracking of the face region, a different type of application 260 that is responsive to the detected information can be executed.
  • 24. The range computation module 210 receives raw video data from the two cameras 20 and 25, and estimates the distance to people or other objects in the image, using dense stereo correspondence techniques. Binocular views, as embodied in the present invention, provide information for determining the distance to elements of a scene. Using conventional stereo vision processing, two simultaneously captured images are compared to produce a disparity (inverse depth) image in which nearby scene elements are represented by large disparity values and distant elements by small values. The disparity image is generated by determining, for each pixel in one image, the displacement to its corresponding pixel in the other image.
  • 25. One issue of concern in determining stereo correspondence is that pixels from two cameras that correspond to the same scene element may differ due to both camera properties such as gain and bias, and to scene properties such as varying reflectance distributions resulting from slightly differing viewpoints. The use of the census correspondence algorithm overcomes these potential differences between images by taking a non-parametric approach to correspondence, and is therefore preferred over more conventional processing techniques. As employed within the present invention, the census algorithm determines the similarity between image regions, not based on inter-image intensity comparisons, but rather based on inter-image comparison of intra-image intensity ordering information.
  • 26. The census algorithm which can be employed in the context of the present invention is described in detail, for example, in the article entitled "Non-parametric Local Transforms for Computing Visual Correspondence", Proceedings of the Third European Conference on Computer Vision, May 1994, by R. Zabih et al. The census algorithm described hereinafter is for the case in which two cameras are utilized. It will, however, be apparent that this algorithm could be expanded to accommodate more than two cameras. Referring to FIG. 3, first, the input images (S1) from the cameras are transformed so that each pixel represents its local image structure (S2). Second, the pixelwise correspondence between the images is computed (S3) so as to produce a disparity image (S4).
  • 27. The census algorithm maps each pixel in an intensity image to a bit vector, where each bit represents the ordering between the intensity of that pixel and that of a neighboring pixel. Thus, a pixel at the top of an intensity peak would result in a homogenous (all ones) bit vector, indicating that its intensity is greater than those of its neighboring pixels. Two census bit vectors in different images can be compared using the Hamming distance, i.e., by counting the number of bits that differ. For each pixel in one image, the correspondence process of finding the best match from within a fixed search window in the other image is performed by minimizing locally summed Hamming distances. The displacement to the best match serves as the disparity result for a pixel.
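
As an illustration, a minimal version of this census-plus-Hamming matching can be sketched in Python with NumPy/SciPy (the window radius, search range, and summation window are assumptions, image borders are handled crudely via wrap-around, and the real-time FPGA implementation referenced below is organized quite differently):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def census_transform(img, r=2):
        # Each pixel becomes a bit vector: one bit per neighbor in a
        # (2r+1)x(2r+1) window, set when the neighbor is darker than the center.
        bits = np.zeros(img.shape, dtype=np.uint64)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                if (dy, dx) == (0, 0):
                    continue
                neighbor = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
                bits = (bits << np.uint64(1)) | (neighbor < img).astype(np.uint64)
        return bits

    def popcount64(v):
        # Per-element count of set bits (Hamming weight).
        v, count = v.copy(), np.zeros(v.shape, dtype=np.uint64)
        for _ in range(64):
            count += v & np.uint64(1)
            v >>= np.uint64(1)
        return count

    def census_disparity(left, right, max_disp=24, win=5):
        cl, cr = census_transform(left), census_transform(right)
        best = np.zeros(left.shape, dtype=np.uint8)
        best_cost = np.full(left.shape, np.inf)
        for d in range(max_disp):
            candidate = np.roll(cr, d, axis=1)              # match at disparity d
            ham = popcount64(cl ^ candidate).astype(float)  # per-pixel Hamming distance
            cost = uniform_filter(ham, size=win)            # locally averaged (equivalently
            better = cost < best_cost                       # summed) Hamming distances
            best[better], best_cost[better] = d, cost[better]
        return best                                         # displacement of best match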
  • 28. In one embodiment of the invention, the census algorithm can be implemented on a single PCI card, multi-FPGA reconfigurable computing engine, for example, of the type described in the article “Real-time Stereo Vision on the PARTS Reconfigurable Computer”, IEEE Proceedings; Symposium on Field-Programmable Custom Computing Machines, April 1997, by J. Woodfill et al. This stereo system is capable of computing 24 stereo disparities on 320 by 240 pixel images at 42 frames per second, or approximately 77 million pixel-disparities per second. The generated disparity image can be down-sampled and mode-filtered before results are passed to the range detection and segmentation module 210.
  • 29. From the disparity image determined by the census algorithm, specific target silhouettes (i.e., tracked individuals) are extracted from the depth information by selecting human-sized surfaces and tracking each region until it moves out of the scene being imaged. This extraction technique proceeds in several stages of processing. To reduce the effects of low confidence stereo disparities, the raw range signal is first smoothed using a morphological closing operator (S5), and the response of a gradient operator is then computed on the smoothed range data. The gradient response is thresholded at a critical value, based upon the observed noise level in the disparity data. This creates regions of zero value in the image where abrupt transitions occur, such as between people who are located at different distances from the camera. A connected-components grouping analysis is then applied to regions of smoothly varying range, resulting in the selection of contiguous regions whose area exceeds a minimum threshold (S7).
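
This extraction stage might be sketched as follows (the structuring-element size, gradient threshold, and minimum area are illustrative placeholders):

    import numpy as np
    from scipy import ndimage

    def extract_target_silhouettes(disparity, grad_thresh=2.0, min_area=500):
        smoothed = ndimage.grey_closing(disparity, size=(5, 5))  # S5: morphological closing
        gy, gx = np.gradient(smoothed.astype(float))
        low_gradient = np.hypot(gx, gy) < grad_thresh            # zero out abrupt range steps
        labels, n = ndimage.label(low_gradient)                  # S7: connected components
        return [labels == i for i in range(1, n + 1)
                if (labels == i).sum() >= min_area]              # keep human-sized regions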
  • 30. The above steps S1-S7 are repeated with each new pair of video frames obtained from the video cameras 20 and 25 (S8). Correspondences are established between regions through time on the basis of similar size and 3-D centroid location (S9). Each region is tracked until it leaves a defined work space or is occluded. Depth information is used to isolate a target's silhouette from the background, so that the color and face detection modules are not confused by clutter from background content or other people in the scene.
  • 31. The union of all connected, smoothly varying range regions constitutes a boolean mask 220 which is then used in conjunction with the image data that is provided to the color detection and classification module 230. In addition to providing user silhouettes, the range computation module 210 is able to provide an independent estimate of the head position and size. The head position is estimated using the maxima of the target's silhouette as computed from the range component discussed above. Size is estimated by measuring the width of the peak of the range component identified as the head. The range module and the face pattern classification module (discussed below) are also used to constrain the size of the head. If the estimated real size of the head is not within one standard deviation of average head size or the face pattern classification does not track a facial area, the size of the head is set to the projection of average size.
  • 32. In addition, estimates of body metrics for a targeted individual can be performed in the range computation module 210. Examples of metrics which can be used to distinguish individuals from one another include height, shoulder breadth, limb length, and the like. These estimated metrics are input into the personnel classification module 250, as mode specific information, to further aid in the determination of the viewer's identity. In the case of height, for example, the individual's height is estimated to be proportional to the product of the height of the target's silhouette above the optical center of the system and the range of the person, when the imaging geometry is such that the cameras are parallel to the ground plane. If this is not the case, then height can be computed using a more general camera calibration procedure. Alternatively, height can be estimated without knowledge of the range, for example by using a wide angle view and a ground plane model.
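
For the camera-parallel-to-ground case, the height estimate reduces to one line (the focal length in pixels and the camera height are hypothetical calibration inputs):

    def estimate_height_m(top_row, center_row, range_m, focal_px, camera_height_m):
        # Height above the optical center is proportional to the product of the
        # silhouette's pixel offset above center (rows grow downward) and the range.
        return camera_height_m + ((center_row - top_row) / focal_px) * range_m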
  • 33. Disparity estimation, segmentation, and grouping are repeated independently at each time step, so that range silhouettes are tracked, in short term, frame-to-frame increments, based on position and size constancy. The centroid and three-dimensional size of each new range silhouette is compared to silhouettes from the previous time step. Short-term correspondences are indicated using an approach that starts with the closest unmatched region. For each new region, the closest old region within a minimum threshold is marked as the correspondence match.
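
The matching step can be sketched as a greedy, closest-first assignment (the distance gate and region representation are assumptions):

    import numpy as np

    def match_regions(new_regions, old_regions, gate=0.5):
        # Each region: {"centroid": 3-vector, "size": 3-vector}. Pairs are
        # considered closest-first; each region participates in one match.
        pairs = sorted(
            (np.linalg.norm(np.subtract(n["centroid"], o["centroid"])), i, j)
            for i, n in enumerate(new_regions)
            for j, o in enumerate(old_regions))
        matches, used_n, used_o = {}, set(), set()
        for d, i, j in pairs:
            if d > gate:
                break                      # remaining pairs are farther still
            if i not in used_n and j not in used_o:
                matches[i] = j             # new region i continues old track j
                used_n.add(i)
                used_o.add(j)
        return matches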
  • 34. Within the foreground depth region of a particular target, it is useful to mark and track regions of homogenous color, particularly skin color. This is done through the use of a classification strategy which matches skin hue, but is largely invariant to intensity or saturation, to provide robustness under different lighting conditions and absolute amounts of skin pigment in a particular person.
  • 35. In one approach, color segmentation processing is applied to images obtained from the primary camera 20. Referring to the flow chart of FIG. 4, each image received at Step S13 is initially represented with pixels corresponding to the red, green, and blue channels of the image, and is converted into a "log color-opponent" space (S14). This space can directly represent the approximate hue of skin color, as well as its log intensity value. Specifically, (R,G,B) tuples are converted into tuples of the form (l(G), l(R)−l(G), l(B)−(l(R)+l(G))/2), where l(x) indicates a logarithm function. For reasons of numerical precision, it is preferable to use a base ten logarithm function, followed by a scalar multiplier greater than 10. Typically, l(x)=10*log10(x), where log10(x) is the base ten logarithm. For further information in this regard, reference is made to Fleck et al, "Finding Naked People", European Conference on Computer Vision, Vol. II, pp. 592-602, 1996.
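
The conversion is a direct transcription of the formula above (the clamp away from zero is an added safeguard against log(0)):

    import numpy as np

    def log_opponent(rgb):
        # (R,G,B) -> (l(G), l(R)-l(G), l(B)-(l(R)+l(G))/2), with l(x) = 10*log10(x)
        r, g, b = (np.maximum(rgb[..., i].astype(float), 1.0) for i in range(3))
        l = lambda x: 10.0 * np.log10(x)
        return np.stack([l(g), l(r) - l(g), l(b) - (l(r) + l(g)) / 2.0], axis=-1)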
  • 36. Either a Gaussian prior probability model or a K-nearest-neighbor classifier is used to model example data labeled as skin or non-skin (S15). For the Gaussian case, two class models are trained, and when a new pixel is presented for classification the likelihood ratio P(skin)/P(non-skin) is computed as a classification score (S16). In the nearest-neighbor case, the classification score is computed as the average class membership value (1 for skin, 0 for non-skin) of the K training data points nearest to the new pixel, where proximity is defined in the log color-opponent space.
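A sketch of both scoring rules, using scipy's multivariate normal for the Gaussian case; the training-data shapes, K, and the epsilon are assumptions:

```python
import numpy as np
from scipy.stats import multivariate_normal

def train_gaussians(skin, nonskin):
    """Fit one Gaussian per class to (N, 3) log color-opponent samples."""
    fit = lambda x: multivariate_normal(x.mean(axis=0), np.cov(x.T))
    return fit(skin), fit(nonskin)

def gaussian_score(pixels, p_skin, p_nonskin):
    """Classification score as the likelihood ratio P(skin)/P(non-skin)."""
    return p_skin.pdf(pixels) / (p_nonskin.pdf(pixels) + 1e-12)

def knn_score(pixel, train_pts, train_labels, k=15):
    """Average class membership (1 skin, 0 non-skin) of the k nearest
    training points, with proximity in log color-opponent space."""
    d = np.linalg.norm(train_pts - pixel, axis=1)
    return train_labels[np.argsort(d)[:k]].mean()
```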
  • 37. In another exemplary embodiment of the invention, in the interest of computational efficiency at run time, a lookup table can be precomputed for all input values, quantizing the classification score (skin similarity value) to 8 bits and the input color channel values to 6, 7 or 8 bits. This corresponds to a lookup table ranging between 256 KB and 16 MB in size. This information can be stored as a texture map for cases in which the computer graphics texture mapping hardware supports “pixel textures”, in which each pixel of an input image being rendered generates texture coordinates according to its RGB value. Otherwise, a traditional lookup table operation can be performed on input images with the main CPU. The use of texture mapping hardware for color detection can offer dramatic speed advantages relative to conventional methods.
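A sketch of the table precomputation at 6 bits per channel; the score-function interface and quantization details are assumptions consistent with the sizes given above:

```python
import numpy as np

def build_skin_lut(score_fn, bits=6):
    """Tabulate an 8-bit skin-similarity score over all quantized RGB values.

    The table holds 2**(3*bits) one-byte entries: 256 KB at 6 bits per
    channel, 16 MB at 8 bits.  `score_fn` maps an (N, 3) array of RGB
    tuples to scores in [0, 1] (a hypothetical classifier interface).
    """
    levels = 2 ** bits
    axis = np.arange(levels) * (256 // levels)       # dequantized channel values
    r, g, b = np.meshgrid(axis, axis, axis, indexing="ij")
    rgb = np.stack([r, g, b], axis=-1).reshape(-1, 3)
    scores = np.clip(score_fn(rgb) * 255.0, 0, 255)  # quantize score to 8 bits
    return scores.astype(np.uint8).reshape(levels, levels, levels)

# Run-time use: score = lut[r >> (8 - bits), g >> (8 - bits), b >> (8 - bits)]
```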
  • 38. After the skin/non-skin results are obtained from the lookup table, segmentation and grouping analysis are performed on the classification score image (S17). The same tracking algorithm described above for range image processing is used, i.e., morphological smoothing, thresholding and connected components computation. In this case, however, the low-gradient mask from the range module is applied before smoothing. As shown in FIG. 5, the color detection and segmentation module 230 searches for skin color only within the target range. This restricts color regions to be identified only within the boundary of range regions; if spurious skin hue is present in the background it will not adversely affect the shape of foreground skin color regions. Connected component regions are tracked from frame to frame with the constraint that temporal correspondence is not permitted between regions whose three-dimensional size changes by more than a threshold amount.
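A sketch of that smoothing/threshold/grouping chain with scipy.ndimage; the structuring-element size and thresholds are assumed values:

```python
import numpy as np
from scipy import ndimage

def segment_score_image(score_img, range_mask, thresh=128, min_area=50):
    """Apply the range module's mask, morphologically smooth, threshold,
    and label connected components of the skin-score image."""
    masked = np.where(range_mask, score_img, 0)      # restrict to target range
    smooth = ndimage.grey_closing(masked, size=(5, 5))
    binary = smooth > thresh
    labels, n = ndimage.label(binary)                # connected components
    areas = ndimage.sum(binary, labels, index=range(1, n + 1))
    keep = [i + 1 for i, a in enumerate(areas) if a >= min_area]
    return labels, keep
```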
  • 39. In each frame, the median hue and saturation of the skin, clothing and hair regions are calculated for input to a person classification algorithm in the personnel classification module 250. These computations are based on the identification of each target as described above. For the skin color computation, the connected component corresponding to the target silhouette is used to mask the original color data, and the median hue and saturation are calculated over all pixels in the masked region. Hair and clothing color analyses are performed in the same manner. The determination of the hair region starts with the target's silhouette and removes the pixels identified by the skin color computation; only the head region of the target's silhouette is considered, which is estimated as all points in the silhouette above the bottom of the face target as determined by the skin color data. The determination of the clothing region uses the inverse approach.
  • 40. Once the skin, hair and clothing color descriptions are estimated, they are input into the personnel classification module 250, where they are stored in a database of recent users for mid- and long-term tracking purposes. More particularly, if a person whose image is being tracked steps out of the viewed scene and then returns later that same day, the combination of skin, hair and clothing colors can be used to immediately identify that person as one who had been tracked earlier. If the person does not return until the next day, or some time later, the clothing colors may be different. However, the skin and hair colors, together with the estimated height of the person, may still be sufficient to adequately distinguish that person from the other recent users.
  • 41. To distinguish a head from hands and other body parts, pattern recognition methods which directly model statistical appearance are used in the face pattern classification module 240. In one example, the appearance of “faces” vs. “non-faces” is modeled via a neural network or a Gaussian mixture model. Such approaches are described in “Neural Network-Based Face Detection”, Proc. IEEE Conference on Computer Vision and Pattern Recognition, 1996, by Rowley et al., and “Example-based Learning for View-Based Human Face Detection”, Proceedings of the ARPA IU Workshop '94, 1994, by Sung et al. This module reports the bounding box of the face region in the input image, masked by the foreground depth region, as illustrated in FIG. 5. Face detection per se is reliable across many different users and imaging conditions, but it is relatively slow and requires that a frontal view of the face be present. For real-time tracking, and when the target is not in a direct frontal pose, tracking via the face pattern classification module 240 alone can be error-prone. In concert with the color tracking module 230 and the range computation module 210, however, much more robust performance is obtained.
  • 42. More particularly, face detection is initially applied over the entire image. If a region corresponding to a face is detected, it is passed on to the integration module 255 as a candidate head location. Short-term tracking is performed in the module 240 for subsequent frames by searching within windows around the locations detected in the previous frame. If a face is detected in a window, it is considered to be in short-term correspondence with a previously detected face. If no face is detected in the new frame, but the face detected in a previous frame overlapped a color or range region, the face detection module is updated by the integration module 255 to move with that region. Thus, faces can be followed through successive frames even when a positive face detection does not occur for several frames.
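In sketch form, one step of this control flow might look as follows; the detector interface, window margin, and region record are all assumptions:

```python
def track_face(detect, frame, prev_face, overlap_region, margin=20):
    """One step of short-term face tracking (illustrative).

    `detect(frame, window)` is a hypothetical detector returning a face
    box (x0, y0, x1, y1) or None; `overlap_region` is the color/range
    region the previous face overlapped, carrying a 'box' and the
    recorded face offset within it.
    """
    if prev_face is None:
        return detect(frame, None)               # full-image detection
    x0, y0, x1, y1 = prev_face
    window = (x0 - margin, y0 - margin, x1 + margin, y1 + margin)
    face = detect(frame, window)                 # search near last location
    if face is not None:
        return face                              # short-term correspondence
    if overlap_region is not None:               # coast with the region
        rx, ry = overlap_region["box"][:2]
        dx, dy = overlap_region["face_offset"]
        return (rx + dx, ry + dy, rx + dx + (x1 - x0), ry + dy + (y1 - y0))
    return None
```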
  • 43. Once color regions or range-defined head candidates have been found within the target's silhouette from one frame to the next, the results obtained by the face pattern classification module 240 identify which regions correspond to the head. When a face is detected, the overlapping color or range region is marked, and the relative offset of the face detection result to the bounding box of the color or range region is recorded in the integration module 255. Regions are tracked from frame to frame as in the range case, with the additional constraint that a size constancy requirement is enforced: temporal correspondence is not assumed between regions if their three-dimensional size becomes considerably smaller or larger.
  • 44. When a region does change size dramatically, an additional test is performed to determine whether the target region merged or split with another region relative to the previous frame. This may occur, for instance, if a person being tracked occasionally touches his or her face with a hand. If this has occurred, the face detection label and subregion position information are maintained despite the merge or split. To determine which color region to follow, an assumption is made that the face did not move: the screen coordinates of the face subregion in the previous frame are computed, and the regions it overlaps in the current frame are re-evaluated. If two regions have merged, the tracking follows the merged region, offset so that the face's absolute position on the screen is the same as in the previous frame. If two regions have split, the tracking follows the region closest to its position in the previous frame.
  • 45. Once the face is detected and able to be tracked, in accordance with one implementation of the invention, this information is fed to an application program 260 which manipulates the display itself. For instance, the application may use video texture mapping techniques to apply a distortion and morphing algorithm to the user's face. For discussion purposes, it is assumed that texture and position coordinates are both normalized to a range from 0 to 1. A vertex is defined to be in “canonical coordinates” when its position and texture coordinates are identical. To construct a display, a background rectangle covering the display (from 0,0 to 1,1) is generated in canonical coordinates. This creates a display which is equivalent to a non-distorted, pass-through video window.
  • 46. To perform face distortions, a mesh is defined over the region of the user's head. Within the external contour of the head region, vertices are placed optionally at the contour boundary as well as at evenly sampled interior points. Initially all vertices are placed in canonical coordinates, and set to have neutral base color.
  • 47. Color distortions may be effected by manipulating the base color of each vertex. Shape distortions are applied in one of two modes: parametric or physically based. In the parametric mode, distortions are performed by adding a deformation vector to each vertex position, expressed as a weighted sum of fixed basis deformations. These bases can be constructed so as to keep the borders of the distortion region in approximately canonical coordinates, so that there will be no apparent seams in the video effect. In the physically based mode, forces can be applied to each vertex and position changes are computed using an approximation to an elastic surface. As a result, a vertex can be “pulled” in a given direction, and the entire mesh will deform as if it were a rubber sheet. FIGS. 6a-6d illustrate four examples of various types of basis deformations, and FIG. 6e depicts a physically based distortion effect applied to the face of the user shown in FIG. 5. Specifically, FIG. 6a shows spherical expansion, FIG. 6b shows spherical shrinking, FIG. 6c illustrates a swirl effect, FIG. 6d shows lateral expansion, and FIG. 6e depicts a vertical sliding effect.
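A minimal sketch of the parametric mode, with basis deformation fields represented as per-vertex displacement arrays; the array shapes are assumptions:

```python
import numpy as np

def deform_vertices(vertices, bases, weights):
    """Add a weighted sum of fixed basis deformations to each vertex.

    vertices: (N, 2) positions in canonical coordinates.
    bases:    (K, N, 2) displacement fields (expansion, swirl, ...),
              built to vanish at the region border so the effect blends
              into the video without seams.
    weights:  (K,) weight parameters, possibly time-varying.
    """
    return vertices + np.tensordot(weights, bases, axes=1)

# Example: blend 30% spherical expansion with 10% swirl (hypothetical fields):
# distorted = deform_vertices(mesh, np.stack([expand, swirl]), np.array([0.3, 0.1]))
```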
  • 48. The weight parameters associated with parametric basis deformations can vary over time, and can be expressed as a function of several relevant variables describing the state of the user: the distance of the user from the screen, the user's position on the floor in front of the display, or the user's overall body pose. In addition, the weight parameters can vary randomly, or according to a script or external control. Forces for the physically based model can be input with an external interface, randomly, or directly in the image as the user's face touches other objects or body parts.
  • 49. In another embodiment of the invention, when a region is identified as a face, based on the face pattern detection algorithm of the face pattern classification module 240, the face pattern (a grayscale sub-image) in the target region can be normalized and passed to the personnel classification system 250. For optimal classification, the scale, alignment, and view of detected faces should be comparable. There is a large amount of variety in the face regions identified by a system of the type described in the previously mentioned article by Rowley et al., which does not employ normalization. For instance, faces are often identified which exhibit a substantial out-of-plane rotation. This is a good property for a detection system, but in the context of identification it makes the problem more difficult. Several steps are therefore used in the process of the present invention to achieve a set of geometrically normalized face patterns for use in classification; this process provides enough normalization to demonstrate the value of face patterns in a multi-modal person identification system. First, all the target regions are scaled to a common size. Each identified face target is then compared, using simple normalized correlation, with an example face at a canonical scale and view (e.g., upright and frontal), and face targets which vary radically from this model are discarded. During the comparison with the canonical face, the location of the maximum correlation score is recorded, and the face pattern is translated to this alignment. While the face identification algorithm discussed above can be used to identify a face, other more powerful identification algorithms, such as an eigenface technique, could also be employed.
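A sketch of this normalization chain (rescale, compare by normalized correlation, reject, re-align); the nearest-neighbor rescale, roll-based translation, and rejection threshold are all crude illustrative stand-ins:

```python
import numpy as np
from scipy.signal import correlate2d

def normalize_face(patch, canonical, reject=0.4):
    """Scale a detected face patch to the canonical size, compare it to a
    canonical frontal face by normalized correlation, discard radically
    different views, and translate to the best alignment."""
    h, w = canonical.shape
    r = np.linspace(0, patch.shape[0] - 1, h).astype(int)   # crude rescale
    c = np.linspace(0, patch.shape[1] - 1, w).astype(int)
    p = patch[np.ix_(r, c)].astype(np.float64)
    p = (p - p.mean()) / (p.std() + 1e-9)                   # zero mean, unit var
    q = (canonical - canonical.mean()) / (canonical.std() + 1e-9)
    corr = correlate2d(p, q, mode="same") / p.size
    if corr.max() < reject:            # varies radically from the model
        return None
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    return np.roll(p, (h // 2 - dy, w // 2 - dx), axis=(0, 1))
```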
  • 50. As discussed previously, when a target is momentarily occluded or exits the scene, the short-term tracking will fail, since the position and size correspondences in each module will no longer apply. In order to track a target over medium- and long-term time scales, statistical appearance models are used. Each module computes an estimate of certain user attributes, as discussed above with respect to FIG. 2. If a target is occluded for a medium amount of time, attributes such as body metrics, skin, hair and clothing are used to determine the identity of the target. However, if a target is occluded or missing for a long amount of time (e.g., more than one day), attributes that vary from day to day cannot be utilized for identification purposes.
  • 51. Therefore, when a person is observed, an attempt is made to determine whether the individual has been previously tracked. A previously identified individual is deemed to have generated the new observations when the calculated probability is above a minimum threshold. In order to determine the identity of a target, likelihood is integrated over time and modality: at time t, the identity estimate is
  • u* = argmax_j P(U_j | O_t)
  • 52. where
  • P(U_j | O_t) = P(U_j | F_0, ..., F_t, H_0, ..., H_t, C_0, ..., C_t)
  • 53. and where
  • 54. O_t is the cumulative user observation through time t,
  • 55. F_t, H_t, and C_t are the face pattern, height and color observations at time t, and
  • 56. U_j are the saved statistics for person j.
  • 57. Time is restarted at t=0 when a new range silhouette is tracked. For purposes of this discussion, P(U_j) is assumed to be uniform across all users. With Bayes' rule and the assumption of independence between modalities:
  • u* = argmax_j ( P(F_0, ..., F_t | U_j) P(H_0, ..., H_t | U_j) P(C_0, ..., C_t | U_j) )
  • 58. Assuming the noise in the sensor and segmentation routines is independent from frame to frame, the posterior probabilities at different times may be treated as independent, so the probability in each modality can be computed incrementally:
  • P(F_0, ..., F_t | U_j) = P(F_0, ..., F_{t-1} | U_j) P(F_t | U_j)
  • 59. Probability is computed similarly for the range and color data.
  • 60. Mean and covariance data for the observed user color data are collected, as are the mean and variance of user height. The likelihoods P(H_t | U_j) and P(C_t | U_j) are computed assuming a Gaussian density model. For face pattern data, the size-normalized and position-normalized mean pattern for each user is stored, and P(F_t | U_j) is approximated with an empirically determined density which is a function of the normalized correlation of F_t with the mean pattern for person j.
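Putting the pieces together, a sketch of the per-frame identity update under the independence assumptions above; the per-user density objects (p_face, p_height, p_color) are hypothetical interfaces:

```python
import numpy as np

def update_identity(log_like, users, f_t, h_t, c_t):
    """Accumulate log P(F_0..t, H_0..t, C_0..t | U_j) for each user j and
    return u* = argmax_j, taking P(U_j) as uniform across users."""
    eps = 1e-300                       # numerical floor for log of tiny densities
    for j, u in enumerate(users):      # users[j] exposes per-modality densities
        log_like[j] += (np.log(u.p_face(f_t) + eps) +
                        np.log(u.p_height(h_t) + eps) +
                        np.log(u.p_color(c_t) + eps))
    return int(np.argmax(log_like))
```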
  • 61. Like multi-modal person detection and tracking, multi-modal person identification is more robust than identification systems based on a single data modality. Body metrics, color, and face pattern each present independent classification data and are accompanied by similarly independent failure modes. Although face patterns are perhaps the most common data source for current passive person classification methods, body metrics and color information are not normally incorporated in identification systems because they do not provide sufficient discrimination to justify their use alone. However, combined with each other and with face patterns, these other modalities can provide important clues to discriminate otherwise similar people, or help classify people when only degraded data is available in other modes.
  • 62. Once the viewer has been identified, for instance from a database of recent viewers of the system, that data can be provided to, or otherwise used to control, the application program 260. For example, a person could sit down in front of a computer and be detected by the imaging system. The personnel identification module could then identify the person sitting before the computer and launch a specific application program that the individual always desires to have running. Alternatively, the person's identification can be fed to the computer's operating system to cause it to display that individual's personalized computer desktop, e-mail, etc. In still another possible application, a kiosk could be set up to run different applications for different viewers. For example, a kiosk for selling items could present items more likely to appeal to a male or female depending on the person standing before the kiosk.
  • 63. The preceding discussion of the present invention was presented in the context of a single user of interest in the scene being imaged. However, the principles which underlie the invention can be used to track multiple users simultaneously and to apply the appropriate application functions to each, e.g., distorting each user's face. To implement such a feature, a separate target region is determined for each person of interest in the scene, based upon the range and color information, and the foregoing techniques are applied to each such target region. In the virtual mirror embodiment, for example, one user's face can be morphed or combined with the faces of other present or past users of the system, to add features to the user's face. Distorting or morphing the user's face onto other characters, virtual or real, is also possible.
  • 64. The above-described interactive display can be implemented using three computer systems, e.g., one personal computer and two workstations, an NTSC video monitor, stereo video cameras, a dedicated stereo computation PC board, and an optical half-mirror. Depth estimates are computed on the stereo PC board based on input from the stereo cameras, and 128×128 range maps are sent over a network from the PC to the first workstation at approximately 20 Hz. On this workstation, color video is digitized at 640×480, color lookup and connected components analysis are performed at 10-20 Hz, and the output image is constructed by applying the acquired video as a texture source for the background rectangle and the face mesh (at 10-20 Hz). A second workstation performs the face detection routines at 128×128 resolution at approximately 2-3 Hz, using either its own digitized copy of the color video signal or a sub-sampled source image sent over the network. It should also be understood that while the above-mentioned hardware implementation can be used with the present embodiments of the invention, other less expensive hardware could also be used.
  • 65. While the present invention has been described with respect to its preferred embodiments, those skilled in the art will recognize that the present invention is not limited to the specific embodiments described and illustrated herein. Different embodiments and adaptations besides those shown and described herein, as well as many variations, modifications and equivalent arrangements, will be apparent or will be reasonably suggested by the foregoing specification and drawings, without departing from the substance or scope of the invention. For example, the disclosed system achieves its robust performance in detection, tracking, and identification through the combination of three specific visual modalities: range, color, and pattern. Additional independent modalities could serve to further increase robustness and performance. For instance, the computation of optical flow or visual motion fields could assist in short-term tracking by providing estimates of object trajectory, as well as improve figure/ground segmentation.
  • 66. The presently disclosed embodiments are therefore considered in all respects to be illustrative, and not restrictive, of the principles which underlie the invention. The invention is indicated by the appended claims, rather than the foregoing description, and all changes that come within the scope and range of equivalents thereof are intended to be embraced therein.

Claims (24)

What is claimed is:
1. An image detecting and tracking system, comprising:
at least two image sensing devices, each of said image sensing devices producing image data;
a first tracking module which receives the image data, generates a disparity image based upon the image data from each of said image sensing devices, and determines a target region in said disparity image;
a second tracking module which classifies and tracks said target region through color segmentation; and
a third tracking module which distinguishes individual features and tracks identified features which are located within a classified target region.
2. The image detecting and tracking system of claim 1, wherein said third tracking module distinguishes individual features based upon intensity patterns in a sensed image.
3. The image detecting and tracking system of claim 1, wherein said first tracking module determines the relative distances of respective target regions in the sensed image.
4. The image detecting and tracking system of claim 1, wherein said first tracking module locates a target area using a connected components grouping analysis.
5. The image detecting and tracking system of claim 4, wherein said image data is received in the form of video frames and said connected components grouping analysis is performed for each pair of video frames received from said image detecting devices.
6. The image detecting and tracking system of claim 1, wherein said classification in said second module is performed with a Gaussian prior probability model.
7. The image detecting and tracking system of claim 1, wherein said first tracking module generates a boolean mask based upon a determined target region, and said second and third tracking modules only process image data contained within the mask.
8. The image detecting and tracking system of claim 7, wherein said boolean mask corresponds to the silhouette of a person detected in the sensed image.
9. The image detecting and tracking system of claim 1, wherein said individual features are human features.
10. A method for image detecting and tracking comprising:
detecting an image via two separate optical paths;
receiving image data from said paths, generating a disparity image based upon the image data from each of said paths, and determining a target region in said disparity image;
classifying and tracking said target region through color segmentation;
detecting facial patterns within said target region based on said image data; and
displaying an image of the facial patterns detected within said target region.
11. The image detecting and tracking method of claim 10, wherein said disparity image is generated using the census algorithm.
12. The image tracking and detecting method of claim 10, wherein said step of locating a target area uses a connected components grouping analysis.
13. The image tracking and detecting method of claim 12, wherein said image data is received in the form of video frames and said connected components grouping analysis is performed for each set of video frames received from said cameras.
14. The image tracking and detecting method of claim 10, wherein said classification employs a Gaussian prior probability model.
15. The image tracking and detecting method of claim 10, wherein said displayed facial patterns are distorted relative to the originally detected image.
16. A system for executing an application in accordance with the presence of an identified individual, comprising:
a detector which discriminates between a human image and a background area in a video signal and outputs an image signal representative thereof;
a first processing system which receives said image signal and tracks the location of the human image over time;
a second processing system which determines characteristics of the tracked human image, and outputs characteristics of a human;
an identification system which receives said characteristics of said human and identifies a particular individual from a plurality of possible individuals; and
an application program which performs a function based upon said identification of the individual.
17. The system of claim 16, wherein said second processing system determines a face region for the tracked human image.
18. The system of claim 17, wherein said application program comprises:
distortion means which distorts the image in said face region; and
display means which displays the distorted image in said face region.
19. The system of claim 18, wherein the facial region is distorted separately from the remainder of the image.
20. The system of claim 16, wherein said application program causes information to be displayed which is associated with the identified individual.
21. The system of claim 16, wherein said plurality of possible individuals are stored in a database of images which have previously been detected by said system.
22. An identification system, comprising:
at least two image sensing devices, each of said image sensing devices producing image data;
a first tracking module which receives the image data from each of said image sensing devices, locates and tracks a target area in the sensed image and provides a range identity description;
a second tracking module which classifies said target area through color segmentation and outputs a color identity description;
a third tracking module which distinguishes individual features located within said classified target area and outputs a face identity description; and
a classification module which receives said range identity description, said color identity description and said face identity description, and estimates an identity of a person whose image is contained within said target area.
23. The identification system of claim 22, wherein said range identity description is a height of the target area in said disparity image.
24. The identification system of claim 22, wherein said color identity description is a skin color and hair color designation.
US09/726,425 1997-08-01 2000-12-01 Method and apparatus for personnel detection and tracking Expired - Lifetime US6445810B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US09/726,425 US6445810B2 (en) 1997-08-01 2000-12-01 Method and apparatus for personnel detection and tracking

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US5457297P 1997-08-01 1997-08-01
US09/102,101 US6188777B1 (en) 1997-08-01 1998-06-22 Method and apparatus for personnel detection and tracking
US09/726,425 US6445810B2 (en) 1997-08-01 2000-12-01 Method and apparatus for personnel detection and tracking

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/102,101 Division US6188777B1 (en) 1997-08-01 1998-06-22 Method and apparatus for personnel detection and tracking

Publications (2)

Publication Number Publication Date
US20010000025A1 true US20010000025A1 (en) 2001-03-15
US6445810B2 US6445810B2 (en) 2002-09-03

Family

ID=26733204

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/102,101 Expired - Lifetime US6188777B1 (en) 1997-08-01 1998-06-22 Method and apparatus for personnel detection and tracking
US09/726,425 Expired - Lifetime US6445810B2 (en) 1997-08-01 2000-12-01 Method and apparatus for personnel detection and tracking

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/102,101 Expired - Lifetime US6188777B1 (en) 1997-08-01 1998-06-22 Method and apparatus for personnel detection and tracking

Country Status (4)

Country Link
US (2) US6188777B1 (en)
EP (1) EP0998718A1 (en)
AU (1) AU8584898A (en)
WO (1) WO1999006940A1 (en)

Cited By (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020044682A1 (en) * 2000-09-08 2002-04-18 Weil Josef Oster Method and apparatus for subject physical position and security determination
WO2003049035A2 (en) * 2001-12-06 2003-06-12 Koninklijke Philips Electronics N.V. Method and apparatus for automatic face blurring
US6591001B1 (en) * 1998-10-26 2003-07-08 Oki Electric Industry Co., Ltd. Image-input device
US6654047B2 (en) * 1998-10-27 2003-11-25 Toshiba Tec Kabushiki Kaisha Method of and device for acquiring information on a traffic line of persons
WO2003100710A1 (en) * 2002-05-22 2003-12-04 A4Vision Methods and systems for detecting and recognizing objects in a controlled wide area
US20030231788A1 (en) * 2002-05-22 2003-12-18 Artiom Yukhin Methods and systems for detecting and recognizing an object based on 3D image data
US20030235335A1 (en) * 2002-05-22 2003-12-25 Artiom Yukhin Methods and systems for detecting and recognizing objects in a controlled wide area
WO2004052691A1 (en) * 2002-12-12 2004-06-24 Daimlerchrysler Ag Method and device for determining a three-dimension position of passengers of a motor car
US20050105821A1 (en) * 2003-11-18 2005-05-19 Fuji Xerox Co., Ltd. Image processing apparatus, image processing method, and program
US20050152579A1 (en) * 2003-11-18 2005-07-14 Samsung Electronics Co., Ltd. Person detecting apparatus and method and privacy protection system employing the same
US20050167588A1 (en) * 2003-12-30 2005-08-04 The Mitre Corporation Techniques for building-scale electrostatic tomography
US20050207649A1 (en) * 2004-03-22 2005-09-22 Fuji Photo Film Co., Ltd. Particular-region detection method and apparatus, and program therefor
US7003135B2 (en) * 2001-05-25 2006-02-21 Industrial Technology Research Institute System and method for rapidly tracking multiple faces
US20060045372A1 (en) * 2004-08-27 2006-03-02 National Cheng Kung University Image-capturing device and method for removing strangers from an image
US20060198554A1 (en) * 2002-11-29 2006-09-07 Porter Robert M S Face detection
US20070013791A1 (en) * 2005-07-05 2007-01-18 Koichi Kinoshita Tracking apparatus
US20070076947A1 (en) * 2005-10-05 2007-04-05 Haohong Wang Video sensor-based automatic region-of-interest detection
US20070098303A1 (en) * 2005-10-31 2007-05-03 Eastman Kodak Company Determining a particular person from a collection
US7274803B1 (en) 2002-04-02 2007-09-25 Videomining Corporation Method and system for detecting conscious hand movement patterns and computer-generated visual feedback for facilitating human-computer interaction
US20070237364A1 (en) * 2006-03-31 2007-10-11 Fuji Photo Film Co., Ltd. Method and apparatus for context-aided human identification
US20070242878A1 (en) * 2006-04-13 2007-10-18 Tandent Vision Science, Inc. Method and system for separating illumination and reflectance using a log color space
US7317812B1 (en) 2002-11-15 2008-01-08 Videomining Corporation Method and apparatus for robustly tracking objects
KR100825689B1 (en) 2006-08-18 2008-04-29 학교법인 포항공과대학교 Facial Disguise Discrimination method
US20080279426A1 (en) * 2007-05-09 2008-11-13 Samsung Electronics., Ltd. System and method for verifying face of user using light mask
US20090103779A1 (en) * 2006-03-22 2009-04-23 Daimler Ag Multi-sensorial hypothesis based object detector and object pursuer
US7587082B1 (en) 2006-02-17 2009-09-08 Cognitech, Inc. Object recognition based on 2D images and 3D models
US20100160049A1 (en) * 2008-12-22 2010-06-24 Nintendo Co., Ltd. Storage medium storing a game program, game apparatus and game controlling method
US20100160044A1 (en) * 2008-12-22 2010-06-24 Tetsuya Satoh Game program and game apparatus
KR100996542B1 (en) 2008-03-31 2010-11-24 성균관대학교산학협력단 Image Processing Apparatus and Method for Detecting Motion Information in Real Time
US7916894B1 (en) * 2007-01-29 2011-03-29 Adobe Systems Incorporated Summary of a video using faces
US20110081052A1 (en) * 2009-10-02 2011-04-07 Fotonation Ireland Limited Face recognition performance using additional image features
US20110193863A1 (en) * 2008-10-28 2011-08-11 Koninklijke Philips Electronics N.V. Three dimensional display system
US20120014562A1 (en) * 2009-04-05 2012-01-19 Rafael Advanced Defense Systems Ltd. Efficient method for tracking people
US20120096356A1 (en) * 2010-10-19 2012-04-19 Apple Inc. Visual Presentation Composition
US20120148118A1 (en) * 2010-12-09 2012-06-14 Electronics And Telecommunications Research Institute Method for classifying images and apparatus for the same
US20130307978A1 (en) * 2012-05-17 2013-11-21 Caterpillar, Inc. Personnel Classification and Response System
US20130322741A1 (en) * 2012-06-05 2013-12-05 DRVision Technologies LLC. Teachable pattern scoring method
US8666124B2 (en) 2006-08-11 2014-03-04 DigitalOptics Corporation Europe Limited Real-time face tracking in a digital image acquisition device
US8934680B2 (en) 2006-08-11 2015-01-13 Fotonation Limited Face tracking for controlling imaging parameters
CN112614160A (en) * 2020-12-24 2021-04-06 中标慧安信息技术股份有限公司 Multi-object face tracking method and system
WO2022060339A1 (en) * 2020-09-18 2022-03-24 V-Count Teknoloji Anonim Sirketi System and method of personnel exception in visitor count

Families Citing this family (514)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USRE47908E1 (en) 1991-12-23 2020-03-17 Blanding Hovenweep, Llc Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
USRE48056E1 (en) 1991-12-23 2020-06-16 Blanding Hovenweep, Llc Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
US5903454A (en) 1991-12-23 1999-05-11 Hoffberg; Linda Irene Human-factored interface corporating adaptive pattern recognition based controller apparatus
US6850252B1 (en) 1999-10-05 2005-02-01 Steven M. Hoffberg Intelligent electronic appliance system and method
USRE46310E1 (en) 1991-12-23 2017-02-14 Blanding Hovenweep, Llc Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
US8352400B2 (en) 1991-12-23 2013-01-08 Hoffberg Steven M Adaptive pattern recognition based controller apparatus and method and human-factored interface therefore
US10361802B1 (en) 1999-02-01 2019-07-23 Blanding Hovenweep, Llc Adaptive pattern recognition based control system and method
US6650761B1 (en) * 1999-05-19 2003-11-18 Digimarc Corporation Watermarked business cards and methods
US6693666B1 (en) 1996-12-11 2004-02-17 Interval Research Corporation Moving imager camera for track and range capture
US6215898B1 (en) * 1997-04-15 2001-04-10 Interval Research Corporation Data processing system and method
US6188777B1 (en) * 1997-08-01 2001-02-13 Interval Research Corporation Method and apparatus for personnel detection and tracking
JPH11250071A (en) * 1998-02-26 1999-09-17 Minolta Co Ltd Image database constructing method, image database device and image information storage medium
US6593956B1 (en) * 1998-05-15 2003-07-15 Polycom, Inc. Locating an audio source
JP2000048184A (en) * 1998-05-29 2000-02-18 Canon Inc Method for processing image, and method for extracting facial area and device therefor
AUPP400998A0 (en) * 1998-06-10 1998-07-02 Canon Kabushiki Kaisha Face detection in digital images
US6404900B1 (en) * 1998-06-22 2002-06-11 Sharp Laboratories Of America, Inc. Method for robust human face tracking in presence of multiple persons
US6466685B1 (en) * 1998-07-14 2002-10-15 Kabushiki Kaisha Toshiba Pattern recognition apparatus and method
US6358235B1 (en) * 1998-07-29 2002-03-19 The Procter & Gamble Company Soft conformable hollow bag tampon
US20010008561A1 (en) * 1999-08-10 2001-07-19 Paul George V. Real-time object tracking system
US7121946B2 (en) * 1998-08-10 2006-10-17 Cybernet Systems Corporation Real-time head tracking system for computer games and other applications
US7036094B1 (en) 1998-08-10 2006-04-25 Cybernet Systems Corporation Behavior recognition system
US6711278B1 (en) * 1998-09-10 2004-03-23 Microsoft Corporation Tracking semantic objects in vector image sequences
AU1930700A (en) 1998-12-04 2000-06-26 Interval Research Corporation Background estimation and segmentation based on range and color
US7062073B1 (en) 1999-01-19 2006-06-13 Tumey David M Animated toy utilizing artificial intelligence and facial image recognition
US7904187B2 (en) 1999-02-01 2011-03-08 Hoffberg Steven M Internet appliance system and method
US7003134B1 (en) * 1999-03-08 2006-02-21 Vulcan Patents Llc Three dimensional object pose estimation which employs dense depth information
JP3617373B2 (en) * 1999-06-03 2005-02-02 オムロン株式会社 Gate device
US6807291B1 (en) * 1999-06-04 2004-10-19 Intelligent Verification Systems, Inc. Animated toy utilizing artificial intelligence and fingerprint verification
US7050606B2 (en) * 1999-08-10 2006-05-23 Cybernet Systems Corporation Tracking and gesture recognition system particularly suited to vehicular control applications
US6526161B1 (en) * 1999-08-30 2003-02-25 Koninklijke Philips Electronics N.V. System and method for biometrics-based facial feature extraction
WO2001028238A2 (en) * 1999-10-08 2001-04-19 Sarnoff Corporation Method and apparatus for enhancing and indexing video and audio signals
US6792135B1 (en) * 1999-10-29 2004-09-14 Microsoft Corporation System and method for face detection through geometric distribution of a non-intensity image property
EP1102210A3 (en) * 1999-11-16 2005-12-14 Fuji Photo Film Co., Ltd. Image processing apparatus, image processing method and recording medium
US6754389B1 (en) * 1999-12-01 2004-06-22 Koninklijke Philips Electronics N.V. Program classification using object tracking
US6658136B1 (en) * 1999-12-06 2003-12-02 Microsoft Corporation System and process for locating and tracking a person or object in a scene using a series of range images
AUPQ464099A0 (en) * 1999-12-14 2000-01-13 Canon Kabushiki Kaisha Emotive editing system
US6807290B2 (en) * 2000-03-09 2004-10-19 Microsoft Corporation Rapid computer modeling of faces for animation
AUPQ684600A0 (en) * 2000-04-11 2000-05-11 Safehouse International Limited An object monitoring system
US7006950B1 (en) * 2000-06-12 2006-02-28 Siemens Corporate Research, Inc. Statistical modeling and performance characterization of a real-time dual camera surveillance system
US6937744B1 (en) * 2000-06-13 2005-08-30 Microsoft Corporation System and process for bootstrap initialization of nonparametric color models
US6774908B2 (en) * 2000-10-03 2004-08-10 Creative Frontier Inc. System and method for tracking an object in a video and linking information thereto
KR20020031630A (en) * 2000-10-20 2002-05-03 구자홍 Method for extraction of face using distortion data of color
US8564661B2 (en) * 2000-10-24 2013-10-22 Objectvideo, Inc. Video analytic rule detection system and method
US9892606B2 (en) 2001-11-15 2018-02-13 Avigilon Fortress Corporation Video surveillance system employing video primitives
US8711217B2 (en) 2000-10-24 2014-04-29 Objectvideo, Inc. Video surveillance system employing video primitives
US20050162515A1 (en) * 2000-10-24 2005-07-28 Objectvideo, Inc. Video surveillance system
US6973201B1 (en) * 2000-11-01 2005-12-06 Koninklijke Philips Electronics N.V. Person tagging in an image processing system utilizing a statistical model based on both appearance and geometric features
JP4590717B2 (en) * 2000-11-17 2010-12-01 ソニー株式会社 Face identification device and face identification method
US7020305B2 (en) * 2000-12-06 2006-03-28 Microsoft Corporation System and method providing improved head motion estimations for animation
US6697502B2 (en) * 2000-12-14 2004-02-24 Eastman Kodak Company Image processing method for detecting human figures in a digital image
US7424175B2 (en) 2001-03-23 2008-09-09 Objectvideo, Inc. Video segmentation using statistical pixel modeling
US6760465B2 (en) 2001-03-30 2004-07-06 Intel Corporation Mechanism for tracking colored objects in a video sequence
US20020140705A1 (en) * 2001-03-30 2002-10-03 Frazer Matthew E. Automated Calibration for colored object tracking
DE60213032T2 (en) * 2001-05-22 2006-12-28 Matsushita Electric Industrial Co. Ltd. Facial detection device, face paw detection device, partial image extraction device, and method for these devices
US7167576B2 (en) * 2001-07-02 2007-01-23 Point Grey Research Method and apparatus for measuring dwell time of objects in an environment
DE10132013B4 (en) * 2001-07-03 2004-04-08 Siemens Ag Multimodal biometrics
US20050008198A1 (en) * 2001-09-14 2005-01-13 Guo Chun Biao Apparatus and method for selecting key frames of clear faces through a sequence of images
US7432940B2 (en) * 2001-10-12 2008-10-07 Canon Kabushiki Kaisha Interactive animation of sprites in a video production
US7383283B2 (en) * 2001-10-16 2008-06-03 Joseph Carrabis Programable method and apparatus for real-time adaptation of presentations to individuals
CA2359269A1 (en) * 2001-10-17 2003-04-17 Biodentity Systems Corporation Face imaging system for recordal and automated identity confirmation
DE10158990C1 (en) * 2001-11-30 2003-04-10 Bosch Gmbh Robert Video surveillance system incorporates masking of identified object for maintaining privacy until entry of authorisation
AU2002362085A1 (en) * 2001-12-07 2003-07-09 Canesta Inc. User interface for electronic devices
US6999620B1 (en) * 2001-12-10 2006-02-14 Hewlett-Packard Development Company, L.P. Segmenting video input using high-level feedback
US8195597B2 (en) * 2002-02-07 2012-06-05 Joseph Carrabis System and method for obtaining subtextual information regarding an interaction between an individual and a programmable device
US8655804B2 (en) 2002-02-07 2014-02-18 Next Stage Evolution, Llc System and method for determining a characteristic of an individual
US6990639B2 (en) * 2002-02-07 2006-01-24 Microsoft Corporation System and process for controlling electronic components in a ubiquitous computing environment using multimodal integration
US10242255B2 (en) 2002-02-15 2019-03-26 Microsoft Technology Licensing, Llc Gesture recognition system using depth perceptive sensors
WO2003071410A2 (en) * 2002-02-15 2003-08-28 Canesta, Inc. Gesture recognition system using depth perceptive sensors
AU2003219926A1 (en) * 2002-02-26 2003-09-09 Canesta, Inc. Method and apparatus for recognizing objects
US7369685B2 (en) * 2002-04-05 2008-05-06 Identix Corporation Vision-based operating method and system
US20040052418A1 (en) * 2002-04-05 2004-03-18 Bruno Delean Method and apparatus for probabilistic image analysis
AUPS170902A0 (en) * 2002-04-12 2002-05-16 Canon Kabushiki Kaisha Face detection and tracking in a video sequence
US7003136B1 (en) * 2002-04-26 2006-02-21 Hewlett-Packard Development Company, L.P. Plan-view projections of depth image data for object tracking
DE10225077B4 (en) * 2002-06-05 2007-11-15 Vr Magic Gmbh Object tracking device for medical operations
US7224731B2 (en) * 2002-06-28 2007-05-29 Microsoft Corporation Motion estimation/compensation for screen capture video
US7085420B2 (en) * 2002-06-28 2006-08-01 Microsoft Corporation Text detection in continuous tone image segments
US7072512B2 (en) * 2002-07-23 2006-07-04 Microsoft Corporation Segmentation of digital video and images into continuous tone and palettized regions
JP3785456B2 (en) * 2002-07-25 2006-06-14 独立行政法人産業技術総合研究所 Safety monitoring device at station platform
US8351647B2 (en) * 2002-07-29 2013-01-08 Videomining Corporation Automatic detection and aggregation of demographics and behavior of people
US8010402B1 (en) 2002-08-12 2011-08-30 Videomining Corporation Method for augmenting transaction data with visually extracted demographics of people using computer vision
US7151530B2 (en) 2002-08-20 2006-12-19 Canesta, Inc. System and method for determining an input selected by a user through a virtual interface
US7526120B2 (en) 2002-09-11 2009-04-28 Canesta, Inc. System and method for providing intelligent airbag deployment
US7340079B2 (en) * 2002-09-13 2008-03-04 Sony Corporation Image recognition apparatus, image recognition processing method, and image recognition program
GB0222113D0 (en) * 2002-09-24 2002-10-30 Koninkl Philips Electronics Nv Image recognition
US7394486B2 (en) * 2002-09-26 2008-07-01 Seiko Epson Corporation Adjusting output image of image data
JP2004118627A (en) * 2002-09-27 2004-04-15 Toshiba Corp Figure identification device and method
US20040066500A1 (en) * 2002-10-02 2004-04-08 Gokturk Salih Burak Occupancy detection and measurement system and method
EP1563686B1 (en) * 2002-11-12 2010-01-06 Intellivid Corporation Method and system for tracking and behavioral monitoring of multiple objects moving through multiple fields-of-view
US7221775B2 (en) * 2002-11-12 2007-05-22 Intellivid Corporation Method and apparatus for computerized image background analysis
US6791487B1 (en) 2003-03-07 2004-09-14 Honeywell International Inc. Imaging methods and systems for concealed weapon detection
US8745541B2 (en) 2003-03-25 2014-06-03 Microsoft Corporation Architecture for controlling a computer using hand gestures
US7665041B2 (en) 2003-03-25 2010-02-16 Microsoft Corporation Architecture for controlling a computer using hand gestures
US20100002070A1 (en) 2004-04-30 2010-01-07 Grandeye Ltd. Method and System of Simultaneously Displaying Multiple Views for Video Surveillance
US7528881B2 (en) * 2003-05-02 2009-05-05 Grandeye, Ltd. Multiple object processing in wide-angle video camera
US7956889B2 (en) 2003-06-04 2011-06-07 Model Software Corporation Video surveillance system
US8363951B2 (en) 2007-03-05 2013-01-29 DigitalOptics Corporation Europe Limited Face recognition training method and apparatus
US7587068B1 (en) 2004-01-22 2009-09-08 Fotonation Vision Limited Classification database for consumer digital images
US7315630B2 (en) 2003-06-26 2008-01-01 Fotonation Vision Limited Perfecting of digital image rendering parameters within rendering devices using face detection
US7844076B2 (en) * 2003-06-26 2010-11-30 Fotonation Vision Limited Digital image processing using face detection and skin tone information
US8553949B2 (en) 2004-01-22 2013-10-08 DigitalOptics Corporation Europe Limited Classification and organization of consumer digital images using workflow, and face detection and recognition
US8896725B2 (en) 2007-06-21 2014-11-25 Fotonation Limited Image capture device with contemporaneous reference image capture mechanism
US7680342B2 (en) 2004-08-16 2010-03-16 Fotonation Vision Limited Indoor/outdoor classification in digital images
US7620218B2 (en) * 2006-08-11 2009-11-17 Fotonation Ireland Limited Real-time face tracking with reference images
US7792335B2 (en) 2006-02-24 2010-09-07 Fotonation Vision Limited Method and apparatus for selective disqualification of digital images
US7362368B2 (en) * 2003-06-26 2008-04-22 Fotonation Vision Limited Perfecting the optics within a digital image acquisition device using face detection
US9692964B2 (en) 2003-06-26 2017-06-27 Fotonation Limited Modification of post-viewing parameters for digital images using image region or feature information
US7565030B2 (en) 2003-06-26 2009-07-21 Fotonation Vision Limited Detecting orientation of digital images using face detection information
US8498452B2 (en) * 2003-06-26 2013-07-30 DigitalOptics Corporation Europe Limited Digital image processing using face detection information
US8682097B2 (en) * 2006-02-14 2014-03-25 DigitalOptics Corporation Europe Limited Digital image enhancement with reference images
US7574016B2 (en) 2003-06-26 2009-08-11 Fotonation Vision Limited Digital image processing using face detection information
US8989453B2 (en) * 2003-06-26 2015-03-24 Fotonation Limited Digital image processing using face detection information
US7471846B2 (en) 2003-06-26 2008-12-30 Fotonation Vision Limited Perfecting the effect of flash within an image acquisition devices using face detection
US7440593B1 (en) 2003-06-26 2008-10-21 Fotonation Vision Limited Method of improving orientation and color balance of digital images using face detection information
US9129381B2 (en) * 2003-06-26 2015-09-08 Fotonation Limited Modification of post-viewing parameters for digital images using image region or feature information
US8948468B2 (en) 2003-06-26 2015-02-03 Fotonation Limited Modification of viewing parameters for digital images using face detection information
US7269292B2 (en) * 2003-06-26 2007-09-11 Fotonation Vision Limited Digital image adjustable compression and resolution using face detection information
US8330831B2 (en) * 2003-08-05 2012-12-11 DigitalOptics Corporation Europe Limited Method of gathering visual meta data using a reference image
US8155397B2 (en) * 2007-09-26 2012-04-10 DigitalOptics Corporation Europe Limited Face tracking in a camera processor
US8593542B2 (en) * 2005-12-27 2013-11-26 DigitalOptics Corporation Europe Limited Foreground/background separation using reference images
US8494286B2 (en) 2008-02-05 2013-07-23 DigitalOptics Corporation Europe Limited Face detection in mid-shot digital images
US7317815B2 (en) * 2003-06-26 2008-01-08 Fotonation Vision Limited Digital image processing composition using face detection information
US7616233B2 (en) * 2003-06-26 2009-11-10 Fotonation Vision Limited Perfecting of digital image capture parameters within acquisition devices using face detection
US7792970B2 (en) * 2005-06-17 2010-09-07 Fotonation Vision Limited Method for establishing a paired connection between media devices
JP2005078376A (en) * 2003-08-29 2005-03-24 Sony Corp Object detection device, object detection method, and robot device
US7286157B2 (en) * 2003-09-11 2007-10-23 Intellivid Corporation Computerized method and apparatus for determining field-of-view relationships among multiple image sensors
US7439074B2 (en) * 2003-09-30 2008-10-21 Hoa Duc Nguyen Method of analysis of alcohol by mass spectrometry
US7346187B2 (en) * 2003-10-10 2008-03-18 Intellivid Corporation Method of counting objects in a monitored environment and apparatus for the same
US7280673B2 (en) * 2003-10-10 2007-10-09 Intellivid Corporation System and method for searching for changes in surveillance video
US7308119B2 (en) * 2003-11-26 2007-12-11 Canon Kabushiki Kaisha Image retrieval apparatus and method, and image display apparatus and method thereof
JP2005167517A (en) * 2003-12-01 2005-06-23 Olympus Corp Image processor, calibration method thereof, and image processing program
US7594121B2 (en) * 2004-01-22 2009-09-22 Sony Corporation Methods and apparatus for determining an identity of a user
US7555148B1 (en) 2004-01-22 2009-06-30 Fotonation Vision Limited Classification system for consumer digital images using workflow, face detection, normalization, and face recognition
US7564994B1 (en) 2004-01-22 2009-07-21 Fotonation Vision Limited Classification system for consumer digital images using automatic workflow and face detection and recognition
US7558408B1 (en) 2004-01-22 2009-07-07 Fotonation Vision Limited Classification system for consumer digital images using workflow and user interface modules, and face detection and recognition
US7551755B1 (en) 2004-01-22 2009-06-23 Fotonation Vision Limited Classification and organization of consumer digital images using workflow, and face detection and recognition
JP3847753B2 (en) 2004-01-30 2006-11-22 株式会社ソニー・コンピュータエンタテインメント Image processing apparatus, image processing method, recording medium, computer program, semiconductor device
JP4483334B2 (en) * 2004-02-18 2010-06-16 富士ゼロックス株式会社 Image processing device
US20050232463A1 (en) * 2004-03-02 2005-10-20 David Hirvonen Method and apparatus for detecting a presence prior to collision
US8427538B2 (en) * 2004-04-30 2013-04-23 Oncam Grandeye Multiple view and multiple object processing in wide-angle video camera
US7340443B2 (en) * 2004-05-14 2008-03-04 Lockheed Martin Corporation Cognitive arbitration system
GB2414614A (en) * 2004-05-28 2005-11-30 Sony Uk Ltd Image processing to determine most dissimilar images
JP2005346806A (en) * 2004-06-02 2005-12-15 Funai Electric Co Ltd Dvd recorder and recording and reproducing apparatus
US7552091B2 (en) * 2004-06-04 2009-06-23 Endicott Interconnect Technologies, Inc. Method and system for tracking goods
US7142121B2 (en) 2004-06-04 2006-11-28 Endicott Interconnect Technologies, Inc. Radio frequency device for tracking goods
US7627148B2 (en) * 2004-07-06 2009-12-01 Fujifilm Corporation Image data processing apparatus and method, and image data processing program
CA2568633C (en) * 2004-10-15 2008-04-01 Oren Halpern A system and a method for improving the captured images of digital still cameras
US8320641B2 (en) 2004-10-28 2012-11-27 DigitalOptics Corporation Europe Limited Method and apparatus for red-eye detection using preview or other reference images
ATE425634T1 (en) * 2004-12-02 2009-03-15 Swisscom Mobile Ag METHOD AND SYSTEM FOR REPRODUCING THE IMAGE OF A PERSON
US7715597B2 (en) * 2004-12-29 2010-05-11 Fotonation Ireland Limited Method and component for image recognition
US8503800B2 (en) * 2007-03-05 2013-08-06 DigitalOptics Corporation Europe Limited Illumination detection using classifier chains
US7620266B2 (en) * 2005-01-20 2009-11-17 International Business Machines Corporation Robust and efficient foreground analysis for real-time video surveillance
US8009871B2 (en) 2005-02-08 2011-08-30 Microsoft Corporation Method and system to segment depth images and to detect shapes in three-dimensionally acquired data
US7512262B2 (en) * 2005-02-25 2009-03-31 Microsoft Corporation Stereo-based image processing
WO2007094802A2 (en) 2005-03-25 2007-08-23 Intellivid Corporation Intelligent camera selection and object tracking
US20060233258A1 (en) * 2005-04-15 2006-10-19 Microsoft Corporation Scalable motion estimation
US7403642B2 (en) * 2005-04-21 2008-07-22 Microsoft Corporation Efficient propagation for face annotation
US20090041297A1 (en) * 2005-05-31 2009-02-12 Objectvideo, Inc. Human detection and tracking for security applications
KR100695155B1 (en) * 2005-06-18 2007-03-14 삼성전자주식회사 Apparatus and method for detecting occluded face and apparatus and method for discriminating illicit transactor employing the same
US9036028B2 (en) * 2005-09-02 2015-05-19 Sensormatic Electronics, LLC Object tracking and alerts
US7522769B2 (en) * 2005-09-09 2009-04-21 Hewlett-Packard Development Company, L.P. Method and system for skin color estimation from an image
US8019170B2 (en) * 2005-10-05 2011-09-13 Qualcomm, Incorporated Video frame motion-based automatic region-of-interest detection
US7697827B2 (en) 2005-10-17 2010-04-13 Konicek Jeffrey C User-friendlier interfaces for a camera
US8265392B2 (en) * 2006-02-07 2012-09-11 Qualcomm Incorporated Inter-mode region-of-interest video object segmentation
US8150155B2 (en) * 2006-02-07 2012-04-03 Qualcomm Incorporated Multi-mode region-of-interest video object segmentation
US8265349B2 (en) * 2006-02-07 2012-09-11 Qualcomm Incorporated Intra-mode region-of-interest video object segmentation
EP1821237B1 (en) * 2006-02-15 2010-11-17 Kabushiki Kaisha Toshiba Person identification device and person identification method
US7804983B2 (en) 2006-02-24 2010-09-28 Fotonation Vision Limited Digital image acquisition control and correction method and apparatus
KR100695174B1 (en) * 2006-03-28 2007-03-14 삼성전자주식회사 Method and apparatus for tracking listener's head position for virtual acoustics
US8494052B2 (en) * 2006-04-07 2013-07-23 Microsoft Corporation Dynamic selection of motion estimation search ranges and extended motion vector ranges
US8155195B2 (en) * 2006-04-07 2012-04-10 Microsoft Corporation Switching distortion metrics during motion estimation
JP2009533778A (en) * 2006-04-17 2009-09-17 オブジェクトビデオ インコーポレイテッド Video segmentation using statistical pixel modeling
US20070268964A1 (en) * 2006-05-22 2007-11-22 Microsoft Corporation Unit co-location-based motion estimation
US7671728B2 (en) 2006-06-02 2010-03-02 Sensormatic Electronics, LLC Systems and methods for distributed monitoring of remote sites
US7825792B2 (en) * 2006-06-02 2010-11-02 Sensormatic Electronics Llc Systems and methods for distributed monitoring of remote sites
DE602007012246D1 (en) * 2006-06-12 2011-03-10 Tessera Tech Ireland Ltd PROGRESS IN EXTENDING THE AAM TECHNIQUES FROM GRAY CALENDAR TO COLOR PICTURES
US9042606B2 (en) * 2006-06-16 2015-05-26 Board Of Regents Of The Nevada System Of Higher Education Hand-based biometric analysis
EP2050043A2 (en) * 2006-08-02 2009-04-22 Fotonation Vision Limited Face recognition with combined pca-based datasets
US7403643B2 (en) 2006-08-11 2008-07-22 Fotonation Vision Limited Real-time face tracking in a digital image acquisition device
US20100272885A1 (en) * 2006-08-16 2010-10-28 SeekTech, Inc., a California corporation Marking Paint Applicator for Portable Locator
AU2007221976B2 (en) * 2006-10-19 2009-12-24 Polycom, Inc. Ultrasonic camera tracking system and associated methods
US20080147488A1 (en) * 2006-10-20 2008-06-19 Tunick James A System and method for monitoring viewer attention with respect to a display and determining associated charges
US8055067B2 (en) * 2007-01-18 2011-11-08 DigitalOptics Corporation Europe Limited Color segmentation
JP5049356B2 (en) * 2007-02-28 2012-10-17 DigitalOptics Corporation Europe Limited Separation of directional lighting variability in statistical face modeling based on texture space decomposition
EP2123008A4 (en) 2007-03-05 2011-03-16 Tessera Tech Ireland Ltd Face categorization and annotation of a mobile phone contact list
WO2008107002A1 (en) 2007-03-05 2008-09-12 Fotonation Vision Limited Face searching and detection in a digital image acquisition device
JP5121258B2 (en) * 2007-03-06 2013-01-16 Kabushiki Kaisha Toshiba Suspicious behavior detection system and method
US8478523B2 (en) * 2007-03-13 2013-07-02 Certusview Technologies, Llc Marking apparatus and methods for creating an electronic record of marking apparatus operations
US8473209B2 (en) 2007-03-13 2013-06-25 Certusview Technologies, Llc Marking apparatus and marking methods using marking dispenser with machine-readable ID mechanism
US8060304B2 (en) * 2007-04-04 2011-11-15 Certusview Technologies, Llc Marking system and method
US7640105B2 (en) 2007-03-13 2009-12-29 Certusview Technologies, Llc Marking system and method with location and/or time tracking
US8005238B2 (en) 2007-03-22 2011-08-23 Microsoft Corporation Robust adaptive beamforming with enhanced noise suppression
US8005237B2 (en) 2007-05-17 2011-08-23 Microsoft Corp. Sensor array beamformer post-processor
US7916971B2 (en) * 2007-05-24 2011-03-29 Tessera Technologies Ireland Limited Image processing method and apparatus
WO2008150936A1 (en) * 2007-05-30 2008-12-11 Creatier Interactive, Llc Method and system for enabling advertising and transaction within user generated video content
EP2163095A4 (en) * 2007-06-09 2011-05-18 Sensormatic Electronics Llc System and method for integrating video analytics and data analytics/mining
US7965866B2 (en) 2007-07-03 2011-06-21 Shoppertrak Rct Corporation System and process for detecting, tracking and counting human objects of interest
NO327899B1 (en) * 2007-07-13 2009-10-19 Tandberg Telecom As Procedure and system for automatic camera control
US8629976B2 (en) * 2007-10-02 2014-01-14 Microsoft Corporation Methods and systems for hierarchical de-aliasing time-of-flight (TOF) systems
KR101423916B1 (en) * 2007-12-03 2014-07-29 Samsung Electronics Co., Ltd. Method and apparatus for recognizing multiple faces
US20090166684A1 (en) * 2007-12-26 2009-07-02 3Dv Systems Ltd. Photogate cmos pixel for 3d cameras having reduced intra-pixel cross talk
EP2075400B1 (en) * 2007-12-31 2012-08-08 March Networks S.p.A. Video monitoring system
US8750578B2 (en) 2008-01-29 2014-06-10 DigitalOptics Corporation Europe Limited Detecting facial expressions in digital images
US7855737B2 (en) * 2008-03-26 2010-12-21 Fotonation Ireland Limited Method of making a digital camera image of a scene including the camera user
CN103475837B (en) 2008-05-19 2017-06-23 Hitachi Maxell, Ltd. Recording/reproducing device and method
US8385557B2 (en) 2008-06-19 2013-02-26 Microsoft Corporation Multichannel acoustic echo reduction
US8325909B2 (en) 2008-06-25 2012-12-04 Microsoft Corporation Acoustic echo suppression
US8280631B2 (en) 2008-10-02 2012-10-02 Certusview Technologies, Llc Methods and apparatus for generating an electronic record of a marking operation based on marking device actuations
US8965700B2 (en) * 2008-10-02 2015-02-24 Certusview Technologies, Llc Methods and apparatus for generating an electronic record of environmental landmarks based on marking device actuations
US8203699B2 (en) 2008-06-30 2012-06-19 Microsoft Corporation System architecture design for time-of-flight system having reduced differential pixel size, and time-of-flight systems so designed
US10086265B2 (en) * 2008-07-11 2018-10-02 Disney Enterprises, Inc. Video teleconference object enable system
CN102027505A (en) * 2008-07-30 2011-04-20 泰塞拉技术爱尔兰公司 Automatic face and skin beautification using face detection
US8411963B2 (en) 2008-08-08 2013-04-02 The Nielsen Company (U.S.), Llc Methods and apparatus to count persons in a monitored environment
US8442766B2 (en) 2008-10-02 2013-05-14 Certusview Technologies, Llc Marking apparatus having enhanced features for underground facility marking operations, and associated methods and systems
GB2477061B (en) * 2008-10-02 2012-10-17 Certusview Technologies Llc Methods and apparatus for generating electronic records of locate operations
WO2010063463A2 (en) * 2008-12-05 2010-06-10 Fotonation Ireland Limited Face recognition using face tracker classifier data
US8681321B2 (en) 2009-01-04 2014-03-25 Microsoft International Holdings B.V. Gated 3D camera
US8487938B2 (en) * 2009-01-30 2013-07-16 Microsoft Corporation Standard Gestures
US8565477B2 (en) 2009-01-30 2013-10-22 Microsoft Corporation Visual target tracking
US20100199231A1 (en) * 2009-01-30 2010-08-05 Microsoft Corporation Predictive determination
US8295546B2 (en) 2009-01-30 2012-10-23 Microsoft Corporation Pose tracking pipeline
US8267781B2 (en) 2009-01-30 2012-09-18 Microsoft Corporation Visual target tracking
US8565476B2 (en) 2009-01-30 2013-10-22 Microsoft Corporation Visual target tracking
US8682028B2 (en) * 2009-01-30 2014-03-25 Microsoft Corporation Visual target tracking
US8588465B2 (en) 2009-01-30 2013-11-19 Microsoft Corporation Visual target tracking
US7996793B2 (en) 2009-01-30 2011-08-09 Microsoft Corporation Gesture recognizer system architecture
US8294767B2 (en) 2009-01-30 2012-10-23 Microsoft Corporation Body scan
US8577084B2 (en) * 2009-01-30 2013-11-05 Microsoft Corporation Visual target tracking
US8577085B2 (en) * 2009-01-30 2013-11-05 Microsoft Corporation Visual target tracking
US8448094B2 (en) * 2009-01-30 2013-05-21 Microsoft Corporation Mapping a natural input device to a legacy system
CA2897462A1 (en) 2009-02-11 2010-05-04 Certusview Technologies, Llc Management system, and associated methods and apparatus, for providing automatic assessment of a locate operation
US8773355B2 (en) 2009-03-16 2014-07-08 Microsoft Corporation Adaptive cursor sizing
US9256282B2 (en) * 2009-03-20 2016-02-09 Microsoft Technology Licensing, Llc Virtual object manipulation
US8988437B2 (en) 2009-03-20 2015-03-24 Microsoft Technology Licensing, Llc Chaining animations
US9313376B1 (en) 2009-04-01 2016-04-12 Microsoft Technology Licensing, Llc Dynamic depth power equalization
US8942428B2 (en) 2009-05-01 2015-01-27 Microsoft Corporation Isolate extraneous motions
US8660303B2 (en) * 2009-05-01 2014-02-25 Microsoft Corporation Detection of body and props
US8253746B2 (en) 2009-05-01 2012-08-28 Microsoft Corporation Determine intended motions
US9377857B2 (en) 2009-05-01 2016-06-28 Microsoft Technology Licensing, Llc Show body position
US9015638B2 (en) 2009-05-01 2015-04-21 Microsoft Technology Licensing, Llc Binding users to a gesture based system and providing feedback to the users
US8503720B2 (en) 2009-05-01 2013-08-06 Microsoft Corporation Human body pose estimation
US8340432B2 (en) 2009-05-01 2012-12-25 Microsoft Corporation Systems and methods for detecting a tilt angle from a depth image
US9498718B2 (en) 2009-05-01 2016-11-22 Microsoft Technology Licensing, Llc Altering a view perspective within a display environment
US8638985B2 (en) 2009-05-01 2014-01-28 Microsoft Corporation Human body pose estimation
US8181123B2 (en) 2009-05-01 2012-05-15 Microsoft Corporation Managing virtual port associations to users in a gesture-based computing environment
US9898675B2 (en) 2009-05-01 2018-02-20 Microsoft Technology Licensing, Llc User movement tracking feedback to improve tracking
US8649554B2 (en) 2009-05-01 2014-02-11 Microsoft Corporation Method to control perspective for a camera-controlled computer
US20100295782 (en) 2009-05-21 2010-11-25 Yehuda Binder System and method for control based on face or hand gesture detection
US8744121B2 (en) 2009-05-29 2014-06-03 Microsoft Corporation Device for identifying and tracking multiple humans over time
US8418085B2 (en) 2009-05-29 2013-04-09 Microsoft Corporation Gesture coach
US8542252B2 (en) * 2009-05-29 2013-09-24 Microsoft Corporation Target digitization, extraction, and tracking
US20100302365A1 (en) * 2009-05-29 2010-12-02 Microsoft Corporation Depth Image Noise Reduction
US9400559B2 (en) 2009-05-29 2016-07-26 Microsoft Technology Licensing, Llc Gesture shortcuts
US9383823B2 (en) 2009-05-29 2016-07-05 Microsoft Technology Licensing, Llc Combining gestures beyond skeletal
US8509479B2 (en) * 2009-05-29 2013-08-13 Microsoft Corporation Virtual object
US9182814B2 (en) 2009-05-29 2015-11-10 Microsoft Technology Licensing, Llc Systems and methods for estimating a non-visible or occluded body part
US8856691B2 (en) 2009-05-29 2014-10-07 Microsoft Corporation Gesture tool
US8320619B2 (en) 2009-05-29 2012-11-27 Microsoft Corporation Systems and methods for tracking a model
US8693724B2 (en) 2009-05-29 2014-04-08 Microsoft Corporation Method and system implementing user-centric gesture control
US8379101B2 (en) 2009-05-29 2013-02-19 Microsoft Corporation Environment and/or target segmentation
US8625837B2 (en) 2009-05-29 2014-01-07 Microsoft Corporation Protocol and format for communicating an image from a camera to a computing environment
US8487871B2 (en) * 2009-06-01 2013-07-16 Microsoft Corporation Virtual desktop coordinate transformation
US8655084B2 (en) * 2009-06-23 2014-02-18 Board Of Regents Of The Nevada System Of Higher Education, On Behalf Of The University Of Nevada, Reno Hand-based gender classification
CN102804759B (en) 2009-06-24 2016-03-02 Hewlett-Packard Development Company, L.P. Image album creation
US8390680B2 (en) 2009-07-09 2013-03-05 Microsoft Corporation Visual representation expression based on player expression
US9159151B2 (en) 2009-07-13 2015-10-13 Microsoft Technology Licensing, Llc Bringing a visual representation to life via learned input from the user
CA2771286C (en) * 2009-08-11 2016-08-30 Certusview Technologies, Llc Locating equipment communicatively coupled to or equipped with a mobile/portable device
CA2713282C (en) * 2009-08-20 2013-03-19 Certusview Technologies, Llc Marking device with transmitter for triangulating location during marking operations
CA2710189C (en) * 2009-08-20 2012-05-08 Certusview Technologies, Llc Methods and apparatus for assessing marking operations based on acceleration information
WO2011022102A1 (en) * 2009-08-20 2011-02-24 Certusview Technologies, Llc Methods and marking devices with mechanisms for indicating and/or detecting marking material color
US8264536B2 (en) * 2009-08-25 2012-09-11 Microsoft Corporation Depth-sensitive imaging via polarization-state mapping
US8253792B2 (en) * 2009-08-28 2012-08-28 GM Global Technology Operations LLC Vision system for monitoring humans in dynamic environments
US9141193B2 (en) 2009-08-31 2015-09-22 Microsoft Technology Licensing, Llc Techniques for using human gestures to control gesture unaware programs
US8508919B2 (en) 2009-09-14 2013-08-13 Microsoft Corporation Separation of electrical and optical components
US8330134B2 (en) 2009-09-14 2012-12-11 Microsoft Corporation Optical fault monitoring
US8976986B2 (en) * 2009-09-21 2015-03-10 Microsoft Technology Licensing, Llc Volume adjustment based on listener position
US8428340B2 (en) * 2009-09-21 2013-04-23 Microsoft Corporation Screen space plane identification
US8760571B2 (en) * 2009-09-21 2014-06-24 Microsoft Corporation Alignment of lens and image sensor
KR101640039B1 (en) * 2009-09-22 2016-07-18 Samsung Electronics Co., Ltd. Image processing apparatus and method
US9014546B2 (en) 2009-09-23 2015-04-21 Rovi Guides, Inc. Systems and methods for automatically detecting users within detection regions of media devices
US8452087B2 (en) * 2009-09-30 2013-05-28 Microsoft Corporation Image selection techniques
US8723118B2 (en) * 2009-10-01 2014-05-13 Microsoft Corporation Imager for constructing color and depth images
US7961910B2 (en) 2009-10-07 2011-06-14 Microsoft Corporation Systems and methods for tracking a model
US8963829B2 (en) 2009-10-07 2015-02-24 Microsoft Corporation Methods and systems for determining and tracking extremities of a target
US8867820B2 (en) 2009-10-07 2014-10-21 Microsoft Corporation Systems and methods for removing a background of an image
US8564534B2 (en) 2009-10-07 2013-10-22 Microsoft Corporation Human tracking system
US9400548B2 (en) * 2009-10-19 2016-07-26 Microsoft Technology Licensing, Llc Gesture personalization and profile roaming
US20110099476A1 (en) * 2009-10-23 2011-04-28 Microsoft Corporation Decorating a display environment
US8988432B2 (en) * 2009-11-05 2015-03-24 Microsoft Technology Licensing, Llc Systems and methods for processing an image for target tracking
US8843857B2 (en) 2009-11-19 2014-09-23 Microsoft Corporation Distance scalable no touch computing
EP2333692A1 (en) * 2009-12-11 2011-06-15 Alcatel Lucent Method and arrangement for improved image matching
US9244533B2 (en) 2009-12-17 2016-01-26 Microsoft Technology Licensing, Llc Camera navigation for presentations
US20110151974A1 (en) * 2009-12-18 2011-06-23 Microsoft Corporation Gesture style recognition and reward
US20110150271A1 (en) 2009-12-18 2011-06-23 Microsoft Corporation Motion detection using depth images
US8320621B2 (en) 2009-12-21 2012-11-27 Microsoft Corporation Depth projector system with integrated VCSEL array
US8631355B2 (en) * 2010-01-08 2014-01-14 Microsoft Corporation Assigning gesture dictionaries
US9019201B2 (en) 2010-01-08 2015-04-28 Microsoft Technology Licensing, Llc Evolving universal gesture sets
US9268404B2 (en) * 2010-01-08 2016-02-23 Microsoft Technology Licensing, Llc Application gesture interpretation
US20110169917A1 (en) * 2010-01-11 2011-07-14 Shoppertrak Rct Corporation System And Process For Detecting, Tracking And Counting Human Objects of Interest
US8334842B2 (en) 2010-01-15 2012-12-18 Microsoft Corporation Recognizing user intent in motion capture system
US8933884B2 (en) * 2010-01-15 2015-01-13 Microsoft Corporation Tracking groups of users in motion capture system
RU2426172C1 (en) * 2010-01-21 2011-08-10 Samsung Electronics Co., Ltd. Method and system for isolating a foreground object image based on colour and depth data
US8676581B2 (en) 2010-01-22 2014-03-18 Microsoft Corporation Speech recognition analysis via identification information
US8265341B2 (en) 2010-01-25 2012-09-11 Microsoft Corporation Voice-body identity correlation
US8864581B2 (en) 2010-01-29 2014-10-21 Microsoft Corporation Visual based identity tracking
US8891067B2 (en) * 2010-02-01 2014-11-18 Microsoft Corporation Multiple synchronized optical sources for time-of-flight range finding systems
US8619122B2 (en) * 2010-02-02 2013-12-31 Microsoft Corporation Depth camera compatibility
US8687044B2 (en) * 2010-02-02 2014-04-01 Microsoft Corporation Depth camera compatibility
US8717469B2 (en) * 2010-02-03 2014-05-06 Microsoft Corporation Fast gating photosurface
US8659658B2 (en) * 2010-02-09 2014-02-25 Microsoft Corporation Physical interaction zone for gesture-based user interfaces
US8499257B2 (en) * 2010-02-09 2013-07-30 Microsoft Corporation Handles interactions for human-computer interface
US20110199302A1 (en) * 2010-02-16 2011-08-18 Microsoft Corporation Capturing screen objects using a collision volume
US8633890B2 (en) * 2010-02-16 2014-01-21 Microsoft Corporation Gesture detection based on joint skipping
US9819358B2 (en) * 2010-02-19 2017-11-14 Skype Entropy encoding based on observed frequency
US20110206118A1 (en) * 2010-02-19 2011-08-25 Lazar Bivolarsky Data Compression for Video
US9313526B2 (en) * 2010-02-19 2016-04-12 Skype Data compression for video
US8913661B2 (en) * 2010-02-19 2014-12-16 Skype Motion estimation using block matching indexing
US9609342B2 (en) * 2010-02-19 2017-03-28 Skype Compression for frames of a video signal using selected candidate blocks
US8928579B2 (en) * 2010-02-22 2015-01-06 Andrew David Wilson Interacting with an omni-directionally projected display
USD634655S1 (en) 2010-03-01 2011-03-22 Certusview Technologies, Llc Handle of a marking device
USD634656S1 (en) 2010-03-01 2011-03-22 Certusview Technologies, Llc Shaft of a marking device
USD634657S1 (en) 2010-03-01 2011-03-22 Certusview Technologies, Llc Paint holder of a marking device
USD643321S1 (en) 2010-03-01 2011-08-16 Certusview Technologies, Llc Marking device
US8422769B2 (en) * 2010-03-05 2013-04-16 Microsoft Corporation Image segmentation using reduced foreground training data
US8655069B2 (en) 2010-03-05 2014-02-18 Microsoft Corporation Updating image segmentation following user input
US8411948B2 (en) 2010-03-05 2013-04-02 Microsoft Corporation Up-sampling binary images for segmentation
US20110221755A1 (en) * 2010-03-12 2011-09-15 Kevin Geisner Bionic motion
US20110223995A1 (en) * 2010-03-12 2011-09-15 Kevin Geisner Interacting with a computer based application
US8279418B2 (en) * 2010-03-17 2012-10-02 Microsoft Corporation Raster scanning for depth detection
US8213680B2 (en) * 2010-03-19 2012-07-03 Microsoft Corporation Proxy training data for human body tracking
US8514269B2 (en) * 2010-03-26 2013-08-20 Microsoft Corporation De-aliasing depth images
US20110234481A1 (en) * 2010-03-26 2011-09-29 Sagi Katz Enhancing presentations using depth sensing cameras
US8523667B2 (en) * 2010-03-29 2013-09-03 Microsoft Corporation Parental control settings based on body dimensions
US8605763B2 (en) 2010-03-31 2013-12-10 Microsoft Corporation Temperature measurement and control for laser and light-emitting diodes
US9098873B2 (en) 2010-04-01 2015-08-04 Microsoft Technology Licensing, Llc Motion-based interactive shopping environment
US9646340B2 (en) 2010-04-01 2017-05-09 Microsoft Technology Licensing, Llc Avatar-based virtual dressing room
US8315443B2 (en) 2010-04-22 2012-11-20 Qualcomm Incorporated Viewpoint detector based on skin color area and face area
US8351651B2 (en) 2010-04-26 2013-01-08 Microsoft Corporation Hand-location post-process refinement in a tracking system
US8379919B2 (en) 2010-04-29 2013-02-19 Microsoft Corporation Multiple centroid condensation of probability distribution clouds
US8284847B2 (en) 2010-05-03 2012-10-09 Microsoft Corporation Detecting motion for a multifunction sensor device
US8379920B2 (en) * 2010-05-05 2013-02-19 Nec Laboratories America, Inc. Real-time clothing recognition in surveillance videos
US8498481B2 (en) 2010-05-07 2013-07-30 Microsoft Corporation Image segmentation using star-convexity constraints
US8885890B2 (en) 2010-05-07 2014-11-11 Microsoft Corporation Depth map confidence filtering
US8457353B2 (en) 2010-05-18 2013-06-04 Microsoft Corporation Gestures and gesture modifiers for manipulating a user-interface
US8724902B2 (en) * 2010-06-01 2014-05-13 Hewlett-Packard Development Company, L.P. Processing image data
US8803888B2 (en) 2010-06-02 2014-08-12 Microsoft Corporation Recognition system for sharing information
US9008355B2 (en) 2010-06-04 2015-04-14 Microsoft Technology Licensing, Llc Automatic depth camera aiming
US8751215B2 (en) 2010-06-04 2014-06-10 Microsoft Corporation Machine based sign language interpreter
US9557574B2 (en) 2010-06-08 2017-01-31 Microsoft Technology Licensing, Llc Depth illumination and detection optics
US8330822B2 (en) 2010-06-09 2012-12-11 Microsoft Corporation Thermally-tuned depth camera light source
US9384329B2 (en) 2010-06-11 2016-07-05 Microsoft Technology Licensing, Llc Caloric burn determination from body movement
US8749557B2 (en) 2010-06-11 2014-06-10 Microsoft Corporation Interacting with user interface via avatar
US8675981B2 (en) 2010-06-11 2014-03-18 Microsoft Corporation Multi-modal gender recognition including depth data
US8982151B2 (en) 2010-06-14 2015-03-17 Microsoft Technology Licensing, Llc Independently processing planes of display data
US8558873B2 (en) 2010-06-16 2013-10-15 Microsoft Corporation Use of wavefront coding to create a depth image
US8670029B2 (en) 2010-06-16 2014-03-11 Microsoft Corporation Depth camera illuminator with superluminescent light-emitting diode
US8296151B2 (en) 2010-06-18 2012-10-23 Microsoft Corporation Compound gesture-speech commands
US8381108B2 (en) 2010-06-21 2013-02-19 Microsoft Corporation Natural user input for driving interactive stories
US8416187B2 (en) 2010-06-22 2013-04-09 Microsoft Corporation Item navigation using motion-capture data
US9277141B2 (en) * 2010-08-12 2016-03-01 Raytheon Company System, method, and software for image processing
US9075434B2 (en) 2010-08-20 2015-07-07 Microsoft Technology Licensing, Llc Translating user motion into multiple object responses
US8613666B2 (en) 2010-08-31 2013-12-24 Microsoft Corporation User selection and navigation based on looped motions
US8437506B2 (en) 2010-09-07 2013-05-07 Microsoft Corporation System for fast, probabilistic skeletal tracking
US20120058824A1 (en) 2010-09-07 2012-03-08 Microsoft Corporation Scalable real-time motion recognition
US8988508B2 (en) 2010-09-24 2015-03-24 Microsoft Technology Licensing, Llc. Wide angle field of view active illumination imaging system
US8681255B2 (en) 2010-09-28 2014-03-25 Microsoft Corporation Integrated low power depth camera and projection device
US8548270B2 (en) 2010-10-04 2013-10-01 Microsoft Corporation Time-of-flight depth imaging
US9484065B2 (en) 2010-10-15 2016-11-01 Microsoft Technology Licensing, Llc Intelligent determination of replays based on event identification
KR101682137B1 (en) * 2010-10-25 2016-12-05 Samsung Electronics Co., Ltd. Method and apparatus for temporally-consistent disparity estimation using texture and motion detection
US8592739B2 (en) 2010-11-02 2013-11-26 Microsoft Corporation Detection of configuration changes of an optical element in an illumination system
US8866889B2 (en) 2010-11-03 2014-10-21 Microsoft Corporation In-home depth camera calibration
US8542917B2 (en) * 2010-11-10 2013-09-24 Tandent Vision Science, Inc. System and method for identifying complex tokens in an image
US8667519B2 (en) 2010-11-12 2014-03-04 Microsoft Corporation Automatic passive and anonymous feedback system
US10726861B2 (en) 2010-11-15 2020-07-28 Microsoft Technology Licensing, Llc Semi-private communication in open environments
US9349040B2 (en) 2010-11-19 2016-05-24 Microsoft Technology Licensing, Llc Bi-modal depth-image analysis
US10234545B2 (en) 2010-12-01 2019-03-19 Microsoft Technology Licensing, Llc Light source module
US8553934B2 (en) 2010-12-08 2013-10-08 Microsoft Corporation Orienting the position of a sensor
US8618405B2 (en) 2010-12-09 2013-12-31 Microsoft Corp. Free-space gesture musical instrument digital interface (MIDI) controller
US8408706B2 (en) 2010-12-13 2013-04-02 Microsoft Corporation 3D gaze tracker
TW201224955A (en) * 2010-12-15 2012-06-16 Ind Tech Res Inst System and method for face detection using face region location and size predictions and computer program product thereof
US9171264B2 (en) 2010-12-15 2015-10-27 Microsoft Technology Licensing, Llc Parallel processing machine learning decision tree training
US8920241B2 (en) 2010-12-15 2014-12-30 Microsoft Corporation Gesture controlled persistent handles for interface guides
US8884968B2 (en) 2010-12-15 2014-11-11 Microsoft Corporation Modeling an object from image data
US8448056B2 (en) 2010-12-17 2013-05-21 Microsoft Corporation Validation analysis of human target
US8803952B2 (en) 2010-12-20 2014-08-12 Microsoft Corporation Plural detector time-of-flight depth mapping
US8994718B2 (en) 2010-12-21 2015-03-31 Microsoft Technology Licensing, Llc Skeletal control of three-dimensional virtual world
US8385596B2 (en) 2010-12-21 2013-02-26 Microsoft Corporation First person shooter control with virtual skeleton
US9848106B2 (en) 2010-12-21 2017-12-19 Microsoft Technology Licensing, Llc Intelligent gameplay photo capture
US9821224B2 (en) 2010-12-21 2017-11-21 Microsoft Technology Licensing, Llc Driving simulator control with virtual skeleton
US9823339B2 (en) 2010-12-21 2017-11-21 Microsoft Technology Licensing, Llc Plural anode time-of-flight sensor
US9123316B2 (en) 2010-12-27 2015-09-01 Microsoft Technology Licensing, Llc Interactive content creation
US8488888B2 (en) 2010-12-28 2013-07-16 Microsoft Corporation Classification of posture states
US8824823B1 (en) * 2011-01-20 2014-09-02 Verint Americas Inc. Increased quality of image objects based on depth in scene
US9268996B1 (en) 2011-01-20 2016-02-23 Verint Systems Inc. Evaluation of models generated from objects in video
US8401225B2 (en) 2011-01-31 2013-03-19 Microsoft Corporation Moving object segmentation using depth images
US8587583B2 (en) 2011-01-31 2013-11-19 Microsoft Corporation Three-dimensional environment reconstruction
US9247238B2 (en) 2011-01-31 2016-01-26 Microsoft Technology Licensing, Llc Reducing interference between multiple infra-red depth cameras
US8401242B2 (en) 2011-01-31 2013-03-19 Microsoft Corporation Real-time camera tracking using depth maps
US8724887B2 (en) 2011-02-03 2014-05-13 Microsoft Corporation Environmental modifications to mitigate environmental factors
US8942917B2 (en) 2011-02-14 2015-01-27 Microsoft Corporation Change invariant scene recognition by an agent
US8497838B2 (en) 2011-02-16 2013-07-30 Microsoft Corporation Push actuation of interface controls
US8836777B2 (en) 2011-02-25 2014-09-16 DigitalOptics Corporation Europe Limited Automatic detection of vertical gaze using an embedded imaging device
US9551914B2 (en) 2011-03-07 2017-01-24 Microsoft Technology Licensing, Llc Illuminator with refractive optical element
US9067136B2 (en) 2011-03-10 2015-06-30 Microsoft Technology Licensing, Llc Push personalization of interface controls
US20120236105A1 (en) * 2011-03-14 2012-09-20 Motorola Mobility, Inc. Method and apparatus for morphing a user during a video call
US8571263B2 (en) 2011-03-17 2013-10-29 Microsoft Corporation Predicting joint positions
US9470778B2 (en) 2011-03-29 2016-10-18 Microsoft Technology Licensing, Llc Learning from high quality depth measurements
US10642934B2 (en) 2011-03-31 2020-05-05 Microsoft Technology Licensing, Llc Augmented conversational understanding architecture
US9842168B2 (en) 2011-03-31 2017-12-12 Microsoft Technology Licensing, Llc Task driven user intents
US9760566B2 (en) 2011-03-31 2017-09-12 Microsoft Technology Licensing, Llc Augmented conversational understanding agent to identify conversation context between two humans and taking an agent action thereof
US9298287B2 (en) 2011-03-31 2016-03-29 Microsoft Technology Licensing, Llc Combined activation for natural user interface systems
US8503494B2 (en) 2011-04-05 2013-08-06 Microsoft Corporation Thermal management system
US8824749B2 (en) 2011-04-05 2014-09-02 Microsoft Corporation Biometric recognition
US8620113B2 (en) 2011-04-25 2013-12-31 Microsoft Corporation Laser diode modes
US8923629B2 (en) 2011-04-27 2014-12-30 Hewlett-Packard Development Company, L.P. System and method for determining co-occurrence groups of images
US9259643B2 (en) 2011-04-28 2016-02-16 Microsoft Technology Licensing, Llc Control of separate computer game elements
US8702507B2 (en) 2011-04-28 2014-04-22 Microsoft Corporation Manual and camera-based avatar control
US10671841B2 (en) 2011-05-02 2020-06-02 Microsoft Technology Licensing, Llc Attribute state classification
US8888331B2 (en) 2011-05-09 2014-11-18 Microsoft Corporation Low inductance light source module
US9137463B2 (en) 2011-05-12 2015-09-15 Microsoft Technology Licensing, Llc Adaptive high dynamic range camera
US9064006B2 (en) 2012-08-23 2015-06-23 Microsoft Technology Licensing, Llc Translating natural language utterances to keyword search queries
US8788973B2 (en) 2011-05-23 2014-07-22 Microsoft Corporation Three-dimensional gesture controlled avatar configuration interface
US8760395B2 (en) 2011-05-31 2014-06-24 Microsoft Corporation Gesture recognition techniques
US9594430B2 (en) 2011-06-01 2017-03-14 Microsoft Technology Licensing, Llc Three-dimensional foreground selection for vision system
US8526734B2 (en) 2011-06-01 2013-09-03 Microsoft Corporation Three-dimensional background removal for vision system
US8897491B2 (en) 2011-06-06 2014-11-25 Microsoft Corporation System for finger recognition and tracking
US9098110B2 (en) 2011-06-06 2015-08-04 Microsoft Technology Licensing, Llc Head rotation tracking from depth-based center of mass
US9208571B2 (en) 2011-06-06 2015-12-08 Microsoft Technology Licensing, Llc Object digitization
US9013489B2 (en) 2011-06-06 2015-04-21 Microsoft Technology Licensing, Llc Generation of avatar reflecting player appearance
US8597142B2 (en) 2011-06-06 2013-12-03 Microsoft Corporation Dynamic camera based practice mode
US9724600B2 (en) 2011-06-06 2017-08-08 Microsoft Technology Licensing, Llc Controlling objects in a virtual environment
US8929612B2 (en) 2011-06-06 2015-01-06 Microsoft Corporation System for recognizing an open or closed hand
US10796494B2 (en) 2011-06-06 2020-10-06 Microsoft Technology Licensing, Llc Adding attributes to virtual representations of real-world objects
US9597587B2 (en) 2011-06-08 2017-03-21 Microsoft Technology Licensing, Llc Locational node device
US8786730B2 (en) 2011-08-18 2014-07-22 Microsoft Corporation Image exposure using exclusion regions
FR2980292B1 (en) * 2011-09-16 2013-10-11 Prynel METHOD AND SYSTEM FOR ACQUIRING AND PROCESSING IMAGES FOR MOTION DETECTION
US10402631B2 (en) 2011-09-23 2019-09-03 Shoppertrak Rct Corporation Techniques for automatically identifying secondary objects in a stereo-optical counting system
US9177195B2 (en) 2011-09-23 2015-11-03 Shoppertrak Rct Corporation System and method for detecting, tracking and counting human objects of interest using a counting system and a data capture device
US8879836B2 (en) * 2011-10-14 2014-11-04 Tandent Vision Science, Inc. System and method for identifying complex tokens in an image
US9557836B2 (en) 2011-11-01 2017-01-31 Microsoft Technology Licensing, Llc Depth image compression
US9117281B2 (en) 2011-11-02 2015-08-25 Microsoft Corporation Surface segmentation from RGB and depth images
US8854426B2 (en) 2011-11-07 2014-10-07 Microsoft Corporation Time-of-flight camera with guided light
US9111147B2 (en) 2011-11-14 2015-08-18 Massachusetts Institute Of Technology Assisted video surveillance of persons-of-interest
US8724906B2 (en) 2011-11-18 2014-05-13 Microsoft Corporation Computing pose and/or shape of modifiable entities
US8509545B2 (en) 2011-11-29 2013-08-13 Microsoft Corporation Foreground subject detection
EP2786303A4 (en) 2011-12-01 2015-08-26 Lightcraft Technology Llc Automatic tracking matte system
US8635637B2 (en) 2011-12-02 2014-01-21 Microsoft Corporation User interface presenting an animated avatar performing a media reaction
US8803800B2 (en) 2011-12-02 2014-08-12 Microsoft Corporation User interface control based on head orientation
US9100685B2 (en) 2011-12-09 2015-08-04 Microsoft Technology Licensing, Llc Determining audience state or interest using passive sensor data
US8879831B2 (en) 2011-12-15 2014-11-04 Microsoft Corporation Using high-level attributes to guide image processing
US8630457B2 (en) 2011-12-15 2014-01-14 Microsoft Corporation Problem states for pose tracking pipeline
US8971612B2 (en) 2011-12-15 2015-03-03 Microsoft Corporation Learning image processing tasks from scene reconstructions
US8811938B2 (en) 2011-12-16 2014-08-19 Microsoft Corporation Providing a user interface experience based on inferred vehicle state
US9342139B2 (en) 2011-12-19 2016-05-17 Microsoft Technology Licensing, Llc Pairing a computing device to a user
KR20130070340A (en) * 2011-12-19 2013-06-27 Electronics and Telecommunications Research Institute Optical flow accelerator for motion recognition and method thereof
US8704904B2 (en) 2011-12-23 2014-04-22 H4 Engineering, Inc. Portable system for high quality video recording
US20130201316A1 (en) 2012-01-09 2013-08-08 May Patents Ltd. System and method for server based control
US9720089B2 (en) 2012-01-23 2017-08-01 Microsoft Technology Licensing, Llc 3D zoom imager
US8989455B2 (en) * 2012-02-05 2015-03-24 Apple Inc. Enhanced face detection using depth information
USD684067S1 (en) 2012-02-15 2013-06-11 Certusview Technologies, Llc Modular marking device
AU2013225712B2 (en) 2012-03-01 2017-04-27 H4 Engineering, Inc. Apparatus and method for automatic video recording
US9723192B1 (en) 2012-03-02 2017-08-01 H4 Engineering, Inc. Application dependent video recording device architecture
CA2866131A1 (en) 2012-03-02 2013-06-09 H4 Engineering, Inc. Multifunction automatic video recording device
US8898687B2 (en) 2012-04-04 2014-11-25 Microsoft Corporation Controlling a media program based on a media reaction
US9210401B2 (en) 2012-05-03 2015-12-08 Microsoft Technology Licensing, Llc Projected visual cues for guiding physical movement
CA2775700C (en) 2012-05-04 2013-07-23 Microsoft Corporation Determining a future portion of a currently presented media program
EP2848000B1 (en) 2012-05-11 2018-09-19 Intel Corporation Systems and methods for row causal scan-order optimization stereo matching
JP5899472B2 (en) * 2012-05-23 2016-04-06 Panasonic Intellectual Property Management Co., Ltd. Person attribute estimation system and learning data generation apparatus
JP6018707B2 (en) 2012-06-21 2016-11-02 Microsoft Corporation Building an avatar using a depth camera
US9836590B2 (en) 2012-06-22 2017-12-05 Microsoft Technology Licensing, Llc Enhanced accuracy of user presence status determination
US9696427B2 (en) 2012-08-14 2017-07-04 Microsoft Technology Licensing, Llc Wide angle depth detection
US8882310B2 (en) 2012-12-10 2014-11-11 Microsoft Corporation Laser die light source module with low inductance
US9857470B2 (en) 2012-12-28 2018-01-02 Microsoft Technology Licensing, Llc Using photometric stereo for 3D environment modeling
US9251590B2 (en) 2013-01-24 2016-02-02 Microsoft Technology Licensing, Llc Camera pose estimation for 3D reconstruction
US9052746B2 (en) 2013-02-15 2015-06-09 Microsoft Technology Licensing, Llc User center-of-mass and mass distribution extraction using depth images
US9940553B2 (en) 2013-02-22 2018-04-10 Microsoft Technology Licensing, Llc Camera/object pose from predicted coordinates
US9135516B2 (en) 2013-03-08 2015-09-15 Microsoft Technology Licensing, Llc User body angle, curvature and average extremity positions extraction using depth images
US8957940B2 (en) 2013-03-11 2015-02-17 Cisco Technology, Inc. Utilizing a smart camera system for immersive telepresence
US9092657B2 (en) 2013-03-13 2015-07-28 Microsoft Technology Licensing, Llc Depth image processing
US9274606B2 (en) 2013-03-14 2016-03-01 Microsoft Technology Licensing, Llc NUI video conference controls
US9953213B2 (en) 2013-03-27 2018-04-24 Microsoft Technology Licensing, Llc Self discovery of autonomous NUI devices
US9442186B2 (en) 2013-05-13 2016-09-13 Microsoft Technology Licensing, Llc Interference reduction for TOF systems
WO2015001856A1 (en) * 2013-07-01 2015-01-08 NEC Solution Innovators, Ltd. Attribute estimation system
US9462253B2 (en) 2013-09-23 2016-10-04 Microsoft Technology Licensing, Llc Optical modules that reduce speckle contrast and diffraction artifacts
US9443310B2 (en) 2013-10-09 2016-09-13 Microsoft Technology Licensing, Llc Illumination modules that emit structured light
US9674563B2 (en) 2013-11-04 2017-06-06 Rovi Guides, Inc. Systems and methods for recommending content
US9769459B2 (en) 2013-11-12 2017-09-19 Microsoft Technology Licensing, Llc Power efficient laser diode driver circuit and method
US9508385B2 (en) 2013-11-21 2016-11-29 Microsoft Technology Licensing, Llc Audio-visual project generator
US9971491B2 (en) 2014-01-09 2018-05-15 Microsoft Technology Licensing, Llc Gesture library for natural user input
WO2015162605A2 (en) 2014-04-22 2015-10-29 Snapaid Ltd System and method for controlling a camera based on processing an image captured by another camera
RU2572377C1 (en) * 2014-12-30 2016-01-10 Federal State Budgetary Educational Institution of Higher Professional Education "Don State Technical University" (DSTU) Video sequence editing device
US10043146B2 (en) * 2015-02-12 2018-08-07 Wipro Limited Method and device for estimating efficiency of an employee of an organization
WO2016207875A1 (en) 2015-06-22 2016-12-29 Photomyne Ltd. System and method for detecting objects in an image
US10412280B2 (en) 2016-02-10 2019-09-10 Microsoft Technology Licensing, Llc Camera with light valve over sensor array
US10257932B2 (en) 2016-02-16 2019-04-09 Microsoft Technology Licensing, Llc. Laser diode chip on printed circuit board
US10462452B2 (en) 2016-03-16 2019-10-29 Microsoft Technology Licensing, Llc Synchronizing active illumination cameras
US10614578B2 (en) * 2016-03-23 2020-04-07 Akcelita, LLC System and method for tracking people, animals and objects using a volumetric representation and artificial intelligence
US10049462B2 (en) * 2016-03-23 2018-08-14 Akcelita, LLC System and method for tracking and annotating multiple objects in a 3D model
US10360445B2 (en) * 2016-03-23 2019-07-23 Akcelita, LLC System and method for tracking persons using a volumetric representation
US10497014B2 (en) * 2016-04-22 2019-12-03 Inreality Limited Retail store digital shelf for recommending products utilizing facial recognition in a peer to peer network
US10430994B1 (en) 2016-11-07 2019-10-01 Henry Harlyn Baker Techniques for determining a three-dimensional textured representation of a surface of an object from a set of images with varying formats
US10482900B2 (en) 2017-01-18 2019-11-19 Microsoft Technology Licensing, Llc Organization of signal segments supporting sensed features
US10606814B2 (en) 2017-01-18 2020-03-31 Microsoft Technology Licensing, Llc Computer-aided tracking of physical entities
US10679669B2 (en) 2017-01-18 2020-06-09 Microsoft Technology Licensing, Llc Automatic narration of signal segment
US10637814B2 (en) 2017-01-18 2020-04-28 Microsoft Technology Licensing, Llc Communication routing based on physical status
US10635981B2 (en) * 2017-01-18 2020-04-28 Microsoft Technology Licensing, Llc Automated movement orchestration
US11094212B2 (en) 2017-01-18 2021-08-17 Microsoft Technology Licensing, Llc Sharing signal segments of physical graph
US10437884B2 (en) 2017-01-18 2019-10-08 Microsoft Technology Licensing, Llc Navigation of computer-navigable physical feature graph
US10489952B2 (en) * 2017-11-01 2019-11-26 Disney Enterprises, Inc. Cosmetic transformation through image synthesis
AU2017272325A1 (en) * 2017-12-08 2019-06-27 Canon Kabushiki Kaisha System and method of generating a composite frame
US10846327B2 (en) * 2018-11-02 2020-11-24 A9.Com, Inc. Visual attribute determination for content selection
US10657584B1 (en) * 2019-01-31 2020-05-19 StradVision, Inc. Method and device for generating safe clothing patterns for rider of bike
CN110019652B (en) * 2019-03-14 2022-06-03 Jiujiang University Cross-modal hash retrieval method based on deep learning
US11495024B2 (en) * 2020-04-01 2022-11-08 Honeywell International Inc. Systems and methods for collecting video clip evidence from a plurality of video streams of a video surveillance system
CN114202941A (en) * 2022-02-18 2022-03-18 Changsha Hisense Intelligent System Research Institute Co., Ltd. Control method and device of traffic signal lamp

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5164992A (en) * 1990-11-01 1992-11-17 Massachusetts Institute Of Technology Face recognition system
US5331544A (en) * 1992-04-23 1994-07-19 A. C. Nielsen Company Market research method and system for collecting retail store and shopper market research data
US5550928A (en) 1992-12-15 1996-08-27 A.C. Nielsen Company Audience measurement system and method
US5581625A (en) 1994-01-31 1996-12-03 International Business Machines Corporation Stereo vision system for counting items in a queue
DE69507594T2 (en) 1995-03-31 1999-09-02 Hitachi Europe Ltd Image processing method for determining facial features
US5912980A (en) * 1995-07-13 1999-06-15 Hunke; H. Martin Target acquisition and tracking
US5715325A (en) * 1995-08-30 1998-02-03 Siemens Corporate Research, Inc. Apparatus and method for detecting a face in a video image
US6188777B1 (en) * 1997-08-01 2001-02-13 Interval Research Corporation Method and apparatus for personnel detection and tracking

Cited By (73)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6591001B1 (en) * 1998-10-26 2003-07-08 Oki Electric Industry Co., Ltd. Image-input device
US6654047B2 (en) * 1998-10-27 2003-11-25 Toshiba Tec Kabushiki Kaisha Method of and device for acquiring information on a traffic line of persons
US20020044682A1 (en) * 2000-09-08 2002-04-18 Weil Josef Oster Method and apparatus for subject physical position and security determination
US7106885B2 (en) * 2000-09-08 2006-09-12 Carecord Technologies, Inc. Method and apparatus for subject physical position and security determination
US7003135B2 (en) * 2001-05-25 2006-02-21 Industrial Technology Research Institute System and method for rapidly tracking multiple faces
WO2003049035A2 (en) * 2001-12-06 2003-06-12 Koninklijke Philips Electronics N.V. Method and apparatus for automatic face blurring
US6959099B2 (en) 2001-12-06 2005-10-25 Koninklijke Philips Electronics N.V. Method and apparatus for automatic face blurring
WO2003049035A3 (en) * 2001-12-06 2004-06-17 Koninkl Philips Electronics Nv Method and apparatus for automatic face blurring
US7274803B1 (en) 2002-04-02 2007-09-25 Videomining Corporation Method and system for detecting conscious hand movement patterns and computer-generated visual feedback for facilitating human-computer interaction
US7257236B2 (en) 2002-05-22 2007-08-14 A4Vision Methods and systems for detecting and recognizing objects in a controlled wide area
US20030235335A1 (en) * 2002-05-22 2003-12-25 Artiom Yukhin Methods and systems for detecting and recognizing objects in a controlled wide area
US20030231788A1 (en) * 2002-05-22 2003-12-18 Artiom Yukhin Methods and systems for detecting and recognizing an object based on 3D image data
WO2003100710A1 (en) * 2002-05-22 2003-12-04 A4Vision Methods and systems for detecting and recognizing objects in a controlled wide area
US7174033B2 (en) 2002-05-22 2007-02-06 A4Vision Methods and systems for detecting and recognizing an object based on 3D image data
US7317812B1 (en) 2002-11-15 2008-01-08 Videomining Corporation Method and apparatus for robustly tracking objects
US20060198554A1 (en) * 2002-11-29 2006-09-07 Porter Robert M S Face detection
WO2004052691A1 (en) * 2002-12-12 2004-06-24 Daimlerchrysler Ag Method and device for determining a three-dimensional position of passengers of a motor car
US20060018518A1 (en) * 2002-12-12 2006-01-26 Martin Fritzsche Method and device for determining the three-dimensional position of passengers of a motor car
US20100092080A1 (en) * 2003-11-18 2010-04-15 Fuji Xerox Co., Ltd. System and method for making a correction to a plurality of images
US8280188B2 (en) 2003-11-18 2012-10-02 Fuji Xerox Co., Ltd. System and method for making a correction to a plurality of images
US20100183227A1 (en) * 2003-11-18 2010-07-22 Samsung Electronics Co., Ltd. Person detecting apparatus and method and privacy protection system employing the same
US20100092090A1 (en) * 2003-11-18 2010-04-15 Fuji Xerox Co., Ltd. System and method for making a correction to a plurality of images
US20050105821A1 (en) * 2003-11-18 2005-05-19 Fuji Xerox Co., Ltd. Image processing apparatus, image processing method, and program
US7653246B2 (en) * 2003-11-18 2010-01-26 Fuji Xerox Co., Ltd. System and method for making a correction to a plurality of images
US20050152579A1 (en) * 2003-11-18 2005-07-14 Samsung Electronics Co., Ltd. Person detecting apparatus and method and privacy protection system employing the same
US20050167588A1 (en) * 2003-12-30 2005-08-04 The Mitre Corporation Techniques for building-scale electrostatic tomography
US7330032B2 (en) 2003-12-30 2008-02-12 The Mitre Corporation Techniques for building-scale electrostatic tomography
US7769233B2 (en) * 2004-03-22 2010-08-03 Fujifilm Corporation Particular-region detection method and apparatus
US20050207649A1 (en) * 2004-03-22 2005-09-22 Fuji Photo Film Co., Ltd. Particular-region detection method and apparatus, and program therefor
US7418131B2 (en) * 2004-08-27 2008-08-26 National Cheng Kung University Image-capturing device and method for removing strangers from an image
US20060045372A1 (en) * 2004-08-27 2006-03-02 National Cheng Kung University Image-capturing device and method for removing strangers from an image
US20070013791A1 (en) * 2005-07-05 2007-01-18 Koichi Kinoshita Tracking apparatus
US7940956B2 (en) * 2005-07-05 2011-05-10 Omron Corporation Tracking apparatus that tracks a face position in a dynamic picture image using ambient information excluding the face
KR100997060B1 (en) * 2005-10-05 2010-11-29 Qualcomm Incorporated Video sensor-based automatic region-of-interest detection
US20070076947A1 (en) * 2005-10-05 2007-04-05 Haohong Wang Video sensor-based automatic region-of-interest detection
US8208758B2 (en) * 2005-10-05 2012-06-26 Qualcomm Incorporated Video sensor-based automatic region-of-interest detection
US20070098303A1 (en) * 2005-10-31 2007-05-03 Eastman Kodak Company Determining a particular person from a collection
US7587082B1 (en) 2006-02-17 2009-09-08 Cognitech, Inc. Object recognition based on 2D images and 3D models
US20090103779A1 (en) * 2006-03-22 2009-04-23 Daimler Ag Multi-sensorial hypothesis-based object detector and object tracker
US20070237364A1 (en) * 2006-03-31 2007-10-11 Fuji Photo Film Co., Ltd. Method and apparatus for context-aided human identification
US20070242878A1 (en) * 2006-04-13 2007-10-18 Tandent Vision Science, Inc. Method and system for separating illumination and reflectance using a log color space
US7596266B2 (en) 2006-04-13 2009-09-29 Tandent Vision Science, Inc. Method and system for separating illumination and reflectance using a log color space
WO2007120633A3 (en) * 2006-04-13 2008-04-03 Tandent Vision Science Inc Method and system for separating illumination and reflectance using a log color space
US8666125B2 (en) 2006-08-11 2014-03-04 DigitalOptics Corporation Europe Limited Real-time face tracking in a digital image acquisition device
US8666124B2 (en) 2006-08-11 2014-03-04 DigitalOptics Corporation Europe Limited Real-time face tracking in a digital image acquisition device
US9398209B2 (en) 2006-08-11 2016-07-19 Fotonation Limited Face tracking for controlling imaging parameters
US8934680B2 (en) 2006-08-11 2015-01-13 Fotonation Limited Face tracking for controlling imaging parameters
US8744145B2 (en) 2006-08-11 2014-06-03 DigitalOptics Corporation Europe Limited Real-time face tracking in a digital image acquisition device
KR100825689B1 (en) 2006-08-18 2008-04-29 Pohang University of Science and Technology Foundation Facial disguise discrimination method
US7916894B1 (en) * 2007-01-29 2011-03-29 Adobe Systems Incorporated Summary of a video using faces
US8116538B2 (en) * 2007-05-09 2012-02-14 Samsung Electronics Co., Ltd. System and method for verifying face of user using light mask
US20080279426A1 (en) * 2007-05-09 2008-11-13 Samsung Electronics., Ltd. System and method for verifying face of user using light mask
KR100996542B1 (en) 2008-03-31 2010-11-24 Sungkyunkwan University Industry-Academic Cooperation Foundation Image processing apparatus and method for detecting motion information in real time
US9134540B2 (en) * 2008-10-28 2015-09-15 Koninklijke Philips N.V. Three dimensional display system
US20110193863A1 (en) * 2008-10-28 2011-08-11 Koninklijke Philips Electronics N.V. Three dimensional display system
US8974296B2 (en) * 2008-12-22 2015-03-10 Nintendo Co., Ltd Game program and game apparatus
US9421462B2 (en) 2008-12-22 2016-08-23 Nintendo Co., Ltd. Storage medium storing a game program, game apparatus and game controlling method
US20100160049A1 (en) * 2008-12-22 2010-06-24 Nintendo Co., Ltd. Storage medium storing a game program, game apparatus and game controlling method
US20100160044A1 (en) * 2008-12-22 2010-06-24 Tetsuya Satoh Game program and game apparatus
US8852003B2 (en) * 2008-12-22 2014-10-07 Nintendo Co., Ltd. Storage medium storing a game program, game apparatus and game controlling method
US20120014562A1 (en) * 2009-04-05 2012-01-19 Rafael Advanced Defense Systems Ltd. Efficient method for tracking people
US8855363B2 (en) * 2009-04-05 2014-10-07 Rafael Advanced Defense Systems Ltd. Efficient method for tracking people
US8379917B2 (en) * 2009-10-02 2013-02-19 DigitalOptics Corporation Europe Limited Face recognition performance using additional image features
US20110081052A1 (en) * 2009-10-02 2011-04-07 Fotonation Ireland Limited Face recognition performance using additional image features
US20120096356A1 (en) * 2010-10-19 2012-04-19 Apple Inc. Visual Presentation Composition
US8726161B2 (en) * 2010-10-19 2014-05-13 Apple Inc. Visual presentation composition
US20120148118A1 (en) * 2010-12-09 2012-06-14 Electronics And Telecommunications Research Institute Method for classifying images and apparatus for the same
US20130307978A1 (en) * 2012-05-17 2013-11-21 Caterpillar, Inc. Personnel Classification and Response System
US9080723B2 (en) * 2012-05-17 2015-07-14 Caterpillar Inc. Personnel classification and response system
US9152884B2 (en) * 2012-06-05 2015-10-06 Drvision Technologies Llc Teachable pattern scoring method
US20130322741A1 (en) * 2012-06-05 2013-12-05 DRVision Technologies LLC. Teachable pattern scoring method
WO2022060339A1 (en) * 2020-09-18 2022-03-24 V-Count Teknoloji Anonim Sirketi System and method of personnel exception in visitor count
CN112614160A (en) * 2020-12-24 2021-04-06 Zhongbiao Huian Information Technology Co., Ltd. Multi-object face tracking method and system

Also Published As

Publication number Publication date
EP0998718A1 (en) 2000-05-10
WO1999006940A1 (en) 1999-02-11
US6188777B1 (en) 2001-02-13
WO1999006940A9 (en) 1999-04-29
AU8584898A (en) 1999-02-22
US6445810B2 (en) 2002-09-03

Similar Documents

Publication Publication Date Title
US6188777B1 (en) Method and apparatus for personnel detection and tracking
US10546417B2 (en) Method and apparatus for estimating body shape
Graf et al. Multi-modal system for locating heads and faces
Darrell et al. Integrated person tracking using stereo, color, and pattern detection
US7876931B2 (en) Face recognition system and method
US7831087B2 (en) Method for visual-based recognition of an object
Seow et al. Neural network based skin color model for face detection
Everingham et al. Identifying individuals in video by combining 'generative' and discriminative head models
Darrell et al. A virtual mirror interface using real-time robust face tracking
Ratan et al. Object detection and localization by dynamic template warping
Niese et al. Emotion recognition based on 2d-3d facial feature extraction from color image sequences
Ko et al. Facial feature tracking and head orientation-based gaze tracking
Kim A personal identity annotation overlay system using a wearable computer for augmented reality
Darrell et al. Robust, real-time people tracking in open environments using integrated stereo, color, and face detection
Prince et al. Pre-Attentive Face Detection for Foveated Wide-Field Surveillance.
Micilotta Detection and tracking of humans for visual interaction
Wang et al. Fusion of appearance and depth information for face recognition
Chen et al. Facial feature detection and tracking in a new multimodal technology-enhanced learning environment for social communication
Darrell et al. Tracking people with integrated stereo, color, and face detection
Alqahtani Three-dimensional facial tracker using a stereo vision system
Chen et al. Multi-cue facial feature detection and tracking
Kwolek Pointing arm-posture recognizing using stereo vision system
Luo et al. Tracking of moving heads in cluttered scenes from stereo vision
Yeunghak Depth weighted modified Hausdorff distance for range face recognition
Chen et al. Robust facial feature detection and tracking for head pose estimation in a novel multimodal interface for social skills learning

Legal Events

Date Code Title Description
STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: VULCAN PATENTS LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERVAL RESEARCH CORPORATION;REEL/FRAME:016345/0030

Effective date: 20041229

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

AS Assignment

Owner name: INTERVAL LICENSING LLC, WASHINGTON

Free format text: MERGER;ASSIGNOR:VULCAN PATENTS LLC;REEL/FRAME:023882/0467

Effective date: 20091223

AS Assignment

Owner name: TYZX, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERVAL LICENSING, LLC;REEL/FRAME:023892/0902

Effective date: 20100204

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: INTEL CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TYZX, INC.;REEL/FRAME:029994/0114

Effective date: 20130204

FPAY Fee payment

Year of fee payment: 12