Publication number: US6195104 B1
Publication type: Grant
Application number: US 08/996,677
Publication date: Feb 27, 2001
Filing date: Dec 23, 1997
Priority date: Dec 23, 1997
Fee status: Lapsed
Also published as: EP0960368A2, WO1999034276A2, WO1999034276A3
Inventor: Damian M. Lyons
Original assignee: Philips Electronics North America Corp.
External links: USPTO, USPTO assignment, Espacenet
System and method for permitting three-dimensional navigation through a virtual reality environment using camera-based gesture inputs
US 6195104 B1
Abstract
A system and method for constructing three-dimensional images using camera-based gesture inputs of a system user. The system comprises a computer-readable memory, a video camera for generating video signals indicative of the gestures of the system user and an interaction area surrounding the system user, and a video image display. The video image display is positioned in front of the system user. The system further comprises a microprocessor for processing the video signals, in accordance with a program stored in the computer-readable memory, to determine the three-dimensional positions of the body and principal body parts of the system user. The microprocessor constructs three-dimensional images of the system user and interaction area on the video image display based upon the three-dimensional positions of the body and principal body parts of the system user. The video image display shows three-dimensional graphical objects superimposed to appear as if they occupy the interaction area, and movement by the system user causes apparent movement of the superimposed, three-dimensional objects displayed on the video image display.
Images (9)
Claims (20)
What is claimed is:
1. A system for constructing three-dimensional images using camera-based gesture inputs of a user of the system, comprising:
a computer-readable memory means;
means for receiving video signals indicative of the gestures of the system user and an interaction area surrounding the system user;
means for displaying video images, the video image display means being positioned in front of the system user; and
means for processing the video signals, in accordance with a program stored in the computer-readable memory means, to determine the three-dimensional positions of the body and principal body parts of the system user, wherein the video signal processing means constructs three-dimensional images of the system user and interaction area on the video image display means based upon the received video signals, the video image display means displays three-dimensional graphical objects superimposed to appear as if they occupy the interaction area, and movement by the system user causes apparent movement of the superimposed, three-dimensional objects displayed on the video image display means.
2. A system for constructing three-dimensional images using camera-based gesture inputs of a user as recited in claim 1, wherein the superimposed, three-dimensional objects appear to move as if they were physical objects moving in the interaction area.
3. A system for constructing three-dimensional images using camera-based gesture inputs of a user as recited in claim 1, wherein the video signal processing means constructs the three-dimensional images of the system user by: projecting two-dimensional positions (u, v) of the feet of the system user to three-dimensional coordinates (x, y, z) of the feet; mapping the head and hands of the system user to three-dimensional coordinates assuming the head and hands are offset from a z position of the feet; using a height (h) of the system user with biometric data to calculate the shoulder offset of the system user from the head, and to calculate the arm length of the system user; calculating the offset of each arm of the system user from a corresponding foot of the system user; and supplying the calculated three-dimensional positions of the head, hands and feet of the system user to the video image display means.
4. A system for constructing three-dimensional images using camera-based gesture inputs of a user as recited in claim 3, wherein the superimposed, three-dimensional graphical objects comprise:
a soccer ball having a set position (f); and
a goal area having a set position on the video image display means, wherein the video signal processing means compares the foot position (p) of the system user with the set position (f) of the soccer ball so as to calculate the foot velocity (fv) of the system user, moves the soccer ball according to the calculated foot velocity (fv), slows the soccer ball down by a predetermined velocity, and sounds a bell if the soccer ball enters the goal area.
5. A system for constructing three-dimensional images using camera-based gesture inputs of a user as recited in claim 1, wherein the three-dimensional graphical objects comprise:
a soccer ball having a set position (f); and
a goal area having a set position on the video image display means, wherein the video signal processing means compares a foot position (p) of the system user with the set position (f) of the soccer ball so as to calculate the foot velocity (fv) of the system user, moves the soccer ball according to the calculated foot velocity (fv), slows the soccer ball down by a predetermined velocity, and sounds a bell if the soccer ball enters the goal area.
6. A method for constructing three-dimensional images using camera-based gesture inputs of a user of a computer system having a computer-readable memory and video image display connected to a microprocessor using a program stored in the computer-readable memory, the method comprising the steps of:
receiving video signals indicative of the gestures of the system user and an interaction area surrounding the system user;
processing the video signals in the microprocessor to determine the three-dimensional positions of the body and principal body parts of the system user;
using the microprocessor to construct three-dimensional images of the system user and interaction area on the video image display based upon the received video signals; and
utilizing the microprocessor to display on the video image display three-dimensional graphical objects superimposed to appear as if they occupied the interaction area, wherein movement by the system user causes apparent movement by the superimposed, three-dimensional objects displayed on the video image display.
7. A method for constructing three-dimensional images using camera-based gesture inputs of a user of a computer system, as recited in claim 6, wherein the superimposed, three-dimensional objects appear to move as if they were physical objects.
8. A method for constructing three-dimensional images using camera-based gesture inputs of a user of a computer system, as recited in claim 6, wherein the three-dimensional image construction step comprises the steps of:
projecting two-dimensional positions (u, v) of the feet of the system user to three-dimensional coordinates (x, y, z) of the feet;
mapping the head and hands of the system user to three-dimensional coordinates assuming the head and hands are offset from a z position of the feet;
using a height (h) of the system user with biometric data to calculate the shoulder offset of the system user from the head, and to calculate the arm length of the system user;
calculating the offset of each arm of the system user from a corresponding foot of the system user; and
supplying the calculated three-dimensional positions of the head, hands and feet of the system user to the video image display.
9. A method for constructing three-dimensional images using camera-based gesture inputs of a user of a computer system, as recited in claim 8, wherein the superimposed, three-dimensional graphical objects comprise a soccer ball having a set position (f), and a goal area having a set position on the video image display, the method further comprising the steps of:
comparing a foot position (p) of the system user with the set position (f) of the soccer ball so as to calculate the foot velocity (fv) of the system user;
moving the soccer ball according to the calculated foot velocity (fv);
slowing the soccer ball down by a predetermined velocity; and
sounding a bell if the soccer ball enters the goal area.
10. A method for constructing three-dimensional images using camera-based gesture inputs of a user of a computer system, as recited in claim 6, wherein the three-dimensional graphical objects comprise a soccer ball having a set position (f), and a goal area having a set position on the video image display, the method further comprising the steps of:
comparing a foot position (p) of the system user with the set position (f) of the soccer ball so as to calculate the foot velocity (fv) of the system user;
moving the soccer ball according to the calculated foot velocity (fv);
slowing the soccer ball down by a predetermined velocity; and
sounding a bell if the soccer ball enters the goal area.
11. A computer-readable memory device for storing a program that constructs three-dimensional images using camera-based gesture inputs of a user of a computer system having a video image display connected to a microprocessor using instructions stored in the computer-readable memory device, the computer-readable memory device comprising:
instructions for receiving video signals indicative of gestures of the system user to determine the three-dimensional positions of the body and principal body parts of the system user;
instructions for constructing three-dimensional images of the system user and interaction area on the video image display based upon the received video signals; and
instructions for displaying, on the video image display, three-dimensional graphical objects superimposed to appear as if they occupied the interaction area, wherein movement by the system user causes apparent movement by the superimposed, three-dimensional objects displayed on the video image display.
12. A computer-readable memory device for storing a program that constructs three-dimensional images using camera-based gesture inputs of a user, as recited in claim 11, wherein the superimposed, three-dimensional objects appear to move as if they were physical objects.
13. A computer-readable memory device for storing a program that constructs three-dimensional images using camera-based gesture inputs of a user, as recited in claim 11, wherein the instructions for constructing the three-dimensional image comprise:
instructions for projecting two-dimensional positions (u, v) of the feet of the system user to three-dimensional coordinates (x, y, z) of the feet;
instructions for mapping the head and hands of the system user to three-dimensional coordinates assuming the head and hands are offset from a z position of the feet;
instructions for using a height (h) of the system user with biometric data to calculate the shoulder offset of the system user from the head, and to calculate the arm length of the system user;
instructions for calculating the offset of each arm of the system user from a corresponding foot of the system user; and
instructions for supplying the calculated three-dimensional positions of the head, hands and feet of the system user to the video image display.
14. A computer-readable memory device for storing a program that constructs three-dimensional images using camera-based gesture inputs of a user, as recited in claim 13, wherein the three-dimensional graphical objects comprise a soccer ball having a set position (f), and a goal area having a set position on the video image display, the computer-readable memory device further comprising:
instructions for comparing a foot position (p) of the system user with the set position (f) of the soccer ball so as to calculate the foot velocity (fv) of the system user;
instructions for moving the soccer ball according to the calculated foot velocity (fv);
instructions for slowing the soccer ball down by a predetermined velocity; and
instructions for sounding a bell if the soccer ball enters the goal area.
15. A computer-readable memory device for storing a program that constructs three-dimensional images using camera-based gesture inputs of a user, as recited in claim 11, wherein the three-dimensional graphical objects comprise a soccer ball having a set position (f), and a goal area having a set position on the video image display, the computer-readable memory device further comprising:
instructions for comparing a foot position (p) of the system user with the set position (f) of the soccer ball so as to calculate the foot velocity (fv) of the system user;
instructions for moving the soccer ball according to the calculated foot velocity (fv);
instructions for slowing the soccer ball down by a predetermined velocity; and
instructions for sounding a bell if the soccer ball enters the goal area.
16. A computer program product for constructing three-dimensional images using camera-based gesture inputs of a user of a computer system having a video image display connected to a microprocessor, the computer program product comprising:
means for receiving video signals indicative of gestures of the system user to determine the three-dimensional positions of the body and principal body parts of the system user;
means for constructing three-dimensional images of the system user and interaction area on the video image display based upon the received video signals; and
means for displaying, on the video image display, three-dimensional graphical objects superimposed to appear as if they occupied the interaction area, wherein movement by the system user causes apparent movement by the superimposed, three-dimensional objects displayed on the video image display.
17. A computer program product for constructing three-dimensional images using camera-based gesture inputs of a user of a computer system, as recited in claim 16, wherein the superimposed, three-dimensional objects appear to move as if they were physical objects.
18. A computer program product for constructing three-dimensional images using camera-based gesture inputs of a user of a computer system, as recited in claim 16, wherein the means for constructing the three-dimensional image comprises:
means for projecting two-dimensional positions (u, v) of the feet of the system user to three-dimensional coordinates (x, y, z) of the feet;
means for mapping the head and hands of the system user to three-dimensional coordinates assuming the head and hands are offset from a z position of the feet;
means for using a height (h) of the system user with biometric data to calculate the shoulder offset of the system user from the head, and to calculate the arm length of the system user;
means for calculating the offset of each arm of the system user from a corresponding foot of the system user; and
means for supplying the calculated three-dimensional positions of the head, hands and feet of the system user to the video image display.
19. A computer program product for constructing three-dimensional images using camera-based gesture inputs of a user of a computer system, as recited in claim 18, wherein the three-dimensional graphical objects comprise a soccer ball having a set position (f), and a goal area having a set position on the video image display, the computer program product further comprising:
means for comparing a foot position (p) of the system user with the set position (f) of the soccer ball so as to calculate the foot velocity (fv) of the system user;
means for moving the soccer ball according to the calculated foot velocity (fv);
means for slowing the soccer ball down by a predetermined velocity; and
means for sounding a bell if the soccer ball enters the goal area.
20. A computer program product for constructing three-dimensional images using camera-based gesture inputs of a user of a computer system, as recited in claim 16, wherein the three-dimensional graphical objects comprise a soccer ball having a set position (f), and a goal area having a set position on the video image display, the computer program product further comprising:
means for comparing a foot position (p) of the system user with the set position (f) of the soccer ball so as to calculate the foot velocity (fv) of the system user;
means for moving the soccer ball according to the calculated foot velocity (fv);
means for slowing the soccer ball down by a predetermined velocity; and
means for sounding a bell if the soccer ball enters the goal area.
Description
BACKGROUND OF THE INVENTION

A. Field of the Invention

The present invention relates generally to multimedia and virtual reality applications, and, more particularly, to a system and method for constructing three-dimensional images using camera-based gesture inputs.

B. Description of the Related Art

Multimedia and virtual reality applications permit exciting interaction between a user and a computer. Unfortunately, current computer/user interfaces present a barrier to simple user interaction and thus to consumer acceptance of multimedia and virtual reality applications. Ideally, computer/user interfaces would combine an intuitive interaction format with a broad range of interaction capabilities. Practically, however, these two features conflict. For example, a computer keyboard offers broad interaction capabilities but is not intuitive, whereas a television remote control is more intuitive but offers limited interaction capabilities. Even more flexible interfaces, such as an instrumented body suit, can be both cumbersome and expensive.

A number of approaches to computer/user interface design have been suggested. One approach uses a video camera in a non-invasive way to measure the gestures of a system user, so as to control the images displayed to the system user. As shown in FIG. 1, such an interface system 10 comprises a blue wall 12, in front of which a user 14 stands, permitting two-dimensional silhouette extraction of user 14 and chromakeying of the silhouette. System 10 further includes a video camera 16 for identifying the two-dimensional, user silhouette and for producing a video signal. A microprocessor 18 of a computer identifies the two-dimensional, user silhouette seen by video camera 16, but only as a two-dimensional shape. Thus, motions of user 14 are only understood by microprocessor 18 in terms of the changing image coordinates of the silhouette. Microprocessor 18 displays an image of user 14 on a television display 20. The image displayed on television 20 consists of a two-dimensional scene into which the user's image has been chromakeyed. User 14 can interact with the displayed scene by adopting a specific pose, e.g., hands-over-head, or by moving so that a portion of the user's silhouette touches a designated set of image coordinates, making it appear as if user 14 touched a displayed object.

The interface system shown in FIG. 1 provides an easy-to-use, inexpensive interface with multimedia and virtual reality applications. However, the interface system only permits two-dimensional interaction with computer-displayed objects, restricting the capabilities of the interface to two dimensions. For example, in the two-dimensional system of FIG. 1, all of the computer-displayed objects are at the same depth in the window surrounding the user's silhouette.

As seen in FIG. 2, a conventional two-dimensional silhouette extraction process used by the system shown in FIG. 1 comprises both a hardware process (above the dashed line) and a software process (below the dashed line), wherein computer microprocessor 18 performs the software process steps. The hardware process involves a step 22 of inputting an analog video camera signal, followed by a step 24 of digitizing the analog camera signal to produce a gray-scale binary data signal. The hardware process further comprises a step 26 of adjusting the resolution (high or low) of the video camera, and a step 28 of restricting the camera view to a window of the image of interest, i.e., the user's image. The hardware process next comprises a dynamic threshold step 30 where the gray-scale binary data signal is converted into digital binary data, e.g., "1" or "0". At step 32, the hardware process determines the edges (silhouette) of the user's image, and, based on the edge data, adjusts the picture size (step 34) so as to adjust the resolution accordingly at step 26.

The software process involves a first step 36 of subtracting the background from the edge data of step 34, leaving only an image contour of the user's image. The background is a picture of an empty scene as seen by the camera, and is provided at step 38. The software further comprises a step 40 of joining together all of the edge data of the user's image, providing a single contour around the user's image. The software process also comprises an identification step 42 for determining whether the user image contour represents a person, an animal, etc., and a silhouette feature step 44 for identifying the silhouette features (in x, y coordinates) of the user, e.g., head, hands, feet, arms, legs, etc. At step 46, the software process utilizes the contour identification data to calculate a bounding box around the user. The bounding box data is provided to the window restricting step 28 to restrict the size of the camera window around the user, thus increasing the speed of the extraction process.
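The software half of this pipeline maps naturally onto standard image-processing primitives. The following is a minimal sketch of steps 36 through 46 using OpenCV; the threshold value and largest-contour heuristic are illustrative assumptions, not taken from the patent, and the identification and silhouette-feature steps 42 and 44 are omitted.

```python
# Hypothetical sketch of the software extraction process (steps 36-46):
# background subtraction, contour joining, and bounding-box calculation.
import cv2
import numpy as np

def extract_silhouette(frame_gray: np.ndarray, background_gray: np.ndarray):
    """Return the user's outer contour and bounding box, or (None, None)."""
    # Steps 36/38: subtract the stored empty-scene background.
    diff = cv2.absdiff(frame_gray, background_gray)
    _, binary = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

    # Step 40: join the edge data into a single outer contour; the
    # largest contour is assumed to be the user.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None, None
    user_contour = max(contours, key=cv2.contourArea)

    # Step 46: bounding box, fed back to restrict the camera window (step 28).
    x, y, w, h = cv2.boundingRect(user_contour)
    return user_contour, (x, y, w, h)
```

Feeding the bounding box back to step 28 is what lets the hardware window track the user and keep the per-frame workload small.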

An alternative approach, proposed by the Media Lab at the Massachusetts Institute of Technology (“MIT” ), allows a user to interact with a computer-generated graphical world by using camera-based body motions and gestures of a system user. Such a system, while being amongst the most versatile of its kind currently available, suffers from the following problems:

(1) it is based on a standard graphical interface ("SGI") platform; (2) it is sensitive to lighting conditions around the system user; (3) although it tracks the user's foot position in three dimensions, it treats the remainder of the user's body as a two-dimensional object; (4) it is limited to a single user; (5) it provides too coarse a resolution to see user hand details, such as fingers; and (6) it is tied to only the "magic mirror" interactive video environment ("IVE") paradigm, described below. Thus, the alternative approach suffers from the same limitations encountered by the conventional two-dimensional approach, as well as many other problems.

Still another approach includes a method for real-time recognition of a human image, as disclosed in Japanese Patent Abstract Publication No. 07-038873 ("JP 07-038873"). JP 07-038873 describes three-dimensional graphical generation of a person that detects the expression, rotation of the head, motion of the fingers, and rotation of the human body. However, JP 07-038873 is limited to graphical model generation of the human body. Furthermore, JP 07-038873 focuses on using three-dimensional graphical animation of a user primarily for teleconferencing purposes, wherein the user cannot control objects in a computer-generated scene. Finally, the reference discloses three-dimensional animation of a remote user for teleconferencing purposes, as opposed to three-dimensional animation of a local user.

A final approach, as found in International Patent Application (PCT) WO 96/21321 ("PCT 96/21321"), consists of creating a three-dimensional simulation of an event (e.g., a football game) using cameras and microphones, either in real time or stored on a CD ROM. The system disclosed in PCT 96/21321, however, merely replays three-dimensional scenes of the event as they are viewed by the cameras. Furthermore, users of the PCT 96/21321 system can only change their perspective of the three-dimensional scenes and are unable to control objects in the scenes.

Unfortunately, none of the approaches described above provides a computer/user interface that combines an intuitive interaction format with a broad range of interaction capabilities.

SUMMARY OF THE INVENTION

An object of the present invention is to address the problems encountered by the two-dimensional interface systems and the alternative approaches proposed by the Media Lab at the Massachusetts Institute of Technology and the other related art discussed above.

Another object is to provide a three-dimensional display of computer-generated objects so that the objects occupy the three-dimensional space around the computer users and the computer users can interact with and control the objects through normal body movements.

A final object is to provide multimedia and virtual reality applications with which three-dimensionally displayed computer users can interact and which they can control through normal body movements.

Additional objects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objects and advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the appended claims.

To achieve the objects and in accordance with the purpose of the invention, as embodied and broadly described herein, the invention comprises a system for constructing three-dimensional images using camera-based gesture inputs of a user of the system, including: a computer-readable memory means; means for generating video signals indicative of the gestures of the system user and an interaction area surrounding the system user; means for displaying video images, the video image display means being positioned in front of the system user; and means for processing the video signals, in accordance with a program stored in the computer-readable memory means, to determine the three-dimensional positions of the body and principal body parts of the system user, wherein the video signal processing means constructs three-dimensional images of the system user and interaction area on the video image display means based upon the three-dimensional positions of the body and principal body parts of the system user, the video image display means displays three-dimensional graphical objects superimposed to appear as if they occupy the interaction area, and movement by the system user causes apparent movement of the superimposed, three-dimensional objects displayed on the video image display means.

To further achieve the objects, the present invention comprises a method for constructing three-dimensional images using camera-based gesture inputs of a user of a computer system having a computer-readable memory and video image display connected to a microprocessor using a program stored in the computer-readable memory, the method comprising the steps of: generating video signals indicative of the gestures of the system user and an interaction area surrounding the system user; processing the video signals in the microprocessor to determine the three-dimensional positions of the body and principal body parts of the system user; using the microprocessor to construct three-dimensional images of the system user and interaction area on the video image display based upon the three-dimensional positions of the body and principal body parts of the system user; and utilizing the microprocessor to display on the video image display three-dimensional graphical objects superimposed to appear as if they occupied the interaction area, wherein movement by the system user causes apparent movement by the superimposed, three-dimensional objects displayed on the video image display.

To still further achieve the objects, the present invention comprises a computer-readable memory device for storing a program that constructs three-dimensional images using camera-based gesture inputs of a user of a computer system having a video image display connected to a microprocessor using instructions stored in the computer-readable memory device, the computer-readable memory device comprising: instructions for processing video signals indicative of gestures of the system user to determine the three-dimensional positions of the body and principal body parts of the system user; instructions for constructing three-dimensional images of the system user and interaction area on the video image display based upon the three-dimensional positions of the body and principal body parts of the system user; and instructions for displaying, on the video image display, three-dimensional graphical objects superimposed to appear as if they occupied the interaction area, wherein movement by the system user causes apparent movement by the superimposed, three-dimensional objects displayed on the video image display.

To even further achieve the objects, the present invention comprises a computer program product for constructing three-dimensional images using camera-based gesture inputs of a user of a computer system having a video image display connected to a microprocessor, the computer program product comprising: means for processing video signals indicative of gestures of the system user to determine the three-dimensional positions of the body and principal body parts of the system user; means for constructing three-dimensional images of the system user and interaction area on the video image display based upon the three-dimensional positions of the body and principal body parts of the system user; and means for displaying, on the video image display, three-dimensional graphical objects superimposed to appear as if they occupied the interaction area, wherein movement by the system user causes apparent movement by the superimposed, three-dimensional objects displayed on the video image display.

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.

BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate several embodiments of the invention and together with the description, serve to explain the principles of the invention. In the drawings:

FIG. 1 is a block diagram of a conventional system for constructing two-dimensional images using camera-based silhouettes of users;

FIG. 2 is a flowchart showing the steps involved in a conventional software process for extracting two-dimensional images using silhouettes of users;

FIG. 3 is a block diagram of a system for constructing three-dimensional images using camera-based gesture inputs of users in accordance with a preferred embodiment of the present invention;

FIG. 4 is a block diagram of a system for constructing three-dimensional images using camera-based gesture inputs of users in accordance with another preferred embodiment of the present invention;

FIG. 5 is a flowchart showing the steps involved in a software process for mapping two-dimensional image features of users and an interactive area onto three-dimensional locations within the interactive area in accordance with the preferred embodiments of the present invention shown in FIGS. 3 and 4;

FIG. 6 is a block diagram showing a three-dimensional soccer game using the system and method for constructing three-dimensional images using camera-based gesture inputs of the preferred embodiment of the present invention shown in FIG. 3;

FIG. 7 is a flowchart showing the steps involved in an application program for a three-dimensional soccer game using the system and method for constructing three-dimensional images using camera-based gesture inputs of the preferred embodiment of the present invention shown in FIG. 6; and

FIG. 8 is a biometric data table showing the length of body parts as a ratio of the body height (H), wherein the body height (H) is the height of a standing person.

DESCRIPTION OF THE PREFERRED EMBODIMENTS

Reference will now be made in detail to the present preferred embodiments of the invention, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.

In accordance with the preferred embodiments, the present invention includes a system and method for constructing three-dimensional images using camera-based gesture inputs of system users. The system comprises a computer-readable memory means, means for generating video signals indicative of the gestures of the system users and an interaction area surrounding the system users, and means for displaying video images. The video image display means is positioned in front of the system users. The system further comprises means for processing the video signals, in accordance with a program stored in the computer-readable memory means, to determine the three-dimensional positions of the bodies and principal body parts of the system users, wherein the video signal processing means constructs three-dimensional images of the system users and interaction area on the video image display means based upon the three-dimensional positions of the bodies and principal body parts of the system users, the video image display means displays three-dimensional graphical objects superimposed to appear as if they occupy the interaction area, and movement by the system users causes apparent movement of the superimposed, three-dimensional objects displayed on the video image display means.

In other words, the present invention is drawn to a natural and intuitive computer/user interface based upon computer vision interaction by system users. As used herein, “computer vision” is the use of a computer to interpret information received from a video imaging device in terms of what objects the imaging device sees. Computer vision permits recognition of user gestures, body motions, head motions, eye motions, etc. The recognized user motions, in turn, are used to interact with multimedia and virtual reality applications. Specifically, the present invention takes the system users' silhouettes in two-dimensional image coordinates and projects them into the three-dimensional image coordinates the system users occupy in the interaction area.

Such an approach is undemanding of the system users and inexpensive to implement. If the system users are interacting via a large-screen display, the computer knows where the system users are looking and pointing, etc., and manipulates the information on the display accordingly. Further, the position and pose of the system users in front of the display screen are extracted and used for interaction with a three-dimensional graphical model. The addition of gesture interpretation to the computer vision system of the present invention adds realism to the interaction with the computer. For example, intuitive hand gestures may be used as an interface with the computer system.

Rather than relying on conventional SGI-based software, the present invention utilizes a camera-based user interface system 50, as best shown in FIG. 3. System 50 comprises a video camera 56, a video display screen 54, and a computer 58 having a Philips® single board image processor (SBIP) 60. SBIP 60 eliminates problems (1)-(3) encountered in the approach proposed by the Media Lab at the Massachusetts Institute of Technology, and thus, also encountered in the two-dimensional systems. Computer 58 also comprises a computer-readable memory 66 encoded with three-dimensional imaging software. SBIP 60 utilizes the software so that system 50 may handle the three-dimensional body motions of the system user. The three-dimensional imaging software of the present invention corrects problems (4) and (5) encountered in the approach proposed by the Media Lab at the Massachusetts Institute of Technology.

To address problem (6) of the Media Lab approach, the present invention provides an interactive video environment (“IVE” ) capable of evaluating several IVE paradigms other than the “magic mirror” paradigm proposed by Massachusetts Institute of Technology. The present invention is capable of evaluating the following four IVE paradigms: (1) the display shows live video of a camera input of a remote site, and the video camera input of the system users is composited with the live video on the display (this is known as “mirror” effect, as in the MIT approach); (2) the display shows live video of the remote site, and the camera input of the users is not composited with the live video on the display (this is known as “window” effect); (3) the display shows graphical images as in virtual reality, and the camera input of the system users is composited with the graphical images on the display; and (4) the display shows graphical images, and the camera input of the system users is not composited with the graphical images on the display.
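For illustration, the four paradigms reduce to two independent switches: whether the displayed scene is live video or computer graphics, and whether the user's camera image is composited into it. A hedged sketch of that framing follows; the type and flag names are mine, not the patent's.

```python
# Illustrative encoding of the four IVE paradigms as two boolean switches.
from dataclasses import dataclass

@dataclass(frozen=True)
class IVEParadigm:
    graphical_scene: bool  # False: live video of a remote site; True: graphics
    composite_user: bool   # True: user's camera image keyed into the scene

MIRROR         = IVEParadigm(graphical_scene=False, composite_user=True)   # (1) MIT "magic mirror"
WINDOW         = IVEParadigm(graphical_scene=False, composite_user=False)  # (2) "window" effect
GRAPHIC_MIRROR = IVEParadigm(graphical_scene=True,  composite_user=True)   # (3)
GRAPHIC_WINDOW = IVEParadigm(graphical_scene=True,  composite_user=False)  # (4)
```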

A. Detailed Description of the System Hardware of the Preferred Embodiments

As embodied herein, a system and method for constructing three-dimensional images using camera-based gesture inputs of a preferred embodiment of the present invention is shown in FIG. 3. Specifically, system 50 comprises a means for processing video signals, such as a computer 58, electrically coupled to a means for generating video signals, such as a video camera 56. Computer 58 is electrically coupled to a means for displaying video images, such as a video display screen 54. Preferably, video display screen 54 is located in front of an interaction area 52 where system users 62 stand. Video camera 56 electronically reads the images of users 62 and interactive area 52, creates video signals corresponding to these images, and provides the video signals to computer 58.

Preferably, computer 58 comprises a typical microprocessor-based computing device such as an IBM-compatible personal computer. Computer 58 further comprises a serial port 64 connected to a microprocessor 60 for receiving the video signals from video camera 56, and a conventional computer-readable memory 66 capable of being encoded with software programs. Microprocessor 60 preferably is a Philips® single board image processor (SBIP). SBIP 60 uses the software (described below), encoded in computer memory 66, for mapping the two-dimensional image features of users 62 and interactive area 52 and calculating the three-dimensional position of system users 62 within interactive area 52. SBIP 60 also preferably uses an application program permanently encoded within computer-readable memory 66, or temporarily encoded therein via an external computer-readable memory, such as, for example, a floppy diskette or a CD ROM. Computer 58 further comprises a mode control port 68, connected to SBIP 60 for receiving data from other interactive controls such as a remote control, and a video processor port 70 for delivering video signals to video display screen 54. The software encoded in computer-readable memory 66, and used by SBIP 60, isolates the contours of the system users, determines their body and limb positions in three-dimensional image space, and generates a video signal corresponding to the body and limb position to video display screen 54.

Display screen 54 preferably consists of a conventional audio/visual monitor system capable of displaying three-dimensional graphical information. The type of display screen 54 and video camera 56 used in the present invention is arbitrary and may be chosen based only upon the intended application of the system of the present invention.

In a more preferred embodiment of the system for constructing three-dimensional images using camera-based gesture inputs of the preferred embodiment, video display screen 54 is a rear-projection Ikegami TPP 1000/1500® projector with a Retroscan RS 125SW® screen (six feet in height in the y direction and eight feet in width in the x direction); interaction area 52 is an eleven feet (in the z direction) by twelve feet (in the x direction) area in front of video display screen 54; and video camera 56 is a Sony® NTSC video camera.

An alternate embodiment of the system and method for constructing three-dimensional images using camera-based gesture inputs in accordance with the present invention is shown in FIG. 4. As shown, the alternate embodiment 80 comprises a video camera 56 and computer 58 with SBIP 60 and computer-readable memory 66 similar to those described with reference to FIG. 3. However, the alternate embodiment further comprises a compact disc reader 84 capable of reading an application program encoded on a CD ROM and providing such application program to SBIP 60. The alternate embodiment also comprises a remote controller 86 for controlling features of the application program. Furthermore, in contrast to the display screen of the embodiment shown in FIG. 3, the alternate embodiment includes a conventional television display 82 capable of receiving video signals from SBIP 60 and transmitting information to SBIP 60.

B. Description of the System Software of the Preferred Embodiments

In accordance with the preferred embodiments of the present invention, the software for mapping two-dimensional image features of system users and an interactive area onto three-dimensional locations within the interactive area, as well as the applications programs for use with the preferred embodiments, will now be described. For ease of reference, the software and applications programs are described with reference to a single system user. However, it is to be understood that the camera-based gesture recognition technology of the present invention can be used with multiple users by identifying each user individually and interacting with each user accordingly.

FIG. 5 is a flowchart showing the steps involved in the software process for mapping two-dimensional image features of a system user 62 onto three-dimensional locations in the room where system user 62 is located in accordance with the preferred embodiments of the present invention shown in FIGS. 3 and 4. The three-dimensional imaging software may be permanently encoded within computer-readable memory 66 or may be temporarily encoded in memory 66 via a diskette, CD ROM, or similar memory storage means. As shown, the three-dimensional imaging software process comprises a first step 100 of extracting the two-dimensional head, hands and feet features of a user in image space coordinates (u, v) using the two-dimensional extraction process shown in FIG. 2. Coordinates (u, v) correspond to the two-dimensional x-y plane of the user in front of video camera 56. The three-dimensional imaging process further comprises a step 102 of reading the orientation and location of video camera 56 in three-dimensional coordinates (x, y, z) with respect to the room. Assuming the user's feet are on the floor, at step 104 the software process projects the two-dimensional, extracted features of the user's feet to three-dimensional coordinates (x, y, z) of the user's feet using the (x, y, z) orientation of camera 56 with respect to the room. At step 106, the software process projects the two-dimensional, extracted features of the user's head and hands to three-dimensional coordinates (x, y, z) of the user's head and hands, assuming that the head and hands are slightly offset from the position of the feet in the z direction and using the (x, y, z) orientation of camera 56 with respect to the room.
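Step 104 amounts to intersecting the camera ray through the feet's image coordinates with the floor plane. Below is a minimal sketch under the assumption of a pinhole camera model with known intrinsics K and a camera-to-room pose; the patent specifies neither representation, so all parameters here are hypothetical.

```python
import numpy as np

def project_feet_to_floor(u, v, K, R_cam_to_room, cam_center):
    """Back-project image point (u, v) onto the floor plane y = 0.

    K: 3x3 camera intrinsics; R_cam_to_room: 3x3 rotation from camera
    to room coordinates; cam_center: camera position (x, y, z) in the
    room (the step 102 data, in an assumed form).
    """
    # Viewing ray through the pixel, expressed in room coordinates.
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_room = R_cam_to_room @ ray_cam
    # The feet are assumed on the floor: solve cam_center + s * ray_room
    # for the scale s that makes the y component zero.
    s = -cam_center[1] / ray_room[1]
    return cam_center + s * ray_room
```

Step 106 would reuse the same intersection with the plane shifted to the assumed z offset of the head and hands.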

The three-dimensional imaging process further comprises a step 108 of using the measured height (h) of the user to access a biometric data table (shown at step 110) indexed by height and stored within computer-readable memory 66. An example of a biometric data table capable of use with the present invention is shown in FIG. 8. The present invention is not limited by the biometric data shown in FIG. 8, since other biometric data may be utilized as set forth in D. Chaffin & G. Andersson, Occupational Biomechanics, 2d ed. (1991), L. Farkas, Anthropometry of the Head and Face, 2d ed. (1994), and N.A.S.A., Anthropometric Source Book, vols. 1-3 (1978). The three-dimensional imaging process assumes that the user's shoulders are offset from the top of the user's head to the bottom of the user's neck by a distance of 0.182h in the y direction, and that the shoulder width from the center of the user's back to the end of the shoulder blade is 0.129h in the x direction, wherein h is the user's height. The imaging process further assumes that the user's arm length is 0.44h, and utilizes the assumed arm length (0.44h) until a measured arm length greater than 0.44h is extracted by video camera 56. The software process further comprises a step 112 of calculating each arm's offset in the z direction from the corresponding foot, using the assumed arm length (0.44h) calculated in step 108. At step 112, each arm's actual length in the z direction is calculated from the assumed arm length using the principle of foreshortening. The software process comprises a final step 114 of supplying the three-dimensional positions of the user's head, hands and feet to an application program.
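The biometric assumptions of step 108 reduce to fixed ratios of the measured height h. A small sketch collecting the ratios quoted above; the function and key names are illustrative, not the patent's.

```python
def biometric_offsets(h: float) -> dict:
    """Body-part offsets derived from the user's measured height h (step 108)."""
    return {
        "shoulder_offset_y": 0.182 * h,      # top of head to bottom of neck
        "shoulder_half_width_x": 0.129 * h,  # center of back to shoulder blade end
        "arm_length": 0.44 * h,              # assumed until a longer arm is measured
    }
```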

C. Examples of Application Programs for Use with the Preferred Embodiments

The invention will be further clarified by the following examples of application programs capable of use with the system and method for constructing three-dimensional images using camera-based inputs of the present invention. The application programs are intended to be purely exemplary of the uses of the preferred embodiments of the present invention, and are not intended to limit the scope of the broad features of the invention. The preferred embodiments of the present invention can be used with any application requiring calculation of a three-dimensional position of a user so that the user may manipulate graphical computer-generated objects in three dimensions. Examples of application programs include a three-dimensional soccer video game, a home shopping application, an information wall for multiple user interaction, a telecommunications application, a gesture-based remote control, and a home exercise application.

1. Three-Dimensional Soccer Video Game

FIG. 6 is a block diagram showing a three-dimensional soccer (also known as “football” throughout the world) video game application using the system and method of the preferred embodiment shown in FIG. 3. A user 62 of the soccer game stands in front of video display screen 54 on which a graphical image of the virtual soccer game is displayed. Video camera 56 views user 62 and SBIP 60 processes data received from camera 56 by extracting the image of user 62 and by identifying the user body motions, such as the three-dimensional positions of the user's head, hands, legs, feet, etc., as described above.

Video display screen 54 displays the camera image of user 62 and interaction area 52, and also displays a graphical overlay of a soccer field over interaction area 52. Screen 54 displays a graphical image of a goal area 96 on the floor towards one side of interaction area 52, and displays a graphical image of a soccer ball 94 on the floor in the middle of interaction area 52. Goal area 96 and soccer ball 94 are preferably displayed in a scaled and rotated fashion so as to appear as if they were on the floor.

When the user approaches a part of interaction area 52 where the graphical soccer ball 94 resides, the user can seemingly “kick” soccer ball 94. The system of the present invention does not actually respond to the “kick”. Rather, the system responds to the direction from which the user approaches soccer ball 94 and to the closeness of the user to soccer ball 94. Soccer ball 94 moves with a velocity dependent upon the direction and speed with which the user approached the “kicked” soccer ball 94. This simulates a “kicking” effect by the user. Whenever soccer ball 94 hits one of the sides of interaction area 52, e.g., the front of display screen 54, a simulated back wall, or two side panels of display screen 54, soccer ball 94 “bounces” back into the playing area. The object of the virtual soccer game is to get soccer ball 94 into goal area 96.

FIG. 7 is a flowchart showing the steps involved in an application program for a three-dimensional soccer game using the system and method for constructing three-dimensional images using camera-based gesture inputs of the preferred embodiment of the present invention shown in FIG. 3. With reference to FIG. 7, the virtual soccer game application program starts at step 200 and comprises a step 202 of setting the soccer ball position (f), in x and z coordinates, as (fx, fz). At step 204, the video camera 56 orientation with respect to the user is determined, and the location of the user is read in from the three-dimensional image data extracted by the three-dimensional imaging process of FIG. 5. Step 204 further comprises setting up the graphical view of the soccer game (i.e., goal area 96 and soccer ball 94) so it is registered with the camera view, and lumakeying (a method of mixing two video streams, known in the art) the graphics and video from camera 56 together to yield a meaningful illusion for the user. The virtual soccer game application program further comprises a step 206 of drawing soccer ball 94 and goal area 96 onto the black background of display screen 54, setting the lumakeyer to key the video obtained at step 204 into the black background of display screen 54, and displaying the lumakeyed results onto display screen 54.

The virtual soccer game application program also comprises a step 208 of measuring the user's current foot position (p), in x and z coordinates, as (px, pz). At step 210, if the absolute value of the difference between current foot position (p) and soccer ball position (f), i.e., |p−f|, is less than a predetermined variable (del), then the user's foot velocity (fv) is set equal to k*(p−f). The value "k" is a scaling factor, like a spring constant, and through experimentation preferably is 1.1. The value "del" represents the threshold distance between the foot and the soccer ball, and through experimentation preferably is five (5) inches. The virtual soccer game application program further comprises a step 212 of moving the ball position (f) according to the foot velocity (fv) for a predetermined number of iterations, e.g., twenty (20) iterations. At step 212, the foot velocity (fv) is decreased by a predetermined variable (vdel) on each iteration so as to slow soccer ball 94 down. The value "vdel" is chosen to decrease foot velocity (fv) by ten percent each iteration. All of the predetermined values (k, del, vdel, iterations) are set to ensure that the soccer ball moves as if it were a real soccer ball. Further, at step 212, if soccer ball 94 hits a "wall", i.e., passes a predetermined x or z coordinate deemed to be a wall, then soccer ball 94 is bounced off that "wall". Finally, at step 212, if soccer ball 94 enters the space determined to be goal area 96, a bell is sounded and soccer ball 94 is reset to its initial position.
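Putting the quoted constants together, the kick logic of steps 210 and 212 can be sketched as follows. This is a hedged reconstruction, not the patent's code: the wall bounds, goal predicate, and bell callback are placeholder parameters, and the velocity formula fv = k*(p − f) is taken verbatim from the text.

```python
import numpy as np

K_SPRING = 1.1   # scaling factor "k" (found by experimentation, per the text)
DEL = 5.0        # kick threshold "del", in inches
N_ITER = 20      # iterations the ball coasts after a kick (step 212)
DECAY = 0.90     # "vdel": ten percent velocity loss per iteration

def kick_update(f, p, walls, in_goal, sound_bell, initial_f):
    """f: ball position (fx, fz); p: foot position (px, pz).

    walls: ((min_x, min_z), (max_x, max_z)) bounds of interaction area 52.
    in_goal: predicate for goal area 96; sound_bell: bell callback.
    Returns the new ball position.
    """
    f, p = np.asarray(f, float), np.asarray(p, float)
    if np.linalg.norm(p - f) < DEL:        # step 210: |p - f| < del
        fv = K_SPRING * (p - f)            # fv = k * (p - f), as in the text
        lo, hi = np.asarray(walls[0], float), np.asarray(walls[1], float)
        for _ in range(N_ITER):            # step 212: move, then decay
            f += fv
            fv *= DECAY
            for axis in (0, 1):            # bounce off the x/z walls
                if not lo[axis] <= f[axis] <= hi[axis]:
                    f[axis] = np.clip(f[axis], lo[axis], hi[axis])
                    fv[axis] = -fv[axis]
            if in_goal(f):                 # goal: ring the bell, reset the ball
                sound_bell()
                return np.asarray(initial_f, float)
    return f
```

The sketch compresses the twenty coast iterations into one call for clarity; a real game loop would spread them across display frames.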

2. Home Shopping Application

A home shopping application program may also be used with the preferred embodiment of the present invention shown in FIG. 4. The home shopping application utilizes the same concepts discussed above with reference to the three-dimensional soccer video game, but instead of a soccer ball being moved based upon user gestures, clothing is moved as the user tries it on.

One reason why home shopping through a television or computer catalog is uncommon is that consumers find it difficult to determine what a product will look like when they wear it. The preferred embodiment of the present invention can address this problem when used with the home shopping application. The home shopping application offers products (such as shirts, shoes, pants, dresses, hats, etc.) for sale through either a television broadcast or a CD ROM catalog. With the home shopping application, the user stands in front of the television and sees himself or herself on the television wearing a selected product. As the user moves and turns, the preferred embodiment of the present invention determines the body motions and transforms the computer-generated graphical image of the product accordingly. Automatic size generation of a product is also possible with the home shopping application.

3. Information Wall for Multiple User Interaction

An information wall application program may also use the system and method of the preferred embodiment shown in FIG. 3. The information wall comprises a large, notice board-like display screen that multiple users can interact with, providing a highly intuitive and interactive information system. Such an application is preferably deployed in shopping malls, museums, libraries, galleries, and other similar environments.

For example, in a shopping mall the information wall would allow shoppers entering the mall to activate it simply by standing within a certain distance of the wall. The information wall then displays an overall map of the mall at the position and height of the person standing in front of it. A number of information icons are displayed around the map from which the shopper can select by pointing. By pointing at the icons, the shopper can display various pieces of information, such as the location of certain stores and restrooms, and so forth. The information wall may also support forms of advertising. For example, by pointing at a store on the map, the shopper could display a short video sequence describing the products and services offered by the store. The information wall may also permit the display to follow a user as he or she walks along its length, pointing in the correct direction to enable the shopper to get where he or she wants to go.

4. Telecommunications Applications

The preferred embodiments of the present invention may also be used with telecommunications applications. Currently, bandwidth problems prevent consumer telecommunications via video. With the present invention, users can communicate via a shared virtual reality world rather than via their actual environments. Only video of the user silhouette, which the preferred embodiments of the present invention extract, needs to be transmitted and shown in the virtual environment. This approach could be simplified even further by showing the users as computer-generated bodies (in the correct position and pose, since the present invention can determine both) and transmitting video of the head region only.

Multi-user video conferencing may also be aided by the present invention. Currently, a user needs to pan and zoom the camera from user to user of a teleconference. The present invention could be used as part of a commercial teleconferencing system in which the camera is controlled by the gestures of the participants in the teleconference. For example, pointing at a participant causes the camera to focus on that participant, raising a hand attracts the camera's focus to the person raising it, and so forth.

5. Gesture-Based Remote Control

The preferred embodiments of the present invention could also be used as part of the infrastructure of an integrated home entertainment and communications system, replacing the functions currently provided by a remote control unit. For example, the user's position within the room, as well as user body pose and gestures, could all be accessed by the present invention. Pointing at a CD player could display the controls for the CD player on the television, and pointing at menu items on the television could select those items.

If more than one television (or display) is in the room, the position of the user could be used to determine which television is employed. If there is more than one user, it is also conceivable that the present invention could enable separate commands issued by different users, or construct a hierarchy of authority for the different commands.

Additionally, a conventional remote control could be used with the present invention, wherein the present invention simplifies the functionality of the remote control, e.g., so that it has only four buttons. With the present invention, a user could point the remote control at the CD player (or stand adjacent thereto), and the remote control would function as a CD player remote. Alternatively, the user could sit in front of the television and the remote control would function as a channel changer. Finally, the remote control could be used to establish a hierarchy of authority wherein the preferred embodiments of the present invention will respond only to the user holding the remote control.

6. Home Exercise Application

The preferred embodiments of the present invention could also be used to support home exercise CD ROM programs, wherein the user buys his/her own celebrity trainer. The present invention provides the home exercise program with information on the location of the user in the room, so that the trainer always looks in the direction of the user. The present invention can also determine when the user stops in the middle of an exercise, so that the trainer can recommend an alternate exercise regimen. It is also possible for the trainer to critique the way a user is exercising and offer helpful advice.
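For illustration only, the following Python sketch shows two helpers a home exercise program might build on the user-location information the present invention provides: turning the trainer toward the user, and flagging when exercise appears to have stopped. The inactivity window and motion threshold are assumptions, not values from the disclosure.

import math
from collections import deque

def trainer_yaw_deg(trainer_xz: tuple, user_xz: tuple) -> float:
    # Yaw (degrees) that turns the on-screen trainer toward the user's position.
    dx = user_xz[0] - trainer_xz[0]
    dz = user_xz[1] - trainer_xz[1]
    return math.degrees(math.atan2(dx, dz))

class StoppageDetector:
    # Flags when average body movement stays below a threshold for a full window.
    def __init__(self, window_frames: int = 30, min_motion_m: float = 0.02):
        self.history = deque(maxlen=window_frames)
        self.min_motion_m = min_motion_m

    def update(self, frame_displacement_m: float) -> bool:
        self.history.append(frame_displacement_m)
        full = len(self.history) == self.history.maxlen
        return full and sum(self.history) / len(self.history) < self.min_motion_m

print(round(trainer_yaw_deg((0.0, 0.0), (1.0, 1.0)), 1))  # 45.0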

An additional feature of the home exercise application would be to combine video input of the user with the graphically-generated image of the trainer and display both on a television (similar to the way clothing is displayed on users in the home shopping application). Such a feature gives users the advantage of seeing themselves in action, and permits the trainer to point at or touch portions of the video image of the user so as to impart advice, e.g., lift your leg this high.
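A minimal compositing sketch follows, assuming the exercise program renders the trainer as an RGBA layer the same size as the camera frame; alpha blending is one plausible way, not the disclosed method, of realizing the combined display described above.

import numpy as np

def composite(user_frame: np.ndarray, trainer_rgba: np.ndarray) -> np.ndarray:
    # Alpha-blend an RGBA rendering of the trainer over the user's video frame.
    rgb = trainer_rgba[..., :3].astype(np.float32)
    alpha = trainer_rgba[..., 3:4].astype(np.float32) / 255.0  # broadcast per pixel
    blended = alpha * rgb + (1.0 - alpha) * user_frame.astype(np.float32)
    return blended.astype(np.uint8)

# e.g. television_image = composite(camera_frame, rendered_trainer)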

It will be apparent to those skilled in the art that various modifications and variations can be made in the system and method for constructing three-dimensional images using camera-based gesture inputs of the present invention, and in the construction of this system, without departing from the scope or spirit of the invention. As an example, the system and method could be used with other application programs that require three-dimensional construction of images and users, and that require interaction between the users and the three-dimensional images. Further, CD reader 84 and remote 86 of the system shown in FIG. 4 may be used with the system shown in FIG. 3. Finally, audio features may be incorporated into the preferred embodiments to provide voice-recognized commands from the system user and sound effects accompanying the display.

Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.

Classifications

U.S. Classification: 345/473
International Classification: G06F3/00, G06T1/00, G06T7/20, G06F3/01, A63F13/00
Cooperative Classification: A63F2300/1093, G06F3/017, H04N5/23219, A63F2300/69
European Classification: H04N5/232H, G06F3/01G
Legal Events

Date | Code | Event | Description
14 Aug 2000 | AS | Assignment | Owner: PHILIPS ELECTRONICS NORTH AMERICA CORP., NEW YORK; assignment of assignors interest; assignor: LYONS, DAMIAN M.; reel/frame: 011098/0946; effective date: 5 Jun 1998
17 Aug 2000 | AS | Assignment | Owner: PHILIPS ELECTRONICS NORTH AMERICA CORP., NEW YORK; assignment of assignors interest; assignor: LYONS, DAMIAN M.; reel/frame: 011174/0214; effective date: 5 Jun 1998
1 Jan 2002 | CC | Certificate of correction |
26 Jul 2004 | FPAY | Fee payment | Year of fee payment: 4
4 Aug 2008 | FPAY | Fee payment | Year of fee payment: 8
8 Oct 2012 | REMI | Maintenance fee reminder mailed |
27 Feb 2013 | LAPS | Lapse for failure to pay maintenance fees |
16 Apr 2013 | FP | Expired due to failure to pay maintenance fee | Effective date: 27 Feb 2013