US20050024324A1 - Quasi-three-dimensional method and apparatus to detect and localize interaction of user-object and virtual transfer device - Google Patents
Quasi-three-dimensional method and apparatus to detect and localize interaction of user-object and virtual transfer device
- Publication number
- US20050024324A1 (Application US10/750,452)
- Authority
- US
- United States
- Prior art keywords
- plane
- virtual
- user
- optical system
- optical
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/1626—Constructional details or arrangements for portable computers with a single-body enclosure integrating a flat display, e.g. Personal Digital Assistants [PDAs]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/1632—External expansion units, e.g. docking stations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/16—Constructional details or arrangements
- G06F1/1613—Constructional details or arrangements for portable computers
- G06F1/1633—Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
- G06F1/1662—Details related to the integrated keyboard
- G06F1/1673—Arrangements for projecting a virtual keyboard
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/02—Input arrangements using manually operated switches, e.g. using keyboards or dials
- G06F3/0202—Constructional details or processes of manufacture of the input device
- G06F3/0221—Arrangements for reducing keyboard size for transport or storage, e.g. foldable keyboards, keyboards with collapsible keys
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/042—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
- G06F3/0421—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means by interrupting or reflecting a light beam, e.g. optical touch-screen
- G06F3/0423—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means by interrupting or reflecting a light beam, e.g. optical touch-screen using sweeping light beams, e.g. using rotating or vibrating mirror
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/042—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
- G06F3/0425—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface, e.g. video camera imaging a display or a projection screen, a table or a wall surface, on which a computer generated image is displayed or projected
- G06F3/0426—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface, e.g. video camera imaging a display or a projection screen, a table or a wall surface, on which a computer generated image is displayed or projected tracking fingers with respect to a virtual keyboard projected or printed on the surface
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04886—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/22—Character recognition characterised by the type of writing
- G06V30/228—Character recognition characterised by the type of writing of three-dimensional handwriting, e.g. writing in the air
Definitions
- the invention relates generally to sensing proximity of a stylus or user finger relative to a device to input or transfer commands and/or data to a system, and more particularly to such sensing relative to a virtual device used to input or transfer commands and/or data and/or other information to a system.
- if a system could be used to determine when a user's fingers or stylus contacted a virtual keyboard, and what fingers contacted what virtual keys thereon, the output of the system could perhaps be input to the PDA in lieu of keyboard information.
- the terms “finger” or “fingers” and “stylus” are used interchangeably herein.
- a virtual keyboard might be a piece of paper, perhaps one that unfolds to the size of a keyboard, with keys printed thereon, to guide the user's hands. It is understood that the virtual keyboard or other input device is simply a work surface and has no sensors or mechanical or electronic components.
- the paper and keys would not actually input information, but the interaction or interface between the user's fingers and portions of the paper, or if not paper, portions of a work surface, whereon keys would exist, could be used to input information to the PDA.
- a similar virtual device and system might be useful to input e-mail to a cellular telephone.
- a virtual piano-type keyboard might be used to play a real musical instrument. The challenge is how to detect or sense where the user's fingers or a stylus are relative to the virtual device.
- in the prior Bamji system, a light source emitted optical energy towards a target object, e.g., a virtual device, and energy reflected by portions of the object within the imaging path was detected by an array of photodiodes.
- the actual time-of-flight between emission of the optical energy and its detection by the photodiode array was determined. This measurement permitted calculating the vector distance to the point on the target object in three-dimensions, e.g., (x,y,z).
- the described system examined reflected emitted energy, and could function without ambient light. If for example the target object were a layout of a computer keyboard, perhaps a piece of paper with printed keys thereon, the system could determine which user finger touched what portion of the target, e.g., which virtual key, in what order. Of course the piece of paper would be optional and would be used to guide the user's fingers.
- Three-dimensional data obtained with the Bamji invention could be software-processed to localize user fingers as they come in contact with a touch surface, e.g., a virtual input device.
- the software could identify finger contact with a location on the surface as a request to input a keyboard event to an application executed by an associated electronic device or system (e.g., a computer, PDA, cell phone, Kiosk device, point of sale device, etc.).
- while the Bamji system worked and could be used to input commands and/or data to a computer system using three-dimensional imaging to analyze the interface of a user's fingers and a virtual input device, a less complex and perhaps less sophisticated system is desirable.
- such new system should be relatively inexpensive to mass produce and should consume relatively little operating power such that battery operation is feasible.
- the present invention provides such a system.
- the present invention localizes interaction between a user finger or stylus and a passive touch surface (e.g., virtual input device), defined above a work surface, using planar quasi-three-dimensional sensing. Quasi-three-dimensional sensing implies that determination of an interaction point can be made essentially in three dimensions, using as a reference a two-dimensional surface that is arbitrarily oriented in three-dimensional space.
- the invention localizes the touch region to determine where on a virtual input device the touching occurred, and what data or command keystroke, corresponding to the localized region that was touched, is to be generated in response to the touch.
- the virtual input device might include a virtual mouse or trackball.
- the present invention would detect and report coordinates of the point of contact with the virtual input device, which coordinates would be coupled to an application, perhaps to move a cursor on a display (in a virtual mouse or trackball implementation) and/or to lay so-called digital ink for a drawing or writing application (virtual pen or stylus implementation).
- triangulation analysis methods preferably are used to determine where user-object “contact” with the virtual input device occurs.
- the invention includes a first optical system (OS1) that generates a plane of optical energy defining a fan-beam of fan angle Φ parallel to, and a small stand-off distance ΔY above, the work surface whereon the virtual input device may be defined.
- the plane of interest is the plane of light produced by OS1, typically a laser or LED light generator.
- the two parallel planes may typically be horizontal, but they may be disposed vertically or at any other angle that may be convenient.
- the invention further includes a second optical system (OS2) that is responsive to optical energy of the same wavelength as emitted by OS1.
- OS2 is disposed above OS1 and angled with offset θ, relative to the fan-beam plane, toward the region where the virtual input device is defined.
- OS2 is responsive to energy emitted by OS1, but the wavelength of the optical energy need not be visible to humans.
- the invention may also be implemented using non-structured-light configurations that may be active or passive.
- OS1 is a camera rather than an active source of optical energy
- OS2 is a camera responsive to the same optical energy as OS1, and preferably disposed as described above.
- the plane of interest is the projection plane of a scan line of the OS1 camera.
- OS1 and OS2 are cameras and the invention further includes an active light source that emits optical energy having wavelengths to which OS1 and OS2 respond.
- OS1 and OS2 can each include a shutter mechanism synchronized to output from the active light source, such that shutters in OS1 and OS2 are open when optical energy is emitted, and are otherwise closed.
- An advantage of a non-structured light configuration using two cameras is that bumps or irregularities in the work surface are better tolerated.
- the plane defined by OS1 may be selected by choosing an appropriate row of OS1 sensing pixel elements to conform to the highest y-dimension point (e.g., bump) of the work surface.
- OS2 will not detect optical energy until an object, e.g., a user finger or stylus, begins to touch the work surface region whereon the virtual input device is defined. However, as soon as the object penetrates the plane of optical energy emitted by OS1, the portion of the finger or stylus intersecting the plane will be illuminated (visibly or invisibly to a user). OS2 senses the intersection with the plane of interest by detecting optical energy reflected towards OS2 by the illuminated object region. Essentially only one plane is of interest to the present invention, as determined by configuration of OS1, and all other planes definable in three-dimensional space parallel to the virtual input device can be ignored as irrelevant. Thus, a planar three-dimensional sensor system senses user interactions with a virtual input device occurring on the emitted fan-beam plane, and ignores any interactions on other planes.
- the present invention detects that an object has touched the virtual input device. Having sensed that a relevant touch-intersection is occurring, the invention then localizes in two-dimensions the location of the touch upon the plane of the virtual device.
- localized events can include identifying which virtual keys on a virtual computer keyboard or musical keyboard are touched by the user. The user may touch more than one virtual key at a time, for example the “shift” key and another key. Note too that the time order of the touchings is determined by the present invention.
- the present invention will recognize what is being input as “T” then “h” and then “e”, or “The”. It will be appreciated that the present invention does not rely upon ambient light, and thus can be fully operative even absent ambient light, assuming that the user knows the location of the virtual input device.
- Structured-light and/or non-structured light passive triangulation methods may be used to determine a point of contact (x,z) between a user's hand and the sense plane. Since the baseline distance B between OS1 and OS2 is known, a triangle is formed between OS1, OS2 and point (x,z), whose sides are B, and projection rays R1 and R2 from OS1, OS2 to (x,z). OS1 and OS2 allow determination of triangle angular distance from a reference plane, as well as angles α1 and α2 formed by the projection rays, and trigonometry yields distance z to the surface point (x,z), as well as projection ray lengths.
- a processor unit associated with the present invention executes software to identify each intersection of a user-controlled object with the virtual input device and determines therefrom the appropriate user-intended input data and/or command, preferably using triangulation analysis.
- the data and/or commands can then be output by the present invention as input to a device or system for which the virtual input device is used.
- the present invention may be implemented within the companion device or system, especially for PDAs, cellular telephones, and other small form factor device or systems that often lack a large user input device such as a keyboard.
- FIG. 1A depicts a planar quasi-three-dimensional detection structured-light system used to detect user input to a virtual input device, according to the present invention
- FIG. 1B depicts a planar quasi-three-dimensional detection non-structured active light system used to detect user input to a virtual input device, according to the present invention
- FIG. 1C depicts a planar quasi-three-dimensional detection non-structured passive light system used to detect user input to a virtual input device, according to the present invention
- FIG. 2A depicts geometry associated with location determination using triangulation, according to the present invention
- FIG. 2B depicts use of a spaced-apart optical emitter and reflector as a first optical system, according to the present invention
- FIGS. 3A-3E depict design tradeoffs associated with varying orientations of OS2 sensor, OS2 lens, and detection plane upon effective field of view and image quality, according to the present invention
- FIG. 4 is a block diagram depicting functions carried out by a processor unit in the exemplary system of FIG. 1B , according to an embodiment of the present invention
- FIG. 5A depicts an embodiment wherein the virtual device has five user-selectable regions and the companion device is a monitor, according to the present invention
- FIG. 5B depicts an embodiment wherein the virtual device is a computer keyboard and the companion device is a mobile transceiver, according to the present invention
- FIG. 5C depicts an embodiment wherein the virtual device is mounted or projected on a wall and the companion device is a monitor, according to the present invention
- FIG. 6 depicts planar range sensing, according to the present invention.
- FIG. 7 depicts coordinate distance measurements used in an exemplary calculation of touch location for use in outputting corresponding information or data or command, according to the present invention.
- FIG. 1A depicts a preferred embodiment of a quasi-planar three-dimensional sensing system 10 comprising, in a structured-light system embodiment, a first optical system (OS1) 20 that emits a fan-beam plane 30 of optical energy parallel to a planar work surface 40 upon which there is defined a virtual input device 50 and/or 50 ′ and/or 50 ′′.
- the fan-beam defines a fan angle Φ, and is spaced apart from the work surface by a small stand-off distance ΔY. Any object (e.g., a user finger or stylus) attempting to touch the work surface must first contact the fan-beam and will thereby be illuminated (visibly or not visibly) with emitted optical energy.
- although fan-beam plane 30 and the work surface plane 40 are shown horizontally disposed in FIG. 1A, these two planes may be disposed vertically or indeed at any other angle that may be desired for a system.
- work surface 40 could be a portion of a work desk, a table top, a portion of a vehicle, e.g., a tray in an airplane, a windshield or dashboard, a wall, a display including a projected image, or a display such as a CRT, an LCD, etc.
- the term “plane” will be understood to include a subset of a full plane.
- fan-beam plane 30 will be termed a plane, even though it has finite width and does not extend infinitely in all directions.
- by virtual input device it is meant that an image of an input device may be present on work surface 40, perhaps by placing a paper bearing a printed image, or perhaps system 10 projects a visible image of the input device onto the work surface, or there literally may be no image whatsoever visible upon work surface 40.
- virtual input device 50 , 50 ′, 50 ′′ requires no mechanical parts such as working keys, and need not be sensitive to a touch by a finger or stylus; in short, the virtual input device preferably is passive.
- virtual input device 50 is a computer-type keyboard that may be full sized or scaled up or down from an actual sized keyboard.
- the virtual input device may comprise or include a virtual trackball 50 ′ and/or a virtual touchpad 50 ′′.
- a fan angle Φ of about 50° to 90° and preferably about 90° will ensure that fan beam 30 encompasses the entire virtual input device at distances commonly used.
- a stand-off distance ΔY of up to a few mm works well, preferably about 1 mm.
- System 10 further includes a second optical system (OS2) 60, typically a camera with a planar sensor, that is preferably spaced apart from and above OS1 20, and inclined toward work surface 40 and plane 30 at an angle θ, about 10° to about 90°, and preferably about 25°.
- System 10 further includes an electronic processing system 70 that, among other tasks, supervises OS1 and OS2.
- System 70 preferably includes at least a central processor unit (CPU) and associated memory that can include read-only-memory (ROM) and random access memory (RAM).
- system 10 elements OS1 20 , OS2 60 , and processor unit 70 are shown disposed on or in a device 80 .
- Device 80 may be a stand-alone implementation of system 10 or may in fact be a system or device for which virtual input device 50 is used to input data or commands. In the latter case, device 80 may, without limitation, be a computer, a PDA (as shown in FIG. 1A ), a cellular telephone, a musical instrument, etc. If system or device 80 is not being controlled by the virtual input device, the device 90 being so controlled can be coupled electrically to system/device 80 to receive data and/or commands input from virtual device 50.
- where the virtual device is a trackball (or mouse) 50′ or touchpad 50″, user interaction with such virtual device can directly output raw information or data comprising touch coordinates (x,z) for use by device 80.
- user interaction with virtual input device 50 ′ or 50 ′′ might reposition a cursor 160 on a display 140 , or otherwise alter an application executed by device 80 , or lay down a locus of so-called digital ink 180 that follows what a user might “write” using a virtual mouse or trackball 50 ′, or using a stylus 120 ′ and a virtual touchpad 50 ′′.
- System/device 90 can be electrically coupled to system 80 by a medium 100 that may without limitation include wire(s) or be wireless, or can be a network including the internet.
- OS1 20 emits optical energy in fan-beam 30, parallel to the x-z plane.
- OS1 may include a laser line generator or an LED line generator, although other optical energy sources could be used to emit plane 30 .
- a line generator OS1 is so called because it emits a plane of light that when intersected by a second plane illuminates what OS2 would view as a line on the second plane. For example if a cylindrical object intersected plane 30 , OS2 would see the event as an illuminated portion of an elliptical arc whose aspect ratio would depend upon distance of OS2 above plane 30 and surface 40 .
- detection by OS2 of an elliptical arc on plane 30 denotes a touching event, e.g., that an object such as 120 R has contacted or penetrated plane 30 .
- a laser diode outputting perhaps 3 mW average power at a wavelength of between 300 nm to perhaps 1,000 nm could be used.
- ambient light wavelengths are perhaps 350 nm to 700 nm; the effects of ambient light may be minimized without filtering or shutters if such wavelengths are avoided.
- wavelengths of about 600 nm (visible red) up to perhaps 1,000 nm (deep infrared) could be used.
- a laser diode outputting 850 nm wavelength optical energy would represent an economical emitter, although OS2 would preferably include a filter to reduce the effects of ambient light.
- although OS1 preferably is stationary in a structured-light embodiment, it is understood that fan-beam 30 could be generated by mechanically sweeping a single emitted line of optical energy to define the fan-beam plane 30.
- OS1 may in fact include an optical energy emitter 20 -A that emits a fan beam, and a reflecting mirror 20 -B that directs the fan beam 30 substantially parallel to surface 40 .
- optical energy emitted by OS1 20 may be visible to humans or not visible.
- OS2 60 preferably includes a camera system responsive to optical energy of the wavelength emitted by OS1 20 .
- OS2 recognizes energy of the same wavelength emitted by OS1, and ideally will not recognize or respond to energy of substantially differing wavelength.
- OS2 may include a filter system such that optical energy of wavelength other than that emitted by OS1 is not detected, for example a color filter.
- OS2 could be made responsive substantially solely to optical energy emitted from OS1 by synchronously switching OS1 and OS2 on and off at the same time, e.g., under control of unit 70 .
- OS1 and OS2 preferably would include shutter mechanisms, depicted as elements 22 , that would functionally open and close in synchronized fashion.
- electronic processing system 70 could synchronously switch-on OS1, OS2, or shutter mechanisms 22 for a time period t 1 with a desired duty cycle, where t 1 is perhaps in the range of about 0.1 ms to about 35 ms, and then switch-off OS1 and OS2.
- OS1 could be operated at all times, where plane 30 is permitted to radiate only when shutter 22 in front of OS1 20 is open.
- repetition rate of the synchronous switching is preferably in the range of 20 Hz to perhaps 300 Hz to promote an adequate rate of frame data acquisition. To conserve operating power and reduce computational overhead, a repetition rate of perhaps 30 Hz to 100 Hz represents an acceptable rate.
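- As a minimal sketch of the synchronized switching just described (assuming a hypothetical Emitter/Camera hardware interface, since no specific interface is given here), the following Python loop pulses the OS1 emitter and exposes the OS2 camera only during the same window, at a repetition rate within the 20 Hz to 300 Hz range mentioned above:
```python
import time

class Emitter:
    """Hypothetical stand-in for the OS1 light source (not from the patent)."""
    def on(self): pass      # begin radiating fan-beam plane 30
    def off(self): pass     # stop radiating

class Camera:
    """Hypothetical stand-in for the OS2 camera and its shutter 22."""
    def capture(self):      # expose only while called, i.e., "shutter open"
        return []           # placeholder frame data

def synchronized_frames(emitter, camera, rate_hz=60, n_frames=10):
    # pulse the emitter and expose the camera in the same window so OS2
    # responds substantially only to optical energy emitted by OS1
    period = 1.0 / rate_hz
    frames = []
    for _ in range(n_frames):
        t0 = time.time()
        emitter.on()
        frames.append(camera.capture())
        emitter.off()
        # wait out the rest of the period to hold the chosen repetition rate
        time.sleep(max(0.0, period - (time.time() - t0)))
    return frames
```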
- other devices and methods for ensuring that OS2 responds substantially only to optical energy emitted by OS1 may also be used.
- shutters 22 are depicted as mechanical elements, but in practice the concept of shutters 22 is understood to include turning on and off light sources and cameras in any of a variety of ways.
- source(s) of optical energy used with the present invention could be made to carry a so-called signature to better enable such energy to be discerned from ambient light energy.
- sources might be modulated at a fixed frequency such that cameras or other sensor units used with the present invention can more readily recognize such energy while ambient light energy would, by virtue of lacking such signature, be substantially rejected.
- signature techniques such as selecting wavelengths for optical energy that differ from ambient light, techniques that involve synchronized operation of light sources and camera sensors, and modulating or otherwise tagging light source energy can all improve the signal/noise ratio of information acquired by the present invention.
- the virtual input device is entirely passive. Since device 50 is passive, it can be scaled to be smaller than a full-sized device, if necessary. Further, the cost of a passive virtual input device can be nil, especially if the “device” is simply a piece of paper bearing a printed graphic image of an actual input device.
- initially, OS1 may emit optical energy in fan-beam plane 30, but OS2 detects nothing because no object intersects plane 30.
- a portion 110 of a finger of a user's left or right hand 120 L, 120 R moves downward to touch a portion of the area of work surface 40 whereon the virtual input device 50 is defined.
- portion 110 ′ of a user-controlled stylus 120 ′ could be moved downward to touch a relevant portion of work surface 40 .
- a touch is interpreted by software associated with the invention as a request to send a keyboard event to an application running on a companion device or system 80 or 90 , e.g., notebook, PDA, cell phone, Kiosk device, point of sale device, etc.
- planar quasi-three-dimensional sense system 10 detects optical energy reflected by the interaction of a user controlled object (e.g., a finger, a stylus, etc.) occurring at a plane of interest defined by fan-beam plane 30 . Any interaction(s) that may occur on any other plane are deemed not relevant and may be ignored by the present invention.
- system 10 can generate and input to system 80 or 90 keystrokes representing data and/or commands that a user would have entered on an actual keyboard.
- Such input to system 80 or 90 can be used to show information 140 on display 150 , as the information is entered by the user on virtual input device 50 .
- an enlarged cursor region 160 could be implemented to provide additional visual input to aid the user who is inputting information.
- processor unit 70 could cause system 80 and/or 90 to emit audible feedback to help the user, e.g., electronic keyclick sounds 170 corresponding with the “pressing” of a virtual key on virtual input device 50 .
- if system 80 or 90 were a musical instrument rather than a computer or PDA or cellular telephone, musical sounds 170 would be emitted, and virtual input device 50 could instead have a configuration similar to a piano keyboard or keyboards associated with synthetic music generators.
- FIG. 1B depicts a non-structured active light system 10 , in which a camera 20 ′ in a first optical system OS1 defines a plane of interest 30 ′ that in essence replaces plane 30 defined by optical emitter OS1 in the embodiment of FIG. 1A .
- Camera 20 ′ OS1 preferably is similar to camera 60 OS2, which may be similar to camera 60 OS2 in the embodiment of FIG. 1A .
- OS1 20 ′ may have a sensor array that comprises at least one line and preferably several lines of pixel detector elements.
- the embodiment of FIG. 1B is active in that one or more light sources 190 , disposed intermediate OS1 20 ′ and OS2 60 generate optical energy of a wavelength that is detectable by camera OS1 20 ′ and by camera OS2 60 .
- each camera and each optical energy emitter 190 operates in cooperation with a shutter mechanism, preferably synchronized, e.g., by unit 70 .
- similar shutters 22 will permit cameras OS1 and OS2 to detect optical energy.
- the interaction of user-object, e.g., 120 L with plane 30 ′ is detected by OS1 and by OS2.
- the location of the point of intersection is then calculated, e.g., using triangulation methods described later herein.
- in FIG. 1B, a bump or irregularity in the plane of work surface 40 is shown near the point of contact 110 with the user-object 120L.
- An advantage of the presence of second camera OS1 20 ′ is that the plane of interest 30 ′ may be selected, perhaps by unit 70 , to lie just above the highest irregular portion of work surface 40 . If irregularities were present in work surface 40 in the embodiment of FIG. 1A , it would be necessary to somehow reposition the laser plane 30 relative to the work surface. But in FIG. 1B , the effect of such repositioning is attained electronically simply by selecting an appropriate line of pixels from the detector array with OS1 20′.
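- A minimal sketch of that row selection, assuming a one-time calibration has produced the height above the nominal work surface imaged by each OS1 row (the calibration data and function name are illustrative, not from the patent):
```python
def select_scan_row(row_heights_mm, surface_profile_mm, clearance_mm=1.0):
    # pick the OS1 row whose imaged plane clears the highest bump in the
    # work surface by a small margin, analogous to stand-off distance dY
    required = max(surface_profile_mm) + clearance_mm
    candidates = [i for i, h in enumerate(row_heights_mm) if h >= required]
    if not candidates:
        raise ValueError("no OS1 row clears the surface irregularities")
    # choose the lowest qualifying plane so touches are sensed as close to
    # the surface as possible
    return min(candidates, key=lambda i: row_heights_mm[i])
```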
- shutters 22 can permit cameras OS1 and OS2 to gather image data during a time that emitters 190 are turned off, e.g., by control unit 70. Any image data then acquired by OS1 and/or OS2 will represent background noise resulting from ambient light. (Again it is understood that to minimize effects of ambient light, emitters 190 and cameras OS1, OS2 preferably operate at a wavelength regime removed from that of ambient light.) Having acquired what might be termed a background noise signal, cameras OS1 and OS2 can now be operated normally and in synchronism with emitter(s) 190.
- Image data acquired by cameras OS1 and OS2 in synchronism with emitter(s) 190 will include actual data, e.g., user-object interface with plane 30 ′, plus any (undesired) effects due to ambient light.
- Processor unit 70 (or another unit) can then dynamically subtract the background noise signal from the actual data plus noise signal, to arrive at an actual data signal, thus enhancing the signal/noise ratio.
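- A minimal sketch of that background-subtraction step in Python/NumPy, assuming hypothetical capture() and emitter handles for the cameras and light source 190 (the interfaces are illustrative only):
```python
import numpy as np

def ambient_corrected_frame(capture, emitter):
    # frame with emitters off: ambient-light background noise only
    emitter.off()
    background = capture().astype(np.int32)
    # frame with emitters on: actual data plus the same ambient noise
    emitter.on()
    signal_plus_noise = capture().astype(np.int32)
    emitter.off()
    # subtract the background estimate; clip so noise cannot go negative
    return np.clip(signal_plus_noise - background, 0, None).astype(np.uint16)
```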
- FIG. 1C depicts a non-structured passive embodiment of the present invention.
- System 10 in FIG. 1C is passive in that whatever source 195 of ambient light is present provides optical energy used during imaging.
- OS1 is a camera 20 ′ that defines a plane of interest 30 ′
- OS2 is a camera 60 .
- plane 30′ will be defined a distance ΔY′ above work surface 40, typically a distance of a few mm.
- User-object interaction with plane 30 ′ is detected by OS1 and OS2, using optical energy from ambient light source 195 . Triangulation methods may then be used to localize the point of interaction or intersection with plane 30 ′, as described elsewhere herein.
- FIG. 2A depicts the geometry with which location (x,z) of the intercept point between a user's finger or object 120R and plane 30 may be determined using triangulation.
- FIG. 2A and FIG. 2B may be used to describe analysis of the various embodiments shown in FIGS. 1A-1C .
- triangulation helps determine the shape of surfaces in a field of view of interest by geometric analysis of triangles formed by the projection rays, e.g., R 1 , R 2 of two optical systems, e.g., OS1 20 , OS2 60 .
- a baseline B represents the known length of the line that connects the centers of projection of the two optical systems, OS1, OS2.
- a triangle may be defined by the location of the point and by locations of OS1, and OS2. The three sides of the triangle are B, R 1 , and R 2 .
- OS1 and OS2 can determine the angular distance of the triangle from a reference plane, as well as the angles α1 and α2 formed by the projection rays that connect the surface point with the centers of projection of the two optical systems. Angles α1 and α2 and baseline B completely determine the shape of the triangle. Simple trigonometry can be used to yield the distance to the surface point (x,z), as well as the length of projection ray R1 and/or R2.
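- As an illustration of that trigonometry, the sketch below places OS1 at the origin with the baseline along +x (a frame chosen here for convenience, not dictated by the patent) and recovers the ray lengths and the intercept (x,z) from baseline B and the two ray angles via the law of sines:
```python
import math

def triangulate(baseline_b, alpha1_deg, alpha2_deg):
    # alpha1, alpha2: angles the projection rays from OS1 and OS2 make with
    # the baseline; the third (apex) angle sits at the surface point
    a1 = math.radians(alpha1_deg)
    a2 = math.radians(alpha2_deg)
    apex = math.pi - a1 - a2
    r1 = baseline_b * math.sin(a2) / math.sin(apex)   # ray length from OS1
    r2 = baseline_b * math.sin(a1) / math.sin(apex)   # ray length from OS2
    x = r1 * math.cos(a1)
    z = r1 * math.sin(a1)
    return (x, z), r1, r2

# e.g. a 10 cm baseline with rays at 60 deg and 50 deg from the baseline
(point, r1, r2) = triangulate(10.0, 60.0, 50.0)
```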
- FIG. 2B depicts a structured-light embodiment in which the first optical system is bifurcate: one portion OS1-A 20 -A is a light emitter disposed distance B from OS2 and from the second portion OS1-B 20 -B, a light reflecting device such as a mirror. An incoming fan beam generated by OS1-A is deflected by mirror 20 -B to form the plane 30 .
- mirror 20-B is inclined about 45° relative to the horizontal plane, and deflection is from a substantially vertical plane to a substantially horizontal plane.
- OS2 60 will be a camera aimed at angle θ generally toward the field of view of interest, namely where a user's finger or stylus will be to “use” a virtual input device disposed beneath fan plane 30.
- Triangulation preferably uses a standard camera with a planar sensor as OS2 60 .
- the nature of OS1 20 distinguishes between two rather broad classes of triangulation.
- in a structured-light system, OS1 20 is typically a laser or the like whose beam may be shaped as a single line that is moved to project a moving point onto a surface. Alternatively the laser beam may be planar and moved to project a planar curve.
- another class of triangulation system may be termed passive triangulation in which a camera is used as OS1 20 .
- Structured-light systems tend to be more complex to build and consume more operating power, due to the need to project a plane of light. Passive systems are less expensive, and consume less power.
- passive system must solve the so-called correspondence problem, e.g., to determine which pairs of points in the two images are projections of the same point in the real world.
- passive non-structured-light triangulation embodiments may be used, according to the present invention.
- whether system 10 is implemented as a structured-light system in which OS1 actively emits light and OS2 is a camera, or as a passive system in which OS1 and OS2 are both cameras, information from OS2 and OS1 will be coupled to a processing unit, e.g., 70, that can determine what events are occurring.
- when an object such as 120R intersects the projection plane 30 associated with OS1 20, the intersection is detectable.
- in a structured-light embodiment, OS1 emits optical energy and OS2 is typically a camera; in a passive embodiment, the intersection is seen by OS1, a camera, and also by OS2, a camera.
- System 10 preferably includes a computing system 70 that receives data from OS1, OS2 and uses geometry to determine the plane intersection position (x,z) from reflected image coordinates in a structured-light embodiment, or from camera image coordinates in a passive system.
- touch events are detected and declared when OS1 recognizes the intersection of plane 30 with an intruding object such as 120 R.
- OS2 camera coordinates are transformed into touch-area (x-axis, z-axis) coordinates to locate the (x,z) coordinate position of the event within the area of interest in plane 30 .
- processor unit 70 executes algorithms to compute intersect positions in plane 30 from image coordinates of points visible to OS2.
- a passive light system must distinguish intruding objects from background in images from OS1 and OS2. Where system 10 is a passive light system, correspondence needs to be established between the images from camera OS1 and from camera OS2. Where system 10 is a structured-light system, it is desired to minimize interference from ambient light.
- This homography matrix may be found using a calibration procedure. Since the sensor rests on the surface, sensor position relative to the surface is constant, and the calibration procedure need be executed only once. For calibration, a grid of known pitch is placed on the flat surface on which the sensor is resting. The coordinates p_i of the image points corresponding to the grid vertices P_i are measured in the image. A direct linear transform (DLT) algorithm can be used to determine the homography matrix H. Such DLT transform is known in the art; see for example Richard Hartley and Andrew Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, Cambridge, UK, 2000.
- the surface point P corresponding to a point p in the image is immediately computed by the matrix-vector multiplication above. Preferably such computations are executed by system 70 .
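- A compact sketch of such a calibration and mapping in Python/NumPy, using a standard DLT formulation (at least four grid correspondences are assumed; the function names are illustrative):
```python
import numpy as np

def dlt_homography(image_pts, surface_pts):
    # build the DLT system from correspondences p_i = (u, v) -> P_i = (x, z)
    A = []
    for (u, v), (x, z) in zip(image_pts, surface_pts):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -z * u, -z * v, -z])
    # the homography is the null vector of A, i.e. the right singular
    # vector associated with the smallest singular value
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def image_to_surface(H, u, v):
    # the matrix-vector multiplication referred to above, then dehomogenize
    x, z, w = H @ np.array([u, v, 1.0])
    return x / w, z / w
```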
- S_b(x, z) can be a three-vector with the averages, medians, or other statistics of R_bi(x, z), G_bi(x, z), B_bi(x, z) at pixel position (x, z) over all background images I_1, . . . , I_n, possibly normalized to de-emphasize variations in image brightness.
- This second summary is a single vector, rather than an image of vectors as for S_b(x, z).
- s_t does not depend on the pixel position (x, z).
- This new summary can be computed, for instance, by asking a user to place finger tips or stylus in the sensitive area of the surface, and recording values only at pixel positions (x, z) whose color is very different from the background summary s_b(x, z) at (x, z), and computing statistics over all values of j, x, z.
- a particular pixel at (x, z) is attributed to either tip or background by a suitable discrimination rule.
- a distance d(c1, c2) can be defined between three-vectors (Euclidean distance is one example), and pixels are assigned based on the following exemplary rule:
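- The exemplary rule itself is not reproduced in this excerpt; one plausible reading (an assumption, shown only for illustration) is a nearest-summary rule, sketched below in Python/NumPy:
```python
import numpy as np

def euclidean(c1, c2):
    # one possible choice for the distance d(c1, c2) between three-vectors
    return float(np.linalg.norm(np.asarray(c1, float) - np.asarray(c2, float)))

def classify_pixel(c, s_b_xz, s_t, d=euclidean):
    # attribute the colour c observed at (x, z) to the fingertip if it is
    # closer to the tip summary s_t than to the background summary s_b(x, z)
    return "tip" if d(c, s_t) < d(c, s_b_xz) else "background"
```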
- OS2 needs to distinguish between ambient light and light produced by the line generator and reflected back by an intruding object.
- OS1 emits energy in a region of the light spectrum where ambient light has little power, for instance, in the near infrared.
- An infrared filter on camera OS2 can ensure that the light detected by the OS2 sensor is primarily reflected from the object (e.g., 120 R) into the lens of camera OS2.
- OS1 operates in the visible part of the spectrum, but is substantially brighter than ambient light. Although this can be achieved in principle with any color of the light source, for indoor applications it may be useful to use a blue-green light source for OS1 (500 nm to 550 nm) because standard fluorescent lights have relatively low emission in this band.
- OS2 will include a matched filter to ensure that responses to other wavelengths are substantially attenuated.
- the term d(c(x,z), s_t) will be significantly non-zero, which in turn yields a substantially non-zero value for C(x,z).
- This methodology achieves the desired goal of identifying essentially only the object tip pixels illuminated by laser (or other emitter) OS1.
- This method can be varied to use light emitters of different colors, to use other distance definitions for the distance d, and to use different summaries S_b(x, z) and s_t.
- in FIG. 1A, if device 80 is a compact system such as a PDA or cell telephone, it becomes especially desirable to reduce the size needed to implement the present invention.
- a smaller overall form factor can result if OS2 is inclined at some angle θ, as shown in FIGS. 1A-1C, 2A, 2B, with respect to plane 30 or surface 40. But as angle θ decreases, camera OS2 sees plane 30 from a shallower angle.
- the effective area subtended by the field of view decreases. The result is to decrease effective OS2 resolution and thus to decrease accuracy of z-depth measurements as shown in FIG. 3A , where L denotes a camera lens associated with OS2, whose plane of pixel detectors is shown as a straight line labeled OS2.
- it is apparent that different points on the touch sensitive area of interest on plane 30 are at different distances from lens L of camera OS2. This means that one cannot focus the entire sensitive area of interest precisely if lens L is positioned as shown in FIG. 3A or in FIG. 3B. While closing the camera iris could increase the depth of field, resultant images would become dimmer, and image signal-to-noise ratio would be degraded.
- the configuration of FIG. 3C may be employed, in which lens L is repositioned relative to FIG. 3B.
- touch surface 30 , the camera OS2 sensor, and camera lens L are said to satisfy the so-called Scheimpflug condition, in which their respective planes intersect along a common line, a line that is at infinity in FIG. 3C .
- a description of the Scheimpflug condition may be found in The Optical Society of America, Handbook of Optics, Michael Bass, Editor in Chief, McGraw-Hill, Inc., 1995.
- as shown in FIG. 3C, when the relevant optical system satisfies this condition, all points on touch surface 30 will be in focus.
- FIG. 3D depicts one such intermediate configuration, in which lens L is purposely tilted slightly away from a Scheimpflug-satisfying orientation with respect to planes of OS2 and 30 .
- FIG. 3E depicts another alternative intermediate configuration, one in which the Scheimpflug condition is exactly verified, but the camera sensor OS2 is tilted away from horizontal.
- the configuration of FIG. 3E can achieve exact focus but with somewhat lower image resolution and more distortion than the configuration of FIG. 3C .
- FIG. 4 is a block diagram depicting operative portions of processor unit 70 within system 10 , which processor unit preferably carries out the various triangulation and other calculations described herein to sense and identify (x,z) intercepts with the plane of interest 30 .
- information from OS1 20 and OS2 60 is input respectively to pixel maps 200-1, 200-2.
- the OS1 and OS2 inputs refer to a stream of frames of digitized images that are generated by optical system 1 (20) and optical system 2 (60) in planar range sensor system 10, according to the present invention.
- each optical system preferably generates at least about 30 frames per second (fps).
- Pixel map modules 200 - 1 , 200 - 2 construct digital frames from OS1 and OS2 in memory associated with computational unit 70 .
- Synchronizer module 210 ensures that the two optical systems produce frames of digitized images at approximately the same time. If desired, a double-buffering system may be implemented to permit construction of one frame while the previous frame (in time) is being processed by the other modules.
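- A minimal double-buffering sketch of the kind alluded to (the class and its methods are illustrative, not part of the patent):
```python
import threading

class DoubleBuffer:
    def __init__(self):
        self._buffers = [None, None]   # two pixel-map buffers
        self._write = 0                # index currently being filled
        self._lock = threading.Lock()

    def publish(self, frame):
        # fill the write buffer, then atomically swap roles so readers
        # always see the most recently completed frame
        self._buffers[self._write] = frame
        with self._lock:
            self._write ^= 1

    def latest(self):
        # return the last completed frame while the other buffer is filled
        with self._lock:
            return self._buffers[self._write ^ 1]
```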
- Touch detection module 220 detects a touch (e.g., intersection of a user finger or stylus with the optical plane sensed by OS1) when the outline of a fingertip or stylus appears in a selected row of the frame.
- tip detection module 230 When a touch is detected, tip detection module 230 records the outline of the corresponding fingertip into the appropriate pixel map, 200 - 1 or 200 - 2 .
- where OS1 is a light beam generator, touch detection will use input from OS2 rather than from OS1.
- Touch position module 240 uses tip pixel coordinates from tip detection module 230 at the time a touch is reported from touch detection module 220 to find the (x-z) coordinates of the touch on the touch surface.
- a touch is tantamount to penetration of plane 30 associated with an optical emitter OS1 in a structured-light embodiment, or in a passive light embodiment, associated with a plane of view of a camera OS1. Mathematical methods to convert the pixel coordinates to the X-Z touch position are described elsewhere herein.
- Key identification module 260 uses the X-Z position of a touch and maps the position to a key identification using a keyboard layout table 250 preferably stored in memory associated with computation unit 70 .
- Keyboard layout table 250 typically defines the top/bottom/left and right coordinates of each key relative to a zero origin.
- a function of key identification module 260 is to perform a search of table 250 and determine which key contains the (x,z) coordinates of the touch point.
- translation module 270 maps the key to a predetermined KEYCODE value.
- the KEYCODE value is output or passed to an application being executed on the companion device or system 80 that is waiting to receive a notification of a keystroke event.
- the application under execution interprets the keystroke event and assigns a meaning to it. For instance, a text input application uses the value to determine what symbol was typed. An electronic piano application determines what musical note was pressed and plays that note, etc.
- the X-Z touch coordinates can be passed directly to application 280 .
- Application 280 could use the coordinate data to control the position of a cursor on a display in a virtual mouse or virtual trackball embodiment, or to control a source of digital ink whose locus is shown on a display for a drawing or hand-writing type application in a virtual pen or virtual stylus embodiment.
- FIG. 5A is a simplified view of system 10 in which virtual device 50 is now a control with five regions, and in which the companion device 80 , 90 includes a monitor.
- companion device 80 or 90 is shown with a display 150 that may include icons 140, one of which is surrounded by a cursor 310 that a user can move using virtual device 50′, here a virtual trackball or mouse.
- if one virtual region is pressed, the cursor should move to the right, e.g., to “select” the icon of a loaf of bread, and if virtual region 300-4 is pressed, the cursor should move towards the bottom of the display on device 80, 90. If the user presses the fifth region 300-5, a “thumbs up” region, companion device 80, 90 knows that the user-selection is now complete. In FIG. 5A, if the user now presses region 300-5, the “hotdog” icon is selected.
- the icons might be various destinations, and device 80 or 90 could indicate routes, schedules, and fares to the destinations, and could even dispense tickets for use on a bus, a subway, an airline, a boat, etc.
- a user could, for example, press two regions of input device 50 ′ representing trip originating point and trip destination point, whereupon system 10 could cause a display of appropriate transportation vehicles, schedules, fares, etc. to be displayed and, if desired, printed out.
- information generated by system 10 may simply be raw (x,z) coordinates that a software application executed by a companion device may use to reposition a cursor or other information on a display.
- virtual device 50′ is passive; its outline may be printed or painted onto an underlying work surface, or perhaps its outline can be projected by system 10.
- the various regions of interest in virtual device 50 may be identified in terms of coordinates relative to the x-z plane.
- touch position module 240 determines the (x,z) coordinates of the touch point 110 .
- touch point 110 is within “B” region 300 - 4 .
- Key identification module 260 uses the keyboard layout 250 information, in this example as shown in Table 1, to determine where in the relevant (x,z) plane the touch point coordinates occur.
- touch coordinates (x,z) are (1.5,0.5).
- a search routine preferably stored in memory associated with unit 70 (see FIG. 1A) and executed by unit 70 determines that 1≦x≦2, and −1≦z≦1.
- the key identification module will determine that touch point 110 falls within entry B.
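- The lookup itself can be sketched as below; the bounds are illustrative stand-ins for Table 1 (which is not reproduced here), chosen only so the worked example (1.5, 0.5) → “B” holds:
```python
# hypothetical excerpt of keyboard layout table 250: (key, x_min, x_max, z_min, z_max)
LAYOUT = [
    ("A", 0.0, 1.0, -1.0, 1.0),
    ("B", 1.0, 2.0, -1.0, 1.0),
    ("C", 2.0, 3.0, -1.0, 1.0),
]

def identify_key(x, z, layout=LAYOUT):
    # search the table for the entry whose rectangle contains the touch point
    for key, x_min, x_max, z_min, z_max in layout:
        if x_min <= x <= x_max and z_min <= z <= z_max:
            return key
    return None

assert identify_key(1.5, 0.5) == "B"
```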
- companion device 80 and 90 receives data from system 10 advising that region B has been touched.
- Processor unit 70 in system 10 can cause the companion device to receive such other information as may be required to perform the task associated with the event, for example to move the cursor downward on the display.
- FIG. 5B depicts an embodiment of system 10 similar to that shown in FIG. 1A .
- the virtual input device 50 is a computer keyboard and the companion device 80 , 90 is a mobile transceiver, a cellular telephone for example.
- system 10 could in fact be implemented within device 80 , 90 .
- OS1 might emit fan-beam 30 from a lower portion of device 80 , 90
- OS2 might be disposed in an upper portion of the same device.
- the virtual input device 50 could, if desired, be projected optically from device 80 , 90 .
- virtual input device 50 might be printed on a foldable substrate, e.g., plastic, paper, etc.
- the location of virtual input device 50 in front of device 80 , 90 would be such that OS1 can emit a fan-beam 30 encompassing the virtual input device, and OS2 can detect intersection 110 of an object, e.g., a user's finger or cursor, etc., with a location in the fan-beam overlying any region of interest in virtual input device 50 .
- keyboard layout table 250 will have at least one entry for each virtual key, e.g., “1”, “2”, . . . “Q”, “W”, . . . “SHIFT” defined on virtual input device 50 .
- an analysis similar to that described for FIG. 5A is carried out, preferably by unit 70, and the relevant virtual key that underlies touch point 110 can be identified.
- the relevant key is “I”, which letter “I” is shown on display 150 as part of e-mail message text 140 being input into cellular telephone 80 , 90 by a portion of the user's hand 120 R (or by a stylus).
- the ability to rapidly touch-type messages into cellular telephone 80, 90 using virtual keyboard 50, as contrasted with laboriously inputting messages using the cellular telephone keypad, will be appreciated.
- in FIG. 5C, an embodiment of system 10 is shown in which the workspace 40 is a vertical wall, perhaps in a store or mall, and virtual input device 50 is also vertically disposed.
- virtual input device 50 is shown with several icons and/or words 320 that when touched by a user's hand 120 , e.g., at touch point 110 , will cause an appropriate text and/or graphic image 140 to appear on display 150 in companion device 80 , 90 .
- icons 320 may represent locations or departments in a store, and display 150 will interactively provide further information in response to user touching of an icon region.
- the various icons may represent entire stores, or department or regions within a store, etc.
- processor unit 70 within system 10 executes software, also stored within or loadable into processor unit 70 , to determine what icon or text portion of virtual input device 50 has been touched, and what commands and/or data should be communicated to host system 80 , 90 .
- the virtual input device 50 may be back projected from within wall 40. Understandably, if the layout and location of the various icons 320 change, mapping information stored within unit 70 in system 10 will also be changed. The ability to rapidly change the nature and content of the virtual input device, without necessarily being locked into having icons of a fixed size at a fixed location, can be very useful.
- buttons may indeed be fixed in size and location on device 50 , and their touching by a user may be used to select a re-mapping of what is shown on input device 50 , and what is mapped by software within unit 70 . It is understood that in addition to simply displaying information, which may include advertisements, companion device 80 , 90 may be used to issue promotional coupons 330 for users.
- operation of system 10 depends upon whether system 10 is a structured-light system or a passive light system.
- in a structured-light system, OS1 may be a line-generating laser system, whereas in a passive light system OS1 may be a digital camera.
- Each system defines a plane 30 that when intercepted by an object such as 120 R will define a touch event whose (x,z) coordinates are to then be determined.
- the present invention can decide what input or command was intended by the person using the system. Such input or command can be passed to a companion device, which device may in fact also house the present invention.
- system 10 is a passive light system
- a touch event is registered when the outline of a fingertip appears in a selected frame row of OS1, a digital camera.
- the (x,z) plane 30 location of the touch is determined by the pixel position of the corresponding object tip (e.g., 120 R) in OS2, when a touch is detected in OS1.
- the range or distance from camera OS1 to the touch point is an affine function of the number of pixels from the “near” end of the pixel frame.
- OS1 will typically be a laser line generator, and OS2 will be a camera primarily sensitive to the wavelength of the optical energy emitted by OS1. As noted, this can be achieved by installing a narrowband light filter on OS2 such that only wavelengths corresponding to that emitted by OS1 will pass.
- OS2 can be understood to include a shutter that opens and closes in synchronism with the pulsed output of OS1, e.g., OS2 can see optical energy only at times when OS1 emits optical energy.
- OS2 preferably will only detect objects that intercept plane 30 and thus reflect energy emitted by OS1.
- touch sense detection and range calculation are carried out by system 10 .
- a touch event is registered when the outline of an object, e.g., fingertip 120 R, appears within the viewing range of OS2.
- range distance may be calculated as an affine function of the number of pixels from the “near” end of pixel frame.
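- As an illustration only, such an affine mapping could be calibrated once from two touches at known distances; the following minimal Python sketch uses hypothetical calibration values, not figures taken from the present disclosure.

```python
# Minimal sketch: range modeled as an affine function of the pixel count,
# calibrated once from two reference touches at known distances (values are examples).
def calibrate_affine(pixels_near, range_near, pixels_far, range_far):
    """Return (a, b) such that range = a + b * pixels."""
    b = (range_far - range_near) / (pixels_far - pixels_near)
    a = range_near - b * pixels_near
    return a, b

def pixels_to_range(num_pixels, a, b):
    """Map a pixel offset from the 'near' end of the frame to a range estimate."""
    return a + b * num_pixels

a, b = calibrate_affine(20, 5.0, 410, 30.0)   # hypothetical calibration touches (cm)
print(pixels_to_range(215, a, b))             # interpolated range for 215 pixels
```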
- the virtual input device is a keyboard 50 , such as depicted in FIG. 1A
- system 10 is expected to output information comprising at least the scan code corresponding to the virtual key that the user has “touched” on virtual keyboard 50 .
- the upper portion, e.g., the row with virtual keys “ESC”, “F1”, “F2”, etc.
- camera OS2 60 has a lens with a focal length of about 4 mm, and a camera sensor arrayed with 480 rows and 640 columns.
- the homography H that maps points in the image to points on the virtual device depends on the tilt of camera OS2 60 .
- the above matrix preferably need be determined only once during a calibration procedure, described elsewhere herein.
- Before the user's finger 120L (or stylus) intersects the plane of sensor OS1 20, the latter detects no light, and sees an image made of black pixels, as shown in vignette 340 at the figure bottom. However, as soon as the user-object intersects optical plane 30, the intersection event or interface becomes visible to OS1 20. OS1 20 now generates an image similar to the one depicted in vignette 350 at the bottom of FIG. 6. When the downward moving tip 110 of the user-object (e.g., finger 120L) reaches surface 40, more of the finger becomes visible. The finger contour may now be determined, e.g., by unit 70 using edge detection. Such determination is depicted at the bottom of FIG. 6 as “TOUCH” event vignette 360. Touch detection module 220 in FIG. 4 then determines that the user-object has touched surface 40, and informs tip detection module 230 of this occurrence.
- TOUCH event vignette 360
- the virtual ‘T’ key is found in the second row of virtual keyboard 50 , and is therefore relatively close to sensor OS1 20 .
- this situation corresponds to the fingertip in position 110 ′.
- the projection of the bottom of the fingertip position 110 ′ onto the sensor of optical system OS2 60 is relatively close to the top of the image.
- the edge of the fingertip image thus produced is similar to that shown in vignette 370 at the top of FIG. 6.
- the two gray squares shown represent the bottom edge pixels of the fingertip.
- tip detection module 230 in FIG. 4 runs an edge detection algorithm, and thereby finds the bottom center of the “blob” representing the generalized region of contact to be at image row 65 and column 492 .
- the homogeneous image coordinate vector p is then multiplied by the homography matrix H to yield the coordinates P of the user fingertip in the frame of reference of the virtual keyboard:
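- The multiplication itself does not survive in the text above; the following minimal Python sketch illustrates the step, using a purely hypothetical calibration matrix H in place of the actual matrix determined during calibration (which depends on the tilt of camera OS2 60).

```python
import numpy as np

# Hypothetical calibration matrix; the actual H is determined once at calibration.
H = np.array([[2.1, 0.15, 30.0],
              [0.0, 3.40, 28.0],
              [0.0, 0.001, 1.0]])

def image_to_surface(col, row, H):
    """Map an image pixel (col, row) to virtual-keyboard coordinates (X, Z)."""
    p = np.array([col, row, 1.0])     # homogeneous image coordinate vector p
    P = H @ p                         # P = Hp
    return P[0] / P[2], P[1] / P[2]   # dehomogenize

# e.g., the fingertip found at image row 65, column 492
X, Z = image_to_surface(492, 65, H)
```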
- Key identification module 260 in FIG. 4 searches keyboard layout 250 for a key such that xmin ≤ 1153 ≤ xmax and ymin ≤ 249 ≤ ymax.
- key identification module 260 therefore determines that a user-object is touching virtual key “T” on virtual keyboard 50 , and informs translation module 270 of this occurrence.
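- A minimal Python sketch of such a bounds search follows; the key rectangles listed are illustrative placeholders only, not the actual entries of keyboard layout table 250.

```python
# Illustrative layout entries; the real keyboard layout table 250 holds one
# entry per virtual key with its top/bottom/left/right bounds.
KEYBOARD_LAYOUT = [
    {"key": "T", "x_min": 1100, "x_max": 1200, "z_min": 200, "z_max": 300},
    {"key": "Y", "x_min": 1200, "x_max": 1300, "z_min": 200, "z_max": 300},
]

def identify_key(x, z, layout=KEYBOARD_LAYOUT):
    """Return the virtual key whose rectangle contains touch point (x, z), if any."""
    for entry in layout:
        if entry["x_min"] <= x <= entry["x_max"] and entry["z_min"] <= z <= entry["z_max"]:
            return entry["key"]
    return None

print(identify_key(1153, 249))   # -> "T" under the illustrative layout above
```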
- the occurrence need not necessarily be a keystroke.
- the user-object or finger may have earlier contacted the “T” key and may have remained in touch contact with the key thereafter. In such a case, no keystroke event should be communicated to application 280 running on the companion device 80 or 90.
- Key translation module 270 preferably stores the up-state or down-state of each key internally. This module determines at every frame whether any key has changed state. In the above example, if the key “T” is found to be in the down-state in the current frame but was in the up-state in the previous frame, translation module 270 sends a KEYCODE message to application 280 .
- the KEYCODE will include a ‘KEY DOWN’ event identifier, along with a ‘KEY ID’ tag that identifies the “T” key, and thereby informs application 280 that the “T” key has just been “pressed” by the user-object.
- the KEYCODE would include a ‘KEY HELD’ event identifier, together with the ‘KEY ID’ associated with the “T” key. Sending the ‘KEY HELD’ event at each frame (excepting the first frame) in which the key is in the down-state frees application 280 from having to maintain any state about the keys.
- translation module 270 sends a KEYCODE with a ‘KEY UP’ event identifier, again with a ‘KEY ID’ tag identifying the “T” key, informing application 280 that the “T” key was just “released” by the user-object.
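- A minimal Python sketch of this per-frame state tracking follows; the event names and the send() callback are illustrative stand-ins for the KEYCODE messages described above.

```python
class KeyTranslator:
    """Sketch of the per-frame up-state/down-state logic attributed to translation module 270."""

    def __init__(self, send):
        self.down = set()   # keys currently in the down-state
        self.send = send    # callback delivering KEYCODE messages to application 280

    def update(self, keys_touched_this_frame):
        for key in keys_touched_this_frame:
            if key not in self.down:
                self.down.add(key)
                self.send(("KEY DOWN", key))   # first frame in the down-state
            else:
                self.send(("KEY HELD", key))   # still down on subsequent frames
        for key in list(self.down - set(keys_touched_this_frame)):
            self.down.remove(key)
            self.send(("KEY UP", key))         # key released this frame
```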
- frame images comprise only the tips of the user-object, e.g., fingertips.
- the various embodiments of the present invention use less than full three-dimensional image information acquired from within a relatively shallow volume defined slightly above a virtual input or virtual transfer device.
- a system implementing these embodiments can be relatively inexpensively fabricated and operated from a self-contained battery source. Indeed, the system could be constructed within common devices such as PDAs, cellular telephones, etc. to hasten the input or transfer of information from a user.
- undesired effects from ambient light may be reduced by selection of wavelengths in active light embodiments, by synchronization of camera(s) and light sources, and by signal processing techniques that acquire and subtract out images representing background noise.
Abstract
A system used with a virtual device inputs or transfers information to a companion device, and includes two optical systems OS1, OS2. In a structured-light embodiment, OS1 emits a fan beam plane of optical energy parallel to and above the virtual device. When a user-object penetrates the beam plane of interest, OS2 registers the event. Triangulation methods can locate the virtual contact, and transfer user-intended information to the companion system. In a non-structured active light embodiment, OS1 is preferably a digital camera whose field of view defines the plane of interest, which is illuminated by an active source of optical energy. Preferably the active source, OS1, and OS2 operate synchronously to reduce effects of ambient light. A non-structured passive light embodiment is similar except the source of optical energy is ambient light. A subtraction technique preferably enhances the signal/noise ratio. The companion device may in fact house the present invention.
Description
- Priority is claimed from applicants' co-pending U.S. provisional patent application Ser. No. 60/287,115 filed on 27 Apr. 2001 entitled “Input Methods Using Planar Range Sensors”, from co-pending U.S. Provisional patent application Ser. No. 60/272,120 filed on 27 Feb. 2001 entitled “Vertical Triangulation System for a Virtual Touch-Sensitive Surface”, and from co-pending U.S. provisional patent application Ser. No. 60/231,184 filed on 7 Sep. 2000 entitled “Application of Image Processing Techniques for a Virtual Keyboard System”. Further, this application is a continuation-in-part from co-pending U.S. patent application Ser. No. 09/502,499 filed on 11 Feb. 2000 entitled “Method And Apparatus for Entering Data Using A Virtual Input Device”. Each of said applications is incorporated herein by reference.
- The invention relates generally to sensing proximity of a stylus or user finger relative to a device to input or transfer commands and/or data to a system, and more particularly to such sensing relative to a virtual device used to input or transfer commands and/or data and/or other information to a system.
- It is often desirable to use virtual input devices to input commands and/or data and/or transfer other information to electronic systems, for example a computer system, a musical instrument, even telephones. For example, although computers can now be implemented in almost pocket-size, inputting data or commands on a mini-keyboard can be time consuming and error prone. While many cellular telephones can today handle e-mail communication, actually inputting messages using the small telephone touch pad can be difficult. For example, a PDA has much of the functionality of a computer but suffers from a tiny or non-existent keyboard. If a system could be used to determine when a user's fingers or stylus contacted a virtual keyboard, and what fingers contacted what virtual keys thereon, the output of the system could perhaps be input to the PDA in lieu of keyboard information. (The terms “finger” or “fingers”, and “stylus” are used interchangeably herein.) In this example a virtual keyboard might be a piece of paper, perhaps that unfolds to the size of a keyboard, with keys printed thereon, to guide the user's hands. It is understood that the virtual keyboard or other input device is simply a work surface and has no sensors or mechanical or electronic components. The paper and keys would not actually input information, but the interaction or interface between the user's fingers and portions of the paper, or if not paper, portions of a work surface, whereon keys would exist, could be used to input information to the PDA. A similar virtual device and system might be useful to input e-mail to a cellular telephone. A virtual piano-type keyboard might be used to play a real musical instrument. The challenge is how to detect or sense where the user's fingers or a stylus are relative to the virtual device.
- U.S. Pat. No. 5,767,848 to Korth (1998) entitled “Method and Device For Optical Input of Commands or Data” attempts to implement virtual devices using a two-dimensional TV video camera. Such optical systems rely upon luminance data and require a stable source of ambient light, but unfortunately luminance data can confuse an imaging system. For example, a user's finger in the image foreground may be indistinguishable from regions of the background. Further, shadows and other image-blocking phenomena resulting from a user's hands obstructing the virtual device would seem to make implementing a Korth system somewhat imprecise in operation. Korth would also require examination of the contour of a user's fingers, finger position relative to the virtual device, and a determination of finger movement.
- U.S. Pat. No.______ to Bamji et al. (2001) entitled “CMOS-Compatible Three-Dimensional Image Sensor IC”, application Ser. No. 09/406,059, filed 22 Sep. 1999, discloses a sophisticated three-dimensional imaging system usable with virtual devices to input commands and data to electronic systems. In that patent, various range finding systems were disclosed, which systems could be used to determine the interface between a user's fingertip and a virtual input device, e.g., a keyboard. Imaging was determined in three-dimensions using time-of-flight measurements. A light source emitted optical energy towards a target object, e.g., a virtual device, and energy reflected by portions of the object within the imaging path was detected by an array of photodiodes. Using various sophisticated techniques, the actual time-of-flight between emission of the optical energy and its detection by the photodiode array was determined. This measurement permitted calculating the vector distance to the point on the target object in three-dimensions, e.g., (x,y,z). The described system examined reflected emitted energy, and could function without ambient light. If for example the target object were a layout of a computer keyboard, perhaps a piece of paper with printed keys thereon, the system could determine which user finger touched what portion of the target, e.g., which virtual key, in what order. Of course the piece of paper would be optional and would be used to guide the user's fingers.
- Three-dimensional data obtained with the Bamji invention could be software-processed to localize user fingers as they come in contact with a touch surface, e.g., a virtual input device. The software could identify finger contact with a location on the surface as a request to input a keyboard event to an application executed by an associated electronic device or system (e.g., a computer, PDA, cell phone, kiosk device, point of sale device, etc.). While the Bamji system worked and could be used to input commands and/or data to a computer system using three-dimensional imaging to analyze the interface of a user's fingers and a virtual input device, a less complex and perhaps less sophisticated system is desirable. Like the Bamji system, such a new system should be relatively inexpensive to mass produce and should consume relatively little operating power such that battery operation is feasible.
- The present invention provides such a system.
- The present invention localizes interaction between a user finger or stylus and a passive touch surface (e.g., virtual input device), defined above a work surface, using planar quasi-three-dimensional sensing. Quasi-three-dimensional sensing implies that determination of an interaction point can be made essentially in three dimensions, using as a reference a two-dimensional surface that is arbitrarily oriented in three-dimensional space. Once a touch has been detected, the invention localizes the touch region to determine where on a virtual input device the touching occurred, and what data or command keystroke, corresponding to the localized region that was touched, is to be generated in response to the touch. Alternatively, the virtual input device might include a virtual mouse or trackball. In such an embodiment, the present invention would detect and report coordinates of the point of contact with the virtual input device, which coordinates would be coupled to an application, perhaps to move a cursor on a display (in a virtual mouse or trackball implementation) and/or to lay so-called digital ink for a drawing or writing application (virtual pen or stylus implementation). In the various embodiments, triangulation analysis methods preferably are used to determine where user-object “contact” with the virtual input device occurs.
- In a so-called structured-light embodiment, the invention includes a first optical system (OS1) that generates a plane of optical energy defining a fanbeam of beam angle φ parallel to and a small stand-off distance ΔY above the work surface whereon the virtual input device may be defined. In this embodiment, the plane of interest is the plane of light produced by OS1, typically a laser or LED light generator. The two parallel planes may typically be horizontal, but they may be disposed vertically or at any other angle that may be convenient. The invention further includes a second optical system (OS2) that is responsive to optical energy of the same wavelength as emitted by OS1. Preferably OS2 is disposed above OS1 and angled with offset θ, relative to the fan-beam plane, toward the region where the virtual input device is defined. OS2 is responsive to energy emitted by OS1, but the wavelength of the optical energy need not be visible to humans. The invention may also be implemented using non-structured-light configurations that may be active or passive. In a passive triangulation embodiment, OS1 is a camera rather than an active source of optical energy, and OS2 is a camera responsive to the same optical energy as OS1, and preferably disposed as described above. In such embodiment, the plane of interest is the projection plane of a scan line of the OS1 camera. In a non-structured-light embodiment such as an active triangulation embodiment, OS1 and OS2 are cameras and the invention further includes an active light source that emits optical energy having wavelengths to which OS1 and OS2 respond. Optionally in such embodiment, OS1 and OS2 can each include a shutter mechanism synchronized to output from the active light source, such that shutters in OS1 and OS2 are open when optical energy is emitted, and are otherwise closed. An advantage of a non-structured light configuration using two cameras is that bumps or irregularities in the work surface are better tolerated. The plane defined by OS1 may be selected by choosing an appropriate row of OS1 sensing pixel elements to conform to the highest y-dimension point (e.g., bump) of the work surface.
- In the structured-light embodiment, OS2 will not detect optical energy until an object, e.g., a user finger or stylus, begins to touch the work surface region whereon the virtual input device is defined. However, as soon as the object penetrates the plane of optical energy emitted by OS1, the portion of the finger or stylus intersecting the plane will be illuminated (visibly or invisibly to a user). OS2 senses the intersection with the plane of interest by detecting optical energy reflected towards OS2 by the illuminated object region. Essentially only one plane is of interest to the present invention, as determined by configuration of OS1, and all other planes definable in three-dimensional space parallel to the virtual input device can be ignored as irrelevant. Thus, a planar three-dimensional sensor system senses user interactions with a virtual input device occurring on the emitted fan-beam plane, and ignores any interactions on other planes.
- In this fashion, the present invention detects that an object has touched the virtual input device. Having sensed that a relevant touch-intersection is occurring, the invention then localizes in two-dimensions the location of the touch upon the plane of the virtual device. In the preferred implementation, localized events can include identifying which virtual keys on a virtual computer keyboard or musical keyboard are touched by the user. The user may touch more than one virtual key at a time, for example the “shift” key and another key. Note too that the time order of the touchings is determined by the present invention. Thus, if the user touches virtual keys for “shift” and “t”, and then for the letters “h” and then “e”, the present invention will recognize what is being input as “T” then “h” and then “e”, or “The”. It will be appreciated that the present invention does not rely upon ambient light, and thus can be fully operative even absent ambient light, assuming that the user knows the location of the virtual input device.
- Structured-light and/or non-structured light passive triangulation methods may be used to determine a point of contact (x,z) between a user's hand and the sense plane. Since the baseline distance B between OS1 and OS2 is known, a triangle is formed between OS1, OS2 and point (x,z), whose sides are B, and projection rays R1 and R2 from OS1, OS2 to (x,z). OS1 and OS2 allow determination of triangle angular distance from a reference plane, as well as angles α1 and α2 formed by the projection rays, and trigonometry yields distance z to the surface point (x,z), as well as projection ray lengths.
- A processor unit associated with the present invention executes software to identify each intersection of a user-controlled object with the virtual input device and determines therefrom the appropriate user-intended input data and/or command, preferably using triangulation analysis. The data and/or commands can then be output by the present invention as input to a device or system for which the virtual input device is used. If desired the present invention may be implemented within the companion device or system, especially for PDAs, cellular telephones, and other small form factor device or systems that often lack a large user input device such as a keyboard.
- Other features and advantages of the invention will appear from the following description in which the preferred embodiments have been set forth in detail, in conjunction with their accompanying drawings.
-
FIG. 1A depicts a planar quasi-three-dimensional detection structured-light system used to detect user input to a virtual input device, according to the present invention; -
FIG. 1B depicts a planar quasi-three-dimensional detection non-structured active light system used to detect user input to a virtual input device, according to the present invention; -
FIG. 1C depicts a planar quasi-three-dimensional detection non-structured passive light system used to detect user input to a virtual input device, according to the present invention; -
FIG. 2A depicts geometry associated with location determination using triangulation, according to the present invention; -
FIG. 2B depicts use of a spaced-apart optical emitter and reflector as a first optical system, according to the present invention; -
FIGS. 3A-3E depict design tradeoffs associated with varying orientations of OS2 sensor, OS2 lens, and detection plane upon effective field of view and image quality, according to the present invention; -
FIG. 4 is a block diagram depicting functions carried out by a processor unit in the exemplary system ofFIG. 1B , according to an embodiment of the present invention; -
FIG. 5A depicts an embodiment wherein the virtual device has five userselectable regions and the companion device is a monitor, according to the present invention; -
FIG. 5B depicts an embodiment wherein the virtual device is a computer keyboard and the companion device is a mobile transceiver, according to the present invention; -
FIG. 5C depicts an embodiment wherein the virtual device is mounted or projected on a wall and the companion device is a monitor, according to the present invention; -
FIG. 6 depicts planar range sensing, according to the present invention; and -
FIG. 7 depicts coordinate distance measurements used in an exemplary calculation of touch location for use in outputting corresponding information or data or command, according to the present invention. -
FIG. 1A depicts a preferred embodiment of a quasi-planar three-dimensional sensing system 10 comprising, in a structured-light system embodiment, a first optical system (OS1) 20 that emits a fan-beam plane 30 of optical energy parallel to a planar work surface 40 upon which there is defined a virtual input device 50 and/or 50′ and/or 50″. Preferably the fan-beam defines a fan angle φ, and is spaced apart from the work surface by a small stand-off distance ΔY. Any object (e.g., a user finger or stylus) attempting to touch the work surface must first contact the fan-beam and will thereby be illuminated (visibly or not visibly) with emitted optical energy. While fan-beam plane 30 and the work surface plane 40 are shown horizontally disposed in FIG. 1A, these two planes may be disposed vertically or indeed at any other angle that may be desired for a system. Note that, without limitation, work surface 40 could be a portion of a work desk, a table top, a portion of a vehicle, e.g., a tray in an airplane, a windshield or dashboard, a wall, a display including a projected image, or a display such as a CRT, an LCD, etc. As used herein, the term “plane” will be understood to include a subset of a full plane. For example, fan-beam plane 30 will be termed a plane, even though it has finite width and does not extend infinitely in all directions.
- By “virtual input device” it is meant that an image of an input device may be present on work surface 40, perhaps by placing a paper bearing a printed image, or perhaps system 10 projects a visible image of the input device onto the work surface, or there literally may be no image whatsoever visible upon work surface 40. As such, virtual input device 50, 50′, 50″ is entirely passive and includes no sensors or mechanical or electronic components.
- In the example of FIG. 1A, virtual input device 50 is a computer-type keyboard that may be full sized or scaled up or down from an actual sized keyboard. If desired the virtual input device may comprise or include a virtual trackball 50′ and/or a virtual touchpad 50″. When system 10 is used with a virtual keyboard input device 50, or virtual trackball 50′ or virtual touchpad 50″, a fan angle φ of about 50° to 90° and preferably about 90° will ensure that fan beam 30 encompasses the entire virtual input device at distances commonly used. Further, for such a virtual input device, a stand-off distance ΔY of up to a few mm works well, preferably about 1 mm.
- System 10 further includes a second optical system (OS2) 60, typically a camera with a planar sensor, that is preferably spaced apart from and above OS1 20, and inclined toward work surface 40 and plane 30 at an angle θ of about 10° to about 90°, and preferably about 25°. System 10 further includes an electronic processing system 70 that, among other tasks, supervises OS1 and OS2. System 70 preferably includes at least a central processor unit (CPU) and associated memory that can include read-only memory (ROM) and random access memory (RAM).
- In FIG. 1A, system 10 elements OS1 20, OS2 60, and processor unit 70 are shown disposed on or in a device 80. Device 80 may be a stand-alone implementation of system 10 or may in fact be a system or device for which virtual input device 50 is used to input data or commands. In the latter case, device 80 may, without limitation, be a computer, a PDA (as shown in FIG. 1A), a cellular telephone, a musical instrument, etc. If system or device 80 is not being controlled by the virtual input device, the device 90 being so controlled can be coupled electrically to system/device 80 to receive data and/or commands input from virtual device 50. Where the virtual device is a trackball (or mouse) 50′ or touchpad 50″, user interaction with such virtual device can directly output raw information or data comprising touch coordinates (x,z) for use by device 80. For example, user interaction with virtual input device 50′ or 50″ might reposition a cursor 160 on a display 140, or otherwise alter an application executed by device 80, or lay down a locus of so-called digital ink 180 that follows what a user might “write” using a virtual mouse or trackball 50′, or using a stylus 120′ and a virtual touchpad 50″. System/device 90 can be electrically coupled to system 80 by a medium 100 that may without limitation include wire(s) or be wireless, or can be a network including the internet.
- In a structured-light embodiment, OS1 20 emits optical energy in fan-beam 30, parallel to the x-z plane. OS1 may include a laser line generator or an LED line generator, although other optical energy sources could be used to emit plane 30. A line generator OS1 is so called because it emits a plane of light that when intersected by a second plane illuminates what OS2 would view as a line on the second plane. For example if a cylindrical object intersected plane 30, OS2 would see the event as an illuminated portion of an elliptical arc whose aspect ratio would depend upon distance of OS2 above plane 30 and surface 40. Thus, excluding ambient light, detection by OS2 of an elliptical arc on plane 30 denotes a touching event, e.g., that an object such as 120R has contacted or penetrated plane 30. Although a variety of optical emitters may be used, a laser diode outputting perhaps 3 mW average power at a wavelength of between 300 nm to perhaps 1,000 nm could be used. While ambient light wavelengths (e.g., perhaps 350 nm to 700 nm) could be used, the effects of ambient light may be minimized without filtering or shutters if such wavelengths are avoided. Thus, wavelengths of about 600 nm (visible red) up to perhaps 1,000 nm (deep infrared) could be used. A laser diode outputting 850 nm wavelength optical energy would represent an economical emitter, although OS2 would preferably include a filter to reduce the effects of ambient light.
- While OS1 preferably is stationary in a structured light embodiment, it is understood that a fan-beam 30 could be generated by mechanically sweeping a single emitted line of optical energy to define the fan-beam plane 30. As shown in FIG. 2B, OS1 may in fact include an optical energy emitter 20-A that emits a fan beam, and a reflecting mirror 20-B that directs the fan beam 30 substantially parallel to surface 40. For purposes of the present invention, in a structured light embodiment, optical energy emitted by OS1 20 may be visible to humans or not visible. OS2 60 preferably includes a camera system responsive to optical energy of the wavelength emitted by OS1 20. By “responsive” it is meant that OS2 recognizes energy of the same wavelength emitted by OS1, and ideally will not recognize or respond to energy of substantially differing wavelength. For example, OS2 may include a filter system such that optical energy of wavelength other than that emitted by OS1 is not detected, for example a color filter.
- If desired, OS2 could be made responsive substantially solely to optical energy emitted from OS1 by synchronously switching OS1 and OS2 on and off at the same time, e.g., under control of unit 70. OS1 and OS2 preferably would include shutter mechanisms, depicted as elements 22, that would functionally open and close in synchronized fashion. For example, electronic processing system 70 could synchronously switch on OS1, OS2, or shutter mechanisms 22 for a time period t1 with a desired duty cycle, where t1 is perhaps in the range of about 0.1 ms to about 35 ms, and then switch off OS1 and OS2. If desired, OS1 could be operated at all times, where plane 30 is permitted to radiate only when shutter 22 in front of OS1 20 is open. In the various shutter configurations, the repetition rate of the synchronous switching is preferably in the range of 20 Hz to perhaps 300 Hz to promote an adequate rate of frame data acquisition. To conserve operating power and reduce computational overhead, a repetition rate of perhaps 30 Hz to 100 Hz represents an acceptable rate. Of course other devices and methods for ensuring that OS2 responds substantially only to optical energy emitted by OS1 may also be used. For ease of illustration shutters 22 are depicted as mechanical elements, but in practice the concept of shutters 22 is understood to include turning on and off light sources and cameras in any of a variety of ways.
- Note that there is no requirement that
work surface 40 be reflective or nonreflective with respect to the wavelength emitted by OS1 since the fan-beam or other emission of optical energy does not reach the surface per se. Note too that preferably the virtual input device is entirely passive. Sincedevice 50 is passive, it can be scaled to be smaller than a full-sized device, if necessary. Further, the cost of a passive virtual input device can be nil, especially if the “device” is simply a piece of paper bearing a printed graphic image of an actual input device. - In
FIG. 1A , assume initially that the user ofsystem 10 is not in close proximity tovirtual input device 50. In a structured-light embodiment, although OS1 may emit optical energy fan-beam plane 30, OS2 detects nothing because no object intersectsplane 30. Assume now that aportion 110 of a finger of a user's left orright hand work surface 40 whereon thevirtual input device 50 is defined. Alternatively,portion 110′ of a user-controlledstylus 120′ could be moved downward to touch a relevant portion ofwork surface 40. Within the context of the present invention, a touch is interpreted by software associated with the invention as a request to send a keyboard event to an application running on a companion device orsystem - In
FIG. 1A , as the user's finger moves downward and begins to intersectoptical energy plane 30, a portion of the finger tip facing OS1 will now reflectoptical energy 130. At least some reflectedoptical energy 130 will be detected by OS2, since the wavelength of the reflected energy is the same as the energy emitted by OS1, and OS2 is responsive to energy of such wavelength. Thus, planar quasi-three-dimensional sense system 10 detects optical energy reflected by the interaction of a user controlled object (e.g., a finger, a stylus, etc.) occurring at a plane of interest defined by fan-beam plane 30. Any interaction(s) that may occur on any other plane are deemed not relevant and may be ignored by the present invention. - Thus, until an object such as a portion of a user's hand or perhaps of a stylus intersects the
optical energy plane 30 emitted byOS1 20, there will be no reflectedoptical energy 130 forOS2 60 to detect. Under such conditions,system 10 knows that no user input is being made. However as soon as the optical energy plane is penetrated, the intersection of the penetrating object (e.g., fingertip, stylus tip, etc.) is detected byOS2 60, and the location (x,z) of the penetration can be determined byprocessor unit 70 associated withsystem 10. InFIG. 1A , if the user's left forefinger is touching the portion ofvirtual input device 50 defined as co-ordinate (x7, z3), then software associated with the invention can determine that the letter “t” has been “pressed”. Since no “shift key” is also being pressed, the pressed letter would be understood to be lower case “t”. - In the embodiment shown,
system 10 can generate and input tosystem system information 140 ondisplay 150, as the information is entered by the user onvirtual input device 50. If desired, anenlarged cursor region 160 could be implemented to provide additional visual input to aid the user who is inputting information. If desired,processor unit 70 could causesystem 80 and/or 90 to emit audible feedback to help the user, e.g., electronic keyclick sounds 170 corresponding with the “pressing” of a virtual key onvirtual input device 50. It is understood that ifsystem musical sounds 170 would be emitted, and virtual input-device 50 could instead have the configuration similar to a piano keyboard or keyboards associated with synthetic music generators. -
- FIG. 1B depicts a non-structured active light system 10, in which a camera 20′ in a first optical system OS1 defines a plane of interest 30′ that in essence replaces plane 30 defined by optical emitter OS1 in the embodiment of FIG. 1A. Camera 20′ OS1 preferably is similar to camera 60 OS2, which may be similar to camera 60 OS2 in the embodiment of FIG. 1A. For example, OS1 20′ may have a sensor array that comprises at least one line and preferably several lines of pixel detector elements. The embodiment of FIG. 1B is active in that one or more light sources 190, disposed intermediate OS1 20′ and OS2 60, generate optical energy of a wavelength that is detectable by camera OS1 20′ and by camera OS2 60. To reduce the effects of ambient light upon detection by cameras OS1 and OS2, preferably each camera and each optical energy emitter 190 operates in cooperation with a shutter mechanism, preferably synchronized, e.g., by unit 70. Thus, during the times that shutters 22 permit optical energy from emitter 190 to radiate towards the virtual input device, similar shutters 22 will permit cameras OS1 and OS2 to detect optical energy. The interaction of a user-object, e.g., 120L, with plane 30′ is detected by OS1 and by OS2. The location of the point of intersection is then calculated, e.g., using triangulation methods described later herein.
- In FIG. 1B, a bump or irregularity in the plane of work surface 40 is shown near the point of contact 110 with the user-object 120L. An advantage of the presence of second camera OS1 20′ is that the plane of interest 30′ may be selected, perhaps by unit 70, to lie just above the highest irregular portion of work surface 40. If irregularities were present in work surface 40 in the embodiment of FIG. 1A, it would be necessary to somehow reposition the laser plane 30 relative to the work surface. But in FIG. 1B, the effect of such repositioning is attained electronically simply by selecting an appropriate line of pixels from the detector array with OS1 20′.
FIG. 1B lends itself to various methods to improve the signal/noise ratio. For example,shutters 22 can permit cameras OC1 and OS2 to gather image data during a time thatemitters 190 are turned off, e.g., bycontrol unit 70. Any image data then acquired by OS1 and/or OS2 will represent background noise resulting from ambient light. (Again it is understood that to minimize effects of ambient light,emitters 190 and cameras OS1, OS2 preferably operate at a wavelength regime removed from that of ambient light.) Having acquired what might be termed a background noise signal, cameras OS1 and OS2 can now be operated normally and in synchronism with emitter(s) 190. Image data acquired by cameras OS1 and OS2 in synchronism with emitter(s) 190 will include actual data, e.g., user-object interface withplane 30′, plus any (undesired) effects due to ambient light. Processor unit 70 (or another unit) can then dynamically subtract the background noise signal from the actual data plus noise signal, to arrive at an actual data signal, thus enhancing the signal/noise ratio. -
- FIG. 1C depicts a non-structured passive embodiment of the present invention. System 10 in FIG. 1C is passive in that whatever source 195 of ambient light is present provides optical energy used during imaging. Similar to system 10 in FIG. 1B, OS1 is a camera 20′ that defines a plane of interest 30′, and OS2 is a camera 60. Typically plane 30′ will be defined a distance ΔY′ above work surface 40, typically a distance of a few mm. User-object interaction with plane 30′ is detected by OS1 and OS2, using optical energy from ambient light source 195. Triangulation methods may then be used to localize the point of interaction or intersection with plane 30′, as described elsewhere herein.
- FIG. 2A depicts the geometry with which location (x,z) of the intercept point between a user's finger or object 120R and plane 30 may be determined using triangulation. FIG. 2A and FIG. 2B may be used to describe analysis of the various embodiments shown in FIGS. 1A-1C.
OS1 20,OS2 60. A baseline B represents the known length of the line that connects the centers of projection of the two optical systems, OS1, OS2. For a point (x,z) on a visible surface in the field of view of interest, a triangle may be defined by the location of the point and by locations of OS1, and OS2. The three sides of the triangle are B, R1, and R2. OS1 and OS2 can determine the angular distance of the triangle from a reference plane, as well as the angles α1 and α2 formed by the projection rays that connect the surface point with the centers of projection of the two optical systems. Angles α1 and α2 and baseline B completely determine the shape of the triangle. Simple trigonometry can be used to yield the distance to the surface point (x,z), as well as length of projection ray R1 and/or R2. - It is not required that
OS1 20 be implemented as a single unit. For exampleFIG. 2B depicts a structured-light embodiment in which the first optical system is bifurcate: one portion OS1-A 20-A is a light emitter disposed distance B from OS2 and from the second portion OS1-B 20-B, a light reflecting device such as a mirror. An incoming fan beam generated by OS1-A is deflected by mirror 20-B to form theplane 30. In the orientation ofFIG. 2B , mirror 20-B is inclined about 45° relative to the horizontal plane, and deflection is from a substantially vertical plane to a substantial horizontal plane. InFIG. 2B and indeed in a passive light embodiment,OS2 60 will be a camera aimed at angle φ generally toward the field of view of interest, namely where a user's finger or stylus will be to “use” a virtual input device disposed beneathfan plane 30. - Triangulation according to the present invention preferably uses a standard camera with a planar sensor as
OS2 60. The nature ofOS1 20 distinguishes between two rather broad classes of triangulation. In a structured-light triangulation,OS1 20 is typically a laser or the like whose beam may be shaped as a single line that is moved to project a moving point onto a surface. Alternatively the laser beam may be planar and moved to project a planar curve. As noted, another class of triangulation system may be termed passive triangulation in which a camera is used asOS1 20. Structured-light systems tend to be more complex to build and consume more operating power, due to the need to project a plane of light. Passive systems are less expensive, and consume less power. However passive system must solve the so-called correspondence problem, e.g., to determine which pairs of points in the two images are projections of the same point in the real world. As will be described, passive non-structured-light triangulation embodiments may be used, according to the present invention. - Whether
system 10 is implemented as a structured-light system in which OS1 actively emits light and OS2 is a camera, or as a passive system in which OS1 and OS2 are both cameras, information from OS2 and OS1 will be coupled to a processing unit, e.g., 70, that can determine what events are occurring. In either embodiment, when an object such as 120R intersects theprojection plane 30 associated withOS1 20, the intersection is detectable. In a structured-light embodiment in which OS1 emits optical energy, the intersection is noted by optical energy reflected from the intersectedobject 120R and detected by OS2, typically a camera. In a passive light embodiment, the intersection is seen by OS1, a camera, and also by OS2, a camera. In each embodiment, the intersection withplane 30 is detected as though the region ofsurface 40 underlying the (x,z) plane intersection were touched byobject 120R.System 10 preferably includes acomputing system 70 that receives data from OS1, OS2 and uses geometry to determine the plane intersection position (x,z) from reflected image coordinates in a structured-light embodiment, or from camera image coordinates in a passive system. As such, the dual tasks of detecting initial and continuing contact and penetration of plane 30 (e.g., touch events), and determining intersection coordinate positions on the plane may be thus accomplished. - To summarize thus far, touch events are detected and declared when OS1 recognizes the intersection of
plane 30 with an intruding object such as 120R. In a two-camera system, a correspondence is established between points in the perceived image from OS1 and from those in OS2. Thereafter, OS2 camera coordinates are transformed into touch-area (x-axis, z-axis) coordinates to locate the (x,z) coordinate position of the event within the area of interest inplane 30. Preferably such transformations are carried out byprocessor unit 70, which executes algorithms to compute intersect positions inplane 30 from image coordinates of points visible to OS2. Further, a passive light system must distinguish intruding objects from background in images from OS1 and OS2. Wheresystem 10 is a passive light system, correspondence-needs to be established between the images from camera OS1 and from camera OS2. Wheresystem 10 is a structured-light system, it is desired to minimize interference from ambient light. - Consider now computation of the (X,Z) intersection or tip position on
plane 30. In perspective projection, a plane in the world and its image are related by a transformation called a homography. Let a point (X,Z) on such plane be represented by the column vector P=(X, Z, 1)T, where the superscript T denotes transposition. Similarly, let the corresponding image point be represented by p=(x, z, 1)T. - A homography then is a linear transformation P=Hp, where H is a 3×3 matrix.
- This homography matrix may be found using a calibration procedure. Since the sensor rests on the surface, sensor position relative to the surface is constant, and the calibration procedure need be executed only once. For calibration, a grid of known pitch is placed on the flat surface on which the sensor is resting. The coordinates pi of the image points corresponding to the grid vertices Pi are measured in the image. A direct linear transform (DLT) algorithm can be used to determine the homography matrix H. Such DLT transform is known in the art; see for example Richard Hartley and Andrew Zisserman. Multiple View Geometry in Computer Vision, Cambridge University Press, Cambridge, UK, 2000.
- Once H is known, the surface point P corresponding to a point p in the image is immediately computed by the matrix-vector multiplication above. Preferably such computations are executed by
system 70. - Image correspondence for passive light embodiments will now be described. Cameras OS1 20 and
OS2 60 see the same plane in space. As a consequence, mapping between the line-scan camera image from OS1 and the camera image from OS2 will itself be a homography. This is similar to mapping between the OS2 camera image and theplane 30 touch surface described above with respect to computation of the tip intercept position. Thus a similar procedure can be used to compute this mapping. - Note that since line
scan camera OS1 20 essentially sees or grazes the touch surface collapsed to a single line, homography between the two images is degenerate. For each OS2 camera point there is one OS1 line-scan image point, but for each OS1 line-scan image point there is an entire line of OS2 camera points. Because of this degeneracy, the above-described DLT algorithm will be (trivially) modified to yield a point-to-line correspondence. By definition, a passive light embodiment of the present invention has no control over ambient lighting, and it can be challenging to distinguish intruding intersecting objects or tips from the general background. In short, how to tell whether a particular image pixel in an OS1 image or OS2 image represents the image of a point on an object such as 120R, or is a point in the general background. An algorithm executable bysystem 70 will now be described. - Initially, assume one or more background images 1 1, . . . , In with only the touch surface portion of
plane 30 in view. Assume that cameras OS1 and OS2 can respond to color, and let Rbi(x, z), Gbi(x, z), Bbi(x, z) be the red, green, and blue components of the background image intensity li at pixel position (x, z). Let sb(x, z) be a summary of Rbi(x, z), Gbi(x, z), Bbi(x, z) over all images. For instance, Sb(x, z) can be a three-vector with the averages, medians, or other statistics of Rbi(x, z), Gbi(x, z), Bbi(x, z) at pixel position (x, z) over all background images I1, . . . , In, possibly normalized to de-emphasize variations in image brightness. - Next, collect a similar summary st for tip pixels over a new sequence of images J1, . . . , Jm. This second summary is a single vector, rather than an image of vectors as for Sb(x, z). In other words, st does not depend on the pixel position (x, z). This new summary can be computed, for instance, by asking a user to place finger tips or stylus in the sensitive area of the surface, and recording values only at pixel positions (x, z) whose color is very different from the background summary sb(x, z) at (x, z), and computing statistics over all values of j, x, z.
- Then, given a new image with color components c(x, z)=(R(x, z), G(x, z), B(x, z)), a particular pixel at (x, z) is attributed to either tip or background by a suitable discrimination rule. For instance, a distance d(c1, c2) can be defined between three-vectors (Euclidean distance is one example), and pixels are assigned based on the following exemplary rule:
-
- Tip if d(c(x,z), sb(x, z))>>d(c(x,z), st).
- Unknown otherwise.
- Techniques for reducing ambient light interference, especially for a structured-light triangulation embodiment will now be described. In such embodiment, OS2 needs to distinguish between ambient light and light produced by the line generator and reflected back by an intruding object.
- Using a first method, OS1 emits energy in a region of the light spectrum where ambient light has little power, for instance, in the near infrared. An infrared filter on camera OS2 can ensure that the light detected by the OS2 sensor is primarily reflected from the object (e.g., 120R) into the lens of camera OS2.
- In a second method, OS1 operates in the visible part of the spectrum, but is substantially brighter than ambient light. Although this can be achieved in principle with any color of the light source, for indoor applications it may be useful to use a blue-green light source for OS1 (500 nm to 550 nm) because standard fluorescent lights have relatively low emission in this band. Preferably OS2 will including a matched filter to ensure that response to other wavelengths are substantially attenuated.
- A third method to reduce effects of ambient light uses a standard visible laser source for OS1, and a color camera sensor for OS2. This method uses the same background subtraction algorithm described above. Let the following combination be defined, using the same terminology as above:
C(x, z)=min{d(c(x,z), sb(x, z)), d(c(x,z), st)}. - This combination will be exactly zero when c(x,z) is equal to the representative object tip summary st (since d(st, st)=0) and for the background image sb(x, z) (since d(sb(x, z), sb(x, z))=0), and close to zero for other object tip image patches and for visible parts of the background. In other words, object tips and background will be hardly visible in the image C(x,z). By comparison, at positions where the
projection plane 30 from laser emitter OS1 intersectsobject tips 120R, the term d(c(x,z), st) will be significantly non-zero, which in turn yields a substantially non-zero value for C(x,z). This methodology achieves the desired goal of identifying essentially only the object tip pixels illuminated by laser (or other emitter) OS1. This method can be varied to use light emitters of different colors, to use other distance definitions for the distance d, and to use different summaries Sb(x, z) and st. - In
FIG. 1A , ifdevice 80 is a compact system such as a PDA or cell telephone, it becomes especially desirable to reduce the size needed to implement the present invention. A smaller overall form fact can result if OS2 is inclined at some angle θ, as shown inFIGS. 1A-1C , 2A, 2B, with respect to plane 30 orsurface 40. But as angle θdecreases, camera OS2 seesplane 30 from a shallower angle. For a fixed size for the sensitive area ofplane 30, i.e., the surface rectangle that is to be “touched” by a user object to manipulate an underlying virtual input device, as distance B and angle θ decrease, the effective area subtended by the field of view decreases. The result is to decrease effective OS2 resolution and thus to decrease accuracy of z-depth measurements as shown inFIG. 3A , where L denotes a camera lens associated with OS2, whose plane of pixel detectors is shown as a straight line labeled OS2. - As noted in
FIG. 3A , moving OS2 closer to plane 30 results in a shallower viewpoint and in a smaller, less accurately perceived, camera image. These adverse side effects may be diminished as shown inFIG. 3B by tilting the plane of pixel detectors in camera OS2, indeed tilting almost parallel to plane 30. With the tilted configuration ofFIG. 3B , note that a substantially greater number of image scan lines intersect the cone of rays from the sensitive area onplane 30, which increases depth resolution accordingly. Compare, for example, the relatively small distance Dx inFIG. 3A with the larger distance Dx′ inFIG. 3B , representing the larger number of image scan lines now in use. Further, as the OS2 camera sensor plane becomes more parallel to the plane of the touch surface or to plane 30, less distortion of the touch surface image results. This implies that parallel lines on the touch surface (or on plane 30) will remain parallel in the OS2 camera image. An advantage is the simplification of the homography H to an affine transformation (a shift and a scale). Further, image resolution is rendered more uniform over the entire sensitive area within the field of view of interest. - Consider now the configuration of
FIG. 3C . It is apparent that different points on the touch sensitive area of interest onplane 30 are at different distances from lens L of camera OS2. This means that one cannot focus the entire sensitive area of interest precisely if lens L is positioned as shown inFIG. 3A or inFIG. 3B . While closing the camera iris could increase the depth of field, resultant images would become dimmer, and image signal-to-noise ratio would be degraded. - Accordingly the configuration of
FIG. 3C may be employed in which lens L is repositioned relative toFIG. 3B . In this configuration,touch surface 30, the camera OS2 sensor, and camera lens L are said to satisfy the so-called Scheimpflug condition, in which their respective planes intersect along a common line, a line that is at infinity inFIG. 3C . Further details as to the Scheimpflug condition may be found at The Optical Society of America. Handbook of Optics, Michael Bass, Editor in Chief, McGraw-Hill, Inc., 1995. InFIG. 3C , when the relevant optical system satisfies this condition, all points ontouch surface 30 will be in focus. Thus, by using an appropriately tilted sensor OS2, an appropriately positioned lens S that satisfy the Scheimpflug condition, the image seen by OS2 of points of interest onsurface plane 30 will be in focus, and will exhibit high resolution with little distortion. But meeting the Scheimpflug condition can result in loss of image brightness because the angle that the lens subtends when viewed from the center of the sensitive area onplane 30 is reduced with respect to the configuration ofFIG. 3B . As a consequence, it may be preferable in some applications to reach a compromise between sharpness of focus and image brightness, by placing OS2 camera lens in an orientation intermediate between those ofFIG. 3B andFIG. 3C .FIG. 3D depicts one such intermediate configuration, in which lens L is purposely tilted slightly away from a Scheimpflug-satisfying orientation with respect to planes of OS2 and 30. - Such intermediate orientations do not satisfy the Scheimpflug condition, but by a lesser degree and therefore still exhibit good focusing than a configuration whose lens axis points directly towards the center of the sensitive area of
plane 3.FIG. 3E depicts another alternative intermediate configuration, one in which the Scheimpflug condition is exactly verified, but the camera sensor OS2 is tilted away from horizontal. The configuration ofFIG. 3E can achieve exact focus but with somewhat lower image resolution and more distortion than the configuration ofFIG. 3C . -
FIG. 4 is a block diagram depicting operative portions ofprocessor unit 70 withinsystem 10, which processor unit preferably carries out the various triangulation and other calculations described herein to sense and identify (x,z) intercepts with the plane ofinterest 30. As the left portion ofFIG. 4 , information fromOS1 20 andOS2 30 is input respectively to pixel maps 200-1, 200-2. InFIG. 4 , OS1 and OS2 inputs refer to a stream of frames of digitized images are generated by optical system 1 (20) and optical system 2 (60) in a planarrange sensor system 10, according to the present invention. In a preferred embodiment, optical system generates at least about 30 frames per second (fps). Higher frame rates are desirable in that at 30 fps, the tip of the user's finger or stylus can move several pixels while “typing” on virtual input device between two frames. Pixel map modules 200-1, 200-2 construct digital frames from OS1 and OS2 in memory associated withcomputational unit 70.Synchronizer module 210 ensures that the two optical systems produce frames of digitized images at approximately the same time. If desired, a double-buffering system may be implemented to permit construction of one frame while the previous frame (in time) is being processed by the other modules.Touch detection module 220 detects a touch (e.g., intersection of a user finger or stylus with the optical plane sensed by OS1) when the outline of a fingertip or stylus appears in a selected row of the frame. When a touch is detected,tip detection module 230 records the outline of the corresponding fingertip into the appropriate pixel map, 200-1 or 200-2. InFIG. 4 , in a structured-light embodiment where OS1 is a light beam generator, no pixel map is produced, and touch detection will use input from OS2 rather than from OS1. -
Touch position module 240 uses the tip pixel coordinates from tip detection module 230, at the time a touch is reported by touch detection module 220, to find the (x,z) coordinates of the touch on the touch surface. As noted, a touch is tantamount to penetration of plane 30, which in a structured-light embodiment is associated with an optical emitter OS1, and in a passive light embodiment is associated with the plane of view of a camera OS1. Mathematical methods to convert the pixel coordinates to the X-Z touch position are described elsewhere herein. -
Key identification module 260 uses the X-Z position of a touch and maps the position to a key identification using a keyboard layout table 250, preferably stored in memory associated with computation unit 70. Keyboard layout table 250 typically defines the top/bottom/left/right coordinates of each key relative to a zero origin. As such, a function of key identification module 260 is to perform a search of table 250 and determine which key contains the (x,z) coordinates of the touch point. When the touched (virtual) key is identified, translation module 270 maps the key to a predetermined KEYCODE value. The KEYCODE value is output or passed to an application executing on the companion device or system 80 that is waiting to receive notification of a keystroke event. The application under execution interprets the keystroke event and assigns a meaning to it. For instance, a text input application uses the value to determine what symbol was typed, while an electronic piano application determines what musical note was pressed and plays that note, etc. - Alternatively, as shown in
FIG. 4, the X-Z touch coordinates can be passed directly to application 280. Application 280 could use the coordinate data to control the position of a cursor on a display in a virtual mouse or virtual trackball embodiment, or to control a source of digital ink whose locus is shown on a display for a drawing or handwriting type application in a virtual pen or virtual stylus embodiment. -
FIG. 5A is a simplified view of system 10 in which virtual device 50 is now a control with five regions, and in which the companion device includes a display 150 that may include icons 140, one of which is surrounded by a cursor 310 that a user can move using virtual device 50′, here a virtual trackball or mouse. For example, within virtual device 50′, if a portion of the user's hand 120R (or stylus) presses virtual region 300-1, the displayed cursor 310 on the companion device moves accordingly; in FIG. 5A, if the user now presses region 300-5, the "hotdog" icon is selected. The user might, for example, touch regions of input device 50′ representing a trip originating point and a trip destination point, whereupon system 10 could cause appropriate transportation vehicles, schedules, fares, etc. to be displayed and, if desired, printed out. It will be appreciated that information generated by system 10 may simply be raw (x,z) coordinates that a software application executed by a companion device may use to reposition a cursor or other information on a display. - It is understood in
FIG. 5A that virtual device 50′ is passive; its outline may be printed or painted onto an underlying work surface, or perhaps its outline can be projected by system 10. The various regions of interest in virtual device 50 may be identified in terms of coordinates relative to the x-z plane. Consider the information in Table 1, below, which corresponds to information in keyboard layout 250 in FIG. 4:

TABLE 1

REGION | TOP | BOTTOM | LEFT | RIGHT
---|---|---|---|---
U | −2 | −1 | −1 | 1
B | 1 | 2 | −1 | 1
R | −1 | 1 | 1 | 2
L | −1 | 1 | −2 | −1
 | −1 | 1 | −1 | 1

- When the user's finger (or stylus) touches a region of
virtual input device 50, touch position module 240 (see FIG. 4) determines the (x,z) coordinates of the touch point 110. In FIG. 5A, touch point 110 is within "B" region 300-4. Key identification module 260 uses the keyboard layout 250 information, in this example as shown in Table 1, to determine where in the relevant (x,z) plane the touch point coordinates occur. By way of example, assume the touch coordinates (x,z) are (1.5, 0.5). A search routine, preferably stored in memory associated with unit 70 (see FIG. 1A) and executed by unit 70, determines that 1<x<2 and −1<z<1. Searching the information in Table 1, the key identification module will determine that touch point 110 falls within entry B. In this example, the companion device receives information from system 10 advising that region B has been touched. Processor unit 70 in system 10 can cause the companion device to receive such other information as may be required to perform the task associated with the event, for example to move the cursor downward on the display. One way such a search routine might be coded is sketched below.
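As an illustration only, the following Python sketch encodes the Table 1 entries and performs the search just described; consistent with the worked example, the TOP/BOTTOM bounds are treated as limits on x and the LEFT/RIGHT bounds as limits on z. The dictionary layout, the name given to the unlabeled fifth region, and the half-open comparisons are assumptions made for the example; a full virtual keyboard such as that of FIG. 5B would simply hold one such entry per virtual key.

```python
# Table 1, as it might be stored in keyboard layout table 250:
# region -> (top, bottom, left, right), with TOP/BOTTOM bounding x and LEFT/RIGHT bounding z.
TABLE_1 = {
    "U": (-2, -1, -1, 1),
    "B": (1, 2, -1, 1),
    "R": (-1, 1, 1, 2),
    "L": (-1, 1, -2, -1),
    "CENTER": (-1, 1, -1, 1),   # unlabeled fifth region of Table 1 (name assumed)
}

def find_region(x, z, layout=TABLE_1):
    """Return the first region whose bounds contain the (x, z) touch point, else None."""
    for region, (top, bottom, left, right) in layout.items():
        if top <= x < bottom and left <= z < right:
            return region
    return None

print(find_region(1.5, 0.5))   # -> "B", matching the worked example above
```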
FIG. 5B depicts an embodiment of system 10 similar to that shown in FIG. 1A. In FIG. 5B the virtual input device 50 is a computer keyboard and the companion device is a cellular telephone; system 10 could in fact be implemented within the device itself, with fan-beam 30 emitted from a lower portion of the device. An image of virtual input device 50 could, if desired, be projected optically from the device, or virtual input device 50 might be printed on a foldable substrate, e.g., plastic, paper, etc., that can be retained within the device. In use, the substrate is unfolded and virtual input device 50 is placed in front of the device. OS1 emits fan-beam 30 encompassing the virtual input device, and OS2 can detect intersection 110 of an object, e.g., a user's finger or stylus, etc., with a location in the fan-beam overlying any region of interest in virtual input device 50. - In
FIG. 5B , OS2 will not detect reflected optical energy untilobject 120R intercepts fan-beam 130, whereupon some optical energy emitted by OS1 will be reflected (130) and will be detected by OS2. Relative to the (x,z) coordinate system shown inFIG. 1A , the point ofinterception 110 is approximately location (13,5). Referring toFIG. 4 , it is understood that keyboard layout table 250 will have at least one entry for each virtual key, e.g., “1”, “2”, . . . “Q”, “W”, . . . “SHIFT” defined onvirtual input device 50. An entry search process similar to that described with respect toFIG. 5A is carried out, preferably byunit 70, and the relevant virtual key that underliestouch point 110 can be identified. InFIG. 5B , the relevant key is “I”, which letter “I” is shown ondisplay 150 as part ofe-mail message text 140 being input intocellular telephone hand 120R (or by a stylus). The ability to rapidly touchtype messages intocellular telephone virtual keyboard 50, as contrasted with laboriously inputting messages using the cellular telephone keypad will be appreciated. - In
FIG. 5C, an embodiment of system 10 is shown in which the workspace 40 is a vertical wall, perhaps in a store or mall, and virtual input device 50 is also vertically disposed. In this embodiment, virtual input device 50 is shown with several icons and/or words 320 that, when touched by a user's hand 120, e.g., at touch point 110, will cause appropriate text and/or a graphic image 140 to appear on display 150 in the companion device. Icons 320 may represent locations or departments in a store, and display 150 will interactively provide further information in response to user touching of an icon region. In a mall, the various icons may represent entire stores, or departments or regions within a store, etc. The detection and localization of touch points such as 110 is preferably carried out as has been described with respect to the embodiments of FIGS. 3A and 3B. Preferably, processor unit 70 within system 10 executes software, also stored within or loadable into processor unit 70, to determine what icon or text portion of virtual input device 50 has been touched, and what commands and/or data should be communicated to the host system. - In the embodiment of
FIG. 5C, if the virtual input device 50 is apt to be changed frequently, e.g., perhaps it is a menu in a restaurant where display 150 can provide detailed information such as calories, contents of sauces, etc., device 50 may be back-projected from within wall 40. Understandably, if the layout and location of the various icons 320 change, mapping information stored within unit 70 in system 10 will also be changed. The ability to rapidly change the nature and content of the virtual input device, without necessarily being locked into having icons of a fixed size in a fixed location, can be very useful. If desired, some icons may indeed be fixed in size and location on device 50, and their touching by a user may be used to select a re-mapping of what is shown on input device 50, and what is mapped by software within unit 70. It is understood that in addition to simply displaying information, which may include advertisements, the companion device can also provide promotional coupons 330 for users. - Turning now to
FIG. 6, the manner of registering a touch event and localizing its position is determined by system 10 in a manner depending upon whether system 10 is a structured-light system or a passive light system. As noted earlier, in a structured-light system OS1 may be a line-generating laser system, and in a passive light system OS1 may be a digital camera. Each system defines a plane 30 that, when intercepted by an object such as 120R, will define a touch event whose (x,z) coordinates are then to be determined. Once the (x,z) coordinates of the virtual touch are determined, the present invention can decide what input or command was intended by the person using the system. Such input or command can be passed to a companion device, which device may in fact also house the present invention. - If
system 10 is a passive light system, a touch event is registered when the outline of a fingertip appears in a selected frame row of OS1, a digital camera. The (x,z) plane 30 location of the touch is determined from the pixel position of the corresponding object tip (e.g., 120R) in OS2 when a touch is detected in OS1. As shown in FIG. 6, the range or distance from camera OS1 to the touch point is an affine function of the number of pixels from the "near" end of the pixel frame. - As noted, in a structured-light embodiment, OS1 will typically be a laser line generator, and OS2 will be a camera primarily sensitive to the wavelength of the light energy emitted by OS1. As noted, this can be achieved by installing a narrowband light filter on OS2 such that only wavelengths corresponding to that emitted by OS1 will pass. Alternatively, OS2 can be understood to include a shutter that opens and closes in synchronism with the pulsed output of OS1, e.g., OS2 can see optical energy only at times when OS1 emits optical energy. In either embodiment of a structured-light system, OS2 preferably will only detect objects that
intercept plane 30 and thus reflect energy emitted by OS1. - In the above case, touch sense detection and range calculation are carried out by
system 10. Thus, a touch event is registered when the outline of an object, e.g., fingertip 120R, appears within the viewing range of OS2. As in the above example, the range distance may be calculated as an affine function of the number of pixels from the "near" end of the pixel frame; a sketch of such an affine range calculation is given below.
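The affine relationship just mentioned can be established from two calibration touches made at known ranges. The following sketch is illustrative only; the calibration pixel offsets and ranges are invented for the example and are not taken from the patent.

```python
def fit_affine_range(n1, r1, n2, r2):
    """Fit range = a + b * n from two calibration touches observed at pixel offsets
    n1, n2 (measured from the 'near' end of the frame) with known ranges r1, r2."""
    b = (r2 - r1) / (n2 - n1)
    a = r1 - b * n1
    return a, b

# Hypothetical calibration: a touch 10 cm away seen 40 pixels from the near end,
# and a touch 30 cm away seen 360 pixels from the near end.
a, b = fit_affine_range(40, 10.0, 360, 30.0)
print(a + b * 200)   # estimated range (cm) for a tip seen 200 pixels from the near end -> 20.0
```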
A further example of the analytical steps carried out in FIG. 4 by the present invention will now be given. Assume that the virtual input device is a keyboard 50, such as depicted in FIG. 1A, and that system 10 is expected to output information comprising at least the scan code corresponding to the virtual key that the user has "touched" on virtual keyboard 50. In FIG. 1A and FIG. 2A, assume that the upper portion (e.g., the row with virtual keys "ESC", "F1", "F2", etc.) is a distance of about 20 cm from optical system OS1 20. Assume that camera OS2 60 is mounted on a PDA or other device 80 that is about 10 cm tall, and is placed at a known angle α1=120° relative to the plane 30. Assume too that camera OS2 60 has a lens with a focal length of about 4 mm, and a camera sensor arrayed with 480 rows and 640 columns. - The coordinates of the upper left corner of virtual keyboard 50 are set by convention to be x=0 and z=0, e.g., (0,0). The homography H that maps points in the image to points on the virtual device depends on the tilt of camera OS2 60. An exemplary homography matrix for the configuration above is as follows: - The above matrix preferably need be determined only once, during a calibration procedure described elsewhere herein.
Referring now to FIG. 1A and FIG. 7, assume that user 120L touches the region of virtual keyboard 50 corresponding to the letter "T", which letter "T" may be printed on a substrate to guide the user's fingers or may be part of an image of the virtual input device, perhaps projected by system 10. Using the system of coordinates defined above, key "T" may be said to lie between horizontal coordinates xmin=10.5 and xmax=12.4 cm, and between vertical coordinates zmin=1.9 and zmax=3.8 cm, as shown in FIG. 7. - Referring now to
FIG. 6, before the user's finger 120L (or stylus) intersects the plane of sensor OS1 20, the latter detects no light and sees an image made of black pixels, as illustrated in vignette 340 at the figure bottom. However, as soon as the user-object intersects optical plane 30, the intersection event or interface becomes visible to OS1 20. OS1 20 now generates an image similar to the one depicted in vignette 350 at the bottom of FIG. 6. When the downward-moving tip 110 of the user-object (e.g., finger 120L) reaches surface 40, more of the finger becomes visible. The finger contour may now be determined, e.g., by unit 70 using edge detection. Such determination is depicted at the bottom of FIG. 6 as "TOUCH" event vignette 360. Touch detection module 220 in FIG. 4 then determines that the user-object has touched surface 40, and informs tip detection module 230 of this occurrence. - As seen in
FIG. 1A, the virtual "T" key is found in the second row of virtual keyboard 50, and is therefore relatively close to sensor OS1 20. In FIG. 6, this situation corresponds to the fingertip in position 110′. As further shown in FIG. 6, the projection of the bottom of the fingertip at position 110′ onto the sensor of optical system OS2 60 is relatively close to the top of the image. The edge of the fingertip image thus produced is similar to that shown in vignette 370 at the top of FIG. 6. In vignette 370, the two gray squares shown represent the bottom edge pixels of the fingertip. - Had the user instead struck the spacebar or some other key closer to the bottom of
virtual keyboard 50, that is, farther away from the sensor OS1 20, the situation depicted by fingertip position 110 in FIG. 6 would have arisen. Such a relatively far location on the virtual keyboard is mapped to a pixel closer to the bottom of the image, and an edge image similar to that sketched in vignette 380 at the top of FIG. 6 would instead have arisen. Intermediate virtual key contact situations would produce edge images more similar to that depicted as vignette 390 at the top of FIG. 6. - In the above example, in which virtual key "T" is pressed,
tip detection module 230 in FIG. 4 runs an edge detection algorithm, and thereby finds the bottom center of the "blob" representing the generalized region of contact to be at image row 65 and column 492. A homogeneous image coordinate vector p is therefore formed from these pixel coordinates by appending a third coordinate equal to 1. - The homogeneous image coordinate vector p is then multiplied by the homography matrix H to yield the homogeneous coordinates P=Hp of the user fingertip in the frame of reference of the virtual keyboard; normalizing P by its third component gives the (x,z) position. - The user-object or
finger 120L is thus determined to have touched virtual keyboard 50 at a location having coordinates x=11.53 and z=2.49 cm. Key identification module 260 in FIG. 4 searches keyboard layout 250 for a key such that xmin≦11.53<xmax and zmin≦2.49<zmax. -
FIG. 4 ,key identification module 260 therefore determines that a user-object is touching virtual key “T” onvirtual keyboard 50, and informstranslation module 270 of this occurrence. - The occurrence need not necessarily be a keystroke. For example, the userobject or finger may have earlier contacted the “T” key and may have remained in touch contact with the key thereafter. In such case, no keystroke event should be communicated to
The occurrence need not necessarily be a keystroke. For example, the user-object or finger may have earlier contacted the "T" key and may have remained in touch contact with the key thereafter. In such a case, no keystroke event should be communicated to application 280 running on the companion device. -
Key translation module 270 preferably stores the up-state or down-state of each key internally. This module determines at every frame whether any key has changed state. In the above example, if the key "T" is found to be in the down-state in the current frame but was in the up-state in the previous frame, translation module 270 sends a KEYCODE message to application 280. The KEYCODE will include a 'KEY DOWN' event identifier, along with a 'KEY ID' tag that identifies the "T" key, and thereby informs application 280 that the "T" key has just been "pressed" by the user-object. If the "T" key were found to have been also in the down-state during previous frames, the KEYCODE would include a 'KEY HELD' event identifier, together with the 'KEY ID' associated with the "T" key. Sending the 'KEY HELD' event at each frame (excepting the first frame) in which the key is in the down-state frees application 280 from having to maintain any state about the keys. Once the "T" key is found to be in the up-state in the current frame but was in the down-state in previous frames, translation module 270 sends a KEYCODE with a 'KEY UP' event identifier, again with a 'KEY ID' tag identifying the "T" key, informing application 280 that the "T" key was just "released" by the user-object. A sketch of such per-key state tracking is given after the following paragraph. - From the foregoing, it will be appreciated that it suffices that frame images comprise only the tips of the user-object, e.g., fingertips. The various embodiments of the present invention use less than full three-dimensional image information, acquired from within a relatively shallow volume defined slightly above a virtual input or virtual transfer device. A system implementing these embodiments can be relatively inexpensively fabricated and operated from a self-contained battery source. Indeed, the system could be constructed within common devices such as PDAs, cellular telephones, etc., to hasten the input or transfer of information from a user. As described, undesired effects from ambient light may be reduced by selection of wavelengths in active light embodiments, by synchronization of camera(s) and light sources, and by signal processing techniques that acquire and subtract out images representing background noise.
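Purely by way of illustration, per-key state tracking of the kind described for translation module 270 might look like the following Python sketch; the event strings mirror the KEY DOWN / KEY HELD / KEY UP identifiers above, while the class and method names are assumptions made for the example.

```python
class KeyTranslationModule:
    """Tracks the up/down state of each virtual key across frames and emits
    KEYCODE events in the spirit of translation module 270."""

    def __init__(self, send_keycode):
        self.down_keys = set()            # keys that were in the down-state last frame
        self.send_keycode = send_keycode  # callback taking (event, key_id)

    def on_frame(self, touched_keys):
        """touched_keys: set of key identifiers found in the down-state this frame."""
        for key in touched_keys - self.down_keys:
            self.send_keycode("KEY DOWN", key)   # newly pressed
        for key in touched_keys & self.down_keys:
            self.send_keycode("KEY HELD", key)   # held down (every frame after the first)
        for key in self.down_keys - touched_keys:
            self.send_keycode("KEY UP", key)     # just released
        self.down_keys = set(touched_keys)

# Example: the "T" key goes down, is held for one frame, then is released.
module = KeyTranslationModule(lambda event, key: print(event, key))
module.on_frame({"T"})   # KEY DOWN T
module.on_frame({"T"})   # KEY HELD T
module.on_frame(set())   # KEY UP T
```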
- Modifications and variations may be made to the disclosed embodiments without departing from the subject and spirit of the invention as defined by the following claims.
Claims (15)
1-13. (Cancelled)
14. A system enabling a user-manipulated user-object used with a virtual transfer device to transfer information to a companion device, the system comprising:
a central processor unit including memory storing at least one software routine;
a first optical system defining a plane substantially parallel to and spaced above a presumed location of said virtual transfer device;
a second optical system having a relevant field of view encompassing at least portions of said plane and responsive to user-object penetration of said plane to interact with said virtual transfer device;
means for determining relative position of a portion of said user-object on said plane;
wherein said system transfers information to said companion device, enabling user-object interaction with said virtual transfer device to affect operation of said companion device.
15. The system of claim 14 , wherein said means for determining includes determining said relative position using triangulation analysis.
16. The system of claim 14 , wherein said means for determining includes said processor unit executing said routine to determine said relative position.
17. The system of claim 14 , wherein:
said first optical system includes means for generating a plane of optical energy; and
said second optical system includes a camera sensor that detects a reflected portion of said optical energy when said user-object penetrates said plane.
18. The system of claim 14 , wherein:
said first optical system includes at least one of (i) a laser to generate said plane, and (ii) an LED to generate said plane; and
said second optical system includes a camera sensor that detects a reflected portion of said optical energy when said user-object penetrates said plane.
19. The system of claim 14 , further including means for enhancing responsiveness of said second optical system to said user-object penetration while decreasing said responsiveness to ambient light.
20. The system of claim 19 , wherein said means for enhancing includes at least one of (a) providing a signature associated with generation of said plane, (b) selecting a common wavelength for energy within said plane defined by said first optical system and for responsiveness of said second optical system, and (c) synchronizing operation of said first optical system and operation of said second optical system.
21. The system of claim 14 , wherein said first optical system includes a first camera sensor that defines said plane.
22. The system of claim 14 , wherein:
said first optical system includes a first camera sensor that defines said plane;
said second optical system includes a second camera to sense said penetration;
and further including:
a source of optical energy directed generally toward said virtual transfer device; and
means for synchronizing operation of at least two of said first optical system, said second optical system, and said source of optical energy;
wherein effects of ambient light upon accuracy of information obtained with said system are reduced.
23. The system of claim 14 , wherein:
said first optical system includes a generator of optical energy of a desired wavelength; and
said second optical system is sensitive substantially only to optical energy of said desired wavelength.
24. The system of claim 14 , wherein said companion device includes at least one of (i) a PDA, (ii) a portable communication device, (iii) an electronic device, (iv) an electronic game device, and (v) a musical instrument, and said virtual transfer device is at least one of (I) a virtual keyboard, (II) a virtual mouse, (III) a virtual trackball, (IV) a virtual pen, (V) a virtual trackpad, and (VI) a user-interface selector.
25. The system of claim 14 , wherein said virtual transfer device is mapped to a work surface selected from at least one of (i) a table top, (ii) a desk top, (iii) a wall, (iv) a point-of-sale appliance, (v) a point-of-service appliance, (vi) a kiosk, (vii) a surface in a vehicle, (viii) a projected display, (ix) a physical display, (x) a CRT, and (xi) an LCD.
26. The system of claim 14 , wherein at least one of said first optical system and said second optical system is a camera sensor having a lens and an image plane;
wherein at least one of said lens and said image plane is tilted to enhance at least one of resolution and depth of field.
27. The system of claim 14 , further including means for enhancing distinguishment of said user-object from a background object.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/750,452 US20050024324A1 (en) | 2000-02-11 | 2003-12-30 | Quasi-three-dimensional method and apparatus to detect and localize interaction of user-object and virtual transfer device |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/502,499 US6614422B1 (en) | 1999-11-04 | 2000-02-11 | Method and apparatus for entering data using a virtual input device |
US23118400P | 2000-09-07 | 2000-09-07 | |
US27212001P | 2001-02-27 | 2001-02-27 | |
US28711501P | 2001-04-27 | 2001-04-27 | |
US09/948,508 US6710770B2 (en) | 2000-02-11 | 2001-09-07 | Quasi-three-dimensional method and apparatus to detect and localize interaction of user-object and virtual transfer device |
US10/750,452 US20050024324A1 (en) | 2000-02-11 | 2003-12-30 | Quasi-three-dimensional method and apparatus to detect and localize interaction of user-object and virtual transfer device |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/948,508 Continuation US6710770B2 (en) | 1999-11-04 | 2001-09-07 | Quasi-three-dimensional method and apparatus to detect and localize interaction of user-object and virtual transfer device |
Publications (1)
Publication Number | Publication Date |
---|---|
US20050024324A1 true US20050024324A1 (en) | 2005-02-03 |
Family
ID=34109189
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/948,508 Expired - Fee Related US6710770B2 (en) | 1999-11-04 | 2001-09-07 | Quasi-three-dimensional method and apparatus to detect and localize interaction of user-object and virtual transfer device |
US10/750,452 Abandoned US20050024324A1 (en) | 2000-02-11 | 2003-12-30 | Quasi-three-dimensional method and apparatus to detect and localize interaction of user-object and virtual transfer device |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/948,508 Expired - Fee Related US6710770B2 (en) | 1999-11-04 | 2001-09-07 | Quasi-three-dimensional method and apparatus to detect and localize interaction of user-object and virtual transfer device |
Country Status (1)
Country | Link |
---|---|
US (2) | US6710770B2 (en) |
Cited By (67)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040085286A1 (en) * | 2002-10-31 | 2004-05-06 | Microsoft Corporation | Universal computing device |
US20040125147A1 (en) * | 2002-12-31 | 2004-07-01 | Chen-Hao Liu | Device and method for generating a virtual keyboard/display |
US20040136083A1 (en) * | 2002-10-31 | 2004-07-15 | Microsoft Corporation | Optical system design for a universal computing device |
US20050111700A1 (en) * | 2003-10-03 | 2005-05-26 | O'boyle Michael E. | Occupant detection system |
US20050162409A1 (en) * | 2000-08-18 | 2005-07-28 | International Business Machines Corporation | Projector and camera arrangement with shared optics and optical marker for use with whiteboard systems |
US20050193292A1 (en) * | 2004-01-06 | 2005-09-01 | Microsoft Corporation | Enhanced approach of m-array decoding and error correction |
US6968073B1 (en) | 2001-04-24 | 2005-11-22 | Automotive Systems Laboratory, Inc. | Occupant detection system |
WO2005114636A2 (en) * | 2004-05-12 | 2005-12-01 | Northrop Grumman Corporation | Projector pen image stabilization system |
US20060182309A1 (en) * | 2002-10-31 | 2006-08-17 | Microsoft Corporation | Passive embedded interaction coding |
US20060215913A1 (en) * | 2005-03-24 | 2006-09-28 | Microsoft Corporation | Maze pattern analysis with image matching |
US20060242562A1 (en) * | 2005-04-22 | 2006-10-26 | Microsoft Corporation | Embedded method for embedded interaction code array |
US20060274948A1 (en) * | 2005-06-02 | 2006-12-07 | Microsoft Corporation | Stroke localization and binding to electronic document |
US20070041654A1 (en) * | 2005-08-17 | 2007-02-22 | Microsoft Corporation | Embedded interaction code enabled surface type identification |
US20070121087A1 (en) * | 2005-11-29 | 2007-05-31 | Garg Sachin K | Image projection system for personal media player |
US20080025612A1 (en) * | 2004-01-16 | 2008-01-31 | Microsoft Corporation | Strokes Localization by m-Array Decoding and Fast Image Matching |
US20090027241A1 (en) * | 2005-05-31 | 2009-01-29 | Microsoft Corporation | Fast error-correcting of embedded interaction codes |
US20090109175A1 (en) * | 2007-10-31 | 2009-04-30 | Fein Gene S | Method and apparatus for user interface of input devices |
US20090128716A1 (en) * | 2007-11-15 | 2009-05-21 | Funai Electric Co., Ltd. | Projector and method for projecting image |
FR2924654A1 (en) * | 2007-12-10 | 2009-06-12 | Valeo Systemes Thermiques | DEVICE FOR CONTROLLING A AUTOMOTIVE ACCESSORY AND METHOD FOR IMPLEMENTING SUCH A CONTROL DEVICE |
USRE40880E1 (en) * | 2000-05-17 | 2009-08-25 | P. Milton Holdings, Llc | Optical system for inputting pointer and character data into electronic equipment |
US7826074B1 (en) | 2005-02-25 | 2010-11-02 | Microsoft Corporation | Fast embedded interaction code printing with custom postscript commands |
US20100302144A1 (en) * | 2009-05-28 | 2010-12-02 | Microsoft Corporation | Creating a virtual mouse input device |
US20100302155A1 (en) * | 2009-05-28 | 2010-12-02 | Microsoft Corporation | Virtual input devices created by touch input |
US20110090151A1 (en) * | 2008-04-18 | 2011-04-21 | Shanghai Hanxiang (Cootek) Information Technology Co., Ltd. | System capable of accomplishing flexible keyboard layout |
US20110090147A1 (en) * | 2009-10-20 | 2011-04-21 | Qualstar Corporation | Touchless pointing device |
US20110096032A1 (en) * | 2009-10-26 | 2011-04-28 | Seiko Epson Corporation | Optical position detecting device and display device with position detecting function |
US20110096031A1 (en) * | 2009-10-26 | 2011-04-28 | Seiko Epson Corporation | Position detecting function-added projection display apparatus |
US20110187679A1 (en) * | 2010-02-01 | 2011-08-04 | Acer Incorporated | Optical touch display device and method thereof |
US20110202836A1 (en) * | 2010-02-12 | 2011-08-18 | Microsoft Corporation | Typing assistance for editing |
US20110242054A1 (en) * | 2010-04-01 | 2011-10-06 | Compal Communication, Inc. | Projection system with touch-sensitive projection image |
US8156153B2 (en) | 2005-04-22 | 2012-04-10 | Microsoft Corporation | Global metadata embedding and decoding |
US20120176341A1 (en) * | 2011-01-11 | 2012-07-12 | Texas Instruments Incorporated | Method and apparatus for camera projector system for enabling an interactive surface |
US20120268376A1 (en) * | 2011-04-20 | 2012-10-25 | Qualcomm Incorporated | Virtual keyboards and methods of providing the same |
US8319773B2 (en) | 2007-10-31 | 2012-11-27 | Fein Gene S | Method and apparatus for user interface communication with an image manipulator |
US20120306817A1 (en) * | 2011-05-30 | 2012-12-06 | Era Optoelectronics Inc. | Floating virtual image touch sensing apparatus |
US20130181904A1 (en) * | 2012-01-12 | 2013-07-18 | Fujitsu Limited | Device and method for detecting finger position |
US20130222247A1 (en) * | 2012-02-29 | 2013-08-29 | Eric Liu | Virtual keyboard adjustment based on user input offset |
US20130265283A1 (en) * | 2012-04-10 | 2013-10-10 | Pixart Imaging Inc. | Optical operation system |
US20140098025A1 (en) * | 2012-10-09 | 2014-04-10 | Cho-Yi Lin | Portable electrical input device capable of docking an electrical communication device and system thereof |
US8714749B2 (en) | 2009-11-06 | 2014-05-06 | Seiko Epson Corporation | Projection display device with position detection function |
US20140191947A1 (en) * | 2013-01-04 | 2014-07-10 | Texas Instruments Incorporated | Using Natural Movements of a Hand-Held Device to Manipulate Digital Content |
TWI451221B (en) * | 2010-03-11 | 2014-09-01 | Osram Opto Semiconductors Gmbh | Portable electronic device |
US8860640B2 (en) * | 2012-05-30 | 2014-10-14 | Christie Digital Systems Usa, Inc. | Zonal illumination for high dynamic range projection |
US20150029111A1 (en) * | 2011-12-19 | 2015-01-29 | Ralf Trachte | Field analysis for flexible computer inputs |
US20150084869A1 (en) * | 2012-04-13 | 2015-03-26 | Postech Academy-Industry Foundation | Method and apparatus for recognizing key input from virtual keyboard |
WO2013175389A3 (en) * | 2012-05-20 | 2015-08-13 | Extreme Reality Ltd. | Methods circuits apparatuses systems and associated computer executable code for providing projection based human machine interfaces |
US9232335B2 (en) | 2014-03-06 | 2016-01-05 | Sony Corporation | Networked speaker system with follow me |
US9288597B2 (en) | 2014-01-20 | 2016-03-15 | Sony Corporation | Distributed wireless speaker system with automatic configuration determination when new speakers are added |
US9369801B2 (en) | 2014-01-24 | 2016-06-14 | Sony Corporation | Wireless speaker system with noise cancelation |
US9426551B2 (en) | 2014-01-24 | 2016-08-23 | Sony Corporation | Distributed wireless speaker system with light show |
US9483997B2 (en) | 2014-03-10 | 2016-11-01 | Sony Corporation | Proximity detection of candidate companion display device in same room as primary display using infrared signaling |
US9560449B2 (en) | 2014-01-17 | 2017-01-31 | Sony Corporation | Distributed wireless speaker system |
US9693168B1 (en) | 2016-02-08 | 2017-06-27 | Sony Corporation | Ultrasonic speaker assembly for audio spatial effect |
US9693169B1 (en) | 2016-03-16 | 2017-06-27 | Sony Corporation | Ultrasonic speaker assembly with ultrasonic room mapping |
US9696414B2 (en) | 2014-05-15 | 2017-07-04 | Sony Corporation | Proximity detection of candidate companion display device in same room as primary display using sonic signaling |
US20170262133A1 (en) * | 2016-03-08 | 2017-09-14 | Serafim Technologies Inc. | Virtual input device for mobile phone |
US9794724B1 (en) | 2016-07-20 | 2017-10-17 | Sony Corporation | Ultrasonic speaker assembly using variable carrier frequency to establish third dimension sound locating |
US9826330B2 (en) | 2016-03-14 | 2017-11-21 | Sony Corporation | Gimbal-mounted linear ultrasonic speaker assembly |
US9826332B2 (en) | 2016-02-09 | 2017-11-21 | Sony Corporation | Centralized wireless speaker system |
US9854362B1 (en) | 2016-10-20 | 2017-12-26 | Sony Corporation | Networked speaker system with LED-based wireless communication and object detection |
US9866986B2 (en) | 2014-01-24 | 2018-01-09 | Sony Corporation | Audio speaker system with virtual music performance |
US9924286B1 (en) | 2016-10-20 | 2018-03-20 | Sony Corporation | Networked speaker system with LED-based wireless communication and personal identifier |
US10070291B2 (en) | 2014-05-19 | 2018-09-04 | Sony Corporation | Proximity detection of candidate companion display device in same room as primary display using low energy bluetooth |
US10075791B2 (en) | 2016-10-20 | 2018-09-11 | Sony Corporation | Networked speaker system with LED-based wireless communication and room mapping |
US10623859B1 (en) | 2018-10-23 | 2020-04-14 | Sony Corporation | Networked speaker system with combined power over Ethernet and audio delivery |
US10785428B2 (en) * | 2015-10-16 | 2020-09-22 | Capsovision Inc. | Single image sensor for capturing mixed structured-light images and regular images |
US10963159B2 (en) * | 2016-01-26 | 2021-03-30 | Lenovo (Singapore) Pte. Ltd. | Virtual interface offset |
Families Citing this family (316)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7831358B2 (en) * | 1992-05-05 | 2010-11-09 | Automotive Technologies International, Inc. | Arrangement and method for obtaining information using phase difference of modulated illumination |
JP4052498B2 (en) | 1999-10-29 | 2008-02-27 | 株式会社リコー | Coordinate input apparatus and method |
JP2001184161A (en) | 1999-12-27 | 2001-07-06 | Ricoh Co Ltd | Method and device for inputting information, writing input device, method for managing written data, method for controlling display, portable electronic writing device, and recording medium |
US6701005B1 (en) | 2000-04-29 | 2004-03-02 | Cognex Corporation | Method and apparatus for three-dimensional object segmentation |
ATE521929T1 (en) * | 2000-05-05 | 2011-09-15 | Pace Plc | PORTABLE COMMUNICATION DEVICE |
JP5042437B2 (en) * | 2000-07-05 | 2012-10-03 | スマート テクノロジーズ ユーエルシー | Camera-based touch system |
US6803906B1 (en) | 2000-07-05 | 2004-10-12 | Smart Technologies, Inc. | Passive touch system and method of detecting user input |
FI113094B (en) * | 2000-12-15 | 2004-02-27 | Nokia Corp | An improved method and arrangement for providing a function in an electronic device and an electronic device |
EP1402243B1 (en) | 2001-05-17 | 2006-08-16 | Xenogen Corporation | Method and apparatus for determining target depth, brightness and size within a body region |
US7298415B2 (en) * | 2001-07-13 | 2007-11-20 | Xenogen Corporation | Structured light imaging apparatus |
DE10294159D2 (en) * | 2001-09-07 | 2004-07-22 | Me In Gmbh | operating device |
JP3920067B2 (en) * | 2001-10-09 | 2007-05-30 | 株式会社イーアイティー | Coordinate input device |
EP1315120A1 (en) * | 2001-11-26 | 2003-05-28 | Siemens Aktiengesellschaft | Pen input system |
US20030165048A1 (en) * | 2001-12-07 | 2003-09-04 | Cyrus Bamji | Enhanced light-generated interface for use with electronic devices |
US7071924B2 (en) * | 2002-01-10 | 2006-07-04 | International Business Machines Corporation | User input method and apparatus for handheld computers |
WO2003071410A2 (en) * | 2002-02-15 | 2003-08-28 | Canesta, Inc. | Gesture recognition system using depth perceptive sensors |
US10242255B2 (en) | 2002-02-15 | 2019-03-26 | Microsoft Technology Licensing, Llc | Gesture recognition system using depth perceptive sensors |
US6894771B1 (en) | 2002-05-15 | 2005-05-17 | Hunter Engineering Company | Wheel alignment apparatus and method utilizing three-dimensional imaging |
JP2004005272A (en) * | 2002-05-31 | 2004-01-08 | Cad Center:Kk | Virtual space movement control device, method and program |
US7307661B2 (en) * | 2002-06-26 | 2007-12-11 | Vbk Inc. | Multifunctional integrated image sensor and application to virtual interface technology |
US20040001144A1 (en) | 2002-06-27 | 2004-01-01 | Mccharles Randy | Synchronization of camera images in camera-based touch system to enhance position determination of fast moving objects |
US7616985B2 (en) * | 2002-07-16 | 2009-11-10 | Xenogen Corporation | Method and apparatus for 3-D imaging of internal light sources |
US7599731B2 (en) * | 2002-07-16 | 2009-10-06 | Xenogen Corporation | Fluorescent light tomography |
US6616358B1 (en) * | 2002-07-25 | 2003-09-09 | Inventec Appliances Corporation | Keyboard structure alteration method |
JP4006290B2 (en) * | 2002-07-30 | 2007-11-14 | キヤノン株式会社 | Coordinate input device, control method of coordinate input device, and program |
US7151530B2 (en) | 2002-08-20 | 2006-12-19 | Canesta, Inc. | System and method for determining an input selected by a user through a virtual interface |
US7920718B2 (en) * | 2002-09-05 | 2011-04-05 | Cognex Corporation | Multi-zone passageway monitoring system and method |
US7526120B2 (en) * | 2002-09-11 | 2009-04-28 | Canesta, Inc. | System and method for providing intelligent airbag deployment |
US20040066500A1 (en) * | 2002-10-02 | 2004-04-08 | Gokturk Salih Burak | Occupancy detection and measurement system and method |
US7671843B2 (en) * | 2002-11-12 | 2010-03-02 | Steve Montellese | Virtual holographic input method and device |
US6954197B2 (en) * | 2002-11-15 | 2005-10-11 | Smart Technologies Inc. | Size/scale and orientation determination of a pointer in a camera-based touch system |
DE10260305A1 (en) * | 2002-12-20 | 2004-07-15 | Siemens Ag | HMI setup with an optical touch screen |
US20040140988A1 (en) * | 2003-01-21 | 2004-07-22 | David Kim | Computing system and device having interactive projected display |
WO2004070485A1 (en) * | 2003-02-03 | 2004-08-19 | Siemens Aktiengesellschaft | Projection of synthetic information |
US7146036B2 (en) * | 2003-02-03 | 2006-12-05 | Hewlett-Packard Development Company, L.P. | Multiframe correspondence estimation |
US8508508B2 (en) | 2003-02-14 | 2013-08-13 | Next Holdings Limited | Touch screen signal processing with single-point calibration |
US8456447B2 (en) | 2003-02-14 | 2013-06-04 | Next Holdings Limited | Touch screen signal processing |
US7629967B2 (en) | 2003-02-14 | 2009-12-08 | Next Holdings Limited | Touch screen signal processing |
US7176905B2 (en) * | 2003-02-19 | 2007-02-13 | Agilent Technologies, Inc. | Electronic device having an image-based data input system |
US7532206B2 (en) * | 2003-03-11 | 2009-05-12 | Smart Technologies Ulc | System and method for differentiating between pointers used to contact touch surface |
US7256772B2 (en) | 2003-04-08 | 2007-08-14 | Smart Technologies, Inc. | Auto-aligning touch system and method |
US7176438B2 (en) * | 2003-04-11 | 2007-02-13 | Canesta, Inc. | Method and system to differentially enhance sensor dynamic range using enhanced common mode reset |
US6919549B2 (en) * | 2003-04-11 | 2005-07-19 | Canesta, Inc. | Method and system to differentially enhance sensor dynamic range |
US20040222987A1 (en) * | 2003-05-08 | 2004-11-11 | Chang Nelson Liang An | Multiframe image processing |
FR2857132A1 (en) * | 2003-07-03 | 2005-01-07 | Thomson Licensing Sa | DEVICE, SYSTEM AND METHOD FOR CODING DIGITAL IMAGES |
DE10336276A1 (en) * | 2003-08-07 | 2005-03-10 | Siemens Ag | Operating unit, in particular for medical devices |
US7411575B2 (en) | 2003-09-16 | 2008-08-12 | Smart Technologies Ulc | Gesture recognition method and touch system incorporating the same |
US7439074B2 (en) * | 2003-09-30 | 2008-10-21 | Hoa Duc Nguyen | Method of analysis of alcohol by mass spectrometry |
US20050078092A1 (en) * | 2003-10-08 | 2005-04-14 | Clapper Edward O. | Whiteboard desk projection display |
US7274356B2 (en) | 2003-10-09 | 2007-09-25 | Smart Technologies Inc. | Apparatus for determining the location of a pointer within a region of interest |
EP1524588A1 (en) * | 2003-10-15 | 2005-04-20 | Sony Ericsson Mobile Communications AB | User input device for a portable electronic device |
US20080297614A1 (en) * | 2003-10-31 | 2008-12-04 | Klony Lieberman | Optical Apparatus for Virtual Interface Projection and Sensing |
US7623674B2 (en) * | 2003-11-05 | 2009-11-24 | Cognex Technology And Investment Corporation | Method and system for enhanced portal security through stereoscopy |
US8326084B1 (en) | 2003-11-05 | 2012-12-04 | Cognex Technology And Investment Corporation | System and method of auto-exposure control for image acquisition hardware using three dimensional information |
US7317955B2 (en) * | 2003-12-12 | 2008-01-08 | Conmed Corporation | Virtual operating room integration |
US7317954B2 (en) * | 2003-12-12 | 2008-01-08 | Conmed Corporation | Virtual control of electrosurgical generator functions |
US7355593B2 (en) * | 2004-01-02 | 2008-04-08 | Smart Technologies, Inc. | Pointer tracking across multiple overlapping coordinate input sub-regions defining a generally contiguous input region |
US7232986B2 (en) * | 2004-02-17 | 2007-06-19 | Smart Technologies Inc. | Apparatus for detecting a pointer within a region of interest |
JP2005230476A (en) * | 2004-02-23 | 2005-09-02 | Aruze Corp | Game machine |
US7379562B2 (en) * | 2004-03-31 | 2008-05-27 | Microsoft Corporation | Determining connectedness and offset of 3D objects relative to an interactive surface |
US20050227217A1 (en) * | 2004-03-31 | 2005-10-13 | Wilson Andrew D | Template matching on interactive surface |
GB2424269A (en) * | 2004-04-01 | 2006-09-20 | Robert Michael Lipman | Control apparatus |
US7460110B2 (en) * | 2004-04-29 | 2008-12-02 | Smart Technologies Ulc | Dual mode touch system |
US7394459B2 (en) * | 2004-04-29 | 2008-07-01 | Microsoft Corporation | Interaction between objects and a virtual environment display |
US7492357B2 (en) | 2004-05-05 | 2009-02-17 | Smart Technologies Ulc | Apparatus and method for detecting a pointer relative to a touch surface |
US7538759B2 (en) | 2004-05-07 | 2009-05-26 | Next Holdings Limited | Touch panel display system with illumination and detection provided from a single edge |
US8120596B2 (en) | 2004-05-21 | 2012-02-21 | Smart Technologies Ulc | Tiled touch system |
US7787706B2 (en) * | 2004-06-14 | 2010-08-31 | Microsoft Corporation | Method for controlling an intensity of an infrared source used to detect objects adjacent to an interactive display surface |
US7593593B2 (en) * | 2004-06-16 | 2009-09-22 | Microsoft Corporation | Method and system for reducing effects of undesired signals in an infrared imaging system |
KR100636483B1 (en) | 2004-06-25 | 2006-10-18 | 삼성에스디아이 주식회사 | Transistor and fabrication method thereof and light emitting display |
US7519223B2 (en) * | 2004-06-28 | 2009-04-14 | Microsoft Corporation | Recognizing gestures and using gestures for interacting with software applications |
JP4507796B2 (en) * | 2004-09-28 | 2010-07-21 | 株式会社ユニバーサルエンターテインメント | Game machine, game control method, and program thereof |
US7925996B2 (en) * | 2004-11-18 | 2011-04-12 | Microsoft Corporation | Method and system for providing multiple input connecting user interface |
US20060118634A1 (en) * | 2004-12-07 | 2006-06-08 | Blythe Michael M | Object with symbology |
US20060152482A1 (en) * | 2005-01-07 | 2006-07-13 | Chauncy Godwin | Virtual interface and control device |
TW200627244A (en) * | 2005-01-17 | 2006-08-01 | Era Optoelectronics Inc | Data input device |
US8009871B2 (en) | 2005-02-08 | 2011-08-30 | Microsoft Corporation | Method and system to segment depth images and to detect shapes in three-dimensionally acquired data |
US20100302165A1 (en) * | 2009-05-26 | 2010-12-02 | Zienon, Llc | Enabling data entry based on differentiated input objects |
GB2440683B (en) * | 2005-02-23 | 2010-12-08 | Zienon L L C | Method and apparatus for data entry input |
US9760214B2 (en) * | 2005-02-23 | 2017-09-12 | Zienon, Llc | Method and apparatus for data entry input |
WO2006090386A2 (en) * | 2005-02-24 | 2006-08-31 | Vkb Inc. | A virtual keyboard device |
JP4612853B2 (en) * | 2005-03-29 | 2011-01-12 | キヤノン株式会社 | Pointed position recognition device and information input device having the same |
US7571855B2 (en) * | 2005-03-29 | 2009-08-11 | Hewlett-Packard Development Company, L.P. | Display with symbology |
US20060224151A1 (en) * | 2005-03-31 | 2006-10-05 | Sherwood Services Ag | System and method for projecting a virtual user interface for controlling electrosurgical generator |
US7499027B2 (en) * | 2005-04-29 | 2009-03-03 | Microsoft Corporation | Using a light pointer for input on an interactive display surface |
US20060244720A1 (en) * | 2005-04-29 | 2006-11-02 | Tracy James L | Collapsible projection assembly |
EP1880263A1 (en) * | 2005-05-04 | 2008-01-23 | Koninklijke Philips Electronics N.V. | System and method for projecting control graphics |
US8044996B2 (en) * | 2005-05-11 | 2011-10-25 | Xenogen Corporation | Surface construction using combined photographic and structured light information |
US20060267927A1 (en) * | 2005-05-27 | 2006-11-30 | Crenshaw James E | User interface controller method and apparatus for a handheld electronic device |
KR100714722B1 (en) * | 2005-06-17 | 2007-05-04 | 삼성전자주식회사 | Apparatus and method for implementing pointing user interface using signal of light emitter |
US7525538B2 (en) * | 2005-06-28 | 2009-04-28 | Microsoft Corporation | Using same optics to image, illuminate, and project |
EP2259169B1 (en) * | 2005-07-04 | 2018-10-24 | Electrolux Home Products Corporation N.V. | Houshold appliance with virtual data interface |
US20070019099A1 (en) * | 2005-07-25 | 2007-01-25 | Vkb Inc. | Optical apparatus for virtual interface projection and sensing |
US20070019103A1 (en) * | 2005-07-25 | 2007-01-25 | Vkb Inc. | Optical apparatus for virtual interface projection and sensing |
US20070035521A1 (en) * | 2005-08-10 | 2007-02-15 | Ping-Chang Jui | Open virtual input and display device and method thereof |
US20070046924A1 (en) * | 2005-08-30 | 2007-03-01 | Chang Nelson L A | Projecting light patterns encoding correspondence information |
US7911444B2 (en) * | 2005-08-31 | 2011-03-22 | Microsoft Corporation | Input method for surface of interactive display |
US8111904B2 (en) | 2005-10-07 | 2012-02-07 | Cognex Technology And Investment Corp. | Methods and apparatus for practical 3D vision system |
US8018579B1 (en) | 2005-10-21 | 2011-09-13 | Apple Inc. | Three-dimensional imaging and display system |
US9046962B2 (en) * | 2005-10-31 | 2015-06-02 | Extreme Reality Ltd. | Methods, systems, apparatuses, circuits and associated computer executable code for detecting motion, position and/or orientation of objects within a defined spatial region |
US8279168B2 (en) * | 2005-12-09 | 2012-10-02 | Edge 3 Technologies Llc | Three-dimensional virtual-touch human-machine interface system and method therefor |
US8060840B2 (en) | 2005-12-29 | 2011-11-15 | Microsoft Corporation | Orientation free user interface |
US20070165007A1 (en) * | 2006-01-13 | 2007-07-19 | Gerald Morrison | Interactive input system |
US7515143B2 (en) * | 2006-02-28 | 2009-04-07 | Microsoft Corporation | Uniform illumination of interactive display panel |
US20070205994A1 (en) * | 2006-03-02 | 2007-09-06 | Taco Van Ieperen | Touch system and method for interacting with the same |
FR2898315B1 (en) * | 2006-03-08 | 2009-02-20 | Peugeot Citroen Automobiles Sa | CONTROL INTERFACE OF A FIXED EQUIPMENT OR NOMAD OF VEHICLE, WITH USE OF A VIRTUAL KEYBOARD |
US9152241B2 (en) * | 2006-04-28 | 2015-10-06 | Zienon, Llc | Method and apparatus for efficient data input |
JP4635957B2 (en) * | 2006-05-12 | 2011-02-23 | 株式会社デンソー | In-vehicle operation system |
US8086971B2 (en) * | 2006-06-28 | 2011-12-27 | Nokia Corporation | Apparatus, methods and computer program products providing finger-based and hand-based gesture commands for portable electronic device applications |
JP4627052B2 (en) * | 2006-07-06 | 2011-02-09 | 株式会社ソニー・コンピュータエンタテインメント | Audio output method and apparatus linked to image |
US10335038B2 (en) | 2006-08-24 | 2019-07-02 | Xenogen Corporation | Spectral unmixing for in-vivo imaging |
US10775308B2 (en) * | 2006-08-24 | 2020-09-15 | Xenogen Corporation | Apparatus and methods for determining optical tissue properties |
TW200813785A (en) * | 2006-09-08 | 2008-03-16 | Era Optoelectronics Inc | Image position interpretation apparatus |
DE102006048726A1 (en) * | 2006-10-16 | 2008-04-17 | Robert Bosch Gmbh | Method for measuring the wheel or axle geometry of a vehicle |
US20080110981A1 (en) * | 2006-11-13 | 2008-05-15 | Gilbarco Inc. | Projected user input device for a fuel dispenser and related applications |
US7831923B2 (en) * | 2006-11-28 | 2010-11-09 | International Business Machines Corporation | Providing visual keyboard guides according to a programmable set of keys |
US9442607B2 (en) | 2006-12-04 | 2016-09-13 | Smart Technologies Inc. | Interactive input system and method |
US8212857B2 (en) * | 2007-01-26 | 2012-07-03 | Microsoft Corporation | Alternating light sources to reduce specular reflection |
EP2135155B1 (en) | 2007-04-11 | 2013-09-18 | Next Holdings, Inc. | Touch screen system with hover and click input methods |
US8126260B2 (en) * | 2007-05-29 | 2012-02-28 | Cognex Corporation | System and method for locating a three-dimensional object using machine vision |
WO2008148609A1 (en) * | 2007-06-08 | 2008-12-11 | International Business Machines Corporation | Language independent login method and system |
US8094137B2 (en) | 2007-07-23 | 2012-01-10 | Smart Technologies Ulc | System and method of detecting contact on a display |
US8432377B2 (en) | 2007-08-30 | 2013-04-30 | Next Holdings Limited | Optical touchscreen with improved illumination |
US8384693B2 (en) | 2007-08-30 | 2013-02-26 | Next Holdings Limited | Low profile touch panel systems |
US11441919B2 (en) * | 2007-09-26 | 2022-09-13 | Apple Inc. | Intelligent restriction of device operations |
US8027029B2 (en) | 2007-11-07 | 2011-09-27 | Magna Electronics Inc. | Object detection and tracking system |
US9171454B2 (en) * | 2007-11-14 | 2015-10-27 | Microsoft Technology Licensing, Llc | Magic wand |
US8405636B2 (en) | 2008-01-07 | 2013-03-26 | Next Holdings Limited | Optical position sensing system and optical position sensor assembly |
US20090189858A1 (en) * | 2008-01-30 | 2009-07-30 | Jeff Lev | Gesture Identification Using A Structured Light Pattern |
CH707346B1 (en) * | 2008-04-04 | 2014-06-30 | Heig Vd Haute Ecole D Ingénierie Et De Gestion Du Canton De Vaud | Method and device for performing a multi-touch surface from one flat surface and for detecting the position of an object on such a surface. |
US20090278794A1 (en) * | 2008-05-09 | 2009-11-12 | Smart Technologies Ulc | Interactive Input System With Controlled Lighting |
US20090277697A1 (en) * | 2008-05-09 | 2009-11-12 | Smart Technologies Ulc | Interactive Input System And Pen Tool Therefor |
US8902193B2 (en) * | 2008-05-09 | 2014-12-02 | Smart Technologies Ulc | Interactive input system and bezel therefor |
US8952894B2 (en) * | 2008-05-12 | 2015-02-10 | Microsoft Technology Licensing, Llc | Computer vision-based multi-touch sensing using infrared lasers |
US8602564B2 (en) * | 2008-06-17 | 2013-12-10 | The Invention Science Fund I, Llc | Methods and systems for projecting in response to position |
US8608321B2 (en) * | 2008-06-17 | 2013-12-17 | The Invention Science Fund I, Llc | Systems and methods for projecting in response to conformation |
US8403501B2 (en) | 2008-06-17 | 2013-03-26 | The Invention Science Fund, I, LLC | Motion responsive devices and systems |
US20090310103A1 (en) * | 2008-06-17 | 2009-12-17 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Methods and systems for receiving information associated with the coordinated use of two or more user responsive projectors |
US20110176119A1 (en) * | 2008-06-17 | 2011-07-21 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Methods and systems for projecting in response to conformation |
US8723787B2 (en) * | 2008-06-17 | 2014-05-13 | The Invention Science Fund I, Llc | Methods and systems related to an image capture projection surface |
US8540381B2 (en) | 2008-06-17 | 2013-09-24 | The Invention Science Fund I, Llc | Systems and methods for receiving information associated with projecting |
US20100066689A1 (en) * | 2008-06-17 | 2010-03-18 | Jung Edward K Y | Devices related to projection input surfaces |
US20090312854A1 (en) * | 2008-06-17 | 2009-12-17 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Methods and systems for transmitting information associated with the coordinated use of two or more user responsive projectors |
US8384005B2 (en) * | 2008-06-17 | 2013-02-26 | The Invention Science Fund I, Llc | Systems and methods for selectively projecting information in response to at least one specified motion associated with pressure applied to at least one projection surface |
US20090313151A1 (en) * | 2008-06-17 | 2009-12-17 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Methods associated with projection system billing |
US8641203B2 (en) * | 2008-06-17 | 2014-02-04 | The Invention Science Fund I, Llc | Methods and systems for receiving and transmitting signals between server and projector apparatuses |
US8267526B2 (en) * | 2008-06-17 | 2012-09-18 | The Invention Science Fund I, Llc | Methods associated with receiving and transmitting information related to projection |
US20090309828A1 (en) * | 2008-06-17 | 2009-12-17 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Methods and systems for transmitting instructions associated with user parameter responsive projection |
US20100066983A1 (en) * | 2008-06-17 | 2010-03-18 | Jun Edward K Y | Methods and systems related to a projection surface |
US8733952B2 (en) * | 2008-06-17 | 2014-05-27 | The Invention Science Fund I, Llc | Methods and systems for coordinated use of two or more user responsive projectors |
US8955984B2 (en) * | 2008-06-17 | 2015-02-17 | The Invention Science Fund I, Llc | Projection associated methods and systems |
US20090309826A1 (en) * | 2008-06-17 | 2009-12-17 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Systems and devices |
US8308304B2 (en) * | 2008-06-17 | 2012-11-13 | The Invention Science Fund I, Llc | Systems associated with receiving and transmitting information related to projection |
US8936367B2 (en) * | 2008-06-17 | 2015-01-20 | The Invention Science Fund I, Llc | Systems and methods associated with projecting in response to conformation |
US8944608B2 (en) * | 2008-06-17 | 2015-02-03 | The Invention Science Fund I, Llc | Systems and methods associated with projecting in response to conformation |
US20090310039A1 (en) * | 2008-06-17 | 2009-12-17 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Methods and systems for user parameter responsive projection |
US20090313150A1 (en) * | 2008-06-17 | 2009-12-17 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Methods associated with projection billing |
US20090310098A1 (en) * | 2008-06-17 | 2009-12-17 | Searete Llc, A Limited Liability Corporation Of The State Of Delaware | Methods and systems for projecting in response to conformation |
US8847739B2 (en) * | 2008-08-04 | 2014-09-30 | Microsoft Corporation | Fusing RFID and vision for surface object tracking |
TW201009671A (en) * | 2008-08-21 | 2010-03-01 | Tpk Touch Solutions Inc | Optical semiconductor laser touch-control device |
US8228345B2 (en) | 2008-09-24 | 2012-07-24 | International Business Machines Corporation | Hand image feedback method and system |
US20100079385A1 (en) * | 2008-09-29 | 2010-04-01 | Smart Technologies Ulc | Method for calibrating an interactive input system and interactive input system executing the calibration method |
US8446389B2 (en) * | 2008-10-15 | 2013-05-21 | Lenovo (Singapore) Pte. Ltd | Techniques for creating a virtual touchscreen |
US8525776B2 (en) * | 2008-10-27 | 2013-09-03 | Lenovo (Singapore) Pte. Ltd | Techniques for controlling operation of a device with a virtual touchscreen |
US8339378B2 (en) * | 2008-11-05 | 2012-12-25 | Smart Technologies Ulc | Interactive input system with multi-angle reflector |
US8797274B2 (en) * | 2008-11-30 | 2014-08-05 | Lenovo (Singapore) Pte. Ltd. | Combined tap sequence and camera based user interface |
GB2466023A (en) * | 2008-12-08 | 2010-06-09 | Light Blue Optics Ltd | Holographic Image Projection Systems |
US20100155575A1 (en) * | 2008-12-19 | 2010-06-24 | Sony Ericsson Mobile Communications Ab | Arrangement and method in an electronic device for detecting a user input to a key |
GB2466497B (en) * | 2008-12-24 | 2011-09-14 | Light Blue Optics Ltd | Touch sensitive holographic displays |
TWI470478B (en) * | 2008-12-26 | 2015-01-21 | Inventec Appliances Corp | Virtual keyboard of an electronic device and a data inputting method therefor |
TW201027393A (en) * | 2009-01-06 | 2010-07-16 | Pixart Imaging Inc | Electronic apparatus with virtual data input device |
US20100214135A1 (en) * | 2009-02-26 | 2010-08-26 | Microsoft Corporation | Dynamic rear-projected user interface |
US20100229090A1 (en) * | 2009-03-05 | 2010-09-09 | Next Holdings Limited | Systems and Methods for Interacting With Touch Displays Using Single-Touch and Multi-Touch Gestures |
JP5201015B2 (en) * | 2009-03-09 | 2013-06-05 | ブラザー工業株式会社 | Head mounted display |
US9971458B2 (en) | 2009-03-25 | 2018-05-15 | Mep Tech, Inc. | Projection of interactive environment |
US20110256927A1 (en) | 2009-03-25 | 2011-10-20 | MEP Games Inc. | Projection of interactive game environment |
US20110165923A1 (en) | 2010-01-04 | 2011-07-07 | Davis Mark L | Electronic circle game system |
US8692768B2 (en) | 2009-07-10 | 2014-04-08 | Smart Technologies Ulc | Interactive input system |
CN103558931A (en) * | 2009-07-22 | 2014-02-05 | 罗技欧洲公司 | System and method for remote, virtual on screen input |
EP2292751A1 (en) * | 2009-08-20 | 2011-03-09 | Roche Diagnostics GmbH | Stabilisation of enzymes with stable coenzymes |
US8681124B2 (en) | 2009-09-22 | 2014-03-25 | Microsoft Corporation | Method and system for recognition of user gesture interaction with passive surface video displays |
KR101680343B1 (en) * | 2009-10-06 | 2016-12-12 | 엘지전자 주식회사 | Mobile terminal and information processing method thereof |
US20110095977A1 (en) * | 2009-10-23 | 2011-04-28 | Smart Technologies Ulc | Interactive input system incorporating multi-angle reflecting structure |
KR101851264B1 (en) | 2010-01-06 | 2018-04-24 | 주식회사 셀루온 | System and Method for a Virtual Multi-touch Mouse and Stylus Apparatus |
US20110267262A1 (en) * | 2010-04-30 | 2011-11-03 | Jacques Gollier | Laser Scanning Projector Device for Interactive Screen Applications |
JP2011253292A (en) * | 2010-06-01 | 2011-12-15 | Sony Corp | Information processing system, method and program |
JP5561092B2 (en) * | 2010-10-15 | 2014-07-30 | ソニー株式会社 | Input device, input control system, information processing method, and program |
TWI423113B (en) * | 2010-11-30 | 2014-01-11 | Compal Communication Inc | Interactive user interface and the portable communication device using same |
WO2012124844A1 (en) * | 2011-03-16 | 2012-09-20 | Lg Electronics Inc. | Method and electronic device for gesture-based key input |
US9857868B2 (en) | 2011-03-19 | 2018-01-02 | The Board Of Trustees Of The Leland Stanford Junior University | Method and system for ergonomic touch-free interface |
US8840466B2 (en) | 2011-04-25 | 2014-09-23 | Aquifi, Inc. | Method and system to create three-dimensional mapping in a two-dimensional game |
GB201110157D0 (en) | 2011-06-16 | 2011-07-27 | Light Blue Optics Ltd | Touch sensitive display devices |
GB201110156D0 (en) | 2011-06-16 | 2011-07-27 | Light Blue Optics Ltd | Touch-sensitive display devices |
GB201110159D0 (en) | 2011-06-16 | 2011-07-27 | Light Blue Optics Ltd | Touch sensitive display devices |
US8971572B1 (en) | 2011-08-12 | 2015-03-03 | The Research Foundation For The State University Of New York | Hand pointing estimation for human computer interaction |
TWI454996B (en) | 2011-08-18 | 2014-10-01 | Au Optronics Corp | Display and method of determining a position of an object applied to a three-dimensional interactive display |
CN103019391A (en) * | 2011-09-22 | 2013-04-03 | 纬创资通股份有限公司 | Input device and method using a captured keyboard image as the basis for command input |
GB201117542D0 (en) | 2011-10-11 | 2011-11-23 | Light Blue Optics Ltd | Touch-sensitive display devices |
CN102495672A (en) * | 2011-10-20 | 2012-06-13 | 广州市迪拓信息科技有限公司 | Position judging method in touch control |
US9501152B2 (en) | 2013-01-15 | 2016-11-22 | Leap Motion, Inc. | Free-space user interface and control using virtual constructs |
US11493998B2 (en) | 2012-01-17 | 2022-11-08 | Ultrahaptics IP Two Limited | Systems and methods for machine control |
GB2513498A (en) * | 2012-01-20 | 2014-10-29 | Light Blue Optics Ltd | Touch sensitive image display devices |
US20140362052A1 (en) | 2012-01-20 | 2014-12-11 | Light Blue Optics Ltd | Touch Sensitive Image Display Devices |
US8854433B1 (en) | 2012-02-03 | 2014-10-07 | Aquifi, Inc. | Method and system enabling natural user interface gestures with an electronic system |
EP2634672A1 (en) * | 2012-02-28 | 2013-09-04 | Alcatel Lucent | System and method for inputting symbols |
GB201205303D0 (en) | 2012-03-26 | 2012-05-09 | Light Blue Optics Ltd | Touch sensing systems |
US9092090B2 (en) | 2012-05-17 | 2015-07-28 | Hong Kong Applied Science And Technology Research Institute Co., Ltd. | Structured light for touch or gesture detection |
US9507462B2 (en) * | 2012-06-13 | 2016-11-29 | Hong Kong Applied Science and Technology Research Institute Company Limited | Multi-dimensional image detection apparatus |
US20130335576A1 (en) * | 2012-06-19 | 2013-12-19 | Martin GOTSCHLICH | Dynamic adaptation of imaging parameters |
US9111135B2 (en) | 2012-06-25 | 2015-08-18 | Aquifi, Inc. | Systems and methods for tracking human hands using parts based template matching using corresponding pixels in bounded regions of a sequence of frames that are a specified distance interval from a reference camera |
US9098739B2 (en) | 2012-06-25 | 2015-08-04 | Aquifi, Inc. | Systems and methods for tracking human hands using parts based template matching |
DE102012013503B4 (en) * | 2012-07-06 | 2014-10-09 | Audi Ag | Method and control system for operating a motor vehicle |
US9317109B2 (en) * | 2012-07-12 | 2016-04-19 | Mep Tech, Inc. | Interactive image projection accessory |
JP5875953B2 (en) | 2012-07-17 | 2016-03-02 | 株式会社東芝 | Optical device |
US9305229B2 (en) | 2012-07-30 | 2016-04-05 | Bruno Delean | Method and system for vision based interfacing with a computer |
US20140055368A1 (en) * | 2012-08-22 | 2014-02-27 | Ming-Hsein Yu | Method and Apparatus by Using Touch Screen to Implement Functions of Touch Screen and Keypad |
US8497841B1 (en) | 2012-08-23 | 2013-07-30 | Celluon, Inc. | System and method for a virtual keyboard |
US8836768B1 (en) | 2012-09-04 | 2014-09-16 | Aquifi, Inc. | Method and system enabling natural user interface gestures with user wearable glasses |
GB2506849A (en) * | 2012-09-26 | 2014-04-16 | Light Blue Optics Ltd | A touch sensing system using a pen |
CN103838303A (en) * | 2012-11-27 | 2014-06-04 | 英业达科技有限公司 | Tablet computer combination set, accessory thereof, and tablet computer input method |
JP2014109876A (en) * | 2012-11-30 | 2014-06-12 | Toshiba Corp | Information processor, information processing method and program |
US9459697B2 (en) | 2013-01-15 | 2016-10-04 | Leap Motion, Inc. | Dynamic, free-space user interactions for machine control |
US9632658B2 (en) | 2013-01-15 | 2017-04-25 | Leap Motion, Inc. | Dynamic user interactions for display control and scaling responsiveness of display objects |
US9092665B2 (en) | 2013-01-30 | 2015-07-28 | Aquifi, Inc | Systems and methods for initializing motion tracking of human hands |
US9129155B2 (en) | 2013-01-30 | 2015-09-08 | Aquifi, Inc. | Systems and methods for initializing motion tracking of human hands using template matching within bounded regions determined using a depth map |
JP6111706B2 (en) | 2013-02-01 | 2017-04-12 | セイコーエプソン株式会社 | Position detection apparatus, adjustment method, and adjustment program |
US9128671B2 (en) | 2013-03-07 | 2015-09-08 | Hewlett-Packard Development Company, L.P. | Docking device |
US9621771B2 (en) * | 2013-03-14 | 2017-04-11 | Pelco, Inc. | System and method for imaging utility panel elements |
US9524059B2 (en) * | 2013-03-15 | 2016-12-20 | Texas Instruments Incorporated | Interaction detection using structured light images |
US9298266B2 (en) | 2013-04-02 | 2016-03-29 | Aquifi, Inc. | Systems and methods for implementing three-dimensional (3D) gesture based graphical user interfaces (GUI) that incorporate gesture reactive interface objects |
JP2014203323A (en) | 2013-04-08 | 2014-10-27 | 船井電機株式会社 | Space input device |
US9916009B2 (en) | 2013-04-26 | 2018-03-13 | Leap Motion, Inc. | Non-tactile interface systems and methods |
US9747696B2 (en) | 2013-05-17 | 2017-08-29 | Leap Motion, Inc. | Systems and methods for providing normalized parameters of motions of objects in three-dimensional space |
KR101411569B1 (en) * | 2013-06-05 | 2014-06-27 | 고려대학교 산학협력단 | Device and method for information processing using virtual keyboard |
TW201500968A (en) * | 2013-06-21 | 2015-01-01 | Utechzone Co Ltd | Three-dimensional interactive system and interactive sensing method thereof |
WO2015009276A1 (en) * | 2013-07-15 | 2015-01-22 | Intel Corporation | Hands-free assistance |
JP6160329B2 (en) * | 2013-07-24 | 2017-07-12 | 船井電機株式会社 | projector |
JP2015026219A (en) * | 2013-07-25 | 2015-02-05 | 船井電機株式会社 | Electronic device |
US9798388B1 (en) | 2013-07-31 | 2017-10-24 | Aquifi, Inc. | Vibrotactile system to augment 3D input systems |
US10281987B1 (en) * | 2013-08-09 | 2019-05-07 | Leap Motion, Inc. | Systems and methods of free-space gestural interaction |
US9778546B2 (en) | 2013-08-15 | 2017-10-03 | Mep Tech, Inc. | Projector for projecting visible and non-visible images |
CN103488354B (en) * | 2013-10-10 | 2016-03-16 | 南京芒冠科技股份有限公司 | Mutually controlled synchronous electronic whiteboard system |
US20140132521A1 (en) * | 2013-12-11 | 2014-05-15 | Evan Shellshear | Keyboard input system via hand motion detection and recognition of the printed locations of the keys on a flat surface using a video camera and range imaging device. |
US9317150B2 (en) * | 2013-12-28 | 2016-04-19 | Intel Corporation | Virtual and configurable touchscreens |
US9507417B2 (en) | 2014-01-07 | 2016-11-29 | Aquifi, Inc. | Systems and methods for implementing head tracking based graphical user interfaces (GUI) that incorporate gesture reactive interface objects |
JP6387644B2 (en) * | 2014-01-21 | 2018-09-12 | セイコーエプソン株式会社 | Position detection device, position detection system, and position detection method |
US9619105B1 (en) | 2014-01-30 | 2017-04-11 | Aquifi, Inc. | Systems and methods for gesture based interaction with viewpoint dependent user interfaces |
DE102014207902A1 (en) * | 2014-04-28 | 2015-10-29 | Robert Bosch Gmbh | Module and method for operating a module |
DE102014207920A1 (en) * | 2014-04-28 | 2015-10-29 | Robert Bosch Gmbh | Electrical device and method for operating an electrical device |
WO2016010524A1 (en) * | 2014-07-15 | 2016-01-21 | Hewlett-Packard Development Company, L.P. | Virtual keyboard |
CN105320258B (en) * | 2014-08-05 | 2019-01-01 | 深圳Tcl新技术有限公司 | Virtual keyboard system and its input method |
US10199025B2 (en) * | 2014-08-23 | 2019-02-05 | Moon Key Lee | Image capturing device and electronic keyboard using the image capturing device |
WO2016032501A1 (en) | 2014-08-29 | 2016-03-03 | Hewlett-Packard Development Company, L.P. | Multi-device collaboration |
JP2016114963A (en) * | 2014-12-11 | 2016-06-23 | 株式会社リコー | Input operation detection device, projector device, electronic blackboard device, digital signage device, and projector system |
US10101817B2 (en) * | 2015-03-03 | 2018-10-16 | Intel Corporation | Display interaction detection |
JP6459705B2 (en) * | 2015-03-27 | 2019-01-30 | セイコーエプソン株式会社 | Interactive projector, interactive projection system, and interactive projector control method |
JP6477131B2 (en) * | 2015-03-27 | 2019-03-06 | セイコーエプソン株式会社 | Interactive projector, interactive projection system, and control method of interactive projector |
JP6477130B2 (en) * | 2015-03-27 | 2019-03-06 | セイコーエプソン株式会社 | Interactive projector and interactive projection system |
US10419723B2 (en) | 2015-06-25 | 2019-09-17 | Magna Electronics Inc. | Vehicle communication system with forward viewing camera and integrated antenna |
US10137904B2 (en) | 2015-10-14 | 2018-11-27 | Magna Electronics Inc. | Driver assistance system with sensor offset correction |
US11027654B2 (en) | 2015-12-04 | 2021-06-08 | Magna Electronics Inc. | Vehicle vision system with compressed video transfer via DSRC link |
US10166995B2 (en) * | 2016-01-08 | 2019-01-01 | Ford Global Technologies, Llc | System and method for feature activation via gesture recognition and voice command |
US10481739B2 (en) * | 2016-02-16 | 2019-11-19 | Microvision, Inc. | Optical steering of component wavelengths of a multi-wavelength beam to enable interactivity |
US10703204B2 (en) | 2016-03-23 | 2020-07-07 | Magna Electronics Inc. | Vehicle driver monitoring system |
US10571562B2 (en) | 2016-03-25 | 2020-02-25 | Magna Electronics Inc. | Vehicle short range sensing system using RF sensors |
US10534081B2 (en) | 2016-05-02 | 2020-01-14 | Magna Electronics Inc. | Mounting system for vehicle short range sensors |
US9847079B2 (en) * | 2016-05-10 | 2017-12-19 | Google Llc | Methods and apparatus to use predicted actions in virtual reality environments |
EP3400505A1 (en) | 2016-05-10 | 2018-11-14 | Google LLC | Volumetric virtual reality keyboard methods, user interface, and interactions |
US10040481B2 (en) | 2016-05-17 | 2018-08-07 | Magna Electronics Inc. | Vehicle trailer angle detection system using ultrasonic sensors |
TWI653563B (en) | 2016-05-24 | 2019-03-11 | 仁寶電腦工業股份有限公司 | Projection touch image selection method |
US10768298B2 (en) | 2016-06-14 | 2020-09-08 | Magna Electronics Inc. | Vehicle sensing system with 360 degree near range sensing |
EP3482503A4 (en) | 2016-07-08 | 2020-03-04 | Magna Electronics Inc. | 2d mimo radar system for vehicle |
US10239446B2 (en) | 2016-07-13 | 2019-03-26 | Magna Electronics Inc. | Vehicle sensing system using daisy chain of sensors |
US10708227B2 (en) | 2016-07-19 | 2020-07-07 | Magna Electronics Inc. | Scalable secure gateway for vehicle |
US10641867B2 (en) | 2016-08-15 | 2020-05-05 | Magna Electronics Inc. | Vehicle radar system with shaped radar antennas |
US10852418B2 (en) | 2016-08-24 | 2020-12-01 | Magna Electronics Inc. | Vehicle sensor with integrated radar and image sensors |
US10677894B2 (en) | 2016-09-06 | 2020-06-09 | Magna Electronics Inc. | Vehicle sensing system for classification of vehicle model |
US10836376B2 (en) | 2016-09-06 | 2020-11-17 | Magna Electronics Inc. | Vehicle sensing system with enhanced detection of vehicle angle |
US10347129B2 (en) | 2016-12-07 | 2019-07-09 | Magna Electronics Inc. | Vehicle system with truck turn alert |
US10462354B2 (en) | 2016-12-09 | 2019-10-29 | Magna Electronics Inc. | Vehicle control system utilizing multi-camera module |
US10761188B2 (en) * | 2016-12-27 | 2020-09-01 | Microvision, Inc. | Transmitter/receiver disparity for occlusion-based height estimation |
US11002855B2 (en) | 2016-12-27 | 2021-05-11 | Microvision, Inc. | Occlusion-based height estimation |
US11200973B2 (en) * | 2017-01-09 | 2021-12-14 | International Business Machines Corporation | System for food intake control |
US10703341B2 (en) | 2017-02-03 | 2020-07-07 | Magna Electronics Inc. | Vehicle sensor housing with theft protection |
US10782388B2 (en) | 2017-02-16 | 2020-09-22 | Magna Electronics Inc. | Vehicle radar system with copper PCB |
US11536829B2 (en) | 2017-02-16 | 2022-12-27 | Magna Electronics Inc. | Vehicle radar system with radar embedded into radome |
US11142200B2 (en) | 2017-02-23 | 2021-10-12 | Magna Electronics Inc. | Vehicular adaptive cruise control with enhanced vehicle control |
CN106933376B (en) * | 2017-03-23 | 2018-03-13 | 哈尔滨拓博科技有限公司 | Calibration method for a flat projected keyboard |
US10884103B2 (en) | 2017-04-17 | 2021-01-05 | Magna Electronics Inc. | Calibration system for vehicle radar system |
US10870426B2 (en) | 2017-06-22 | 2020-12-22 | Magna Electronics Inc. | Driving assistance system with rear collision mitigation |
CN108638969A (en) | 2017-06-30 | 2018-10-12 | 麦格纳电子(张家港)有限公司 | Vehicle vision system in communication with a trailer sensor |
US10962638B2 (en) | 2017-09-07 | 2021-03-30 | Magna Electronics Inc. | Vehicle radar sensing system with surface modeling |
US10962641B2 (en) | 2017-09-07 | 2021-03-30 | Magna Electronics Inc. | Vehicle radar sensing system with enhanced accuracy using interferometry techniques |
US10877148B2 (en) | 2017-09-07 | 2020-12-29 | Magna Electronics Inc. | Vehicle radar sensing system with enhanced angle resolution using synthesized aperture |
US11150342B2 (en) | 2017-09-07 | 2021-10-19 | Magna Electronics Inc. | Vehicle radar sensing system with surface segmentation using interferometric statistical analysis |
US10933798B2 (en) | 2017-09-22 | 2021-03-02 | Magna Electronics Inc. | Vehicle lighting control system with fog detection |
US11391826B2 (en) | 2017-09-27 | 2022-07-19 | Magna Electronics Inc. | Vehicle LIDAR sensor calibration system |
US11486968B2 (en) | 2017-11-15 | 2022-11-01 | Magna Electronics Inc. | Vehicle Lidar sensing system with sensor module |
US10816666B2 (en) | 2017-11-21 | 2020-10-27 | Magna Electronics Inc. | Vehicle sensing system with calibration/fusion of point cloud partitions |
US11167771B2 (en) | 2018-01-05 | 2021-11-09 | Magna Mirrors Of America, Inc. | Vehicular gesture monitoring system |
US11199611B2 (en) | 2018-02-20 | 2021-12-14 | Magna Electronics Inc. | Vehicle radar system with T-shaped slot antennas |
US11047977B2 (en) | 2018-02-20 | 2021-06-29 | Magna Electronics Inc. | Vehicle radar system with solution for ADC saturation |
US11808876B2 (en) | 2018-10-25 | 2023-11-07 | Magna Electronics Inc. | Vehicular radar system with vehicle to infrastructure communication |
US11683911B2 (en) | 2018-10-26 | 2023-06-20 | Magna Electronics Inc. | Vehicular sensing device with cooling feature |
US11638362B2 (en) | 2018-10-29 | 2023-04-25 | Magna Electronics Inc. | Vehicular radar sensor with enhanced housing and PCB construction |
US11454720B2 (en) | 2018-11-28 | 2022-09-27 | Magna Electronics Inc. | Vehicle radar system with enhanced wave guide antenna system |
US11096301B2 (en) | 2019-01-03 | 2021-08-17 | Magna Electronics Inc. | Vehicular radar sensor with mechanical coupling of sensor housing |
US11332124B2 (en) | 2019-01-10 | 2022-05-17 | Magna Electronics Inc. | Vehicular control system |
US11294028B2 (en) | 2019-01-29 | 2022-04-05 | Magna Electronics Inc. | Sensing system with enhanced electrical contact at PCB-waveguide interface |
US11609304B2 (en) | 2019-02-07 | 2023-03-21 | Magna Electronics Inc. | Vehicular front camera testing system |
US11333739B2 (en) | 2019-02-26 | 2022-05-17 | Magna Electronics Inc. | Vehicular radar system with automatic sensor alignment |
US11267393B2 (en) | 2019-05-16 | 2022-03-08 | Magna Electronics Inc. | Vehicular alert system for alerting drivers of other vehicles responsive to a change in driving conditions |
CN115066662A (en) | 2020-01-10 | 2022-09-16 | 马格纳电子系统公司 | Communication system and method |
US11823395B2 (en) | 2020-07-02 | 2023-11-21 | Magna Electronics Inc. | Vehicular vision system with road contour detection feature |
US11749105B2 (en) | 2020-10-01 | 2023-09-05 | Magna Electronics Inc. | Vehicular communication system with turn signal identification |
US20220308659A1 (en) * | 2021-03-23 | 2022-09-29 | Htc Corporation | Method for interacting with virtual environment, electronic device, and computer readable storage medium |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07117868B2 (en) | 1991-04-30 | 1995-12-18 | インターナショナル・ビジネス・マシーンズ・コーポレイション | Method and device for defining touch-type operating keyboard |
EP0554492B1 (en) | 1992-02-07 | 1995-08-09 | International Business Machines Corporation | Method and device for optical input of commands or data |
- 2001-09-07: US application US09/948,508, granted as US6710770B2 (status: not active, Expired - Fee Related)
- 2003-12-30: US application US10/750,452, published as US20050024324A1 (status: not active, Abandoned)
Patent Citations (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4507557A (en) * | 1983-04-01 | 1985-03-26 | Siemens Corporate Research & Support, Inc. | Non-contact X,Y digitizer using two dynamic ram imagers |
US4855590A (en) * | 1987-06-25 | 1989-08-08 | Amp Incorporated | Infrared touch input device having ambient compensation |
US5196835A (en) * | 1988-09-30 | 1993-03-23 | International Business Machines Corporation | Laser touch panel reflective surface aberration cancelling |
US6091405A (en) * | 1992-06-30 | 2000-07-18 | International Business Machines Corporation | Input device |
US5317140A (en) * | 1992-11-24 | 1994-05-31 | Dunthorn David I | Diffusion-assisted position location particularly for visual pen detection |
US5528263A (en) * | 1994-06-15 | 1996-06-18 | Daniel M. Platzker | Interactive projected video image display system |
US6281878B1 (en) * | 1994-11-01 | 2001-08-28 | Stephen V. R. Montellese | Apparatus and method for inputing data |
US5909210A (en) * | 1995-06-07 | 1999-06-01 | Compaq Computer Corporation | Keyboard-compatible optical determination of object's position |
US5676842A (en) * | 1995-08-07 | 1997-10-14 | K. J. Manufacturing Co. | Integral or filter mount and method of changing oil |
US5936615A (en) * | 1996-09-12 | 1999-08-10 | Digital Equipment Corporation | Image-based touchscreen |
US6252598B1 (en) * | 1997-07-03 | 2001-06-26 | Lucent Technologies Inc. | Video hand image computer interface |
US6043805A (en) * | 1998-03-24 | 2000-03-28 | Hsieh; Kuan-Hong | Controlling method for inputting messages to a computer |
US6266048B1 (en) * | 1998-08-27 | 2001-07-24 | Hewlett-Packard Company | Method and apparatus for a virtual display/keyboard for a PDA |
US6323942B1 (en) * | 1999-04-30 | 2001-11-27 | Canesta, Inc. | CMOS-compatible three-dimensional image sensor IC |
US6594023B1 (en) * | 1999-09-10 | 2003-07-15 | Ricoh Company, Ltd. | Coordinate inputting/detecting apparatus, method and computer program product designed to precisely recognize a designating state of a designating device designating a position |
US20010030642A1 (en) * | 2000-04-05 | 2001-10-18 | Alan Sullivan | Methods and apparatus for virtual touchscreen computer interface controller |
US6611252B1 (en) * | 2000-05-17 | 2003-08-26 | Dufaux Douglas P. | Virtual data input device |
US6587186B2 (en) * | 2000-06-06 | 2003-07-01 | Canesta, Inc. | CMOS-compatible three-dimensional image sensing using reduced peak energy |
US20020061217A1 (en) * | 2000-11-17 | 2002-05-23 | Robert Hillman | Electronic input device |
Cited By (114)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
USRE40880E1 (en) * | 2000-05-17 | 2009-08-25 | P. Milton Holdings, Llc | Optical system for inputting pointer and character data into electronic equipment |
US20050162409A1 (en) * | 2000-08-18 | 2005-07-28 | International Business Machines Corporation | Projector and camera arrangement with shared optics and optical marker for use with whiteboard systems |
US7355584B2 (en) * | 2000-08-18 | 2008-04-08 | International Business Machines Corporation | Projector and camera arrangement with shared optics and optical marker for use with whiteboard systems |
US6968073B1 (en) | 2001-04-24 | 2005-11-22 | Automotive Systems Laboratory, Inc. | Occupant detection system |
US7684618B2 (en) | 2002-10-31 | 2010-03-23 | Microsoft Corporation | Passive embedded interaction coding |
US20040136083A1 (en) * | 2002-10-31 | 2004-07-15 | Microsoft Corporation | Optical system design for a universal computing device |
US7009594B2 (en) | 2002-10-31 | 2006-03-07 | Microsoft Corporation | Universal computing device |
US20060109263A1 (en) * | 2002-10-31 | 2006-05-25 | Microsoft Corporation | Universal computing device |
US20060182309A1 (en) * | 2002-10-31 | 2006-08-17 | Microsoft Corporation | Passive embedded interaction coding |
US7133031B2 (en) * | 2002-10-31 | 2006-11-07 | Microsoft Corporation | Optical system design for a universal computing device |
US20040085286A1 (en) * | 2002-10-31 | 2004-05-06 | Microsoft Corporation | Universal computing device |
US7215327B2 (en) * | 2002-12-31 | 2007-05-08 | Industrial Technology Research Institute | Device and method for generating a virtual keyboard/display |
US20040125147A1 (en) * | 2002-12-31 | 2004-07-01 | Chen-Hao Liu | Device and method for generating a virtual keyboard/display |
US20050111700A1 (en) * | 2003-10-03 | 2005-05-26 | O'boyle Michael E. | Occupant detection system |
US7406181B2 (en) | 2003-10-03 | 2008-07-29 | Automotive Systems Laboratory, Inc. | Occupant detection system |
US20050193292A1 (en) * | 2004-01-06 | 2005-09-01 | Microsoft Corporation | Enhanced approach of m-array decoding and error correction |
US20080025612A1 (en) * | 2004-01-16 | 2008-01-31 | Microsoft Corporation | Strokes Localization by m-Array Decoding and Fast Image Matching |
WO2005114636A2 (en) * | 2004-05-12 | 2005-12-01 | Northrop Grumman Corporation | Projector pen image stabilization system |
WO2005114636A3 (en) * | 2004-05-12 | 2006-09-14 | Northrop Grumman Corp | Projector pen image stabilization system |
US20050280628A1 (en) * | 2004-05-12 | 2005-12-22 | Northrop Grumman Corp. | Projector pen image stabilization system |
US7826074B1 (en) | 2005-02-25 | 2010-11-02 | Microsoft Corporation | Fast embedded interaction code printing with custom postscript commands |
US20060215913A1 (en) * | 2005-03-24 | 2006-09-28 | Microsoft Corporation | Maze pattern analysis with image matching |
US20060242562A1 (en) * | 2005-04-22 | 2006-10-26 | Microsoft Corporation | Embedded method for embedded interaction code array |
US8156153B2 (en) | 2005-04-22 | 2012-04-10 | Microsoft Corporation | Global metadata embedding and decoding |
US20090027241A1 (en) * | 2005-05-31 | 2009-01-29 | Microsoft Corporation | Fast error-correcting of embedded interaction codes |
US7729539B2 (en) | 2005-05-31 | 2010-06-01 | Microsoft Corporation | Fast error-correcting of embedded interaction codes |
US20060274948A1 (en) * | 2005-06-02 | 2006-12-07 | Microsoft Corporation | Stroke localization and binding to electronic document |
US20070041654A1 (en) * | 2005-08-17 | 2007-02-22 | Microsoft Corporation | Embedded interaction code enabled surface type identification |
US7817816B2 (en) | 2005-08-17 | 2010-10-19 | Microsoft Corporation | Embedded interaction code enabled surface type identification |
US7445342B2 (en) * | 2005-11-29 | 2008-11-04 | Symbol Technologies, Inc. | Image projection system for personal media player |
US20070121087A1 (en) * | 2005-11-29 | 2007-05-31 | Garg Sachin K | Image projection system for personal media player |
US9110563B2 (en) * | 2007-10-31 | 2015-08-18 | Genedics Llc | Method and apparatus for user interface of input devices |
US8730165B2 (en) * | 2007-10-31 | 2014-05-20 | Gene S. Fein | Method and apparatus for user interface of input devices |
US20140218293A1 (en) * | 2007-10-31 | 2014-08-07 | Gene S. Fein | Method And Apparatus For User Interface Of Input Devices |
US20090109175A1 (en) * | 2007-10-31 | 2009-04-30 | Fein Gene S | Method and apparatus for user interface of input devices |
US9335890B2 (en) * | 2007-10-31 | 2016-05-10 | Genedics Llc | Method and apparatus for user interface of input devices |
US8319773B2 (en) | 2007-10-31 | 2012-11-27 | Fein Gene S | Method and apparatus for user interface communication with an image manipulator |
US9939987B2 (en) | 2007-10-31 | 2018-04-10 | Genedics Llc | Method and apparatus for user interface of input devices |
US8902225B2 (en) | 2007-10-31 | 2014-12-02 | Genedics Llc | Method and apparatus for user interface communication with an image manipulator |
US8477098B2 (en) * | 2007-10-31 | 2013-07-02 | Gene S. Fein | Method and apparatus for user interface of input devices |
EP2063347A1 (en) * | 2007-11-15 | 2009-05-27 | Funai Electric Co., Ltd. | Projector and method for projecting image |
US20090128716A1 (en) * | 2007-11-15 | 2009-05-21 | Funai Electric Co., Ltd. | Projector and method for projecting image |
US8123361B2 (en) | 2007-11-15 | 2012-02-28 | Funai Electric Co., Ltd. | Dual-projection projector and method for projecting images on a plurality of planes |
FR2924654A1 (en) * | 2007-12-10 | 2009-06-12 | Valeo Systemes Thermiques | Device for controlling an automotive accessory and method for implementing such a control device |
EP2070758A3 (en) * | 2007-12-10 | 2010-01-27 | Valeo Systèmes Thermiques | Control device for an automobile accessory and method for implementing such a control device |
US9323345B2 (en) * | 2008-04-18 | 2016-04-26 | Shanghai Chule (Cootek) Information Technology Co., Ltd. | System capable of accomplishing flexible keyboard layout |
US20110090151A1 (en) * | 2008-04-18 | 2011-04-21 | Shanghai Hanxiang (Cootek) Information Technology Co., Ltd. | System capable of accomplishing flexible keyboard layout |
US20100302144A1 (en) * | 2009-05-28 | 2010-12-02 | Microsoft Corporation | Creating a virtual mouse input device |
US9141284B2 (en) | 2009-05-28 | 2015-09-22 | Microsoft Technology Licensing, Llc | Virtual input devices created by touch input |
US9207806B2 (en) * | 2009-05-28 | 2015-12-08 | Microsoft Technology Licensing, Llc | Creating a virtual mouse input device |
US20100302155A1 (en) * | 2009-05-28 | 2010-12-02 | Microsoft Corporation | Virtual input devices created by touch input |
US20110090147A1 (en) * | 2009-10-20 | 2011-04-21 | Qualstar Corporation | Touchless pointing device |
US8907894B2 (en) | 2009-10-20 | 2014-12-09 | Northridge Associates Llc | Touchless pointing device |
US20110096032A1 (en) * | 2009-10-26 | 2011-04-28 | Seiko Epson Corporation | Optical position detecting device and display device with position detecting function |
US20110096031A1 (en) * | 2009-10-26 | 2011-04-28 | Seiko Epson Corporation | Position detecting function-added projection display apparatus |
US9141235B2 (en) * | 2009-10-26 | 2015-09-22 | Seiko Epson Corporation | Optical position detecting device and display device with position detecting function |
US9098137B2 (en) * | 2009-10-26 | 2015-08-04 | Seiko Epson Corporation | Position detecting function-added projection display apparatus |
US8714749B2 (en) | 2009-11-06 | 2014-05-06 | Seiko Epson Corporation | Projection display device with position detection function |
US20110187679A1 (en) * | 2010-02-01 | 2011-08-04 | Acer Incorporated | Optical touch display device and method thereof |
US9613015B2 (en) | 2010-02-12 | 2017-04-04 | Microsoft Technology Licensing, Llc | User-centric soft keyboard predictive technologies |
US20110201387A1 (en) * | 2010-02-12 | 2011-08-18 | Microsoft Corporation | Real-time typing assistance |
US9165257B2 (en) | 2010-02-12 | 2015-10-20 | Microsoft Technology Licensing, Llc | Typing assistance for editing |
US20110202876A1 (en) * | 2010-02-12 | 2011-08-18 | Microsoft Corporation | User-centric soft keyboard predictive technologies |
US10126936B2 (en) | 2010-02-12 | 2018-11-13 | Microsoft Technology Licensing, Llc | Typing assistance for editing |
US8782556B2 (en) | 2010-02-12 | 2014-07-15 | Microsoft Corporation | User-centric soft keyboard predictive technologies |
US10156981B2 (en) | 2010-02-12 | 2018-12-18 | Microsoft Technology Licensing, Llc | User-centric soft keyboard predictive technologies |
US20110202836A1 (en) * | 2010-02-12 | 2011-08-18 | Microsoft Corporation | Typing assistance for editing |
TWI451221B (en) * | 2010-03-11 | 2014-09-01 | Osram Opto Semiconductors Gmbh | Portable electronic device |
US8861789B2 (en) | 2010-03-11 | 2014-10-14 | Osram Opto Semiconductors Gmbh | Portable electronic device |
US20110242054A1 (en) * | 2010-04-01 | 2011-10-06 | Compal Communication, Inc. | Projection system with touch-sensitive projection image |
US20120176341A1 (en) * | 2011-01-11 | 2012-07-12 | Texas Instruments Incorporated | Method and apparatus for camera projector system for enabling an interactive surface |
US8928589B2 (en) * | 2011-04-20 | 2015-01-06 | Qualcomm Incorporated | Virtual keyboards and methods of providing the same |
US20120268376A1 (en) * | 2011-04-20 | 2012-10-25 | Qualcomm Incorporated | Virtual keyboards and methods of providing the same |
US20120306817A1 (en) * | 2011-05-30 | 2012-12-06 | Era Optoelectronics Inc. | Floating virtual image touch sensing apparatus |
US20170060343A1 (en) * | 2011-12-19 | 2017-03-02 | Ralf Trachte | Field analysis for flexible computer inputs |
US20150029111A1 (en) * | 2011-12-19 | 2015-01-29 | Ralf Trachte | Field analysis for flexible computer inputs |
US8902161B2 (en) * | 2012-01-12 | 2014-12-02 | Fujitsu Limited | Device and method for detecting finger position |
US20130181904A1 (en) * | 2012-01-12 | 2013-07-18 | Fujitsu Limited | Device and method for detecting finger position |
US20130222247A1 (en) * | 2012-02-29 | 2013-08-29 | Eric Liu | Virtual keyboard adjustment based on user input offset |
US20130265283A1 (en) * | 2012-04-10 | 2013-10-10 | Pixart Imaging Inc. | Optical operation system |
US9766714B2 (en) * | 2012-04-13 | 2017-09-19 | Postech Academy-Industry Foundation | Method and apparatus for recognizing key input from virtual keyboard |
US20150084869A1 (en) * | 2012-04-13 | 2015-03-26 | Postech Academy-Industry Foundation | Method and apparatus for recognizing key input from virtual keyboard |
WO2013175389A3 (en) * | 2012-05-20 | 2015-08-13 | Extreme Reality Ltd. | Methods circuits apparatuses systems and associated computer executable code for providing projection based human machine interfaces |
US8860640B2 (en) * | 2012-05-30 | 2014-10-14 | Christie Digital Systems Usa, Inc. | Zonal illumination for high dynamic range projection |
US9250748B2 (en) * | 2012-10-09 | 2016-02-02 | Cho-Yi Lin | Portable electrical input device capable of docking an electrical communication device and system thereof |
US20140098025A1 (en) * | 2012-10-09 | 2014-04-10 | Cho-Yi Lin | Portable electrical input device capable of docking an electrical communication device and system thereof |
US11301051B2 (en) | 2013-01-04 | 2022-04-12 | Texas Instruments Incorporated | Using natural movements of a hand-held device to manipulate digital content |
US10152137B2 (en) | 2013-01-04 | 2018-12-11 | Texas Instruments Incorporated | Using natural movements of a hand-held device to manipulate digital content |
US9489925B2 (en) * | 2013-01-04 | 2016-11-08 | Texas Instruments Incorporated | Using natural movements of a hand-held device to manipulate digital content |
US10866647B2 (en) | 2013-01-04 | 2020-12-15 | Texas Instruments Incorporated | Using natural movements of a hand-held device to manipulate digital content |
US20140191947A1 (en) * | 2013-01-04 | 2014-07-10 | Texas Instruments Incorporated | Using Natural Movements of a Hand-Held Device to Manipulate Digital Content |
US9560449B2 (en) | 2014-01-17 | 2017-01-31 | Sony Corporation | Distributed wireless speaker system |
US9288597B2 (en) | 2014-01-20 | 2016-03-15 | Sony Corporation | Distributed wireless speaker system with automatic configuration determination when new speakers are added |
US9866986B2 (en) | 2014-01-24 | 2018-01-09 | Sony Corporation | Audio speaker system with virtual music performance |
US9426551B2 (en) | 2014-01-24 | 2016-08-23 | Sony Corporation | Distributed wireless speaker system with light show |
US9369801B2 (en) | 2014-01-24 | 2016-06-14 | Sony Corporation | Wireless speaker system with noise cancelation |
US9699579B2 (en) | 2014-03-06 | 2017-07-04 | Sony Corporation | Networked speaker system with follow me |
US9232335B2 (en) | 2014-03-06 | 2016-01-05 | Sony Corporation | Networked speaker system with follow me |
US9483997B2 (en) | 2014-03-10 | 2016-11-01 | Sony Corporation | Proximity detection of candidate companion display device in same room as primary display using infrared signaling |
US9696414B2 (en) | 2014-05-15 | 2017-07-04 | Sony Corporation | Proximity detection of candidate companion display device in same room as primary display using sonic signaling |
US9858024B2 (en) | 2014-05-15 | 2018-01-02 | Sony Corporation | Proximity detection of candidate companion display device in same room as primary display using sonic signaling |
US10070291B2 (en) | 2014-05-19 | 2018-09-04 | Sony Corporation | Proximity detection of candidate companion display device in same room as primary display using low energy bluetooth |
US10785428B2 (en) * | 2015-10-16 | 2020-09-22 | Capsovision Inc. | Single image sensor for capturing mixed structured-light images and regular images |
US10963159B2 (en) * | 2016-01-26 | 2021-03-30 | Lenovo (Singapore) Pte. Ltd. | Virtual interface offset |
US9693168B1 (en) | 2016-02-08 | 2017-06-27 | Sony Corporation | Ultrasonic speaker assembly for audio spatial effect |
US9826332B2 (en) | 2016-02-09 | 2017-11-21 | Sony Corporation | Centralized wireless speaker system |
US20170262133A1 (en) * | 2016-03-08 | 2017-09-14 | Serafim Technologies Inc. | Virtual input device for mobile phone |
US9826330B2 (en) | 2016-03-14 | 2017-11-21 | Sony Corporation | Gimbal-mounted linear ultrasonic speaker assembly |
US9693169B1 (en) | 2016-03-16 | 2017-06-27 | Sony Corporation | Ultrasonic speaker assembly with ultrasonic room mapping |
US9794724B1 (en) | 2016-07-20 | 2017-10-17 | Sony Corporation | Ultrasonic speaker assembly using variable carrier frequency to establish third dimension sound locating |
US9924286B1 (en) | 2016-10-20 | 2018-03-20 | Sony Corporation | Networked speaker system with LED-based wireless communication and personal identifier |
US9854362B1 (en) | 2016-10-20 | 2017-12-26 | Sony Corporation | Networked speaker system with LED-based wireless communication and object detection |
US10075791B2 (en) | 2016-10-20 | 2018-09-11 | Sony Corporation | Networked speaker system with LED-based wireless communication and room mapping |
US10623859B1 (en) | 2018-10-23 | 2020-04-14 | Sony Corporation | Networked speaker system with combined power over Ethernet and audio delivery |
Also Published As
Publication number | Publication date |
---|---|
US6710770B2 (en) | 2004-03-23 |
US20020021287A1 (en) | 2002-02-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6710770B2 (en) | Quasi-three-dimensional method and apparatus to detect and localize interaction of user-object and virtual transfer device | |
EP1336172B1 (en) | Quasi-three-dimensional method and apparatus to detect and localize interaction of user-object and virtual transfer device | |
US7554528B2 (en) | Method and apparatus for computer input using six degrees of freedom | |
KR100648368B1 (en) | Method and Apparatus for providing projected user interface for computing device | |
CA2620149C (en) | Input method for surface of interactive display | |
US20030226968A1 (en) | Apparatus and method for inputting data | |
US8115753B2 (en) | Touch screen system with hover and click input methods | |
US7015894B2 (en) | Information input and output system, method, storage medium, and carrier wave | |
US20020061217A1 (en) | Electronic input device | |
JP2003114755A (en) | Device for inputting coordinates | |
US20090189857A1 (en) | Touch sensing for curved displays | |
US20030218760A1 (en) | Method and apparatus for approximating depth of an object's placement onto a monitored region with applications to virtual interface devices | |
US20100225588A1 (en) | Methods And Systems For Optical Detection Of Gestures | |
US11556211B2 (en) | Displays and information input devices | |
CN1701351A (en) | Quasi-three-dimensional method and apparatus to detect and localize interaction of user-object and virtual transfer device | |
KR100936666B1 (en) | Apparatus for touching reflection image using an infrared screen | |
US20150185321A1 (en) | Image Display Device | |
JP4560224B2 (en) | Information input device, information input / output system, program, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |