US20160103486A1 - Method and Apparatus for Communication Between Humans and Devices - Google Patents


Info

Publication number
US20160103486A1
Authority
US
United States
Prior art keywords
user
information
screen
sensor
outputting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/722,504
Inventor
Roel Vertegaal
Jeffrey S. Shell
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Queen's University at Kingston
Original Assignee
Queen's University at Kingston
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed: https://patents.darts-ip.com/?family=32988008&utm_source=google_patent&utm_medium=platform_link&utm_campaign=public_patent_search&patent=US20160103486(A1) ("Global patent litigation dataset" by Darts-ip is licensed under a Creative Commons Attribution 4.0 International License.)
Application filed by Queen's University at Kingston
Priority to US14/722,504
Publication of US20160103486A1
Assigned to Queen's University at Kingston (assignment of assignors interest; see document for details). Assignors: Vertegaal, Roel; Shell, Jeffrey S.; Dickie, Connor
Priority to US15/429,733 (US10296084B2)
Priority to US16/407,591 (US10915171B2)
Legal status: Abandoned

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013: Eye tracking input arrangements
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 23/00: Testing or monitoring of control systems or parts thereof
    • G05B 23/02: Electric testing or monitoring
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14: Digital output to display device; Cooperation and interconnection of the display device with other functional units

Definitions

  • This invention relates to attentive user interfaces for improving communication between humans and devices. More particularly, this invention relates to use of eye contact/gaze direction information by technological devices and appliances to more effectively communicate with users, in device or subject initiated communications.
  • eye contact sensor for determining whether a user is looking at a target area, and using the determination of eye contact to control a device.
  • eye contact information can be used together with voice information, to disambiguate voice commands when more than one voice-activated device is present.
  • a method of modulating operation of a device comprising: providing an attentive user interface for obtaining information about an attentive state of a user; and modulating operation of a device on the basis of said obtained information, wherein said operation that is modulated is initiated by said device.
  • said information about an attentive state of said user is based on one or more indices selected from the group consisting of eye contact, eye movement, eye position, eye gaze direction, voice, body presence, body orientation, head and/or face orientation, user activity, and brain activity/arousal.
  • said attentive user interface may be attached to or embedded in said device, or attached to or embedded in a member of the group consisting of clothing, eyewear, jewelry, and furniture.
  • the device may be a personal computer, a cellular telephone, a telephone, a personal digital assistant (PDA), or an appliance.
  • a method of modulating operation of a network of devices comprising: providing each device of a network of devices with an attentive user interface for obtaining information about an attentive state of a user with respect to each device; and modulating operation of said devices on the basis of said obtained information, wherein said operation that is modulated is initiated by at least one of said devices.
  • said operation that is modulated may comprise notification, communication, information transfer, and a combination thereof, or routing said notification, communication, information transfer, or combination thereof, to a device with which said user is engaged.
  • the modulating operation may further comprise modulating notification of said user progressively, from a less interruptive notification to a more interruptive notification.
  • said information about said user's attentive state is eye contact of said user with each said device, said eye contact being sensed by said attentive user interface.
  • communication between said first and second devices is enabled when respective proxies indicate that attentive states of said first and second users are toward respective devices.
  • the device may be a telephone
  • the proxy may be a representation of a user's eyes.
  • the network comprises more than two devices.
  • FIG. 4 shows eye glasses equipped with an eye contact sensor in accordance with an embodiment of the invention
  • FIG. 5 is a schematic diagram of a device equipped with a mechanical eye proxy and an eye contact sensor in accordance with an embodiment of the invention.
  • the term “user” is intended to mean the entity, preferably human, who is using a device.
  • the term “device” is intended to mean any digital device, object, machine, or appliance that requires, solicits, receives, or competes for a user's attention.
  • the term “device” includes any device that typically is not interactive, but could be made more user-friendly by providing interaction with a user as described herein.
  • the term “subject” is intended to mean the human, device, or other object with which a user might be engaged.
  • an attentive user interface is intended to mean any hardware and/or software that senses, receives, obtains, and negotiates a user's attention by sensing one or more indices of a user's attentive state (e.g., eye contact, eye movement, eye position, eye gaze direction, voice, body presence, body orientation, head and/or face orientation, activity, brain activity/arousal), with appropriate hardware and associated algorithms and/or software for interfacing the attentive user interface with a device or a network of devices.
  • An attentive user interface comprises portions for sensing user attentive state and for processing and interfacing/relaying information about the user's attentive state to a device. Such portions can be housed as a unit or as multiple units.
  • Interfacing an attentive user interface with a device comprises providing an output from the attentive user interface to the device, which controls operation of the device.
  • An attentive user interface of the invention can perform one or more tasks, such as, but not limited to, making decisions about user presence/absence, making decisions about the state of user attention, prioritizing communications in relation to current priorities in user attention as sensed by the attentive user interface, modulating channels and modes of delivery of notifications and/or information and/or communications to the user, modulating presentation of visual or auditory information, and communicating information (e.g., indices) about user attention to other subjects.
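As a rough illustration of the architecture described above, the following Python sketch separates the sensing portion from the decision and interfacing portions of an attentive user interface. All class and method names (AttentionSample, AttentiveUserInterface, relay) are illustrative and do not come from the patent.

```python
from dataclasses import dataclass, field
from typing import Callable, List
import time

@dataclass
class AttentionSample:
    """One measurement of an index of the user's attentive state."""
    index: str            # e.g., "eye_contact", "voice", "body_presence"
    value: float          # normalized 0..1 estimate of engagement
    timestamp: float = field(default_factory=time.time)

class AttentiveUserInterface:
    """Senses indices of user attention and relays decisions to a device."""

    def __init__(self, device_callback: Callable[[str], None]):
        self._samples: List[AttentionSample] = []
        self._notify_device = device_callback   # output that controls the device

    def sense(self, sample: AttentionSample) -> None:
        """Sensing portion: record a new measurement."""
        self._samples.append(sample)

    def user_is_engaged(self, window_s: float = 2.0, threshold: float = 0.5) -> bool:
        """Decision portion: a very simple model of presence/attention
        built from measurements taken over the last few seconds."""
        now = time.time()
        recent = [s.value for s in self._samples if now - s.timestamp <= window_s]
        return bool(recent) and sum(recent) / len(recent) >= threshold

    def relay(self) -> None:
        """Interfacing portion: pass the decision to the device, which then
        modulates its own operation (e.g., defers a notification)."""
        self._notify_device("engaged" if self.user_is_engaged() else "away")
```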
  • the term “attentive state” is intended to mean a measure or index of a user's engagement with or attention toward a subject. Examples of such indices are eye contact, eye movement, eye position, eye gaze direction, voice, body presence, body orientation, head and/or face orientation, activity, and brain activity/arousal.
  • notify is intended to mean the signalling or soliciting, usually by a device, for a user's attention.
  • notification can employ any cue(s) that act on a user's senses to solicit the user's attention, such as one or more of audio, visual, tactile, and olfactory cues.
  • modulating is intended to mean controlling, enabling and/or disabling, or adjusting (e.g., increasing and/or decreasing).
  • modulating includes, for example, turning notification on or off, delaying notification, changing the volume or type of notification, and the like.
  • notification can be gradually modulated from less interruptive (e.g., quiet) to more interruptive (e.g., loud), as time passes without user acknowledgement.
  • Modulating also refers to changing the vehicle or channel for notification, communication, or data transfer; for example, by routing such through a network to a more appropriate device. For example, in the case of an urgent notification, modulation might encompass routing the notification to a device with which the user is engaged, increasing the likelihood that the user receives the notification (see Example 4, below).
  • mediated communication and “mediated conversation” refer to communication or conversation that takes place through a medium such as video or audio devices/systems, such that there is no face-to-face conversation between the participants.
  • In mediated communications, the participants involved are remotely located relative to one another.
  • an attentive user interface dynamically prioritizes the information it presents, and the way it is presented, to a user, such that information processing resources of both user and system are optimally used. This might involve, for example, optimally distributing resources across a set of tasks.
  • An attentive user interface does this on the basis of knowledge—consisting of a combination of measures and models—of the present, and preferably also the past and/or future states of the user's attention, taking into account the availability of system resources.
  • Attentive user interfaces may employ one or more of eye contact, eye movement, eye position, eye gaze direction, voice, body presence, body orientation, head and/or face orientation, activity, brain activity/arousal to detect attentive state. Attentive user interfaces may store any of the above measures as a model, used to govern decisions about the user's attentive state.
  • CMOS imaging technology (e.g., Silicon Imaging MegaPixel Camera SI-3170U or SI-3200U) allows the manufacture of low-cost, high-resolution eye contact sensors.
  • the output of the image sensor is connected to circuitry which uses the camera frame sync signal to illuminate the space in front of the camera with on-axis light produced by, e.g., an array of infrared LEDs 42 , and off-axis light produced by, e.g., two arrays of infrared LEDs 44 , 52 .
  • On-axis and off-axis light is produced alternately with odd and even frames. For example, on-axis light is produced each odd frame and off-axis light is produced every even frame. Images are processed to locate the user's/subject's eyes, and corresponding information is relayed to hardware/software of an attentive user interface.
  • the information is used by the attentive user interface to determine whether, how, when, etc., to interrupt or send a notification to a user.
  • the image processing circuitry and software may reside in the eye contact sensor unit 40 , whereas in other embodiments the circuitry and software are remote (e.g., associated with a host computer) and suitably connected to the eye contact sensor unit 40 using, e.g., a high-bandwidth video link, which can be wireless, such as Apple® FireWire® or USB 2 based.
  • information relating to eye contact may include whether eyes are found in the image, where the eyes are, how many eyes are present, whether the eyes are blinking, and if the unit is calibrated, what the eyes are looking at in screen coordinates.
  • the information may also include a flag for each eye when the eyes are looking straight at the camera.
  • the eye contact sensor determines the orientation of pupils with a spatial accuracy of, for example, 1 meter at 5 meters distance (about 10 degrees of arc) and a head movement tolerance of, for example, 20 degrees of arc, at a distance of 5 meters or more.
  • the frame rate of the eye contact sensor's camera should be as high as possible, and in the order of 100 Hz.
  • the effective sampling rate of the sensor preferably corresponds to at least 20 Hz, given that the minimum human fixation time is in the order of 100 ms.
  • a subtraction algorithm to locate pupils results in a tradeoff between temporal and spatial resolution.
  • image subtraction occurs within frames (see, e.g., U.S. Pat. No. 6,393,136 to Amir et al.), resulting in an effective spatial resolution of the sensor of only half that of the camera.
  • the image processing algorithm and LEDs are synchronized with half-frame fields generated by an NTSC or other interlaced camera technology.
  • the invention provides, in one aspect, a method and apparatus for obtaining eye contact information in which image subtraction occurs between frames (by subtracting an odd frame from an even frame, or vice versa), as shown in the algorithm of FIG. 2 .
  • This allows the use of the full camera resolution, and thus a greater tracking range, while reducing the effective frame or sampling rate by half.
  • the subtraction algorithm and LEDs are synchronized with a full frame clock generated by the camera and the minimum sampling frequency of the camera is preferably in the order of about 30 to about 40 Hz.
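The between-frame subtraction described above might be implemented along the following lines. This is a minimal sketch, assuming a camera that delivers synchronized full frames with on-axis illumination on odd frames and off-axis illumination on even frames; the NumPy/SciPy blob detection and all thresholds are illustrative, not the patent's actual algorithm.

```python
import numpy as np
from scipy import ndimage  # used only for simple blob labelling

def locate_pupils(on_axis_frame: np.ndarray,
                  off_axis_frame: np.ndarray,
                  threshold: int = 40,
                  min_area: int = 9) -> list[tuple[float, float]]:
    """Subtract an off-axis (dark-pupil) frame from an on-axis (bright-pupil)
    frame. Pupils appear bright only under on-axis illumination, so they
    survive the subtraction while the rest of the scene largely cancels.
    Working on full frames preserves the camera's full spatial resolution
    while halving the effective sampling rate."""
    diff = on_axis_frame.astype(np.int16) - off_axis_frame.astype(np.int16)
    mask = diff > threshold                      # candidate bright-pupil pixels

    labels, n = ndimage.label(mask)              # group pixels into blobs
    centers = []
    for i in range(1, n + 1):
        blob = labels == i
        if int(blob.sum()) >= min_area:          # reject specular noise
            cy, cx = ndimage.center_of_mass(blob)
            centers.append((cx, cy))             # (x, y) pupil centre
    return centers

# Eye contact would then be confirmed by checking that the on-axis corneal
# glint is aligned with (lies within) a detected pupil region.
```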
  • an attentive user interface uses eye gaze direction as input about a user's attentive state. Eye gaze direction is detected by an eye tracker, such as that described in detail in U.S. Pat. No. 6,152,563 to Hutchinson et al.
  • An attentive user interface of the invention may be applied to user-initiated control of a device using, for example, eye contact and/or eye gaze direction, with or without further input, such as voice, body presence, and the like.
  • the invention is particularly applicable to device-initiated communication with a user, such as, for example, notifying a user of an incoming message, or of a task requiring user input. As shown in FIG. 3, an attentive user interface running on such a device senses and evaluates one or more indices of user attention (e.g., eye contact, eye movement, eye position, eye gaze direction, voice, body presence, body orientation, head and/or face orientation, activity, brain activity/arousal) to determine whether, when, and how to notify, interrupt, or respond to the user, open/close communication channels, and the like.
  • an attentive user interface might progressively signal for the user's attention. Initially this may happen through a channel that is peripheral to the user's current activity.
  • the interface may then wait for user acknowledgement, provided through, e.g., an input device, before opening a direct channel to the user. If, however, no user acknowledgement is received within a given period, the attentive user interface may proceed to a more direct channel to the user, increase the urgency level of the notification, or defer notification.
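The progressive, acknowledgement-driven notification just described could be sketched as a simple escalation loop. The channel names, timeout, and callback signatures below are assumptions for illustration only.

```python
import time

# Channels ordered from least to most interruptive (an illustrative ladder).
ESCALATION_LADDER = ["peripheral_icon", "vibrate", "quiet_ring", "loud_ring"]

def notify_progressively(send, acknowledged, step_timeout_s: float = 10.0) -> bool:
    """Signal for attention peripherally first and escalate only while the
    user does not acknowledge.

    `send(channel)` delivers a cue on the given channel; `acknowledged()`
    polls for a user response (e.g., eye contact with the device, or input
    through a button or keyboard)."""
    for channel in ESCALATION_LADDER:
        send(channel)
        deadline = time.time() + step_timeout_s
        while time.time() < deadline:
            if acknowledged():
                return True               # user attended: open a direct channel
            time.sleep(0.1)
    return False                          # never acknowledged: defer notification
```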
  • information obtained about a user's attentive state is communicated to one or more subjects who might wish to contact the user.
  • Such communication can be through any network by which the user and subject(s) are connected, such as a local area network, a wide area network (e.g., the internet), or hard-wired or wireless (e.g., cellular) telephone network.
  • Subjects can evaluate the information about the user's attentive state, and, using rules of social engagement, decide whether or not to contact the user. For example, in telephonic communications (as described in detail in Example 1), information about the user's current attentive state is communicated to a subject attempting to telephone the user. The subject can decide whether to proceed with the telephone call on the basis of such information.
  • the invention provides for an environment in which multiple devices, each equipped with attentive user interfaces, are networked, such that information concerning to which device the user's attention is directed is available to all devices on the network.
  • notifications can be signaled progressively (e.g., in the case of a cell phone, the phone starts by ringing quietly and progressively rings louder depending on the urgency of the call and/or proximity to the user; or, an icon on the cell phone's screen changes as urgency increases).
  • a notification and/or message can be forwarded to the appropriate device so that the message is received with minimal interruption of the user's primary task.
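A toy sketch of the network-wide routing described above: each device reports eye contact to a shared registry, and notifications are forwarded to whichever device currently holds the user's attention. The registry class, the method names, and the device-side display() call are all hypothetical.

```python
from typing import Any, Dict, Optional

class AttentionRegistry:
    """Network-wide record of which device each user is attending, as
    reported by each device's attentive user interface."""

    def __init__(self) -> None:
        self._focus: Dict[str, str] = {}        # user id -> device id

    def report_eye_contact(self, user: str, device: str) -> None:
        self._focus[user] = device

    def current_device(self, user: str) -> Optional[str]:
        return self._focus.get(user)

def route_notification(registry: AttentionRegistry, user: str,
                       message: str, devices: Dict[str, Any]) -> None:
    """Forward a notification to the device the user is engaged with, so the
    message arrives with minimal interruption of the primary task."""
    target = registry.current_device(user)
    device = devices.get(target) if target else None
    if device is not None:
        device.display(message)       # display() is an assumed device-side method
    else:
        # No device currently reports the user's attention: defer delivery.
        print(f"deferring message for {user}: {message}")
```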
  • Two rows of off-axis LED illuminators 14 , 16 are positioned near the outer peripheries of the lenses 6 , 8 .
  • the camera feed as well as the LED arrays are connected through wires to a control unit worn by the user.
  • This control unit contains power and circuitry for illumination of the LEDs and camera synchronization.
  • the control unit performs computer vision processing according to an algorithm using an embedded processor board.
  • data is sent over a wireless or wired network link to a host.
  • camera images are sent over a wireless or wired network to an external computer vision processing facility.
  • Eye contact glasses can be used, for example, to open/close communication channels between co-located but distant users, or for regulating messaging to a user or between two or more users.
  • eye contact glasses can track how many individuals have looked at the user during a specified period. These data or statistics can be made available to the user through an LCD display, or sent to a networking device for further processing or display. Combined with computer vision or other means, the eye contact glasses can determine who has looked at the user, for how long, and when.
  • the eye contact glasses provide a personal attention sensor (i.e., a "hit counter"), which indicates to a user when he/she is being looked at by a subject. For example, a counter could be incremented whenever the user has been looked at by a subject, to provide information about the number of "hits". Such an embodiment can provide amusement to users in certain social settings.
  • an attentive user interface of the invention includes a sensor for detecting one or more indices of user attentive state in combination with a “proxy”.
  • proxy is intended to mean any hardware or virtual (e.g., an image on a computer screen) representation of a (remote) subject's attention.
  • a proxy can be a pair of eyes, either mechanical or virtual (e.g., pictured on a computer screen), that inform a user of the state of attention of a subject with which the user is attempting to establish mediated communication (e.g., via telephone).
  • Eye proxies are preferred because of what they represent; that is, the establishment of eye contact is related to the establishment of communication between individuals.
  • an attentive user interface, including a proxy, is used not only to obtain information about the attention of its user, but also to communicate robot, machine, or remote user attention directed towards a user.
  • an eye contact sensor can be mounted on a robotic actuation device that allows rotation of the eye contact sensor in 3 orientation directions.
  • the eye contact sensor functions as virtual eyes directing the robotic device in establishing eye contact with the user when the attentive user interface's attention is directed towards that user.
  • the robotic device may feature a pair of mechanical eyes, or an image or video of a remote user or computer agent.
  • FIG. 5 shows an embodiment in which a pair of robotic mechanical eyes 60 and an eye contact sensor with camera lens 62 , on-axis LED array 64 , and off-axis LED arrays 66 , 68 are mounted on a device 70 , such as a telephone.
  • an attentive user interface with a sensor such as an eye contact sensor or an eye tracker can be used with any device to sense whether a user is available for communication, and whether a user is communicating with that device, via any route such as a keyboard, speech recognition, or manual interactions.
  • a proxy can signal the device's attention to the user by alignment of the eye contact sensor and/or virtual eyes with the user's eyes. If the device has not recently received visual attention from the user, it chooses an unobtrusive method to signal the user (e.g., by vibrating, rotating its eyeballs to obtain attention, or any other nonverbal means).
  • a device remains in the periphery of user activity until the user has acknowledged the device's request for attention. When the device receives user attention, as measured with the eye contact sensor or through other means, a mediated communication channel with the user is established, including, for example, speech production or display of information.
  • Example 2 describes an example of this embodiment in detail.
  • an attentive user interface can be embedded in digital devices such as computers, personal digital assistants (PDAs), PVRs/TVs/VCRs/cameras, telephones, household appliances, furniture, vehicles, and any other location where information about a user's attentive state can advantageously be used to modulate their behavior (see the Examples, below).
  • An attentive user interface can be used to control video and audio recording and transmission, or to sense attention during remote or colocated meetings for retroactive automated editing (i.e., a virtual director), or for video conferencing camera selection and remote personal attention sensing (see Example 3, below).
  • Yet other applications include, but are not limited to, remote (instant) messaging (i.e., open/close communication with a user at a distance, such as during remote arbitrage); colocated messaging (i.e., open/close communication with a user at a physical distance); dynamic email filter based on time spent reading; intelligent agent communication of attention; robot communication of attention; avatar/remote person communication of attention; presence detection for any kind of messaging system; receipt of message acknowledgement for any kind of system; notification negotiation (i.e., user acknowledgement of information presentation); notification optimization (i.e., forwarding to current device); optimization of information presentation (i.e., present notification or other information on device or part of device where user is looking); for pointing to items on displays; to determine target of keyboard commands; look to talk; eye telepointing systems (i.e., presentation and remote collaboration); vehicle navigation system operation (selection of information retrieval system); vehicle phone call answering; vehicle operator fatigue sensor; visualization and monitoring of user attention (see Example 4); and attentive reasoning networks for telecommunication (e.g., for telemarketers).
  • an attentive user interface was used to apply some of the basic social rules that surround human face-to-face conversation (discussed above) to a personal electronic device, in this case a cell phone.
  • the embodiment described in this example could be implemented in any electronic device or appliance.
  • an attentive cell phone was created by augmenting a Compaq iPAQ handheld with an attentive user interface employing a low-cost wearable eye contact sensor for detecting when a user is in a face-to-face conversation with another human.
  • Wearable microphone headsets are becoming increasingly common with cell phones.
  • the signal from such microphones is available with high fidelity even when the user is not making a call.
  • We modified the cell phone to accept such input, allowing it to monitor user speech activity to estimate the chance that its user is engaged in a face-to-face conversation.
  • Wireless phone functionality was provided by voice-over-ip software connected through a wireless LAN to a desktop-based call router.
  • An attentive state processor running on the same machine sampled the energy level of the voice signal coming from the cell phone.
  • To avoid triggering by non-speech behavior, we used a simplified version of a turn detection algorithm described by Vertegaal (1999).
  • Speech detection works well in situations where the user is the active speaker in conversation. However, when the user is engaged in prolonged listening, speech detection alone does not suffice. Given that there is no easy way to access the speech activity of an interlocutor without violating privacy laws, we used an alternative source of input, eye contact.
  • eye tracking provides an extremely reliable source of information about the conversational attention of users.
  • the eye contact sensor detected eye gaze toward a user by an interlocutor (i.e., a subject) to determine when the user was engaged in a conversation with the subject.
  • the eye contact sensor was mounted on a cap worn on the user's head.
  • the sensor was embedded in the eye glasses worn by the user (see above and FIG. 4 ).
  • the sensor consisted of a video camera with a set of infrared LEDs mounted on-axis with the camera lens. Another set of LEDs was mounted off-axis.
  • the attentive state processor determined the probability that the user was in a conversation by summating the speech activity and eye contact estimates. The resulting probability was applied in two ways. Firstly, it set the default notification level of the user's cell phone. Secondly, it was communicated over the network to provide information about the status of the user to potential callers.
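A minimal sketch of how the speech-activity and eye-contact estimates might be combined into a conversation probability that sets the default notification level, as described above. The weights and thresholds are illustrative; the example does not specify them.

```python
def conversation_probability(speech_activity: float, eye_contact: float,
                             w_speech: float = 0.5, w_eye: float = 0.5) -> float:
    """Combine a 0..1 speech-activity estimate from the headset microphone
    with a 0..1 eye-contact estimate from the wearable sensor. A clipped
    weighted sum stands in for the example's summation of the two estimates;
    the weights are illustrative."""
    return min(1.0, w_speech * speech_activity + w_eye * eye_contact)

def default_notification_level(p_conversation: float) -> str:
    """Map the probability to a default notification channel
    (thresholds and channel names are illustrative)."""
    if p_conversation > 0.7:
        return "message"          # asynchronous, least interruptive
    if p_conversation > 0.4:
        return "vibrate"
    return "public_ring"

# The same probability would also be published over the network so that
# potential callers can see the user's status before deciding to call.
```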
  • the attentive phone updates the attentive state information for all visible contacts.
  • a menu shows the preferred notification channel.
  • Notification channels are listed according to their interruption level: message; vibrate; private knock; public knock; and public ring. Users can set their preferred level of interruption for any attentive state. They can also choose whether to allow callers to override this choice.
  • When contacts are available for communication, their portraits display eye contact.
  • a typical preferred notification channel in this mode is a knocking sound presented privately through the contact's head set.
  • When a contact is not available for communication, his/her portrait shows the back of his/her head.
  • a preferred notification channel in this mode is a vibration through a pager unit.
  • callers may choose a different notification strategy, if allowed. However, in this mode the contact's phone will never ring in public. Users can press a “Don't Answer” button to manually forestall notifications by outside callers for a set time interval. This is communicated to callers by turning the contact's portrait into a gray silhouette. Offline communication is still possible in this mode, allowing the user to leave voicemail or a text message.
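The per-state notification preferences, caller override, and "Don't Answer" behavior described in this example could be modeled roughly as follows. The channel names mirror the list above; the override rules beyond "never ring in public" are illustrative assumptions.

```python
import time
from dataclasses import dataclass, field
from typing import Optional

# Channels listed from least to most interruptive, as in the example above.
INTERRUPTION_ORDER = ["message", "vibrate", "private_knock", "public_knock", "public_ring"]

@dataclass
class NotificationPolicy:
    """Per-user notification preferences keyed by attentive state."""
    preferred: dict = field(default_factory=lambda: {
        "in_conversation": "private_knock",
        "engaged_elsewhere": "vibrate",
        "available": "public_ring",
    })
    allow_caller_override: bool = True
    dont_answer_until: float = 0.0          # set by the "Don't Answer" button

    def channel_for(self, attentive_state: str,
                    caller_request: Optional[str] = None) -> Optional[str]:
        if time.time() < self.dont_answer_until:
            return None                     # gray silhouette: offline messages only
        channel = self.preferred.get(attentive_state, "message")
        if caller_request and self.allow_caller_override:
            if attentive_state == "in_conversation" and caller_request == "public_ring":
                return channel              # the phone never rings in public in this mode
            if INTERRUPTION_ORDER.index(caller_request) > INTERRUPTION_ORDER.index(channel):
                return caller_request       # caller escalates within what is allowed
        return channel
```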
  • the above example demonstrates how the interruptiveness of notification of a device such as a cell phone can be reduced by allowing a) the device to sense the attentive state of the user, b) the device to communicate this attentive state to subjects, and c) subjects to follow social rules of engagement on the basis of this information. Secondly, interruptiveness is reduced by the device making intelligent decisions about its notification method on the basis of obtained information about the user's attentive state.
  • In this example, our attentive telephone (the "eyePHONE") was created: telephones were equipped with an attentive user interface including an eye proxy and an eye contact sensor.
  • the eye proxy serves as a surrogate that indicates to a user the availability and attention of a remote user for communication
  • the eye contact sensor conveys information about the user's attention to the remote user.
  • Users initiate a call by jointly looking at each other's eye proxy. This allows users to implement some of the basic social rules of face-to-face conversations in mediated conversations.
  • This example relates to use of only two devices (telephones); however, it will be understood that this technology could be applied to any number of devices on a network.
  • the eye proxy consisted of a pair of Styrofoam® eyes, actuated by a motorized Sony EVI-D30 camera. The eyes were capable of rotating 180° horizontally and 80° vertically around their base. Eye contact of a user looking at the eye proxy was detected by an eye contact sensor, as described above (see FIG. 5 ), mounted above the eyes. Once the pupils of a user were located, the proxy maintained eye contact by adjusting the orientation of the eyes such that pupils stayed centered within the eye contact sensor image. Audio communication between eyePHONES was established through a voice-over-IP connection.
  • Connor wishes to place a call to Alex. He looks at Alex's proxy, which begins setting up a voice connection after a user-configurable threshold of 1.5 s of prolonged eye contact. The proxy communicates that it is busy by iteratively glancing up—and looking back at Connor (see FIG. 6 b ). On the other side of the line, Connor's proxy starts moving its eyes, and uses the eye contact sensor to find the pupils of Alex (see FIG. 6 a ).
  • Alex observes the activity of Connor's proxy on his desk, and starts looking at the proxy's eye balls.
  • the eyePHONES establish a voice connection (see FIG. 6 c ).
  • If Alex does not want to take the call, he either ignores the proxy or looks away after having made brief eye contact.
  • Alex's proxy on Connor's desk conveys Alex's unavailability by shaking its eyes, breaking eye contact, and not establishing a voice connection (see FIG. 6 d ).
  • Alex decides his call is too urgent, he may choose to press a button that produces an audible ring.
  • calls may be set to complete automatically when proxies determine a lack of eye contact over a user-configurable time period.
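A rough sketch of the eyePHONE call negotiation: prolonged eye contact with a proxy initiates a call, and a voice connection is opened only if the remote user returns the gaze of the caller's proxy. The class, its callbacks, and the stubbed helpers are hypothetical; only the 1.5 s threshold comes from the example.

```python
import time

EYE_CONTACT_THRESHOLD_S = 1.5       # user-configurable threshold from the example

class EyeProxy:
    """One eyePHONE endpoint: watches for prolonged eye contact with its
    local user and negotiates a voice connection with its remote peer."""

    def __init__(self, peer=None):
        self.peer = peer                    # the remote user's eyePHONE
        self._contact_since = None

    def on_eye_contact(self, in_contact: bool) -> None:
        """Called by the local eye contact sensor on every sample."""
        if not in_contact:
            self._contact_since = None
            return
        now = time.time()
        if self._contact_since is None:
            self._contact_since = now
        elif now - self._contact_since >= EYE_CONTACT_THRESHOLD_S:
            self._contact_since = None
            self.peer.request_call(self)    # caller's proxy starts seeking the callee

    def request_call(self, caller: "EyeProxy") -> None:
        """On the callee's side: seek the callee's eyes; connect only if the
        callee returns the proxy's gaze, otherwise convey unavailability."""
        if self.callee_returns_gaze():
            open_voice_connection(caller, self)
        else:
            self.shake_eyes()               # breaks eye contact, no connection

    # Placeholders for the motorized-eye and sensing plumbing.
    def callee_returns_gaze(self) -> bool:
        return False                        # would poll the local eye contact sensor

    def shake_eyes(self) -> None:
        pass                                # would actuate the mechanical eyes

def open_voice_connection(a: EyeProxy, b: EyeProxy) -> None:
    pass                                    # would set up the voice-over-IP link
```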
  • televisions and other audiovisual content delivery systems can be augmented with eye contact sensors to determine whether that content is being viewed, and to take appropriate action when it is no longer viewed. In combination with a personal video recording system, this may involve tracking user attention automatically for various shows, skipping commercials on the basis of perceived attentiveness, modulating volume level or messages delivered through that medium, or live pausing of audiovisual material.
  • the term “attention space” refers to the limited attention a user has available to process/respond to stimuli, given that the capacity of a user to process information simultaneously from various sources is limited.
  • Eye contact sensors function as an intermediary to the management of a user's physical attention.
  • miniaturized eye contact sensors can be embedded in, and augment, small electronic devices such as PDAs, cell phones, personal entertainment systems, appliances, or any other object to deliver information when a user is paying attention to the device, deferring that information's delivery when the user's attention is directed elsewhere.
  • This information may be used, for example, to dynamically route audio or video calls, instant messages, email messages, or any other communications to the correct location of the user's current attention, and to infer and modulate quality of service of the network.
  • EyeREASON decides, on the basis of information about the user's prior, current, and/or future attentive state, the priority of a message originating from a subject in relationship to that of tasks the user is attending to. By examining parameters of the message and user task(s), including attentive states of subjects pertaining to that message, eyeREASON makes decisions about whether, when, and how to forward notifications to the user, or to defer message delivery for later retrieval by the user.
  • a message can be in any format, such as email, instant messaging or voice connection, speech recognition, or messages from sensors, asynchronous or synchronous.
  • any speech communication between a user and device(s) can be routed through a wired or wireless headset worn by the user, and processed by a speech recognition and production system on the server.
  • eyeREASON switches its vocabulary to the lexicon of the focus device, sending commands through that device's in/out (I/O) channels.
  • Each device reports to the eyeREASON server when it senses that a user is paying attention to it. EyeREASON uses this information to determine when and how to relay messages from devices to the user.
  • Using information about the attentive state of the user, such as what devices the user is currently operating, what communication channels with the user are currently occupied, and the priority of the message relative to the tasks the user is engaged in, eyeREASON dynamically chooses an optimal notification device with appropriate channels and levels of notification. Notifications can migrate between devices, tracking the attention of the user, as is illustrated by the below scenario.
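The device-selection step described above might look roughly like this: defer when the message does not outrank the current task, otherwise prefer an unoccupied device that currently holds the user's attention. The scoring and data structures are illustrative, not eyeREASON's actual logic.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Message:
    source: str          # e.g., "fridge"
    priority: float      # e.g., grows as the fridge's timed deadline approaches
    body: str

def choose_notification_device(message: Message,
                               device_attention: Dict[str, float],
                               channel_busy: Dict[str, bool],
                               current_task_priority: float) -> Optional[str]:
    """Pick the device to notify on, given which devices currently hold the
    user's attention (0..1 scores), which channels are occupied, and how the
    message compares with the user's current task."""
    if message.priority <= current_task_priority:
        return None                          # defer for later retrieval by the user
    # Prefer the device the user is attending to, unless its channel is busy.
    candidates = sorted(device_attention, key=device_attention.get, reverse=True)
    for device in candidates:
        if not channel_busy.get(device, False):
            return device
    return candidates[0] if candidates else None
```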
  • One application of eyeREASON is the management of prioritized delivery of unified messages.
  • the following scenario illustrates interactions of a user with various devices enabled with attentive user interfaces, employing eye contact sensing capability, through eyeREASON's attentive reasoning system. It shows how awareness of a user's attentive context may facilitate turn-taking between the user and remote ubiquitous devices.
  • Alex enters his living room, which senses his presence (e.g., via the RF ID tag he is wearing) and reports his presence to his eyeREASON server. He turns on his television, which has live pausing capability (e.g., TiVo, personal video recorder (PVR)).
  • the television is augmented with an attentive user interface having an eye contact sensor, which notifies the server that it is being watched.
  • the eyeREASON server updates the visual and auditory interruption levels of all people present in the living room.
  • Alex goes to the kitchen to get himself a cold drink from his attentive refrigerator, which is augmented with a RF ID tag reader. As he enters the kitchen, his interruption levels are adjusted appropriate to his interactions with devices in the kitchen. In the living room, the TV pauses because its eye contact sensor reports that no one is watching. Alex queries his attentive fridge and finds that there are no cold drinks within. He gets a bottle of soda from a cupboard in the kitchen and puts it in the freezer compartment of the fridge. Informed by a RF ID tag on the bottle, the fridge estimates the amount of time it will take for the bottle to freeze and break. It records Alex's tag and posts a notification with a timed priority level to his eyeREASON server. Alex returns to the living room and looks at the TV, which promptly resumes the program.
  • Alex's eyeREASON server determines that the TV is an appropriate device to use for notifying Alex. It chooses the visual communication channel, because it is less disruptive than audio. A box with a message from the fridge appears in the corner of the TV. As time progresses, the priority of the notification increases, and the box grows in size on the screen, demonstrating with increased urgency that Alex's drink is freezing. Alex gets up, the TV pauses and he sits down at his computer to check his email. His eyeREASON server determines that the priority of the fridge notification is greater than that of his current email, and moves the alert to his computer. Alex acknowledges this alert, and retrieves his drink, causing the fridge to withdraw the notification. Had Alex not acknowledged this alert, the eyeREASON server would have forwarded the notification to Alex's email, or chosen an alternative channel.
  • By placing an attentive user interface in the vicinity of any visual material that one would be interested in tracking the response to, such as advertisements (virtual or real), television screens, and billboards, the attention of users to the visual material can be monitored.
  • Applications include, for example, gathering marketing information and monitoring of the effectiveness of advertisements.
  • This example relates to use of an attentive user interface in a windowing system, referred to herein as “eyeWINDOWS”, for a graphical user interface which incorporates fisheye windows or views that use eye fixation, rather than manual pointing, to select the focus window.
  • the windowing system allocates display space to a given window based on the amount of visual attention received by that window.
  • Use of eye input facilitates contextual activity while maintaining user focus. It allows more continuous accommodation of the windowing system to shifts in user attention, and more efficient use of manual input.
  • Windowing systems of commercial desktop interfaces have experienced little change over the last 20 years.
  • Current systems employ the same basic technique of allocating display space using manually arranged, overlapping windows into the task world.
  • Due to system prompts, incoming email messages, and other notifications, a user's attention shifts almost continuously between tasks.
  • Such behavior requires a more flexible windowing system that allows a user to more easily move between alternate activities.
  • This problem has prompted new research into windowing systems that allow more fluent interaction through, e.g., zooming task bars (Cadiz et al., 2002) or fisheye views (Gutwin, 2002).
  • eyeWINDOWS observes user eye fixations at windows with an LC Technologies eye tracker.
  • the focus window is zoomed to maximum magnification.
  • Surrounding windows contract with distance to the focus window.
  • the enlarged window does not obscure the surrounding contracted windows, such that the user can readily view all windows.
  • eyeWINDOWS affects all active applications.
  • Traditional icons are replaced with active thumbnail views that provide full functionality, referred to herein as “eyecons”. Eyecons zoom into a focus window when a user looks at them.
  • Initial user observations appear to favor the use of key triggering for focus window selection.
  • the following scenario illustrates this process: a user is working on a text in the focus window in the center of the screen.
  • the focus window is surrounded by eyecons of related documents, with associated file names.
  • the user wishes to copy a picture from the document to the right of his focus window. He looks at its eyecon and presses the space bar, and the eyecon zooms into a focus window, while the old focus window shrinks into an eyecon. After having found the picture, he places it in the clipboard and shifts his attention back to the original document. It zooms into a focus window and the user pastes the picture into the document.
  • This scenario illustrates how contextual actions are supported without the need for multiple pointing gestures to resize or reposition windows.
  • EyeWINDOWS also supports more attention-sensitive notification. For example, the user is notified of a message by a notification eyecon at the bottom of the screen. When the user fixates at the notification eyecon it zooms to reveal its message. The notification is dismissed once eyeWINDOWS detects the message was read. This illustrates how an attentive user interface supports user focus within the context of more peripheral events.
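A small sketch of the fisheye space allocation and key-triggered focus switching described for eyeWINDOWS. The exponential falloff and the normalization are assumptions; the text does not give the magnification function.

```python
import math

def window_scales(n_windows: int, focus_index: int,
                  max_scale: float = 1.0, min_scale: float = 0.25,
                  falloff: float = 0.5) -> list[float]:
    """Fisheye allocation of display space: the focus window gets maximum
    magnification and the others contract with their distance from it, so
    the enlarged window never has to obscure its neighbours. Returns each
    window's fraction of the available width."""
    raw = []
    for i in range(n_windows):
        d = abs(i - focus_index)
        raw.append(min_scale + (max_scale - min_scale) * math.exp(-falloff * d))
    total = sum(raw)
    return [s / total for s in raw]

def next_focus(gazed_index: int, focus_index: int, key_pressed: bool) -> int:
    """Focus switches only when the user both fixates an eyecon and presses
    the trigger key (e.g., the space bar)."""
    return gazed_index if key_pressed else focus_index
```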
  • a small computer embedded in the fridge and connected to a network through a TCP/IP connection runs a simple program that allows the fridge to reason about its contents, and interact with the user, by incorporating eye contact with the user.
  • the fridge may contain software for processing and producing speech, and a speech recognition and production engine residing on eyeREASON can advantageously be employed to process speech for it, responding to contextualized verbal queries by a user. This is accomplished by sending xml speech recognition grammars and lexicons from the fridge to eyeREASON that are contextualized upon the state of the fridge's sensing systems.
  • the fridge will send xml grammars and enable speech processing whenever a user is in close proximity to it, and/or making eye contact with the fridge, and/or holding objects from the fridge in his/her hand.
  • the user is connected to the speech recognition and production engine on eyeREASON through a wireless headset (e.g., BlueTooth®). This allows eyeREASON to process speech by the user, with the contextualized grammars provided by the appliance the user is interacting with.
  • EyeREASON determines a) whether speech should be processed; e.g., focus events sent by the appliance on the basis of information from its eye contact sensor; b) for which appliance, and with which grammar speech should be processed; c) what commands should be sent to the appliance as a consequence; and d) what the priority of messages returned from the appliance should be. Messages sent by appliances during synchronous interactions with a user will receive the highest notification levels.
  • the following scenario illustrates the process: User A is standing near his attentive fridge. He asks what is contained in the fridge while looking at the fridge.
  • the fridge senses his presence, detects eye contact, and determines the identity of the user. It sends an xml grammar containing the speech vocabulary suitable for answering queries to user A's eyeREASON server.
  • the eyeREASON server switches its speech recognition lexicon to process speech for the fridge, as instructed by the current XML grammar. It parses the user's speech according to the grammar, recognizes that the user wants a list of items in the fridge, and sends a command to the fridge to provide a list of items, according to the XML specs.
  • the fridge responds by sending a text message to eyeREASON listing the items in the fridge.
  • Since the user is directly engaged in a synchronous interaction with the fridge, eyeREASON decides the message should be forwarded to the user immediately. Since the user has been interacting with the fridge through speech over his headset, eyeREASON uses this same path, speaking the message to the user with its speech production system. The user opens the fridge and retrieves some cheese. The fridge recognizes that the hand of user A is in the fridge, and has removed the cheese. It sends a hand focus event, and subsequently an object focus event to the eyeREASON server with the RF ID of the cheese object, with corresponding grammar for handling any user speech. The user may query any property of the cheese object, for example its expiration date.
  • eyeREASON will record any voice message and tag it with the RF ID of the object the user was holding, as well as the ID of the user. It will stop recording when the user puts the object back into the fridge, tagging the object with a voice message. It forwards this voice message with a store command to the embedded processor in the fridge. The next time any user other than user A retrieves the same object, the fridge will forward the voice message pertaining to this object to that user.
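A toy sketch of the grammar hand-off in this example: an appliance pushes a contextual grammar to the eyeREASON server when it gains the user's attention, and subsequent speech is parsed against that grammar and the resulting command is relayed back. All names and the keyword-matching "parser" are illustrative stand-ins for a real speech recognition engine.

```python
class EyeReasonServer:
    """Sketch of the grammar hand-off: an appliance pushes a contextual
    grammar when it gains the user's attention, and the server parses
    subsequent speech against that grammar and relays the command back."""

    def __init__(self):
        self._active_grammar = None     # e.g., parsed from the appliance's XML grammar
        self._active_device = None

    def on_focus_event(self, device, grammar: dict) -> None:
        """Called when an appliance reports eye contact, proximity,
        or an object in the user's hand."""
        self._active_device = device
        self._active_grammar = grammar

    def on_user_speech(self, utterance: str) -> None:
        """Naive keyword matching stands in for a real recognition engine."""
        if not self._active_grammar or self._active_device is None:
            return
        for phrase, command in self._active_grammar.items():
            if phrase in utterance.lower():
                reply = self._active_device.execute(command)   # execute() is an assumed method
                self.speak(reply)        # spoken back over the user's wireless headset
                return

    def speak(self, text: str) -> None:
        print(f"[speech production] {text}")

# Example: on eye contact the fridge registers {"what is in the fridge": "LIST_CONTENTS"};
# the utterance "What is in the fridge?" is then routed to the fridge and the
# returned item list is spoken back to the user.
```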

Abstract

This invention relates to methods and apparatus for improving communications between humans and devices. The invention provides a method of modulating operation of a device, comprising: providing an attentive user interface for obtaining information about an attentive state of a user; and modulating operation of a device on the basis of the obtained information, wherein the operation that is modulated is initiated by the device. Preferably, the information about the user's attentive state is eye contact of the user with the device that is sensed by the attentive user interface.

Description

    FIELD OF THE INVENTION
  • This invention relates to attentive user interfaces for improving communication between humans and devices. More particularly, this invention relates to use of eye contact/gaze direction information by technological devices and appliances to more effectively communicate with users, in device or subject initiated communications.
  • BACKGROUND OF THE INVENTION
  • Interaction with technological devices is becoming an ever-increasing part of everyday life. However, effectiveness and efficiency of such interaction is generally lacking. In particular, when seeking user input, devices such as computers, cellular telephones and personal digital assistants (PDAs) are often disruptive, because such devices cannot assess the user's current interest or focus of attention. More efficient, user-friendly interaction is desirable in interactions with household appliances and electronic equipment, computers, and digital devices.
  • One way that human-device interactions can be improved is by employing user input such as voice and/or eye contact, movement, or position to allow users to control the device. Many previous attempts relate to controlling computer functions by tracking eye gaze direction. For example, U.S. Pat. No. 6,152,563 to Hutchinson et al. and U.S. Pat. No. 6,204,828 to Amir et al. teach systems for controlling a cursor on a computer screen based on user eye gaze direction. U.S. Pat. Nos. 4,836,670 and 4,973,49 to Hutchinson, U.S. Pat. No. 4,595,990 to Garwin et al., U.S. Pat. No. 6,437,758 to Nielsen et al., and U.S. Pat. No. 6,421,064 and U.S. Patent Application No. 2002/0105482 to Lemelson et al. relate to controlling information transfer, downloading, and scrolling on a computer based on the direction of a user's eye gaze relative to portions of the computer screen. U.S. Pat. No. 6,456,262 to Bell provides an electronic device with a microdisplay in which a displayed image may be selected by gazing upon it. U.S. Patent Application No. 2002/0141614 to Lin teaches enhancing the perceived video quality of the portion of a computer display corresponding to a user's gaze.
  • Use of eye and/or voice information for interaction with devices other than computers is less common. U.S. Pat. No. 6,282,553 teaches activation of a keypad for a security system, also using an eye tracker. Other systems employ detection of direct eye contact. For example, U.S. Pat. No. 4,169,663 to Murr describes an eye attention monitor which provides information simply relating to whether or not a user is looking at a target area, and U.S. Pat. No. 6,397,137 to Alpert et al. relates to a system for selecting left or right side-view mirrors of a vehicle for adjustment based on which mirror the operator is viewing. U.S. Pat. No. 6,393,136 to Amir et al. teaches an eye contact sensor for determining whether a user is looking at a target area, and using the determination of eye contact to control a device. The Amir et al. patent suggests that eye contact information can be used together with voice information, to disambiguate voice commands when more than one voice-activated device is present.
  • While it is evident that considerable effort has been directed to improving user-initiated communications, little work has been done to improve device-initiated interactions or communications.
  • SUMMARY OF THE INVENTION
  • According to a first aspect of the invention there is provided a method of modulating operation of a device, comprising: providing an attentive user interface for obtaining information about an attentive state of a user; and modulating operation of a device on the basis of said obtained information, wherein said operation that is modulated is initiated by said device.
  • In a preferred embodiment, said information about said user's attentive state is eye contact of said user with said device that is sensed by said attentive user interface. In another embodiment, said information about said user's attentive state is eye contact of said user with a subject that is sensed by said attentive user interface. In one embodiment, said subject is human, and said information about said user's attentive state is eye contact of said user with said human that is sensed by said attentive user interface. In another embodiment, said subject is another device. In accordance with this embodiment, when said user's attention is directed toward said other device, said modulating step comprises routing a notification to said other device. In various embodiments, said information about an attentive state of said user is based on one or more indices selected from the group consisting of eye contact, eye movement, eye position, eye gaze direction, voice, body presence, body orientation, head and/or face orientation, user activity, and brain activity/arousal.
  • In one embodiment of the method said sensing of eye contact comprises: obtaining successive full-frame video fields of alternating bright and dark video images of said user's pupils; and subtracting said images between frames to locate said pupils; wherein locating said pupils confirms eye contact of said user. In a preferred embodiment, said sensing of eye contact further comprises: detecting a glint in the user's eyes; and confirming eye contact of said user when said glint is aligned with said pupils.
  • In accordance with the first aspect of the invention, when said user's attention is not directed toward said device, said modulating step comprises notifying said user progressively, from a less interruptive notification to a more interruptive notification. In various embodiments, said notification is of at least one type selected from the group consisting of audio, visual, and tactile.
  • In various embodiments, said attentive user interface may be attached to or embedded in said device, or attached to or embedded in a member of the group consisting of clothing, eyewear, jewelry, and furniture. In some embodiments, the device may be a personal computer, a cellular telephone, a telephone, a personal digital assistant (PDA), or an appliance.
  • In various embodiments, said modulating step may comprise modulating a notification being sent to said user, or forwarding said obtained information to another device or a network of devices.
  • According to a second aspect of the invention there is provided a method of modulating operation of a network of devices, comprising: providing each device of a network of devices with an attentive user interface for obtaining information about an attentive state of a user with respect to each device; and modulating operation of said devices on the basis of said obtained information, wherein said operation that is modulated is initiated by at least one of said devices.
  • In various embodiments, said operation that is modulated may comprise notification, communication, information transfer, and a combination thereof, or routing said notification, communication, information transfer, or combination thereof, to a device with which said user is engaged. The modulating operation may further comprise modulating notification of said user progressively, from a less interruptive notification to a more interruptive notification. In a preferred embodiment, said information about said user's attentive state is eye contact of said user with each said device, said eye contact being sensed by said attentive user interface.
  • According to a third aspect of the invention there is provided a method of modulating communication over a network of at least two devices, comprising: providing a first device of a network of devices with an attentive user interface for obtaining information about a first user's attentive state toward said first device; providing a second device of a network of devices with an attentive user interface for obtaining information about a second user's attentive state toward said second device; providing said first device of said network with a proxy for communicating to said first user said information about said second user's attentive state toward said second device; providing said second device of said network with a proxy for communicating to said second user said information about said first user's attentive state toward said first device; relaying to said network said information about said first and second users' attentive states toward said respective first and second devices; wherein communication between said first and second devices is modulated on the basis of the attentive states of said first and second users toward their respective devices.
  • In one embodiment, communication between said first and second devices is enabled when respective proxies indicate that attentive states of said first and second users are toward respective devices. In other embodiments, the device may be a telephone, and the proxy may be a representation of a user's eyes. In a further embodiment, the network comprises more than two devices.
  • According to a fourth aspect of the invention there is provided a method of modulating operation of a cellular telephone, comprising: providing an attentive user interface for obtaining information about an attentive state of a user; and modulating operation of a cellular telephone on the basis of said obtained information, wherein said operation that is modulated is initiated by said cellular telephone. In a preferred embodiment, said information about said user's attentive state is eye contact of said user with said cellular telephone that is sensed by said attentive user interface.
  • According to a fifth aspect of the invention there is provided a method of modulating operation of a graphical user interface, comprising: providing a graphical user interface for displaying one or more images to a user; determining said user's eye gaze direction to obtain information about which image is being viewed by said user; and using said information to enlarge, on said graphical user interface, said image being viewed by said user, and to shrink, on said graphical user interface, one or more images not being viewed by said user, wherein said enlarging of an image does not obscure said one or more images not being viewed.
  • According to a sixth aspect of the invention there is provided an apparatus for detecting eye contact of a subject looking at a user, comprising an eye contact sensor worn by said user that indicates eye contact of a subject looking at the user. In a preferred embodiment, the apparatus comprises eyeglasses.
  • According to a seventh aspect of the invention there is provided an eye contact sensor, comprising: an image sensor for obtaining successive full-frame video fields of alternating bright and dark video images of a user's pupils; and means for subtracting said images between frames to locate said pupils; wherein said located pupils indicate eye contact of said user. In a preferred embodiment, the eye contact sensor further comprises means for detecting alignment of a glint in said user's eyes with said user's pupils; wherein alignment of said glint with said pupils indicates eye contact of said user.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Embodiments of the invention are described below, by way of example, with reference to the accompanying drawings, wherein:
  • FIG. 1 is a schematic diagram of an eye contact sensor;
  • FIG. 2 depicts an algorithm for an eye contact sensor in accordance with an embodiment of the invention;
  • FIG. 3 depicts an algorithm for an attentive user interface in accordance with an embodiment of the invention;
  • FIG. 4 shows eye glasses equipped with an eye contact sensor in accordance with an embodiment of the invention;
  • FIG. 5 is a schematic diagram of a device equipped with a mechanical eye proxy and an eye contact sensor in accordance with an embodiment of the invention; and
  • FIG. 6 depicts a scheme for telephone eye proxy in accordance with an embodiment of the invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • The present invention is based, at least in part, on the recognition that human-device interaction can be improved by implementing in devices some of the basic social rules that govern human face-to-face conversation. Such social rules are exemplified in the following scenario: Person A is in conversation with person B (or engaged in a task), and person C wishes to gain A's attention. There are a number of ways in which C may do so without interfering with A's activities. Firstly, C may position himself such that A becomes peripherally aware of his presence. Secondly, C may use proximity, movement, gaze or touch to capture A's attention without using verbal interruption. The use of nonverbal visual cues by C allows A to finish his conversation/task before acknowledging C's request for attention, e.g., by making eye contact. If A does not provide acknowledgement, C may choose to withdraw his request by moving out of A's visual field. Indeed, Frolich (1994) found that initiators of conversations often wait for visual cues of attention, in particular, the establishment of eye contact, before launching into their conversation during unplanned face-to-face encounters. Face-to-face interaction is therefore different from the way we typically interact with most technological devices in that it provides a rich selection of both verbal and nonverbal communication channels. This richness is characterized by (i) flexibility in choosing alternate channels of communication to avoid interference or interruption, (ii) a continuous nature of the information conveyed, and (iii) a bi-directionality of communication.
  • Electronic devices that require user input or attention do not follow such social rules in communicating with users. As a result, they often generate intrusive and annoying interruptions. With the advent of devices such as cell phones and personal digital assistants (PDAs; e.g., Blackberry®, Palm Pilot®), users are regularly interrupted with requests for their attention. The present invention solves this problem by augmenting devices with attentive user interfaces: user interfaces that negotiate the attention they receive from, or provide to, users through peripheral channels of interaction. Attentive user interfaces according to the invention follow social rules of human group communication, where, likewise, many people might simultaneously have an interest in speaking. In human group conversations, eye contact functions as a nonverbal visual signal that peripherally conveys who is attending to whom without interrupting the verbal auditory channel. With it, humans achieve a remarkably efficient process of conversational turn-taking. Without it, turn-taking breaks down. Thus, an attentive user interface according to the invention applies such social rules to device-initiated interactions or communications, by assessing a user's attentive state, and making a determination as to whether, when, and how to interrupt (e.g., notify) the user on the basis of the user's attentive state.
  • To facilitate turn-taking between devices and users in a non-intrusive manner, an attentive user interface according to the invention assesses a user's attentive state by sensing one or more parameters of the user. Such parameters are indicative of the user's attentive state, and include, but are not limited to, eye contact, eye movement, eye position, eye gaze direction, voice, body presence, body orientation, head and/or face orientation, activity, and brain activity/arousal. In the case of eye contact, movement, or position, an attentive user interface senses the eyes of the user, or eye contact between the user and a subject (e.g., another human), to determine when, whether, and how to interrupt the user. For example, notification by a PDA seeking user input can be modulated on the basis of whether the user is engaged with the PDA, with another device, or with a subject. The PDA then can decide whether, when, and how to notify; for example, directly, or indirectly via another device with which the user is engaged. Body presence can be sensed in various ways, such as, for example, a motion detector, a radio frequency (RF) ID tag worn by a user and sensed using, e.g., BlueTooth®, a visual tag, electro-magnetic sensors for sensing presence/location/orientation of a user within a magnetic field, and a global positioning system (GPS).
  • As used herein, the term “user” is intended to mean the entity, preferably human, who is using a device.
  • As used herein, the term “device” is intended to mean any digital device, object, machine, or appliance that requires, solicits, receives, or competes for a user's attention. The term “device” includes any device that typically is not interactive, but could be made more user-friendly by providing interaction with a user as described herein.
  • As used herein, the term “subject” is intended to mean the human, device, or other object with which a user might be engaged.
  • As used herein, the term “attentive user interface” is intended to mean any hardware and/or software that senses, receives, obtains, and negotiates a user's attention by sensing one or more indices of a user's attentive state (e.g., eye contact, eye movement, eye position, eye gaze direction, voice, body presence, body orientation, head and/or face orientation, activity, brain activity/arousal), with appropriate hardware and associated algorithms and/or software for interfacing the attentive user interface with a device or a network of devices. An attentive user interface comprises portions for sensing user attentive state and for processing and interfacing/relaying information about the user's attentive state to a device. Such portions can be housed as a unit or as multiple units. Interfacing an attentive user interface with a device comprises providing an output from the attentive user interface to the device, which controls operation of the device. An attentive user interface of the invention can perform one or more tasks, such as, but not limited to, making decisions about user presence/absence, making decisions about the state of user attention, prioritizing communications in relation to current priorities in user attention as sensed by the attentive user interface, modulating channels and modes of delivery of notifications and/or information and/or communications to the user, modulating presentation of visual or auditory information, and communicating information (e.g., indices) about user attention to other subjects.
  • As used herein, the term “attentive state” is intended to mean a measure or index of a user's engagement with or attention toward a subject. Examples of such indices are eye contact, eye movement, eye position, eye gaze direction, voice, body presence, body orientation, head and/or face orientation, activity, and brain activity/arousal.
  • As used herein, the term “notify” or “notification” is intended to mean the signalling or soliciting, usually by a device, for a user's attention. For example, notification can employ any cue(s) that act on a user's senses to solicit the user's attention, such as one or more of audio, visual, tactile, and olfactory cues.
  • As used herein, the term “modulating” is intended to mean controlling, enabling and/or disabling, or adjusting (e.g., increasing and/or decreasing). With respect to notification, modulating includes, for example, turning notification on or off, delaying notification, changing the volume or type of notification, and the like. For example, notification can be gradually modulated from less interruptive (e.g., quiet) to more interruptive (e.g., loud), as time passes without user acknowledgement. Modulating also refers to changing the vehicle or channel for notification, communication, or data transfer; for example, by routing such through a network to a more appropriate device. For example, in the case of an urgent notification, modulation might encompass routing the notification to a device with which the user is engaged, increasing the likelihood that the user receives the notification (see Example 4, below).
  • As used herein, the terms “mediated communication” and “mediated conversation” refer to communication or conversation that takes place through a medium such as video or audio devices/systems, such that there is no face-to-face conversation between the participants. In most mediated communications, participants involved are remotely located relative to one another.
  • In one embodiment of the invention, an attentive user interface dynamically prioritizes the information it presents, and the way it is presented, to a user, such that information processing resources of both user and system are optimally used. This might involve, for example, optimally distributing resources across a set of tasks. An attentive user interface does this on the basis of knowledge—consisting of a combination of measures and models—of the present, and preferably also the past and/or future states of the user's attention, taking into account the availability of system resources. Attentive user interfaces may employ one or more of eye contact, eye movement, eye position, eye gaze direction, voice, body presence, body orientation, head and/or face orientation, activity, brain activity/arousal to detect attentive state. Attentive user interfaces may store any of the above measures as a model, used to govern decisions about the user's attentive state.
  • In a preferred embodiment, an attentive user interface employs eye contact and/or eye gaze direction information, optionally in combination with any further measures of user presence mentioned above. Eye contact sensors as used in the invention are distinguished from eye trackers, in that eye contact sensors detect eye contact when a subject or user is looking at the sensor, whereas eye trackers detect eye movement to determine the direction a subject or user is looking.
  • In some embodiments, an attentive user interface employs an eye contact sensor based on bright-dark pupil detection using a video camera (see, for example, U.S. Pat. No. 6,393,136 to Amir et al.). This technique uses intermittent on-camera axis and off-camera axis illumination of the eyes to obtain an isolated camera image of the user's pupil. The on-axis illumination during one video field results in a clear reflection of the retina through the pupil (i.e., the bright pupil effect). This reflection does not occur when the eyes are illuminated by the off-axis light source in the next video field. By alternating on-axis with off-axis illumination, synchronized with the camera clock, successive video fields produce alternating bright and dark images of the pupil. By subtracting these images in real time, pupils can easily be identified within the field of view of a low-cost camera. Preferably, eyes are illuminated with infrared (IR) light, which does not distract the user.
  • However, accuracy of the eye contact sensor can be improved by measuring the glint, or first Purkinje image, of the eyes. The glint is a reflection of light on the outer surface of the cornea that acts as a relative reference point, which can be used to eliminate the confounding effects of head movements. The glint moves with the head, but does not rotate with the pupil because the eye is spherical. Thus, the position of the glint relative to the pupil can be used to determine the direction a user or subject is looking. For example, when the glint appears inside the pupil, the pupil, glint, and camera are aligned on the camera axis, indicating that the user is looking at the camera, and hence eye contact is detected.
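  • By way of illustration only, the bright/dark pupil subtraction and glint-alignment test described above might be sketched as follows. This is a minimal sketch assuming OpenCV-style image operations on two grayscale frames captured under on-axis and off-axis infrared illumination; the thresholds and the function name detect_eye_contact are illustrative assumptions, not part of the sensor as described.
    # Minimal sketch of bright/dark pupil subtraction with a glint test.
    # Assumes two grayscale frames captured with on-axis and off-axis IR
    # illumination; thresholds are illustrative, not calibrated values.
    import cv2
    import numpy as np

    def detect_eye_contact(bright_frame, dark_frame,
                           pupil_threshold=40, glint_threshold=220,
                           max_glint_offset=3.0):
        # Subtracting the off-axis (dark pupil) frame from the on-axis
        # (bright pupil) frame isolates the retro-reflecting pupils.
        diff = cv2.subtract(bright_frame, dark_frame)
        _, pupil_mask = cv2.threshold(diff, pupil_threshold, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(pupil_mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        contacts = []
        for c in contours:
            if cv2.contourArea(c) < 5:          # reject small noise blobs
                continue
            (px, py), pr = cv2.minEnclosingCircle(c)
            # The corneal glint is a small, very bright spot in the on-axis frame.
            roi = bright_frame[max(0, int(py - pr)):int(py + pr),
                               max(0, int(px - pr)):int(px + pr)]
            if roi.size == 0:
                continue
            gy, gx = np.unravel_index(np.argmax(roi), roi.shape)
            glint_ok = roi[gy, gx] >= glint_threshold
            # Report eye contact when the glint lies near the pupil centre,
            # i.e., pupil, glint, and camera are approximately on-axis.
            offset = np.hypot(gx - pr, gy - pr)
            contacts.append(glint_ok and offset <= max_glint_offset)
        return any(contacts)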
  • We have used this technique in attentive user interfaces to identify eye contact of users at approximately 3 meters distance, using standard 320×240 CCD cameras with analog NTSC imaging. The ability to obtain a reliable estimate of the pupils at larger distances is limited by the resolution of such cameras. Use of mega-pixel CCD cameras, although expensive, makes possible the detection of pupils at greater distances. Alternatively, high-resolution CMOS imaging technology (e.g., Silicon Imaging MegaPixel Camera SI-3170U or SI-3200U) allows the manufacture of low-cost high-resolution eye contact sensors.
  • An example of a high-resolution eye contact sensor is shown in FIG. 1. The high-resolution eye contact sensor 40 comprises an image sensor (i.e., a camera), such as a black and white high-resolution CCD or CMOS image sensor (3 Mpixels or more), with a multifocus lens 48. Preferably, infrared light is used to illuminate the eyes, and accordingly an infrared filter is disposed beneath the lens 48. The output of the image sensor is connected to circuitry which uses the camera frame sync signal to illuminate the space in front of the camera with on-axis light produced by, e.g., an array of infrared LEDs 42, and off-axis light produced by, e.g., two arrays of infrared LEDs 44,52. On-axis and off-axis light are produced alternately with odd and even frames; for example, on-axis light is produced during each odd frame and off-axis light during each even frame. Images are processed to locate the user's/subject's eyes, and corresponding information is relayed to hardware/software of an attentive user interface. The information is used by the attentive user interface to determine whether, how, when, etc., to interrupt or send a notification to a user. In some embodiments the image processing circuitry and software may reside in the eye contact sensor unit 40, whereas in other embodiments the circuitry and software are remote (e.g., associated with a host computer) and suitably connected to the eye contact sensor unit 40 using, e.g., a high-bandwidth video link, which can be wireless or wired (e.g., Apple® FireWire® or USB 2 based). As shown in the eye protocol specification below, information relating to eye contact may include whether eyes are found in the image, where the eyes are, how many eyes are present, whether the eyes are blinking, and, if the unit is calibrated, what the eyes are looking at in screen coordinates. The information may also include a flag for each eye when the eyes are looking straight at the camera.
  • Eye Protocol Specification ({ } = Data set; ( ) = Subset)
  • 1. EYE_NOT_FOUND
    ID End
    0 CR & LF
    ASCII CR = 13 (0D hex)
  • 2. HEAD_FOUND
    ID D1 D2 End
    1 Number of Heads {(T L B R)1 . . . (T L B R)9} CR & LF
    D1: Number of Heads, D1 = {1, . . . , 9}
    D2: Head Boundary Boxes, D2 = {(Top Left Bottom Right)1, . . . , (Top Left Bottom Right)9}
    Numbers in ASCII format (unsigned int) separated by ASCII space
  • 3. EYE_FOUND
    ID D1 D2 End
    2 Number of Eyes {(Xg Yg Xp Yp)1 . . . (Xg Yg Xp Yp)9} CR & LF
    D1: Number of Eyes, D1 = {1, . . . , 9}
    D2: Glint and Pupil Coordinates, D2 = {(Xg Yg Xp Yp)1, . . . , (Xg Yg Xp Yp)9}
    Numbers in ASCII format (unsigned int) separated by ASCII space
  • 4. EYE_BLINK
    ID D1 D2 End
    3 Number of Eyes {F1 . . . F9} CR & LF
    D1: Number of Eyes, D1 = {1, . . . , 9}
    D2: Blink Flags, D2 = {F1 . . . F9}, F = {0, 1}, 0 = NOT_BLINK, 1 = BLINK
    Numbers in ASCII format (unsigned int) separated by ASCII space
  • 5. EYE_CONTACT
    ID D1 D2 End
    4 Number of Eyes {F1 . . . F9} CR & LF
    D1: Number of Eyes, D1 = {1, . . . , 9}
    D2: Eye Contact Flags, D2 = {F1 . . . F9}, F = {0, 5}, 0 = No Contact, 5 = Contact
    Numbers in ASCII format (unsigned int) separated by ASCII space
  • 6. CALIBRATED_SCREEN_COORDINATE
    ID D1 End
    5 (x, y) CR & LF
    D1: Screen Coordinate (x, y)
    Numbers in ASCII format (unsigned int) separated by ASCII space
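  • By way of illustration only, a host-side parser for messages in the above format might be written as follows. The field layout follows the specification above, but the framing details (one CR/LF-terminated line of space-separated ASCII unsigned integers per message) and the function name are interpretive assumptions.
    # Illustrative parser for the eye protocol messages described above.
    # Each message is assumed to arrive as one CR/LF-terminated line of
    # space-separated ASCII unsigned integers, beginning with the message ID.
    MESSAGE_NAMES = {
        0: "EYE_NOT_FOUND",
        1: "HEAD_FOUND",
        2: "EYE_FOUND",
        3: "EYE_BLINK",
        4: "EYE_CONTACT",
        5: "CALIBRATED_SCREEN_COORDINATE",
    }

    def parse_eye_protocol_line(line):
        fields = [int(tok) for tok in line.strip().split()]
        msg_id, payload = fields[0], fields[1:]
        name = MESSAGE_NAMES.get(msg_id, "UNKNOWN")
        if msg_id == 0:                      # EYE_NOT_FOUND: no payload
            return {"type": name}
        if msg_id == 1:                      # HEAD_FOUND: count + (T L B R) per head
            n = payload[0]
            boxes = [tuple(payload[1 + 4 * i:5 + 4 * i]) for i in range(n)]
            return {"type": name, "heads": boxes}
        if msg_id == 2:                      # EYE_FOUND: count + (Xg Yg Xp Yp) per eye
            n = payload[0]
            eyes = [tuple(payload[1 + 4 * i:5 + 4 * i]) for i in range(n)]
            return {"type": name, "eyes": eyes}
        if msg_id in (3, 4):                 # EYE_BLINK / EYE_CONTACT: count + flag per eye
            n = payload[0]
            return {"type": name, "flags": payload[1:1 + n]}
        if msg_id == 5:                      # calibrated screen coordinate (x, y)
            return {"type": name, "screen_xy": tuple(payload[:2])}
        return {"type": name, "raw": payload}

    # Example: an EYE_CONTACT report for two eyes, both in contact (flag value 5).
    print(parse_eye_protocol_line("4 2 5 5"))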
  • Preferably, the eye contact sensor determines the orientation of pupils with a spatial accuracy of, for example, 1 meter at 5 meters distance (about 10 degrees of arc) and a head movement tolerance of, for example, 20 degrees of arc, at a distance of 5 meters or more. For best performance, the frame rate of the eye contact sensor's camera should be as high as possible, and in the order of 100 Hz. The effective sampling rate of the sensor preferably corresponds to at least 20 Hz, given that the minimum human fixation time is in the order of 100 ms.
  • It should be noted that the use of a subtraction algorithm to locate pupils results in a tradeoff between temporal and spatial resolution. In one embodiment, image subtraction occurs within frames (see, e.g., U.S. Pat. No. 6,393,136 to Amir et al.), resulting in an effective spatial resolution of the sensor of only half that of the camera. Here, the image processing algorithm and LEDs are synchronized with half-frame fields generated by an NTSC or other interlaced camera technology.
  • However, the invention provides, in one aspect, a method and apparatus for obtaining eye contact information in which image subtraction occurs between frames (by subtracting an odd frame from an even frame, or vice versa), as shown in the algorithm of FIG. 2. This allows the use of the full camera resolution, and thus a greater tracking range, while reducing the effective frame or sampling rate by half. The subtraction algorithm and LEDs are synchronized with a full frame clock generated by the camera and the minimum sampling frequency of the camera is preferably in the order of about 30 to about 40 Hz.
  • In other embodiments an attentive user interface uses eye gaze direction as input about a user's attentive state. Eye gaze direction is detected by an eye tracker, such as that described in detail in U.S. Pat. No. 6,152,563 to Hutchinson et al.
  • An attentive user interface of the invention may be applied to user-initiated control of a device using, for example, eye contact and/or eye gaze direction, with or without further input, such as voice, body presence, and the like. However, the invention is particularly applicable to device-initiated communication with a user, such as, for example, notifying a user of an incoming message, or of a task requiring user input. As shown in FIG. 3, an attentive user interface, running on such a device, senses and evaluates one or more indices of user attention (e.g., eye contact, eye movement, eye position, eye gaze direction, voice, body presence, body orientation, head and/or face orientation, activity, brain activity/arousal) to determine whether, when, and how to notify, interrupt, or respond to the user, open/close communication channels, and the like. By progressively sampling the user's attention, and appropriately signaling notifications, the user can be notified with minimal interruption. For example, as shown in FIG. 3, an attentive user interface might progressively signal for the user's attention. Initially this may happen through a channel that is peripheral to the user's current activity. The interface may then wait for user acknowledgement, provided through, e.g., an input device, before opening a direct channel to the user. If, however, no user acknowledgement is received within a given period, the attentive user interface may proceed to a more direct channel to the user, increase the urgency level of the notification, or defer notification.
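  • The progressive notification process of FIG. 3 may be summarized by the following minimal sketch. The channel names, time-out value, and acknowledgement test are illustrative assumptions rather than features of the interface as described.
    import time

    # Notification channels ordered from least to most interruptive
    # (illustrative names only).
    CHANNELS = ["peripheral_icon", "vibrate", "private_chime", "audible_ring"]

    def progressively_notify(user_acknowledged, deliver, ack_timeout=10.0):
        """Escalate a notification until the user acknowledges it or the most
        interruptive channel has been tried."""
        for channel in CHANNELS:
            deliver(channel)                  # signal through this channel
            deadline = time.time() + ack_timeout
            while time.time() < deadline:
                if user_acknowledged():       # e.g., eye contact with the device
                    return True               # open a direct channel to the user
                time.sleep(0.1)
        return False                          # defer notification / leave a message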
  • In one embodiment, information obtained about a user's attentive state is communicated to one or more subjects who might wish to contact the user. Such communication can be through any network by which the user and subject(s) are connected, such as a local area network, a wide area network (e.g., the internet), or hard-wired or wireless (e.g., cellular) telephone network. Subjects can evaluate the information about the user's attentive state, and, using rules of social engagement, decide whether or not to contact the user. For example, in telephonic communications (as described in detail in Example 1), information about the user's current attentive state is communicated to a subject attempting to telephone the user. The subject can decide whether to proceed with the telephone call on the basis of such information.
  • Further, the invention provides for an environment in which multiple devices, each equipped with attentive user interfaces, are networked, such that information concerning to which device the user's attention is directed is available to all devices on the network. By progressively signaling notifications (e.g., in the case of a cell phone, the phone starts by ringing quietly and progressively rings louder depending on urgency of the call and/or proximity to the user; or, an icon on the cell phone's screen changes as urgency increases), and by determining which device the user is currently attending to, a notification and/or message can be forwarded to the appropriate device so that the message is received with minimal interruption of the user's primary task.
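  • A network of attentive devices might route a notification to the device currently receiving the user's attention along the following lines. The registry structure, time window, and function names in this sketch are hypothetical.
    # Hypothetical attention registry shared by networked attentive devices.
    # Each device reports the last time it sensed eye contact from the user.
    last_eye_contact = {}   # device_id -> timestamp (seconds)

    def report_eye_contact(device_id, timestamp):
        last_eye_contact[device_id] = timestamp

    def route_notification(message, now, attention_window=5.0, default_device="cell_phone"):
        """Deliver the message to the device the user most recently attended,
        falling back to a default device if no device has recent eye contact."""
        recent = {d: t for d, t in last_eye_contact.items()
                  if now - t <= attention_window}
        target = max(recent, key=recent.get) if recent else default_device
        return target, message

    # Example: the TV saw eye contact 1 s ago, so the message is routed there.
    report_eye_contact("tv", 99.0)
    print(route_notification("Your drink is freezing", now=100.0))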
  • There are numerous applications of an attentive user interface according to the invention, in addition to those discussed above. In some embodiments, the hardware component of the attentive user interface is small and lightweight, such that it can be embedded in or attached to a personal electronic device such as a cell phone, jewelry, clothing, eyeglasses, and the like. For example, FIG. 4 shows a front view of a pair of eye glasses having an eye contact sensor attached thereto. The eye glasses 2 have a frame 4 and lenses 6 and 8. A camera lens 10 is embedded in the frame 4 of the glasses, pointing outward. Surrounding the camera lens 10 is an array of on-axis LED illuminators 12. Two rows of off-axis LED illuminators 14,16 are positioned near the outer peripheries of the lenses 6,8. The camera feed as well as the LED arrays are connected through wires to a control unit worn by the user. This control unit contains power and circuitry for illumination of the LEDs and camera synchronization. In one embodiment, the control unit performs computer vision processing according to an algorithm using an embedded processor board. In such an embodiment, data is sent over a wireless or wired network link to a host. In another embodiment, camera images are sent over a wireless or wired network to an external computer vision processing facility. Eye contact glasses can be used, for example, to open/close communication channels between co-located but distant users, or for regulating messaging to a user or between two or more users.
  • One application of eye contact glasses is to track how many individuals have looked at the user during a specified period. These data or statistics can be made available to the user through an LCD display, or sent to a networking device for further processing or display. Combined with computer vision or other means, the eye contact glasses can determine who has looked at the user, for how long, and when. In one embodiment, the eye contact glasses provide a personal attention sensor (i.e., a "hit counter"), which indicates to a user when he/she is being looked at by a subject. For example, a counter could be incremented whenever the user has been looked at by a subject, to provide information about the number of "hits". Such an embodiment can provide amusement to users in certain social settings.
  • In other embodiments, an attentive user interface of the invention includes a sensor for detecting one or more indices of user attentive state in combination with a “proxy”.
  • As used herein, the term “proxy” is intended to mean any hardware or virtual (e.g., an image on a computer screen) representation of a (remote) subject's attention. For example, a proxy can be a pair of eyes, either mechanical or virtual (e.g., pictured on a computer screen), that inform a user of the state of attention of a subject with which the user is attempting to establish mediated communication (e.g., via telephone). Eye proxies are preferred because of what they represent; that is, the establishment of eye contact is related to the establishment of communication between individuals.
  • In such an embodiment, an attentive user interface, including a proxy, not only obtains information about the attention of its user, but also functions to communicate robot, machine, or remote user attention directed towards a user. For example, an eye contact sensor can be mounted on a robotic actuation device that allows rotation of the eye contact sensor in 3 orientation directions. The eye contact sensor functions as virtual eyes directing the robotic device in establishing eye contact with the user when the attentive user interface's attention is directed towards that user. To convey attention, the robotic device may feature a pair of mechanical eyes, or an image or video of a remote user or computer agent. FIG. 5 shows an embodiment in which a pair of robotic mechanical eyes 60 and an eye contact sensor with camera lens 62, on-axis LED array 64, and off-axis LED arrays 66,68 are mounted on a device 70, such as a telephone.
  • In accordance with this embodiment, an attentive user interface with a sensor such as an eye contact sensor or an eye tracker can be used with any device to sense whether a user is available for communication, and whether a user is communicating with that device, via any route such as a keyboard, speech recognition, or manual interactions. Conversely, a proxy can signal the device's attention to the user by alignment of the eye contact sensor and/or virtual eyes with the user's eyes. If the device has not recently received visual attention from the user, it chooses an unobtrusive method to signal the user (e.g., by vibrating, rotating its eyeballs, or using any other nonverbal means to obtain attention). A device remains in the periphery of user activity until the user has acknowledged the device's request for attention. Once the device receives user attention, as measured with the eye contact sensor or through other means, a mediated communication channel with the user is established, including, for example, speech production or display of information. Example 2 describes an example of this embodiment in detail.
  • In further embodiments, an attentive user interface can be embedded in digital devices such as computers, personal digital assistants (PDAs), PVRs/TVs/VCRs/cameras, telephones, household appliances, furniture, vehicles, and any other location where information about a user's attentive state can advantageously be used to modulate device behavior (see the Examples, below). An attentive user interface can be used to control video and audio recording and transmission, or to sense attention during remote or colocated meetings for retroactive automated editing (i.e., a virtual director), or for video conferencing camera selection and remote personal attention sensing (see Example 3, below). Yet other applications include, but are not limited to, remote (instant) messaging (i.e., open/close communication with a user at a distance, such as during remote arbitrage); colocated messaging (i.e., open/close communication with a user at a physical distance); dynamic email filter based on time spent reading; intelligent agent communication of attention; robot communication of attention; avatar/remote person communication of attention; presence detection for any kind of messaging system; receipt of message acknowledgement for any kind of system; notification negotiation (i.e., user acknowledgement of information presentation); notification optimization (i.e., forwarding to current device); optimization of information presentation (i.e., present notification or other information on device or part of device where user is looking); for pointing to items on displays; to determine target of keyboard commands; look to talk; eye telepointing systems (i.e., presentation and remote collaboration); vehicle navigation system operation (selection of information retrieval system); vehicle phone call answering; vehicle operator fatigue sensor; visualization and monitoring of user attention (see Example 4); attentive reasoning networks for telecommunication and telemarketing purposes (e.g., determine where users are and what they pay attention to (see Example 5), to forward calls, or to data-mine subjects in user's attention); displaying networks of attention between users or between users and subjects; surveillance and security camera monitoring; and modifying the size, resolution, or content of a window on a graphical user interface (see Examples 6 and 7).
  • The contents of all cited patents, patent applications, and publications are incorporated herein by reference in their entirety.
  • The invention is further described by way of the following non-limiting examples.
  • Example 1 Attentive Cell Phone
  • In this example, an attentive user interface was used to apply some of the basic social rules that surround human face-to-face conversation (discussed above) to a personal electronic device, in this case a cell phone. However, the embodiment described in this example could be implemented in any electronic device or appliance.
  • The subtlety of interruption patterns typically used during human face-to-face communication is completely lost when using cell phones. Firstly, a person making a call is usually unaware of the interruptibility of the user being called. Secondly, there is limited freedom in choosing alternative channels of interruption. Thirdly, the channels that do exist do not allow for any subtlety of expression. In this example, an attentive cell phone was created by augmenting a Compaq iPAQ handheld with an attentive user interface employing a low-cost wearable eye contact sensor for detecting when a user is in a face-to-face conversation with another human.
  • Wearable microphone headsets are becoming increasingly common with cell phones. The signal from such microphones is available with high fidelity even when the user is not making a call. We modified the cell phone to accept such input, allowing it to monitor user speech activity to estimate the chance that its user is engaged in a face-to-face conversation. Wireless phone functionality was provided by voice-over-IP software connected through a wireless LAN to a desktop-based call router. An attentive state processor running on the same machine sampled the energy level of the voice signal coming from the cell phone. To avoid triggering by non-speech behavior, we used a simplified version of a turn detection algorithm described by Vertegaal (1999). That is, when more than half the samples inside a one-second window indicate speech energy, and those samples are evenly balanced across the window, the probability of speech activity by the user is estimated at 100%. For each second that the user is silent, 5% is subtracted from this estimate, until zero probability is reached. Thus, we achieved a short-term memory of 20 seconds for speech activity by the user.
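  • A minimal sketch of this speech-activity estimate is given below. The windowing rule (more than half of the samples in a one-second window showing speech energy, evenly balanced across the window) and the 5%-per-second decay follow the description above; the energy threshold, the interpretation of "evenly balanced" as both window halves containing speech, and the class name are assumptions.
    # Sketch of the attentive state processor's speech-activity estimate.
    # energy_samples: per-sample speech energy over the last second;
    # the threshold value is an illustrative assumption.
    def window_has_speech(energy_samples, threshold=0.1):
        flags = [e > threshold for e in energy_samples]
        if sum(flags) <= len(flags) / 2:      # need more than half the samples
            return False
        # One simple reading of "evenly balanced across the window":
        # both halves of the window must contain speech energy.
        half = len(flags) // 2
        return any(flags[:half]) and any(flags[half:])

    class SpeechActivityEstimator:
        def __init__(self):
            self.probability = 0.0            # 0..100 (%)

        def update(self, energy_samples):
            if window_has_speech(energy_samples):
                self.probability = 100.0      # user judged to be speaking
            else:
                # 5% decay per silent second gives a 20-second short-term memory
                self.probability = max(0.0, self.probability - 5.0)
            return self.probability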
  • Speech detection works well in situations where the user is the active speaker in conversation. However, when the user is engaged in prolonged listening, speech detection alone does not suffice. Given that there is no easy way to access the speech activity of an interlocutor without violating privacy laws, we used an alternative source of input, eye contact.
  • According to Vertegaal (1999), eye tracking provides an extremely reliable source of information about the conversational attention of users. In dyadic conversations, speakers look at the eyes of their conversational partner for about 40% of the time. The eye contact sensor detected eye gaze toward a user by an interlocutor (i.e., a subject) to determine when the user was engaged in a conversation with the subject. In one embodiment, the eye contact sensor was mounted on a cap worn on the user's head. In another embodiment, the sensor was embedded in the eye glasses worn by the user (see above and FIG. 4). The sensor consisted of a video camera with a set of infrared LEDs mounted on-axis with the camera lens. Another set of LEDs was mounted off-axis.
  • By synchronizing the LEDs with the camera clock, bright and dark pupil effects were produced in alternate fields of each video frame. A simple algorithm found any eyes in front of the user by subtracting the even and odd fields of each video frame (Morimoto, 2000). The LEDs also produced a reflection from the cornea of the eyes. These glints appeared near the center of the detected pupils when the subject was looking at the user, allowing the sensor to detect eye contact without calibration. By mounting the sensor on the head, pointing outwards, the sensor's field of view was always aligned with that of the user. Sensor data was sent over a TCP/IP connection to the attentive state processor, which processes the data using an algorithm similar to that used for speech to determine the probability that the user received gaze by an onlooker in the past 20 seconds.
  • The attentive state processor determined the probability that the user was in a conversation by summing the speech activity and eye contact estimates. The resulting probability was applied in two ways. Firstly, it set the default notification level of the user's cell phone. Secondly, it was communicated over the network to provide information about the status of the user to potential callers.
  • Communicating Attentive State to Callers
  • When the user opens his/her contact list to make a phone call, the attentive phone updates the attentive state information for all visible contacts. In this example, below the contact's name a menu shows the preferred notification channel. Notification channels are listed according to their interruption level: message; vibrate; private knock; public knock; and public ring. Users can set their preferred level of interruption for any attentive state. They can also choose whether to allow callers to override this choice. When contacts are available for communication, their portraits display eye contact. A typical preferred notification channel in this mode is a knocking sound presented privately through the contact's head set. When a user is busy, his/her portrait shows the back of his/her head. A preferred notification channel in this mode is a vibration through a pager unit. When a request times out, callers may choose a different notification strategy, if allowed. However, in this mode the contact's phone will never ring in public. Users can press a “Don't Answer” button to manually forestall notifications by outside callers for a set time interval. This is communicated to callers by turning the contact's portrait into a gray silhouette. Offline communication is still possible in this mode, allowing the user to leave voicemail or a text message.
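  • A caller-side sketch of choosing a notification channel from the contact's communicated attentive state might look as follows. The state names, the per-state channel preferences, and the override rule are illustrative assumptions consistent with, but not identical to, the behavior described above.
    # Notification channels listed from least to most interruptive,
    # as in the attentive phone's contact list.
    CHANNEL_ORDER = ["message", "vibrate", "private knock", "public knock", "public ring"]

    # Hypothetical per-state preferences: available contacts accept a private
    # knock, busy contacts prefer vibration, do-not-disturb allows offline only.
    PREFERRED_CHANNEL = {
        "available": "private knock",
        "busy": "vibrate",
        "do_not_disturb": "message",
    }

    def choose_channel(contact_state, requested=None, allow_override=False):
        preferred = PREFERRED_CHANNEL[contact_state]
        if requested and allow_override:
            # Even with override, never ring in public while the contact is busy.
            if contact_state != "available" and requested == "public ring":
                return preferred
            return requested
        return preferred

    # Example: a caller escalates to a public knock on a busy contact who allows it.
    print(choose_channel("busy", requested="public knock", allow_override=True))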
  • The above example demonstrates how the interruptiveness of notification of a device such as a cell phone can be reduced by allowing a) the device to sense the attentive state of the user, b) the device to communicate this attentive state to subjects, and c) subjects to follow social rules of engagement on the basis of this information. Secondly, interruptiveness is reduced by the device making intelligent decisions about its notification method on the basis of obtained information about the user's attentive state.
  • Example 2 Telephone Proxy
  • Mediated communications systems such as a telephone typically require callers to interrupt remote individuals before engaging in conversation. While previous research has focused on solving this problem by providing awareness cues about the other person's availability for communication, there has been little work on supporting the negotiation of availability that typically precedes communication in face-to-face situations. Face-to-face interactions provide a rich selection of verbal and non-verbal cues that allow potential interlocutors to negotiate the availability of their attention with great subtlety.
  • In this example we present a mechanism for initiating mediated conversations through eye contact. In our attentive telephone, referred to herein as “eyePHONE”, telephones were equipped with an attentive user interface including an eye proxy and an eye contact sensor. The eye proxy serves as a surrogate that indicates to a user the availability and attention of a remote user for communication, and the eye contact sensor conveys information about the user's attention to the remote user. Users initiate a call by jointly looking at each other's eye proxy. This allows users to implement some of the basic social rules of face-to-face conversations in mediated conversations. This example relates to use of only two devices (telephones); however, it will be understood that this technology could be applied to any number of devices on a network.
  • The eye proxy consisted of a pair of Styrofoam® eyes, actuated by a motorized Sony EVI-D30 camera. The eyes were capable of rotating 180° horizontally and 80° vertically around their base. Eye contact of a user looking at the eye proxy was detected by an eye contact sensor, as described above (see FIG. 5), mounted above the eyes. Once the pupils of a user were located, the proxy maintained eye contact by adjusting the orientation of the eyes such that pupils stayed centered within the eye contact sensor image. Audio communication between eyePHONES was established through a voice-over-IP connection.
  • To communicate the negotiation of mutual attention, we developed a set of gestures for eyePHONEs, shown in FIG. 6. With reference to FIG. 6, the following scenario illustrates how users may gradually negotiate connections through these eye gestures: Connor wishes to place a call to Alex. He looks at Alex's proxy, which begins setting up a voice connection after a user-configurable threshold of 1.5 s of prolonged eye contact. The proxy communicates that it is busy by iteratively glancing up—and looking back at Connor (see FIG. 6b ). On the other side of the line, Connor's proxy starts moving its eyes, and uses the eye contact sensor to find the pupils of Alex (see FIG. 6a ). Alex observes the activity of Connor's proxy on his desk, and starts looking at the proxy's eye balls. When Connor's proxy detects eye contact with Alex, the eyePHONES establish a voice connection (see FIG. 6c ). If Alex does not want to take the call, he either ignores the proxy or looks away after having made brief eye contact. Alex's proxy on Connor's desk conveys Alex's unavailability by shaking its eyes, breaking eye contact, and not establishing a voice connection (see FIG. 6d ). If Connor decides his call is too urgent, he may choose to press a button that produces an audible ring. Optionally, calls may be set to complete automatically when proxies determine a lack of eye contact over a user-configurable time period.
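  • The negotiation described in this scenario can be summarized as a simple state machine, sketched below. The state names and the methods assumed on the proxy objects (glance_up, seek_eye_contact, shake_eyes) are hypothetical; only the 1.5-second eye contact threshold is taken from the description above.
    # Simplified state machine for eyePHONE call negotiation (a sketch only;
    # state names and the proxy object API are hypothetical).
    EYE_CONTACT_THRESHOLD = 1.5   # seconds of prolonged eye contact to initiate a call

    class EyePhoneCall:
        def __init__(self, local_proxy, remote_proxy):
            self.state = "idle"
            self.local = local_proxy      # caller's proxy of the callee
            self.remote = remote_proxy    # callee's proxy of the caller

        def on_caller_eye_contact(self, duration):
            if self.state == "idle" and duration >= EYE_CONTACT_THRESHOLD:
                self.state = "requesting"
                self.local.glance_up()            # "busy" gesture on the caller's desk
                self.remote.seek_eye_contact()    # callee's proxy searches for pupils

        def on_callee_eye_contact(self):
            if self.state == "requesting":
                self.state = "connected"          # mutual attention: open voice channel

        def on_callee_look_away(self):
            if self.state == "requesting":
                self.state = "declined"
                self.local.shake_eyes()           # convey unavailability, no voice connection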
  • EyePHONES were also used to represent multiple participants during conference calls. Unlike regular conference calls, the negotiation of connections using nonverbal cues allows group members to enter at different times without interrupting the meeting. Furthermore, we implemented a “cocktail party” feature to facilitate the establishment of side conversations. When this is active, the speaker volume of a person's proxy depends on the amount of eye contact received from that person.
  • Example 3 Audio/Video Applications
  • Attentive user interfaces using eye contact sensors may function to direct video cameras, or recording facilities, or to deliver audiovisual content. By mounting an eye contact sensor on a camera, and connecting its signal to the recording of this camera, an automated direction system can automatically switch to the camera currently looked at by a presenter.
  • Similarly, televisions and other audiovisual content delivery systems can be augmented with eye contact sensors to determine whether that content is being viewed, and to take appropriate action when it is no longer viewed. In combination with a personal video recording system, this may involve tracking user attention automatically for various shows, skipping commercials on the basis of perceived attentiveness, modulating volume level or messages delivered through that medium, or live pausing of audiovisual material.
  • In a video conferencing system, eye contact sensors or related eye tracking technologies may be used to ensure that eye contact with a user is captured at all times, by switching to one of multiple cameras positioned behind a virtual display such that the camera closest to where the user is looking is always selected for broadcast. Quality of service of the network connection, including the resolution of audio and video data, can be modulated according to which person is being looked at, as measured by an eye contact sensor or other eye tracking device.
  • Example 4 Attention Monitor
  • As an attention monitor, an attentive user interface includes an eye contact sensor, optionally in conjunction with other sensors for measuring other indices of the attentive state of a user, and software to monitor what device, person, or task a user is attending to. This information can be used, for example, to determine the optimal channel of delivering information, prioritize the delivery and notification of messages, appointments, and information from multiple devices or users across a network, and generally manage the user's attention space.
  • As used herein, the term “attention space” refers to the limited attention a user has available to process/respond to stimuli, given that the capacity of a user to process information simultaneously from various sources is limited.
  • Software augmented with sensing systems, including eye contact sensors, functions as an intermediary in the management of a user's physical attention. Thus, miniaturized eye contact sensors can be embedded in, and augment, small electronic devices such as PDAs, cell phones, personal entertainment systems, appliances, or any other object to deliver information when a user is paying attention to the device, deferring that information's delivery when the user's attention is directed elsewhere. This information may be used, for example, to dynamically route audio or video calls, instant messages, email messages, or any other communications to the correct location of the user's current attention, and to infer and modulate quality of service of the network.
  • In environments with many potential subjects requesting a user's attention, attentive user interfaces need a dynamic model of the user's attentive context to establish a gradual and appropriate notification process that does not overload the user. This context includes which task, device, or person the user is paying attention to, the importance of that task, and the preferred communication channel to contact the user. The invention provides a personalized communications server, referred to herein as "eyeREASON", that negotiates all remote interactions between a user and attentive devices by keeping track of the user's attentive context. In one embodiment, eyeREASON is an advanced personal unified messaging filter, not unlike an advanced spam filter. EyeREASON decides, on the basis of information about the user's prior, current, and/or future attentive state, the priority of a message originating from a subject in relation to that of tasks the user is attending to. By examining parameters of the message and user task(s), including attentive states of subjects pertaining to that message, eyeREASON makes decisions about whether, when, and how to forward notifications to the user, or to defer message delivery for later retrieval by the user. A message can be in any format, such as email, instant messaging or voice connection, speech recognition, or messages from sensors, asynchronous or synchronous. In the embodiment of a speech recognition and production interface, any speech communication between a user and device(s) can be routed through a wired or wireless headset worn by the user, and processed by a speech recognition and production system on the server. As the user works with various devices, eyeREASON switches its vocabulary to the lexicon of the focus device, sending commands through that device's input/output (I/O) channels. Each device reports to the eyeREASON server when it senses that a user is paying attention to it. EyeREASON uses this information to determine when and how to relay messages from devices to the user. Using information about the attentive state of the user, such as what devices the user is currently operating, what communication channels with the user are currently occupied, and the priority of the message relative to the tasks the user is engaged in, eyeREASON dynamically chooses an optimal notification device with appropriate channels and levels of notification. Notifications can migrate between devices, tracking the attention of the user, as illustrated by the scenario below. One application of eyeREASON is the management of prioritized delivery of unified messages.
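  • The following minimal sketch illustrates the kind of arbitration eyeREASON performs when deciding whether, when, and how to forward a notification. The numeric priorities, channel names, and function signature are assumptions introduced for illustration only.
    # Sketch of eyeREASON-style notification arbitration; priorities, channel
    # names, and the structure of the attentive context are assumptions.
    def decide_delivery(message_priority, current_task_priority,
                        focus_device, occupied_channels):
        """Return (device, channel) for delivery, or None to defer the message."""
        if message_priority <= current_task_priority:
            return None                                   # defer for later retrieval
        # Prefer the device the user is attending to, and the least disruptive
        # channel that is not already occupied.
        for channel in ("visual", "audio"):
            if channel not in occupied_channels:
                return focus_device, channel
        return focus_device, "visual"

    # Example: a timed fridge notification outranks passive TV viewing, so it
    # is rendered visually on the TV the user is watching.
    print(decide_delivery(message_priority=7, current_task_priority=3,
                          focus_device="tv", occupied_channels={"audio"}))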
  • The following scenario illustrates interactions of a user with various devices enabled with attentive user interfaces, employing eye contact sensing capability, through eyeREASON's attentive reasoning system. It shows how awareness of a user's attentive context may facilitate turn-taking between the user and remote ubiquitous devices. Alex enters his living room, which senses his presence (e.g., via the RF ID tag he is wearing) and reports his presence to his eyeREASON server. He turns on his television, which has live pausing capability (e.g., TiVo, personal video recorder (PVR)). The television is augmented with an attentive user interface having an eye contact sensor, which notifies the server that it is being watched. The eyeREASON server updates the visual and auditory interruption levels of all people present in the living room. Alex goes to the kitchen to get himself a cold drink from his attentive refrigerator, which is augmented with a RF ID tag reader. As he enters the kitchen, his interruption levels are adjusted appropriate to his interactions with devices in the kitchen. In the living room, the TV pauses because its eye contact sensor reports that no one is watching. Alex queries his attentive fridge and finds that there are no cold drinks within. He gets a bottle of soda from a cupboard in the kitchen and puts it in the freezer compartment of the fridge. Informed by a RF ID tag on the bottle, the fridge estimates the amount of time it will take for the bottle to freeze and break. It records Alex's tag and posts a notification with a timed priority level to his eyeREASON server. Alex returns to the living room and looks at the TV, which promptly resumes the program. When the notification times out, Alex's eyeREASON server determines that the TV is an appropriate device to use for notifying Alex. It chooses the visual communication channel, because it is less disruptive than audio. A box with a message from the fridge appears in the corner of the TV. As time progresses, the priority of the notification increases, and the box grows in size on the screen, demonstrating with increased urgency that Alex's drink is freezing. Alex gets up, the TV pauses and he sits down at his computer to check his email. His eyeREASON server determines that the priority of the fridge notification is greater than that of his current email, and moves the alert to his computer. Alex acknowledges this alert, and retrieves his drink, causing the fridge to withdraw the notification. Had Alex not acknowledged this alert, the eyeREASON server would have forwarded the notification to Alex's email, or chosen an alternative channel.
  • Example 5 Response Monitor
  • By placing an attentive user interface in the vicinity of any visual material that one would be interested in tracking the response to, such as advertisements (virtual or real), television screens, and billboards, the attention of users for the visual material can be monitored. Applications include, for example, gathering marketing information and monitoring of the effectiveness of advertisements.
  • Example 6 Control of Graphical User Interface
  • An attentive user interface, using eye contact sensors or related eye tracking technology, can be used to modulate the amount of screen space allocated to a window in a graphical user interface windowing system according to the amount of visual attention received by that window. Similarly, attentive user interfaces employing eye contact sensors or other related eye tracking technology may be used to initiate the retrieval of information on the basis of progressive disclosure. For example, information may initially be shown with limited resolution on the side of a display. When a user looks at the representation for a set amount of time, more detailed information is retrieved and rendered on the screen using a larger surface. Examples include stock market tickers that grow and provide more information when users pay attention to them, instant messaging buddy status lists that initiate connections, opening up chat boxes with the users being looked at, etc.
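  • A gaze-contingent progressive disclosure element of the kind described above might be sketched as follows; the dwell threshold and magnification values are illustrative assumptions.
    # Sketch of gaze-contingent progressive disclosure for a peripheral display
    # element (e.g., a stock ticker); threshold and sizes are illustrative.
    class ProgressiveDisclosureItem:
        def __init__(self, dwell_threshold=1.0, min_size=1.0, max_size=4.0):
            self.dwell = 0.0
            self.dwell_threshold = dwell_threshold
            self.min_size = min_size
            self.max_size = max_size

        def update(self, looked_at, dt):
            """Grow while the user fixates on the item; reset when gaze leaves."""
            self.dwell = self.dwell + dt if looked_at else 0.0
            if self.dwell >= self.dwell_threshold:
                return self.max_size          # retrieve and render detailed content
            return self.min_size              # keep the low-resolution representation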
  • Example 7 Graphical User Interface
  • This example relates to use of an attentive user interface in a windowing system, referred to herein as “eyeWINDOWS”, for a graphical user interface which incorporates fisheye windows or views that use eye fixation, rather than manual pointing, to select the focus window. The windowing system allocates display space to a given window based on the amount of visual attention received by that window. Use of eye input facilitates contextual activity while maintaining user focus. It allows more continuous accommodation of the windowing system to shifts in user attention, and more efficient use of manual input.
  • Windowing systems of commercial desktop interfaces have experienced little change over the last 20 years. Current systems employ the same basic technique of allocating display space using manually arranged, overlapping windows into the task world. However, due to interruption by, for example, system prompts, incoming email messages, and other notifications, a user's attention shifts almost continuously between tasks. Such behavior requires a more flexible windowing system that allows a user to more easily move between alternate activities. This problem has prompted new research into windowing systems that allow more fluent interaction through, e.g., zooming task bars (Cadiz et al., 2002) or fisheye views (Gutwin, 2002). While most of this work emphasizes the use of manual input for optimizing display space, there has been little work on windowing systems that sense the user's attention using more direct means. Using an alternate channel for sensing the attention of the user for parts of a display has a number of benefits. Firstly, it allows an undisrupted use of manual tools for task-oriented activities; and secondly, it allows a more continuous accommodation of shifts in user attention.
  • Consider, for example, a scenario where a user is working on a task on a personal computer when an alert window appears on the screen to inform him that a new email message has just been received. The alert window obscures the user's current task and the received message, such that the user is only allowed to resume his task or read the message after manually dismissing the alert. Tracking the focus of a user allows an interface to more actively avoid interrupting the user, e.g., by more careful placement of windows.
  • Use of eye input to select a window of interest has several advantages. Firstly, the eyes typically acquire a target well before manual pointing is initiated (Zhai, 2003). Secondly, eye muscles operate much faster than hand muscles (Zhai, 2003). Finally, the eyes provide a more continuous signal that frees the hands for other tasks. Bolt (1985) recognized early on how, using a pair of eye tracking glasses, windows might automatically be selected and zoomed. Unfortunately, his glasses did not provide sufficient resolution. However, recent advances allow seamless integration of an eye tracker with a head movement tolerance of 60 cm and an on-screen accuracy of better than 1 cm into a 17″ LCD screen. We used a similar eye tracker to implement eyeWINDOWS.
  • To determine which window should be the focus window, eyeWINDOWS observes user eye fixations at windows with an LC Technologies eye tracker. Using a lens algorithm similar to Sarkar et al. (1992), the focus window is zoomed to maximum magnification. Surrounding windows contract with distance to the focus window. However, the enlarged window does not obscure the surrounding contracted windows, such that the user can readily view all windows. While typical fisheye browsers run within a single window, eyeWINDOWS affects all active applications. Traditional icons are replaced with active thumbnail views that provide full functionality, referred to herein as “eyecons”. Eyecons zoom into a focus window when a user looks at them.
  • Our first design issue was that of when to zoom an eyecon into a focus window. We first experimented with a continuous fisheye lens, which shifted whenever the user produced an eye movement. This led to focus targeting problems similar to those observed during manual pointing (Gutwin, 2002). In subsequent implementations, the lens was shifted only after selecting a new focus window. Our second design issue was how to trigger this selection. We designed two solutions. In our first approach, dwell time was used as a trigger. An eyecon zooms into a focus window after a user-configurable period of fixations at that eyecon. To avoid a Midas Touch effect (Zhai, 2003)—where users avoid looking to prevent unintentional triggering—fish-eye magnification is applied with non-linear acceleration. When the user first fixates on an eyecon, it starts growing very slowly. If this is not what the user intended, one fixation at the original focus window undoes the action. However, when the user continues to produce fixations at the eyecon, zooming accelerates until maximum magnification is reached. Our second approach to this problem prevents a Midas Touch effect altogether. In this approach, a new focus window is selected when the user presses the space bar while fixating at an eyecon. Focus window selection is suspended during normal keyboard or pointing activity, such as when scrolling or typing. Fish-eye magnification does not apply to certain utility windows, such as tool bars.
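  • The two focus-selection strategies described above may be sketched as follows. The acceleration constant, maximum magnification, and method names are illustrative assumptions; only the dwell-triggered non-linear zoom and the space-bar trigger suspended during keyboard or pointing activity follow the description.
    # Sketch of the two eyeWINDOWS focus-selection strategies; the constants
    # and method names are illustrative only.
    class EyeconZoom:
        def __init__(self, max_scale=4.0):
            self.scale = 1.0
            self.fixation_time = 0.0
            self.max_scale = max_scale

        def update_dwell_trigger(self, fixated, dt):
            """Dwell-time trigger: zoom accelerates non-linearly with continued
            fixation, so a brief glance barely changes the eyecon (reducing a
            Midas Touch effect) while sustained fixation zooms it fully."""
            self.fixation_time = self.fixation_time + dt if fixated else 0.0
            growth = 0.2 * self.fixation_time ** 2      # slow start, then accelerate
            self.scale = min(self.max_scale, 1.0 + growth)
            return self.scale

        def update_key_trigger(self, fixated, space_pressed, typing_or_scrolling):
            """Key trigger: select only when the space bar is pressed during a
            fixation, and never during normal keyboard or pointing activity."""
            if fixated and space_pressed and not typing_or_scrolling:
                self.scale = self.max_scale
            return self.scale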
  • Initial user observations appear to favor the use of key triggering for focus window selection. The following scenario illustrates this process: a user is working on a text in the focus window in the center of the screen. The focus window is surrounded by eyecons of related documents, with associated file names. The user wishes to copy a picture from the document to the right of his focus window. He looks at its eyecon and presses the space bar, and the eyecon zooms into a focus window, while the old focus window shrinks into an eyecon. After having found the picture, he places it in the clipboard and shifts his attention back to the original document. It zooms into a focus window and the user pastes the picture into the document. This scenario illustrates how contextual actions are supported without the need for multiple pointing gestures to resize or reposition windows. EyeWINDOWS also supports more attention-sensitive notification. For example, the user is notified of a message by a notification eyecon at the bottom of the screen. When the user fixates at the notification eyecon it zooms to reveal its message. The notification is dismissed once eyeWINDOWS detects the message was read. This illustrates how an attentive user interface supports user focus within the context of more peripheral events.
  • Example 8 Attentive Appliance
  • Any household or commercial/industrial appliance, digital or analog apparatus, or object may be configured as an attentive appliance. Such an attentive appliance may be a stand-alone “smart appliance”, or may be networked to a shared computational resource such as a communications server (e.g., eyeREASON; see Example 4), providing unified message capabilities to all networked appliances without requiring extensive embedded computational support in each appliance. In Example 4 the attentive refrigerator was a refrigerator augmented with the capability to sense eye contact with its user, to sense the presence and identity of objects inside and outside the fridge through radio frequency identification (RFID) tags, and to identify users and sense their presence through RFID tags or any other means of sensing. A small computer embedded in the fridge and connected to a network through a TCP/IP connection runs a simple program that allows the fridge to reason about its contents and to interact with the user, taking eye contact with the user into account. The fridge may contain software for processing and producing speech; a speech recognition and production engine residing on eyeREASON can also advantageously be employed to process speech on its behalf, responding to contextualized verbal queries by a user. This is accomplished by sending XML speech recognition grammars and lexicons from the fridge to eyeREASON, contextualized on the state of the fridge's sensing systems. The fridge will send XML grammars and enable speech processing whenever a user is in close proximity to it, and/or making eye contact with the fridge, and/or holding objects from the fridge in his/her hand. The user is connected to the speech recognition and production engine on eyeREASON through a wireless headset (e.g., Bluetooth®). This allows eyeREASON to process speech by the user with the contextualized grammars provided by the appliance the user is interacting with. EyeREASON determines: a) whether speech should be processed, e.g., based on focus events sent by the appliance using information from its eye contact sensor; b) for which appliance, and with which grammar, speech should be processed; c) what commands should be sent to the appliance as a consequence; and d) what the priority of messages returned from the appliance should be. Messages sent by appliances during synchronous interactions with a user will receive the highest notification levels.
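The grammar push and speech routing described above might be sketched as follows; the class and method names (Appliance, EyeReasonServer, set_focus) and the XML shape are assumptions for illustration, not the eyeREASON protocol. The appliance enables speech processing only while its sensors indicate engagement, and the server maps recognized utterances back to commands for that appliance.

```python
# Illustrative sketch only: an appliance pushes a contextualized grammar
# while the user is engaged with it, and the server routes recognized
# speech back as commands for that appliance.

FRIDGE_GRAMMAR = """<grammar appliance="fridge">
  <rule id="list_contents">what is in the fridge</rule>
</grammar>"""

class EyeReasonServer:
    def __init__(self):
        self.focus = None     # appliance currently owning the speech channel
        self.grammar = None

    def set_focus(self, appliance: str, grammar: str) -> None:
        self.focus, self.grammar = appliance, grammar

    def clear_focus(self, appliance: str) -> None:
        if self.focus == appliance:
            self.focus = self.grammar = None

    def on_speech(self, utterance: str):
        if self.focus is None:
            return None       # no engaged appliance: ignore speech
        if utterance == "what is in the fridge":
            return (self.focus, "LIST_CONTENTS")   # command sent to the appliance
        return (self.focus, "UNRECOGNIZED")

class Appliance:
    def __init__(self, name: str, server: EyeReasonServer):
        self.name, self.server = name, server

    def on_sensor_update(self, eye_contact: bool, proximity: bool, holding: bool) -> None:
        # Push the contextualized grammar only while the user is engaged.
        if eye_contact or proximity or holding:
            self.server.set_focus(self.name, FRIDGE_GRAMMAR)
        else:
            self.server.clear_focus(self.name)

server = EyeReasonServer()
fridge = Appliance("fridge", server)
fridge.on_sensor_update(eye_contact=True, proximity=True, holding=False)
print(server.on_speech("what is in the fridge"))   # ('fridge', 'LIST_CONTENTS')
```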
  • The following scenario illustrates the process: User A is standing near his attentive fridge. He asks what is contained in the fridge while looking at the fridge. The fridge senses his presence, detects eye contact, and determines the identity of the user. It sends an XML grammar containing the speech vocabulary suitable for answering queries to user A's eyeREASON server. The eyeREASON server switches its speech recognition lexicon to process speech for the fridge, as instructed by the current XML grammar. It parses the user's speech according to the grammar, recognizes that the user wants a list of items in the fridge, and sends a command to the fridge to provide a list of items, according to the XML specification. The fridge responds by sending a text message to eyeREASON listing the items in the fridge. Since the user is directly engaged in a synchronous interaction with the fridge, eyeREASON decides the message should be forwarded to the user immediately. Since the user has been interacting with the fridge through speech over his headset, eyeREASON uses this same path, speaking the message to the user with its speech production system. The user opens the fridge and retrieves some cheese. The fridge recognizes that the hand of user A is in the fridge and has removed the cheese. It sends a hand focus event, and subsequently an object focus event, to the eyeREASON server with the RFID of the cheese object, with a corresponding grammar for handling any user speech. The user may query any property of the cheese object, for example its expiration date. If the user says “start message”, eyeREASON will record any voice message and tag it with the RFID of the object the user was holding, as well as the ID of the user. It will stop recording when the user puts the object back into the fridge, tagging the object with a voice message. It forwards this voice message with a store command to the embedded processor in the fridge. The next time any user other than user A retrieves the same object, the fridge will forward the voice message pertaining to this object to that user.
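A small sketch, under assumed names, of the voice-message tagging step in this scenario: messages are keyed by the RFID of the object the speaker was holding and replayed only when a different user next removes that object.

```python
# Sketch (not the patent's implementation): voice messages are stored
# against an object's RFID together with the author's ID, and delivered
# the next time a different user removes that object.

class MessageStore:
    def __init__(self):
        self.by_rfid = {}   # rfid -> (author_id, message text or audio reference)

    def store(self, rfid: str, author: str, message: str) -> None:
        self.by_rfid[rfid] = (author, message)

    def on_object_removed(self, rfid: str, user: str):
        entry = self.by_rfid.get(rfid)
        if entry and entry[0] != user:      # only users other than the author hear it
            author, message = self.by_rfid.pop(rfid)
            return f"Message from {author}: {message}"
        return None

store = MessageStore()
store.store(rfid="cheese-0042", author="userA", message="Finish this before Friday")
print(store.on_object_removed("cheese-0042", "userA"))  # None: author's own retrieval
print(store.on_object_removed("cheese-0042", "userB"))  # message delivered
```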
  • Any attentive appliance may signal its attention to a user using, for example, an eye proxy mounted in close proximity to it. The eye proxy (described in more detail above and in Example 2) functions in lieu of an eye contact sensor, tracking and maintaining eye contact with a user. It maintains activation of the speech recognition engine for the appliance it is associated with while there is sufficient statistical evidence that the user is looking at or interacting with that appliance. Before replying to a user through a message, the appliance will attempt to signal its request for attention by seeking eye contact between its proxy and the user. Should the user not respond, the eyeREASON system will determine a new notification level for the message. EyeREASON will lower the notification level of the message the moment a user is perceived to be no longer interacting directly with the appliance that sent the message. Competing with other messages in the priority queue of the user, the server will either forward the message, for example to the user's cell phone, or store it for later retrieval in the user's message queue. If the priority of the message is determined to be higher than that of other messages in the user's notification queue, eyeREASON will attempt to progressively notify the user of the message, up to a user-determined number of times. Each time the user does not respond, the notification level of the message is increased. This allows eyeREASON to seek a different channel of notification each time the notification is re-triggered. For example, it may initially attempt to signal attention by seeking eye contact with the user through the eye proxy pertaining to the appliance that sent the message. When this fails, it may initiate a low-volume auditory interruption at that appliance. When this fails, it may forward the notification to the appliance the user is currently interacting with, potentially disrupting the user's current activity. The latter should only occur when the message is determined to be of a greater notification level than the user's current task. When this fails, the message is forwarded to the user's message queue for later retrieval.
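A minimal sketch of the progressive notification escalation described above; the channel list, retry limit, and function names are assumptions. Each unanswered attempt raises the notification level, which selects a more interruptive channel, until the message is finally parked in the user's message queue.

```python
# Sketch of progressive notification: escalate through increasingly
# interruptive channels until the user responds or the attempts run out.
# (In the system described above, forwarding to the user's current
# appliance would additionally require the message to outrank the
# user's current task; that check is omitted here for brevity.)

CHANNELS = [
    "eye proxy seeks eye contact at the originating appliance",
    "low-volume auditory cue at the originating appliance",
    "forward to the appliance the user is currently using",
    "store in the user's message queue for later retrieval",
]

def notify(message: str, user_responds, max_attempts: int = 4) -> str:
    level = 0
    for attempt in range(max_attempts):
        channel = CHANNELS[min(level, len(CHANNELS) - 1)]
        print(f"attempt {attempt + 1}: {channel}")
        if user_responds(channel):
            return f"delivered via: {channel}"
        level += 1                      # no response: escalate the channel
    return "parked in message queue"

# Example: the user ignores everything short of a direct interruption.
result = notify("milk expires today",
                user_responds=lambda ch: ch.startswith("forward"))
print(result)
```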
  • Those of ordinary skill in the art will recognize, or be able to ascertain through routine experimentation, equivalents to the embodiments described herein. Such equivalents are within the scope of the invention and are covered by the appended claims.
  • REFERENCES
    • Bolt, R. A., 1985. Conversing with Computers. Technology Review 88(2), pp. 34-43.
    • Cadiz, J. et al., 2002. Designing and Deploying an Information Awareness Interface. In: Proceedings of CSCW'02.
    • Frohlich, D., et al., 1994. Informal Workplace Communication: What is It Like and How Might We Support It? HP Technical Report.
    • Gutwin, C., 2002. Improving Focus Targeting in Interactive Fisheye Views. In: Proceedings of CHI'02, pp. 267-274.
    • Morimoto, C. et al., 2000. Pupil Detection and Tracking Using Multiple Light Sources. Image and Vision Computing, vol. 18.
    • Sarkar, M. et al., 1992. Graphical Fisheye Views of Graphs. In: Proceedings of CHI'92, pp. 83-91.
    • Vertegaal, R., 1999. The GAZE Groupware System. In: Proceedings of CHI'99. Pittsburgh: ACM.
    • Zhai, S., 2003. What's in the Eyes for Attentive Input. In: Communications of the ACM 46(3).

Claims (41)

1-44. (canceled)
45. Apparatus for controlling outputting of information on a screen of a device, comprising:
at least one sensor coupled to the device that outputs a sensor signal;
a processor that processes the sensor signal and produces a user state signal that is indicative of user attention toward the screen of the device;
wherein the apparatus uses the user state signal as a basis for determining whether to pause the outputting of information on the screen of the device; and
wherein the pausing of the outputting of information is initiated by the device based on said determining.
46. The apparatus of claim 45, further comprising the apparatus using the user state signal as a basis for determining whether to resume the outputting of information on the screen of the device;
wherein the resuming of the outputting of information is initiated by the device based on said determining.
47. The apparatus of claim 45, further comprising the apparatus using the user state signal as a basis for determining whether to progressively control the outputting of information on the screen of the device;
wherein the progressively controlling the outputting of information is initiated by the device based on said determining.
48. The apparatus of claim 45, wherein the information includes audio information.
49. The apparatus of claim 45, wherein the device is selected from the group consisting of a personal computer, a cellular telephone, a telephone, a personal digital assistant (PDA), an electronic device, a machine, a smart appliance, and an appliance.
50. The apparatus of claim 45, wherein the outputting of information on the screen of the device includes the device soliciting user input.
51. The apparatus of claim 45, wherein the user state signal is based on user head or face orientation.
52. The apparatus of claim 45, wherein the user state signal is based on user eye contact.
53. The apparatus of claim 45, wherein the user state signal is based on user body presence.
54. The apparatus of claim 45, wherein the user state signal is based on user activity.
55. The apparatus of claim 45, wherein the user state signal is based on user body orientation.
56. The apparatus of claim 45, wherein the sensor comprises an infrared sensor.
57. The apparatus of claim 45, wherein the sensor is electronically coupled to the device.
58. The apparatus of claim 45, wherein the sensor is wirelessly coupled to the device.
59. The apparatus of claim 45, wherein the sensor is physically coupled to the device.
60. The apparatus of claim 59, wherein the sensor is attached to or embedded in the screen of the device.
61. The apparatus of claim 45, wherein the screen is a video screen.
62. The apparatus of claim 45, wherein the screen is a computer screen.
63. The apparatus of claim 45, wherein the outputting of information comprises notifying the user when the user state signal indicates that the user attention toward the screen is at least at a selected level.
64. The apparatus of claim 63, wherein notifying the user comprises using a less interruptive notification and progressing to a more interruptive notification.
65. A method for controlling outputting of information on a screen of a device, comprising:
using at least one sensor coupled to the device to output a sensor signal;
processing the sensor signal to produce a user state signal that is indicative of user attention toward the screen of the device;
using the user state signal as a basis for determining whether to pause the outputting of information on the screen of the device; and
wherein the pausing of the outputting of information is initiated by the device based on said determining.
66. The method of claim 65, further comprising using the user state signal as a basis for determining whether to resume the outputting of information on the screen of the device;
wherein the resuming of the outputting of information is initiated by the device based on said determining.
67. The method of claim 65, further comprising using the user state signal as a basis for determining whether to progressively control the outputting of information on the screen of the device;
wherein the progressively controlling the outputting of information is initiated by the device based on said determining.
68. The method of claim 65, wherein the information includes audio information.
69. The method of claim 65, wherein the device is selected from the group consisting of a personal computer, a cellular telephone, a telephone, a personal digital assistant (PDA), an electronic device, a machine, a smart appliance, and an appliance.
70. The method of claim 65, wherein the outputting of information on the screen of the device includes the device soliciting user input.
71. The method of claim 65, wherein the user state signal is based on user head or face orientation.
72. The method of claim 65, wherein the user state signal is based on user eye contact.
73. The method of claim 65, wherein the user state signal is based on user body presence.
74. The method of claim 65, wherein the user state signal is based on user activity.
75. The method of claim 65, wherein the user state signal is based on user body orientation.
76. The method of claim 65, comprising using an infrared sensor.
77. The method of claim 65, comprising electronically coupling the sensor to the device.
78. The method of claim 65, comprising wirelessly coupling the sensor to the device.
79. The method of claim 65, comprising physically coupling the sensor to the device.
80. The method of claim 79, wherein the sensor is attached to or embedded in the screen of the device.
81. The method of claim 65, wherein the screen is a video screen.
82. The method of claim 65, wherein the screen is a computer screen.
83. The method of claim 65, wherein the outputting of information comprises notifying the user when the user state signal indicates that the user attention toward the screen is at least at a selected level.
84. The method of claim 83, wherein notifying the user comprises using a less interruptive notification and progressing to a more interruptive notification.
US14/722,504 2003-03-21 2015-05-27 Method and Apparatus for Communication Between Humans and Devices Abandoned US20160103486A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US14/722,504 US20160103486A1 (en) 2003-03-21 2015-05-27 Method and Apparatus for Communication Between Humans and Devices
US15/429,733 US10296084B2 (en) 2003-03-21 2017-02-10 Method and apparatus for communication between humans and devices
US16/407,591 US10915171B2 (en) 2003-03-21 2019-05-09 Method and apparatus for communication between humans and devices

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US10/392,960 US7762665B2 (en) 2003-03-21 2003-03-21 Method and apparatus for communication between humans and devices
US12/843,399 US8096660B2 (en) 2003-03-21 2010-07-26 Method and apparatus for communication between humans and devices
US13/315,844 US20120078623A1 (en) 2003-03-21 2011-12-09 Method and Apparatus for Communication Between Humans and Devices
US13/866,430 US8672482B2 (en) 2003-03-21 2013-04-19 Method and apparatus for communication between humans and devices
US14/210,778 US20150042555A1 (en) 2003-03-21 2014-03-14 Method and Apparatus for Communication Between Humans and Devices
US14/722,504 US20160103486A1 (en) 2003-03-21 2015-05-27 Method and Apparatus for Communication Between Humans and Devices

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US14/210,778 Continuation US20150042555A1 (en) 2003-03-21 2014-03-14 Method and Apparatus for Communication Between Humans and Devices

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/429,733 Continuation US10296084B2 (en) 2003-03-21 2017-02-10 Method and apparatus for communication between humans and devices

Publications (1)

Publication Number Publication Date
US20160103486A1 true US20160103486A1 (en) 2016-04-14

Family

ID=32988008

Family Applications (9)

Application Number Title Priority Date Filing Date
US10/392,960 Expired - Fee Related US7762665B2 (en) 2003-03-21 2003-03-21 Method and apparatus for communication between humans and devices
US12/843,399 Expired - Fee Related US8096660B2 (en) 2003-03-21 2010-07-26 Method and apparatus for communication between humans and devices
US13/315,844 Abandoned US20120078623A1 (en) 2003-03-21 2011-12-09 Method and Apparatus for Communication Between Humans and Devices
US13/534,706 Expired - Fee Related US8322856B2 (en) 2003-03-21 2012-06-27 Method and apparatus for communication between humans and devices
US13/866,430 Expired - Fee Related US8672482B2 (en) 2003-03-21 2013-04-19 Method and apparatus for communication between humans and devices
US14/210,778 Abandoned US20150042555A1 (en) 2003-03-21 2014-03-14 Method and Apparatus for Communication Between Humans and Devices
US14/722,504 Abandoned US20160103486A1 (en) 2003-03-21 2015-05-27 Method and Apparatus for Communication Between Humans and Devices
US15/429,733 Expired - Fee Related US10296084B2 (en) 2003-03-21 2017-02-10 Method and apparatus for communication between humans and devices
US16/407,591 Expired - Lifetime US10915171B2 (en) 2003-03-21 2019-05-09 Method and apparatus for communication between humans and devices

Family Applications Before (6)

Application Number Title Priority Date Filing Date
US10/392,960 Expired - Fee Related US7762665B2 (en) 2003-03-21 2003-03-21 Method and apparatus for communication between humans and devices
US12/843,399 Expired - Fee Related US8096660B2 (en) 2003-03-21 2010-07-26 Method and apparatus for communication between humans and devices
US13/315,844 Abandoned US20120078623A1 (en) 2003-03-21 2011-12-09 Method and Apparatus for Communication Between Humans and Devices
US13/534,706 Expired - Fee Related US8322856B2 (en) 2003-03-21 2012-06-27 Method and apparatus for communication between humans and devices
US13/866,430 Expired - Fee Related US8672482B2 (en) 2003-03-21 2013-04-19 Method and apparatus for communication between humans and devices
US14/210,778 Abandoned US20150042555A1 (en) 2003-03-21 2014-03-14 Method and Apparatus for Communication Between Humans and Devices

Family Applications After (2)

Application Number Title Priority Date Filing Date
US15/429,733 Expired - Fee Related US10296084B2 (en) 2003-03-21 2017-02-10 Method and apparatus for communication between humans and devices
US16/407,591 Expired - Lifetime US10915171B2 (en) 2003-03-21 2019-05-09 Method and apparatus for communication between humans and devices

Country Status (1)

Country Link
US (9) US7762665B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9851939B2 (en) 2015-05-14 2017-12-26 International Business Machines Corporation Reading device usability

Families Citing this family (231)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050034147A1 (en) * 2001-12-27 2005-02-10 Best Robert E. Remote presence recognition information delivery systems and methods
US8292433B2 (en) * 2003-03-21 2012-10-23 Queen's University At Kingston Method and apparatus for communication between humans and devices
US7762665B2 (en) 2003-03-21 2010-07-27 Queen's University At Kingston Method and apparatus for communication between humans and devices
US7729711B2 (en) * 2003-05-09 2010-06-01 Intel Corporation Reducing interference from closely proximate wireless units
US8705808B2 (en) * 2003-09-05 2014-04-22 Honeywell International Inc. Combined face and iris recognition system
US7752550B2 (en) * 2003-09-23 2010-07-06 At&T Intellectual Property I, Lp System and method for providing managed point to point services
US20050128296A1 (en) * 2003-12-11 2005-06-16 Skurdal Vincent C. Processing systems and methods of controlling same
US20050281531A1 (en) * 2004-06-16 2005-12-22 Unmehopa Musa R Television viewing apparatus
DK1607840T3 (en) 2004-06-18 2015-02-16 Tobii Technology Ab Eye control of a computer device
US7406422B2 (en) * 2004-07-20 2008-07-29 Hewlett-Packard Development Company, L.P. Techniques for improving collaboration effectiveness
JP4284538B2 (en) * 2004-10-19 2009-06-24 ソニー株式会社 Playback apparatus and playback method
US7396129B2 (en) * 2004-11-22 2008-07-08 Carestream Health, Inc. Diagnostic system having gaze tracking
US20070152076A1 (en) * 2004-12-13 2007-07-05 Chiang Kuo C Monitoring system with a wireless transmitting/receiving module
US20060192775A1 (en) * 2005-02-25 2006-08-31 Microsoft Corporation Using detected visual cues to change computer system operating states
US8024438B2 (en) * 2005-03-31 2011-09-20 At&T Intellectual Property, I, L.P. Methods, systems, and computer program products for implementing bandwidth management services
US8098582B2 (en) * 2005-03-31 2012-01-17 At&T Intellectual Property I, L.P. Methods, systems, and computer program products for implementing bandwidth control services
US7975283B2 (en) * 2005-03-31 2011-07-05 At&T Intellectual Property I, L.P. Presence detection in a bandwidth management system
US8306033B2 (en) * 2005-03-31 2012-11-06 At&T Intellectual Property I, L.P. Methods, systems, and computer program products for providing traffic control services
US8335239B2 (en) 2005-03-31 2012-12-18 At&T Intellectual Property I, L.P. Methods, systems, and devices for bandwidth conservation
US8259861B2 (en) * 2005-03-31 2012-09-04 At&T Intellectual Property I, L.P. Methods and systems for providing bandwidth adjustment
US20060253272A1 (en) * 2005-05-06 2006-11-09 International Business Machines Corporation Voice prompts for use in speech-to-speech translation system
US8437729B2 (en) * 2005-05-10 2013-05-07 Mobile Communication Technologies, Llc Apparatus for and system for enabling a mobile communicator
US20070270122A1 (en) 2005-05-10 2007-11-22 Ewell Robert C Jr Apparatus, system, and method for disabling a mobile communicator
US8385880B2 (en) * 2005-05-10 2013-02-26 Mobile Communication Technologies, Llc Apparatus for and system for enabling a mobile communicator
US7904300B2 (en) * 2005-08-10 2011-03-08 Nuance Communications, Inc. Supporting multiple speech enabled user interface consoles within a motor vehicle
US8022989B2 (en) * 2005-08-17 2011-09-20 Palo Alto Research Center Incorporated Method and apparatus for controlling data delivery with user-maintained modes
US8701148B2 (en) 2005-09-01 2014-04-15 At&T Intellectual Property I, L.P. Methods, systems, and devices for bandwidth conservation
US8104054B2 (en) 2005-09-01 2012-01-24 At&T Intellectual Property I, L.P. Methods, systems, and devices for bandwidth conservation
US8825482B2 (en) 2005-09-15 2014-09-02 Sony Computer Entertainment Inc. Audio, video, simulation, and user interface paradigms
US8775975B2 (en) 2005-09-21 2014-07-08 Buckyball Mobile, Inc. Expectation assisted text messaging
US7697827B2 (en) 2005-10-17 2010-04-13 Konicek Jeffrey C User-friendlier interfaces for a camera
US20070100939A1 (en) * 2005-10-27 2007-05-03 Bagley Elizabeth V Method for improving attentiveness and participation levels in online collaborative operating environments
US20070100938A1 (en) * 2005-10-27 2007-05-03 Bagley Elizabeth V Participant-centered orchestration/timing of presentations in collaborative environments
US7860233B2 (en) * 2005-12-20 2010-12-28 Charles Schwab & Co., Inc. System and method for tracking alerts
US7774851B2 (en) * 2005-12-22 2010-08-10 Scenera Technologies, Llc Methods, systems, and computer program products for protecting information on a user interface based on a viewability of the information
US20070150916A1 (en) * 2005-12-28 2007-06-28 James Begole Using sensors to provide feedback on the access of digital content
US7812826B2 (en) * 2005-12-30 2010-10-12 Apple Inc. Portable electronic device with multi-touch input
KR100792293B1 (en) * 2006-01-16 2008-01-07 삼성전자주식회사 Method for providing service considering user's context and the service providing apparatus thereof
US8209181B2 (en) * 2006-02-14 2012-06-26 Microsoft Corporation Personal audio-video recorder for live meetings
US8064487B1 (en) * 2006-04-17 2011-11-22 Avaya Inc. Virtual office presence bridge
JP5397219B2 (en) * 2006-04-19 2014-01-22 イグニス・イノベーション・インコーポレイテッド Stable drive scheme for active matrix display
WO2007132566A1 (en) * 2006-05-15 2007-11-22 Nec Corporation Video reproduction device, video reproduction method, and video reproduction program
US20070282783A1 (en) * 2006-05-31 2007-12-06 Mona Singh Automatically determining a sensitivity level of a resource and applying presentation attributes to the resource based on attributes of a user environment
US8182267B2 (en) * 2006-07-18 2012-05-22 Barry Katz Response scoring system for verbal behavior within a behavioral stream with a remote central processing system and associated handheld communicating devices
AU2007275991B2 (en) * 2006-07-20 2010-12-02 Lg Electronics Inc. Operation method of interactive refrigerator system
US8228371B2 (en) * 2006-07-31 2012-07-24 Hewlett-Packard Development Company, L.P. Projection screen and camera array
US8370207B2 (en) 2006-12-30 2013-02-05 Red Dot Square Solutions Limited Virtual reality system including smart objects
US9940589B2 (en) * 2006-12-30 2018-04-10 Red Dot Square Solutions Limited Virtual reality system including viewer responsiveness to smart objects
WO2008081413A1 (en) * 2006-12-30 2008-07-10 Kimberly-Clark Worldwide, Inc. Virtual reality system for environment building
US8341022B2 (en) * 2006-12-30 2012-12-25 Red Dot Square Solutions Ltd. Virtual reality system for environment building
US8243119B2 (en) 2007-09-30 2012-08-14 Optical Fusion Inc. Recording and videomail for video conferencing call systems
US9513699B2 (en) * 2007-10-24 2016-12-06 Invention Science Fund I, LL Method of selecting a second content based on a user's reaction to a first content
US20090113297A1 (en) * 2007-10-24 2009-04-30 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Requesting a second content based on a user's reaction to a first content
US20090112694A1 (en) * 2007-10-24 2009-04-30 Searete Llc, A Limited Liability Corporation Of The State Of Delaware Targeted-advertising based on a sensed physiological response by a person to a general advertisement
US9582805B2 (en) 2007-10-24 2017-02-28 Invention Science Fund I, Llc Returning a personalized advertisement
US8194628B2 (en) 2007-12-03 2012-06-05 At&T Intellectual Property I, L.P. Methods and apparatus to enable call completion in internet protocol communication networks
US9035876B2 (en) 2008-01-14 2015-05-19 Apple Inc. Three-dimensional user interface session control
US8933876B2 (en) 2010-12-13 2015-01-13 Apple Inc. Three dimensional user interface session control
US8495660B1 (en) * 2008-03-28 2013-07-23 Symantec Corporation Methods and systems for handling instant messages and notifications based on the state of a computing device
CN101596368A (en) * 2008-06-04 2009-12-09 鸿富锦精密工业(深圳)有限公司 Interactive toy system and method thereof
US8593503B2 (en) * 2008-09-25 2013-11-26 Alcatel Lucent Videoconferencing terminal and method of operation thereof to maintain eye contact
US20100208078A1 (en) * 2009-02-17 2010-08-19 Cisco Technology, Inc. Horizontal gaze estimation for video conferencing
US20100235786A1 (en) * 2009-03-13 2010-09-16 Primesense Ltd. Enhanced 3d interfacing for remote devices
EP2237237B1 (en) * 2009-03-30 2013-03-20 Tobii Technology AB Eye closure detection using structured illumination
US20100295782A1 (en) 2009-05-21 2010-11-25 Yehuda Binder System and method for control based on face ore hand gesture detection
US8520051B2 (en) * 2009-12-17 2013-08-27 Alcatel Lucent Videoconferencing terminal with a persistence of vision display and a method of operation thereof to maintain eye contact
US8676581B2 (en) * 2010-01-22 2014-03-18 Microsoft Corporation Speech recognition analysis via identification information
WO2011100436A1 (en) * 2010-02-10 2011-08-18 Lead Technology Capital Management, Llc System and method of determining an area of concentrated focus and controlling an image displayed in response
US8467133B2 (en) 2010-02-28 2013-06-18 Osterhout Group, Inc. See-through display with an optical assembly including a wedge-shaped illumination system
US9759917B2 (en) 2010-02-28 2017-09-12 Microsoft Technology Licensing, Llc AR glasses with event and sensor triggered AR eyepiece interface to external devices
US9134534B2 (en) 2010-02-28 2015-09-15 Microsoft Technology Licensing, Llc See-through near-eye display glasses including a modular image source
US20120249797A1 (en) 2010-02-28 2012-10-04 Osterhout Group, Inc. Head-worn adaptive display
US9128281B2 (en) 2010-09-14 2015-09-08 Microsoft Technology Licensing, Llc Eyepiece with uniformly illuminated reflective display
US9341843B2 (en) 2010-02-28 2016-05-17 Microsoft Technology Licensing, Llc See-through near-eye display glasses with a small scale image source
US9223134B2 (en) 2010-02-28 2015-12-29 Microsoft Technology Licensing, Llc Optical imperfections in a light transmissive illumination system for see-through near-eye display glasses
US8472120B2 (en) 2010-02-28 2013-06-25 Osterhout Group, Inc. See-through near-eye display glasses with a small scale image source
US9129295B2 (en) 2010-02-28 2015-09-08 Microsoft Technology Licensing, Llc See-through near-eye display glasses with a fast response photochromic film system for quick transition from dark to clear
US20150309316A1 (en) 2011-04-06 2015-10-29 Microsoft Technology Licensing, Llc Ar glasses with predictive control of external device based on event input
US9366862B2 (en) 2010-02-28 2016-06-14 Microsoft Technology Licensing, Llc System and method for delivering content to a group of see-through near eye display eyepieces
US9091851B2 (en) 2010-02-28 2015-07-28 Microsoft Technology Licensing, Llc Light control in head mounted displays
US8477425B2 (en) 2010-02-28 2013-07-02 Osterhout Group, Inc. See-through near-eye display glasses including a partially reflective, partially transmitting optical element
US8488246B2 (en) 2010-02-28 2013-07-16 Osterhout Group, Inc. See-through near-eye display glasses including a curved polarizing film in the image source, a partially reflective, partially transmitting optical element and an optically flat film
US9097891B2 (en) 2010-02-28 2015-08-04 Microsoft Technology Licensing, Llc See-through near-eye display glasses including an auto-brightness control for the display brightness based on the brightness in the environment
US9182596B2 (en) 2010-02-28 2015-11-10 Microsoft Technology Licensing, Llc See-through near-eye display glasses with the optical assembly including absorptive polarizers or anti-reflective coatings to reduce stray light
US8482859B2 (en) 2010-02-28 2013-07-09 Osterhout Group, Inc. See-through near-eye display glasses wherein image light is transmitted to and reflected from an optically flat film
JP2013521576A (en) 2010-02-28 2013-06-10 オスターハウト グループ インコーポレイテッド Local advertising content on interactive head-mounted eyepieces
US9097890B2 (en) 2010-02-28 2015-08-04 Microsoft Technology Licensing, Llc Grating in a light transmissive illumination system for see-through near-eye display glasses
US9229227B2 (en) 2010-02-28 2016-01-05 Microsoft Technology Licensing, Llc See-through near-eye display glasses with a light transmissive wedge shaped illumination system
US10180572B2 (en) 2010-02-28 2019-01-15 Microsoft Technology Licensing, Llc AR glasses with event and user action control of external applications
US9285589B2 (en) 2010-02-28 2016-03-15 Microsoft Technology Licensing, Llc AR glasses with event and sensor triggered control of AR eyepiece applications
US9634855B2 (en) 2010-05-13 2017-04-25 Alexander Poltorak Electronic personal interactive device that determines topics of interest using a conversational agent
US9201501B2 (en) 2010-07-20 2015-12-01 Apple Inc. Adaptive projector
CN102959616B (en) 2010-07-20 2015-06-10 苹果公司 Interactive reality augmentation for natural interaction
US8959013B2 (en) 2010-09-27 2015-02-17 Apple Inc. Virtual keyboard for a non-tactile three dimensional user interface
KR20120053803A (en) * 2010-11-18 2012-05-29 삼성전자주식회사 Apparatus and method for displaying contents using trace of eyes movement
US8872762B2 (en) 2010-12-08 2014-10-28 Primesense Ltd. Three dimensional user interface cursor control
CN103282847B (en) 2010-12-22 2016-09-07 Abb研究有限公司 The method and system for monitoring industrial system with eye tracking system
JP5691568B2 (en) * 2011-01-28 2015-04-01 ソニー株式会社 Information processing apparatus, notification method, and program
JP2012155655A (en) * 2011-01-28 2012-08-16 Sony Corp Information processing device, notification method, and program
CN106125921B (en) 2011-02-09 2019-01-15 苹果公司 Gaze detection in 3D map environment
US9547428B2 (en) 2011-03-01 2017-01-17 Apple Inc. System and method for touchscreen knob control
US9013264B2 (en) 2011-03-12 2015-04-21 Perceptive Devices, Llc Multipurpose controller for electronic devices, facial expressions management and drowsiness detection
US8188880B1 (en) 2011-03-14 2012-05-29 Google Inc. Methods and devices for augmenting a field of view
US9079313B2 (en) 2011-03-15 2015-07-14 Microsoft Technology Licensing, Llc Natural human to robot remote control
US20120259638A1 (en) * 2011-04-08 2012-10-11 Sony Computer Entertainment Inc. Apparatus and method for determining relevance of input speech
US9026779B2 (en) 2011-04-12 2015-05-05 Mobile Communication Technologies, Llc Mobile communicator device including user attentiveness detector
US9026780B2 (en) 2011-04-12 2015-05-05 Mobile Communication Technologies, Llc Mobile communicator device including user attentiveness detector
US10139900B2 (en) 2011-04-12 2018-11-27 Mobile Communication Technologies, Llc Mobile communicator device including user attentiveness detector
US8760395B2 (en) 2011-05-31 2014-06-24 Microsoft Corporation Gesture recognition techniques
US8881051B2 (en) 2011-07-05 2014-11-04 Primesense Ltd Zoom-based gesture user interface
US9377865B2 (en) 2011-07-05 2016-06-28 Apple Inc. Zoom-based gesture user interface
US9459758B2 (en) 2011-07-05 2016-10-04 Apple Inc. Gesture-based interface with enhanced features
US8885882B1 (en) 2011-07-14 2014-11-11 The Research Foundation For The State University Of New York Real time eye tracking for human computer interaction
US20130022220A1 (en) 2011-07-20 2013-01-24 Google Inc. Wearable Computing Device with Indirect Bone-Conduction Speaker
US20130038437A1 (en) * 2011-08-08 2013-02-14 Panasonic Corporation System for task and notification handling in a connected car
US9030498B2 (en) 2011-08-15 2015-05-12 Apple Inc. Combining explicit select gestures and timeclick in a non-tactile three dimensional user interface
US9122311B2 (en) 2011-08-24 2015-09-01 Apple Inc. Visual feedback for tactile and non-tactile user interfaces
US9218063B2 (en) 2011-08-24 2015-12-22 Apple Inc. Sessionless pointing user interface
US9442565B2 (en) 2011-08-24 2016-09-13 The United States Of America, As Represented By The Secretary Of The Navy System and method for determining distracting features in a visual display
US8995945B2 (en) 2011-08-30 2015-03-31 Mobile Communication Technologies, Llc Mobile communicator and system
US20130057693A1 (en) * 2011-09-02 2013-03-07 John Baranek Intruder imaging and identification system
JP5539945B2 (en) * 2011-11-01 2014-07-02 株式会社コナミデジタルエンタテインメント GAME DEVICE AND PROGRAM
US8929589B2 (en) 2011-11-07 2015-01-06 Eyefluence, Inc. Systems and methods for high-resolution gaze tracking
US8943526B2 (en) * 2011-12-02 2015-01-27 Microsoft Corporation Estimating engagement of consumers of presented content
US8635637B2 (en) 2011-12-02 2014-01-21 Microsoft Corporation User interface presenting an animated avatar performing a media reaction
US9100685B2 (en) 2011-12-09 2015-08-04 Microsoft Technology Licensing, Llc Determining audience state or interest using passive sensor data
US9229534B2 (en) 2012-02-28 2016-01-05 Apple Inc. Asymmetric mapping for tactile and non-tactile user interfaces
US9116545B1 (en) * 2012-03-21 2015-08-25 Hayes Solos Raffle Input detection
AU2013239179B2 (en) 2012-03-26 2015-08-20 Apple Inc. Enhanced virtual touchpad and touchscreen
US9128522B2 (en) 2012-04-02 2015-09-08 Google Inc. Wink gesture input for a head-mountable device
US9201512B1 (en) 2012-04-02 2015-12-01 Google Inc. Proximity sensing for input detection
US8898687B2 (en) 2012-04-04 2014-11-25 Microsoft Corporation Controlling a media program based on a media reaction
US8856815B2 (en) * 2012-04-27 2014-10-07 Intel Corporation Selective adjustment of picture quality features of a display
JP6001758B2 (en) 2012-04-27 2016-10-05 ヒューレット−パッカード デベロップメント カンパニー エル.ピー.Hewlett‐Packard Development Company, L.P. Audio input from user
US9851588B2 (en) 2012-05-01 2017-12-26 Luis Emilio LOPEZ-GARCIA Eyewear with a pair of light emitting diode matrices
CA2775700C (en) 2012-05-04 2013-07-23 Microsoft Corporation Determining a future portion of a currently presented media program
US9423870B2 (en) 2012-05-08 2016-08-23 Google Inc. Input determination method
US9736604B2 (en) 2012-05-11 2017-08-15 Qualcomm Incorporated Audio user interaction recognition and context refinement
US9746916B2 (en) 2012-05-11 2017-08-29 Qualcomm Incorporated Audio user interaction recognition and application interface
US9030505B2 (en) * 2012-05-17 2015-05-12 Nokia Technologies Oy Method and apparatus for attracting a user's gaze to information in a non-intrusive manner
US8902281B2 (en) 2012-06-29 2014-12-02 Alcatel Lucent System and method for image stabilization in videoconferencing
US9400551B2 (en) * 2012-09-28 2016-07-26 Nokia Technologies Oy Presentation of a notification based on a user's susceptibility and desired intrusiveness
US9642214B2 (en) 2012-10-22 2017-05-02 Whirlpool Corporation Sensor system for refrigerator
US9265458B2 (en) 2012-12-04 2016-02-23 Sync-Think, Inc. Application of smooth pursuit cognitive testing paradigms to clinical drug development
KR102081797B1 (en) 2012-12-13 2020-04-14 삼성전자주식회사 Glass apparatus and Method for controlling glass apparatus, Audio apparatus and Method for providing audio signal and Display apparatus
US9842511B2 (en) * 2012-12-20 2017-12-12 The United States Of America As Represented By The Secretary Of The Army Method and apparatus for facilitating attention to a task
US8769557B1 (en) 2012-12-27 2014-07-01 The Nielsen Company (Us), Llc Methods and apparatus to determine engagement levels of audience members
US8937552B1 (en) 2013-01-02 2015-01-20 The Boeing Company Heads down warning system
DE102013001383B4 (en) * 2013-01-26 2016-03-03 Audi Ag Method and system for operating a rear window wiper of a motor vehicle
US10365874B2 (en) * 2013-01-28 2019-07-30 Sony Corporation Information processing for band control of a communication stream
US9044863B2 (en) 2013-02-06 2015-06-02 Steelcase Inc. Polarized enhanced confidentiality in mobile camera applications
US9332411B2 (en) 2013-02-20 2016-05-03 Microsoft Technology Licensing, Llc User interruptibility aware notifications
US20140244363A1 (en) * 2013-02-28 2014-08-28 International Business Machines Corporation Publication of information regarding the quality of a virtual meeting
US9691382B2 (en) * 2013-03-01 2017-06-27 Mediatek Inc. Voice control device and method for deciding response of voice control according to recognized speech command and detection output derived from processing sensor data
US9380976B2 (en) 2013-03-11 2016-07-05 Sync-Think, Inc. Optical neuroinformatics
US9311837B2 (en) 2013-03-14 2016-04-12 Martigold Enterprises, Llc Methods and apparatus for message playback
EP2975997B1 (en) 2013-03-18 2023-07-12 Mirametrix Inc. System and method for on-axis eye gaze tracking
US9176582B1 (en) * 2013-04-10 2015-11-03 Google Inc. Input system
US10474793B2 (en) 2013-06-13 2019-11-12 Northeastern University Systems, apparatus and methods for delivery and augmentation of behavior modification therapy and teaching
US20150015509A1 (en) * 2013-07-11 2015-01-15 David H. Shanabrook Method and system of obtaining affective state from touch screen display interactions
US9196239B1 (en) 2013-08-30 2015-11-24 Amazon Technologies, Inc. Distracted browsing modes
US9420950B2 (en) 2013-09-17 2016-08-23 Pixart Imaging Inc. Retro-reflectivity array for enabling pupil tracking
WO2015050748A1 (en) * 2013-10-03 2015-04-09 Bae Systems Information And Electronic Systems Integration Inc. User friendly interfaces and controls for targeting systems
US10405786B2 (en) 2013-10-09 2019-09-10 Nedim T. SAHIN Systems, environment and methods for evaluation and management of autism spectrum disorder using a wearable data collection device
KR20150045683A (en) * 2013-10-21 2015-04-29 삼성전자주식회사 Method for providing custumized food life service and custumized food life service providing appratus
US20150139486A1 (en) * 2013-11-21 2015-05-21 Ziad Ali Hassan Darawi Electronic eyeglasses and method of manufacture thereto
US20150169048A1 (en) * 2013-12-18 2015-06-18 Lenovo (Singapore) Pte. Ltd. Systems and methods to present information on device based on eye tracking
US9633252B2 (en) 2013-12-20 2017-04-25 Lenovo (Singapore) Pte. Ltd. Real-time detection of user intention based on kinematics analysis of movement-oriented biometric data
US10180716B2 (en) 2013-12-20 2019-01-15 Lenovo (Singapore) Pte Ltd Providing last known browsing location cue using movement-oriented biometric data
US10073671B2 (en) * 2014-01-20 2018-09-11 Lenovo (Singapore) Pte. Ltd. Detecting noise or object interruption in audio video viewing and altering presentation based thereon
US9325938B2 (en) * 2014-03-13 2016-04-26 Google Inc. Video chat picture-in-picture
US9747722B2 (en) 2014-03-26 2017-08-29 Reflexion Health, Inc. Methods for teaching and instructing in a virtual world including multiple views
US10207405B2 (en) * 2014-03-31 2019-02-19 Christopher Deane Shaw Methods for spontaneously generating behavior in two and three-dimensional images and mechanical robots, and of linking this behavior to that of human users
US11436618B2 (en) * 2014-05-20 2022-09-06 [24]7.ai, Inc. Method and apparatus for providing customer notifications
DE202014006924U1 (en) * 2014-08-27 2015-11-30 GM Global Technology Operations LLC (n. d. Ges. d. Staates Delaware) Vehicle and windshield wiper control device for it
JP2016096430A (en) * 2014-11-13 2016-05-26 パナソニックIpマネジメント株式会社 Imaging device and imaging method
US9535497B2 (en) 2014-11-20 2017-01-03 Lenovo (Singapore) Pte. Ltd. Presentation of data on an at least partially transparent display based on user focus
US9652035B2 (en) * 2015-02-23 2017-05-16 International Business Machines Corporation Interfacing via heads-up display using eye contact
US10109228B2 (en) * 2015-04-10 2018-10-23 Samsung Display Co., Ltd. Method and apparatus for HDR on-demand attenuation control
US10546424B2 (en) 2015-04-15 2020-01-28 Google Llc Layered content delivery for virtual and augmented reality experiences
US10444931B2 (en) 2017-05-09 2019-10-15 Google Llc Vantage generation and interactive playback
US10567464B2 (en) 2015-04-15 2020-02-18 Google Llc Video compression with adaptive view-dependent lighting removal
US10419737B2 (en) 2015-04-15 2019-09-17 Google Llc Data structures and delivery methods for expediting virtual reality playback
US10469873B2 (en) 2015-04-15 2019-11-05 Google Llc Encoding and decoding virtual reality video
US10412373B2 (en) 2015-04-15 2019-09-10 Google Llc Image capture for virtual reality displays
US10540818B2 (en) 2015-04-15 2020-01-21 Google Llc Stereo image generation and interactive playback
US10440407B2 (en) * 2017-05-09 2019-10-08 Google Llc Adaptive control for immersive experience delivery
TWI571768B (en) * 2015-04-29 2017-02-21 由田新技股份有限公司 A human interface synchronous system, device, method, computer readable media, and computer program product
US10127525B2 (en) 2015-06-25 2018-11-13 International Business Machines Corporation Enhanced e-mail return receipts based on cognitive consideration
US10178150B2 (en) 2015-08-07 2019-01-08 International Business Machines Corporation Eye contact-based information transfer
JP6439052B2 (en) * 2015-08-26 2018-12-19 富士フイルム株式会社 Projection display
US10230805B2 (en) 2015-09-24 2019-03-12 International Business Machines Corporation Determining and displaying user awareness of information
US10008201B2 (en) * 2015-09-28 2018-06-26 GM Global Technology Operations LLC Streamlined navigational speech recognition
US10198233B2 (en) * 2016-03-01 2019-02-05 Microsoft Technology Licensing, Llc Updating displays based on attention tracking data
US10776827B2 (en) * 2016-06-13 2020-09-15 International Business Machines Corporation System, method, and recording medium for location-based advertisement
US10963914B2 (en) 2016-06-13 2021-03-30 International Business Machines Corporation System, method, and recording medium for advertisement remarketing
US10025383B2 (en) 2016-07-20 2018-07-17 Toyota Motor Engineering & Manufacturing North America, Inc. Automatic pause and resume of media content during automated driving based on driver's gaze
US11086593B2 (en) 2016-08-26 2021-08-10 Bragi GmbH Voice assistant for wireless earpieces
US10091148B2 (en) 2016-08-29 2018-10-02 International Business Machines Corporation Message delivery management based on device accessibility
US10237218B2 (en) 2016-08-29 2019-03-19 International Business Machines Corporation Message delivery management based on device accessibility
US9948729B1 (en) 2016-10-15 2018-04-17 International Business Machines Corporation Browsing session transfer using QR codes
CN106970711B (en) * 2017-04-27 2020-06-30 上海临奇智能科技有限公司 Method and equipment for aligning VR display device and display terminal screen
US10474227B2 (en) 2017-05-09 2019-11-12 Google Llc Generation of virtual reality with 6 degrees of freedom from limited viewer data
US11221497B2 (en) 2017-06-05 2022-01-11 Steelcase Inc. Multiple-polarization cloaking
JP2019005842A (en) * 2017-06-23 2019-01-17 カシオ計算機株式会社 Robot, robot controlling method, and program
US11181977B2 (en) * 2017-11-17 2021-11-23 Dolby Laboratories Licensing Corporation Slippage compensation in eye tracking
JP2019101492A (en) * 2017-11-28 2019-06-24 トヨタ自動車株式会社 Communication apparatus
US10951761B1 (en) 2017-12-20 2021-03-16 Wells Fargo Bank, N.A. System and method for live and virtual support interaction
US11106124B2 (en) 2018-02-27 2021-08-31 Steelcase Inc. Multiple-polarization cloaking for projected and writing surface view screens
US10871874B2 (en) 2018-05-09 2020-12-22 Mirametrix Inc. System and methods for device interaction using a pointing device and attention sensing device
CN108897589B (en) * 2018-05-31 2020-10-27 刘国华 Human-computer interaction method and device in display equipment, computer equipment and storage medium
US10839811B2 (en) 2018-06-08 2020-11-17 The Toronto-Dominion Bank System, device and method for enforcing privacy during a communication session with a voice assistant
US10831923B2 (en) 2018-06-08 2020-11-10 The Toronto-Dominion Bank System, device and method for enforcing privacy during a communication session with a voice assistant
US11023200B2 (en) * 2018-09-27 2021-06-01 The Toronto-Dominion Bank Systems, devices and methods for delivering audible alerts
US10978063B2 (en) * 2018-09-27 2021-04-13 The Toronto-Dominion Bank Systems, devices and methods for delivering audible alerts
CN109324689A (en) * 2018-09-30 2019-02-12 平安科技(深圳)有限公司 Test topic amplification method, system and equipment based on eyeball moving track
US11500185B2 (en) * 2018-11-09 2022-11-15 Meta Platforms Technologies, Llc Catadioptric and refractive optical structures for beam shaping
US10696160B2 (en) * 2018-11-28 2020-06-30 International Business Machines Corporation Automatic control of in-vehicle media
US10978064B2 (en) 2018-11-30 2021-04-13 International Business Machines Corporation Contextually relevant spoken device-to-device communication between IoT devices
US10770072B2 (en) 2018-12-10 2020-09-08 International Business Machines Corporation Cognitive triggering of human interaction strategies to facilitate collaboration, productivity, and learning
EP3894999A1 (en) * 2019-01-17 2021-10-20 Apple Inc. Head-mounted display with facial interface for sensing physiological conditions
US11269591B2 (en) 2019-06-19 2022-03-08 International Business Machines Corporation Artificial intelligence based response to a user based on engagement level
CN110647800B (en) * 2019-08-06 2022-06-03 广东工业大学 Eye contact communication detection method based on deep learning
CN112584280B (en) * 2019-09-27 2022-11-29 百度在线网络技术(北京)有限公司 Control method, device, equipment and medium for intelligent equipment
US11016656B1 (en) * 2020-02-14 2021-05-25 International Business Machines Corporation Fault recognition self-learning graphical user interface
JP7380365B2 (en) * 2020-03-19 2023-11-15 マツダ株式会社 state estimation device
US11580984B2 (en) * 2020-03-20 2023-02-14 At&T Intellectual Property I, L.P. Virtual assistant-initiated conversations
US11211095B1 (en) * 2020-06-19 2021-12-28 Harman International Industries, Incorporated Modifying media content playback based on user mental state
US11589332B2 (en) * 2020-07-08 2023-02-21 Dish Network L.L.C. Automatically suspending or reducing portable device notifications when viewing audio/video programs
US11755277B2 (en) 2020-11-05 2023-09-12 Harman International Industries, Incorporated Daydream-aware information recovery system
WO2023037348A1 (en) * 2021-09-13 2023-03-16 Benjamin Simon Thompson System and method for monitoring human-device interactions

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7164886B2 (en) * 2001-10-30 2007-01-16 Texas Instruments Incorporated Bluetooth transparent bridge

Family Cites Families (97)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4302011A (en) * 1976-08-24 1981-11-24 Peptek, Incorporated Video game apparatus and method
US4169663A (en) * 1978-02-27 1979-10-02 Synemed, Inc. Eye attention monitor
US4595990A (en) 1980-12-31 1986-06-17 International Business Machines Corporation Eye controlled information transfer
US4659197A (en) 1984-09-20 1987-04-21 Weinblatt Lee S Eyeglass-frame-mounted eye-movement-monitoring apparatus
US4755045A (en) * 1986-04-04 1988-07-05 Applied Science Group, Inc. Method and system for generating a synchronous display of a visual presentation and the looking response of many viewers
US4973149A (en) * 1987-08-19 1990-11-27 Center For Innovative Technology Eye movement detector
US4836670A (en) 1987-08-19 1989-06-06 Center For Innovative Technology Eye movement detector
US5016282A (en) 1988-07-14 1991-05-14 Atr Communication Systems Research Laboratories Eye tracking image pickup apparatus for separating noise from feature portions
US4950069A (en) 1988-11-04 1990-08-21 University Of Virginia Eye movement detector with improved calibration and speed
JP2522859B2 (en) 1990-12-14 1996-08-07 日産自動車株式会社 Eye position detection device
JPH0761314B2 (en) 1991-10-07 1995-07-05 コナミ株式会社 Retinal reflected light amount measuring device and eye gaze detecting device using the device
US5335276A (en) 1992-12-16 1994-08-02 Texas Instruments Incorporated Communication system and methods for enhanced information transfer
JPH0743804A (en) 1993-07-30 1995-02-14 Canon Inc Function selecting device
US5481622A (en) * 1994-03-01 1996-01-02 Rensselaer Polytechnic Institute Eye tracking apparatus and method employing grayscale threshold values
US5422690A (en) 1994-03-16 1995-06-06 Pulse Medical Instruments, Inc. Fitness impairment tester
US5689241A (en) 1995-04-24 1997-11-18 Clarke, Sr.; James Russell Sleep detection and driver alert apparatus
US5649061A (en) 1995-05-11 1997-07-15 The United States Of America As Represented By The Secretary Of The Army Device and method for estimating a mental decision
JPH0934424A (en) 1995-07-21 1997-02-07 Mitsubishi Electric Corp Display system
US6001065A (en) * 1995-08-02 1999-12-14 Ibva Technologies, Inc. Method and apparatus for measuring and analyzing physiological signals for active or passive control of physical and virtual spaces and the contents therein
JPH0981309A (en) * 1995-09-13 1997-03-28 Toshiba Corp Input device
US6158432A (en) 1995-12-08 2000-12-12 Cardiopulmonary Corporation Ventilator control system and method
US5912721A (en) * 1996-03-13 1999-06-15 Kabushiki Kaisha Toshiba Gaze detection apparatus and its method as well as information display apparatus
US5835083A (en) 1996-05-30 1998-11-10 Sun Microsystems, Inc. Eyetrack-driven illumination and information display
US5886683A (en) 1996-06-25 1999-03-23 Sun Microsystems, Inc. Method and apparatus for eyetrack-driven information retrieval
US5831594A (en) 1996-06-25 1998-11-03 Sun Microsystems, Inc. Method and apparatus for eyetrack derived backtrack
US5731805A (en) 1996-06-25 1998-03-24 Sun Microsystems, Inc. Method and apparatus for eyetrack-driven text enlargement
US6437758B1 (en) 1996-06-25 2002-08-20 Sun Microsystems, Inc. Method and apparatus for eyetrack—mediated downloading
US5898423A (en) 1996-06-25 1999-04-27 Sun Microsystems, Inc. Method and apparatus for eyetrack-driven captioning
US6078310A (en) 1996-06-26 2000-06-20 Sun Microsystems, Inc. Eyetracked alert messages
US5850211A (en) 1996-06-26 1998-12-15 Sun Microsystems, Inc. Eyetrack-driven scrolling
US5689619A (en) * 1996-08-09 1997-11-18 The United States Of America As Represented By The Secretary Of The Army Eyetracker control of heads-up displays
US5944530A (en) 1996-08-13 1999-08-31 Ho; Chi Fai Learning method and system that consider a student's concentration level
US6542081B2 (en) 1996-08-19 2003-04-01 William C. Torch System and method for monitoring eye movement
US6242546B1 (en) * 1997-02-24 2001-06-05 Daicel Chemical Industries, Ltd. Process for producing vinyl polymers
US6067069A (en) 1997-03-14 2000-05-23 Krause; Philip R. User interface for dynamic presentation of text with a variable speed based on a cursor location in relation to a neutral, deceleration, and acceleration zone
US6351273B1 (en) 1997-04-30 2002-02-26 Jerome H. Lemelson System and methods for controlling automatic scrolling of information on a display or screen
US6353824B1 (en) 1997-11-18 2002-03-05 Apple Computer, Inc. Method for dynamic presentation of the contents topically rich capsule overviews corresponding to the plurality of documents, resolving co-referentiality in document segments
JP3361980B2 (en) 1997-12-12 2003-01-07 株式会社東芝 Eye gaze detecting apparatus and method
US6092058A (en) * 1998-01-08 2000-07-18 The United States Of America As Represented By The Secretary Of The Army Automatic aiding of human cognitive functions with computerized displays
US6152563A (en) 1998-02-20 2000-11-28 Hutchinson; Thomas E. Eye gaze direction tracker
US6204828B1 (en) 1998-03-31 2001-03-20 International Business Machines Corporation Integrated gaze/manual cursor positioning system
JP3285545B2 (en) 1998-09-29 2002-05-27 松下電器産業株式会社 Motion detection circuit and noise reduction device
EP1039752A4 (en) 1998-10-09 2007-05-02 Sony Corp Communication apparatus and method
GB9823977D0 (en) 1998-11-02 1998-12-30 Scient Generics Ltd Eye tracking method and apparatus
US6282553B1 (en) 1998-11-04 2001-08-28 International Business Machines Corporation Gaze-based secure keypad entry system
TW413844B (en) * 1998-11-26 2000-12-01 Samsung Electronics Co Ltd Manufacturing methods of thin film transistor array panels for liquid crystal displays and photolithography method of thin films
US6526159B1 (en) 1998-12-31 2003-02-25 Intel Corporation Eye tracking for resource and power management
US6393136B1 (en) 1999-01-04 2002-05-21 International Business Machines Corporation Method and apparatus for determining eye contact
US6539100B1 (en) * 1999-01-27 2003-03-25 International Business Machines Corporation Method and apparatus for associating pupils with subjects
US6577329B1 (en) 1999-02-25 2003-06-10 International Business Machines Corporation Method and system for relevance feedback through gaze tracking and ticker interfaces
US7120880B1 (en) 1999-02-25 2006-10-10 International Business Machines Corporation Method and system for real-time determination of a subject's interest level to media content
WO2000069158A1 (en) 1999-05-06 2000-11-16 Kyocera Corporation Videophone system using cellular telephone terminal
US6401050B1 (en) 1999-05-21 2002-06-04 The United States Of America As Represented By The Secretary Of The Navy Non-command, visual interaction system for watchstations
US6803887B1 (en) 1999-07-22 2004-10-12 Swisscom Mobile Ag Method and corresponding devices for delivering useful data concerning observed objects
US6618716B1 (en) 1999-07-30 2003-09-09 Microsoft Corporation Computational architecture for managing the transmittal and rendering of information, alerts, and notifications
DE19953835C1 (en) 1999-10-30 2001-05-23 Hertz Inst Heinrich Computer-aided method for contactless, video-based gaze direction determination of a user's eye for eye-guided human-computer interaction and device for carrying out the method
US20020024506A1 (en) 1999-11-09 2002-02-28 Flack James F. Motion detection and tracking system to control navigation and display of object viewers
US6147612A (en) * 1999-11-10 2000-11-14 Ruan; Ying Chao Dual function optic sleep preventing device for vehicle drivers
JP2001154631A (en) 1999-11-24 2001-06-08 Fujitsu General Ltd Method and device for controlling gradation in pdp
GB2357650A (en) 1999-12-23 2001-06-27 Mitsubishi Electric Inf Tech Method for tracking an area of interest in a video image, and for transmitting said area
JP2001231062A (en) 2000-02-17 2001-08-24 Nec Shizuoka Ltd Mobile phone system and its hand-over method
US7124374B1 (en) * 2000-03-06 2006-10-17 Carl Herman Haken Graphical interface control system
JP5243679B2 (en) 2000-03-16 2013-07-24 マイクロソフト コーポレーション Notification platform architecture
US6456262B1 (en) 2000-05-09 2002-09-24 Intel Corporation Microdisplay with eye gaze detection
US6603491B2 (en) 2000-05-26 2003-08-05 Jerome H. Lemelson System and methods for controlling automatic scrolling of information on a display or screen
US7289102B2 (en) 2000-07-17 2007-10-30 Microsoft Corporation Method and apparatus using multiple sensors in a device with a display
US7302280B2 (en) 2000-07-17 2007-11-27 Microsoft Corporation Mobile phone operation based upon context sensing
US6608615B1 (en) 2000-09-19 2003-08-19 Intel Corporation Passive gaze-driven browsing
US6925425B2 (en) 2000-10-14 2005-08-02 Motorola, Inc. Method and apparatus for vehicle operator performance assessment and improvement
US6731307B1 (en) 2000-10-30 2004-05-04 Koninklijke Philips Electronics N.V. User interface/entertainment device that simulates personal interaction and responds to user's mental state and/or personality
JP2002157607A (en) 2000-11-17 2002-05-31 Canon Inc System and method for image generation, and storage medium
US20020102947A1 (en) * 2001-01-09 2002-08-01 Sital Technology And Hardwear Development (1997) Ltd. Cell phone -hand set combination unit
US6964023B2 (en) 2001-02-05 2005-11-08 International Business Machines Corporation System and method for multi-modal focus detection, referential ambiguity resolution and mood classification using multi-modal input
US6628918B2 (en) 2001-02-21 2003-09-30 Sri International, Inc. System, method and computer program product for instant group learning feedback via image-based marking and aggregation
GB2372683A (en) 2001-02-23 2002-08-28 Ibm Eye tracking display apparatus
US6397137B1 (en) 2001-03-02 2002-05-28 International Business Machines Corporation System and method for selection of vehicular sideview mirrors via eye gaze
US20020137552A1 (en) 2001-03-20 2002-09-26 Cannon Joseph M. Indication unit for a portable wireless device
US7068813B2 (en) 2001-03-28 2006-06-27 Koninklijke Philips Electronics N.V. Method and apparatus for eye gazing smart display
US7027655B2 (en) * 2001-03-29 2006-04-11 Electronics For Imaging, Inc. Digital image compression with spatially varying quality levels determined by identifying areas of interest
US20020144259A1 (en) 2001-03-29 2002-10-03 Philips Electronics North America Corp. Method and apparatus for controlling a media player based on user activity
US6496117B2 (en) 2001-03-30 2002-12-17 Koninklijke Philips Electronics N.V. System for monitoring a driver's attention to driving
US6578962B1 (en) 2001-04-27 2003-06-17 International Business Machines Corporation Calibration-free eye gaze tracking
US6886137B2 (en) 2001-05-29 2005-04-26 International Business Machines Corporation Eye gaze control of dynamic information presentation
JP4530587B2 (en) 2001-07-30 2010-08-25 株式会社リコー Broadcast receiver
US20030038754A1 (en) 2001-08-22 2003-02-27 Mikael Goldstein Method and apparatus for gaze responsive text presentation in RSVP display
US7284201B2 (en) * 2001-09-20 2007-10-16 Koninklijke Philips Electronics N.V. User attention-based adaptation of quality level to improve the management of real-time multi-media content delivery and distribution
US7536704B2 (en) 2001-10-05 2009-05-19 Opentv, Inc. Method and apparatus for automatic pause and resume of playback for a popup on interactive TV
US20030081834A1 (en) 2001-10-31 2003-05-01 Vasanth Philomin Intelligent TV room
US6937745B2 (en) 2001-12-31 2005-08-30 Microsoft Corporation Machine vision system and method for estimating and tracking facial pose
US7554541B2 (en) 2002-06-28 2009-06-30 Autodesk, Inc. Widgets displayed and operable on a surface of a volumetric display enclosure
US6784916B2 (en) 2002-02-11 2004-08-31 Telbotics Inc. Video conferencing apparatus
US7206435B2 (en) 2002-03-26 2007-04-17 Honda Giken Kogyo Kabushiki Kaisha Real-time eye detection and tracking under various light conditions
US20040203635A1 (en) 2002-04-23 2004-10-14 Say-Yee Wen Method of transferring data during interactive teaching procedure
US6859144B2 (en) * 2003-02-05 2005-02-22 Delphi Technologies, Inc. Vehicle situation alert system with eye gaze controlled alert signal generation
US8292433B2 (en) 2003-03-21 2012-10-23 Queen's University At Kingston Method and apparatus for communication between humans and devices
US7762665B2 (en) 2003-03-21 2010-07-27 Queen's University At Kingston Method and apparatus for communication between humans and devices
US7401920B1 (en) * 2003-05-20 2008-07-22 Elbit Systems Ltd. Head mounted eye tracking and display system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7164886B2 (en) * 2001-10-30 2007-01-16 Texas Instruments Incorporated Bluetooth transparent bridge

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9851939B2 (en) 2015-05-14 2017-12-26 International Business Machines Corporation Reading device usability
US9851940B2 (en) 2015-05-14 2017-12-26 International Business Machines Corporation Reading device usability
US10331398B2 (en) 2015-05-14 2019-06-25 International Business Machines Corporation Reading device usability

Also Published As

Publication number Publication date
US20150042555A1 (en) 2015-02-12
US8322856B2 (en) 2012-12-04
US20110043617A1 (en) 2011-02-24
US10296084B2 (en) 2019-05-21
US20120078623A1 (en) 2012-03-29
US20130231938A1 (en) 2013-09-05
US7762665B2 (en) 2010-07-27
US20170371407A1 (en) 2017-12-28
US8672482B2 (en) 2014-03-18
US20040183749A1 (en) 2004-09-23
US10915171B2 (en) 2021-02-09
US20120268367A1 (en) 2012-10-25
US8096660B2 (en) 2012-01-17
US20200097078A1 (en) 2020-03-26

Similar Documents

Publication Publication Date Title
US10915171B2 (en) Method and apparatus for communication between humans and devices
US8292433B2 (en) Method and apparatus for communication between humans and devices
AU2004221365B2 (en) Method and apparatus for communication between humans and devices
US9645642B2 (en) Low distraction interfaces
Shell et al. Interacting with groups of computers
Vertegaal et al. Designing for augmented attention: Towards a framework for attentive user interfaces
WO2018066191A1 (en) Server, client terminal, control method, and storage medium
JP2010529738A (en) Home video communication system
Vertegaal et al. Designing attentive cell phone using wearable eyecontact sensors
WO2019220729A1 (en) Information processing device, information processing method, and storage medium
US20210266499A1 (en) Single Point Devices That Connect to a Display Device
CN109804407B (en) Care maintenance system and server
Vertegaal et al. Attentive user interfaces: the surveillance and sousveillance of gaze-aware objects
Shell et al. ECSGlasses and EyePliances: using attention to open sociable windows of interaction
US9137648B2 (en) Peripheral computing device
CA2423142C (en) Method and apparatus for communication between humans and devices
Jabarin et al. Establishing remote conversations through eye contact with physical awareness proxies
WO2020175115A1 (en) Information processing device and information processing method
US11909544B1 (en) Electronic devices and corresponding methods for redirecting user interface controls during a videoconference
Mamuji et al. Attentive Headphones: Augmenting Conversational Attention with a Real World TiVo
US20240097927A1 (en) Electronic Devices and Corresponding Methods for Redirecting User Interface Controls During a Videoconference
Mamuji Eyereason and Eyepliances: Tools for Driving Interactions Using Attention-based Reasoning
JP2018133723A (en) Display device, control method of the same, and control program

Legal Events

Date Code Title Description
AS Assignment

Owner name: QUEEN'S UNIVERSITY AT KINGSTON, CANADA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:VERTEGAAL, ROEL;REEL/FRAME:039378/0379

Effective date: 20030715

Owner name: QUEEN'S UNIVERSITY AT KINGSTON, CANADA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SHELL, JEFFREY S.;REEL/FRAME:039378/0395

Effective date: 20061010

Owner name: QUEEN'S UNIVERSITY AT KINGSTON, CANADA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:DICKIE, CONNOR;REEL/FRAME:039378/0424

Effective date: 20140103

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION