WO2006130364A2 - Monitoring system with speech recognition - Google Patents

Monitoring system with speech recognition

Info

Publication number
WO2006130364A2
WO2006130364A2, PCT/US2006/019483
Authority
WO
WIPO (PCT)
Prior art keywords
speech
transducer
control circuitry
circuitry
software
Prior art date
Application number
PCT/US2006/019483
Other languages
French (fr)
Other versions
WO2006130364A3 (en)
Inventor
Lee D. Tice
Original Assignee
Honeywell International, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honeywell International, Inc. filed Critical Honeywell International, Inc.
Priority to EP06760193A priority Critical patent/EP1889464B1/en
Publication of WO2006130364A2 publication Critical patent/WO2006130364A2/en
Publication of WO2006130364A3 publication Critical patent/WO2006130364A3/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 11/00 Telephonic communication systems specially adapted for combination with other electrical systems
    • H04M 11/007 Telephonic communication systems specially adapted for combination with other electrical systems with remote control systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/2803 Home automation networks
    • H04L 12/2816 Controlling appliance services of a home automation network by calling their functionalities
    • H04L 12/2818 Controlling appliance services of a home automation network by calling their functionalities from a device located outside both the home and the home network

Abstract

A system for monitoring conditions associated with an individual in a region includes at least one speech input transducer and speech processing software coupled thereto. Results of the speech processing can initiate communications with a displaced communications device such as a telephone or a computer to provide a source of feedback.

Description

MONITORING SYSTEM WITH SPEECH RECOGNITION
FIELD OF THE INVENTION
[0001] The present invention relates to the remote monitoring of a patient that incorporates the ability of that patient to call for help during emergency situations without carrying a specific device on their person. More specifically, the present invention relates to monitoring systems with acoustical devices and speech recognition that are capable of recognizing a patient's request for assistance and can automatically initiate a summons for help.
BACKGROUND OF THE INVENTION
[0002] Systems are known that monitor a resident within a home as part of a home monitoring system. One such system has been disclosed in U.S. Patent No. 6,402,691 B1, entitled "In-Home Patient Monitoring System," issued June 11, 2002. These systems save costs by performing physiological testing of the person and transmitting that information to a remote monitoring location. In addition, these systems can include an automated call function.
Questions can be asked relative to the resident's condition and medications. Another such system has been disclosed in United States Patent Application No. 10/956,681, filed October
1, 2004. The '681 application has been assigned to the assignee hereof and is incorporated herein by reference.
[0003] Known systems rely upon the resident having the mobility to use the system. This includes answering any telephone query that is automatically generated on a periodic basis.
In some cases, these queries can be less than daily depending on the condition of the resident and his/her medications.
[0004] In a home monitoring system, a resident is monitored for physiological and other conditions indicative of health and well-being. The physiological monitoring can include the person's vital signs such as weight, blood pressure, pulse rate and oxygen saturation. The system may also incorporate medication control to support the health and well-being of the resident.
[0005] In the event that the physiological monitoring determines that an emergency situation is prevalent or the physiological measurement is not completed as scheduled or the medication is not taken as prescribed, a call can be initiated to a remote monitoring facility to provide an alert. The remote monitoring facility can respond by calling the resident or patient or by visiting the person. However, the follow-up call may find that no one answers the phone and therefore a visit is scheduled for a later time.
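For illustration of the decision rule in paragraph [0005], a minimal Python sketch follows; the class, field names and threshold semantics are assumptions for illustration and are not taken from the disclosure.

    from dataclasses import dataclass

    @dataclass
    class DailyCheck:
        emergency_detected: bool        # e.g., physiological readings outside safe limits
        measurement_completed: bool     # scheduled physiological measurement done
        medication_taken: bool          # medication taken as prescribed

    def should_call_monitoring_facility(check: DailyCheck) -> bool:
        """Initiate an alert call if any condition in paragraph [0005] holds."""
        return (check.emergency_detected
                or not check.measurement_completed
                or not check.medication_taken)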
[0006] The resident may not be answering the phone because he/she went outside, could not hear the phone ring, or was not able to respond. Other reasons for not being able to respond include, the person fell down and cannot get up, or, had an earlier emergency and is now incapacitated or unavailable at the scheduled time. Because the cause is not certain, a routine follow-up call may be made. It therefore becomes extremely important to initiate a call at the first sign of an abnormal situation. Otherwise, a person that falls could be down for a very long time and need assistance. The person might also miss vital medications or monitoring.
[0007] There are also systems wherein the resident carries a device that can be activated to send a signal for help if he/she falls or becomes demobilized. However, these systems rely upon the person having the transmitting device on their person at the time of the emergency. It is possible if not likely that the resident will remove the device and forget to put it back on under certain circumstances such as bathing, sleeping, or changing clothing. This raises the potential that the resident may not be able to summon help when demobilized by falling or by other reasons.
[0008] Therefore, there exists a need for improved systems for summoning help in situations where time can be important to survival or relief of discomfort.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] Fig. 1 is a system in accordance with the invention; and
[00010] Figs. 2A-2E taken together illustrate a process which can be carried out by the system of Fig. 1.
DETAILED DESCRIPTION OF THE INVENTION
[00011] While embodiments of this invention can take many different forms, specific embodiments thereof are shown in the drawings and will be described herein in detail with the understanding that the present disclosure is to be considered as an exemplification of the principles of the invention, as well as the best mode of practicing same, and is not intended to limit the invention to the specific embodiment illustrated.
[00012] In systems that embody the invention, acoustic transducers can be located in primary regions of a home. These acoustic transducers can be connected to a computer or similar device that incorporates software and that can perform a speech recognition function.
[00013] The speech of a resident may be used as a basis for programming a speech recognition function such that the resident's speech can be recognized while other ambient noise or sounds are present. Some ambient sounds can come from a radio or TV that is running. The resident can enter specific sounds, words or word phrases that will be recognized by the home monitoring system. Some of these words could include "help," "help me," or other words descriptive of the situation such as "it hurts," etc.
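A brief, hedged sketch of the phrase matching described in paragraph [00013] follows; the phrase set and function name are assumptions, and the speech-to-text step that produces the transcript is taken as given rather than specified here.

    # Resident-entered examples from paragraph [00013]; the set is configurable.
    DISTRESS_PHRASES = {"help", "help me", "it hurts"}

    def matches_distress_phrase(transcript: str, phrases=DISTRESS_PHRASES) -> bool:
        """Return True if any configured distress phrase appears in the
        recognized transcript."""
        text = transcript.lower()
        return any(phrase in text for phrase in phrases)

    # Example: matches_distress_phrase("please help me, I fell") -> True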
[00014] Instead of words, characteristics of the resident's speech could be programmed and recognized so that any high stress speech pattern will initiate a response. In addition, it is within the scope of this invention that the monitoring system could have control of the radio or TV and interrupt power or audio output from these devices if it recognizes the resident's speech. Different recognition techniques and methods can be used to identify a situation where the resident needs assistance.
[00015] In a residence, there may be a television or other source of sounds and voices that are not related to the patient's need for help. Speech recognition and system activation software may have problems determining whether stressful speech or sound comes from the patient or from a television or other audio system.
[00016] In order to distinguish the resident's speech from simultaneous, loud television or audio output, a sound transducer can be located at the output of these devices. The signals from these sound transducers can then be transmitted to the monitoring system wherein the software uses them to compensate the other sensors in the residence.
[00017] The transducers throughout the residence can each be compensated in software or hardware to minimize and cancel signals that relate to the television or other audio system. With this compensation, the transducers throughout the residence can have a high sensitivity to those sounds of interest. These would include the voice of the patient.
[00018] When the home monitoring system recognizes, through speech recognition, a situation that needs attention, it automatically initiates a call to one or more preprogrammed remote locations to summon help. The message for help may include a speech recording of the resident describing the situation or the stored words that the system recognized for activating the automatic call for help.
[00019] The remote location can include a neighbor, a relative, a friend, or a central monitoring station having medical emergency response capabilities. The neighbor can be a very important person because of his/her close proximity to the resident and ability to respond in a very short time. The system may be configured such that it first calls certain locations and then calls others after some verification of the situation has been accomplished.
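The tiered calling of paragraphs [00018] and [00019] could be organized along the following lines; the contact lists and the place_call callable are hypothetical stand-ins for whatever dialer the system actually uses.

    # Hypothetical contact tiers: a neighbor or relative is called first because
    # of proximity; other locations are called after some verification.
    FIRST_CALL_LIST = ["neighbor", "relative"]
    SECOND_CALL_LIST = ["central monitoring station"]

    def summon_help(place_call, situation_verified: bool) -> None:
        """Call first-tier contacts immediately; escalate once verified."""
        for contact in FIRST_CALL_LIST:
            place_call(contact)
        if situation_verified:
            for contact in SECOND_CALL_LIST:
                place_call(contact)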
[00020] In an aspect of the invention, the system may relay messages from the remote location back to the resident or establish a two-way direct communication with the remote location. In this embodiment, speakers could be located strategically within the home. The resident can now have a dialog describing the situation through the transducers and speakers in the home.
[00021] In another aspect of the invention, the system could incorporate a verbal prompt to the resident to describe his/her situation in sufficient detail that it can determine the appropriate response and which locations to call first.
[00022] Fig. 1 illustrates a system 10 that embodies the invention. The system 10 includes control circuitry 12. Circuitry 12 can include a programmable processor 12a and associated control software 12b.
[00023] Control circuitry 12 is linked via a wired or wireless medium (or both) 14 to a plurality of audio input transducers, for example microphones, 18 and a plurality of audio output transducers, for example loud speakers, 20. In one embodiment the respective input and output transducers 18i, 20i could be packaged together in a single housing.
[00024] The input and output transducers such as 18i, 20i can be located in various locations or rooms of a residence where the resident would be present at least from time to time. These could include, without limitation, living rooms, kitchens, bathrooms, bedrooms, halls and the like. [00025] The circuitry 12 could also be linked, via a wired or wireless medium 16, to one or more remote monitoring stations. Software 12b can receive and initiate communications via medium 16.
[00026] Relative to the system 10, the acoustic input transducers, such as transducer 18i, can be located in heavily used regions of a home or residence. The processor 12a and associated software 12b can carry out a speech recognition function based on previously received speech of the resident of a region or home where the system 10 has been installed.
[00027] Characteristics of the resident's speech can be incorporated or stored in the voice recognition software of the control circuitry 12. Advantageously, received high stress speech patterns from the resident can be recognized and can initiate communications via medium 16 to a remote monitoring location, which could include a neighbor's or relative's house as well as a monitoring facility.
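The disclosure does not say how high stress speech patterns are detected; as one possible illustration only, the sketch below flags frames whose energy is far above a baseline learned from the resident's normal speech. The function name, factor and baseline are assumptions.

    import numpy as np

    def is_high_stress(frame: np.ndarray, baseline_rms: float, factor: float = 3.0) -> bool:
        """Crude proxy only: flag a speech frame whose RMS energy greatly
        exceeds the resident's normal-speech baseline."""
        rms = float(np.sqrt(np.mean(frame.astype(float) ** 2)))
        return rms > factor * baseline_rms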
[00028] The control circuitry 12 can communicate directly with the resident via medium
14 and the audio output transducers 20. In addition, the software 12b can couple audio from the resident, via medium 14 to one or more remote locations via medium 16.
[00029] The audio input transducers 18 can be compensated to be able to distinguish a resident's speech from simultaneously present background audio such as from televisions, radios and the like all without limitation.
[00030] The ability to distinguish the voice of a resident from other sources of sounds within a residence is an important advantage of the system 10. A sound source such as a TV or RADIO can intermittently or continuously emit audio, such as A-T, A-R throughout the residence, see Fig. 1. Preferably an adjustment can be made to remove the sounds associated with sources other than the person in the residence.
[00031] The graphs of Figs. 2A-2E illustrate a process for compensating the sensors 18 by measuring the sounds in the residence. The relationships, phases, and amplitudes of associated sound signals from each acoustical sensor, such as 18i, in the residence can then be stored by control circuitry 12.
[00032] Fig. 2A illustrates a signal 30 from an acoustical sensor such as 18i positioned to receive audio, such as A-T or A-R, from a sound source such as a TV or from a RADIO. The signal 30 has an associated amplitude. This sound source will produce acoustical waves that travel throughout the residence and will be received by acoustical sensors 18 at various locations. These locations will have different distances relative to the source location.
[00033] As sound travels through the residence, it takes time due to its propagation rate. A corresponding signal will be produced at the other locations of acoustical sensors 18. Those signals will each have a different phase and amplitude. Fig. 2B illustrates a signal 32 from an acoustical sensor 18j at another location within the residence. Signal 32 exhibits a phase shift and amplitude different than that of the signal 30 of Fig. 2A.
[00034] In a first step, the phase of the signal 32 in Fig. 2B is adjusted to match the signal
30 in Fig. 2A. This can be done by storing the signals in a memory of a processor 12a for a period of time and then using the processor 12a to shift one signal in time relative to the other such that the signals are crossing zero at the same time.
[00035] By using the zero crossings, the processor 12a does not yet have to consider the amplitude. Once the phase shift has been measured, it is stored for use in compensating the signal 30 shown in Fig. 2A.
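A minimal sketch of the zero-crossing alignment of paragraphs [00034] and [00035], with the shift expressed in samples; the function names are assumptions.

    import numpy as np

    def first_rising_zero_crossing(x: np.ndarray) -> int:
        """Index of the first sample where the signal goes from negative to
        non-negative."""
        neg = x < 0
        idx = np.where(neg[:-1] & ~neg[1:])[0]
        return int(idx[0]) + 1 if idx.size else 0

    def delay_in_samples(source_signal: np.ndarray, sensor_signal: np.ndarray) -> int:
        """Shift, in samples, that lines the sensor signal up with the source
        signal so that both cross zero at the same time."""
        return (first_rising_zero_crossing(sensor_signal)
                - first_rising_zero_crossing(source_signal))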
[00036] Fig. 2C illustrates adjusting the signal 32 of Fig. 2B so that it is now in phase with the source signal 30, Fig. 2A. This is in preparation for the next step which is to adjust the amplitude for the eventual cancellation of the non-resident sound source signals. In this case the two signals being compared have the same phase but not the same amplitude.
[00037] One approach is then to invert the signal, see signal 32", Fig. 2D, from the acoustical sensor 18j at the location that produced signal 32, Fig. 2B, and start adding the signals of Fig. 2A and Fig. 2D. If the result is non-zero, then the amplitude of the signal 32" in Fig. 2D is altered, increased in this example, such that the result of that processing is closer to zero.
[00038] Adjusting the amplitude of the signal 32" in Fig. 2D continues until the result is as close to zero as possible. When the amplitude of signal 32" of Fig. 2D is the same as the amplitude of signal 30 of Fig. 2A, then adding them when they are out of phase as illustrated will result in a substantially zero amplitude signal as illustrated in Fig. 2E.
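One way the amplitude adjustment of paragraphs [00037] and [00038] might be coded, assuming the two signals have already been time-aligned (for example with the preceding sketch) and an arbitrary step size:

    import numpy as np

    def cancellation_gain(source_signal: np.ndarray, aligned_sensor_signal: np.ndarray,
                          step: float = 0.01, max_iter: int = 100000) -> float:
        """Grow the gain on the inverted sensor signal until adding it to the
        source signal no longer reduces the residual energy."""
        gain = 0.0
        best = float(np.sum((source_signal - gain * aligned_sensor_signal) ** 2))
        for _ in range(max_iter):
            trial = float(np.sum((source_signal - (gain + step) * aligned_sensor_signal) ** 2))
            if trial >= best:
                break
            gain, best = gain + step, trial
        return gain

The loop simply mirrors the adjust-and-re-add wording; the same gain can also be obtained in closed form as np.dot(source_signal, aligned_sensor_signal) / np.dot(aligned_sensor_signal, aligned_sensor_signal).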
[00039] The phase and amplitude information can be stored in memory, for example,
EEPROM and used to dynamically adjust all acoustical input sensors 18 whenever the sound source, such as the TV or RADIO, is turned ON. [00040] Once the acoustical sensor signals, such as the signal 32 from the location associated with Fig. 2B, have been compensated for the sound source producing the signals of Fig. 2A, the system 10 will be capable of distinguishing the voice of the resident near this location.
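Applying the stored compensation of paragraphs [00039] and [00040] might look like the following; the calibration-table layout and names are assumptions, and the stored gain here is the relative amplitude of the source as received at the sensor (the reciprocal of the gain estimated in the preceding sketch).

    import numpy as np

    # Hypothetical table: (source, sensor) -> (delay in samples, relative gain),
    # measured once as in Figs. 2A-2E and reloaded from non-volatile memory.
    CALIBRATION = {("TV", "sensor_18j"): (12, 0.4)}

    def compensate(sensor_signal: np.ndarray, reference_signal: np.ndarray,
                   source: str, sensor: str) -> np.ndarray:
        """Subtract the delayed, scaled reference signal from a sensor signal
        whenever the monitored source is turned ON."""
        delay, gain = CALIBRATION[(source, sensor)]
        delayed = np.concatenate([np.zeros(delay), reference_signal])[:sensor_signal.size]
        return sensor_signal - gain * delayed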
[00041] This process will need to be repeated for each sound source such as TV, or RADIO in the residence that may interfere with the speech recognition of the system 10 due to signals received at each acoustical sensor location. The system processor 12a can automatically make these individual adjustments for each monitored sound source present in the residence such that it can even compensate for more than one source being turned ON at a time.
[00042] The above described process is dynamic. A new sound source that is not associated with a resident's voice can be automatically responded to by the system 10 and compensation provided for it. The adjustment of each non-resident sound source can take place within seconds if it has not already been measured and recorded in the memory of the processor 12a.
[00043] After the acoustic sensor signals are nulled within the system 10, then the sounds from the resident will produce signals at least at one acoustical sensor 18j and possibly multiple acoustical sensors in the residence. The system 10 can also use the amplitude of the signals at multiple acoustical sensors to help locate the resident within the residence. This can be accomplished using amplitude information and phase relationships of the signals from the respective acoustical input sensors 18. It will be understood that the above described steps can be altered without departing from the spirit and scope of the invention.
[00044] From the foregoing, it will be observed that numerous variations and modifications may be effected without departing from the spirit and scope of the invention. It is to be understood that no limitation with respect to the specific apparatus illustrated herein is intended or should be inferred. It is, of course, intended to cover by the appended claims all such modifications as fall within the scope of the claims.

Claims

WHAT IS CLAIMED:
1. A system comprising: at least one sensor of a physiological condition of an individual; an audio input transducer; and control circuitry coupled to the sensor and the transducer, the control circuitry has a communications port, the control circuitry responds to speech received from the transducer and carries out a speech recognition process thereon, and in accordance with the results thereof, initiates communications with a displaced location via the communications port.
2. A system as in claim 1 which includes an audio output transducer coupled to the control circuitry, the control circuitry provides at least one speech output to the audio output transducer.
3. A system as in claim 2 which includes software to carry out the speech recognition process.
4. A system as in claim 3 where the control circuitry provides one of a plurality of speech outputs to the audio output transducer.
5. A system as in claim 4 where the software selects at least one of the speech outputs in response to results of the speech recognition process.
6. A system as in claim 4 where the control circuitry couples speech signals from the audio input transducer to the port for communications to the displaced location.
7. A system as in claim 6 where at least some communications received from the displaced location via the port are coupled by the control circuitry to the audio output transducer.
8. A system as in claim 1 where the control circuitry evaluates, at least in part, outputs from the at least one sensor.
9. A system as in claim 7 where the control circuitry evaluates, at least in part, outputs from the at least one sensor.
10. A system as in claim 1 which includes a sensor compensation function.
11. A monitoring system comprising: at least a first sensor for monitoring a physiological condition of a person and associated sensor interface circuitry; at least one acoustical input transducer for monitoring acoustical signals and associated transducer interface circuitry; a device that receives at least first sensor data and which receives data from the at least one transducer and respective interface circuitry, to carry out at least a speech recognition function, and responsive thereto, to initiate communications with a displaced unit.
12. A system as in claim 11 where the speech recognition function includes recognizing at least one of spoken words or sounds indicative of a condition needing attention.
13. A system as in claim 12 including a plurality of acoustic output transducers to provide output audio information to a person regarding the recognition of a message by the speech recognition function.
14. A system as in claim 11 which includes circuitry to transfer to the remote unit at least in part a recording of speech of a person indicating or describing an ongoing condition.
15. A system as in claim 11 where the speech recognition function recognizes members of a plurality of condition describing words.
16. A system as in claim 11 where the remote unit comprises a bi-directional verbal communications device.
17. A system as in claim 11 which also includes an intercom system.
18. A system as in claim 11 which also includes a video system.
19. A system as in claim 11 which includes at least one audible output transducer.
20. A system as in claim 19 wherein the device, in combination with the audible output transducer, provides local audible feedback.
21. A system as in claim 20 which includes a plurality of acoustical input transducers coupled to the device.
22. A system as in claim 21 where at least some of the input transducers have associated therewith a respective audible output transducer.
23. A system as in claim 21 where the input transducers are coupled to the device by one of a wired, or a wireless link.
24. A system as in claim 21 where the device includes a programmable processor and associated voice recognition software.
25. A system as in claim 13 which includes control software to provide the output audio information.
26. A system as in claim 25 where the device includes a programmable processor and associated voice recognition software.
27. A system as in claim 11 which includes a plurality of acoustic input transducers.
28. A system as in claim 27 which includes circuitry to compensate at least some of the acoustic input transducers.
29. A system as in claim 25 which includes a digital signal processor coupled to the control software.
30. A system as in claim 26 which includes software to pre-store background sound indicia.
31. A system as in claim 30 which includes software to remove background sounds from received audio.
32. A system as in claim 27 which includes circuitry to compensate at least some of the acoustic input transducers to reduce interference noise signals not associated with selected speech.
33. A system as in claim 32 where signals from at least some of the acoustic input transducers associated with the interference noise signals are coupled to the circuitry to compensate.
PCT/US2006/019483 2005-05-31 2006-05-18 Monitoring system with speech recognition WO2006130364A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP06760193A EP1889464B1 (en) 2005-05-31 2006-05-18 Monitoring system with speech recognition

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/141,125 US7881939B2 (en) 2005-05-31 2005-05-31 Monitoring system with speech recognition
US11/141,125 2005-05-31

Publications (2)

Publication Number Publication Date
WO2006130364A2 true WO2006130364A2 (en) 2006-12-07
WO2006130364A3 WO2006130364A3 (en) 2009-04-16

Family

ID=37482140

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2006/019483 WO2006130364A2 (en) 2005-05-31 2006-05-18 Monitoring system with speech recognition

Country Status (3)

Country Link
US (1) US7881939B2 (en)
EP (1) EP1889464B1 (en)
WO (1) WO2006130364A2 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8612538B2 (en) * 2007-08-21 2013-12-17 Honeywell International Inc. System and method for upgrading telemonitor unit firmware
US20100125183A1 (en) * 2008-11-17 2010-05-20 Honeywell International Inc. System and method for dynamically configuring functionality of remote health monitoring device
US20100239110A1 (en) * 2009-03-17 2010-09-23 Temic Automotive Of North America, Inc. Systems and Methods for Optimizing an Audio Communication System
US8340975B1 (en) * 2011-10-04 2012-12-25 Theodore Alfred Rosenberger Interactive speech recognition device and system for hands-free building control
US9524717B2 (en) * 2013-10-15 2016-12-20 Trevo Solutions Group LLC System, method, and computer program for integrating voice-to-text capability into call systems
CN108156497B (en) * 2018-01-02 2020-12-18 联想(北京)有限公司 Control method, control equipment and control system
US20200105120A1 (en) * 2018-09-27 2020-04-02 International Business Machines Corporation Emergency detection and notification system
EP4256558A1 (en) * 2020-12-02 2023-10-11 Hearunow, Inc. Dynamic voice accentuation and reinforcement

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001056019A1 (en) 2000-01-28 2001-08-02 Horacek, Gregor Voice control system and method
WO2002075688A2 (en) 2001-03-15 2002-09-26 Koninklijke Philips Electronics N.V. Automatic system for monitoring independent person requiring occasional assistance
US20030104800A1 (en) 2001-11-30 2003-06-05 Artur Zak Telephone with alarm signalling
US20040066940A1 (en) 2002-10-03 2004-04-08 Silentium Ltd. Method and system for inhibiting noise produced by one or more sources of undesired sound from pickup by a speech recognition unit

Family Cites Families (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5353793A (en) * 1991-11-25 1994-10-11 Oishi-Kogyo Company Sensor apparatus
US5794219A (en) * 1996-02-20 1998-08-11 Health Hero Network, Inc. Method of conducting an on-line auction with bid pooling
US5897493A (en) * 1997-03-28 1999-04-27 Health Hero Network, Inc. Monitoring system for remotely querying individuals
US5899855A (en) * 1992-11-17 1999-05-04 Health Hero Network, Inc. Modular microprocessor-based health monitoring system
US5997476A (en) * 1997-03-28 1999-12-07 Health Hero Network, Inc. Networked system for interactive communication and remote monitoring of individuals
US5956501A (en) * 1997-01-10 1999-09-21 Health Hero Network, Inc. Disease simulation system and method
US6168563B1 (en) * 1992-11-17 2001-01-02 Health Hero Network, Inc. Remote health monitoring and maintenance system
US5832448A (en) * 1996-10-16 1998-11-03 Health Hero Network Multiple patient monitoring system for proactive health management
US5960403A (en) * 1992-11-17 1999-09-28 Health Hero Network Health management process control system
US6101478A (en) * 1997-04-30 2000-08-08 Health Hero Network Multi-user remote health monitoring system
US5704366A (en) * 1994-05-23 1998-01-06 Enact Health Management Systems System for monitoring and reporting medical measurements
KR0154387B1 (en) * 1995-04-01 1998-11-16 김주용 Digital audio encoder applying multivoice system
KR100208772B1 (en) * 1996-01-17 1999-07-15 서정욱 Interactive system for guiding the blind and its control method
US6377919B1 (en) * 1996-02-06 2002-04-23 The Regents Of The University Of California System and method for characterizing voiced excitations of speech and acoustic signals, removing acoustic noise from speech, and synthesizing speech
US6050940A (en) * 1996-06-17 2000-04-18 Cybernet Systems Corporation General-purpose medical instrumentation
US6032199A (en) * 1996-06-26 2000-02-29 Sun Microsystems, Inc. Transport independent invocation and servant interfaces that permit both typecode interpreted and compiled marshaling
US6032119A (en) 1997-01-16 2000-02-29 Health Hero Network, Inc. Personalized display of health information
US6270455B1 (en) * 1997-03-28 2001-08-07 Health Hero Network, Inc. Networked system for interactive communications and remote monitoring of drug delivery
US6248065B1 (en) * 1997-04-30 2001-06-19 Health Hero Network, Inc. Monitoring system for remotely querying individuals
JP3040977B2 (en) * 1998-09-21 2000-05-15 松下電器産業株式会社 Car accident emergency response device
US6161095A (en) * 1998-12-16 2000-12-12 Health Hero Network, Inc. Treatment regimen compliance and efficacy with feedback
US6302844B1 (en) * 1999-03-31 2001-10-16 Walker Digital, Llc Patient care delivery system
ATE339154T1 (en) * 1999-09-21 2006-10-15 Honeywell Hommed Llc HOME PATIENT MONITORING SYSTEM
US6612984B1 (en) * 1999-12-03 2003-09-02 Kerr, Ii Robert A. System and method for collecting and transmitting medical data
US20030179888A1 (en) * 2002-03-05 2003-09-25 Burnett Gregory C. Voice activity detection (VAD) devices and methods for use with noise suppression systems
DE10064756A1 (en) * 2000-12-22 2002-07-04 Daimler Chrysler Ag Method and arrangement for processing noise signals from a noise source
US6723046B2 (en) * 2001-01-29 2004-04-20 Cybernet Systems Corporation At-home health data management method and apparatus
US7272565B2 (en) * 2002-12-17 2007-09-18 Technology Patents Llc. System and method for monitoring individuals
US20040233045A1 (en) * 2003-03-10 2004-11-25 Mays Wesley M. Automated vehicle information system
US7483831B2 (en) * 2003-11-21 2009-01-27 Articulation Incorporated Methods and apparatus for maximizing speech intelligibility in quiet or noisy backgrounds
US20050136848A1 (en) * 2003-12-22 2005-06-23 Matt Murray Multi-mode audio processors and methods of operating the same
US7444287B2 (en) * 2004-07-01 2008-10-28 Emc Corporation Efficient monitoring system and method
US7525440B2 (en) * 2005-06-01 2009-04-28 Bose Corporation Person monitoring

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP1889464A4

Also Published As

Publication number Publication date
US20060285651A1 (en) 2006-12-21
WO2006130364A3 (en) 2009-04-16
EP1889464A2 (en) 2008-02-20
EP1889464B1 (en) 2012-03-28
EP1889464A4 (en) 2010-09-01
US7881939B2 (en) 2011-02-01

Similar Documents

Publication Publication Date Title
EP1889464B1 (en) Monitoring system with speech recognition
US5917414A (en) Body-worn monitoring system for obtaining and evaluating data from a person
US7312699B2 (en) Ear associated machine-human interface
US4689820A (en) Hearing aid responsive to signals inside and outside of the audio frequency range
US5852408A (en) Medication dispensing and compliance monitoring system
US6738485B1 (en) Apparatus, method and system for ultra short range communication
US7567659B2 (en) Intercom system
US20050215171A1 (en) Child-care robot and a method of controlling the robot
JP2023099012A (en) alert system
CN107003715A (en) Smart phone is configured based on user's sleep state
JP2007229315A (en) Data management system, data transmitting and receiving apparatus, data management server and data management method
JP2001291180A (en) Device for reporting relief request
US20220148597A1 (en) Local artificial intelligence assistant system with ear-wearable device
WO2011000113A1 (en) Multiple sound and voice detector for hearing- impaired or deaf person
CN103338316B (en) Integrated security verification systems and method
KR20090094572A (en) Alarm system for a hearing-impaired person
JP2020080124A (en) Voice information storage system and nurse call system
JP2014021635A (en) Watching system for person requiring nursing care or support, and operation method therefor
JP2005032139A (en) Information processing means for care support, information processing means for nursing support, and care or nursing support system
US11915695B2 (en) Method of and a device for using a wireless receiver as a help-seeking-signal converter for rendering help using smart speakers
KR101672947B1 (en) A Smart Hearing Device Having a Function of Circumstance Notice and a Method for Noticing a Circumstance Using the Same
JP2019146125A (en) Nurse call system
US20230292064A1 (en) Audio processing using ear-wearable device and wearable vision device
JP2023059601A (en) Program, information processing method, and information processing apparatus
JP3066776U (en) Communication care device

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 2006760193

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE

NENP Non-entry into the national phase

Ref country code: RU