US20080059188A1 - Natural Language Interface Control System - Google Patents
- Publication number
- US20080059188A1 (application US 11/932,771)
- Authority
- US
- United States
- Prior art keywords
- natural language
- user
- module
- models
- speech recognition
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
- G10L15/14—Speech classification or search using statistical models, e.g. Hidden Markov Models [HMMs]
- G10L15/18—Speech classification or search using natural language modelling
- G10L15/1815—Semantic context, e.g. disambiguation of the recognition hypotheses based on word meaning
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/78—Detection of presence or absence of voice signals
Definitions
- the present invention relates to speech recognition, and more specifically to natural language speech recognition. Even more specifically, the present invention relates to a natural language speech recognition system used to control an application.
- Speech recognition techniques have been used to enable machines to recognize human speech.
- speech recognition technology is used in many applications, such as word processing, control of devices, and menu driven data entry.
- Natural language is written or spoken input in natural form, as if the user were actually conversing with the machine.
- non-natural language is limited in syntax and structure.
- To communicate with the machine in non-natural language, the user must know and speak commands or requests according to the syntactic and semantic structure of the speech recognition engine.
- a natural language interface system permits the user to easily interface with the machine or system, since the user can simply speak in a conversational manner without having to remember the proper format to speak a command or request.
- natural language interface systems are difficult to implement due to the complex and shifting “rules” of human natural language.
- natural language processing of the prior art has been inefficient and rigid in its ability to recognize the meaning of natural language utterances.
- conventional natural language interface systems are dialog-based or prompt-driven.
- the natural language interface controls the context of the speech being input to the system.
- natural language interfaces have been implemented as automated phone systems, such as an automated natural language airline reservation system.
- Such systems prompt the user to speak within a certain context.
- For example, the natural language system asks the user to what city the user would like to fly.
- the system dictates to the user the context of the speech it expects.
- the natural language interface system will look for natural language indicating names of cities.
- the system will then prompt the user to speak what date the user would like to fly. Again, the context of the response is dictated by the natural language interface system. Disadvantageously, the user is unable to provide open-ended information or an open-ended request. If the received speech data is not within the context as prompted by the system, the system will either ignore the request, inform the user that the response is not understood, or potentially misinterpret the request as falling within the context of the prompt.
- the present invention advantageously addresses the needs above as well as other needs by providing an open-ended natural language interface control system for controlling multiple devices whose context is not defined by the natural language interface, but by the direction of the user and the capabilities of the multiple devices.
- the invention can be characterized as a natural language interface control system for operating a plurality of devices comprising a first microphone array, a feature extraction module coupled to the first microphone array, and a speech recognition module coupled to the feature extraction module, wherein the speech recognition module utilizes hidden Markov models.
- the system also comprises a natural language interface module coupled to the speech recognition module and a device interface coupled to the natural language interface module, wherein the natural language interface module is for operating a plurality of devices coupled to the device interface based upon non-prompted, open-ended natural language requests from a user.
- the invention can be characterized as a method of speech recognition comprising the steps of: searching for an attention word based on a first context including a first set of models, grammars, and lexicons; and switching, upon finding the attention word, to a second context to search for an open-ended user request, wherein the second context includes a second set of models, grammars, and lexicons.
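The two-context method above can be sketched in a few lines. This is an illustrative assumption of how the switch might be wired up, not the patent's implementation; the context contents, `recognize`, and the attention words ("mona", "thor" are taken from the examples later in the description) are placeholders.

```python
# Hypothetical sketch of the two-context recognition loop: a small attention
# context is searched first, and a richer request context after a hit.
ATTENTION_CONTEXT = {"models": "attention_hmms", "grammar": "attention_words",
                     "lexicon": ["mona", "thor"]}
REQUEST_CONTEXT = {"models": "full_hmms", "grammar": "device_trigram",
                   "lexicon": ["watch", "tv", "play", "album", "dvd"]}

def recognize(utterances):
    """Yield (context_name, token) pairs, switching context on an attention word."""
    context, name = ATTENTION_CONTEXT, "attention"
    for token in utterances:
        if name == "attention":
            # Only the small attention-word lexicon is searched here.
            if token in context["lexicon"]:
                context, name = REQUEST_CONTEXT, "request"
                yield ("attention", token)
        else:
            if token in context["lexicon"]:
                yield ("request", token)

hits = list(recognize(["hello", "mona", "watch", "tv"]))
```

Note that speech outside both lexicons ("hello" above) is simply ignored, mirroring the open-microphone rejection behavior described later.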
- the invention can be characterized as a method of natural language control of one or more devices, and a means for practicing the method, the method comprising the steps of: receiving an attention word, wherein the attention word indicates that an open-ended, natural language user request will be received; receiving the open-ended, natural language user request; matching the open-ended natural language request with the most likely command corresponding to the open-ended natural language request; and transmitting the command to a respective one of the one or more devices.
- FIG. 1 is a system level block diagram of a natural language interface control system (NLICS) in accordance with one embodiment of the invention.
- FIG. 2 is a functional block diagram of a remote unit of the natural language interface control system (NLICS) of FIG. 1 in accordance with another embodiment of the invention.
- FIG. 3 is a functional block diagram of a base station unit of the natural language interface control system (NLICS) of FIG. 1 in accordance with a further embodiment of the invention.
- FIG. 4 is a flowchart for the steps performed in the natural language interface algorithm of the natural language interface control system of FIGS. 1 through 3 .
- Referring to FIG. 1, a system level block diagram is shown of a natural language interface control system in accordance with one embodiment of the invention.
- the natural language interface control system 102 (also referred to as the NLICS 102) includes a remote unit 104 and a base unit 106.
- the remote unit 104 has a linear microphone array 108 and a speaker 112 and the base unit 106 has a planar microphone array 110 .
- the remote unit 104 is coupled to multiple devices 114 controllable via the natural language interface control system 102 .
- the base unit 106 is coupled to an external network 116 .
- the natural language interface control system 102 eliminates the seam between the multiple devices 114 and the user for control purposes.
- the natural language interface control system 102 provides a natural language interface such that a user may control one or more of the multiple devices 114 by simply speaking in a natural, conversational manner to the natural language interface control system 102 .
- the NLICS 102 is able to interpret the natural language request of the user and issue the appropriate command to the respective device(s) to effect the user's request.
- the devices 114 may include a television, a stereo, a video cassette recorder (VCR), a digital video disk (DVD) player, etc.
- the NLICS 102 includes a speech recognition module utilizing hidden Markov models (HMMs), as known and understood in the art, to detect the speech and uses a natural language interface to interpret the natural language and determine the probability of what the appropriate user request is.
- the natural language interface utilizes probabilistic context free grammar (also referred to as PCFG) rules and lexicons that are stored for each of the respective devices 114 .
- the natural language interface module includes a device abstraction module that contains an abstraction of each device 114 that the NLICS 102 is designed to interface.
- each device 114 is abstracted into a set of commands that are understandable by the respective devices 114 .
- each abstraction is associated with individual grammars and lexicons specific to the respective device.
- the natural language interface module issues a sequence of command(s) to the appropriate device(s) to effect the user's request. For example, in response to a user's request of “I wanna watch TV”, the natural language interface module will issue command(s) to the appropriate device(s) to turn on the television and amplifier, set the television and amplifier to the proper modes, and set the volume to an appropriate level. It also updates the states and settings of these devices in its internally maintained abstractions. The command may even turn the television to a preferred channel as learned by the NLICS 102 or as requested by the user in the open-ended natural language request.
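A minimal sketch of the device-abstraction idea above: each device exposes the commands it understands, a request is expanded into a command sequence, and the internally maintained state is updated as commands are issued. The class names, command names, and the hand-written plan are assumptions for illustration only.

```python
# Illustrative device abstraction: commands a device understands plus its
# internally maintained state, updated as commands are issued.
class DeviceAbstraction:
    def __init__(self, name, commands):
        self.name = name
        self.commands = set(commands)
        self.state = {"power": "off"}

    def issue(self, command):
        assert command in self.commands   # only abstracted commands are valid
        if command == "power_on":
            self.state["power"] = "on"
        return (self.name, command)

def effect_request(devices, plan):
    """Send each (device, command) pair in the plan and collect what was sent."""
    return [devices[dev].issue(cmd) for dev, cmd in plan]

devices = {"tv": DeviceAbstraction("tv", {"power_on", "set_input"}),
           "amp": DeviceAbstraction("amp", {"power_on", "set_volume"})}
# A plan the natural language interface might produce for "I wanna watch TV":
sent = effect_request(devices, [("tv", "power_on"), ("amp", "power_on"),
                                ("amp", "set_volume"), ("tv", "set_input")])
```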
- the user may request specific information, such as “Do you have the album ‘Genesis’?” to which the system would respond “Yes”. The user could then respond “Play that”, or “Play the album Genesis”. The system would respond by turning on the CD jukebox and the amplifier, setting the proper mode for the amplifier, setting the proper volume level, selecting the proper album and finally, playing the album. It would also update the internally maintained states and settings of the device abstractions as well as the user's profile.
- this command signal is transmitted via a radio frequency (RF) link or an Infrared (IR) link, as are known in the art.
- Speech recognition techniques are well known in the art and the control of devices based upon spoken commands is known. For example, applications exist where a user speaks a predetermined speech command to a speech recognition control system, for example, the user speaks, “Turn on” to a controlled television set. In response, the TV is turned on.
- this embodiment implements a natural language interface module which is used to determine probabilistically the most likely meaning of the spoken utterance and issue the appropriate command(s).
- the instructions from the user come in a very conversational manner without having to remember a specified command signal.
- the system will use its natural language interface module to probabilistically determine that the user is requesting to watch the television, and will issue an appropriate set of command(s) that the television and other appropriate devices will understand.
- the physical interface or seam between the device 114 and the user is eliminated.
- the user does not even need to know how to operate the device 114 in question.
- the user may not know how to operate the DVD player; however, the user can simply say, “I want to watch a DVD” and a command signal may be sent to power on the DVD player and begin playing the DVD within the player.
- the natural language interface module disambiguates the user's request if it is not sure what the request means. For example, the request may be “I want to watch a movie”. The natural language interface module does not know if the user would like to watch a movie on the DVD player, the VCR or a television movie. In such cases, the natural language interface module includes a feedback module (e.g. a text-to-speech module) and a feedback mechanism such as a speaker to ask the user to clarify the request. For example, the natural language interface module will ask in response to such a request, “Do you want to watch a movie on the DVD, VCR or television?” At which point the user may reply “DVD”, for example.
- the natural language interface control system 102 is not a “closed-ended” system that is primarily dialog driven or prompt driven.
- In such systems, the conversation must be controlled by the system by prompting the user to provide certain information that the system will then try to identify.
- the system will guide the user through the dialog such that the context is constrained by the questions asked by the system. For example, the system will ask, “To what city would you like to fly?” Then the user would respond, in natural language, with the destination city and the system will essentially try to understand the response by trying to match the response with the names of cities.
- the system will prompt the user by asking “What date would you like to leave?” and the system will then constrain the context of the search and analysis of the incoming text strings based on what it is expecting to receive, i.e., dates.
- the user, not the system, initiates the dialog. The user simply states “I want to hear some music” with no prompting from the NLICS 102 .
- the context of the search is not constrained by the prompting of the system, but is constrained by the abilities of the devices 114 controlled by the NLICS 102 .
- the user may ask for the NLICS 102 to perform any of the tasks that each of the controlled devices is capable of performing.
- If, for example, the user asks the NLICS 102 to perform a function that is not available from the controlled devices, e.g., if the user says “Make me some breakfast”, the NLICS 102 is not able to effect such a request because it is not within the programmed functionality of the controlled devices. In one embodiment, the NLICS 102 will properly interpret phrases within the abilities of the devices 114 and simply ignore other requests.
- the feedback portion of the natural language interface module will alert the user that the request is not available.
- the natural language interface control system 102 is “always on”, such that the user may speak a request at any time and the system will respond.
- This attention word notifies the NLICS 102 that following the attention word, a request will arrive.
- the microphone arrays employed by the NLICS only have to search for the attention word or words within the physical space defined by the microphone arrays. For example, if the attention word is programmed as “Mona”, then the user's request becomes “Mona, I wanna watch TV.” This greatly reduces the processing and searching by the microphone arrays.
- individual users may have separate attention words specific to that user. For example, within a household, a first user's attention word is “Mona” while a second user's attention word is “Thor”.
- When the NLICS 102 hears the attention word “Mona”, the system assumes that the first user is issuing the command. For example, if the first user says, “Mona, I wanna watch TV”, then the system will not only turn on the television (and other relevant devices), but the system will turn the television to the first user's selected favorite channel. Note that this does not provide true identification, however, since the first user could say the second user's attention word.
- This mechanism simply provides a means to tailor the experience of the NLICS 102 specifically to the likes, pronunciations and habits of individual users.
- each of the devices 114 coupled to the NLICS 102 are abstracted into a separate device abstraction such that separate grammars and lexicons are stored for each of the devices 114 .
- when the user addresses a given device, a grammar and lexicon specific to that particular context (i.e., the context of the DVD player, for example) are used.
- This provides a context switching feature in the speech recognition module.
- the NLICS 102 is set up such that models used in the speech recognition module for the HMMs and grammars can be streamed into use from a secondary source, such as a hard disk, CD-ROM, or DVD at run time. Once the data is read in, it can be immediately used without any preprocessing. As such, memory usage for the speech recognition module is improved since many models and grammars can be stored remotely of the memory of the NLICS 102 .
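The streaming idea above can be illustrated with a toy model store: models and grammars live on secondary storage and are read directly into a usable structure at run time, with no preprocessing step. The file layout, JSON encoding, and names here are assumptions, not the patent's storage format.

```python
# Sketch of streaming a grammar from secondary storage at run time: the data
# is read in and usable immediately, so only the active context occupies memory.
import json, os, tempfile

class ModelStore:
    def __init__(self, root):
        self.root = root

    def stream(self, name):
        # Read the model file directly into a usable structure.
        with open(os.path.join(self.root, name + ".json")) as f:
            return json.load(f)

root = tempfile.mkdtemp()
with open(os.path.join(root, "dvd_grammar.json"), "w") as f:
    json.dump({"type": "trigram", "rules": ["play <title>"]}, f)

store = ModelStore(root)
grammar = store.stream("dvd_grammar")   # usable immediately after reading
```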
- the NLICS 102 is designed to be implemented as two separate units, for example, the remote unit 104 and the base unit 106 .
- the base unit 106 functions as a “docking station” for the remote unit 104 , which may be coupled to the base unit 106 via a universal serial bus (USB) connection, for example.
- the remote unit 104 functions as a universal remote control for a variety of devices as is traditionally done, by providing buttons for the user to press.
- the base unit 106 provides an external network interface for the NLICS 102 .
- the external network interface couples the NLICS to an external network 116 , such as a home local area network (LAN), an Intranet or the Internet.
- the NLICS 102 may download additional grammars, HMM models, device abstractions, CD, DVD, television or other programming information and/or lexicons that are maintained in central databases within the external network 116 .
- the base unit 106 functions as a secondary cache for the remote unit 104 .
- the remote unit 104 includes a feature extraction module, a speech recognition module, and a natural language interface module, as well as the device interface to the various devices.
- the base unit 106 includes a memory that functions to hold additional models, grammars, and lexicons to be used in the remote unit 104 .
- the remote unit 104 includes a traditional two element linear microphone array 108 that receives acoustic signaling.
- the base unit 106 contains a planar microphone array 110 which listens to acoustic energy from a two-dimensional space.
- the NLICS 102 advantageously uses both microphone arrays 108 and 110 to implement a three-dimensional microphone array such that together the two sets of microphone arrays 108 and 110 listen to a predefined three-dimensional physical space.
- a three-dimensional volume can be defined within a space, for example, the NLICS 102 can be configured to listen to a volume including a living room couch where a user may be sitting when operating respective devices. As such, acoustical data coming from sources outside of this defined space will attenuate while acoustical data coming from within the defined space will be summed in phase.
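The in-phase summation described above is the classic delay-and-sum beamformer. Below is a minimal one-dimensional sketch under assumed conditions (two channels, integer sample delays); a real system derives the steering delays from the geometry of the defined three-dimensional listening volume.

```python
# Minimal delay-and-sum beamforming sketch: each channel is shifted by its
# steering delay and summed, so on-target sources add in phase while
# off-target sources tend to cancel.
def delay_and_sum(channels, delays):
    """Shift each channel by its steering delay (in samples) and sum."""
    n = len(channels[0])
    out = [0.0] * n
    for ch, d in zip(channels, delays):
        for i in range(n):
            j = i - d
            if 0 <= j < n:
                out[i] += ch[j]
    return out

# A source arriving at mic 1 one sample earlier than at mic 2:
mic1 = [0.0, 1.0, 0.0, 0.0]
mic2 = [0.0, 0.0, 1.0, 0.0]
# Steering delays that time-align the source, summing it in phase:
aligned = delay_and_sum([mic1, mic2], delays=[1, 0])
```

A source outside the steered direction would arrive with a different inter-microphone delay, fail to align, and therefore attenuate relative to the doubled in-phase peak.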
- the remote unit 104 includes the linear microphone array 108, a feature extraction module 202, a speech recognition module 204, a natural language interface control module 206, a system processing controller 208, a device interface 210, a base unit interface 212 (also referred to as a universal serial bus (USB) interface 212), and a speaker 214. Also illustrated are the devices 114.
- the speech recognition module 204 includes a speech decoder 216 , an N-gram grammar module 218 , and an acoustic models module 220 .
- the natural language interface control module 206 includes a natural language interface module 222 , a probabilistic context free grammar module 224 (also referred to as the PCFG module 224 ), a device abstraction module 226 and a feedback module 228 .
- the core functionality of the NLICS 102 may be implemented solely within the remote unit 104 , although preferred embodiments utilize both the remote unit 104 and the base unit 106 as separate units. As such, the remote unit 104 will be described first below, followed by a description of the base unit 106 .
- the linear microphone array 108 is a two-element narrow-cardioid microphone array that localizes a source, i.e., the user, and discriminates against interfering noise.
- Such linear microphone arrays are well known in the art.
- the linear microphone array 108 samples the input speech data from each of the microphone elements, and then time aligns and sums this data in order to produce a signal-to-noise ratio (SNR)-enhanced representation of the incoming acoustic signal.
- the acoustic data is then passed to the feature extraction module 202 , which is used to extract parameters or feature vectors representing information related to the incoming acoustic data.
- the feature extraction module 202 performs edge-detection, signal conditioning and feature extraction.
- speech edge detection is accomplished using noise estimation and energy detection based on the 0th cepstral coefficient and zero-crossing statistics.
- Feature extraction and signal conditioning consist of extracting Mel-frequency cepstral coefficients (MFCC), delta information, and acceleration information. The result is a 38-dimensional feature vector based on 12.8 ms sample buffers overlapped by 50%.
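The buffering scheme of the front end above can be sketched as overlapping frame slicing. The 10 kHz sample rate is an assumption chosen so that a 12.8 ms buffer is a round 128 samples; the per-frame MFCC, delta, and acceleration computation is elided.

```python
# Framing sketch for the described front end: 12.8 ms buffers, 50% overlap.
FRAME = 128            # 12.8 ms at the assumed 10 kHz sample rate
HOP = FRAME // 2       # 50% overlap between consecutive buffers

def frames(signal):
    """Slice the signal into overlapping analysis buffers."""
    return [signal[i:i + FRAME]
            for i in range(0, len(signal) - FRAME + 1, HOP)]

signal = [0.0] * 640   # 64 ms of (silent) audio at the assumed rate
bufs = frames(signal)  # each buffer would yield one 38-dimensional vector
```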
- Such feature extraction modules 202 and their functionality are well understood in the art, and one skilled in the art may implement the feature extraction module in a variety of ways.
- the output of the feature extraction module 202 is a sequence of feature vectors.
- the speech recognition module 204 functions as a Hidden-Markov Model (HMM)-based continuous speech recognizer that has the ability to reject “unmodeled events”, e.g., out-of-vocabulary events, disfluencies, environmental noise, etc.
- the speech recognition module 204 is under the control of the natural language interface module 222 and can switch between different acoustic models and different grammars based on the context of the speech, as determined by the natural language interface control module 206 .
- the speech recognition module 204 may be entirely conventional, although the speech recognition module 204 has several features which are advantageous for use in the NLICS 102 .
- memory usage in the speech recognition module 204 has been optimized so that the memory requirement is mainly a reflection of the amount of acoustic speech model data used. A more detailed description follows of the speech recognition module 204 and the natural language interface control module 206 .
- the feature vectors from the feature extraction module 202 are input to the speech recognition module 204 , i.e., input to the speech decoder 216 of the speech recognition module (SRM) 204 .
- the speech recognition module (SRM) 204 is responsible for requesting speech feature vectors from the feature extraction module (FEM) 202 and finding the most likely match of the corresponding utterance with a set of speech models, while rejecting non-speech events, using an approach based on Hidden Markov Models (HMMs).
- the models used by the speech decoder 216 are stored in the acoustic models module 220 . These models may comprise context-dependent or independent phonetic models, sub word models or whole word models, e.g. monophones, biphones and/or triphones. In one embodiment, the speech decoder 216 may dynamically switch between different models, e.g., the speech decoder 216 may switch between models based on triphones and monophones. This is in contrast to known systems, where there are a fixed number of states and Gaussians per state, i.e. the architecture of the respective phonemes is fixed.
- In this embodiment, a selection may be made between models based on monophones, biphones, and triphones, and the architecture of these phonemes, e.g., the number of states and the number of Gaussians per state for each type of phoneme (monophone, biphone, and triphone), may be varied for optimization in space, speed, and accuracy.
- the received utterances are analyzed with the models, e.g., using a Viterbi algorithm, and scores are assigned representing how well the utterance fits the given models.
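A toy log-domain Viterbi scorer illustrates the scoring step above: given per-frame observation log-likelihoods and log transition probabilities for one HMM, it returns the best-path score used to rank competing models. All probability values here are made up for illustration.

```python
# Toy Viterbi scoring (log domain) for a single HMM.
import math

def viterbi_score(log_obs, log_trans):
    """log_obs[t][s]: log-likelihood of frame t in state s;
    log_trans[p][s]: log transition probability from state p to s."""
    n_states = len(log_obs[0])
    score = list(log_obs[0])                 # may start in any state
    for t in range(1, len(log_obs)):
        score = [max(score[p] + log_trans[p][s] for p in range(n_states))
                 + log_obs[t][s]
                 for s in range(n_states)]
    return max(score)                        # best single-path log score

log_obs = [[math.log(0.9), math.log(0.1)],
           [math.log(0.2), math.log(0.8)]]
log_trans = [[math.log(0.5), math.log(0.5)],
             [math.log(0.5), math.log(0.5)]]
best = viterbi_score(log_obs, log_trans)     # log(0.9 * 0.5 * 0.8)
```

In a full decoder this score is computed for each candidate model (and for the garbage models), and the utterance is assigned to the best-scoring hypothesis.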
- the models used by the speech decoder 216 are under direct control by the natural language interface control module 206 , which is described further below.
- Garbage filler models are stored with the acoustic models module 220 to model background noises as well as disfluencies and “silences”. These models are utilized by the speech decoder 216 in the rejection of out-of-vocabulary (oov) events.
- the speech decoder 216 also rejects out-of-vocabulary (oov) events using an online garbage calculation. It then returns the N-best candidates if their scores are very close. Such out-of-vocabulary rejection is also well understood in the art.
- the rejection techniques have been improved compared to those known in the art.
- the basic principle behind HMM-based speech recognition systems is that an utterance is compared with a number of speech models (from the acoustic models module 220 ) in order to find the model that best matches the utterance.
- an HMM-based system will typically still attempt to find the closest match between utterances and models and report the results. In many cases this is unwanted, as any sound that is picked up by an open microphone will cause a reference to a model to be emitted.
- If a Viterbi score passes a threshold, the utterance is determined to be an in-vocabulary word. If the Viterbi score of the utterance does not exceed the threshold, then the utterance is deemed out-of-vocabulary.
- a Viterbi score is generated using the Viterbi algorithm. This algorithm calculates a single best state sequence through an HMM and its corresponding probability, given an observation sequence. However, experiments have shown that this is not a very accurate rejection scheme.
- a garbage score can then be defined as the difference between the logarithms of each of the two Viterbi scores divided by the number of frames in the utterance according to equation 1 below. The garbage score reveals whether the utterance had a closer match with the word models or the out-of-vocabulary models. Many variants have been proposed as to how to reject out-of-vocabulary events.
- Fricatives are characterized as broadband, low energy noise, e.g. “white noise”.
- a fricative as known in the art, is a sound, as exemplified by such phonemes as “th”, “sh”, etc.
- the feature extraction module 202 attempts to solve this problem by making its best effort to find the beginning and ending samples. To guarantee that low-energy sounds are included in the speech sample, the feature extraction module 202 includes a number of extra samples at the beginning and ending of the utterance.
- each model is preceded and followed by a single-state silence model that “consumes” the frames of silence passed along from the feature extraction module 202 .
- the speech decoder 216 finds the sequence of models with the closest match and optimally aligns the silence models as well as the word-models with the utterance. Now the start and end indices for the beginning and ending silence portions of the utterance can be obtained and removed.
- the garbage score of equation 1 can thus be written as (w − g)/(n − m), where:
- w is the logarithm of the Viterbi score for the acoustic models of in-vocabulary words, without preceding or following silence models and with no silence included in the utterance;
- g is the logarithm of the corresponding score for the out-of-vocabulary HMM models; and
- n is the total number of frames in the utterance and m is the number of frames that were consumed by the preceding and following silence models.
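Using the quantities defined above, the rejection decision can be sketched directly. The score difference is normalized only over the speech frames (n − m); the numeric scores and the zero threshold below are illustrative assumptions.

```python
# Garbage-score rejection sketch: w and g are log Viterbi scores for the
# in-vocabulary and out-of-vocabulary models, n is the total frame count,
# and m the frames consumed by the bracketing silence models.
def garbage_score(w, g, n, m):
    return (w - g) / (n - m)

def is_in_vocabulary(w, g, n, m, threshold=0.0):
    # A positive score means the word models matched better than garbage.
    return garbage_score(w, g, n, m) > threshold

# Example: word models beat the garbage models over 100 speech frames.
accept = is_in_vocabulary(w=-120.0, g=-150.0, n=110, m=10)
```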
- the N-gram grammar module 218 includes the grammars used by the speech decoder 216 . These grammars are the rules by which lexicons are built and a lexicon is a dictionary consisting of words and their pronunciation entries. The specific grammars used by the speech decoder 216 are also controlled by the natural language interface module 222 . In this embodiment, the N-gram grammar is configured to use multiple grammar types or a combination of grammar types. For applications (e.g., controlled devices with many controls and functions) that use a complex language it might be advantageous to use the trigram grammar option. For smaller systems (e.g., a device with very simple controls and functions), the bigram grammar option might constitute a better memory and accuracy tradeoff.
- the allowed combinations of lexicon entries can be expressed in terms of specific lexicon entry labels or word groups. If any lexicon entry should be able to follow upon any lexicon entry, the ergodic grammar option can be used.
- Using an N-gram grammar within a device that generally has a small footprint is not intuitive. By a small footprint, it is meant that the system only has to recognize speech relating to the controlled devices 114 coupled to the remote unit 104 , such that it can classify the remaining speech as out-of-vocabulary.
- the N-gram grammar module 218 allows for the use of multiple grammars and types even in the case of a speech recognition module 204 having a small footprint.
- the word list grammar is used to recalculate the Viterbi score for a fixed sequence of words and a subset of an utterance.
- the system incorporates the various grammars in such a way that allows for “context switching” or the immediate switching between grammar types and sets of grammar rules under the control of the natural language interface module. Being able to do so is important as the content of a person's speech is highly affected by context. For example, only certain phrases (e.g., the attention words described above) are expected to begin a dialog while others could only follow upon a question (e.g., the natural language interface disambiguating an unclear request). In particular, this becomes evident when a speaker is targeting different audiences, and in the case of consumer electronics—different products, such as a television, a DVD player, a stereo, and a VCR.
- the system provides a way to define contexts for which only certain grammar rules should apply.
- the natural language interface module 222 can instruct the speech recognition module 204 to listen only to phrases that are expected. For example, when the natural language interface module 222 has determined that the user is attempting to operate the DVD player, the speech recognition module 204 may be instructed to use the grammar type and grammar corresponding to the DVD player. Thus, the speech decoder 216 will retrieve the proper grammar from the N-gram grammar module 218 . Context switching can also be performed on a finer level where a flag for each grammar rule or lexicon entry is used to indicate which individual rules or words are to be enabled and disabled. Further, for some system settings and some grammar modes it might be preferred to limit the search for the best hypothesis to a set of lexicon entries. Defining several lexicons and referencing only the lexicon of interest can do this.
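The fine-grained context switching described above, where each grammar rule carries an enable/disable flag, might be sketched as follows. The rule strings and the `Recognizer` interface are assumptions for illustration; the patent does not specify this API.

```python
# Context switching sketch: per-context grammar rules with enabled flags,
# toggled by the natural language interface when the active device changes.
RULES = {
    "dvd": ["play <title>", "pause", "eject"],
    "tv":  ["channel <number>", "volume <level>"],
}

class Recognizer:
    def __init__(self):
        self.enabled = {ctx: False for ctx in RULES}

    def set_context(self, context):
        # Enable only the rules belonging to the active device context.
        for ctx in self.enabled:
            self.enabled[ctx] = (ctx == context)

    def active_rules(self):
        return [r for ctx, on in self.enabled.items() if on
                  for r in RULES[ctx]]

rec = Recognizer()
rec.set_context("dvd")            # the user is operating the DVD player
rules = rec.active_rules()        # only DVD phrases are now searched
```

Limiting the search to the active context's rules and lexicon is what keeps the hypothesis space, and therefore decoding time, small.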
- the speech recognition module 204 can dynamically change the grammar used given the context of the received speech; the lexicons are dynamically changed as well, since the lexicons depend on the selected grammar or grammars.
- the processing time can be reduced.
- the processing time is greatly reduced using an efficient implementation of the Beam Search algorithm.
- This beam search algorithm aims to keep the number of hypotheses at a minimum during the Viterbi search. As such, all active hypotheses are compared at each discrete time step and the Viterbi score for the best hypothesis is calculated. Pruning can then be accomplished by discarding any hypothesis whose score falls below the maximum hypothesis score minus some pre-defined rejection threshold. This constrains the search: pruned hypotheses are not considered again in the following time steps until the scores for the corresponding model states become high enough to pass the threshold.
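The pruning step just described can be sketched in a few lines. The function name and threshold value are illustrative, and scores are assumed to be log-domain Viterbi scores as in the description above.

```python
def prune_beam(hypotheses, threshold):
    """One step of beam pruning: keep only hypotheses whose Viterbi score
    is within `threshold` of the current best score (log domain)."""
    best = max(score for _, score in hypotheses)
    return [(h, s) for h, s in hypotheses if s >= best - threshold]

active = [("hyp_a", -10.0), ("hyp_b", -12.5), ("hyp_c", -40.0)]
survivors = prune_beam(active, threshold=5.0)
# hyp_c falls more than 5.0 below the best score (-10.0) and is discarded
assert [h for h, _ in survivors] == ["hyp_a", "hyp_b"]
```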
- Token Passing is a well-known approach to tracking the best word hypotheses through an HMM.
- the last model state for the state sequence with the highest Viterbi score can be easily found once the processing of all frames of an utterance is completed. However, this does not necessarily provide the best state (or word) sequence.
- To find the best state sequence it is required to perform “back tracing”. The traditional way of doing this is to let each state contain a pointer back to the previously best state for each frame. Back tracing can then be performed by following the pointers back, starting with the last model state for the state sequence with the highest Viterbi score.
- the speech decoder 216, instead of storing one token pointer in each state, uses two arrays S1 and S2 to hold the token pointers for each state.
- Array S1 keeps the token pointers for each state for the previous frame, and S2 keeps the token pointers for each state for the current frame.
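The two-array token-passing scheme can be sketched as follows. This is an assumed implementation, not the decoder's actual code: tokens carry a back pointer, so the best path can be traced after the last frame without keeping a pointer for every state at every frame, and the current-frame array simply becomes the previous-frame array at each step.

```python
# Sketch of token passing with two per-frame arrays (s1 = previous frame,
# s2 = current frame); `trans` and `emit` are toy log-score functions.

class Token:
    def __init__(self, state, score, back=None):
        self.state, self.score, self.back = state, score, back

def viterbi_pass(n_states, frames, trans, emit):
    # start in state 0; all other states are impossible initially
    s1 = [Token(s, 0.0 if s == 0 else float("-inf")) for s in range(n_states)]
    for frame in frames:
        s2 = [None] * n_states
        for j in range(n_states):
            # pick the best predecessor token for state j
            best = max(s1, key=lambda t: t.score + trans(t.state, j))
            score = best.score + trans(best.state, j) + emit(j, frame)
            s2[j] = Token(j, score, back=best)
        s1 = s2                      # current frame becomes previous frame
    final = max(s1, key=lambda t: t.score)
    path = []
    while final is not None:         # back tracing through token pointers
        path.append(final.state)
        final = final.back
    return list(reversed(path))

path = viterbi_pass(2, frames=[0, 1, 1],
                    trans=lambda i, j: 0.0,
                    emit=lambda j, f: 1.0 if f == j else 0.0)
assert path == [0, 0, 1, 1]   # emissions pull the path from state 0 to state 1
```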
- a caching scheme is used for the lexicons stored in memory on the remote unit, e.g., by the N-gram grammar module 218 .
- a lexicon is a dictionary consisting of words and their pronunciation entries. These pronunciations may be implemented as either phonetic spellings that refer to phonetic models, or to whole-word models. A given word entry may contain alternate pronunciation entries, most of which are seldom used by any single speaker. This redundancy is echoed at each part-of-speech abstraction, creating even more entries that are never utilized by a given speaker. This implies that if lexicon entries are sorted by their frequency of usage, there is a great chance that the words in an utterance can be found among the top n lexicon entries.
- the cache is divided into different levels divided by frequency of use. For example, frequently used lexicon entries will be stored within the top level of the cache.
- a caching scheme may be devised in which the top 10% of the cache is used 90% of the time, for example.
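A minimal sketch of such frequency-tiered caching follows; the function name, counts, and 10% split are illustrative assumptions, not the disclosed implementation.

```python
# Sort lexicon entries by observed usage so the most frequently used
# entries sit in a small top cache level that is searched first.

def build_cache_levels(usage_counts, top_fraction=0.10):
    """Split lexicon entries into a small 'top' level holding the most
    frequently used entries and a larger fallback level."""
    ranked = sorted(usage_counts, key=usage_counts.get, reverse=True)
    cut = max(1, int(len(ranked) * top_fraction))
    return ranked[:cut], ranked[cut:]

counts = {"play": 90, "stop": 40, "pause": 30, "eject": 2, "rewind": 1,
          "record": 1, "menu": 5, "power": 60, "mute": 8, "volume": 50}
top, rest = build_cache_levels(counts)   # top 10% of 10 entries -> 1 entry
assert top == ["play"] and "rewind" in rest
```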
- a multi-pass search is performed where the most likely entries are considered in the first pass. If the garbage score from this pass is high enough to believe that the words actually spoken were contained in the set of most likely spellings, the speech decoder 216 reports the results to the calling function. If this score is low, the system falls back to considering a wider range of spellings.
- If the score from the first pass is high, but not high enough to decide whether the correct spellings for the elements of the utterance were contained in the set of most likely spellings, this is also reported back to the calling function, which might prompt the user for clarification. If a lexicon spelling for a given part-of-speech is never used while some of its alternative spellings are frequently used, that spelling is put in a “trash can” and will never be considered for that user. As such, rarely used spellings are not considered, the chance of confusing similar-sounding utterances with one of those spellings is reduced, and the recognition accuracy is therefore increased. Further, the caching scheme allows the system to consider less data and hence provides a great speed improvement.
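The multi-pass control flow above can be sketched as follows. The decoder, thresholds, and return values are invented for illustration; the real speech decoder 216 reports to its calling function rather than returning a status string.

```python
def multi_pass_decode(decode, accept, maybe):
    """First pass over the most likely spellings; fall back to the full
    lexicon when the garbage score is too low, and flag the result for
    clarification when the score is in between."""
    text, score = decode("top")
    if score >= accept:
        return text, "accept"
    if score >= maybe:
        return text, "clarify"           # caller may prompt the user
    return decode("full")[0], "fallback" # second pass over wider lexicon

def fake_decoder(level):
    return ("play the cd", 0.9) if level == "top" else ("play the dvd", 0.7)

assert multi_pass_decode(fake_decoder, accept=0.8, maybe=0.5) == ("play the cd", "accept")
```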
- the natural language interface control module 206 includes the natural language interface module 222 , the probabilistic context free grammar (PCFG) module 224 , the device abstraction module 226 , and the feedback module 228 .
- the natural language interface module (NLIM) 222 is responsible for interpreting the user's requests within the context of the devices 114 under control and the user's usage history, as defined by a set of probabilistic context-free grammar (PCFG) rules and device abstractions.
- the natural language interface module 222 asserts control over the speech recognition module 204 and the microphone array 108. It does this by controlling the speech recognition module's 204 grammar, and therefore the lexicon under consideration. It also controls system parameters as well as the current state of its device abstractions, and current language references.
- the user initiates a dialog with the NLICS by speaking an attention word.
- the preferred method of locating the attention word is described with reference to FIG. 3 .
- the user then follows the attention word with an open-ended request constrained only by the capabilities of the devices coupled to the remote unit 104 .
- the attention word alerts the natural language interface module 222 to the identity of the user so that the speech decoder can be instructed to use the proper grammar and models based upon the attention word; thus, the system can preconfigure itself to the speech patterns (e.g., the pronunciation, structure, habits, etc.) of the user.
- the speech recognition module 204 transcribes the user's request, which is in natural, conversational language.
- the utterance is transcribed into a set of alternative hypothesis strings ordered by probability.
- the speech decoder 216 forwards the N best text strings to the natural language interface module 222 to be analyzed to determine the probable meaning of the utterance.
- the natural language interface module 222 parses the incoming strings by applying a set of probabilistic context free grammar (PCFGs) rules from the PCFG module 224 to find the most likely string, given the string's probability, the user's history, and the current system context.
- These PCFG rules reflect the context of the user (based on the attention word) and also the context of the device to be operated (if already determined).
- the PCFGs are initially ordered in terms of frequency of usage as well as likelihood of use. Over time, the system tracks the habits of individual users and improves rule probability estimations to reflect this data. This data can be shared and combined with data from other systems and then redistributed via the collaborative corpus.
- the NLICS includes two sets of grammars, one is the N-gram grammar of the speech recognition module 204 and the other is the probabilistic context free grammar module 224 of the natural language interface control module 206 .
- Conventional systems only use one set of grammars, not a combination of N-gram grammar and PCFG rules which are inferred from data collected from man-machine dialog in the domain of personal electronic products.
- the natural language interface module 222 reaches one of three conclusions: (1) that it unambiguously understands and can comply with the user request, in which case it carries out the command; (2) that it unambiguously understands and cannot comply with the user request, in which case it informs the user of this conclusion; and (3) that it cannot resolve an ambiguity in the request, in which case it requests clarification from the user.
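The three-way decision above can be sketched as a small dispatch function. The confidence threshold and return labels are invented for illustration; the patent does not specify how the module scores its conclusions.

```python
def resolve(confidence, can_comply, threshold=0.8):
    """Map the interpreter's result onto the three outcomes described above
    (the 0.8 threshold is illustrative, not from the disclosure)."""
    if confidence < threshold:
        return "ask_clarification"       # conclusion (3): ambiguous request
    return "execute" if can_comply else "inform_cannot_comply"

assert resolve(0.95, True) == "execute"              # conclusion (1)
assert resolve(0.95, False) == "inform_cannot_comply"  # conclusion (2)
assert resolve(0.40, True) == "ask_clarification"    # conclusion (3)
```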
- the natural language interface module 222 interprets an incoming string with a sufficiently high confidence level as a request to “Turn on the television”.
- the appropriate command within the device abstraction module 226 is retrieved and transmitted to the controlled device 114 (i.e., the television).
- the device abstraction module 226 includes all of the commands to effect the proper requests of the user in the format understandable by the television itself.
- the command is transmitted via the device interface 210 , e.g., an IR transmitter, to the television.
- the television is powered on.
- the second case is the case in which the user asks the NLICS to perform a task it cannot perform. For example, the user requests that the television explode.
- the feedback module 228 (e.g., text-to-speech) is instructed to play an audible message over the speaker alerting the user that the request cannot be performed. It is noted that the feedback module 228 may simply display notices on a screen display instead of playing an audio signal over the speaker 214.
- the natural language interface module 222 disambiguates the ambiguous request. If the ambiguity arises due to a low confidence, it asks the user to affirm its conclusion. For example, the speaker 214 plays, “Did you mean play the CD?” Alternatively, the natural language interface module 222 asks the user to repeat the request. If the ambiguity arises due to a set of choices, it presents these alternatives to the user, e.g., “Did you want to watch a movie on the VCR or the DVD?” If the ambiguity arises because of the current context, the user is made aware of this, e.g., the user requests to play the DVD player when it is already playing.
- the system adjusts the user's profile to reflect the confidence with which a decision was made, as well as preference given a set of alternatives.
- these statistics are used to reorder the PCFG rules and entries in the relevant lexicon(s). This results in a faster, more accurate system, since the most likely entries will always be checked earlier and these more likely entries will produce a higher confidence.
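The rule reordering just described can be sketched as follows; the rule names and usage counts are invented for illustration.

```python
# Promote PCFG rules that the user actually exercises so the most likely
# rules are tried first on later utterances.

def reorder_rules(rules, usage):
    """Stable sort: rules with higher observed usage move to the front,
    preserving the initial ordering for unseen rules."""
    return sorted(rules, key=lambda r: usage.get(r, 0), reverse=True)

rules = ["watch_tv", "play_cd", "play_dvd"]
usage = {"play_dvd": 12, "watch_tv": 3}   # per-user statistics
assert reorder_rules(rules, usage) == ["play_dvd", "watch_tv", "play_cd"]
```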
- the natural language interface module 222 instructs the feedback module 228 to clarify the request, e.g., the speaker 214 plays “Did you mean to play a CD?”
- the natural language interface module 222 switches the context and grammar rules based on what it is expecting to receive at the microphone array 108 . For example, the system will switch to a context of expecting to receive a “yes” or a “no” or any known variants thereof.
- the natural language interface module 222 switches context back to the original state.
- the natural language interface module 222 instructs the speech recognition module 204 to switch grammars, which will indirectly cause the lexicons to change, since the grammar controls which lexicons are used.
- the natural language interface control module 206 also contains the device abstraction module 226 .
- the device abstraction module 226 stores the abstractions for each device 114 . As such, the commands for each device 114 and the objects that each device 114 can manipulate are stored here. It also relates these controls to the states that the devices can be in and the actions they can perform. The content of the device abstraction module 226 depends on the different devices that are coupled to the remote unit 104 .
- the device abstraction module 226 also includes commands for other devices needed to operate a given device. For example, if the user requests to play a DVD, then instructions to power on the DVD player and cause the DVD to play are issued. Additionally, a command signal is sent to turn on the television, if it is not already on.
- the commands stored in the device abstraction module 226 are transmitted to the respective controlled device 114 via the device interface 210.
- the device interface 210 is an IR or an RF interface.
- the NLICS can be implemented to control any device which is controllable via such an IR link. As long as the device abstraction has stored the commands to operate the specific device, the device does not realize that it is being controlled by a natural language interface. It simply thinks its remote control or a universal remote control has sent the signal.
- the system processing controller 208 operates as the controller and processor for the various modules in the NLICS. Its function is well understood in the art. Furthermore, the interface 212 is coupled to the system processing controller 208 . This allows for connection to the base unit 106 , or alternatively, to a computer. The interface 212 may be any other type of link, either wireline or wireless, as known in the art.
- various components of system such as the feature extraction module 202 , the speech recognition module 204 and the natural language interface control module 206 may be implemented in software or firmware, for example using an application specific integrated circuit (ASIC) or a digital signal processor (DSP).
- Referring to FIG. 3, a functional block diagram is shown of a base unit or base station of the natural language interface control system of FIG. 1 in accordance with a further embodiment of the invention.
- the base unit 106 includes the planar microphone array 110, a frequency localization module 302, a time search module 304, a remote interface 306, the external network interface 308, and a secondary cache 310.
- the linear microphone array 108 and the planar microphone array 110 combine to form a three-dimensional microphone array 312 (also referred to as a 3D microphone array 312 ).
- Also shown is the external network 116 coupled to the external network interface 308.
- the base unit 106 is intended as a docking station for the remote unit 104 (which is similar to a universal remote control).
- the base unit 106 includes the external network interface 308 such that the NLICS can interface with an external network 116 , such as a home LAN or the Internet either directly or through a hosted Internet portal.
- additional grammars, speech models, programming information, IR codes, device abstractions, etc. can be downloaded into the base unit 106 , for storage in the secondary cache 310 , for example.
- the NLICS 102 may transmit its grammars, models, and lexicons to a remote server on the external network for storage.
- This remote storage may become a repository of knowledge that may be retrieved by other such devices.
- the system will not become outdated, since the lexicons will constantly be updated with the most current pronunciations and usages.
- This enables a collaborative lexicon and/or a collaborative corpus to be built, since multiple natural language interface control systems will individually contribute to the external database in a remote server.
- the NLICS 102 may download command signals for the device abstraction module of the remote unit 104 .
- the base unit 106 simply downloads the commands that are stored for any number of devices. These commands are then stored in the device abstraction module.
- the NLICS can submit feature vector data and labels associated with high-confidence utterances to the collaborative corpus. This data can then be incorporated with other data and used to train improved models that are subsequently redistributed. This approach can also be used to incorporate new words into the collaborative corpus by submitting the feature vector data and its label, which may subsequently be combined with other data and phonetically transcribed using the forward-backward algorithm. This entry may then be added to the lexicon and redistributed.
- the base unit 106 includes the planar microphone array 110 .
- the planar microphone array 110 and the linear microphone array 108 of the remote unit 104 combine to form a three-dimensional array 312 .
- Both arrays comprise conventional point-source-locating microphones.
- a three-dimensional array is constructed by first constructing a planar array (e.g., planar microphone array 110 ), then adding one or two microphone elements off of the plane of the planar array. As such, the linear microphone array 108 becomes the additional one or two elements.
- This enables the NLICS 102 to define a three dimensional search volume. As such, the device will only search for speech energy within the volume. Thus, the microphone arrays 108 and 110 will localize on a point within the search volume.
- the search volume is configured to be the volume about a user's living room couch.
- Both the linear microphone array 108 and the planar microphone array 110 are controlled by the natural language interface module 222 .
- a frequency localization module 302 and a time search module 304 are coupled to the 3D microphone array 312.
- the time search module 304 receives control signaling from the natural language interface module 222 within the remote unit 104 via the remote interface 306 .
- the time search module 304 adds up time-aligned buffers which are provided by the microphones. Thus, the time search module 304 locates putative hits and helps to steer the 3D microphone array 312 in the direction of the hit.
- the functionality of the time search module 304 is well known in the art.
- the frequency localization module 302 is also under the control of the natural language interface module 222 .
- the frequency localization module 302 implements a localization algorithm as is known in the art.
- the localization algorithm is used to localize speech energy within the defined volume. As such, speech energy originating from outside of the localized point within the volume will attenuate (is out of phase), while speech energy from within the localized point will sum (is in phase). Thus, the localization takes advantage of constructive interference and destructive interference in the frequency domain.
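The constructive/destructive interference idea can be illustrated with a minimal delay-and-sum sketch. This is an assumption-laden toy (two channels, circular delays in samples, a pure tone); the actual localization algorithm operates in the frequency domain and is only characterized as "known in the art."

```python
import math

def delay_and_sum(mic_signals, delays):
    """Align each microphone channel by its steering delay (in samples) and
    sum: in-phase speech adds constructively, off-target sources cancel."""
    n = len(mic_signals[0])
    return [sum(sig[(t + d) % n] for sig, d in zip(mic_signals, delays))
            for t in range(n)]

# Two mics hear the same tone, the second delayed by half a period (8 samples).
tone = [math.sin(2 * math.pi * t / 16) for t in range(64)]
shifted = tone[8:] + tone[:8]

aligned = delay_and_sum([tone, shifted], delays=[0, 8])     # steered at source
misaligned = delay_and_sum([tone, shifted], delays=[0, 0])  # steered elsewhere
# steering at the source doubles the signal; steering elsewhere cancels it
assert max(abs(x) for x in aligned) > 10 * max(abs(x) for x in misaligned)
```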
- the search module is used to do a coarse search for attention words. If the speech energy passes a threshold, then a fine search is done by the localization module. If it passes the fine search, then the word is passed to the recognition and NLI modules.
- the processing is reduced.
- if the SR module identifies the putative hit as an attention word, it is passed to the natural language interface module 222 to be analyzed to determine which attention word has been uttered.
- the context of the natural language interface module is initially of attention words, i.e., the system is searching for attention words to activate the system. Once an attention word is found, the context of the NLICS is caused to change to a request context, such that it will be looking for requests constrained by the devices coupled to the NLICS.
- the secondary cache of the base unit 106 is used to store secondary models, grammars and/or lexicons for use in the remote unit 104 .
- This complements the speech recognition module, which is designed to read in (stream) speech models and grammars from a secondary storage device or secondary cache (e.g., hard disk, CD-ROM, DVD) at run-time. Once the data has been read in, it can immediately be used without any kind of preprocessing. This effectively ties in well with the idea of context switching.
- the memory requirements are greatly reduced, since less frequently used grammars, etc. may be stored in the secondary cache 310 and read when required without occupying memory within the remote unit 104 .
- the secondary cache may be a storage for models, grammars, etc. that are downloaded from an external network 116 .
- In Step 402, the speech recognition module 204 and the natural language interface module 222 are initialized to the context of looking for attention words. This allows the NLICS to accept non-prompted user requests, but first the system must be told that a user request is coming. The attention word accomplishes this. As such, the grammars and the hidden Markov models are used to specifically identify the presence of an attention word.
- the remote unit receives the acoustic speech data at the microphone array (Step 404). The acoustic data is segmented into 12.8 msec frames using a 50% overlap.
- a 38-dimensional feature vector is derived from the acoustic data. These features consist of Mel-frequency cepstral (MFC) coefficients 1-12 and the first and second order derivatives of MFC coefficients 0-12. Thus, feature vectors are created from the acoustic data (Step 406). This is performed at the feature extraction module 202.
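The framing step feeding this feature extraction can be sketched as follows. The 10 kHz sample rate is an assumption for illustration (the disclosure does not state one), which makes a 12.8 ms frame 128 samples, and `frame_signal` is an invented helper name.

```python
def frame_signal(samples, frame_len, overlap=0.5):
    """Slice audio into fixed-length frames with the given fractional overlap
    (50% overlap => the hop is half a frame)."""
    hop = int(frame_len * (1 - overlap))
    return [samples[i:i + frame_len]
            for i in range(0, len(samples) - frame_len + 1, hop)]

# At an assumed 10 kHz sample rate, a 12.8 ms frame is 128 samples.
audio = list(range(512))
frames = frame_signal(audio, frame_len=128)
assert len(frames) == 7          # hop of 64 samples over 512 samples
assert frames[1][0] == 64        # each frame starts half a frame later
```

Each such frame would then yield one 38-dimensional feature vector (MFC coefficients plus their derivatives).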
- the speech recognition module 204 applies acoustic hidden Markov models (HMM) and an N-gram grammar to the incoming feature vectors (as specified by the natural language interface) to derive an in-vocabulary (IV) Viterbi (likelihood) score (Step 408 ).
- the feature data is reprocessed using models of OOV events, e.g., an ergodic bank of monophone models, to derive an out-of-vocabulary (OOV) Viterbi score (Step 410 ).
- the garbage score is calculated from the IV and OOV scores, e.g., the garbage score equals [ln(IV score) − ln(OOV score)]/number of frames (Block 411).
- a low score indicates a garbage utterance.
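As a short illustration of the garbage score formula in Block 411 (the likelihood values below are invented for the example; natural logarithms are used as in the formula):

```python
import math

def garbage_score(iv_likelihood, oov_likelihood, num_frames):
    """Per-frame log-likelihood ratio of the in-vocabulary hypothesis
    against the out-of-vocabulary (monophone) model, per Block 411."""
    return (math.log(iv_likelihood) - math.log(oov_likelihood)) / num_frames

# IV model fits much better than the OOV model -> positive score (in vocabulary)
assert garbage_score(1e-40, 1e-60, 100) > 0
# OOV model fits better -> negative score, indicating a garbage utterance
assert garbage_score(1e-60, 1e-40, 100) < 0
```

Normalizing by the number of frames keeps the score comparable across utterances of different lengths.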
- the N-best transcribed text string(s) and corresponding garbage score(s) are passed to the natural language interface module 222 (Step 412 ).
- the natural language interface module 222 parses the incoming string(s) using a set of probabilistic context-free grammar (PCFG) rules as well as device context information for an attention utterance (Step 414 ).
- the natural language interface module 222 requires an attention strategy, e.g., the receipt of an attention word (e.g., “Mona”) that is unique to the user, or speaker identification coupled with allowable grammar rules.
- the natural language interface module 222 knows the user's identity. It proceeds by configuring the system according to the user. It does this by changing the relevant system parameters and by directing the speech recognition module 204 to change grammars to those appropriate for accepting commands and requests and according to the user.
- the speech recognition module 204 changes lexicons according to the grammar rules and the individual user.
- the speech recognition module 204 and the natural language interface module 222 change contexts to look for user requests (Step 418 ).
- the natural language interface module directs the microphone array of the base unit or base station to narrow its focus in order to better discriminate against environmental noise.
- the natural language interface module directs the amplifier to reduce its volume. Then, the natural language interface module 222 initiates a timer and waits for the user's request until the time-out period has expired. If the system times out, the natural language interface module 222 reconfigures the system by resetting the relevant speech recognition module rules and lexicon to search for attention words. Also, the microphone array and the amplifier volume are reset if they had been adjusted. These resetting steps are such as those performed in Step 402.
- After switching to the context of looking for a user request (Step 418), Steps 404 through 414 are repeated, except that in this pass the acoustic speech represents a user request to operate one or more of the controlled devices.
- when the natural language interface module 222 detects a user request (Step 416), i.e., a user request (as determined by the PCFG grammar system and device context) is received, it draws one of three conclusions (Steps 420, 422 or 424). According to Step 420, the user request is unambiguously understood and the natural language interface module can comply with the user request. Thus, the natural language interface module 222 carries out the command by sending the appropriate signals via the device interface 210, as indicated by the device abstraction. Then, the context of the speech recognition module 204 and the natural language interface control module 206 is switched back to look for attention words (Step 426), before proceeding to Step 404.
- In Step 422, the user request is unambiguously understood but the natural language interface module cannot comply with the user request. As such, the user is informed of this conclusion and prompted for further direction. The system then waits for further user requests or times out and proceeds to Step 426.
- the natural language interface module 222 requests clarification from the user, e.g., by using the feedback module 228 and the speaker 214 .
- the ambiguity is resolved according to the kind of ambiguity encountered. If the ambiguity arises due to a low confidence, it affirms its conclusion with the user (e.g., “Did you mean play the CD player?”). If the user confirms the conclusion, the command is carried out, and the system is reset (Step 426 ). The system adjusts the user's profile to reflect the confidence with which a decision was made, as well as preference given a set of alternatives.
- these statistics are used to reorder the PCFG rules and entries in the relevant lexicon(s). This results in a faster, more accurate system, since the most likely entries will always be checked earlier and these more likely entries will produce a higher confidence.
- the natural language interface module 222 carries out the command, otherwise the system is reset (Step 426 ). In either case, the user profile is updated as described above.
Abstract
A search is performed in a network for an attention word based on a first context including a first set of models, grammars, and lexicons. Upon finding the attention word, a switch is performed to a second context to search for an open-ended user request. The second context includes a second set of models, grammars, and lexicons and the open-ended user request does not follow a predetermined format.
Description
- This application is a continuation of and claims priority to application Ser. No. 09/692,846 to Konopka, which was filed on Oct. 19, 2000 and entitled “Natural Language Interface Control System,” which claims priority under 35 U.S.C. §119(e) to U.S. Provisional Patent Application Ser. No. 60/160,281 to Konopka, which was filed on Oct. 19, 1999, and was entitled “A Natural Language Interface for Personal Electronic Products.” The contents of both of these applications are incorporated herein by reference in their entirety.
- The present invention relates to speech recognition, and more specifically to natural language speech recognition. Even more specifically, the present invention relates to a natural language speech recognition system used to control an application.
- Many have dreamed of a device that could completely bridge the gap or seam between man-made machines and humans. Speech recognition techniques have been used to enable machines to recognize human speech. For example, speech recognition technology is used in many applications, such as word processing, control of devices, and menu driven data entry.
- Most users prefer to provide the input speech in the form of a natural language. Natural language is written or spoken input that is in natural form such as if the user is actually conversing with the machine. In contrast, non-natural language is limited in syntax and structure. To communicate with the machine in non-natural language, the user must know and speak commands or requests according to the syntactic and semantic structure of the speech recognition engine.
- Advantageously, a natural language interface system permits the user to easily interface with the machine or system, since the user can simply speak in a conversational manner without having to remember the proper format to speak a command or request. Disadvantageously, natural language interface systems are difficult to implement due to the complex and shifting “rules” of human natural language.
- Furthermore, natural language processing of the prior art has been inefficient and rigid in its ability to recognize the meaning of natural language utterances. As such, in order to limit the context of the user's natural language input and ease the processing of the input speech, conventional natural language interface systems are dialog-based or prompt-driven. The natural language interface controls the context of the speech being input to the system. For example, natural language interfaces have been implemented as automated phone systems, such as an automated natural language airline reservation system. Such systems prompt the user to speak within a certain context. For example, the natural language system asks the user to what city the user would like to fly. As such, the system dictates to the user the context of the speech it expects. Thus, the natural language interface system will look for natural language indicating names of cities. Next, the system will prompt the user to speak what date the user would like to fly. Again, the context of the response is dictated by the natural language interface system. Disadvantageously, the user is unable to provide open-ended information or an open-ended request. If the received speech data is not within the context as prompted by the system, the system will either ignore the request, inform the user that the response is not understood, or potentially misinterpret the request as falling within the context of the prompt.
- What is needed is an efficient natural language system in which the context is not limited by the natural language processing, but is limited by the user's speech. The present invention advantageously addresses the above and other needs.
- The present invention advantageously addresses the needs above as well as other needs by providing an open-ended natural language interface control system for controlling multiple devices whose context is not defined by the natural language interface, but by the direction of the user and the capabilities of the multiple devices.
- In one embodiment, the invention can be characterized as a natural language interface control system for operating a plurality of devices comprising a first microphone array, a feature extraction module coupled to the first microphone array, and a speech recognition module coupled to the feature extraction module, wherein the speech recognition module utilizes hidden Markov models. The system also comprises a natural language interface module coupled to the speech recognition module and a device interface coupled to the natural language interface module, wherein the natural language interface module is for operating a plurality of devices coupled to the device interface based upon non-prompted, open-ended natural language requests from a user.
- In another embodiment, the invention can be characterized as a method of speech recognition comprising the steps of: searching for an attention word based on a first context including a first set of models, grammars, and lexicons; and switching, upon finding the attention word, to a second context to search for an open-ended user request, wherein the second context includes a second set of models, grammars, and lexicons.
- In a further embodiment, the invention can be characterized as a method of natural language control of one or more devices, and a means for practicing the method, the method comprising the steps of: receiving an attention word, wherein the attention word indicates that an open-ended, natural language user request will be received; receiving the open-ended, natural language user request; matching the open-ended natural language request with the most likely command corresponding to the open-ended natural language request; and transmitting the command to a respective one of the one or more devices.
- The above and other aspects, features and advantages of the present invention will be more apparent from the following more particular description thereof, presented in conjunction with the following drawings wherein:
-
FIG. 1 is a system level block diagram of a natural language interface control system (NLICS) in accordance with one embodiment of the invention; -
FIG. 2 is a functional block diagram of a remote unit of the natural language interface control system (NLICS) of FIG. 1 in accordance with another embodiment of the invention; -
FIG. 3 is a functional block diagram of a base station unit of the natural language interface control system (NLICS) of FIG. 1 in accordance with a further embodiment of the invention; and -
FIG. 4 is a flowchart for the steps performed in the natural language interface algorithm of the natural language interface control system of FIGS. 1 through 3. - Corresponding reference characters indicate corresponding components throughout the several views of the drawings.
- The following description of the presently contemplated best mode of practicing the invention is not to be taken in a limiting sense, but is made merely for the purpose of describing the general principles of the invention. The scope of the invention should be determined with reference to the claims.
- Referring first to
FIG. 1, a system level block diagram is shown of a natural language interface control system in accordance with one embodiment of the invention. Shown is the natural language interface control system 102 (also referred to as the NLICS 102) having a remote unit 104 and a base unit 106 (also referred to as a base station 106). The remote unit 104 has a linear microphone array 108 and a speaker 112 and the base unit 106 has a planar microphone array 110. The remote unit 104 is coupled to multiple devices 114 controllable via the natural language interface control system 102. Furthermore, the base unit 106 is coupled to an external network 116. - In operation, the natural language
interface control system 102 eliminates the seam between the multiple devices 114 and the user for control purposes. The natural language interface control system 102 provides a natural language interface such that a user may control one or more of the multiple devices 114 by simply speaking in a natural, conversational manner to the natural language interface control system 102. The NLICS 102 is able to interpret the natural language request of the user and issue the appropriate command to the respective device(s) to effect the user's request. For example, in a home application, the devices 114 may include a television, a stereo, a video cassette recorder (VCR), a digital video disk (DVD) player, etc. When the user wishes to operate one of the devices 114, the user simply speaks, "I wanna watch TV", or another natural language equivalent. The NLICS 102 includes a speech recognition module utilizing hidden Markov models (HMMs), as known and understood in the art, to detect the speech and uses a natural language interface to interpret the natural language and determine the probability of what the appropriate user request is. The natural language interface utilizes probabilistic context free grammar (also referred to as PCFG) rules and lexicons that are stored for each of the respective devices 114. As such, the natural language interface module includes a device abstraction module that contains an abstraction of each device 114 that the NLICS 102 is designed to interface with. Thus, each device 114 is abstracted into a set of commands that are understandable by the respective devices 114. Furthermore, each abstraction is associated with individual grammars and lexicons specific to the respective device. - Once the request is determined with the desired level of confidence, the natural language interface module issues a sequence of command(s) to the appropriate device(s) to effect the user's request.
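The device abstraction idea above can be sketched as a data structure: each controlled device reduces to a set of commands it understands, with device-specific grammar and lexicon references attached. The device names, command names, and grammar labels here are invented for illustration, as is the mapping from an interpreted request to a command sequence:

```python
# Hypothetical device abstractions; each entry pairs a command set with the
# grammar and lexicon used when that device is the dialog context.
DEVICE_ABSTRACTIONS = {
    "television": {
        "commands": {"power_on", "power_off", "set_channel", "set_mode"},
        "grammar": "tv_pcfg",
        "lexicon": "tv_lexicon",
    },
    "amplifier": {
        "commands": {"power_on", "power_off", "set_mode", "set_volume"},
        "grammar": "amp_pcfg",
        "lexicon": "amp_lexicon",
    },
}

def commands_for(request):
    """Map an already-interpreted request to per-device command sequences."""
    if request == "watch tv":  # interpretation assumed done upstream
        return [("television", "power_on"), ("television", "set_mode"),
                ("amplifier", "power_on"), ("amplifier", "set_mode"),
                ("amplifier", "set_volume")]
    return []  # outside the abilities of the abstracted devices

for device, command in commands_for("watch tv"):
    assert command in DEVICE_ABSTRACTIONS[device]["commands"]
```

Because every emitted command is checked against the abstraction, a request can only ever result in operations the target devices actually understand.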
For example, in response to a user's request of “I wanna watch TV”, the natural language interface module will issue command(s) to the appropriate device(s) to turn on the television and amplifier, set the television and amplifier to the proper modes, and set the volume to an appropriate level. It also updates the states and settings of these devices in its internally maintained abstractions. The command may even turn the television to a preferred channel as learned by the
NLICS 102 or as requested by the user in the open-ended natural language request. As a further example, the user may request specific information, such as "Do you have the album 'Genesis'?" to which the system would respond "Yes". The user could then respond "Play that", or "Play the album Genesis". The system would respond by turning on the CD jukebox and the amplifier, setting the proper mode for the amplifier, setting the proper volume level, selecting the proper album and finally, playing the album. It would also update the internally maintained states and settings of the device abstractions as well as the user's profile. Preferably, this command signal is transmitted via a radio frequency (RF) link or an Infrared (IR) link, as are known in the art. - Speech recognition techniques are well known in the art and the control of devices based upon spoken commands is known. For example, applications exist where a user speaks a predetermined speech command to a speech recognition control system, for example, the user speaks, "Turn on" to a controlled television set. In response, the TV is turned on. However, such approaches do not take advantage of the use of natural language or conversational language, nor abstract the devices under control to derive dialog context. If the exact predetermined voice command is not issued, then the system will not issue the command. In contrast, this embodiment implements a natural language interface module which is used to determine probabilistically the most likely meaning of the spoken utterance and issue the appropriate command(s). Thus, the instructions from the user come in a very conversational manner without having to remember a specified command signal.
For example, if the user states “hey, lets watch TV”, “I wanna watch TV”, “turn on the TV”, “whattya say we watch a little television”, the system will use its natural language interface module to probabilistically determine that the user is requesting to watch the television, and will issue an appropriate set of command(s) that the television and other appropriate devices will understand.
- Thus, advantageously, the physical interface or seam between the
device 114 and the user is eliminated. The user does not even need to know how to operate the device 114 in question. For example, the user may not know how to operate the DVD player; however, the user can simply say, "I want to watch a DVD" and a command signal may be sent to power on the DVD player and begin playing the DVD within the player. - Furthermore, the natural language interface module disambiguates the user's request if it is not sure what the request means. For example, the request may be "I want to watch a movie". The natural language interface module does not know if the user would like to watch a movie on the DVD player, the VCR or a television movie. In such cases, the natural language interface module includes a feedback module (e.g. a text-to-speech module) and a feedback mechanism such as a speaker to ask the user to clarify the request. For example, the natural language interface module will ask in response to such a request, "Do you want to watch a movie on the DVD, VCR or television?" At which point the user may reply "DVD", for example.
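The disambiguation step just described can be sketched as a capability lookup: when a request maps to more than one device abstraction, the feedback module asks the user to choose. The device names and capability sets below are assumptions made for the example:

```python
# Hypothetical per-device capability sets; in the real system these would be
# derived from the device abstractions and their grammars.
DEVICE_CAPABILITIES = {
    "DVD player": {"watch movie"},
    "VCR": {"watch movie"},
    "television": {"watch movie", "watch tv"},
}

def resolve(request):
    matches = [dev for dev, caps in DEVICE_CAPABILITIES.items() if request in caps]
    if len(matches) == 1:
        return ("command", matches[0])   # unambiguous: issue the command
    if matches:
        # Ambiguous: hand a clarifying question to the feedback module.
        question = "Do you want to watch a movie on the " + ", ".join(matches) + "?"
        return ("clarify", question)
    return ("ignore", None)              # outside the devices' abilities

print(resolve("watch tv"))  # ('command', 'television')
```

A request outside every capability set falls through to `ignore`, matching the behavior described later for out-of-scope requests such as "Make me some breakfast".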
- As such, the system is a true “natural language interface” that can accept “open-ended” requests. The natural language
interface control system 102 is not a "closed-ended" system that is primarily dialog driven or prompt driven. For example, in known natural language systems, the conversation must be controlled by the system by prompting the user to provide certain information that the system will then try to identify. For example, in a natural language based airline reservation system, the system will guide the user through the dialog such that the context is constrained by the questions asked by the system. For example, the system will ask, "To what city would you like to fly?" Then the user would respond, in natural language, with the destination city and the system will essentially try to understand the response by trying to match the response with the names of cities. Then the system will prompt the user by asking "What date would you like to leave?" and the system will then constrain the context of the search and analysis of the incoming text strings based on what it is expecting to receive, i.e., dates. In contrast, with respect to the NLICS 102, the user, not the system, initiates the dialog. The user simply states "I want to hear some music" with no prompting from the NLICS 102. The context of the search is not constrained by the prompting of the system, but is constrained by the abilities of the devices 114 controlled by the NLICS 102. Thus, the user may ask the NLICS 102 to perform any of the tasks that each of the controlled devices is capable of performing. If, for example, the user asks the NLICS 102 to perform a function that is not available from the controlled devices, e.g., if the user says "Make me some breakfast", the NLICS 102 is not able to effect such a request because it is not within the programmed functionality of the controlled devices. The NLICS 102 will properly interpret phrases within the abilities of the devices 114 and simply ignore other requests.
Advantageously, the feedback portion of the natural language interface module will alert the user that the request is not available. - In this embodiment, the natural language
interface control system 102 is "always on", such that the user may speak a request at any time and the system will respond. However, to get the attention of the NLICS 102, the user speaks an "attention word" followed by the request. This functions to identify the user, to avoid false detections of requests and to distinguish between regular conversation and background noise not intended for the NLICS. This attention word notifies the NLICS 102 that following the attention word, a request will arrive. As such, the microphone arrays employed by the NLICS only have to search for the attention word or words within the physical space defined by the microphone arrays. For example, if the attention word is programmed as "Mona", then the user's request becomes "Mona, I wanna watch TV." This greatly reduces the processing and searching by the microphone arrays. - Furthermore, individual users may have separate attention words specific to that user. For example, within a household, a first user's attention word is "Mona" while a second user's attention word is "Thor". When the
NLICS 102 hears the attention word "Mona", the system assumes that the first user is issuing the command. For example, if the first user says, "Mona, I wanna watch TV", then the system will not only turn on the television (and other relevant devices), but the system will turn on the television to the first user's selected favorite channel. Note that this does not provide true identification, however, since the first user could say the second user's attention word. This mechanism simply provides a means to tailor the experience of the NLICS 102 specifically to the likes, pronunciations and habits of individual users. - One feature that enables the
NLICS 102 to function efficiently is that each of the devices 114 coupled to the NLICS 102 is abstracted into a separate device abstraction such that separate grammars and lexicons are stored for each of the devices 114. For example, as the natural language interface module determines that the request is for the DVD player, a grammar and lexicon specific to that particular context (i.e., the context of the DVD player) is used to aid in the processing of the arriving acoustic data within the speech recognition module. This provides a context switching feature in the speech recognition module. - In some embodiments, the
NLICS 102 is set up such that models used in the speech recognition module for the HMMs and grammars can be streamed into use from a secondary source, such as a hard disk, CD-ROM, or DVD at run time. Once the data is read in, it can be immediately used without any preprocessing. As such, memory usage for the speech recognition module is improved since many models and grammars can be stored outside the memory of the NLICS 102. - In other embodiments, the
NLICS 102 is designed to be implemented as two separate units, for example, the remote unit 104 and the base unit 106. The base unit 106 functions as a "docking station" for the remote unit 104, which may be coupled to the base unit 106 via a universal serial bus (USB) connection, for example. In some embodiments, the remote unit 104 functions as a universal remote control for a variety of devices as is traditionally done, by providing buttons for the user to press. Furthermore, the base unit 106 provides an external network interface for the NLICS 102. For example, the external network interface couples the NLICS to an external network 116, such as a home local area network (LAN), an Intranet or the Internet. As such, the NLICS 102 may download additional grammars, HMM models, device abstractions, CD, DVD, television or other programming information and/or lexicons that are maintained in central databases within the external network 116. - Additionally, the
base unit 106 functions as a secondary cache for the remote unit 104. The remote unit 104 includes a feature extraction module, a speech recognition module, and a natural language interface module, as well as the device interface to the various devices. As such, the base unit 106 includes a memory that functions to hold additional models, grammars, and lexicons to be used in the remote unit 104. - The
remote unit 104 includes a traditional two-element linear microphone array 108 that receives acoustic signaling. Also, the base unit 106 contains a planar microphone array 110 which listens to acoustic energy from a two-dimensional space. The NLICS 102 advantageously uses both microphone arrays to define a specific physical space from which speech is accepted. For example, the microphone arrays of the NLICS 102 can be configured to listen to a volume including a living room couch where a user may be sitting when operating respective devices. As such, acoustical data coming from sources outside of this defined space will attenuate while acoustical data coming from within the defined space will be summed in phase. - Although the system has generally been described above, a more detailed description of the natural language interface control system follows.
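The spatial filtering just described (in-phase summation inside the defined space, attenuation outside it) is classic delay-and-sum beamforming. A minimal sketch follows, assuming the per-microphone arrival delays for the steered location are already known; a real array would derive them from its geometry:

```python
def delay_and_sum(channels, delays):
    """channels: equal-length sample lists, one per microphone.
    delays[k]: known arrival delay (in samples) of the steered source at mic k."""
    n = len(channels[0])
    out = [0.0] * n
    for samples, d in zip(channels, delays):
        for i in range(n):
            j = i + d  # advance each channel to undo its arrival delay
            if 0 <= j < n:
                out[i] += samples[j]
    return out

# A source matching the steering delays sums coherently (amplitude doubles);
# sources arriving with other delays would partially cancel instead.
sig = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
mic1 = sig
mic2 = [0.0] + sig[:-1]  # the same source arriving one sample later at mic 2
aligned = delay_and_sum([mic1, mic2], [0, 1])
```

With two microphones the in-beam gain is 6 dB in amplitude terms, which is the SNR-enhancement effect the paragraph above attributes to the arrays.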
- Referring next to
FIG. 2, a functional block diagram is shown of the remote unit 104 of the natural language interface control system 102 of FIG. 1 in accordance with another embodiment of the invention. Shown is the remote unit 104 including the linear microphone array 108, a feature extraction module 202, a speech recognition module 204, a natural language interface control module 206, a system processing controller 208, a device interface 210, a base unit interface 212 (also referred to as a universal serial bus (USB) interface 212), and a speaker 214. Also illustrated are the devices 114. The speech recognition module 204 includes a speech decoder 216, an N-gram grammar module 218, and an acoustic models module 220. The natural language interface control module 206 includes a natural language interface module 222, a probabilistic context free grammar module 224 (also referred to as the PCFG module 224), a device abstraction module 226 and a feedback module 228. - Although the system has been described as two separate components, i.e., the
remote unit 104 and the base unit 106, the core functionality of the NLICS 102 may be implemented solely within the remote unit 104, although preferred embodiments utilize both the remote unit 104 and the base unit 106 as separate units. As such, the remote unit 104 will be described first below, followed by a description of the base unit 106. - Acoustic data enters the
remote unit 104 via the linear microphone array 108, which is a two-element narrow-cardioid microphone array that localizes a source, i.e., the user, and discriminates against interfering noise. Such linear microphone arrays are well known in the art. The linear microphone array 108 samples the input speech data from each of the microphone elements, and then time aligns and sums this data in order to produce a signal-to-noise ratio (SNR)-enhanced representation of the incoming acoustic signal. - The acoustic data is then passed to the
feature extraction module 202, which is used to extract parameters or feature vectors representing information related to the incoming acoustic data. - The
feature extraction module 202 performs edge-detection, signal conditioning and feature extraction. According to one embodiment, speech edge detection is accomplished using noise estimation and energy detection based on the 0th Cepstral coefficient and zero-crossing statistics. Feature extraction and signal conditioning consist of extracting Mel-frequency cepstral coefficients (MFCC), delta information and acceleration information. The result is a 38-dimensional feature vector based on 12.8 ms sample buffers overlapped by 50%. Such feature extraction modules 202 and functionality are well understood in the art, and one skilled in the art may implement the feature extraction module in a variety of ways. Thus, the output of the feature extraction module 202 is a sequence of feature vectors. - Next, generally, the
speech recognition module 204 functions as a Hidden Markov Model (HMM)-based continuous speech recognizer that has the ability to reject "unmodeled events", e.g. out-of-vocabulary events, disfluencies, environmental noise, etc. The speech recognition module 204 is under the control of the natural language interface module 222 and can switch between different acoustic models and different grammars based on the context of the speech, as determined by the natural language interface control module 206. The speech recognition module 204 may be entirely conventional, although it has several features which are advantageous for use in the NLICS 102. Furthermore, memory usage in the speech recognition module 204 has been optimized so that the memory requirement is mainly a reflection of the amount of acoustic speech model data used. A more detailed description follows of the speech recognition module 204 and the natural language interface control module 206. - The feature vectors from the
feature extraction module 202 are input to the speech recognition module 204, i.e., to the speech decoder 216 of the speech recognition module (SRM) 204. Thus, the speech recognition module (SRM) 204 is responsible for requesting speech feature vectors from the feature extraction module (FEM) 202 and finding the most likely match of the corresponding utterance with a set of speech models, while rejecting non-speech events, using an approach based on Hidden Markov Models (HMMs). - The models used by the
speech decoder 216 are stored in the acoustic models module 220. These models may comprise context-dependent or context-independent phonetic models, sub word models or whole word models, e.g. monophones, biphones and/or triphones. In one embodiment, the speech decoder 216 may dynamically switch between different models, e.g., the speech decoder 216 may switch between models based on triphones and monophones. This is in contrast to known systems, where there are a fixed number of states and Gaussians per state, i.e. the architecture of the respective phonemes is fixed. Here, both the selection between models based on monophones, biphones, and triphones and the architecture of these phonemes, e.g., the number of states and the number of Gaussians per state for each type of phoneme (monophone, biphone, and triphone), may be varied for optimization in space, speed, and accuracy. As is well understood in the art, the received utterances are analyzed with the models, e.g., using a Viterbi algorithm, and scores are assigned representing how well the utterance fits the given models. Furthermore, the models used by the speech decoder 216 are under direct control by the natural language interface control module 206, which is described further below. - Additionally, two garbage-modeling techniques are utilized. Garbage filler models are stored with the
acoustic models module 220 to model background noises as well as disfluencies and "silences". These models are utilized by the speech decoder 216 in the rejection of out-of-vocabulary (oov) events. The speech decoder 216 also rejects out-of-vocabulary (oov) events using an online garbage calculation. It then returns the N-best candidates if their scores are very close. Such out-of-vocabulary rejection is also well understood in the art. - In some embodiments, the rejection techniques have been improved compared to those known in the art. The basic principle behind HMM-based speech recognition systems is that an utterance is compared with a number of speech models (from the acoustic models module 220) in order to find the model that best matches the utterance. This implies that the output of the
speech recognition module 204 will be a reference to the model (e.g. word) with the best match. However, this causes problems in cases where no models exist that represent the words spoken. In such cases, an HMM-based system will typically still attempt to find the closest match between utterances and models and report the results. In many cases this is unwanted, as any sound that is picked up by an open microphone will cause a reference to a model to be emitted. To avoid this effect, it is sometimes preferred to determine whether the utterance consists of in-vocabulary words or not. For example, if a Viterbi score passes a threshold, the utterance is determined to be an in-vocabulary word. If the Viterbi score of the utterance does not exceed the threshold, then the utterance is deemed out-of-vocabulary. Such a Viterbi score is generated using the Viterbi algorithm. This algorithm calculates a single best state sequence through an HMM and its corresponding probability, given an observation sequence. However, experiments have shown that this is not a very accurate rejection scheme. Instead, many systems rely on comparing the Viterbi score with another Viterbi score that is obtained by reprocessing the utterance through an alternative HMM whose task is to represent all out-of-vocabulary events or filler sounds, i.e., using garbage models. A garbage score can then be defined as the difference between the logarithms of each of the two Viterbi scores divided by the number of frames in the utterance, according to equation 1 below. The garbage score reveals whether the utterance had a closer match with the word models or the out-of-vocabulary models. Many variants have been proposed as to how to reject out-of-vocabulary events. One observation is that periods of silence in an utterance typically produce high Viterbi scores even for models that are supposed to model high-energy parts-of-speech.
To some extent this can be avoided by providing an additional feature representing the energy of the speech signal in the feature extraction module 202. However, this still leads to incorrect garbage score measurements. If there is silence in the beginning or ending of an utterance and this beginning or ending silence is not being modeled, it has been observed that the garbage scores are indeed affected. The feature extraction module 202 performs speech detection such that the beginning and ending silences should not be included in the sample forwarded to the speech decoder 216 of the speech recognition module 204. However, finding the beginning and ending of an utterance becomes a complex task for utterances that begin or end with low-energy sounds. An example of a group of sounds where this is a problem is the fricative. Fricatives are characterized as broadband, low-energy noise, e.g. "white noise". A fricative, as known in the art, is a sound exemplified by such phonemes as "th", "sh", etc. The feature extraction module 202 attempts to solve this problem by making its best efforts to find the beginning and ending samples. To guarantee that low-energy sounds are included in the speech sample, the feature extraction module 202 includes a number of extra samples in the beginning and ending of the utterance. In cases where there is no low-energy sound in the beginning or ending of an utterance, this implies that silence will be prepended and appended to the speech sample, assuming that the utterance was spoken in isolation, and hence the garbage scores in the speech decoder 216 become skewed. To solve this problem, in one embodiment, each model is preceded and followed by a single-state silence model that "consumes" the frames of silence passed along from the feature extraction module 202. The speech decoder 216 then finds the sequence of models with the closest match and optimally aligns the silence models as well as the word-models with the utterance.
Now the start and end indices for the beginning and ending silence portions of the utterance can be obtained and removed. Furthermore, the best matching word models are now kept and reprocessed without the preceding and following silence models, using only the pure-speech portion of the utterance. Next, the out-of-vocabulary HMMs process the same portion of the utterance and the garbage score can be calculated as

garbage score = (w - g) / (n - m)  (1)

where w is the logarithm of the Viterbi score for the acoustic models of in-vocabulary words without preceding or following silence models and where no silence is included in the utterance. Similarly, g is the logarithm of the corresponding score for the out-of-vocabulary HMM models. Also, n is the total number of frames in the utterance and m is the number of frames that were consumed by the preceding and following silence models. In summary, using this rejection technique, the system is better able to accurately isolate the speech portion of the utterance. This has the effect of better isolating in-vocabulary words and rejecting out-of-vocabulary events that begin or end with low-energy sounds, such as fricatives, in comparison to conventional rejection schemes.
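The garbage-score rejection just described, i.e., the difference of the log Viterbi scores divided by the number of pure-speech frames, can be sketched as follows. The Viterbi scores are taken as given (they would come from the decoder), and the acceptance threshold is an assumed, tunable parameter not specified in the text:

```python
import math

def garbage_score(in_vocab_viterbi, oov_viterbi, n, m):
    """(w - g) / (n - m): w and g are the logarithms of the in-vocabulary and
    out-of-vocabulary Viterbi scores, n the total frame count, and m the
    frames consumed by the preceding and following silence models."""
    w = math.log(in_vocab_viterbi)
    g = math.log(oov_viterbi)
    return (w - g) / (n - m)

def accept(in_vocab_viterbi, oov_viterbi, n, m, threshold=0.0):
    # Positive scores favor the in-vocabulary models; threshold is assumed.
    return garbage_score(in_vocab_viterbi, oov_viterbi, n, m) > threshold
```

Normalizing by the pure-speech frame count (n - m) rather than n is what keeps leading and trailing silence from diluting the score, which is the point of the silence-model alignment above.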
- The N-
gram grammar module 218 includes the grammars used by the speech decoder 216. These grammars are the rules by which lexicons are built, and a lexicon is a dictionary consisting of words and their pronunciation entries. The specific grammars used by the speech decoder 216 are also controlled by the natural language interface module 222. In this embodiment, the N-gram grammar is configured to use multiple grammar types or a combination of grammar types. For applications (e.g., controlled devices with many controls and functions) that use a complex language it might be advantageous to use the trigram grammar option. For smaller systems (e.g., a device with very simple controls and functions), the bigram grammar option might constitute a better memory and accuracy tradeoff. To provide a memory efficient representation of the bigram and trigram grammars, the allowed combinations of lexicon entries can be expressed in terms of specific lexicon entry labels or word groups. If any lexicon entry should be able to follow upon any lexicon entry, the ergodic grammar option can be used. - It is noted that the use of an N-gram grammar within a device that generally has a small footprint is not intuitive. By a small footprint, it is meant that the system only has to recognize speech relating to the controlled
devices 114 coupled to the remote unit 104, such that it can classify the remaining speech as out-of-vocabulary. However, the N-gram grammar module 218 allows for the use of multiple grammars and types even in the case of a speech recognition module 204 having a small footprint. - Another grammar that is mainly used for the rejection scheme of the
speech decoder 216 is the word list grammar. The word list grammar is used to recalculate the Viterbi score for a fixed sequence of words and a subset of an utterance. - The system incorporates the various grammars in a way that allows for "context switching", or the immediate switching between grammar types and sets of grammar rules under the control of the natural language interface module. Being able to do so is important, as the content of a person's speech is highly affected by context. For example, only certain phrases (e.g., the attention words described above) are expected to begin a dialog while others could only follow upon a question (e.g., the natural language interface disambiguating an unclear request). In particular, this becomes evident when a speaker is targeting different audiences, and in the case of consumer electronics, different products, such as a television, a DVD player, a stereo, and a VCR. As an attempt to keep the processing requirements low while increasing the speech recognition accuracy, the system provides a way to define contexts for which only certain grammar rules should apply. If the context is known, the natural
language interface module 222 can instruct the speech recognition module 204 to listen only to phrases that are expected. For example, when the natural language interface module 222 has determined that the user is attempting to operate the DVD player, the speech recognition module 204 may be instructed to use the grammar type and grammar corresponding to the DVD player. Thus, the speech decoder 216 will retrieve the proper grammar from the N-gram grammar module 218. Context switching can also be performed on a finer level where a flag for each grammar rule or lexicon entry is used to indicate which individual rules or words are to be enabled and disabled. Further, for some system settings and some grammar modes it might be preferred to limit the search for the best hypothesis to a set of lexicon entries. This can be done by defining several lexicons and referencing only the lexicon of interest. - It is noted that since the
speech recognition module 204 can dynamically change the grammar used given the context of the received speech, the lexicons change dynamically as well, since the lexicons depend on the selected grammar or grammars. - Depending on the size of the system, i.e., how great the search needs to be in the
speech decoder 216, the processing time can be reduced. For medium to large size natural language interface control systems 102 (perhaps having many controlled devices 114), the processing time is greatly reduced using an efficient implementation of the Beam Search algorithm. This beam search algorithm aims to keep the number of hypotheses at a minimum during the Viterbi search. As such, all active hypotheses are compared at each discrete time step and the Viterbi score for the best hypothesis is calculated. Pruning can then be accomplished by discarding any hypotheses whose scores fall below the maximum hypothesis score minus some pre-defined rejection threshold function. This constrains the search: hypotheses that are pruned will not be considered again in the following time steps until the scores for the corresponding model states become high enough to pass the threshold. - Another problem associated with large speech recognition systems is the amount of memory required to store the speech models. Fortunately, the number of sub word units (e.g. phonemes), used by the
NLICS 102 is typically fixed and hence, more and more speech models will reference the same sub word models as the number of lexicon entries grows. By allowing lexicon entries to reference the same model elements, e.g. sub word models, model states and/or Gaussians, the memory requirements can be kept to a minimum. The tradeoff is a slight increase in the computational resource required. When this indirect model referencing is used, speech can be represented on any level of abstraction (e.g. phrases, words, sub words). Such abstractions can be combined to form more abstract units according to a lexicon, which in turn can be referenced in grammar definitions. - Token Passing is a well-known approach to tracking the best word hypotheses through an HMM. As is known in the art, in connected word recognition systems, the last model state for the state sequence with the highest Viterbi score can be easily found once the processing of all frames of an utterance is completed. However, this does not necessarily provide the best state (or word) sequence. To find the best state sequence, it is required to perform “back tracing”. The traditional way of doing this is to let each state contain a pointer back to the previously best state for each frame. Back tracing can then be performed by following the pointers back, starting with the last model state for the state sequence with the highest Viterbi score. This means that if a system uses N states over T discrete time steps, the number of back pointers required is typically NT. This quickly becomes a high number and therefore leads to high memory requirements. Various methods have been proposed to minimize the memory requirements associated with storing such back-pointers, whereof some are based on the idea of passing “tokens” around to the various states instead of allocating memory on a per-state basis. In accordance with one embodiment of the invention, instead of storing one token pointer in each state, the
speech decoder 216 uses two arrays, S1 and S2, to hold the token pointers for each state. Array S1 keeps the token pointers for each state and the previous frame, and S2 keeps the token pointers for each state and the current frame. When each state i "looks back" to find the previously best state j, two things can happen. If the previously best state j is a member of the same acoustic model as i, the token pointer for state j in S1 is copied into position i in S2. If this is not the case, a new token is created and stored in position i in S2. The new token gets the same contents as token i in S1, and a reference to model m is added to the token history. Once all states have been processed for the current frame, the pointers to structures S1 and S2 are swapped, and the process is repeated for the following frame. Thus, this token passing technique provides a highly memory-efficient solution to an otherwise well-known problem in HMM-based speech recognition systems: the storage of back-pointers that allows for finding the best word sequence hypothesis once all speech data has been processed. - In some embodiments, a caching scheme is used for the lexicons stored in memory on the remote unit, e.g., by the N-
gram grammar module 218. As stated above, a lexicon is a dictionary consisting of words and their pronunciation entries. These pronunciations may be implemented either as phonetic spellings that refer to phonetic models, or as references to whole-word models. A given word entry may contain alternate pronunciation entries, most of which are seldom used by any single speaker. This redundancy is echoed at each part-of-speech abstraction, creating even more entries that are never utilized by a given speaker. This implies that if lexicon entries are sorted by their frequency of usage, there is a great chance that the words in an utterance can be found among the top n lexicon entries. As such, the cache is divided into different levels by frequency of use. For example, frequently used lexicon entries will be stored within the top level of the cache. A caching scheme may be devised in which the top 10% of the cache is used 90% of the time, for example. Thus, according to an embodiment, a multi-pass search is performed where the most likely entries are considered in the first pass. If the garbage score from this pass is high enough to believe that the words actually spoken were contained in the set of most likely spellings, the speech decoder 216 reports the results to the calling function. If this score is low, the system falls back to considering a wider range of spellings. If the score from the first pass is high, but not high enough to decide whether the correct spellings for the elements of the utterance were contained in the set of most likely spellings, this is also reported back to the calling function, which might prompt the user for clarification. If a lexicon spelling for a given part-of-speech is never used while some of its alternative spellings are frequently used, that spelling is put in a "trash can" and will never be considered for that user.
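The multi-pass search just described can be sketched as follows. This is a hedged illustration, not the patent's implementation: the thresholds, the `decode` callable, and the 10% top fraction are assumptions chosen for the example.

```python
# Sketch of a multi-pass cached-lexicon search: the first pass decodes
# against only the most frequently used lexicon entries; the full lexicon
# is consulted only when the first-pass (garbage) score is too low, and a
# middling score is reported back so the caller can prompt for clarification.
# decode(utterance, entries) is a hypothetical stand-in for the recognizer;
# it returns (best_text, score).

def multi_pass_search(utterance, lexicon_by_freq, decode,
                      accept=0.8, reject=0.4, top_fraction=0.10):
    top_n = max(1, int(len(lexicon_by_freq) * top_fraction))
    result, score = decode(utterance, lexicon_by_freq[:top_n])
    if score >= accept:
        return result, "accepted"        # confident: report to caller
    if score <= reject:                  # fall back to the full lexicon
        return decode(utterance, lexicon_by_freq)[0], "fallback"
    return result, "clarify"             # mid score: prompt the user

result, status = multi_pass_search("turn on the tv", ["entry"] * 50,
                                   lambda u, lex: ("turn on the tv", 0.9))
# → ("turn on the tv", "accepted")
```

Because frequently used entries sit at the front of the frequency-sorted cache, the confident common case touches only a small fraction of the lexicon.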
As such, rarely used spellings are not considered; the chance of confusing similar-sounding utterances with one of those spellings is reduced, and recognition accuracy is therefore increased. Further, the caching scheme allows the system to consider less data and hence provides a great speed improvement. - Next, the natural language
interface control module 206 will be described in detail. The natural language interface control module 206 includes the natural language interface module 222, the probabilistic context-free grammar (PCFG) module 224, the device abstraction module 226, and the feedback module 228. Generally, the natural language interface module (NLIM) 222 is responsible for interpreting the user's requests within the context of the devices 114 under control and the user's usage history, as defined by a set of probabilistic context-free grammar (PCFG) rules and device abstractions. As such, the natural language interface module 222 asserts control over the speech recognition module 204 and the microphone array 108 search. It does this by controlling the speech recognition module's 204 grammar, and therefore the lexicon under consideration. It also controls system parameters as well as the current state of its device abstractions and current language references. - As described above, the user initiates a dialog with the NLICS by speaking an attention word. The preferred method of locating the attention word is described with reference to
FIG. 3. The user then follows the attention word with an open-ended request constrained only by the capabilities of the devices coupled to the remote unit 104. The attention word alerts the natural language interface module 222 to the identity of the user so that the speech decoder can be instructed to use the proper grammar and models based upon the attention word; thus, the system can preconfigure itself to the speech patterns (e.g., the pronunciation, structure, habits, etc.) of the user. - The
speech recognition module 204 transcribes the user's request, which is in natural, conversational language. The utterance is transcribed into a set of alternative hypothesis strings ordered by probability. For example, the speech decoder 216 forwards the N best text strings to the natural language interface module 222 to be analyzed to determine the probable meaning of the utterance. - The natural
language interface module 222 then parses the incoming strings by applying a set of probabilistic context-free grammar (PCFG) rules from the PCFG module 224 to find the most likely string, given the string's probability, the user's history, and the current system context. These PCFG rules reflect the context of the user (based on the attention word) and also the context of the device to be operated (if already determined). The PCFGs are initially ordered in terms of frequency of usage as well as likelihood of use. Over time, the system tracks the habits of individual users and improves rule probability estimations to reflect this data. This data can be shared and combined with data from other systems and then redistributed via the collaborative corpus. - Furthermore, note that the NLICS includes two sets of grammars: one is the N-gram grammar of the
speech recognition module 204 and the other is the probabilistic context-free grammar module 224 of the natural language interface control module 206. Conventional systems only use one set of grammars, not a combination of N-gram grammar and PCFG rules which are inferred from data collected from man-machine dialog in the domain of personal electronic products. - Using the PCFG rules on the incoming text strings, the natural
language interface module 222 reaches one of three conclusions: (1) that it unambiguously understands and can comply with the user request, in which case it carries out the command; (2) that it unambiguously understands and cannot comply with a user request, in which case it informs the user of this conclusion; and (3) that it cannot resolve an ambiguity in the request, in which case it requests clarification from the user. - For example, in case 1, the natural
language interface module 222 interprets an incoming string with a sufficiently high confidence level as a request to "Turn on the television". As such, the appropriate command within the device abstraction module 226 is retrieved and transmitted to the controlled device 114 (i.e., the television). The device abstraction module 226 includes all of the commands to effect the proper requests of the user in the format understandable by the television itself. Typically, the command is transmitted via the device interface 210, e.g., an IR transmitter, to the television. In response, the television is powered on. The second case is one in which the user asks the NLICS to perform a task it cannot perform. For example, the user requests for the television to explode. - The feedback module (e.g., text-to-speech) 228 is instructed to play an audible message over the speaker alerting the user that the request cannot be performed. It is noted that the
feedback module 228 may simply display notices on a screen display instead of playing an audio signal over the speaker 214. - In the third case, the ambiguity is resolved according to the kind of ambiguity encountered. Thus, the natural
language interface module 222 disambiguates the ambiguous request. If the ambiguity arises due to low confidence, it asks the user to affirm its conclusion. For example, the speaker 214 plays, "Did you mean play the CD?" Alternatively, the natural language interface module 222 asks the user to repeat the request. If the ambiguity arises due to a set of choices, it presents these alternatives to the user, e.g., "Did you want to watch a movie on the VCR or the DVD?" If the ambiguity arises because of the current context, the user is made aware of this, e.g., when the user requests to play the DVD player when it is already playing. - In the first two ambiguous situations, the system adjusts the user's profile to reflect the confidence with which a decision was made, as well as the preference given a set of alternatives. In some embodiments, over time, these statistics are used to reorder the PCFG rules and entries in the relevant lexicon(s). This results in a faster, more accurate system, since the most likely entries will always be checked earlier and these more likely entries will produce a higher confidence.
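The effect of the rule-probability reordering described above can be sketched as a rescoring of the decoder's N-best list. This is an illustrative sketch only: the strings, probabilities, and the simple product-of-probabilities scoring are invented for the example, not taken from the patent.

```python
# Rescoring an N-best hypothesis list with learned PCFG rule probabilities:
# the chosen hypothesis maximizes the product of the acoustic string
# probability and the rule probability reflecting this user's habits.

def rescore(n_best, rule_prob):
    """n_best: list of (text, acoustic_prob) pairs from the decoder.
    rule_prob: mapping text -> PCFG rule probability for this user."""
    return max(n_best, key=lambda h: h[1] * rule_prob.get(h[0], 1e-6))

n_best = [("turn on the television", 0.30),
          ("turn on the telephone", 0.35)]
rules = {"turn on the television": 0.60,   # frequent for this user
         "turn on the telephone": 0.10}    # rarely requested
best, _ = rescore(n_best, rules)
# the user's history outweighs the slight acoustic edge of "telephone"
```

As the profile statistics accumulate, the rule probabilities shift toward each user's habits, so the combined score favors the interpretations that user actually makes.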
- It is noted that when the natural
language interface module 222 instructs the feedback module 228 to clarify the request, e.g., the speaker 214 plays "Did you mean to play a CD?", the natural language interface module 222 switches the context and grammar rules based on what it is expecting to receive at the microphone array 108. For example, the system will switch to a context of expecting to receive a "yes" or a "no" or any known variants thereof. When the user replies "yes", the natural language interface module 222 switches context back to the original state. - As such, when the context changes, the natural
language interface module 222 instructs the speech recognition module 204 to switch grammars, which will indirectly cause the lexicons to change, since the grammar controls which lexicons are used. - The natural language
interface control module 206 also contains the device abstraction module 226. The device abstraction module 226 stores the abstractions for each device 114. As such, the commands for each device 114 and the objects that each device 114 can manipulate are stored here. It also relates these controls to the states that the devices can be in and the actions they can perform. The content of the device abstraction module 226 depends on the different devices that are coupled to the remote unit 104. The device abstraction module 226 also includes commands for other devices in order to operate a given device. For example, if the user requests to play a DVD, then the instructions to power on the DVD player and cause the DVD to play are issued. Additionally, a command signal is sent to turn on the television, if it is not already on. - The commands stored in the
device abstraction module 226 are transmitted to the respective controlled device 114 via the device interface 210. In some embodiments, the device interface 210 is an IR or an RF interface. - The NLICS can be implemented to control any device which is controllable via such an IR link. As long as the device abstraction has stored the commands to operate the specific device, the device does not realize that it is being controlled by a natural language interface. It simply behaves as if its remote control or a universal remote control has sent the signal.
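The device abstraction behavior described above, where a high-level request expands into low-level commands for every device involved, can be sketched as a lookup table. The device names, actions, and command tables here are invented for illustration; a real abstraction would also carry the IR codes for each command.

```python
# Minimal sketch of a device abstraction: a (device, action) request expands
# into the sequence of low-level commands for every device involved, and
# power-on commands are dropped for devices that are already on.

DEVICE_COMMANDS = {
    ("dvd", "play"): [("dvd", "POWER_ON"), ("dvd", "PLAY"), ("tv", "POWER_ON")],
    ("tv", "on"):    [("tv", "POWER_ON")],
}

def expand_request(device, action, device_states):
    """device_states: mapping device -> "on"/"off" tracking current state."""
    commands = []
    for dev, cmd in DEVICE_COMMANDS[(device, action)]:
        if cmd == "POWER_ON" and device_states.get(dev) == "on":
            continue                      # device is already powered on
        commands.append((dev, cmd))
    return commands

cmds = expand_request("dvd", "play", {"tv": "on", "dvd": "off"})
# → [("dvd", "POWER_ON"), ("dvd", "PLAY")]  -- TV power-on is skipped
```

Tracking device state in the abstraction is what lets the system notice, for example, that a "play" request was issued while the device is already playing.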
- The
system processing controller 208 operates as the controller and processor for the various modules in the NLICS. Its function is well understood in the art. Furthermore, the interface 212 is coupled to the system processing controller 208. This allows for connection to the base unit 106, or alternatively, to a computer. The interface 212 may be any type of link, either wireline or wireless, as known in the art. - It is noted that various components of the system, such as the
feature extraction module 202, the speech recognition module 204, and the natural language interface control module 206 may be implemented in software or firmware, for example using an application-specific integrated circuit (ASIC) or a digital signal processor (DSP). - Referring next to
FIG. 3, a functional block diagram is shown of a base unit or base station of the natural language interface control system of FIG. 1 in accordance with a further embodiment of the invention. Shown is the base unit 106 (also referred to as the base station 106) and the remote unit 104 including the linear microphone array 108. The base unit 106 includes the planar microphone array 110, a frequency localization module 302, a time search module 304, a remote interface 306, the external network interface 308, and a secondary cache 310. The linear microphone array 108 and the planar microphone array 110 combine to form a three-dimensional microphone array 312 (also referred to as a 3D microphone array 312). Also shown is the external network 116 coupled to the external network interface 308. - In operation, the
base unit 106 is intended as a docking station for the remote unit 104 (which is similar to a universal remote control). The base unit 106 includes the external network interface 308 such that the NLICS can interface with an external network 116, such as a home LAN or the Internet, either directly or through a hosted Internet portal. As such, additional grammars, speech models, programming information, IR codes, device abstractions, etc. can be downloaded into the base unit 106, for storage in the secondary cache 310, for example. - Furthermore, the
NLICS 102 may transmit its grammars, models, and lexicons to a remote server on the external network for storage. This remote storage may become a repository of knowledge that may be retrieved by other such devices. As such, the system will never get old, since lexicons will constantly be updated with the most current pronunciations and usages. This enables a collaborative lexicon and/or a collaborative corpus to be built, since multiple natural language interface control systems will individually contribute to the external database in a remote server. - Furthermore, the
NLICS 102 may download command signals for the device abstraction module of the remote unit 104. For example, a user would like to operate an older VCR that has an IR remote control manufactured by a different maker than the NLICS. The base unit 106 simply downloads the commands that are stored for any number of devices. These commands are then stored in the device abstraction module. Also, the NLICS can submit feature vector data and labels associated with high-confidence utterances to the collaborative corpus. This data can then be incorporated with other data and used to train improved models that are subsequently redistributed. This approach can also be used to incorporate new words into the collaborative corpus by submitting the feature vector data and its label, which may subsequently be combined with other data and phonetically transcribed using the forward-backward algorithm. This entry may then be added to the lexicon and redistributed. - The
base unit 106 includes the planar microphone array 110. The planar microphone array 110 and the linear microphone array 108 of the remote unit 104 combine to form a three-dimensional array 312. Both arrays comprise conventional point-source-locating microphones. As is known in the art, a three-dimensional array is constructed by first constructing a planar array (e.g., planar microphone array 110), then adding one or two microphone elements off of the plane of the planar array. As such, the linear microphone array 108 becomes the additional one or two elements. This enables the NLICS 102 to define a three-dimensional search volume. As such, the device will only search for speech energy within the volume. Thus, the microphone arrays 108 and 110 define this search volume. - Both the
linear microphone array 108 and the planar microphone array 110 are controlled by the natural language interface module 222. A frequency localization module 302 and a time search module 304 are coupled to the 3D microphone array 312. The time search module 304 receives control signaling from the natural language interface module 222 within the remote unit 104 via the remote interface 306. The time search module 304 adds up time-aligned buffers which are provided by the microphones. Thus, the time search module 304 locates putative hits and helps to steer the 3D microphone array 312 in the direction of the hit. The functionality of the time search module 304 is well known in the art. - The
frequency localization module 302 is also under the control of the natural language interface module 222. The frequency localization module 302 implements a localization algorithm as is known in the art. The localization algorithm is used to localize speech energy within the defined volume. As such, speech energy originating from outside of the localized point within the volume will attenuate (is out of phase), while speech energy from within the localized point will sum (is in phase). Thus, the localization takes advantage of constructive interference and destructive interference in the frequency domain. In operation, the time search module is used to do a coarse search for attention words. If the speech energy passes a threshold, then a fine search is done by the localization module. If it passes the fine search, then the word is passed to the recognition and NLI modules. This coarse-to-fine search is very helpful in reducing the processing involved in the localization. For example, such localization is very computationally intense, since the localization must transform the energy into the frequency domain and back. Thus, by eliminating many putative hits in the coarse search, the processing is reduced. If the SR module identifies the putative hit as an attention word, it is passed to the natural language interface module 222 to be analyzed to determine which attention word has been uttered. Note that the context of the natural language interface module is initially that of attention words, i.e., the system is searching for attention words to activate the system. Once an attention word is found, the context of the NLICS is caused to change to a request context, such that it will be looking for requests constrained by the devices coupled to the NLICS. - The secondary cache of the
base unit 106 is used to store secondary models, grammars, and/or lexicons for use in the remote unit 104. This complements the speech recognition module, which is designed to read in (stream) speech models and grammars from a secondary storage device or secondary cache (e.g., hard disk, CDROM, DVD) at run-time. Once the data has been read in, it can immediately be used without any kind of preprocessing. This effectively ties in well with the idea of context switching. In addition to the benefits of low processing requirements and the high speech recognition accuracy that comes with the grammar context-switching feature, the memory requirements are greatly reduced, since less frequently used grammars, etc. may be stored in the secondary cache 310 and read when required without occupying memory within the remote unit 104. Further, more acoustic data can be used, which improves speech recognition accuracy, and various approaches to speaker adaptation can be efficiently implemented, as secondary storage devices can hold large amounts of base models for different dialects and accents. Furthermore, the secondary cache may also store models, grammars, etc. that are downloaded from an external network 116. - Referring next to
FIG. 4, a flowchart is shown for the steps performed in the natural language interface algorithm of the natural language interface control system of FIGS. 1 through 3. Initially, the speech recognition module 204 and the natural language interface module 222 are initialized to the context of looking for attention words (Step 402). This allows the NLICS to accept non-prompted user requests, but first the system must be told that a user request is coming. The attention word accomplishes this. As such, the grammars and the models for the hidden Markov models are used to specifically identify the presence of an attention word. Next, the remote unit receives the acoustic speech data at the microphone array (Step 404). The acoustic data is segregated into 12.8 msec frames using a 50% overlap. A 38-dimensional feature vector is derived from the acoustic data. These features consist of Mel-frequency cepstral coefficients 1-12 and the first- and second-order derivatives of MFC coefficients 0-12. Thus, feature vectors are created from the acoustic data (Step 406). This is performed at the feature extraction module 202. - Next, the
speech recognition module 204 applies acoustic hidden Markov models (HMMs) and an N-gram grammar to the incoming feature vectors (as specified by the natural language interface) to derive an in-vocabulary (IV) Viterbi (likelihood) score (Step 408). Then, the feature data is reprocessed using models of OOV events, e.g., an ergodic bank of monophone models, to derive an out-of-vocabulary (OOV) Viterbi score (Step 410). The garbage score is calculated from the IV and OOV scores, e.g., the garbage score equals [Ln(IV score)−Ln(OOV score)]/number of frames (Block 411). A low score indicates a garbage utterance. The N best transcribed text string(s) and corresponding garbage score(s) are passed to the natural language interface module 222 (Step 412). The natural language interface module 222 parses the incoming string(s) using a set of probabilistic context-free grammar (PCFG) rules as well as device context information for an attention utterance (Step 414). As described above, the natural language interface module 222 requires an attention strategy, e.g., the receipt of an attention word (i.e., Mona) that is unique to the user, or speaker identification coupled with allowable grammar rules. - Once the user has the system's attention, i.e., the natural
language interface module 222 has detected an attention word (Step 416), the natural language interface module knows the user's identity. It proceeds by configuring the system according to the user. It does this by changing the relevant system parameters and by directing the speech recognition module 204 to change grammars to those appropriate for accepting commands and requests from that user. The speech recognition module 204 changes lexicons according to the grammar rules and the individual user. Thus, the speech recognition module 204 and the natural language interface module 222 change contexts to look for user requests (Step 418). Additionally, the natural language interface module directs the microphone array of the base unit or base station to narrow its focus in order to better discriminate against environmental noise. Furthermore, if there are devices under NLICS control (TV, CD, etc.) which are playing at a high volume, the natural language interface module directs the amplifier to reduce its volume. Then, the natural language interface module 222 initiates a timer and waits for the user's request until the time-out period has expired. If the system times out, the natural language interface module 222 reconfigures the system by resetting the relevant speech recognition module rules and lexicon to search for attention words. Also, the microphone array and the amplifier volume are reset if they had been adjusted. These resetting steps are such as those performed in Step 402. - After switching to the context of looking for a user request (Step 418), Steps 404 through 414 are repeated, except that in this pass the acoustic speech represents a user request to operate one or more of the controlled devices.
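The arithmetic in Steps 404 through 411 can be checked directly: 12.8 ms frames with 50% overlap advance by 6.4 ms, the feature vector is 12 MFCCs plus first and second derivatives of coefficients 0-12 (12 + 13 + 13 = 38 dimensions), and the garbage score of Block 411 is a per-frame log-likelihood ratio. Only those numbers come from the text; the sample likelihood values below are invented.

```python
# Frame count, feature dimensionality, and garbage score as stated in the
# flowchart description (Steps 404-406 and Block 411).
import math

def num_frames(duration_ms, frame_ms=12.8, overlap=0.5):
    hop = frame_ms * (1.0 - overlap)          # 6.4 ms frame advance
    return int((duration_ms - frame_ms) / hop) + 1

FEATURE_DIM = 12 + 13 + 13                    # MFCC 1-12 + deltas 0-12

def garbage_score(iv_likelihood, oov_likelihood, frames):
    # [Ln(IV score) - Ln(OOV score)] / number of frames (Block 411)
    return (math.log(iv_likelihood) - math.log(oov_likelihood)) / frames

frames = num_frames(1000.0)                   # a one-second utterance: 155 frames
score = garbage_score(1e-40, 1e-45, frames)   # positive: IV models fit better
```

A low (near-zero or negative) garbage score means the out-of-vocabulary models explain the audio about as well as the in-vocabulary models, so the utterance is rejected as garbage.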
- If the natural
language interface module 222 detects a user request (Step 416), i.e., a user request (as determined by the PCFG grammar system and device context) is received, it draws one of three conclusions (Steps 420, 422, and 424). According to Step 420, the user request is unambiguously understood and the natural language interface module can comply with the user request. Thus, the natural language interface module 222 carries out the command by sending the appropriate signals via the device interface 210, as indicated by the device abstraction. Then, the context of the speech recognition module 204 and the natural language interface control module 206 is switched back to look for attention words (Step 426), before proceeding to Step 404. - According to
Step 422, the user request is unambiguously understood but the natural language interface module cannot comply with the user request. As such, the user is informed of this conclusion and prompted for further direction. The system then waits for further user requests or times out and proceeds to Step 426. - According to
Step 424, the ambiguity cannot be resolved for the request, in which case the natural language interface module 222 requests clarification from the user, e.g., by using the feedback module 228 and the speaker 214. The ambiguity is resolved according to the kind of ambiguity encountered. If the ambiguity arises due to low confidence, it affirms its conclusion with the user (e.g., "Did you mean play the CD player?"). If the user confirms the conclusion, the command is carried out, and the system is reset (Step 426). The system adjusts the user's profile to reflect the confidence with which a decision was made, as well as the preference given a set of alternatives. In some embodiments, over time, these statistics are used to reorder the PCFG rules and entries in the relevant lexicon(s). This results in a faster, more accurate system, since the most likely entries will always be checked earlier and these more likely entries will produce a higher confidence. - If the ambiguity arises due to a set of choices, it presents these alternatives to the user (e.g., "Did you want to watch a movie on the DVD player or the VCR?"). If the user selects from among the options provided, the natural
language interface module 222 carries out the command; otherwise the system is reset (Step 426). In either case, the user profile is updated as described above. - If the ambiguity arises because of the current context (e.g., the user requests to stop the TV and it is off), the user is made aware of this.
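The three-way conclusion logic of Steps 420 through 424 can be sketched as a simple dispatch on confidence and feasibility. This is a hedged illustration: the threshold values, the `can_comply` callable, and the handler labels are assumptions, not part of the patent.

```python
# Dispatch on the three conclusions: high confidence with a feasible request
# executes it; high confidence with an infeasible request informs the user;
# anything ambiguous asks for clarification or a repeat.

def resolve(request, confidence, can_comply, hi=0.85, lo=0.5):
    if confidence >= hi and can_comply(request):
        return ("execute", request)                        # Step 420
    if confidence >= hi:
        return ("inform", "cannot comply with: " + request)  # Step 422
    if confidence >= lo:                                   # Step 424
        return ("clarify", "Did you mean: " + request + "?")
    return ("clarify", "Please repeat the request.")

outcome = resolve("play the CD", 0.6, lambda r: True)
# → ("clarify", "Did you mean: play the CD?")
```

Whatever the outcome, the confidence value that drove the decision is what feeds back into the user's profile for the reordering described above.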
- While the invention herein disclosed has been described by means of specific embodiments and applications thereof, numerous modifications and variations could be made thereto by those skilled in the art without departing from the scope of the invention set forth in the claims.
Claims (15)
1. A method of speech recognition in a network comprising:
receiving a communication in a network, the communication including an attention word;
searching for an attention word based on a first context including a first set of models, grammars, and lexica; and
switching, upon finding the attention word, to a second context to search for an open-ended user request, wherein the second context includes a second set of models, grammars, and lexicons and wherein the open-ended user request does not follow a predetermined format.
2. The method of claim 1 wherein searching for an attention word comprises applying speech recognition to the open-ended user request.
3. The method of claim 2 wherein applying the speech recognition comprises applying hidden Markov models to the open-ended user request.
4. The method of claim 1 wherein the attention word is received via a microphone array.
5. The method of claim 1 further comprising subsequently searching for a device identifier subsequent to the attention word.
6. The method of claim 1 wherein the network is selected from a group consisting of a home network, an office network, and a personal network.
7. A speech recognition device used in a network comprising:
an interface for receiving received speech including an attention word; and
a controller coupled to the interface, the controller being configured and arranged to search for the attention word based on a first context including a first set of models, grammars, and lexica, the controller being further configured and arranged to switch, upon finding the attention word, to a second context to search for an open-ended user request, wherein the second context includes a second set of models, grammars, and lexicons and wherein the open-ended user request does not follow a predetermined format.
8. The device of claim 7 wherein the controller is configured and arranged to search for an attention word by applying speech recognition to the open-ended user request.
9. The device of claim 8 wherein the speech recognition includes the application of hidden Markov models to the open-ended user request.
10. The device of claim 8 wherein the speech recognition includes application of N gram grammars to the open-ended request.
11. The device of claim 7 wherein the interface comprises a three-dimensional microphone array.
12. The device of claim 11 wherein the three-dimensional microphone array comprises a linear microphone array.
13. The device of claim 11 wherein the three-dimensional microphone array comprises a planar microphone array.
14. The device of claim 7 wherein the controller is further configured and arranged to subsequently search for a device identifier subsequent to receiving the attention word.
15. The device of claim 7 wherein the interface is configured and arranged to receive wireless commands.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/932,771 US20080059188A1 (en) | 1999-10-19 | 2007-10-31 | Natural Language Interface Control System |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16028199P | 1999-10-19 | 1999-10-19 | |
US09/692,846 US7447635B1 (en) | 1999-10-19 | 2000-10-19 | Natural language interface control system |
US11/932,771 US20080059188A1 (en) | 1999-10-19 | 2007-10-31 | Natural Language Interface Control System |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/692,846 Continuation US7447635B1 (en) | 1999-10-19 | 2000-10-19 | Natural language interface control system |
Publications (1)
Publication Number | Publication Date |
---|---|
US20080059188A1 true US20080059188A1 (en) | 2008-03-06 |
Family
ID=22576252
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/692,846 Expired - Fee Related US7447635B1 (en) | 1999-10-19 | 2000-10-19 | Natural language interface control system |
US11/932,771 Abandoned US20080059188A1 (en) | 1999-10-19 | 2007-10-31 | Natural Language Interface Control System |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/692,846 Expired - Fee Related US7447635B1 (en) | 1999-10-19 | 2000-10-19 | Natural language interface control system |
Country Status (7)
Country | Link |
---|---|
US (2) | US7447635B1 (en) |
EP (1) | EP1222655A1 (en) |
JP (2) | JP5118280B2 (en) |
KR (1) | KR100812109B1 (en) |
AU (1) | AU8030300A (en) |
CA (2) | CA2387079C (en) |
WO (1) | WO2001029823A1 (en) |
Cited By (69)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010056307A1 (en) * | 2000-05-03 | 2001-12-27 | Wolfgang Theimer | Method for controlling a system, especially an electrical and/or electronic system comprising at least one application device |
US20030212761A1 (en) * | 2002-05-10 | 2003-11-13 | Microsoft Corporation | Process kernel |
US20050125486A1 (en) * | 2003-11-20 | 2005-06-09 | Microsoft Corporation | Decentralized operating system |
US20060206337A1 (en) * | 2005-03-08 | 2006-09-14 | Microsoft Corporation | Online learning for dialog systems |
US20060206332A1 (en) * | 2005-03-08 | 2006-09-14 | Microsoft Corporation | Easy generation and automatic training of spoken dialog systems using text-to-speech |
US20060206333A1 (en) * | 2005-03-08 | 2006-09-14 | Microsoft Corporation | Speaker-dependent dialog adaptation |
US20070038446A1 (en) * | 2005-08-09 | 2007-02-15 | Delta Electronics, Inc. | System and method for selecting audio contents by using speech recognition |
US20070219797A1 (en) * | 2006-03-16 | 2007-09-20 | Microsoft Corporation | Subword unit posterior probability for measuring confidence |
US20080092200A1 (en) * | 2006-10-13 | 2008-04-17 | Jeff Grady | Interface systems for portable digital media storage and playback devices |
US20080133239A1 (en) * | 2006-12-05 | 2008-06-05 | Jeon Hyung Bae | Method and apparatus for recognizing continuous speech using search space restriction based on phoneme recognition |
WO2010019831A1 (en) * | 2008-08-14 | 2010-02-18 | 21Ct, Inc. | Hidden markov model for speech processing with training method |
US7707131B2 (en) | 2005-03-08 | 2010-04-27 | Microsoft Corporation | Thompson strategy based online reinforcement learning system for action selection |
US20100121640A1 (en) * | 2008-10-31 | 2010-05-13 | Sony Computer Entertainment Inc. | Method and system for modeling a common-language speech recognition, by a computer, under the influence of a plurality of dialects |
US20110301955A1 (en) * | 2010-06-07 | 2011-12-08 | Google Inc. | Predicting and Learning Carrier Phrases for Speech Input |
US20120109649A1 (en) * | 2010-11-01 | 2012-05-03 | General Motors Llc | Speech dialect classification for automatic speech recognition |
US20120246081A1 (en) * | 2011-03-25 | 2012-09-27 | Next It Corporation | Systems and Methods for Automated Itinerary Modification |
US20120259639A1 (en) * | 2011-04-07 | 2012-10-11 | Sony Corporation | Controlling audio video display device (avdd) tuning using channel name |
US20130151250A1 (en) * | 2011-12-08 | 2013-06-13 | Lenovo (Singapore) Pte. Ltd | Hybrid speech recognition |
US20130218565A1 (en) * | 2008-07-28 | 2013-08-22 | Nuance Communications, Inc. | Enhanced Media Playback with Speech Recognition |
US20130290900A1 (en) * | 2008-10-30 | 2013-10-31 | Centurylink Intellectual Property Llc | System and Method for Voice Activated Provisioning of Telecommunication Services |
US20140012586A1 (en) * | 2012-07-03 | 2014-01-09 | Google Inc. | Determining hotword suitability |
US20140067373A1 (en) * | 2012-09-03 | 2014-03-06 | Nice-Systems Ltd | Method and apparatus for enhanced phonetic indexing and search |
US20140207470A1 (en) * | 2013-01-22 | 2014-07-24 | Samsung Electronics Co., Ltd. | Electronic apparatus and voice processing method thereof |
CN104049707A (en) * | 2013-03-15 | 2014-09-17 | 马克西姆综合产品公司 | Always-on Low-power Keyword Spotting |
US20140278394A1 (en) * | 2013-03-12 | 2014-09-18 | Motorola Mobility Llc | Apparatus and Method for Beamforming to Obtain Voice and Noise Signals |
US20140278427A1 (en) * | 2013-03-13 | 2014-09-18 | Samsung Electronics Co., Ltd. | Dynamic dialog system agent integration |
US20140304205A1 (en) * | 2013-04-04 | 2014-10-09 | Spansion Llc | Combining of results from multiple decoders |
US20150051913A1 (en) * | 2012-03-16 | 2015-02-19 | Lg Electronics Inc. | Unlock method using natural language processing and terminal for performing same |
US20150106405A1 (en) * | 2013-10-16 | 2015-04-16 | Spansion Llc | Hidden markov model processing engine |
WO2015084659A1 (en) * | 2013-12-02 | 2015-06-11 | Rawles Llc | Natural language control of secondary device |
US20150317980A1 (en) * | 2014-05-05 | 2015-11-05 | Sensory, Incorporated | Energy post qualification for phrase spotting |
US9390284B1 (en) | 2015-04-03 | 2016-07-12 | Ray Wang | Method for secure and private computer file |
EP3043348A4 (en) * | 2013-09-03 | 2016-07-13 | Panasonic Ip Corp America | Voice interaction control method |
US9406078B2 (en) | 2007-02-06 | 2016-08-02 | Voicebox Technologies Corporation | System and method for delivering targeted advertisements and/or providing natural language processing based on advertisements |
WO2016028628A3 (en) * | 2014-08-19 | 2016-08-18 | Nuance Communications, Inc. | System and method for speech validation |
US9466295B2 (en) | 2012-12-31 | 2016-10-11 | Via Technologies, Inc. | Method for correcting a speech response and natural language dialogue system |
US9520130B2 (en) * | 2014-07-25 | 2016-12-13 | Google Inc. | Providing pre-computed hotword models |
US9570070B2 (en) | 2009-02-20 | 2017-02-14 | Voicebox Technologies Corporation | System and method for processing multi-modal device interactions in a natural language voice services environment |
US9591508B2 (en) | 2012-12-20 | 2017-03-07 | Google Technology Holdings LLC | Methods and apparatus for transmitting data between different peer-to-peer communication groups |
US9620113B2 (en) | 2007-12-11 | 2017-04-11 | Voicebox Technologies Corporation | System and method for providing a natural language voice user interface |
US9626703B2 (en) | 2014-09-16 | 2017-04-18 | Voicebox Technologies Corporation | Voice commerce |
US20170186425A1 (en) * | 2015-12-23 | 2017-06-29 | Rovi Guides, Inc. | Systems and methods for conversations with devices about media using interruptions and changes of subjects |
US9705736B2 (en) | 2014-03-14 | 2017-07-11 | Ray Wang | Method and system for a personal network |
US9711143B2 (en) | 2008-05-27 | 2017-07-18 | Voicebox Technologies Corporation | System and method for an integrated, multi-modal, multi-device natural language voice services environment |
US9736230B2 (en) | 2010-11-23 | 2017-08-15 | Centurylink Intellectual Property Llc | User control over content delivery |
US20170236511A1 (en) * | 2016-02-17 | 2017-08-17 | GM Global Technology Operations LLC | Automatic speech recognition for disfluent speech |
US9747896B2 (en) | 2014-10-15 | 2017-08-29 | Voicebox Technologies Corporation | System and method for providing follow-up responses to prior natural language inputs of a user |
US9779723B2 (en) * | 2012-06-22 | 2017-10-03 | Visteon Global Technologies, Inc. | Multi-pass vehicle voice recognition systems and methods |
US9813262B2 (en) | 2012-12-03 | 2017-11-07 | Google Technology Holdings LLC | Method and apparatus for selectively transmitting data using spatial diversity |
US9898459B2 (en) | 2014-09-16 | 2018-02-20 | Voicebox Technologies Corporation | Integration of domain information into state transitions of a finite state transducer for natural language processing |
US20180089176A1 (en) * | 2016-09-26 | 2018-03-29 | Samsung Electronics Co., Ltd. | Method of translating speech signal and electronic device employing the same |
US9979531B2 (en) | 2013-01-03 | 2018-05-22 | Google Technology Holdings LLC | Method and apparatus for tuning a communication device for multi band operation |
US10297249B2 (en) | 2006-10-16 | 2019-05-21 | Vb Assets, Llc | System and method for a cooperative conversational voice user interface |
EP3493049A1 (en) * | 2017-12-04 | 2019-06-05 | Sharp Kabushiki Kaisha | External control device, speech interactive control system, control method, and control program |
US10331784B2 (en) | 2016-07-29 | 2019-06-25 | Voicebox Technologies Corporation | System and method of disambiguating natural language processing requests |
US10403265B2 (en) * | 2014-12-24 | 2019-09-03 | Mitsubishi Electric Corporation | Voice recognition apparatus and voice recognition method |
US20190294678A1 (en) * | 2018-03-23 | 2019-09-26 | Servicenow, Inc. | Systems and method for vocabulary management in a natural learning framework |
US10431214B2 (en) | 2014-11-26 | 2019-10-01 | Voicebox Technologies Corporation | System and method of determining a domain and/or an action related to a natural language input |
US10438591B1 (en) * | 2012-10-30 | 2019-10-08 | Google Llc | Hotword-based speaker recognition |
US10575120B2 (en) | 2016-02-27 | 2020-02-25 | Ray Wang | Method of autonomous social media system |
US10607606B2 (en) | 2017-06-19 | 2020-03-31 | Lenovo (Singapore) Pte. Ltd. | Systems and methods for execution of digital assistant |
US10614799B2 (en) | 2014-11-26 | 2020-04-07 | Voicebox Technologies Corporation | System and method of providing intent predictions for an utterance prior to a system detection of an end of the utterance |
US20200152186A1 (en) * | 2018-11-13 | 2020-05-14 | Motorola Solutions, Inc. | Methods and systems for providing a corrected voice command |
US10957310B1 (en) | 2012-07-23 | 2021-03-23 | Soundhound, Inc. | Integrated programming framework for speech and text understanding with meaning parsing |
US11295730B1 (en) | 2014-02-27 | 2022-04-05 | Soundhound, Inc. | Using phonetic variants in a local context to improve natural language understanding |
US11308939B1 (en) * | 2018-09-25 | 2022-04-19 | Amazon Technologies, Inc. | Wakeword detection using multi-word model |
US11437041B1 (en) * | 2018-03-23 | 2022-09-06 | Amazon Technologies, Inc. | Speech interface device with caching component |
US11488580B2 (en) * | 2019-04-03 | 2022-11-01 | Hyundai Motor Company | Dialogue system and dialogue processing method |
US20230089285A1 (en) * | 2020-02-11 | 2023-03-23 | Amazon Technologies, Inc. | Natural language understanding |
Families Citing this family (274)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU6630800A (en) | 1999-08-13 | 2001-03-13 | Pixo, Inc. | Methods and apparatuses for display and traversing of links in page character array |
AU8030300A (en) * | 1999-10-19 | 2001-04-30 | Sony Electronics Inc. | Natural language interface control system |
US8645137B2 (en) | 2000-03-16 | 2014-02-04 | Apple Inc. | Fast, language-independent method for user authentication by voice |
US6741963B1 (en) * | 2000-06-21 | 2004-05-25 | International Business Machines Corporation | Method of managing a speech cache |
US7324947B2 (en) | 2001-10-03 | 2008-01-29 | Promptu Systems Corporation | Global speech user interface |
ITFI20010199A1 (en) | 2001-10-22 | 2003-04-22 | Riccardo Vieri | SYSTEM AND METHOD TO TRANSFORM TEXTUAL COMMUNICATIONS INTO VOICE AND SEND THEM WITH AN INTERNET CONNECTION TO ANY TELEPHONE SYSTEM |
US7398209B2 (en) * | 2002-06-03 | 2008-07-08 | Voicebox Technologies, Inc. | Systems and methods for responding to natural language speech utterance |
US7693720B2 (en) * | 2002-07-15 | 2010-04-06 | Voicebox Technologies, Inc. | Mobile systems and methods for responding to natural language speech utterance |
DE10252457A1 (en) * | 2002-11-12 | 2004-05-27 | Harman Becker Automotive Systems Gmbh | Voice input system for controlling functions by voice has voice interface with microphone array, arrangement for wireless transmission of signals generated by microphones to stationary central unit |
KR101032176B1 (en) * | 2002-12-02 | 2011-05-02 | 소니 주식회사 | Dialogue control device and method, and robot device |
US7711560B2 (en) * | 2003-02-19 | 2010-05-04 | Panasonic Corporation | Speech recognition device and speech recognition method |
US7669134B1 (en) | 2003-05-02 | 2010-02-23 | Apple Inc. | Method and apparatus for displaying information during an instant messaging session |
US20090164215A1 (en) * | 2004-02-09 | 2009-06-25 | Delta Electronics, Inc. | Device with voice-assisted system |
EP1723636A1 (en) * | 2004-03-12 | 2006-11-22 | Siemens Aktiengesellschaft | User and vocabulary-adaptive determination of confidence and rejecting thresholds |
US7813917B2 (en) * | 2004-06-22 | 2010-10-12 | Gary Stephen Shuster | Candidate matching using algorithmic analysis of candidate-authored narrative information |
US8335688B2 (en) * | 2004-08-20 | 2012-12-18 | Multimodal Technologies, Llc | Document transcription system training |
US8412521B2 (en) * | 2004-08-20 | 2013-04-02 | Multimodal Technologies, Llc | Discriminative training of document transcription system |
US7925506B2 (en) * | 2004-10-05 | 2011-04-12 | Inago Corporation | Speech recognition accuracy via concept to keyword mapping |
US8064663B2 (en) * | 2004-12-02 | 2011-11-22 | Lieven Van Hoe | Image evaluation system, methods and database |
EP1693829B1 (en) * | 2005-02-21 | 2018-12-05 | Harman Becker Automotive Systems GmbH | Voice-controlled data system |
US7583808B2 (en) * | 2005-03-28 | 2009-09-01 | Mitsubishi Electric Research Laboratories, Inc. | Locating and tracking acoustic sources with microphone arrays |
GB2426368A (en) * | 2005-05-21 | 2006-11-22 | Ibm | Using input signal quality in speech recognition |
US7640160B2 (en) | 2005-08-05 | 2009-12-29 | Voicebox Technologies, Inc. | Systems and methods for responding to natural language speech utterance |
US7620549B2 (en) | 2005-08-10 | 2009-11-17 | Voicebox Technologies, Inc. | System and method of supporting adaptive misrecognition in conversational speech |
US7949529B2 (en) | 2005-08-29 | 2011-05-24 | Voicebox Technologies, Inc. | Mobile systems and methods of supporting natural language human-machine interactions |
WO2007027989A2 (en) * | 2005-08-31 | 2007-03-08 | Voicebox Technologies, Inc. | Dynamic speech sharpening |
US8677377B2 (en) | 2005-09-08 | 2014-03-18 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US7633076B2 (en) | 2005-09-30 | 2009-12-15 | Apple Inc. | Automated response to and sensing of user activity in portable devices |
US7328199B2 (en) | 2005-10-07 | 2008-02-05 | Microsoft Corporation | Componentized slot-filling architecture |
US7697827B2 (en) | 2005-10-17 | 2010-04-13 | Konicek Jeffrey C | User-friendlier interfaces for a camera |
US7606700B2 (en) | 2005-11-09 | 2009-10-20 | Microsoft Corporation | Adaptive task framework |
US7822699B2 (en) * | 2005-11-30 | 2010-10-26 | Microsoft Corporation | Adaptive semantic reasoning engine |
US20070124147A1 (en) * | 2005-11-30 | 2007-05-31 | International Business Machines Corporation | Methods and apparatus for use in speech recognition systems for identifying unknown words and for adding previously unknown words to vocabularies and grammars of speech recognition systems |
US8442828B2 (en) * | 2005-12-02 | 2013-05-14 | Microsoft Corporation | Conditional model for natural language understanding |
US7831585B2 (en) | 2005-12-05 | 2010-11-09 | Microsoft Corporation | Employment of task framework for advertising |
US7933914B2 (en) | 2005-12-05 | 2011-04-26 | Microsoft Corporation | Automatic task creation and execution using browser helper objects |
KR100764247B1 (en) * | 2005-12-28 | 2007-10-08 | 고려대학교 산학협력단 | Apparatus and Method for speech recognition with two-step search |
US7996783B2 (en) | 2006-03-02 | 2011-08-09 | Microsoft Corporation | Widget searching utilizing task framework |
KR100845428B1 (en) * | 2006-08-25 | 2008-07-10 | 한국전자통신연구원 | Speech recognition system of mobile terminal |
US9318108B2 (en) | 2010-01-18 | 2016-04-19 | Apple Inc. | Intelligent automated assistant |
US8612225B2 (en) * | 2007-02-28 | 2013-12-17 | Nec Corporation | Voice recognition device, voice recognition method, and voice recognition program |
US7813929B2 (en) * | 2007-03-30 | 2010-10-12 | Nuance Communications, Inc. | Automatic editing using probabilistic word substitution models |
US8977255B2 (en) | 2007-04-03 | 2015-03-10 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
ITFI20070177A1 (en) | 2007-07-26 | 2009-01-27 | Riccardo Vieri | SYSTEM FOR THE CREATION AND SETTING OF AN ADVERTISING CAMPAIGN DERIVING FROM THE INSERTION OF ADVERTISING MESSAGES WITHIN AN EXCHANGE OF MESSAGES AND METHOD FOR ITS FUNCTIONING. |
JP5238205B2 (en) * | 2007-09-07 | 2013-07-17 | ニュアンス コミュニケーションズ,インコーポレイテッド | Speech synthesis system, program and method |
JP4455633B2 (en) * | 2007-09-10 | 2010-04-21 | 株式会社東芝 | Basic frequency pattern generation apparatus, basic frequency pattern generation method and program |
US9053089B2 (en) | 2007-10-02 | 2015-06-09 | Apple Inc. | Part-of-speech tagging using latent analogy |
US8165886B1 (en) | 2007-10-04 | 2012-04-24 | Great Northern Research LLC | Speech interface system and method for control and interaction with applications on a computing system |
US8595642B1 (en) | 2007-10-04 | 2013-11-26 | Great Northern Research, LLC | Multiple shell multi faceted graphical user interface |
US8364694B2 (en) | 2007-10-26 | 2013-01-29 | Apple Inc. | Search assistant for digital media assets |
US8359204B2 (en) * | 2007-10-26 | 2013-01-22 | Honda Motor Co., Ltd. | Free-speech command classification for car navigation system |
US8010369B2 (en) * | 2007-10-30 | 2011-08-30 | At&T Intellectual Property I, L.P. | System and method for controlling devices that are connected to a network |
US8620662B2 (en) | 2007-11-20 | 2013-12-31 | Apple Inc. | Context-aware unit selection |
JP5327054B2 (en) * | 2007-12-18 | 2013-10-30 | 日本電気株式会社 | Pronunciation variation rule extraction device, pronunciation variation rule extraction method, and pronunciation variation rule extraction program |
US10002189B2 (en) | 2007-12-20 | 2018-06-19 | Apple Inc. | Method and apparatus for searching using an active ontology |
US9330720B2 (en) | 2008-01-03 | 2016-05-03 | Apple Inc. | Methods and apparatus for altering audio output signals |
US8327272B2 (en) | 2008-01-06 | 2012-12-04 | Apple Inc. | Portable multifunction device, method, and graphical user interface for viewing and managing electronic calendars |
US8065143B2 (en) | 2008-02-22 | 2011-11-22 | Apple Inc. | Providing text input using speech data and non-speech data |
US8289283B2 (en) | 2008-03-04 | 2012-10-16 | Apple Inc. | Language input interface on a device |
US8996376B2 (en) | 2008-04-05 | 2015-03-31 | Apple Inc. | Intelligent text-to-speech conversion |
US10496753B2 (en) | 2010-01-18 | 2019-12-03 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US8589161B2 (en) | 2008-05-27 | 2013-11-19 | Voicebox Technologies, Inc. | System and method for an integrated, multi-modal, multi-device natural language voice services environment |
US8464150B2 (en) | 2008-06-07 | 2013-06-11 | Apple Inc. | Automatic language identification for dynamic text processing |
KR20100007625A (en) * | 2008-07-14 | 2010-01-22 | 엘지전자 주식회사 | Mobile terminal and method for displaying menu thereof |
US20100030549A1 (en) | 2008-07-31 | 2010-02-04 | Lee Michael M | Mobile device having human language translation capability with positional feedback |
US8768702B2 (en) | 2008-09-05 | 2014-07-01 | Apple Inc. | Multi-tiered voice feedback in an electronic device |
US8898568B2 (en) | 2008-09-09 | 2014-11-25 | Apple Inc. | Audio user interface |
DE102008046431A1 (en) * | 2008-09-09 | 2010-03-11 | Deutsche Telekom Ag | Speech dialogue system with reject avoidance method |
US8583418B2 (en) | 2008-09-29 | 2013-11-12 | Apple Inc. | Systems and methods of detecting language and natural language strings for text to speech synthesis |
US8712776B2 (en) | 2008-09-29 | 2014-04-29 | Apple Inc. | Systems and methods for selective text to speech synthesis |
US8355919B2 (en) | 2008-09-29 | 2013-01-15 | Apple Inc. | Systems and methods for text normalization for text to speech synthesis |
US8396714B2 (en) | 2008-09-29 | 2013-03-12 | Apple Inc. | Systems and methods for concatenation of words in text to speech synthesis |
US8352268B2 (en) | 2008-09-29 | 2013-01-08 | Apple Inc. | Systems and methods for selective rate of speech and speech preferences for text to speech synthesis |
US8352272B2 (en) | 2008-09-29 | 2013-01-08 | Apple Inc. | Systems and methods for text to speech synthesis |
US8676904B2 (en) | 2008-10-02 | 2014-03-18 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US9959870B2 (en) | 2008-12-11 | 2018-05-01 | Apple Inc. | Speech recognition involving a mobile device |
US8335324B2 (en) * | 2008-12-24 | 2012-12-18 | Fortemedia, Inc. | Method and apparatus for automatic volume adjustment |
US8862252B2 (en) | 2009-01-30 | 2014-10-14 | Apple Inc. | Audio user interface for displayless electronic device |
US8380507B2 (en) | 2009-03-09 | 2013-02-19 | Apple Inc. | Systems and methods for determining the language to use for speech generated by a text to speech engine |
JP2010224194A (en) * | 2009-03-23 | 2010-10-07 | Sony Corp | Speech recognition device and speech recognition method, language model generating device and language model generating method, and computer program |
US10540976B2 (en) | 2009-06-05 | 2020-01-21 | Apple Inc. | Contextual voice commands |
US10241644B2 (en) | 2011-06-03 | 2019-03-26 | Apple Inc. | Actionable reminder entries |
US10255566B2 (en) | 2011-06-03 | 2019-04-09 | Apple Inc. | Generating and processing task items that represent tasks to perform |
US10241752B2 (en) | 2011-09-30 | 2019-03-26 | Apple Inc. | Interface for a virtual digital assistant |
US9858925B2 (en) | 2009-06-05 | 2018-01-02 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US9431006B2 (en) | 2009-07-02 | 2016-08-30 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
US9502025B2 (en) | 2009-11-10 | 2016-11-22 | Voicebox Technologies Corporation | System and method for providing a natural language content dedication service |
US9171541B2 (en) | 2009-11-10 | 2015-10-27 | Voicebox Technologies Corporation | System and method for hybrid processing in a natural language voice services environment |
US8682649B2 (en) | 2009-11-12 | 2014-03-25 | Apple Inc. | Sentiment prediction from textual data |
US8600743B2 (en) | 2010-01-06 | 2013-12-03 | Apple Inc. | Noise profile determination for voice-related feature |
US8381107B2 (en) | 2010-01-13 | 2013-02-19 | Apple Inc. | Adaptive audio feedback system and method |
US8311838B2 (en) | 2010-01-13 | 2012-11-13 | Apple Inc. | Devices and methods for identifying a prompt corresponding to a voice input in a sequence of prompts |
US10553209B2 (en) | 2010-01-18 | 2020-02-04 | Apple Inc. | Systems and methods for hands-free notification summaries |
US10679605B2 (en) | 2010-01-18 | 2020-06-09 | Apple Inc. | Hands-free list-reading by intelligent automated assistant |
US10705794B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10276170B2 (en) | 2010-01-18 | 2019-04-30 | Apple Inc. | Intelligent automated assistant |
US8676581B2 (en) * | 2010-01-22 | 2014-03-18 | Microsoft Corporation | Speech recognition analysis via identification information |
US8977584B2 (en) | 2010-01-25 | 2015-03-10 | Newvaluexchange Global Ai Llp | Apparatuses, methods and systems for a digital conversation management platform |
US8682667B2 (en) | 2010-02-25 | 2014-03-25 | Apple Inc. | User profiling for selecting user specific voice input processing information |
US8639516B2 (en) | 2010-06-04 | 2014-01-28 | Apple Inc. | User-specific noise suppression for voice quality improvements |
US8713021B2 (en) | 2010-07-07 | 2014-04-29 | Apple Inc. | Unsupervised document clustering using latent semantic density analysis |
US9104670B2 (en) | 2010-07-21 | 2015-08-11 | Apple Inc. | Customized search or acquisition of digital media assets |
US8719006B2 (en) | 2010-08-27 | 2014-05-06 | Apple Inc. | Combined statistical and rule-based part-of-speech tagging for text-to-speech synthesis |
US8719014B2 (en) | 2010-09-27 | 2014-05-06 | Apple Inc. | Electronic device with text error correction based on voice recognition data |
US10515147B2 (en) | 2010-12-22 | 2019-12-24 | Apple Inc. | Using statistical language models for contextual lookup |
US10762293B2 (en) | 2010-12-22 | 2020-09-01 | Apple Inc. | Using parts-of-speech tagging and named entity recognition for spelling correction |
US8781836B2 (en) | 2011-02-22 | 2014-07-15 | Apple Inc. | Hearing assistance system for providing consistent human speech |
US9262612B2 (en) | 2011-03-21 | 2016-02-16 | Apple Inc. | Device access using voice authentication |
US10057736B2 (en) | 2011-06-03 | 2018-08-21 | Apple Inc. | Active transport based notifications |
US20120310642A1 (en) | 2011-06-03 | 2012-12-06 | Apple Inc. | Automatically creating a mapping between text data and audio data |
US8812294B2 (en) | 2011-06-21 | 2014-08-19 | Apple Inc. | Translating phrases from one language into another using an order-based set of declarative rules |
US8959014B2 (en) | 2011-06-30 | 2015-02-17 | Google Inc. | Training acoustic models using distributed computing techniques |
US9367526B1 (en) * | 2011-07-26 | 2016-06-14 | Nuance Communications, Inc. | Word classing for language modeling |
US8706472B2 (en) | 2011-08-11 | 2014-04-22 | Apple Inc. | Method for disambiguating multiple readings in language conversion |
US8994660B2 (en) | 2011-08-29 | 2015-03-31 | Apple Inc. | Text correction processing |
US8762156B2 (en) | 2011-09-28 | 2014-06-24 | Apple Inc. | Speech recognition repair using contextual information |
US10134385B2 (en) | 2012-03-02 | 2018-11-20 | Apple Inc. | Systems and methods for name pronunciation |
US9483461B2 (en) | 2012-03-06 | 2016-11-01 | Apple Inc. | Handling speech synthesis of content for multiple languages |
CA2775700C (en) | 2012-05-04 | 2013-07-23 | Microsoft Corporation | Determining a future portion of a currently presented media program |
US9280610B2 (en) | 2012-05-14 | 2016-03-08 | Apple Inc. | Crowd sourcing information to fulfill user requests |
US8775442B2 (en) | 2012-05-15 | 2014-07-08 | Apple Inc. | Semantic search using a single-source semantic model |
US10417037B2 (en) | 2012-05-15 | 2019-09-17 | Apple Inc. | Systems and methods for integrating third party services with a digital assistant |
US9721563B2 (en) | 2012-06-08 | 2017-08-01 | Apple Inc. | Name recognition system |
US10019994B2 (en) | 2012-06-08 | 2018-07-10 | Apple Inc. | Systems and methods for recognizing textual identifiers within a plurality of words |
US8521523B1 (en) * | 2012-06-20 | 2013-08-27 | Google Inc. | Selecting speech data for speech recognition vocabulary |
WO2014000275A1 (en) * | 2012-06-29 | 2014-01-03 | Harman International (Shanghai) Management Co., Ltd. | Vehicle universal control device for interfacing sensors and controllers |
US9495129B2 (en) | 2012-06-29 | 2016-11-15 | Apple Inc. | Device, method, and user interface for voice-activated navigation and browsing of a document |
US9576574B2 (en) | 2012-09-10 | 2017-02-21 | Apple Inc. | Context-sensitive handling of interruptions by intelligent digital assistant |
US9547647B2 (en) | 2012-09-19 | 2017-01-17 | Apple Inc. | Voice-based media searching |
US8935167B2 (en) | 2012-09-25 | 2015-01-13 | Apple Inc. | Exemplar-based latent perceptual modeling for automatic speech recognition |
US9336771B2 (en) * | 2012-11-01 | 2016-05-10 | Google Inc. | Speech recognition using non-parametric models |
JP5887253B2 (en) * | 2012-11-16 | 2016-03-16 | 本田技研工業株式会社 | Message processing device |
CN103915095B (en) | 2013-01-06 | 2017-05-31 | 华为技术有限公司 | The method of speech recognition, interactive device, server and system |
EP2954514B1 (en) | 2013-02-07 | 2021-03-31 | Apple Inc. | Voice trigger for a digital assistant |
US10652394B2 (en) | 2013-03-14 | 2020-05-12 | Apple Inc. | System and method for processing voicemail |
US10572476B2 (en) | 2013-03-14 | 2020-02-25 | Apple Inc. | Refining a search based on schedule items |
US9368114B2 (en) | 2013-03-14 | 2016-06-14 | Apple Inc. | Context-sensitive handling of interruptions |
US9977779B2 (en) | 2013-03-14 | 2018-05-22 | Apple Inc. | Automatic supplementation of word correction dictionaries |
US10642574B2 (en) | 2013-03-14 | 2020-05-05 | Apple Inc. | Device, method, and graphical user interface for outputting captions |
US9733821B2 (en) | 2013-03-14 | 2017-08-15 | Apple Inc. | Voice control to diagnose inadvertent activation of accessibility features |
WO2014144579A1 (en) | 2013-03-15 | 2014-09-18 | Apple Inc. | System and method for updating an adaptive speech recognition model |
US10078487B2 (en) | 2013-03-15 | 2018-09-18 | Apple Inc. | Context-sensitive handling of interruptions |
AU2014233517B2 (en) | 2013-03-15 | 2017-05-25 | Apple Inc. | Training an at least partial voice command system |
US11151899B2 (en) | 2013-03-15 | 2021-10-19 | Apple Inc. | User training by intelligent digital assistant |
US10748529B1 (en) | 2013-03-15 | 2020-08-18 | Apple Inc. | Voice activated device for use with a voice-based digital assistant |
US9736088B1 (en) | 2013-05-01 | 2017-08-15 | PongPro LLC | Structured communication framework |
US9472205B2 (en) * | 2013-05-06 | 2016-10-18 | Honeywell International Inc. | Device voice recognition systems and methods |
US9390708B1 (en) * | 2013-05-28 | 2016-07-12 | Amazon Technologies, Inc. | Low latency and memory efficient keyword spotting |
US9953630B1 (en) * | 2013-05-31 | 2018-04-24 | Amazon Technologies, Inc. | Language recognition for device settings |
WO2014197334A2 (en) | 2013-06-07 | 2014-12-11 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9582608B2 (en) | 2013-06-07 | 2017-02-28 | Apple Inc. | Unified ranking with entropy-weighted information for phrase-based semantic auto-completion |
WO2014197336A1 (en) | 2013-06-07 | 2014-12-11 | Apple Inc. | System and method for detecting errors in interactions with a voice-based digital assistant |
WO2014197335A1 (en) | 2013-06-08 | 2014-12-11 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
WO2014200728A1 (en) | 2013-06-09 | 2014-12-18 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
AU2014278595B2 (en) | 2013-06-13 | 2017-04-06 | Apple Inc. | System and method for emergency calls initiated by voice command |
KR101749009B1 (en) | 2013-08-06 | 2017-06-19 | 애플 인크. | Auto-activating smart responses based on activities from remote devices |
US10296160B2 (en) | 2013-12-06 | 2019-05-21 | Apple Inc. | Method for extracting salient dialog usage from live data |
US9443516B2 (en) * | 2014-01-09 | 2016-09-13 | Honeywell International Inc. | Far-field speech recognition systems and methods |
US9620105B2 (en) | 2014-05-15 | 2017-04-11 | Apple Inc. | Analyzing audio input for efficient speech and music recognition |
US10592095B2 (en) | 2014-05-23 | 2020-03-17 | Apple Inc. | Instantaneous speaking of content on touch devices |
US9502031B2 (en) | 2014-05-27 | 2016-11-22 | Apple Inc. | Method for supporting dynamic grammars in WFST-based ASR |
US10078631B2 (en) | 2014-05-30 | 2018-09-18 | Apple Inc. | Entropy-guided text prediction using combined word and character n-gram language models |
US9842101B2 (en) | 2014-05-30 | 2017-12-12 | Apple Inc. | Predictive conversion of language input |
US9734193B2 (en) | 2014-05-30 | 2017-08-15 | Apple Inc. | Determining domain salience ranking from ambiguous words in natural speech |
US9633004B2 (en) | 2014-05-30 | 2017-04-25 | Apple Inc. | Better resolution when referencing to concepts |
US10170123B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Intelligent assistant for home automation |
US9785630B2 (en) | 2014-05-30 | 2017-10-10 | Apple Inc. | Text prediction using combined word N-gram and unigram language models |
US9430463B2 (en) | 2014-05-30 | 2016-08-30 | Apple Inc. | Exemplar-based natural language processing |
US9966065B2 (en) | 2014-05-30 | 2018-05-08 | Apple Inc. | Multi-command single utterance input method |
US10289433B2 (en) | 2014-05-30 | 2019-05-14 | Apple Inc. | Domain specific language for encoding assistant dialog |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US9760559B2 (en) | 2014-05-30 | 2017-09-12 | Apple Inc. | Predictive text input |
US9858922B2 (en) | 2014-06-23 | 2018-01-02 | Google Inc. | Caching speech recognition scores |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US10659851B2 (en) | 2014-06-30 | 2020-05-19 | Apple Inc. | Real-time digital assistant knowledge updates |
US10446141B2 (en) | 2014-08-28 | 2019-10-15 | Apple Inc. | Automatic speech recognition based on user feedback |
US9818400B2 (en) | 2014-09-11 | 2017-11-14 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US10789041B2 (en) | 2014-09-12 | 2020-09-29 | Apple Inc. | Dynamic thresholds for always listening speech trigger |
CN105490890A (en) * | 2014-09-16 | 2016-04-13 | ZTE Corporation | Intelligent household terminal and control method therefor |
US9668121B2 (en) | 2014-09-30 | 2017-05-30 | Apple Inc. | Social reminders |
US10127911B2 (en) | 2014-09-30 | 2018-11-13 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US9886432B2 (en) | 2014-09-30 | 2018-02-06 | Apple Inc. | Parsimonious handling of word inflection via categorical stem + suffix N-gram language models |
US10074360B2 (en) | 2014-09-30 | 2018-09-11 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US9646609B2 (en) | 2014-09-30 | 2017-05-09 | Apple Inc. | Caching apparatus for serving phonetic pronunciations |
US9299347B1 (en) | 2014-10-22 | 2016-03-29 | Google Inc. | Speech recognition using associative mapping |
US10552013B2 (en) | 2014-12-02 | 2020-02-04 | Apple Inc. | Data detection |
US9711141B2 (en) | 2014-12-09 | 2017-07-18 | Apple Inc. | Disambiguating heteronyms in speech synthesis |
US9865280B2 (en) | 2015-03-06 | 2018-01-09 | Apple Inc. | Structured dictation using intelligent automated assistants |
US10152299B2 (en) | 2015-03-06 | 2018-12-11 | Apple Inc. | Reducing response latency of intelligent automated assistants |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US9899019B2 (en) | 2015-03-18 | 2018-02-20 | Apple Inc. | Systems and methods for structured stem and suffix language models |
US9842105B2 (en) | 2015-04-16 | 2017-12-12 | Apple Inc. | Parsimonious continuous-space phrase representations for natural language processing |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US10127220B2 (en) | 2015-06-04 | 2018-11-13 | Apple Inc. | Language identification from short strings |
US9578173B2 (en) | 2015-06-05 | 2017-02-21 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US10101822B2 (en) | 2015-06-05 | 2018-10-16 | Apple Inc. | Language input correction |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US10186254B2 (en) | 2015-06-07 | 2019-01-22 | Apple Inc. | Context-based endpoint detection |
US10255907B2 (en) | 2015-06-07 | 2019-04-09 | Apple Inc. | Automatic accent detection using acoustic models |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US9697820B2 (en) | 2015-09-24 | 2017-07-04 | Apple Inc. | Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple, Inc. | Intelligent automated assistant for media exploration |
DK179309B1 (en) | 2016-06-09 | 2018-04-23 | Apple Inc | Intelligent automated assistant in a home environment |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10586535B2 (en) | 2016-06-10 | 2020-03-10 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
DK201670540A1 (en) | 2016-06-11 | 2018-01-08 | Apple Inc | Application integration with a digital assistant |
DK179343B1 (en) | 2016-06-11 | 2018-05-14 | Apple Inc | Intelligent task discovery |
DK179415B1 (en) | 2016-06-11 | 2018-06-14 | Apple Inc | Intelligent device arbitration and control |
DK179049B1 (en) | 2016-06-11 | 2017-09-18 | Apple Inc | Data driven natural language event detection and classification |
US10474753B2 (en) | 2016-09-07 | 2019-11-12 | Apple Inc. | Language identification using recurrent neural networks |
US10043516B2 (en) | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
US11281993B2 (en) | 2016-12-05 | 2022-03-22 | Apple Inc. | Model and ensemble compression for metric learning |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
US11204787B2 (en) | 2017-01-09 | 2021-12-21 | Apple Inc. | Application integration with a digital assistant |
WO2018140420A1 (en) | 2017-01-24 | 2018-08-02 | Honeywell International, Inc. | Voice control of an integrated room automation system |
US11402909B2 (en) | 2017-04-26 | 2022-08-02 | Cognixion | Brain computer interface for augmented reality |
US11237635B2 (en) | 2017-04-26 | 2022-02-01 | Cognixion | Nonverbal multi-input and feedback devices for user intended computer control and communication of text, graphics and audio |
US10417266B2 (en) | 2017-05-09 | 2019-09-17 | Apple Inc. | Context-aware ranking of intelligent response suggestions |
DK201770383A1 (en) | 2017-05-09 | 2018-12-14 | Apple Inc. | User interface for correcting recognition errors |
DK201770439A1 (en) | 2017-05-11 | 2018-12-13 | Apple Inc. | Offline personal assistant |
US10395654B2 (en) | 2017-05-11 | 2019-08-27 | Apple Inc. | Text normalization based on a data-driven learning network |
US10726832B2 (en) | 2017-05-11 | 2020-07-28 | Apple Inc. | Maintaining privacy of personal information |
DK179745B1 (en) | 2017-05-12 | 2019-05-01 | Apple Inc. | SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT |
US11301477B2 (en) | 2017-05-12 | 2022-04-12 | Apple Inc. | Feedback analysis of a digital assistant |
DK179496B1 (en) | 2017-05-12 | 2019-01-15 | Apple Inc. | USER-SPECIFIC Acoustic Models |
DK201770428A1 (en) | 2017-05-12 | 2019-02-18 | Apple Inc. | Low-latency intelligent automated assistant |
DK201770431A1 (en) | 2017-05-15 | 2018-12-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
DK201770432A1 (en) | 2017-05-15 | 2018-12-21 | Apple Inc. | Hierarchical belief states for digital assistants |
DK179560B1 (en) | 2017-05-16 | 2019-02-18 | Apple Inc. | Far-field extension for digital assistant services |
US10303715B2 (en) | 2017-05-16 | 2019-05-28 | Apple Inc. | Intelligent automated assistant for media exploration |
US10403278B2 (en) | 2017-05-16 | 2019-09-03 | Apple Inc. | Methods and systems for phonetic matching in digital assistant services |
US10311144B2 (en) | 2017-05-16 | 2019-06-04 | Apple Inc. | Emoji word sense disambiguation |
US10657328B2 (en) | 2017-06-02 | 2020-05-19 | Apple Inc. | Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling |
US10984329B2 (en) | 2017-06-14 | 2021-04-20 | Ademco Inc. | Voice activated virtual assistant with a fused response |
US10445429B2 (en) | 2017-09-21 | 2019-10-15 | Apple Inc. | Natural language understanding using vocabularies with compressed serialized tries |
US10755051B2 (en) | 2017-09-29 | 2020-08-25 | Apple Inc. | Rule-based natural language processing |
US10636424B2 (en) | 2017-11-30 | 2020-04-28 | Apple Inc. | Multi-turn canned dialog |
US10733982B2 (en) | 2018-01-08 | 2020-08-04 | Apple Inc. | Multi-directional dialog |
US10733375B2 (en) | 2018-01-31 | 2020-08-04 | Apple Inc. | Knowledge-based framework for improving natural language understanding |
US10789959B2 (en) | 2018-03-02 | 2020-09-29 | Apple Inc. | Training speaker recognition models for digital assistants |
US10592604B2 (en) | 2018-03-12 | 2020-03-17 | Apple Inc. | Inverse text normalization for automatic speech recognition |
US10818288B2 (en) | 2018-03-26 | 2020-10-27 | Apple Inc. | Natural assistant interaction |
US10909331B2 (en) | 2018-03-30 | 2021-02-02 | Apple Inc. | Implicit identification of translation payload with neural machine translation |
CN108538291A (en) * | 2018-04-11 | 2018-09-14 | Baidu Online Network Technology (Beijing) Co., Ltd. | Sound control method, terminal device, cloud server and system |
US20190332848A1 (en) | 2018-04-27 | 2019-10-31 | Honeywell International Inc. | Facial enrollment and recognition system |
US10928918B2 (en) | 2018-05-07 | 2021-02-23 | Apple Inc. | Raise to speak |
US11145294B2 (en) | 2018-05-07 | 2021-10-12 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US10984780B2 (en) | 2018-05-21 | 2021-04-20 | Apple Inc. | Global semantic word embeddings using bi-directional recurrent neural networks |
DK179822B1 (en) | 2018-06-01 | 2019-07-12 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
DK201870355A1 (en) | 2018-06-01 | 2019-12-16 | Apple Inc. | Virtual assistant operation in multi-device environments |
US10892996B2 (en) | 2018-06-01 | 2021-01-12 | Apple Inc. | Variable latency device coordination |
US11386266B2 (en) | 2018-06-01 | 2022-07-12 | Apple Inc. | Text correction |
DK180639B1 (en) | 2018-06-01 | 2021-11-04 | Apple Inc | DISABILITY OF ATTENTION-ATTENTIVE VIRTUAL ASSISTANT |
US10496705B1 (en) | 2018-06-03 | 2019-12-03 | Apple Inc. | Accelerated task performance |
US20190390866A1 (en) | 2018-06-22 | 2019-12-26 | Honeywell International Inc. | Building management system with natural language interface |
US10540960B1 (en) * | 2018-09-05 | 2020-01-21 | International Business Machines Corporation | Intelligent command filtering using cones of authentication in an internet of things (IoT) computing environment |
US11934403B2 (en) * | 2020-05-18 | 2024-03-19 | Salesforce, Inc. | Generating training data for natural language search systems |
Citations (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4827520A (en) * | 1987-01-16 | 1989-05-02 | Prince Corporation | Voice actuated control system for use in a vehicle |
US5208864A (en) * | 1989-03-10 | 1993-05-04 | Nippon Telegraph & Telephone Corporation | Method of detecting acoustic signal |
US5513298A (en) * | 1992-09-21 | 1996-04-30 | International Business Machines Corporation | Instantaneous context switching for speech recognition systems |
US5615296A (en) * | 1993-11-12 | 1997-03-25 | International Business Machines Corporation | Continuous speech recognition and voice response system and method to enable conversational dialogues with microprocessors |
US5748841A (en) * | 1994-02-25 | 1998-05-05 | Morin; Philippe | Supervised contextual language acquisition system |
US5748974A (en) * | 1994-12-13 | 1998-05-05 | International Business Machines Corporation | Multimodal natural language interface for cross-application tasks |
US5797123A (en) * | 1996-10-01 | 1998-08-18 | Lucent Technologies Inc. | Method of key-phrase detection and verification for flexible speech understanding |
US5855002A (en) * | 1996-06-11 | 1998-12-29 | Pegasus Micro-Technologies, Inc. | Artificially intelligent natural language computational interface system for interfacing a human to a data processor having human-like responses |
US5878394A (en) * | 1994-04-21 | 1999-03-02 | Info Byte Ag | Process and device for the speech-controlled remote control of electrical consumers |
US6035267A (en) * | 1996-09-26 | 2000-03-07 | Mitsubishi Denki Kabushiki Kaisha | Interactive processing apparatus having natural language interfacing capability, utilizing goal frames, and judging action feasibility |
US6052666A (en) * | 1995-11-06 | 2000-04-18 | Thomson Multimedia S.A. | Vocal identification of devices in a home environment |
US6085160A (en) * | 1998-07-10 | 2000-07-04 | Lernout & Hauspie Speech Products N.V. | Language independent speech recognition |
US6112174A (en) * | 1996-11-13 | 2000-08-29 | Hitachi, Ltd. | Recognition dictionary system structure and changeover method of speech recognition system for car navigation |
US6188985B1 (en) * | 1997-01-06 | 2001-02-13 | Texas Instruments Incorporated | Wireless voice-activated device for control of a processor-based host system |
US6208972B1 (en) * | 1998-12-23 | 2001-03-27 | Richard Grant | Method for integrating computer processes with an interface controlled by voice actuated grammars |
US6298324B1 (en) * | 1998-01-05 | 2001-10-02 | Microsoft Corporation | Speech recognition system with changing grammars and grammar help command |
US6324512B1 (en) * | 1999-08-26 | 2001-11-27 | Matsushita Electric Industrial Co., Ltd. | System and method for allowing family members to access TV contents and program media recorder over telephone or internet |
US6408272B1 (en) * | 1999-04-12 | 2002-06-18 | General Magic, Inc. | Distributed voice user interface |
US20020099543A1 (en) * | 1998-08-28 | 2002-07-25 | Ossama Eman | Segmentation technique increasing the active vocabulary of speech recognizers |
US6442522B1 (en) * | 1999-10-12 | 2002-08-27 | International Business Machines Corporation | Bi-directional natural language system for interfacing with multiple back-end applications |
US6499013B1 (en) * | 1998-09-09 | 2002-12-24 | One Voice Technologies, Inc. | Interactive user interface using speech recognition and natural language processing |
US6553345B1 (en) * | 1999-08-26 | 2003-04-22 | Matsushita Electric Industrial Co., Ltd. | Universal remote control allowing natural language modality for television and multimedia searches and requests |
US6584439B1 (en) * | 1999-05-21 | 2003-06-24 | Winbond Electronics Corporation | Method and apparatus for controlling voice controlled devices |
US7016827B1 (en) * | 1999-09-03 | 2006-03-21 | International Business Machines Corporation | Method and system for ensuring robustness in natural language understanding |
US7447635B1 (en) * | 1999-10-19 | 2008-11-04 | Sony Corporation | Natural language interface control system |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS61285495A (en) * | 1985-06-12 | 1986-12-16 | Hitachi, Ltd. | Voice recognition system |
JP2913105B2 (en) * | 1989-03-10 | 1999-06-28 | Nippon Telegraph and Telephone Corp. | Sound signal detection method |
JPH04338817A (en) * | 1991-05-16 | 1992-11-26 | Sony Corp | Electronic equipment controller |
JPH06274190A (en) * | 1993-03-18 | 1994-09-30 | Sony Corp | Navigation system and speech recognition device |
JPH0844387A (en) * | 1994-08-04 | 1996-02-16 | Aqueous Res:Kk | Voice recognizing device |
JP2975540B2 (en) * | 1994-10-20 | 1999-11-10 | ATR Interpreting Telecommunications Research Laboratories | Free speech recognition device |
JP2929959B2 (en) * | 1995-02-17 | 1999-08-03 | 日本電気株式会社 | Voice input network service system |
JPH0926799A (en) * | 1995-07-12 | 1997-01-28 | Aqueous Res:Kk | Speech recognition device |
KR100198019B1 (en) * | 1996-11-20 | 1999-06-15 | 정선종 | Remote speech input and its processing method using microphone array |
JP3546633B2 (en) * | 1997-03-12 | 2004-07-28 | Mitsubishi Electric Corp. | Voice recognition device |
JPH10293709A (en) * | 1997-04-18 | 1998-11-04 | Casio Comput Co Ltd | Information processor and storage medium |
JP3027557B2 (en) * | 1997-09-03 | 2000-04-04 | ATR Interpreting Telecommunications Research Laboratories | Voice recognition method and apparatus, and recording medium storing voice recognition processing program |
JP4201870B2 (en) * | 1998-02-24 | 2008-12-24 | Clarion Co., Ltd. | System using control by speech recognition and control method by speech recognition |
US6418431B1 (en) | 1998-03-30 | 2002-07-09 | Microsoft Corporation | Information retrieval and speech recognition based on language models |
JPH11288296A (en) * | 1998-04-06 | 1999-10-19 | Denso Corp | Information processor |
US6895379B2 (en) * | 2002-03-27 | 2005-05-17 | Sony Corporation | Method of and apparatus for configuring and controlling home entertainment systems through natural language and spoken commands using a natural language server |
KR100740978B1 (en) * | 2004-12-08 | 2007-07-19 | 한국전자통신연구원 | System and method for processing natural language request |
JP4627475B2 (en) * | 2005-09-30 | 2011-02-09 | Honda Motor Co., Ltd. | Control device arrangement structure for electric power steering unit |
- 2000
  - 2000-10-19 AU AU80303/00A patent/AU8030300A/en not_active Abandoned
  - 2000-10-19 EP EP00971002A patent/EP1222655A1/en not_active Withdrawn
  - 2000-10-19 CA CA2387079A patent/CA2387079C/en not_active Expired - Lifetime
  - 2000-10-19 JP JP2001532534A patent/JP5118280B2/en not_active Expired - Lifetime
  - 2000-10-19 US US09/692,846 patent/US7447635B1/en not_active Expired - Fee Related
  - 2000-10-19 CA CA2748396A patent/CA2748396A1/en not_active Abandoned
  - 2000-10-19 KR KR1020027005028A patent/KR100812109B1/en active IP Right Grant
  - 2000-10-19 WO PCT/US2000/029036 patent/WO2001029823A1/en not_active Application Discontinuation
- 2007
  - 2007-10-31 US US11/932,771 patent/US20080059188A1/en not_active Abandoned
- 2011
  - 2011-06-20 JP JP2011135965A patent/JP2011237811A/en active Pending
Cited By (148)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010056307A1 (en) * | 2000-05-03 | 2001-12-27 | Wolfgang Theimer | Method for controlling a system, especially an electrical and/or electronic system comprising at least one application device |
US9772739B2 (en) * | 2000-05-03 | 2017-09-26 | Nokia Technologies Oy | Method for controlling a system, especially an electrical and/or electronic system comprising at least one application device |
US20030212761A1 (en) * | 2002-05-10 | 2003-11-13 | Microsoft Corporation | Process kernel |
US20050125486A1 (en) * | 2003-11-20 | 2005-06-09 | Microsoft Corporation | Decentralized operating system |
US20060206337A1 (en) * | 2005-03-08 | 2006-09-14 | Microsoft Corporation | Online learning for dialog systems |
US20060206333A1 (en) * | 2005-03-08 | 2006-09-14 | Microsoft Corporation | Speaker-dependent dialog adaptation |
US20060206332A1 (en) * | 2005-03-08 | 2006-09-14 | Microsoft Corporation | Easy generation and automatic training of spoken dialog systems using text-to-speech |
US7707131B2 (en) | 2005-03-08 | 2010-04-27 | Microsoft Corporation | Thompson strategy based online reinforcement learning system for action selection |
US7885817B2 (en) * | 2005-03-08 | 2011-02-08 | Microsoft Corporation | Easy generation and automatic training of spoken dialog systems using text-to-speech |
US20070038446A1 (en) * | 2005-08-09 | 2007-02-15 | Delta Electronics, Inc. | System and method for selecting audio contents by using speech recognition |
US8706489B2 (en) * | 2005-08-09 | 2014-04-22 | Delta Electronics Inc. | System and method for selecting audio contents by using speech recognition |
US20070219797A1 (en) * | 2006-03-16 | 2007-09-20 | Microsoft Corporation | Subword unit posterior probability for measuring confidence |
US7890325B2 (en) * | 2006-03-16 | 2011-02-15 | Microsoft Corporation | Subword unit posterior probability for measuring confidence |
US20080092200A1 (en) * | 2006-10-13 | 2008-04-17 | Jeff Grady | Interface systems for portable digital media storage and playback devices |
US10037781B2 (en) * | 2006-10-13 | 2018-07-31 | Koninklijke Philips N.V. | Interface systems for portable digital media storage and playback devices |
US10515628B2 (en) | 2006-10-16 | 2019-12-24 | Vb Assets, Llc | System and method for a cooperative conversational voice user interface |
US10510341B1 (en) | 2006-10-16 | 2019-12-17 | Vb Assets, Llc | System and method for a cooperative conversational voice user interface |
US10755699B2 (en) | 2006-10-16 | 2020-08-25 | Vb Assets, Llc | System and method for a cooperative conversational voice user interface |
US10297249B2 (en) | 2006-10-16 | 2019-05-21 | Vb Assets, Llc | System and method for a cooperative conversational voice user interface |
US11222626B2 (en) | 2006-10-16 | 2022-01-11 | Vb Assets, Llc | System and method for a cooperative conversational voice user interface |
US20080133239A1 (en) * | 2006-12-05 | 2008-06-05 | Jeon Hyung Bae | Method and apparatus for recognizing continuous speech using search space restriction based on phoneme recognition |
US8032374B2 (en) * | 2006-12-05 | 2011-10-04 | Electronics And Telecommunications Research Institute | Method and apparatus for recognizing continuous speech using search space restriction based on phoneme recognition |
US11080758B2 (en) | 2007-02-06 | 2021-08-03 | Vb Assets, Llc | System and method for delivering targeted advertisements and/or providing natural language processing based on advertisements |
US10134060B2 (en) | 2007-02-06 | 2018-11-20 | Vb Assets, Llc | System and method for delivering targeted advertisements and/or providing natural language processing based on advertisements |
US9406078B2 (en) | 2007-02-06 | 2016-08-02 | Voicebox Technologies Corporation | System and method for delivering targeted advertisements and/or providing natural language processing based on advertisements |
US9620113B2 (en) | 2007-12-11 | 2017-04-11 | Voicebox Technologies Corporation | System and method for providing a natural language voice user interface |
US10347248B2 (en) | 2007-12-11 | 2019-07-09 | Voicebox Technologies Corporation | System and method for providing in-vehicle services via a natural language voice user interface |
US10089984B2 (en) | 2008-05-27 | 2018-10-02 | Vb Assets, Llc | System and method for an integrated, multi-modal, multi-device natural language voice services environment |
US9711143B2 (en) | 2008-05-27 | 2017-07-18 | Voicebox Technologies Corporation | System and method for an integrated, multi-modal, multi-device natural language voice services environment |
US10553216B2 (en) | 2008-05-27 | 2020-02-04 | Oracle International Corporation | System and method for an integrated, multi-modal, multi-device natural language voice services environment |
US20130218565A1 (en) * | 2008-07-28 | 2013-08-22 | Nuance Communications, Inc. | Enhanced Media Playback with Speech Recognition |
US20110208521A1 (en) * | 2008-08-14 | 2011-08-25 | 21Ct, Inc. | Hidden Markov Model for Speech Processing with Training Method |
US9020816B2 (en) * | 2008-08-14 | 2015-04-28 | 21Ct, Inc. | Hidden Markov model for speech processing with training method |
WO2010019831A1 (en) * | 2008-08-14 | 2010-02-18 | 21Ct, Inc. | Hidden markov model for speech processing with training method |
US20130290900A1 (en) * | 2008-10-30 | 2013-10-31 | Centurylink Intellectual Property Llc | System and Method for Voice Activated Provisioning of Telecommunication Services |
US10936151B2 (en) * | 2008-10-30 | 2021-03-02 | Centurylink Intellectual Property Llc | System and method for voice activated provisioning of telecommunication services |
US8712773B2 (en) * | 2008-10-31 | 2014-04-29 | Sony Computer Entertainment Inc. | Method and system for modeling a common-language speech recognition, by a computer, under the influence of a plurality of dialects |
US20100121640A1 (en) * | 2008-10-31 | 2010-05-13 | Sony Computer Entertainment Inc. | Method and system for modeling a common-language speech recognition, by a computer, under the influence of a plurality of dialects |
US9953649B2 (en) | 2009-02-20 | 2018-04-24 | Voicebox Technologies Corporation | System and method for processing multi-modal device interactions in a natural language voice services environment |
US9570070B2 (en) | 2009-02-20 | 2017-02-14 | Voicebox Technologies Corporation | System and method for processing multi-modal device interactions in a natural language voice services environment |
US10553213B2 (en) | 2009-02-20 | 2020-02-04 | Oracle International Corporation | System and method for processing multi-modal device interactions in a natural language voice services environment |
US20110301955A1 (en) * | 2010-06-07 | 2011-12-08 | Google Inc. | Predicting and Learning Carrier Phrases for Speech Input |
US10297252B2 (en) | 2010-06-07 | 2019-05-21 | Google Llc | Predicting and learning carrier phrases for speech input |
US11423888B2 (en) | 2010-06-07 | 2022-08-23 | Google Llc | Predicting and learning carrier phrases for speech input |
US9412360B2 (en) * | 2010-06-07 | 2016-08-09 | Google Inc. | Predicting and learning carrier phrases for speech input |
US8738377B2 (en) * | 2010-06-07 | 2014-05-27 | Google Inc. | Predicting and learning carrier phrases for speech input |
US20140229185A1 (en) * | 2010-06-07 | 2014-08-14 | Google Inc. | Predicting and learning carrier phrases for speech input |
US20120109649A1 (en) * | 2010-11-01 | 2012-05-03 | General Motors Llc | Speech dialect classification for automatic speech recognition |
US9736230B2 (en) | 2010-11-23 | 2017-08-15 | Centurylink Intellectual Property Llc | User control over content delivery |
US10320614B2 (en) | 2010-11-23 | 2019-06-11 | Centurylink Intellectual Property Llc | User control over content delivery |
US20120246081A1 (en) * | 2011-03-25 | 2012-09-27 | Next It Corporation | Systems and Methods for Automated Itinerary Modification |
US20120259639A1 (en) * | 2011-04-07 | 2012-10-11 | Sony Corporation | Controlling audio video display device (avdd) tuning using channel name |
US8972267B2 (en) * | 2011-04-07 | 2015-03-03 | Sony Corporation | Controlling audio video display device (AVDD) tuning using channel name |
US20130151250A1 (en) * | 2011-12-08 | 2013-06-13 | Lenovo (Singapore) Pte. Ltd | Hybrid speech recognition |
US9620122B2 (en) * | 2011-12-08 | 2017-04-11 | Lenovo (Singapore) Pte. Ltd | Hybrid speech recognition |
US20150051913A1 (en) * | 2012-03-16 | 2015-02-19 | Lg Electronics Inc. | Unlock method using natural language processing and terminal for performing same |
US9779723B2 (en) * | 2012-06-22 | 2017-10-03 | Visteon Global Technologies, Inc. | Multi-pass vehicle voice recognition systems and methods |
KR20150037986A (en) * | 2012-07-03 | 2015-04-08 | 구글 인코포레이티드 | Determining hotword suitability |
US10002613B2 (en) | 2012-07-03 | 2018-06-19 | Google Llc | Determining hotword suitability |
CN104584119A (en) * | 2012-07-03 | 2015-04-29 | 谷歌公司 | Determining hotword suitability |
US11741970B2 (en) | 2012-07-03 | 2023-08-29 | Google Llc | Determining hotword suitability |
KR102072730B1 (en) * | 2012-07-03 | 2020-02-03 | Google LLC | Determining hotword suitability |
US20140012586A1 (en) * | 2012-07-03 | 2014-01-09 | Google Inc. | Determining hotword suitability |
US11227611B2 (en) | 2012-07-03 | 2022-01-18 | Google Llc | Determining hotword suitability |
US10714096B2 (en) | 2012-07-03 | 2020-07-14 | Google Llc | Determining hotword suitability |
US9536528B2 (en) * | 2012-07-03 | 2017-01-03 | Google Inc. | Determining hotword suitability |
US11776533B2 (en) | 2012-07-23 | 2023-10-03 | Soundhound, Inc. | Building a natural language understanding application using a received electronic record containing programming code including an interpret-block, an interpret-statement, a pattern expression and an action statement |
US10996931B1 (en) | 2012-07-23 | 2021-05-04 | Soundhound, Inc. | Integrated programming framework for speech and text understanding with block and statement structure |
US10957310B1 (en) | 2012-07-23 | 2021-03-23 | Soundhound, Inc. | Integrated programming framework for speech and text understanding with meaning parsing |
US9311914B2 (en) * | 2012-09-03 | 2016-04-12 | Nice-Systems Ltd | Method and apparatus for enhanced phonetic indexing and search |
US20140067373A1 (en) * | 2012-09-03 | 2014-03-06 | Nice-Systems Ltd | Method and apparatus for enhanced phonetic indexing and search |
US10438591B1 (en) * | 2012-10-30 | 2019-10-08 | Google Llc | Hotword-based speaker recognition |
US11557301B2 (en) | 2012-10-30 | 2023-01-17 | Google Llc | Hotword-based speaker recognition |
US10020963B2 (en) | 2012-12-03 | 2018-07-10 | Google Technology Holdings LLC | Method and apparatus for selectively transmitting data using spatial diversity |
US9813262B2 (en) | 2012-12-03 | 2017-11-07 | Google Technology Holdings LLC | Method and apparatus for selectively transmitting data using spatial diversity |
US9591508B2 (en) | 2012-12-20 | 2017-03-07 | Google Technology Holdings LLC | Methods and apparatus for transmitting data between different peer-to-peer communication groups |
US9466295B2 (en) | 2012-12-31 | 2016-10-11 | Via Technologies, Inc. | Method for correcting a speech response and natural language dialogue system |
TWI594139B (en) * | 2012-12-31 | 2017-08-01 | Via Technologies, Inc. | Method for correcting speech response and natural language dialog system |
US9979531B2 (en) | 2013-01-03 | 2018-05-22 | Google Technology Holdings LLC | Method and apparatus for tuning a communication device for multi band operation |
US20140207470A1 (en) * | 2013-01-22 | 2014-07-24 | Samsung Electronics Co., Ltd. | Electronic apparatus and voice processing method thereof |
US9830911B2 (en) * | 2013-01-22 | 2017-11-28 | Samsung Electronics Co., Ltd. | Electronic apparatus and voice processing method thereof |
US20140278394A1 (en) * | 2013-03-12 | 2014-09-18 | Motorola Mobility Llc | Apparatus and Method for Beamforming to Obtain Voice and Noise Signals |
US10229697B2 (en) * | 2013-03-12 | 2019-03-12 | Google Technology Holdings LLC | Apparatus and method for beamforming to obtain voice and noise signals |
US20140278427A1 (en) * | 2013-03-13 | 2014-09-18 | Samsung Electronics Co., Ltd. | Dynamic dialog system agent integration |
CN104049707A (en) * | 2013-03-15 | 2014-09-17 | Maxim Integrated Products, Inc. | Always-on Low-power Keyword Spotting |
US20140304205A1 (en) * | 2013-04-04 | 2014-10-09 | Spansion Llc | Combining of results from multiple decoders |
US9530103B2 (en) * | 2013-04-04 | 2016-12-27 | Cypress Semiconductor Corporation | Combining of results from multiple decoders |
US9472193B2 (en) | 2013-09-03 | 2016-10-18 | Panasonic Intellectual Property Corporation Of America | Speech dialogue control method |
EP3261087A1 (en) * | 2013-09-03 | 2017-12-27 | Panasonic Intellectual Property Corporation of America | Voice interaction control method |
EP3043348A4 (en) * | 2013-09-03 | 2016-07-13 | Panasonic Ip Corp America | Voice interaction control method |
US9817881B2 (en) * | 2013-10-16 | 2017-11-14 | Cypress Semiconductor Corporation | Hidden markov model processing engine |
US20150106405A1 (en) * | 2013-10-16 | 2015-04-16 | Spansion Llc | Hidden markov model processing engine |
WO2015084659A1 (en) * | 2013-12-02 | 2015-06-11 | Rawles Llc | Natural language control of secondary device |
CN106062734A (en) * | 2013-12-02 | 2016-10-26 | Amazon Technologies, Inc. | Natural language control of secondary device |
US9698999B2 (en) | 2013-12-02 | 2017-07-04 | Amazon Technologies, Inc. | Natural language control of secondary device |
US11295730B1 (en) | 2014-02-27 | 2022-04-05 | Soundhound, Inc. | Using phonetic variants in a local context to improve natural language understanding |
US9705736B2 (en) | 2014-03-14 | 2017-07-11 | Ray Wang | Method and system for a personal network |
US9548065B2 (en) * | 2014-05-05 | 2017-01-17 | Sensory, Incorporated | Energy post qualification for phrase spotting |
US20150317980A1 (en) * | 2014-05-05 | 2015-11-05 | Sensory, Incorporated | Energy post qualification for phrase spotting |
US10621987B2 (en) | 2014-07-25 | 2020-04-14 | Google Llc | Providing pre-computed hotword models |
US9646612B2 (en) | 2014-07-25 | 2017-05-09 | Google Inc. | Providing pre-computed hotword models |
US10186268B2 (en) | 2014-07-25 | 2019-01-22 | Google Llc | Providing pre-computed hotword models |
US9520130B2 (en) * | 2014-07-25 | 2016-12-13 | Google Inc. | Providing pre-computed hotword models |
US11682396B2 (en) | 2014-07-25 | 2023-06-20 | Google Llc | Providing pre-computed hotword models |
US9911419B2 (en) | 2014-07-25 | 2018-03-06 | Google Llc | Providing pre-computed hotword models |
US11062709B2 (en) | 2014-07-25 | 2021-07-13 | Google Llc | Providing pre-computed hotword models |
US10446153B2 (en) | 2014-07-25 | 2019-10-15 | Google Llc | Providing pre-computed hotword models |
US10497373B1 (en) | 2014-07-25 | 2019-12-03 | Google Llc | Providing pre-computed hotword models |
WO2016028628A3 (en) * | 2014-08-19 | 2016-08-18 | Nuance Communications, Inc. | System and method for speech validation |
CN106796784A (en) * | 2014-08-19 | 2017-05-31 | Nuance Communications, Inc. | System and method for speech validation |
US9626703B2 (en) | 2014-09-16 | 2017-04-18 | Voicebox Technologies Corporation | Voice commerce |
US10216725B2 (en) | 2014-09-16 | 2019-02-26 | Voicebox Technologies Corporation | Integration of domain information into state transitions of a finite state transducer for natural language processing |
US9898459B2 (en) | 2014-09-16 | 2018-02-20 | Voicebox Technologies Corporation | Integration of domain information into state transitions of a finite state transducer for natural language processing |
US10430863B2 (en) | 2014-09-16 | 2019-10-01 | Vb Assets, Llc | Voice commerce |
US11087385B2 (en) | 2014-09-16 | 2021-08-10 | Vb Assets, Llc | Voice commerce |
US9747896B2 (en) | 2014-10-15 | 2017-08-29 | Voicebox Technologies Corporation | System and method for providing follow-up responses to prior natural language inputs of a user |
US10229673B2 (en) | 2014-10-15 | 2019-03-12 | Voicebox Technologies Corporation | System and method for providing follow-up responses to prior natural language inputs of a user |
US10431214B2 (en) | 2014-11-26 | 2019-10-01 | Voicebox Technologies Corporation | System and method of determining a domain and/or an action related to a natural language input |
US10614799B2 (en) | 2014-11-26 | 2020-04-07 | Voicebox Technologies Corporation | System and method of providing intent predictions for an utterance prior to a system detection of an end of the utterance |
US10403265B2 (en) * | 2014-12-24 | 2019-09-03 | Mitsubishi Electric Corporation | Voice recognition apparatus and voice recognition method |
US9390284B1 (en) | 2015-04-03 | 2016-07-12 | Ray Wang | Method for secure and private computer file |
US10311862B2 (en) * | 2015-12-23 | 2019-06-04 | Rovi Guides, Inc. | Systems and methods for conversations with devices about media using interruptions and changes of subjects |
US20170186425A1 (en) * | 2015-12-23 | 2017-06-29 | Rovi Guides, Inc. | Systems and methods for conversations with devices about media using interruptions and changes of subjects |
US20190237064A1 (en) * | 2015-12-23 | 2019-08-01 | Rovi Guides, Inc. | Systems and methods for conversations with devices about media using interruptions and changes of subjects |
US10629187B2 (en) * | 2015-12-23 | 2020-04-21 | Rovi Guides, Inc. | Systems and methods for conversations with devices about media using interruptions and changes of subjects |
US11024296B2 (en) * | 2015-12-23 | 2021-06-01 | Rovi Guides, Inc. | Systems and methods for conversations with devices about media using interruptions and changes of subjects |
US10255913B2 (en) * | 2016-02-17 | 2019-04-09 | GM Global Technology Operations LLC | Automatic speech recognition for disfluent speech |
US20170236511A1 (en) * | 2016-02-17 | 2017-08-17 | GM Global Technology Operations LLC | Automatic speech recognition for disfluent speech |
US10575120B2 (en) | 2016-02-27 | 2020-02-25 | Ray Wang | Method of autonomous social media system |
US10331784B2 (en) | 2016-07-29 | 2019-06-25 | Voicebox Technologies Corporation | System and method of disambiguating natural language processing requests |
US20180089176A1 (en) * | 2016-09-26 | 2018-03-29 | Samsung Electronics Co., Ltd. | Method of translating speech signal and electronic device employing the same |
US10614170B2 (en) * | 2016-09-26 | 2020-04-07 | Samsung Electronics Co., Ltd. | Method of translating speech signal and electronic device employing the same |
US10607606B2 (en) | 2017-06-19 | 2020-03-31 | Lenovo (Singapore) Pte. Ltd. | Systems and methods for execution of digital assistant |
TWI752286B (en) * | 2017-12-04 | 2022-01-11 | Sharp Corporation | External control device, voice dialogue control system, control method, recording medium and program product |
CN110058833A (en) * | 2017-12-04 | 2019-07-26 | Sharp Kabushiki Kaisha | External control device, sound conversational control system, control method and recording medium |
EP3493049A1 (en) * | 2017-12-04 | 2019-06-05 | Sharp Kabushiki Kaisha | External control device, speech interactive control system, control method, and control program |
US11437041B1 (en) * | 2018-03-23 | 2022-09-06 | Amazon Technologies, Inc. | Speech interface device with caching component |
US20190294678A1 (en) * | 2018-03-23 | 2019-09-26 | Servicenow, Inc. | Systems and method for vocabulary management in a natural learning framework |
US11681877B2 (en) | 2018-03-23 | 2023-06-20 | Servicenow, Inc. | Systems and method for vocabulary management in a natural learning framework |
US10956683B2 (en) * | 2018-03-23 | 2021-03-23 | Servicenow, Inc. | Systems and method for vocabulary management in a natural learning framework |
US11887604B1 (en) | 2018-03-23 | 2024-01-30 | Amazon Technologies, Inc. | Speech interface device with caching component |
US11308939B1 (en) * | 2018-09-25 | 2022-04-19 | Amazon Technologies, Inc. | Wakeword detection using multi-word model |
US10885912B2 (en) * | 2018-11-13 | 2021-01-05 | Motorola Solutions, Inc. | Methods and systems for providing a corrected voice command |
US20200152186A1 (en) * | 2018-11-13 | 2020-05-14 | Motorola Solutions, Inc. | Methods and systems for providing a corrected voice command |
US11488580B2 (en) * | 2019-04-03 | 2022-11-01 | Hyundai Motor Company | Dialogue system and dialogue processing method |
US20230014114A1 (en) * | 2019-04-03 | 2023-01-19 | Hyundai Motor Company | Dialogue system and dialogue processing method |
US11783806B2 (en) * | 2019-04-03 | 2023-10-10 | Hyundai Motor Company | Dialogue system and dialogue processing method |
US20230089285A1 (en) * | 2020-02-11 | 2023-03-23 | Amazon Technologies, Inc. | Natural language understanding |
Also Published As
Publication number | Publication date |
---|---|
KR100812109B1 (en) | 2008-03-12 |
JP2003515177A (en) | 2003-04-22 |
EP1222655A1 (en) | 2002-07-17 |
CA2387079A1 (en) | 2001-04-26 |
JP2011237811A (en) | 2011-11-24 |
AU8030300A (en) | 2001-04-30 |
CA2387079C (en) | 2011-10-18 |
WO2001029823A1 (en) | 2001-04-26 |
JP5118280B2 (en) | 2013-01-16 |
CA2748396A1 (en) | 2001-04-26 |
KR20020071856A (en) | 2002-09-13 |
US7447635B1 (en) | 2008-11-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7447635B1 (en) | Natural language interface control system | |
US9484030B1 (en) | Audio triggered commands | |
US10884701B2 (en) | Voice enabling applications | |
US20220115016A1 (en) | Speech-processing system | |
US10365887B1 (en) | Generating commands based on location and wakeword | |
US7826945B2 (en) | Automobile speech-recognition interface | |
EP0965978B1 (en) | Non-interactive enrollment in speech recognition | |
US7016849B2 (en) | Method and apparatus for providing speech-driven routing between spoken language applications | |
US7013275B2 (en) | Method and apparatus for providing a dynamic speech-driven control and remote service access system | |
US20070033003A1 (en) | Spoken word spotting queries | |
US20200184967A1 (en) | Speech processing system | |
EP2048655A1 (en) | Context sensitive multi-stage speech recognition | |
JP2002304190A (en) | Method for generating pronunciation change form and method for speech recognition | |
US11715472B2 (en) | Speech-processing system | |
US20070198268A1 (en) | Method for controlling a speech dialog system and speech dialog system | |
US20240029743A1 (en) | Intermediate data for inter-device speech processing | |
JP2004163541A (en) | Voice response device | |
US11735178B1 (en) | Speech-processing system | |
US10854196B1 (en) | Functional prerequisites and acknowledgments | |
JP2005157166A (en) | Apparatus and method for speech recognition, and program | |
US11176930B1 (en) | Storing audio commands for time-delayed execution | |
Georgila et al. | A speech-based human-computer interaction system for automating directory assistance services | |
JP2003510662A (en) | Spelling mode in speech recognizer |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION |