US20090144050A1 - System and method for augmenting spoken language understanding by correcting common errors in linguistic performance - Google Patents

System and method for augmenting spoken language understanding by correcting common errors in linguistic performance

Info

Publication number
US20090144050A1
Authority
US
United States
Prior art keywords
speech
words
user
module
spoken
Prior art date
2004-02-26
Legal status
Abandoned
Application number
US12/365,980
Inventor
Steven H. Lewis
Kenneth H. Rosen
Current Assignee
Nuance Communications Inc
Original Assignee
AT&T Corp
Priority date
2004-02-26
Filing date
2009-02-05
Publication date
2009-06-04
Application filed by AT&T Corp
Priority to US12/365,980
Publication of US20090144050A1
Assigned to NUANCE COMMUNICATIONS, INC. (Assignment of assignors interest; Assignor: AT&T INTELLECTUAL PROPERTY II, L.P.)
Status: Abandoned

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/08 Speech classification or search
    • G10L 15/18 Speech classification or search using natural language modelling
    • G10L 15/183 Speech classification or search using natural language modelling using context dependencies, e.g. language models
    • G10L 15/187 Phonemic context, e.g. pronunciation rules, phonotactical constraints or phoneme n-grams

Abstract

A method and system for automatic speech recognition are disclosed. The method comprises receiving speech from a user, the speech including at least one speech error, increasing the probabilities of closely related words to the at least one speech error and processing the received speech using the increased probabilities. A corpus of data containing commonly misstated words is used to identify and increase the probabilities of related words. The method applies to at least the automatic speech recognition module and the spoken language understanding module.

Description

    PRIORITY
  • The present invention is a continuation of U.S. patent application Ser. No. 10/787,782, filed Feb. 26, 2004, the contents of which are incorporated herein by reference in their entirety.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention relates to spoken dialog systems and more specifically to a system and method of augmenting spoken language recognition and understanding by correcting common errors in linguistic performance.
  • 2. Introduction
  • Spoken dialog systems have several main components or modules to process information in the form of speech from a user and generate an appropriate, conversational response. FIG. 1 illustrates the basic components of a spoken dialog system 100. The spoken dialog system 100 may operate on a single computing device or on a distributed computer network. The system 100 receives speech sounds from a user 102 and operates to generate a response. The general components of such a system include an automatic speech recognition (“ASR”) module 104 that recognizes the words spoken by the user 102. AT&T's Watson ASR component is an illustration of this module. A spoken language understanding (“SLU”) module 106 associates a meaning to the words received from the ASR module 104. A dialog management (“DM”) module 108 manages the dialog by determining an appropriate response to the customer question. AT&T's Florence DM engine is an example of this module. Based on the determined action, a spoken language generation (“SLG”) module 110 generates the appropriate words to be spoken by the system in response and a Text-to-Speech (“TTS”) module 112 synthesizes the speech for the user 102. AT&T's Natural Voices TTS engine provides an example of the TTS module. Data and rules 114 are used to train each module and to process run-time data in each module.
  • A key component in achieving wide-spread acceptance of interactive spoken dialog services is achieving a sufficiently high percentage of correct interpretations of requests spoken by callers. Typically, the ASR module 104 uses statistical models of acoustic information to recognize patterns as semantic units such as words and phrases. The patterns are typically matched against large or specialized dictionaries of words that are found in general or restricted contexts. In general, the smaller the set of accepted target words, the greater the recognition accuracy.
  • However, a common problem arises when the speaker or user of the system does not speak in a fluent manner. For example, the user may say "I . . . um . . . um . . . am interested in . . . ah making a connect call." In this example, the user meant to say a "collect" call. What is needed in the art is an approach to correctly recognizing and understanding what a caller means to say when the caller has said something different from what he or she intended because of disfluencies, or slips of the tongue.
  • SUMMARY OF THE INVENTION
  • Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The features and advantages of the invention may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth herein.
  • The embodiments of the invention comprise a method, software module, and spoken dialog system for performing automatic speech recognition and spoken language understanding. The method comprises receiving speech from a user, the speech including at least one speech error, modifying the probabilities of closely related words to the at least one speech error and processing the received speech using the modified probabilities. A corpus of data is used to identify words that are commonly mis-stated so that when the at least one speech error is received, related words to the at least one speech error may have their probabilities modified when speech recognition or language understanding occurs. This increases the likelihood that the speaker's intended word will be interpreted.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • In order to describe the manner in which the above-recited and other advantages and features of the invention can be obtained, a more particular description of the invention briefly described above will be rendered with reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
  • FIG. 1 illustrates a typical spoken language dialog system; and
  • FIG. 2 illustrates a method according to an aspect of the present invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • The present invention relates to improving the correct interpretation of the speaker's intent in the SLU module 106. As referenced above, often a user will not speak fluently and will have “slips of the tongue” where words are spoken that are different from the user's intent.
  • In speech recognition, a basic fundamental process involved relates to probability theory. When speech sounds are received, an ASR module will determine based on probability theory what text should be associated with the sounds. The details of probability theory and pattern recognition are beyond the scope of this disclosure, but details are known to those of skill in the art and may be found in such books as Huang, Acero and Hon, Spoken Language Processing, Prentice Hall, 2001. It is sufficient for this disclosure to understand that an ASR system will receive speech and use probability theory to seek to determine the appropriate words to assign to each utterance.
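  • In the standard formulation found in such references, the recognizer chooses, for an observed acoustic sequence O, the word sequence W that maximizes the product of an acoustic likelihood and a language-model prior: W* = argmax over W of P(O | W) · P(W). The probability modifications described below can be viewed as acting on the P(W) term.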
  • The present invention provides a method for using predictable linguistic disfluencies to augment automatic speech recognition models used in a spoken language understanding system. Further, the present invention may provide data and input to the spoken language understanding module to increase the understanding of the recognized speech. For example, it is known that many slips of the tongue are the result of a word spoken in error that is quite similar to the word the speaker "meant" to say. These words spoken in error can be identified in predictable ways. In particular, words that share 1) the initial phoneme, 2) the final phoneme, and 3) the number of syllables with the "correct" intended word are quite common as a type of slip of the tongue. For example, in a telephone operator services environment, if the operator prompts the user to determine the type of call the user desires to make, the user may respond by saying "correct" or "connect" instead of what they intended to say, which is "collect."
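  • To make the similarity criteria concrete, the following is a minimal sketch, not taken from the patent, of how "close relatives" of a recognized word might be found. The toy phoneme lexicon, syllable counts, and function name are illustrative assumptions; a real system would consult a full pronunciation dictionary.

```python
# Hypothetical sketch: find vocabulary words that share the initial
# phoneme, the final phoneme, and the syllable count with a given word,
# the three slip-of-the-tongue similarity criteria described above.

# Toy lexicon: word -> (phoneme sequence, syllable count). Invented data.
PHONEME_LEXICON = {
    "collect": (["K", "AH", "L", "EH", "K", "T"], 2),
    "correct": (["K", "ER", "EH", "K", "T"], 2),
    "connect": (["K", "AH", "N", "EH", "K", "T"], 2),
    "college": (["K", "AA", "L", "IH", "JH"], 2),
}

def close_relatives(word):
    """Return words sharing the initial phoneme, final phoneme, and
    syllable count with `word`."""
    phones, syllables = PHONEME_LEXICON[word]
    return [
        other
        for other, (o_phones, o_syllables) in PHONEME_LEXICON.items()
        if other != word
        and o_phones[0] == phones[0]
        and o_phones[-1] == phones[-1]
        and o_syllables == syllables
    ]

print(close_relatives("correct"))  # ['collect', 'connect']
```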
  • One embodiment of the invention relates to a system for performing speech recognition. The system will preferably comprise a computing device such as a computer server operating in a stand-alone mode, in a local area network or on a wide area network such as the Internet. Any particular configuration of the computing device is immaterial to the present invention. The computing device may also operate in a wireless network and/or a wired network, either as a client computing device or a server computing device. The system may also split the speech recognition processing between a client device and a server device.
  • In one aspect of the invention, the spoken dialog system, upon receiving the word "correct" when the user intended to say "collect," would seek to accurately recognize the input by raising or modifying the probabilities of close relatives of the recognized word based on the similarity characteristics described above. In this manner, for the telephone operator domain, a set of predictable erroneous responses would be identified and a modification in the probabilities of the appropriate words is achieved. The modification of probabilities may be an increase in some or all probability parameters or may be a decrease in some or all of the parameters. The modification may also increase some and decrease other probability parameters associated with speech recognition. In most cases, the probability is increased, but the invention covers all these alternative ways of modifying the parameters. Therefore, the probability of the word "collect" is increased in the operator domain to increase the chance that the ASR module will interpret "correct" as "collect." Such a modification will increase the correct interpretation of user input and increase user satisfaction with the system.
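  • One plausible, purely hypothetical realization of this probability modification is to scale up the word probabilities of the recognized word's close relatives and renormalize, as sketched below. The boost factor and the renormalization step are assumptions for illustration, not the patent's specification.

```python
# Hypothetical sketch: boost the probabilities of close relatives of a
# recognized word, then renormalize so the distribution sums to 1.

def boost_relatives(word_probs, recognized, relatives, boost=5.0):
    """Scale up the probabilities of `relatives` of `recognized` and
    renormalize the distribution."""
    adjusted = dict(word_probs)
    for rel in relatives:
        if rel in adjusted:
            adjusted[rel] *= boost
    total = sum(adjusted.values())
    return {w: p / total for w, p in adjusted.items()}

# Operator-services domain: "collect" is boosted when "correct" is heard.
probs = {"correct": 0.60, "collect": 0.25, "connect": 0.15}
print(boost_relatives(probs, "correct", ["collect", "connect"]))
# {'correct': ~0.23, 'collect': ~0.48, 'connect': ~0.29}
```

With the boost applied, "collect" now outweighs the literally heard "correct," which is the behavior the paragraph above describes.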
  • Similarly, slips of the tongue often involve two or more words in a phrase, with the beginning or ending sounds of words interposed. This invention also provides methods for potentially correcting these slips of the tongue. Another aspect of this invention is to understand slips of the tongue of people who are not native English speakers, including situations where words of different languages are mixed with words in English. The invention also makes use of existing corpora of slips of the tongue, as well as future databases of slips of the tongue that may be collected by analyzing the actual interaction of callers with systems. The corpora may be based on specific domains, such as dialogs related to handling collect calls or customer care related to a telephone service, or any other domain. The corpora may also be based on different languages. For example, if a native Japanese speaker commonly mis-states specific words when speaking English, then an English ASR system can utilize a corpus of Japanese-speaker slips when adjusting the probabilities of potential recognitions based on the common slips. A similar process of adjusting probabilities may occur for speakers of different dialects of English. For example, a person with a New York accent or a southern accent may have particular words that they are more likely to mis-state, and the probabilities associated with those words may be modified to improve recognition.
  • The corpora of data may provide, for example, increased probabilities for the predictable error words spoken in a particular domain or for a particular cultural or language domain. Therefore, if the system determines that the person communicating with the system is a native Japanese speaker, then the Japanese-language corpus may be loaded, which identifies predictable error speech and increases the probabilities of certain words, phrases or sentences to increase the probability that the correct word or phrase will be recognized.
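  • A hedged sketch of how domain- and language-specific slip corpora might be organized and selected is shown below. All corpus contents, keys, and the fallback policy are invented for illustration.

```python
# Hypothetical registry of slip corpora keyed by (domain, speaker profile).
# Each entry maps an error word to (likely intended word, boost weight).
SLIP_CORPORA = {
    ("operator", "en"): {
        "correct": ("collect", 5.0),
        "connect": ("collect", 4.0),
    },
    ("operator", "ja-en"): {
        # slips common to native Japanese speakers of English would go here
    },
}

def load_slip_corpus(domain, speaker_profile):
    """Select the slip corpus for this domain and speaker profile,
    falling back to the general English corpus for the domain."""
    return SLIP_CORPORA.get((domain, speaker_profile),
                            SLIP_CORPORA[(domain, "en")])
```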
  • FIG. 2 illustrates the basic steps of the invention. They may be practiced on any computing device used to perform ASR or SLU. There is no specific programming language required to practice the invention. The method comprises receiving user speech containing at least one speech error such as a slip (202). The system raises the probabilities of close relatives of the at least one speech error (204). The ASR step then involves recognizing the speech using the raised or modified probabilities in the ASR module (206). As mentioned above, step (206) may also involve performing spoken language understanding based on the raised probabilities of certain words or phrases as set forth herein. A minimal sketch of these steps follows.
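  • Under the same assumptions as the sketches above, the three steps of FIG. 2 might compose as follows; the score combination and data structures are placeholders, not the patent's implementation.

```python
# Hypothetical end-to-end shape of the FIG. 2 method.

def recognize_with_slip_repair(acoustic_scores, lm_probs, slip_corpus):
    # Step 202: receive speech; the top acoustic hypothesis may be a slip.
    heard = max(acoustic_scores, key=acoustic_scores.get)
    # Step 204: raise the probability of the likely intended relative.
    if heard in slip_corpus:
        intended, boost = slip_corpus[heard]
        lm_probs = dict(lm_probs)
        lm_probs[intended] = lm_probs.get(intended, 0.0) * boost
    # Step 206: recognize using the modified probabilities.
    return max(lm_probs, key=lambda w: acoustic_scores.get(w, 0.0) * lm_probs[w])

corpus = {"correct": ("collect", 5.0)}
acoustics = {"correct": 0.50, "collect": 0.45, "connect": 0.05}
lm = {"correct": 0.60, "collect": 0.25, "connect": 0.15}
print(recognize_with_slip_repair(acoustics, lm, corpus))  # collect
```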
  • There are variations on the present invention. For example, the system may lower the probabilities of certain words that a Japanese speaker rarely slips on. There may also be a particular step in the process that identifies which adjusted corpora are to be applied to the dialog. In other words, there may be an additional step of identifying a cultural corpus database based on an initial portion of a dialog, in which the Japanese or Spanish or other type of corpus is applied to the dialog to improve the speech recognition with that particular person. This invention will improve the performance of systems such as AT&T's "How May I Help You" system and related systems being developed.
  • In another aspect of the invention, the system provides a learning mode which operates to make probability adjustments based on an ongoing conversation between a user and the system. In this embodiment, as the system receives speech input and makes its inferences and evaluations during speech recognition, the system determines whether it is interpreting the speech correctly. This step may occur by asking for a confirmation of the recognition, for example, "Did you say Washington, D.C.?" Other methods of determining the accuracy of the recognition are also contemplated, such as other actions taken by the user when interacting with the system. Based on the assessment of the accuracy of the speech recognition in an ongoing dialog, the system modifies the probabilities to improve the recognition accuracy. Using this learning mode, the system can adjust its recognition accuracy on a person-by-person basis, as one particular person may more often articulate specific speech errors than another person.
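  • The learning mode could be realized, under illustrative assumptions, as a simple per-user multiplicative update driven by confirmations; the 1.1/0.9 factors below are invented, not taken from the patent.

```python
# Hypothetical per-user update: nudge a word's boost up after a
# confirmed recognition and down after a rejected one.

def update_user_boosts(user_boosts, hypothesis, confirmed):
    """Adjust this user's boost for `hypothesis` based on an explicit
    confirmation such as "Did you say Washington, D.C.?"."""
    factor = 1.1 if confirmed else 0.9
    user_boosts[hypothesis] = user_boosts.get(hypothesis, 1.0) * factor
    return user_boosts

boosts = {}
update_user_boosts(boosts, "collect", confirmed=True)
print(boosts)  # {'collect': 1.1}
```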
  • Another linguistic problem that the present invention addresses relates to "spoonerisms". When a person speaks a "spoonerism", sounds become interchanged in a phrase. For example, if a caller is making an insurance claim, he or she may say "I need to report a clammage dame." In one aspect of the invention, the system modifies the word recognition probabilities of well-known or anticipated spoonerisms and unpacks the spoonerism to reveal the user's intention rather than the slip. This cognitive repair work may be performed to improve recognition.
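  • One way the "unpacking" might work, sketched at the phoneme level with an invented two-entry lexicon: swap the onsets (initial consonant clusters) of the suspect word pair and look the results up by pronunciation.

```python
# Hypothetical spoonerism unpacking: (onset, rhyme) pronunciations are
# keys into a tiny invented lexicon; swapping onsets recovers the
# intended words, e.g. "clammage dame" -> "damage claim".

LEXICON = {
    ("D", "AE M IH JH"): "damage",
    ("K L", "EY M"): "claim",
}

def unpack_spoonerism(first, second):
    """Each argument is an (onset, rhyme) pair for a heard word.
    Return the de-spoonerized word pair, or None if swapping the
    onsets does not yield two vocabulary words."""
    (o1, r1), (o2, r2) = first, second
    w1 = LEXICON.get((o2, r1))  # "d" + "ammage" -> damage
    w2 = LEXICON.get((o1, r2))  # "cl" + "aim"   -> claim
    return (w1, w2) if w1 and w2 else None

# "clammage" = onset K L + rhyme AE M IH JH; "dame" = onset D + rhyme EY M
print(unpack_spoonerism(("K L", "AE M IH JH"), ("D", "EY M")))
# ('damage', 'claim')
```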
  • An understanding of errors in speech may be found in literature such as Fromkin, V. A., Errors in Linguistic Performance: Slips of the Tongue, Ear, Pen and Hand, New York, Academic Press (1980) and Fromkin, V. A., Speech Errors as Linguistic Evidence, The Hague: Mouton (1981), the contents of which are incorporated herein by reference. Other references available to those of skill in the art outline various collections of slips of the tongue and spoonerisms in various languages.
  • Embodiments within the scope of the present invention may also include computer-readable media for carrying or having computer-executable instructions or data structures stored thereon. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code means in the form of computer-executable instructions or data structures. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination thereof) to a computer, the computer properly views the connection as a computer-readable medium. Thus, any such connection is properly termed a computer-readable medium. Combinations of the above should also be included within the scope of the computer-readable media.
  • Computer-executable instructions include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Computer-executable instructions also include program modules that are executed by computers in stand-alone or network environments. Generally, program modules include routines, programs, objects, components, and data structures, etc. that perform particular tasks or implement particular abstract data types. Computer-executable instructions, associated data structures, and program modules represent examples of the program code means for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps.
  • Those of skill in the art will appreciate that other embodiments of the invention may be practiced in network computing environments with many types of computer system configurations, including personal computers, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like. Embodiments may also be practiced in distributed computing environments where tasks are performed by local and remote processing devices that are linked (either by hardwired links, wireless links, or by a combination thereof) through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
  • Although the above description may contain specific details, they should not be construed as limiting the claims in any way. Other configurations of the described embodiments of the invention are part of the scope of this invention. For example, other types of corpora may be developed to correct for slips. Such corpora may include different dialects, speech impediments, children's language characteristics, words that rhyme, etc. It is further appreciated that while increasing the probabilities of certain words or phrases will help the ASR and SLU modules, such information may also be used by other modules in the spoken dialog system to improve their operation. Accordingly, the appended claims and their legal equivalents should only define the invention, rather than any specific examples given.

Claims (24)

1. A method of processing speech, the method comprising:
receiving speech from a user, the speech including at least one speech error;
modifying probabilities of closely related words to the at least one speech error based on a plurality of corpora of data having modified probabilities of certain words; and
processing via speech recognition the received speech using the modified probabilities.
2. The method of claim 1, wherein processing the received speech further comprises automatic speech recognition of the received speech using the modified probabilities.
3. The method of claim 1, wherein processing the received speech further comprises spoken language understanding of the received speech using the modified probabilities.
4. The method of claim 1, wherein the closely related words are words that begin and end with similar sounds to the at least one speech error.
5. The method of claim 1, wherein the closely related words are words that begin and end with the same phoneme and that have the same number of syllables as the at least one speech error.
6. The method of claim 1, wherein the speech error comprises a plurality of words, and the closely related words sound similar to the plurality of words.
7. The method of claim 1, further comprising using a corpora of data associated with the user's language patterns.
8. The method of claim 7, wherein the user's language patterns relate to the language spoken by the user.
9. The method of claim 8, wherein the corpora of data further comprises common speech errors made by speakers of the language spoken by the user.
10. A language processing module in a spoken dialog system, the module comprising:
means for receiving speech from a user, the speech including at least one speech error;
means for modifying the probabilities of closely related words to the at least one speech error based on a plurality of corpora of data having modified probabilities of certain words; and
means for processing via speech recognition the received speech using the modified probabilities.
11. The module of claim 10, wherein the closely related words are words that begin and end with similar sounds to the at least one speech error.
12. The module of claim 10, wherein the speech error comprises a plurality of words, and the closely related words sound similar to the plurality of words.
13. The module of claim 10, further comprising using a corpora of data associated with the user's language patterns.
14. The module of claim 10, wherein the closely related words are words that begin and end with the same phoneme and that have the same number of syllables as the at least one speech error.
15. The module of claim 13, wherein the user's language patterns relate to the language spoken by the user.
16. The module of claim 15, wherein the corpora of data further comprises common speech errors made by speakers of the language spoken by the user.
17. A spoken dialog system having a speech processing module, the module comprising:
means for receiving speech from a user, the speech including at least one speech error;
means for modifying probabilities of closely related words to the at least one speech error based on a plurality of corpora of data having modified probabilities of certain words; and
means for processing via speech recognition the received speech using the modified probabilities.
18. The spoken dialog system of claim 17, wherein the module is an automatic speech recognition module.
19. The spoken dialog system of claim 17, wherein the closely related words are words that begin and end with similar sounds to the at least one speech error.
20. The spoken dialog system of claim 17, wherein the speech error comprises a plurality of words, and the closely related words sound similar to the plurality of words.
21. The spoken dialog system of claim 17, further comprising using a corpora of data associated with the user's language patterns.
22. The spoken dialog system of claim 21, wherein the user's language patterns relate to the language spoken by the user.
23. The spoken dialog system of claim 22, wherein the corpora of data further comprises common speech errors made by speakers of the language spoken by the user.
24. The spoken dialog system of claim 17, wherein the closely related words are words that begin and end with the same phoneme and that have the same number of syllables as the at least one speech error.
US12/365,980 2004-02-26 2009-02-05 System and method for augmenting spoken language understanding by correcting common errors in linguistic performance Abandoned US20090144050A1 (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
US12/365,980 (US20090144050A1) | 2004-02-26 | 2009-02-05 | System and method for augmenting spoken language understanding by correcting common errors in linguistic performance

Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
US10/787,782 (US7505906B2) | 2004-02-26 | 2004-02-26 | System and method for augmenting spoken language understanding by correcting common errors in linguistic performance
US12/365,980 (US20090144050A1) | 2004-02-26 | 2009-02-05 | System and method for augmenting spoken language understanding by correcting common errors in linguistic performance

Related Parent Applications (1)

Application Number | Relation | Priority Date | Filing Date | Title
US10/787,782 (US7505906B2) | Continuation | 2004-02-26 | 2004-02-26 | System and method for augmenting spoken language understanding by correcting common errors in linguistic performance

Publications (1)

Publication Number | Publication Date
US20090144050A1 | 2009-06-04

Family

ID=34750519

Family Applications (2)

Application Number | Status | Priority Date | Filing Date | Title
US10/787,782 (US7505906B2) | Active 2026-05-13 | 2004-02-26 | 2004-02-26 | System and method for augmenting spoken language understanding by correcting common errors in linguistic performance
US12/365,980 (US20090144050A1) | Abandoned | 2004-02-26 | 2009-02-05 | System and method for augmenting spoken language understanding by correcting common errors in linguistic performance

Family Applications Before (1)

Application Number | Status | Priority Date | Filing Date | Title
US10/787,782 (US7505906B2) | Active 2026-05-13 | 2004-02-26 | 2004-02-26 | System and method for augmenting spoken language understanding by correcting common errors in linguistic performance

Country Status (3)

Country Link
US (2) US7505906B2 (en)
EP (1) EP1569202A3 (en)
CA (1) CA2493265C (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11361769B2 (en) 2019-11-05 2022-06-14 International Business Machines Corporation Assessing accuracy of an input or request received by an artificial intelligence system

Families Citing this family (128)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8645137B2 (en) 2000-03-16 2014-02-04 Apple Inc. Fast, language-independent method for user authentication by voice
US7310602B2 (en) * 2004-09-27 2007-12-18 Kabushiki Kaisha Equos Research Navigation apparatus
TWI269268B (en) * 2005-01-24 2006-12-21 Delta Electronics Inc Speech recognizing method and system
US20060200338A1 (en) * 2005-03-04 2006-09-07 Microsoft Corporation Method and system for creating a lexicon
US20060200336A1 (en) * 2005-03-04 2006-09-07 Microsoft Corporation Creating a lexicon using automatic template matching
US20060200337A1 (en) * 2005-03-04 2006-09-07 Microsoft Corporation System and method for template authoring and a template data structure
US8677377B2 (en) 2005-09-08 2014-03-18 Apple Inc. Method and apparatus for building an intelligent automated assistant
US8170868B2 (en) * 2006-03-14 2012-05-01 Microsoft Corporation Extracting lexical features for classifying native and non-native language usage style
US9318108B2 (en) 2010-01-18 2016-04-19 Apple Inc. Intelligent automated assistant
US8977255B2 (en) 2007-04-03 2015-03-10 Apple Inc. Method and system for operating a multi-function portable electronic device using voice-activation
US7437291B1 (en) * 2007-12-13 2008-10-14 International Business Machines Corporation Using partial information to improve dialog in automatic speech recognition systems
US9330720B2 (en) 2008-01-03 2016-05-03 Apple Inc. Methods and apparatus for altering audio output signals
US8996376B2 (en) 2008-04-05 2015-03-31 Apple Inc. Intelligent text-to-speech conversion
US10496753B2 (en) 2010-01-18 2019-12-03 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US20100030549A1 (en) 2008-07-31 2010-02-04 Lee Michael M Mobile device having human language translation capability with positional feedback
US9959870B2 (en) 2008-12-11 2018-05-01 Apple Inc. Speech recognition involving a mobile device
US10255566B2 (en) 2011-06-03 2019-04-09 Apple Inc. Generating and processing task items that represent tasks to perform
US10241644B2 (en) 2011-06-03 2019-03-26 Apple Inc. Actionable reminder entries
US9858925B2 (en) 2009-06-05 2018-01-02 Apple Inc. Using context information to facilitate processing of commands in a virtual assistant
US10241752B2 (en) 2011-09-30 2019-03-26 Apple Inc. Interface for a virtual digital assistant
US9659559B2 (en) * 2009-06-25 2017-05-23 Adacel Systems, Inc. Phonetic distance measurement system and related methods
US9431006B2 (en) 2009-07-02 2016-08-30 Apple Inc. Methods and apparatuses for automatic speech recognition
US10705794B2 (en) 2010-01-18 2020-07-07 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US10679605B2 (en) 2010-01-18 2020-06-09 Apple Inc. Hands-free list-reading by intelligent automated assistant
US10276170B2 (en) 2010-01-18 2019-04-30 Apple Inc. Intelligent automated assistant
US10553209B2 (en) 2010-01-18 2020-02-04 Apple Inc. Systems and methods for hands-free notification summaries
US8682667B2 (en) 2010-02-25 2014-03-25 Apple Inc. User profiling for selecting user specific voice input processing information
US9263034B1 (en) 2010-07-13 2016-02-16 Google Inc. Adapting enhanced acoustic models
US10762293B2 (en) 2010-12-22 2020-09-01 Apple Inc. Using parts-of-speech tagging and named entity recognition for spelling correction
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
US10057736B2 (en) 2011-06-03 2018-08-21 Apple Inc. Active transport based notifications
US20120310642A1 (en) 2011-06-03 2012-12-06 Apple Inc. Automatically creating a mapping between text data and audio data
US8994660B2 (en) 2011-08-29 2015-03-31 Apple Inc. Text correction processing
US8762156B2 (en) * 2011-09-28 2014-06-24 Apple Inc. Speech recognition repair using contextual information
US10134385B2 (en) 2012-03-02 2018-11-20 Apple Inc. Systems and methods for name pronunciation
US9483461B2 (en) 2012-03-06 2016-11-01 Apple Inc. Handling speech synthesis of content for multiple languages
US9280610B2 (en) 2012-05-14 2016-03-08 Apple Inc. Crowd sourcing information to fulfill user requests
US9721563B2 (en) 2012-06-08 2017-08-01 Apple Inc. Name recognition system
US9495129B2 (en) 2012-06-29 2016-11-15 Apple Inc. Device, method, and user interface for voice-activated navigation and browsing of a document
US9576574B2 (en) 2012-09-10 2017-02-21 Apple Inc. Context-sensitive handling of interruptions by intelligent digital assistant
US9547647B2 (en) 2012-09-19 2017-01-17 Apple Inc. Voice-based media searching
EP2954514B1 (en) 2013-02-07 2021-03-31 Apple Inc. Voice trigger for a digital assistant
US9368114B2 (en) 2013-03-14 2016-06-14 Apple Inc. Context-sensitive handling of interruptions
US10652394B2 (en) 2013-03-14 2020-05-12 Apple Inc. System and method for processing voicemail
AU2014233517B2 (en) 2013-03-15 2017-05-25 Apple Inc. Training an at least partial voice command system
WO2014144579A1 (en) 2013-03-15 2014-09-18 Apple Inc. System and method for updating an adaptive speech recognition model
WO2014197336A1 (en) 2013-06-07 2014-12-11 Apple Inc. System and method for detecting errors in interactions with a voice-based digital assistant
WO2014197334A2 (en) 2013-06-07 2014-12-11 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
US9582608B2 (en) 2013-06-07 2017-02-28 Apple Inc. Unified ranking with entropy-weighted information for phrase-based semantic auto-completion
WO2014197335A1 (en) 2013-06-08 2014-12-11 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
WO2014200728A1 (en) 2013-06-09 2014-12-18 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
AU2014278595B2 (en) 2013-06-13 2017-04-06 Apple Inc. System and method for emergency calls initiated by voice command
KR101749009B1 (en) 2013-08-06 2017-06-19 애플 인크. Auto-activating smart responses based on activities from remote devices
US9620105B2 (en) 2014-05-15 2017-04-11 Apple Inc. Analyzing audio input for efficient speech and music recognition
US10592095B2 (en) 2014-05-23 2020-03-17 Apple Inc. Instantaneous speaking of content on touch devices
US9502031B2 (en) 2014-05-27 2016-11-22 Apple Inc. Method for supporting dynamic grammars in WFST-based ASR
US9966065B2 (en) 2014-05-30 2018-05-08 Apple Inc. Multi-command single utterance input method
US9715875B2 (en) 2014-05-30 2017-07-25 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US9760559B2 (en) 2014-05-30 2017-09-12 Apple Inc. Predictive text input
US10170123B2 (en) 2014-05-30 2019-01-01 Apple Inc. Intelligent assistant for home automation
US10289433B2 (en) 2014-05-30 2019-05-14 Apple Inc. Domain specific language for encoding assistant dialog
US9842101B2 (en) 2014-05-30 2017-12-12 Apple Inc. Predictive conversion of language input
US9785630B2 (en) 2014-05-30 2017-10-10 Apple Inc. Text prediction using combined word N-gram and unigram language models
US9734193B2 (en) 2014-05-30 2017-08-15 Apple Inc. Determining domain salience ranking from ambiguous words in natural speech
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
US10078631B2 (en) 2014-05-30 2018-09-18 Apple Inc. Entropy-guided text prediction using combined word and character n-gram language models
US9633004B2 (en) 2014-05-30 2017-04-25 Apple Inc. Better resolution when referencing to concepts
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
US10659851B2 (en) 2014-06-30 2020-05-19 Apple Inc. Real-time digital assistant knowledge updates
US10446141B2 (en) 2014-08-28 2019-10-15 Apple Inc. Automatic speech recognition based on user feedback
US9818400B2 (en) 2014-09-11 2017-11-14 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US10789041B2 (en) 2014-09-12 2020-09-29 Apple Inc. Dynamic thresholds for always listening speech trigger
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
US10074360B2 (en) 2014-09-30 2018-09-11 Apple Inc. Providing an indication of the suitability of speech recognition
US9646609B2 (en) 2014-09-30 2017-05-09 Apple Inc. Caching apparatus for serving phonetic pronunciations
US9886432B2 (en) 2014-09-30 2018-02-06 Apple Inc. Parsimonious handling of word inflection via categorical stem + suffix N-gram language models
US10127911B2 (en) 2014-09-30 2018-11-13 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US10552013B2 (en) 2014-12-02 2020-02-04 Apple Inc. Data detection
US9711141B2 (en) 2014-12-09 2017-07-18 Apple Inc. Disambiguating heteronyms in speech synthesis
US9865280B2 (en) 2015-03-06 2018-01-09 Apple Inc. Structured dictation using intelligent automated assistants
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
US9721566B2 (en) 2015-03-08 2017-08-01 Apple Inc. Competing devices responding to voice triggers
US9886953B2 (en) 2015-03-08 2018-02-06 Apple Inc. Virtual assistant activation
US9899019B2 (en) 2015-03-18 2018-02-20 Apple Inc. Systems and methods for structured stem and suffix language models
US9842105B2 (en) 2015-04-16 2017-12-12 Apple Inc. Parsimonious continuous-space phrase representations for natural language processing
US10083688B2 (en) 2015-05-27 2018-09-25 Apple Inc. Device voice control for selecting a displayed affordance
US10127220B2 (en) 2015-06-04 2018-11-13 Apple Inc. Language identification from short strings
US9578173B2 (en) 2015-06-05 2017-02-21 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US10101822B2 (en) 2015-06-05 2018-10-16 Apple Inc. Language input correction
US10186254B2 (en) 2015-06-07 2019-01-22 Apple Inc. Context-based endpoint detection
US10255907B2 (en) 2015-06-07 2019-04-09 Apple Inc. Automatic accent detection using acoustic models
US11025565B2 (en) 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
US10410629B2 (en) * 2015-08-19 2019-09-10 Hand Held Products, Inc. Auto-complete methods for spoken complete value entries
US10747498B2 (en) 2015-09-08 2020-08-18 Apple Inc. Zero latency digital assistant
US10671428B2 (en) 2015-09-08 2020-06-02 Apple Inc. Distributed personal assistant
US9697820B2 (en) 2015-09-24 2017-07-04 Apple Inc. Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks
US10366158B2 (en) 2015-09-29 2019-07-30 Apple Inc. Efficient word encoding for recurrent neural network language models
US11010550B2 (en) 2015-09-29 2021-05-18 Apple Inc. Unified language modeling framework for word prediction, auto-completion and auto-correction
US11587559B2 (en) 2015-09-30 2023-02-21 Apple Inc. Intelligent device identification
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
DK179309B1 (en) 2016-06-09 2018-04-23 Apple Inc Intelligent automated assistant in a home environment
US10490187B2 (en) 2016-06-10 2019-11-26 Apple Inc. Digital assistant providing automated status report
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
US10586535B2 (en) 2016-06-10 2020-03-10 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
US10509862B2 (en) 2016-06-10 2019-12-17 Apple Inc. Dynamic phrase expansion of language input
DK179343B1 (en) 2016-06-11 2018-05-14 Apple Inc Intelligent task discovery
DK179415B1 (en) 2016-06-11 2018-06-14 Apple Inc Intelligent device arbitration and control
DK179049B1 (en) 2016-06-11 2017-09-18 Apple Inc Data driven natural language event detection and classification
DK201670540A1 (en) 2016-06-11 2018-01-08 Apple Inc Application integration with a digital assistant
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US10593346B2 (en) 2016-12-22 2020-03-17 Apple Inc. Rank-reduced token representation for automatic speech recognition
US10831796B2 (en) * 2017-01-15 2020-11-10 International Business Machines Corporation Tone optimization for digital content
DK201770439A1 (en) 2017-05-11 2018-12-13 Apple Inc. Offline personal assistant
DK179745B1 (en) 2017-05-12 2019-05-01 Apple Inc. SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT
DK179496B1 (en) 2017-05-12 2019-01-15 Apple Inc. USER-SPECIFIC Acoustic Models
DK201770432A1 (en) 2017-05-15 2018-12-21 Apple Inc. Hierarchical belief states for digital assistants
DK201770431A1 (en) 2017-05-15 2018-12-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
DK179560B1 (en) 2017-05-16 2019-02-18 Apple Inc. Far-field extension for digital assistant services
US11597519B2 (en) 2017-10-17 2023-03-07 The Boeing Company Artificially intelligent flight crew systems and methods

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5537317A (en) * 1994-06-01 1996-07-16 Mitsubishi Electric Research Laboratories Inc. System for correcting grammar based on parts of speech probability

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5261112A (en) * 1989-09-08 1993-11-09 Casio Computer Co., Ltd. Spelling check apparatus including simple and quick similar word retrieval operation
US5960394A (en) * 1992-11-13 1999-09-28 Dragon Systems, Inc. Method of speech command recognition with dynamic assignment of probabilities according to the state of the controlled applications
US6101468A (en) * 1992-11-13 2000-08-08 Dragon Systems, Inc. Apparatuses and methods for training and operating speech recognition systems
US5903864A (en) * 1995-08-30 1999-05-11 Dragon Systems Speech recognition
US5855000A (en) * 1995-09-08 1998-12-29 Carnegie Mellon University Method and apparatus for correcting and repairing machine-transcribed input using independent or cross-modal secondary input
US5799289A (en) * 1995-10-02 1998-08-25 Ricoh Company, Ltd. Order management system and method considering budget limit
US5852801A (en) * 1995-10-04 1998-12-22 Apple Computer, Inc. Method and apparatus for automatically invoking a new word module for unrecognized user input
US5999896A (en) * 1996-06-25 1999-12-07 Microsoft Corporation Method and system for identifying and resolving commonly confused words in a natural language parser
US6195634B1 (en) * 1997-12-24 2001-02-27 Nortel Networks Corporation Selection of decoys for non-vocabulary utterances rejection
US6598017B1 (en) * 1998-07-27 2003-07-22 Canon Kabushiki Kaisha Method and apparatus for recognizing speech information based on prediction
US7216079B1 (en) * 1999-11-02 2007-05-08 Speechworks International, Inc. Method and apparatus for discriminative training of acoustic models of a speech recognition system
US20030229497A1 (en) * 2000-04-21 2003-12-11 Lessac Technology Inc. Speech recognition method
US20020116191A1 (en) * 2000-12-26 2002-08-22 International Business Machines Corporation Augmentation of alternate word lists by acoustic confusability criterion
US20020087309A1 (en) * 2000-12-29 2002-07-04 Lee Victor Wai Leung Computer-implemented speech expectation-based probability method and system
US20020120446A1 (en) * 2001-02-23 2002-08-29 Motorola, Inc. Detection of inconsistent training data in a voice recognition system
US20030225579A1 (en) * 2002-05-31 2003-12-04 Industrial Technology Research Institute Error-tolerant language understanding system and method
US20040102971A1 (en) * 2002-08-09 2004-05-27 Recare, Inc. Method and system for context-sensitive recognition of human input
US20080215326A1 (en) * 2002-12-16 2008-09-04 International Business Machines Corporation Speaker adaptation of vocabulary for speech recognition
US7117153B2 (en) * 2003-02-13 2006-10-03 Microsoft Corporation Method and apparatus for predicting word error rates from text

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11361769B2 (en) 2019-11-05 2022-06-14 International Business Machines Corporation Assessing accuracy of an input or request received by an artificial intelligence system

Also Published As

Publication number Publication date
EP1569202A3 (en) 2007-07-04
CA2493265A1 (en) 2005-08-26
US7505906B2 (en) 2009-03-17
CA2493265C (en) 2011-03-15
EP1569202A2 (en) 2005-08-31
US20050192801A1 (en) 2005-09-01

Similar Documents

Publication Publication Date Title
US7505906B2 (en) System and method for augmenting spoken language understanding by correcting common errors in linguistic performance
US8285546B2 (en) Method and system for identifying and correcting accent-induced speech recognition difficulties
US7542907B2 (en) Biasing a speech recognizer based on prompt context
US7640159B2 (en) System and method of speech recognition for non-native speakers of a language
US7412387B2 (en) Automatic improvement of spoken language
US8024179B2 (en) System and method for improving interaction with a user through a dynamically alterable spoken dialog system
US8457966B2 (en) Method and system for providing speech recognition
US10468016B2 (en) System and method for supporting automatic speech recognition of regional accents based on statistical information and user corrections
Neto et al. Free tools and resources for Brazilian Portuguese speech recognition
US20020123894A1 (en) Processing speech recognition errors in an embedded speech recognition system
EP1089193A2 (en) Translating apparatus and method, and recording medium used therewith
US8457973B2 (en) Menu hierarchy skipping dialog for directed dialog speech recognition
Raux et al. Using task-oriented spoken dialogue systems for language learning: potential, practical applications and challenges
KR20050098839A (en) Intermediary for speech processing in network environments
USH2187H1 (en) System and method for gender identification in a speech application environment
Karat et al. Conversational interface technologies
JPH10504404A (en) Method and apparatus for speech recognition
Rabiner et al. Speech recognition: Statistical methods
US20040019488A1 (en) Email address recognition using personal information
Kamm et al. Design issues for interfaces using voice input
US7853451B1 (en) System and method of exploiting human-human data for spoken language understanding systems
KR101598950B1 (en) Apparatus for evaluating pronunciation of language and recording medium for method using the same
Ferreiros et al. Improving continuous speech recognition in Spanish by phone-class semicontinuous HMMs with pausing and multiple pronunciations
Takrim et al. Speech to Text Recognition
Delić et al. A Review of AlfaNum Speech Technologies for Serbian, Croatian and Macedonian

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: NUANCE COMMUNICATIONS, INC., MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AT&T INTELLECTUAL PROPERTY II, L.P.;REEL/FRAME:041512/0608

Effective date: 20161214