
Speech recognition system and method with automatic syntax generation

Info

Publication number
CA2211869A1
Authority
CA
Canada
Prior art keywords
syntax
predefined
word
identifying
word sequences
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002211869A
Other languages
French (fr)
Inventor
Gabriel F. Groner
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kor Team International Inc
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of CA2211869A1


Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/08: Speech classification or search
    • G10L15/18: Speech classification or search using natural language modelling
    • G10L15/183: Speech classification or search using natural language modelling using context dependencies, e.g. language models
    • G10L15/19: Grammatical context, e.g. disambiguation of the recognition hypotheses based on word sequence rules
    • G10L15/193: Formal grammars, e.g. finite state automata, context free grammars or word networks

Abstract

A syntax rule authoring system automatically generates syntax rules for an application program's predefined inputs, thereby enabling the application program to be used with a syntax based speech recognition system. The syntax rule authoring system includes memory (104) for storing an application program having an associated set of user selectable predefined inputs. The syntax rule authoring system stores in a first data structure for each predefined input an associated longest word sequence for uniquely identifying that predefined input. A word sequence generation procedure automatically generates, for each predefined input, a set of potential identifying word sequences. Each generated potential identifying word sequence includes a subset of the words in the associated longest word sequence. The potential identifying word sequences for all the predefined inputs are stored in a second data structure. A redundant word sequence elimination procedure identifies redundant sets of matching word sequences in the second data structure, where each redundant set of matching word sequences includes potential identifying word sequences for at least two distinct predefined inputs whose word sequences satisfy predefined match criteria. A syntax generation procedure then generates syntax rules, each generated syntax rule corresponding to those of the potential identifying word sequences for a distinct predefined input that are not included in any of the identified redundant sets. The generated syntax rules are suitable for use in a syntax based speech recognition system.

Description

SPEECH RECOGNITION SYSTEM AND METHOD WITH AUTOMATIC SYNTAX GENERATION

The present invention relates generally to speech recognition systems as applied to data input and general computer use, and particularly to a system and method for generating speech syntaxes for multiple speech input contexts so as to automatically provide the least restrictive syntax possible while still providing a unique syntax identification for each defined input in each defined context.

Many database programs include user interface software and programming tools for defining data entry forms, and for linking fields in those data entry forms to fields in database tables. A related application, System and Method for Generating Database Input Forms, U.S. serial no. 08/328,362, filed October 25, 1994, teaches a system and method for converting an existing non-computerized (i.e., paper) data entry form into a computer based data entry form that uses speech recognition for verbal data entry.
Application no. 08/328,362 is hereby incorporated by reference.

The present invention provides a tool for automatic generation of the speech input part of an application program without requiring the application developer to know anything about speech recognition systems. The application developer provides only a set of "menu files" listing the longest word sequence for identifying each predefined input to the program, and the present invention then generates all the syntax and dictionary files needed to enable the developer's application program to be used with a syntax based speech recognition system.

Phoneme based speech recognition systems (also called extendable vocabulary speech recognition systems) are considered desirable because they are speaker independent: there is no need to train the speech recognition system to learn each user's voice patterns. Syntax based speech recognition systems are speech recognition systems that define a set of alternate verbal inputs for each predefined multiple word input value.
The set of alternate verbal inputs accepted as matching a particular multiple word input value is defined by a "syntax rule," sometimes called a syntax statement. Syntax based speech recognition systems are usually also phoneme based speech recognition systems, although it would be possible to have a syntax based speech recognition system that is not phoneme based. The preferred embodiment of the present invention uses phoneme based word recognition and syntax based word sequence recognition.

In a typical application of a speech recognition system, there will be either one speech input context for an entire associated application program, or there will be multiple contexts, such as one for each pull down menu of the application program and one for each special dialog box used by the application program. Alternately, in a data entry context, each defined region of a data entry form can be defined as a separate context. Each context, whether in the application program or data entry form, will typically include a set of global commands (such as "save file," "help," or "exit program") as well as a set of navigation commands (such as "tools menu") for switching to another context.

Within any given context, it is desirable that the speech recognition system be as flexible as possible as to the set of words the user can speak to identify each predefined input value, while still uniquely identifying that predefined input value. In the past, this has been accomplished by a person, typically a computer programmer, manually generating a syntax statement for each predefined input value, where the syntax statement defines all word sequences that will be accepted as identifying that predefined input value. For example, a syntax statement for the input value "move to back" may be of the form:

TAG23 -> move back | move to back

or

TAG23 -> move (to) back

where the symbol "|" is the logical OR operator and parentheses indicate that the word "to" is optional. However, although the author of the above statement may not have thought of it, in the context of this predefined input value, the word "back" might be sufficient to uniquely identify it. In other words, the syntax statement should probably read:

TAG23 -> (move) (to) back

indicating that both the word "move" and the word "to" are optional.

When a predefined input value has more than the three words of the above example, defining the optimal syntax for the input value gets considerably more complex. For instance, for a predefined input value having seven words, there are 2^7 - 1 = 127 potential word sequences that maintain the same word order as the original sequence and that might acceptably and uniquely identify that input value, since each word is either included or omitted and the empty sequence is excluded.
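A minimal Python sketch (an illustration, not code from the patent) that enumerates these order-preserving word subsequences and confirms the count:

    from itertools import combinations

    def subsequences(words):
        # Every non-empty subset of word positions, kept in original order,
        # yields one candidate identifying word sequence.
        n = len(words)
        for r in range(1, n + 1):
            for idx in combinations(range(n), r):
                yield [words[i] for i in idx]

    candidates = list(subsequences("move to back".split()))
    print(len(candidates))   # 7, i.e. 2**3 - 1; a seven word item yields 127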

Of course, the sequences of words that uniquely identify an input value depend on the other predefined input values within the same defined context. Thus, if the same context that included the input value "move to back" included the input value "back one step", then the word "back" could not be used to uniquely identify either of those input values. In contexts with even ten or so predefined input values, checking for all possible conflicts between word sequences can be difficult to do properly. In contexts with several dozen or more input values, this task is extremely difficult for a human to perform manually without spending inordinate amounts of time on the task. An example of where such large contexts can arise are applications where the application designer has decided to make as many commands as possible available while minimizing verbal navigation command requirements.
It is common for many data entry forms, and for many application programs, to use abbreviations, numbers, ordinals, acronyms and initialisms to identify predefined input values and commands. The corresponding syntax statements for a speech recognition system must include equivalent word sequences that correspond to the standard verbalizations of such abbreviations, numbers, ordinals, acronyms and initialisms.

The difference between an acronym and an initialism is as follows. An acronym is formed from pronounceable syllables, even if it represents a "made up word," while an initialism is not pronounceable except as a sequence of letters and numbers. Thus, "IBM" and "YMCA" are initialisms, while "NASA" and "UNICEF" are acronyms. Some words, such as "MSDOS" and "PCNET", are a mix of acronym and initialism components.

In order for a speech recognition system to work with such data entry forms and application programs, the syntax statements defining the range of word sequences for each predefined input value must include equivalent verbal word sequences. For initialisms, this means the user will need to speak the same or substantially the same sequence of letters as found in the initialism.
For an acronym, the user will need to speak the equivalent verbalization of the acronym, and thus the corresponding syntax statement will need to accurately reflect the equivalent verbalization. For an abbreviation, the corresponding syntax statement will need to include the corresponding full word. For numbers and ordinals, the syntax statement will need to include the equivalent full text words.

It is therefore a goal of the present invention to provide a speech recognition system in which syntax statements for all predefined input values for all contexts are automatically generated.

Another goal of the present invention is for the automatically generated syntax statements to provide maximal flexibility in terms of word sequences accepted as identifiers for each predefined input value while still providing a unique syntax identification for each predefined input value in each defined context.

SUMMARY OF THE INVENTION

In summary, the present invention is a syntax rule authoring system that automatically generates syntax rules for an application program's predefined inputs, thereby enabling the application program to be used with a syntax based speech recognition system.

The syntax rule authoring system includes memory for storing an application program having an associated set of user selectable predefined inputs. The system stores in a first data structure for each predefined input an associated longest word sequence for uniquely identifying that predefined input. The syntax rule generation procedure begins by generating, for each predefined input, a set of potential identifying word sequences. Each generated potential identifying word sequence includes a subset of the words in the associated longest word sequence. The potential identifying word sequences for all the predefined inputs are stored in a second data structure.

A redundant word sequence elimination procedure identifies redundant sets of matching word sequences in the second data structure, where each redundant set of matching word sequences includes potential identifying word sequences for at least two distinct predefined inputs whose word sequences satisfy predefined match criteria. A syntax generation procedure then generates syntax rules, including one distinct syntax rule for each defined input. Each generated syntax rule represents those of the potential identifying word sequences for a distinct predefined input that are not included in any of the identified redundant sets. The generated syntax rules are suitable for use in a syntax based speech recognition system.

BRIEF DESCRIPTION OF THE DRAWINGS

Additional objects and features of the invention will be more readily apparent from the following detailed description and appended claims when taken in conjunction with the drawings, in which:

Figure 1 is a block diagram of a computer system for converting paper based data entry forms into computer based data entry forms and for then using the computer based data entry forms to collect and store data, where data entry using a variety of input devices, including voice input devices, is supported.

Figure 2 depicts an example of a paper based data entry form.

Figure 3 depicts an end user subsystem using speech recognition in accordance with the present invention.

Figures 4A and 4B depict pull down menus for an application program, depicting typical contexts in which the speech recognition syntax generation of the present invention may be used.

Figures 5A and 5B depict data structures used by the syntax generator in the preferred embodiment of the present invention.

Figure 6 is a flow chart of the procedure for converting a set of "menu files" into a set of syntax files in the preferred embodiment of the present invention.

Figure 7 is a detailed flow chart of the procedure for processing the predefined input values for a single defined context in the preferred embodiment of the present invention.

DESCRIPTION OF THE PREFERRED EMBODIMENTS

Referring to Figure 1, a computer system 100 incorporating the present invention includes a central processing unit 102, primary and secondary computer memory subsystems 104, a user interface 106 and a document scanner 108. The user interface 106 typically includes a display 110, a microphone 112, an audio speaker 114, and a pointing device 116 such as a mouse or trackball. In the preferred embodiment, the user interface 106 also includes a keyboard 118 for entering text and a printer 119. The scanner 108 is used to scan in paper based data entry forms 120.

The computer memory 104 stores a number of different programs, sometimes herein called procedures, and data structures. Whenever a paper based data entry form is scanned by scanner 108, the resulting image file 132 is stored in computer memory 104. A set of procedures collectively called the "Form Tool" are used to generate a computer based data entry form that is based on the scanned paper based data entry form.

More specifically, the Form Tool 134 stores data in a form definition data structure 136 representing all the objects and object properties required to represent a computer based data entry form, and that data structure 136 is stored as a "Form Definition File" in the computer memory 104. The form definition data structure 136 in the form definition file is then converted by a "compiler" into a "compiled form" 140 for use by a set of data collection procedures collectively called the "Char Tool" 150. The form definition file is preferably a text file editable using conventional text editor programs, while the compiled form 140 is a binary file that is not editable using conventional text editor programs.

The Form Tool 134 also contains procedures for passing a list of all voice commands defined for a form to a procedure herein called the Voice Tool 170. The Voice Tool 170 generates a set of Voice Syntax Files 158 and one Voice Dictionary file 159 for each data entry form. The Voice Dictionary file 159 for a particular data entry form stores phoneme strings that describe the pronunciation of words associated with various form sections, textboxes and buttons, as well as form navigation commands for moving between sections of the data entry form and other global commands common to all data entry forms. To the extent possible, the phoneme strings in the voice dictionary file 159 are obtained by the Voice Tool 170 from a standard word to phoneme transcription dictionary 152 of several tens of thousands of commonly spoken words. For words not in the standard transcription dictionary 152 but specified during the form definition process, phoneme strings to be included in the voice dictionary file 159 are generated using a set of pronunciation rules incorporated in a word to phoneme translator 153. For each word in the defined menu items, the Voice Dictionary File 159 stores several alternate phoneme strings that represent alternate pronunciations of that word.

The Voice Tool 170 generates a separate Voice Syntax File 158 for each distinct context in a data entry form. Each Voice Syntax File 158 represents all the legal voiced commands that a user can specify at a particular point in the data entry process. More particularly, each Voice Syntax file 158 includes references to all the words in the Voice Dictionary file 159 that are candidates for speech recognition, and also specifies all the different words and word orderings that can be used to make various particular data entries.
For instance, after selecting a particular form section, the corresponding voice syntax file will include all syntax strings for all voiced commands that are "legal" from that position in the form. At any point in the process of entering data in a particular data entry form, words spoken by an end user are interpreted using the Voice Dictionary file 159 for the entire data entry form, the Voice Syntax File 158 for the context currently selected or specified by the end user's previously entered commands, and a Voice Model 172 (see Figure 3) appropriate for the end user.

Referring to Figures 1 and 3, after a computerized data entry form has been defined and stored in compiled form 140, end users utilize the computerized data entry form for data entry. The Char Tool procedures 150, or the procedures of any other application program using speech recognition, control the data entry process. In particular, based on the form being used and the section, if any, that the user has selected, the Char Tool procedures 150 select one of the previously defined Voice Syntax files, which establishes the set of legal spoken word inputs for that context. The selected Voice Syntax file governs the operation of the speech recognition procedures 156 until another Voice Syntax file is selected for a different form context. A new voice syntax file (context) is loaded by the Char Tool procedures 150 each time the user enters a new section of a data entry form.

The speech recognition procedures 156 utilize the Voice Syntax files and Voice Dictionary file 159 described above, which define a rule base for interpreting words spoken by an end user (speech inputs). The speech recognition procedures also utilize a Voice Model 172 (of which there may be several, typically including at least one voice model for female speakers and one for male speakers) that stores data representing the relationships between measurable acoustic information (from speech) and phonemes.

When the speech recognition procedures 156 match an end user's spoken input with an entry in the currently selected voice syntax file, the speech recognition procedures return to the Char Tool 150 a value that directly identifies a corresponding input value or user command, which may indicate selection of an object in the data form or may be a form navigation command. The Char Tool procedures 150 also receive information about the specific words spoken by the end user, but in most contexts that information is not used. In an alternate embodiment of the present invention, the Char Tool procedures 150 use the detailed information about the end user's spoken words so as to enter dictated sequences of words into data entry fields in the data entry form.

A set of Voice Generation procedures 160 are optionally used to verbally confirm the end user's verbal commands. Verbal confirmation helps the end user to catch and correct errors made by the speech recognition procedures 156.
The Char Tool 150 accepts keyboard and/or pointer inputs from end users as well as spoken inputs. Once an end user has completed entering data in a form, the entered data is stored both in a transaction log 162 and as a set of data 164 to be stored in specified fields of a database in a database management system 166. As is standard, data stored in the database management system is accessible through a set of database query and report generation procedures 168.

Figure 2 depicts an example of a paper based data entry form. As is the case for many data entry forms, the form is divided into a number of distinct sections, some of which call for checking various boxes applicable to a particular data entry situation, some of which call for entry of text and/or numbers, and some of which may call both for checking boxes and entry of text and/or numbers. Furthermore, most, although not necessarily all, sections of a data entry form include a title or label that helps the user identify the form section.
In the preferred embodiment, each aspect of a data entry form is defined as an "object". Thus, logical sections of the data entry form are each objects, each checkbox button and its associated text or fill in line is an object, each text box for entry of data is an object, and fixed text labels for form sections and text boxes are also objects. Each object has a specified physical location or position (e.g., position of its top left corner) and extent (i.e., height and width) within the form. Each object also has a set of specified properties including (A) links for linking the object to a specified field in a database, and (B) speech input data indicating word sequences for end user voiced selection of the object.

For each object in the data form, the form tool user specifies all necessary voice commands and keywords in an object "property" dialog window. As will be described in more detail below, the Form Tool 134 and Voice Tool 170 procedures (see Figure 1) generate a Voice Dictionary for the entire form and a Voice Syntax file 158 for each context (e.g., section) of the data form based on the text of the specified commands and keywords.

Data Entry by End User Voice Input

Referring to Figure 3, an end user subsystem 200 in the preferred embodiment includes a Voice Dictionary file 159 that stores phoneme strings that describe the pronunciation of words associated with various form sections, textboxes and buttons, as well as the pronunciation of words used in navigation commands common to all data entry forms. Navigation commands include words such as "cancel," "close," "remove," and so on.

To the extent possible, the phoneme strings in Voice Dictionary file 159 are selected from a standard dictionary of several tens of thousands of commonly spoken words. For words not in the standard dictionary but specified during the form definition process, phoneme strings to be included in the Voice Dictionary file 159 are generated using a set of pronunciation rules.

To implement speech recognition without requiring the end user to learn about computer technology, the end user subsystem 200 allows end users to say as little or as much as he/she wants so long as he/she uniquely identifies one of the available items in each context. For example, if the items listed in a menu are "in the left eye," "in the right eye," and "in both eyes," the voice syntax for the first menu item allows the user to select it by saying "left," "left eye," "the left," or "in the left eye." All these possible syntaxes are automatically generated by the voice tool 170 and are stored in the voice syntax files 158, as illustrated by the rule below.
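For illustration (this rule is constructed here from the procedures described later in this document, and is not quoted from the patent), the syntax statement generated for "in the left eye" would carry the tag "left" (its leftmost content word) and collapse to:

left -> (in) (the) left (eye)

The standalone sequence "eye" is eliminated as redundant because it would equally match "in the right eye".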

The current context 202 of the data entry process defines which Voice Syntax file 158 is to be used to decode the next voice input. The context 202 is dynamically updated by the application program (of which Char Tool 150 is just one example) during data entry. Each Voice Syntax file 158 includes references to all the words and/or phrases in the Voice Dictionary file 159 that are candidates for speech recognition when that Voice Syntax file 158 is selected for use (i.e., when the application program 150 sets its context value to that associated with the Voice Syntax file 158). The use of a separate Voice Syntax for each data entry context helps to limit the number of words in the Voice Dictionary that need to be compared with an end user's speech inputs, and reduces the number of wrong matches made.

During the data entry process, the display is constantly updated to let the user know the set of available choices for user selection via spoken input or otherwise, as well as to show the data previously entered in the form section last selected by the end user. When the end user speaks, the speech recognition procedures 156 respond by sending a list of recognized words and a "parse tag" to the application program. The parse tag identifies the spoken menu item or form object without unnecessary detail. For instance, regardless of whether the end user says "left," "left eye," "the left," or "in the left eye," the application program receives the same "left" parse tag, which identifies a menu item, without additional analysis by the application program.

Figures 4A and 4B show pull down menus 210, 212 associated with a drawing program. Each such pull down menu can, when using the present invention, be defined as a distinct context, with the items in the main horizontal menu 214 being used as navigation commands available for selection within all pull down menu contexts.
Referring to Figure 5A, the input to the Voice Tool 170 is a set of menu files 230, each menu file representing all the predefined input values for one context associated with the application program (which is Char Tool 150 in the example shown in Figure 3). Thus, a distinct menu file 230 is provided for each distinct context of the application program. The menu files 230 to be processed are identified by a file list 232.

Each menu file 230 is, in the preferred embodiment, a text file in which each line represents a distinct predefined input value and contains a sequence of words that represent the longest verbal expression associated with that input value. Predefined input values are often herein called menu items for convenience, even though some or all the input values may not be "menu items" in the standard usage of that term. A line in a menu file is marked with an initial left parenthesis "(" if the person using the Voice Tool 170 has determined that selection of the associated menu item must be accomplished by speaking all the words in that line. An illustrative menu file is shown below.
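For illustration only (this sample file is not part of the patent), a menu file for an eye examination context might read:

in the left eye
in the right eye
in both eyes
(no significant findings

where the leading "(" on the last line marks a menu item that must be spoken in full.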

For each menu file 230, the Voice Tool 170 generates a syntax file 234 and a tag file 236. The syntax file contains one syntax statement for each input value listed in the menu file 230, plus a final statement called an S rule.
The standard format for a syntax statement is

Tagvalue -> wordseq1 | wordseq2 | wordseq3 | ...

where Tagvalue is a "tag" that is passed back to the application program (e.g., Char Tool 150) when a spoken input matching that line is recognized, and wordseq1, wordseq2, wordseq3 are alternate word sequences associated with the input value.
The S rule is simply a statement representing the "OR" of all the Tagvalues:

S -> tag1 | tag2 | tag3 | ...

where S is True if a user's verbal input matches any of the syntax statements in the syntax file. The tag file 236 corresponding to each menu file 230 is simply a list of the tags for the syntax lines in the corresponding syntax file 234. There is one tag for each menu item, and the tags are in the same order as the menu items. This makes it easy for the application program, when it receives a tag value from the speech recognition system, to determine which menu item was spoken. An illustrative syntax file and tag file are shown below.
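Continuing the illustrative eye examination menu file above (again constructed for this discussion, not quoted from the patent), the generated syntax file might read:

left -> (in) (the) left (eye)
right -> (in) (the) right (eye)
both -> eyes | (in) both (eyes)
no -> no significant findings
S -> left | right | both | no

and the corresponding tag file would simply list left, right, both and no, in that order. The recombination step described later could merge the "both" alternatives somewhat differently in detail.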

The Voice Tool 170 also generates a voice dictionary file 159 whose contents define a set of phoneme sequences for every distinct word in all the menu files 230.

For an application program that includes a "dictation mode" context, the Voice Dictionary file 159 will include entries for all the words allowable in both the dictation mode context and in all the other input contexts associated with the application program. Preferably, the speech recognition system is put in a syntax rule free mode when the application program is in a dictation mode. When the field of use for the dictation is well defined, such as for patient medical histories in a hospital admittance form, it is often possible to include less than one thousand words in the dictionary file 159 to support that context.

Referring to Figure 5B, the primary data structures used to generate a syntax file 234 from a menu file 230 are a Line() data structure 240 that is used to store a copy of a menu file in memory, and a WordSeq() data structure 242 that is used to store word sequences generated during the syntax statement generation process. Each row of the WordSeq() data structure 242 corresponds to one menu item in the menu file 230 being processed. A "numlines" register 244 stores a value indicating the number of menu items in the menu file 230 being processed. The Word() array 246 is used to store the words from one menu item while it is being processed, and the Cptr() array 247 is used to store pointers to "content" words in the Word() array. Finally, a binary counter 248 is defined with bit 1 being defined as its most significant bit, bit 2 as its next most significant bit, and so on.
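The following sketch (an assumption made for illustration; the patent itself gives no code) models these structures in Python, with names mirroring the patent's arrays:

    # Minimal Python model of the Figure 5B data structures.
    class SyntaxGenState:
        def __init__(self, menu_items):
            self.line = list(menu_items)             # Line() 240: one menu item per entry
            self.numlines = len(menu_items)          # numlines register 244
            self.wordseq = [[] for _ in menu_items]  # WordSeq() 242: one row per menu item
            self.word = []                           # Word() 246: words of the current item
            self.cptr = []                           # Cptr() 247: indices of content words
            self.bc = 0                              # binary counter 248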
Referring to Figures 5B, 6 and 7, the Voice Tool procedure 170 will first be described in a general manner with reference to Figure 6, and then certain aspects of the procedure will be described in more detail with reference to Figure 7.
The Voice Tool procedure 170 begins by selecting a menu file 230 from the file list 232 (250). Then, the menu items in the selected menu file are copied into the Line() data structure, with each menu item being copied into a different entry of Line(). This process also determines the number of menu items in the menu file, which is then stored in the numlines register 244. Next, for each menu item, a tag value is generated and stored in a tag file 236. Then the text of the menu item is revised, if necessary, by expanding certain abbreviations, expanding numbers and ordinals into their equivalent full word text, converting initialisms into sequences of letters, and deleting most punctuation. Finally, the resulting revised text for each menu item is used to generate all possible sequences of input words that could potentially be used to select the menu item (252). Step 252 is described in more detail below with reference to Figures 7A and 7B.

"Content words" are defined for the purposes of this document to mean words other than prepositions, articles and conjunctions. The prepositions, articles and conjunctions that are treated as "non-content words" in the preferred embodiment are as follows:

- prepositions: at, between, by, for, from, in, into, of, on, through, to, with
- articles: a, an, the
- conjunctions: and, but, or, nor

After all the menu items in the selected menu file have been processed and the associated word sequences have been generated, the next step is to delete all redundant word sequences. Two word sequences are considered to be redundant when the two word sequences satisfy predefined match criteria. In the preferred embodiment of the present invention, the predefined match criteria are (A) that both word sequences have the identical content words in the identical order, and (B) that non-content words are ignored for purposes of determining which word sequences are redundant. Thus, when two word sequences have identical content words in the identical order, but different non-content words, the preferred embodiment of the present invention identifies those two word sequences as being redundant. In other embodiments of the present invention, different predefined match criteria could be used. A sketch of this match test appears below.
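A minimal sketch of the match test under the stated criteria (illustrative Python, not the patent's code):

    NON_CONTENT = {
        # prepositions
        "at", "between", "by", "for", "from", "in", "into",
        "of", "on", "through", "to", "with",
        # articles
        "a", "an", "the",
        # conjunctions
        "and", "but", "or", "nor",
    }

    def content_key(seq):
        # Reduce a word sequence to its ordered content words.
        return tuple(w for w in seq.lower().split() if w not in NON_CONTENT)

    def redundant(seq_a, seq_b):
        # Sequences match when their content words are identical and in the
        # same order; non-content words are ignored.
        return content_key(seq_a) == content_key(seq_b)

    print(redundant("in the left eye", "left eye"))   # True
    print(redundant("left eye", "right eye"))         # False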
Whenever the same sequence of content words is included in the word sequences generated for more than one menu item, that sequence of content words is deleted for all menu items because it cannot be used to uniquely identify any menu item. After the redundant word sequences have been deleted, the remaining word sequences for each menu item are recombined, if possible, to reduce the number of syntax terms, and the resulting word sequences are used to form the syntax statement for that menu item. After all syntax statements for a menu file are generated, an S Rule is generated for the syntax file (254). Step 254 is described in more detail below with reference to Figures 7C and 7D.

After syntax and tag files have been generated for all the menu files (252, 254), the "source syntax files" are compiled using a commercially available voice syntax compiler, such as the compiler provided in the Phonetic Engine (a trademark of Speech Systems Incorporated) 500 Speech Recognition System by Speech Systems Incorporated (256). The resulting files are called compiled syntax files.

The compilation process (256) produces, as a side product, a list of all the words found in the syntax files 234. At step 258 a Voice Dictionary file 159 is generated from that word list.

The Voice Dictionary file 159 is generated as follows. One word at a time from the word list is selected, and a standard word to phoneme transcription dictionary 152 (see Figure 1) is searched to determine if the selected word is included in the dictionary (260). If the selected word is found in the standard dictionary (262), the dictionary entry for the selected word is copied into a custom Voice Dictionary File 159 (264).

If the selected word is not found in the standard dictionary (262), transcriptions of the selected word are generated using a commercially available word to phoneme converter 153 (see Figure 1), and the resulting transcriptions are copied into the custom Voice Dictionary File 159 (266).

This process is repeated for all words in the word list, and the resulting Voice Dictionary File 159 is then saved for use by end users.
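A sketch of steps 260 through 266 (illustrative Python; standard_dict and rule_based_transcribe stand in for the transcription dictionary 152 and the word to phoneme translator 153, and are assumed interfaces, not names from the patent):

    def build_voice_dictionary(word_list, standard_dict, rule_based_transcribe):
        # Steps 260-266: copy the standard transcription when one exists;
        # otherwise fall back to rule-based phoneme generation.
        voice_dict = {}
        for word in word_list:
            if word in standard_dict:                 # steps 260-264
                voice_dict[word] = standard_dict[word]
            else:                                     # step 266
                voice_dict[word] = rule_based_transcribe(word)
        return voice_dict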

Referring to Figure 7A, a menu file 230 is processed as follows to produce a source syntax file. The menu file is read (300) and each menu item in the menu file is stored in a separate record of the Line() array (302). A pointer i is initialized to zero (302) and then incremented (304) at the beginning of an instruction loop. Each pass through this loop processes one menu item in the menu file.
The tag for each menu item is defined as the left most word in the Line(i) record, excluding prepositions, articles and conjunctions. If the tag includes any periods, those are deleted (306). If the same tag was previously used for another menu item in the same menu file (308), the line number of the current menu item is appended to the previously constructed tag to generate a unique tag value (310).

Next, all punctuation is deleted from the current menu item, except periods in words formed from capital letters separated by periods, and a left parenthesis at the beginning of the menu item (312). A left parenthesis at the beginning of the menu item is used to indicate that the only acceptable verbal input to select the corresponding menu item is one that states all the words of the menu item in sequence.

At step 314, certain abbreviations are expanded to their full text equivalent, the primary example in the preferred embodiment being that the "times" symbol is converted to the word "times" followed by a space. The symbol "x3" is converted to "times three".

At step 316, numbers and ordinals up to a predefined maximum value (e.g., 9,999) are converted to their full text equivalents. At step 318, initialisms, if expressed in the menu item as capital letters separated by periods, are converted into a list of letters separated by spaces. The word "I.B.M." is converted to "I B M". One special exception is that whenever a letter in an initialism is the same as the preceding one, it is made optional. Thus, the initialism "H.E.E.N.T." is converted to "H E (E) N T". The reason for making the second occurrence of a doubled letter optional is because the second letter is often skipped or slurred together with the first when the initialism is spoken.
At step 320 the menu item is tested to see if it starts with a left parenthesis. If it does, a syntax statement having a single word sequence is generated for the menu item (322), having all the words of the menu item in sequence, and the initial left parenthesis is replaced with an asterisk. Otherwise, the word sequence generation procedure shown in Figure 7B is executed (324) to generate a set of identifying word sequences for the current menu item.
The word sequence generation procedure 324 shown in Figure 7B processes a single menu item. The basic methodology of the word sequence generation procedure is as follows. A binary counter is defined with as many bits as the current menu item has content words. For every non-zero value of the binary counter, a distinct word sequence is generated. In particular, the bit values of the binary counter determine which content words in the menu item are included in the word sequence, such that when the first bit of the binary counter is equal to 1 the first content word of the current menu item is included in the word sequence, and when it is equal to zero the first content word is not included in the word sequence; when the second bit of the binary counter is equal to 1 the second content word of the current menu item is included in the word sequence, and when it is equal to zero the second content word is not included in the word sequence; and so on. Thus, if the menu item being processed has four content words, and the binary counter has a value of 0101, then the second and fourth content words of the menu item are included in the corresponding (fifth) word sequence generated for the menu item.

Referring to Figure 7B, the words of the menu item being processed (herein called the current menu item) are stored individually in an array called Word() 246, a variable MAX is set equal to the number of words in the current menu item, and variables K, p and CW are initialized to zero (340).

The meaning of these variables, and some others used in this procedure, are as follows:

- p: an index into the Word() array.
- CW: the number of content words in the menu item; CW is also used as a counter and index into the Cptr() array 247 while the number and location of the content words in the Word() array is being determined.
- MAX: initially used to store the number of words in the Word() array; then used to store a number equal to 2^CW - 1, which is the number of potential identifying word sequences to be generated for the current menu item.
- BC: a binary counter 248, whose bits are used to determine which content words to include in each generated word sequence.
- b: an index for the bits of the BC binary counter.
- K: an index into the WordSeq() array identifying the entry in the current row of WordSeq() to be generated.

To help follow the operation of this procedure, we will show how the menu item "P1 C1 C2 P2 C3 C4 P3" is processed by the procedure of Figure 7B, where P1, P2 and P3 are non-content words, and C1, C2, C3 and C4 are content words. Note that the terms "content word" and "non-content word" are defined above.

After initialization at step 340, the word sequence generation procedure 324, in steps 342 through 348, sorts through the words in the Word() array to determine how many content words are stored in the Word() array and where they are located. In particular, the procedure selects the first or next word in the Word() array (342) and tests that word to see if it is a content word (344). If so, the content word counter CW is incremented and a pointer to the content word is stored in Cptr(CW) (346). If not, the next word in the Word() array is selected (342). Steps 342 through 346 are repeated until the last word in the Word() array has been processed (348).

Once all the words in the Word() array have been processed, CW is equal to the number of content words in the Word() array and Cptr(1) through Cptr(CW) point to the content words in the Word() array.

Next, a binary counter BC with CW bits is defined and initialized to a value of zero. In addition, the variable MAX is set equal to 2^CW - 1, which is the number of word sequences to be generated (349).

Each new word sequence is initialized by incrementing the binary counter, incrementing the WordSeq index K, storing an initial blank space in the current word sequence, WordSeq(i,K), and initializing the bit index b to 1 (350). If the bit value of the binary counter corresponding to the bit index b is equal to 1 (352), the corresponding content word, Word(Cptr(b)), and its associated optional words are appended to the end of the word sequence being generated (354).

In the preferred embodiment, the non-content words associated with each content word are all the non-content words (if any) following the content word until either another content word or the end of the menu item is reached, except that the first content word in the menu item also has associated with it any initial non-content words in the menu item that precede the first content word. Non-content words are marked as being optional by enclosing each optional word in parentheses when it is stored in the word sequence being generated.
The bit index b is incremented (356) and then tested to see if it is larger than the number of content words in the current menu item (358). If not, the bit of the binary counter corresponding to the value of the bit index b is checked (352) and, if it is equal to 1, the corresponding content word, Word(Cptr(b)), and its associated optional words are appended to the end of the word sequence being generated (354). This process continues until the bit index b exceeds the number of content words in the current menu item (358). At that point, the current word sequence is complete, and the procedure starts generating the next word sequence at step 350 so long as the number of word sequences generated so far is less than the total number (i.e., MAX) to be generated (360).
Note that all word sequences generated for a menu item are stored in a row of the WordSeq() array 242 associated with that menu item, with each word sequence generated for a single menu item being stored in a different column. The words in each word sequence are preceded by a blank space to provide a place for marking some word sequences for deletion and others for protection during subsequent processing of the word sequences.

The full set of fifteen word sequences generated by this procedure for the menu item "P1 C1 C2 P2 C3 C4 P3" is as follows:

BC Value    Word Sequence
0001        C4 (P3)
0010        C3
0011        C3 C4 (P3)
0100        C2 (P2)
0101        C2 (P2) C4 (P3)
0110        C2 (P2) C3
0111        C2 (P2) C3 C4 (P3)
1000        (P1) C1
1001        (P1) C1 C4 (P3)
1010        (P1) C1 C3
1011        (P1) C1 C3 C4 (P3)
1100        (P1) C1 C2 (P2)
1101        (P1) C1 C2 (P2) C4 (P3)
1110        (P1) C1 C2 (P2) C3
1111        (P1) C1 C2 (P2) C3 C4 (P3)

Other equivalent procedures can be used to generate all the possible word sequences of content words in a specified menu item.
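One equivalent procedure, sketched in Python (an illustration, not the patent's code), reproduces the table above:

    def generate_word_sequences(words, is_content):
        # Figure 7B methodology: one word sequence per non-zero value of a
        # CW-bit binary counter; bit 1 (the most significant bit) selects the
        # first content word, bit 2 the second, and so on.  Non-content words
        # are attached to a content word and parenthesized as optional.
        cptr = [i for i, w in enumerate(words) if is_content(w)]
        cw = len(cptr)
        groups = []
        for k, i in enumerate(cptr):
            end = cptr[k + 1] if k + 1 < cw else len(words)
            group = [words[i]] + ["(%s)" % w for w in words[i + 1:end]]
            if k == 0:  # the first content word also takes leading non-content words
                group = ["(%s)" % w for w in words[:i]] + group
            groups.append(group)
        sequences = []
        for bc in range(1, 2 ** cw):          # BC = 1 .. 2**CW - 1
            seq = []
            for b in range(cw):               # bit 1 is the most significant bit
                if bc & (1 << (cw - 1 - b)):
                    seq.extend(groups[b])
            sequences.append(" ".join(seq))
        return sequences

    words = "P1 C1 C2 P2 C3 C4 P3".split()
    for s in generate_word_sequences(words, lambda w: w.startswith("C")):
        print(s)   # "C4 (P3)", "C3", ..., "(P1) C1 C2 (P2) C3 C4 (P3)"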

Referring back to Figure 7A, once all the menu items in a menu file have been processed, the S Rule for the syntax file is generated, as described above.

Referring to Figure 7C, the procedure for eliminating redundant word sequences is as follows. The longest word sequence in each row of the WordSeq() array is marked with an asterisk (370). This protects the full length word sequence for each menu item from deletion.

A row of the WordSeq() array is selected as the current row (372). Word sequences in the current row of WordSeq() (374) that are already marked with a left curly bracket "{" are skipped. For each remaining word sequence in the current row of WordSeq() (374), herein called the "current word sequence," each word sequence in each subsequent row of WordSeq() is identified (376, 378, 380), skipped if it is already marked with an asterisk or left curly bracket "{" (382), and compared with the current word sequence (384).
If the two word sequences have identical sequences of content words (384), the second of the word sequences is marked for deletion with a left curly bracket "{" (386). The first of the compared word sequences is also marked with a left curly bracket if it was not previously marked with an asterisk (because asterisk marked word sequences are protected from deletion). Word sequences previously marked with a left curly bracket are skipped at step 382 because those word sequences are already marked for deletion, and word sequences marked with an asterisk are skipped at step 382 because those word sequences are protected from deletion.
When l-.alcl,i.,y word sequences are found, no other word sequences in the row being compared with the current row need to be checked bec~lJse each word sequence within each row is unique. Thus, processing resumes at step 378 (which advances the row pointer j to a next row) after a ~natcl,i"g sequence is identified.

Referring to Figure 7D, once all of the redundant word sequences in WordSeq() have been marked, the syntax file generation procedure once again steps through the rows of the WordSeq() array (390, 392), deleting all word sequences marked with left curly brackets (393) in each row, recombining the remaining word sequences in the row (394) to the extent such recombining is possible, and then forming a syntax statement (also called a syntax rule) from the resulting word sequences (396).
Word sequences are combined (394) by determining that two word sequences differ by only one content word (i.e., it is present in one sequence and absent in the other), and then replacing both word sequences with a combined word sequence in which the differing content word is noted as being optional. For instance, the following two word sequences:

(P1) C1 C2 (P2)

C2 (P2)

(P1) (C1) C2 (P2).

After all the rows of the WordSeq() array are processed in this manner (steps 392 through 396), an S Rule for the syntax file is generated, as explained above.
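A sketch of the combining step 394 (illustrative Python; it assumes each sequence is a string of content words and parenthesized optional words, as generated above, and handles only the simple case the patent describes):

    def combine_pass(sequences, is_content):
        # Step 394: merge two sequences whose content words differ by exactly
        # one word, marking that word optional in the longer sequence.
        def content(seq):
            return [w for w in seq.split() if is_content(w)]
        merged = True
        while merged:
            merged = False
            for a in list(sequences):
                for b in list(sequences):
                    ca, cb = content(a), content(b)
                    diffs = [i for i in range(len(ca))
                             if len(ca) == len(cb) + 1 and ca[:i] + ca[i + 1:] == cb]
                    if diffs:
                        word = ca[diffs[0]]
                        combined = " ".join(
                            "(%s)" % w if w == word else w for w in a.split())
                        sequences.remove(a)
                        sequences.remove(b)
                        sequences.append(combined)
                        merged = True
                        break
                if merged:
                    break
        return sequences

    seqs = ["(P1) C1 C2 (P2)", "C2 (P2)"]
    print(combine_pass(seqs, lambda w: w.startswith("C")))
    # ['(P1) (C1) C2 (P2)']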

Alternate Embodiments

While the present invention has been described with reference to a few specific embodiments, the description is illustrative of the invention and is not to be construed as limiting the invention. Various modifications may occur to those skilled in the art without departing from the true spirit and scope of the invention as defined by the appended claims.

Claims (18)

WHAT IS CLAIMED IS:
1. A syntax rule authoring system for use in conjunction with a speech recognition system, comprising:
a first data structure storing data corresponding to a set of user selectable predefined inputs associated with an application program;
a second data structure storing for each of said predefined inputs an associated sequence of one or more words, wherein each said associated sequence of one or more words comprises a longest word sequence for uniquely identifying its associated predefined input; and a voice syntax generation procedure, coupled to said second data structure, for generating syntax rules, each generated syntax rule corresponding to a distinct one of said predefined inputs and including a representation of said longest word sequence associated with said one predefined input; at least a plurality of said generated syntax rules each including a representation of additional word sequences, each of said additional word sequences comprising a subset of said longest word sequence that uniquely identifies said one predefined input; wherein said generated syntax rules are suitable for use in a syntax based speech recognition system.
2. The system of claim 1, said voice syntax generation procedure including:
a word sequence generation procedure, coupled to said second data structure, for automatically generating for each said predefined input a set of potential identifying word sequences, each of said potential identifying word sequences including a subset of said longest word sequence associated with said each predefined input; said word sequence generation procedure storing said potential identifying word sequences for all of said predefined inputs in a third data structure;
a redundant word sequence elimination procedure, coupled to said third data structure, for identifying redundant sets of matching word sequences in said third data structure, where each redundant set of matching word sequences includes potential identifying word sequences for at least two distinct ones of said predefined inputs whose word sequences satisfy predefined match criteria; and a syntax generation procedure for generating said syntax rules, each generated syntax rule corresponding to those of said potential identifying word sequences for a distinct one of said predefined inputs not included in any of said identified redundant sets.
3. The system of claim 2, said generated potential identifying word sequences each including the corresponding longest identifying word sequence; and said redundant word sequence elimination procedure including instructions for not including in said redundant sets of matching word sequences said longest identifying word sequence in each of said sets of potential identifying word sequences.
4. The system of claim 1, further including:
a microphone for receiving verbal inputs from a user; and a syntax based speech recognition subsystem, coupled to said microphone and said generated syntax rules, for receiving said verbal inputs from said user and for identifying which of said predefined inputs, if any, correspond to said verbal inputs in accordance with said generated syntax rules.
5. The system of claim 1, wherein said application program has multiple contexts, each context having an associated set of user selectable predefined inputs represented by distinct sets of data in said first data structure;
said second data structure defining for each context, a longest word sequence for uniquely identifying each predefined input associated with said context, and said voice syntax generation procedure generating a separate set of syntax rules for each said context.
6. A syntax rule authoring system for use in conjunction with a speech recognition system, comprising:
memory for storing an application program having an associated set of user selectable predefined inputs;
a first data structure defining for each of said predefined inputs an associated sequence of one or more words, wherein each said associated sequence of one or more words comprises a longest word sequence for uniquely identifying its associated predefined input;
a voice syntax generation procedure, including a word sequence generation procedure, coupled to said first data structure, for automatically generating for each said predefined input a set of potential identifying word sequences, each of said potential identifying word sequences including a subset of said longest word sequence associated with said each predefined input; said word sequence generation procedure storing said potential identifying word sequences for all of said predefined inputs in a second data structure;
a redundant word sequence elimination procedure, coupled to said second data structure, for identifying redundant sets of matching word sequences in said second data structure, where each redundant set of matching word sequences includes potential identifying word sequences for at least two distinct ones of said predefined inputs whose word sequences satisfy predefined match criteria; and a syntax generation procedure for generating syntax rules, each generated syntax rule corresponding to those of said potential identifying word sequences for a distinct one of said predefined inputs not included in any of said identified redundant sets; wherein said generated syntax rules are suitable for use in a syntax based speech recognition system.
7. The system of claim 6, said generated potential identifying word sequences each including the corresponding longest identifying word sequence;
said redundant word sequence elimination procedure including instructions for not including in said redundant sets of matching word sequences said longest identifying word sequence in each of said sets of potential identifying word sequences.
8. The system of claim 6, further including:
a microphone for receiving verbal inputs from a user; and a syntax based speech recognition subsystem, coupled to said microphone and said generated syntax rules, for receiving said verbal inputs from said user and for identifying which of said predefined inputs, if any, correspond to said verbal inputs in accordance with said generated syntax rules.
9. The system of claim 8, wherein said application program has multiple contexts, each context having an associated set of user selectable predefined inputs;
said first data structure storing for each context, a longest word sequence for uniquely identifying each predefined input associated with said context;
said voice syntax generation procedure generating a separate set of syntax rules for each said context;
said application program including instructions for sending context signals to said syntax based speech recognition subsystem indicating which of said contexts is in use by said application program; and said syntax based speech recognition subsystem including instructions for receiving said context signals and for identifying which of said predefined inputs, if any, correspond to said verbal inputs in accordance with the generated set of syntax rules corresponding to said received context signals.
10. A method of generating syntax rules for use in conjunction with a speech recognition system, comprising the steps of:
storing in a first data structure data corresponding to a set of user selectable predefined inputs associated with an application program;
storing in a second data structure a sequence of one or more words for each of said predefined inputs, wherein each said associated sequence of one or more words comprises a longest word sequence for uniquely identifying its associated predefined input; and generating said syntax rules, each generated syntax rule corresponding to a distinct one of said predefined inputs and including a representation of said longest word sequence associated with said one predefined input; at least a plurality of said generated syntax rules each including a representation of additional word sequences, each of said additional word sequences comprising a subset of said longest word sequence that uniquely identifies said one predefined input; wherein said generated syntax rules are suitable for use in a syntax based speech recognition system.
11. The method of claim 10, said syntax rule generating step including:
automatically generating for each said predefined input a set of potential identifying word sequences, each of said potential identifying word sequences including a subset of said longest word sequence associated with said each predefined input, and storing said potential identifying word sequences for all of said predefined inputs in a third data structure;
identifying redundant sets of matching word sequences in said third data structure, where each redundant set of matching word sequences includes potential identifying word sequences for at least two distinct ones of said predefined inputs whose word sequences satisfy predefined match criteria; and
generating said syntax rules, each generated syntax rule corresponding to those of said potential identifying word sequences for a distinct one of said predefined inputs that are not included in any of said identified redundant sets.
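The claims do not pin down which subsets of the longest word sequence are generated as candidates; one plausible reading, used in this hypothetical sketch, is every non-empty contiguous sub-phrase, which by construction always includes the longest sequence itself.

```python
# Hypothetical candidate generator; restricting "subset" to contiguous,
# order-preserving sub-phrases is an assumption, not the patent's definition.
def potential_sequences(longest):
    words = longest.split()
    n = len(words)
    return {tuple(words[i:j]) for i in range(n) for j in range(i + 1, n + 1)}

print(sorted(potential_sequences("save file as")))
# [('as',), ('file',), ('file', 'as'), ('save',), ('save', 'file'),
#  ('save', 'file', 'as')]
```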
12. The method of claim 11, said generated potential identifying word sequences each including the corresponding longest identifying word sequence; and said redundant sets identifying step including the step of excluding from said redundant sets of matching word sequences said longest identifying word sequence in each of said sets of potential identifying word sequences.
13. The method of claim 10, further including:
providing a microphone for receiving verbal inputs from a user; and
receiving said verbal inputs from said user and identifying which of said predefined inputs, if any, correspond to said verbal inputs in accordance with said generated syntax rules.
14. The method of claim 10, wherein said application program has multiple contexts, each context having an associated set of user selectable predefined inputs represented by distinct sets of data in said first data structure;
said second storing step including storing in said second data structure, for each context, a longest word sequence for uniquely identifying each predefined input associated with said context; and
said syntax generating step including generating a separate set of syntax rules for each said context.
15. A method of generating syntax rules for use in conjunction with a speech recognition system, comprising the steps of:
storing an application program having an associated set of user selectable predefined inputs;
storing in a first data structure a sequence of one or more words for each of said predefined inputs, wherein each said associated sequence of one or more words comprises a longest word sequence for uniquely identifying its associated predefined input;
automatically generating for each said predefined input a set of potential identifying word sequences, each of said potential identifying word sequences including a subset of said longest word sequence associated with said each predefined input, and storing said potential identifying word sequences for all of said predefined inputs in a second data structure;
identifying redundant sets of matching word sequences in said second data structure, where each redundant set of matching word sequences includes potential identifying word sequences for at least two distinct ones of said predefined inputs whose word sequences satisfy predefined match criteria; and
generating said syntax rules, each generated syntax rule corresponding to those of said potential identifying word sequences for a distinct one of said predefined inputs that are not included in any of said identified redundant sets.
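Composing the earlier sketches gives a hedged end-to-end walk-through of the method of claim 15; every identifier here comes from those sketches, not from the patent.

```python
# Illustrative pipeline: longest sequences -> candidates -> redundancy
# elimination -> syntax rules -> recognition-side lookup.
longest = {"save file": "save file", "print file": "print file"}
candidates = {cmd: potential_sequences(seq) for cmd, seq in longest.items()}
rules = eliminate_and_generate(candidates)   # ("file",) is shared, so dropped
print(identify_input("print", rules))        # -> "print file"
```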
16. The method of claim 15, said generated potential identifying word sequences each including the corresponding longest identifying word sequence; and said redundant sets identifying step including the step of excluding from said redundant sets of matching word sequences said longest identifying word sequence in each of said sets of potential identifying word sequences.
17. The method of claim 15, further including:
providing a microphone for receiving verbal inputs from a user; and
receiving said verbal inputs from said user and identifying which of said predefined inputs, if any, correspond to said verbal inputs in accordance with said generated syntax rules.
18. The method of claim 17, wherein said application program has multiple contexts, each context having an associated set of user selectable predefined inputs represented by distinct sets of data in said first data structure;
said second storing step including storing in said second data structure, for each context, a longest word sequence for uniquely identifying each predefined input associated with said context;
said syntax generating step including generating a separate set of syntax rules for each said context;
said application program generating context signals indicating which of said contexts is in use by said application program; and
identifying which of said predefined inputs, if any, correspond to said verbal inputs in accordance with the generated set of syntax rules corresponding to said received context signals.
CA002211869A 1995-01-31 1996-01-25 Speech recognition system and method with automatic syntax generation Abandoned CA2211869A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/381,202 1995-01-31
US08/381,202 US5668928A (en) 1995-01-31 1995-01-31 Speech recognition system and method with automatic syntax generation

Publications (1)

Publication Number Publication Date
CA2211869A1 true CA2211869A1 (en) 1996-08-08

Family

ID=23504101

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002211869A Abandoned CA2211869A1 (en) 1995-01-31 1996-01-25 Speech recognition system and method with automatic syntax generation

Country Status (7)

Country Link
US (1) US5668928A (en)
EP (1) EP0807306B1 (en)
JP (1) JPH10513275A (en)
AU (1) AU690830B2 (en)
CA (1) CA2211869A1 (en)
DE (1) DE69607601T2 (en)
WO (1) WO1996024129A1 (en)

Families Citing this family (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6014626A (en) * 1994-09-13 2000-01-11 Cohen; Kopel H. Patient monitoring system including speech recognition capability
US6151598A (en) * 1995-08-14 2000-11-21 Shaw; Venson M. Digital dictionary with a communication system for the creating, updating, editing, storing, maintaining, referencing, and managing the digital dictionary
US5867817A (en) * 1996-08-19 1999-02-02 Virtual Vision, Inc. Speech recognition manager
US5850429A (en) * 1996-12-11 1998-12-15 Lucent Technologies Inc. Method and system for remotely controlling an interactive voice response system
US6456974B1 (en) * 1997-01-06 2002-09-24 Texas Instruments Incorporated System and method for adding speech recognition capabilities to java
EP0856787B1 (en) * 1997-01-25 2001-06-13 Kabushiki Kaisha Toshiba Adjustment rule generating method, adjustment rule generating apparatus, adjustment control method, and adjustment control apparatus
US6587122B1 (en) * 1998-01-30 2003-07-01 Rockwell Automation Technologies, Inc. Instruction syntax help information
US6418431B1 (en) * 1998-03-30 2002-07-09 Microsoft Corporation Information retrieval and speech recognition based on language models
US6321226B1 (en) * 1998-06-30 2001-11-20 Microsoft Corporation Flexible keyboard searching
US6101338A (en) * 1998-10-09 2000-08-08 Eastman Kodak Company Speech recognition camera with a prompting display
US6631368B1 (en) 1998-11-13 2003-10-07 Nortel Networks Limited Methods and apparatus for operating on non-text messages
US6208968B1 (en) * 1998-12-16 2001-03-27 Compaq Computer Corporation Computer method and apparatus for text-to-speech synthesizer dictionary reduction
US6400809B1 (en) * 1999-01-29 2002-06-04 Ameritech Corporation Method and system for text-to-speech conversion of caller information
US6477240B1 (en) * 1999-03-31 2002-11-05 Microsoft Corporation Computer-implemented voice-based command structure for establishing outbound communication through a unified messaging system
US6574599B1 (en) 1999-03-31 2003-06-03 Microsoft Corporation Voice-recognition-based methods for establishing outbound communication through a unified messaging system including intelligent calendar interface
US6876991B1 (en) 1999-11-08 2005-04-05 Collaborative Decision Platforms, Llc. System, method and computer program product for a collaborative decision platform
US6813603B1 (en) * 2000-01-26 2004-11-02 Korteam International, Inc. System and method for user controlled insertion of standardized text in user selected fields while dictating text entries for completing a form
ES2362583T3 2000-08-25 2011-07-07 Contura A/S POLYACRYLAMIDE HYDROGEL AND ITS USE AS AN ENDOPROSTHESIS.
US7181400B2 (en) * 2001-04-20 2007-02-20 Intel Corporation Method and apparatus to provision a network appliance
US7849400B2 (en) * 2001-09-13 2010-12-07 Speech Products, Inc. Electronic charting system
US7292689B2 (en) 2002-03-15 2007-11-06 Intellisist, Inc. System and method for providing a message-based communications infrastructure for automated call center operation
US20050096910A1 (en) * 2002-12-06 2005-05-05 Watson Kirk L. Formed document templates and related methods and systems for automated sequential insertion of speech recognition results
US7774694B2 2002-12-06 2010-08-10 3M Innovative Properties Company Method and system for server-based sequential insertion processing of speech recognition results
US7444285B2 (en) * 2002-12-06 2008-10-28 3M Innovative Properties Company Method and system for sequential insertion of speech recognition results to facilitate deferred transcription services
US7263483B2 (en) * 2003-04-28 2007-08-28 Dictaphone Corporation USB dictation device
US7369998B2 (en) * 2003-08-14 2008-05-06 Voxtec International, Inc. Context based language translation devices and methods
US20050075884A1 (en) * 2003-10-01 2005-04-07 Badt Sig Harold Multi-modal input form with dictionary and grammar
DE102005031611B4 (en) * 2005-07-06 2007-11-22 Infineon Technologies Ag Proof of a change in the data of a data record
US7697827B2 (en) 2005-10-17 2010-04-13 Konicek Jeffrey C User-friendlier interfaces for a camera
US7861159B2 (en) * 2006-04-07 2010-12-28 Pp Associates, Lp Report generation with integrated quality management
US8121626B1 (en) 2006-06-05 2012-02-21 Callwave, Inc. Method and systems for short message forwarding services
WO2008036879A2 (en) * 2006-09-21 2008-03-27 Nuance Communications, Inc. Grammar generation for password recognition
US10031830B2 (en) * 2006-10-13 2018-07-24 International Business Machines Corporation Apparatus, system, and method for database management extensions
US8612230B2 (en) * 2007-01-03 2013-12-17 Nuance Communications, Inc. Automatic speech recognition with a selection list
US8447285B1 (en) 2007-03-26 2013-05-21 Callwave Communications, Llc Methods and systems for managing telecommunications and for translating voice messages to text messages
US8325886B1 (en) 2007-03-26 2012-12-04 Callwave Communications, Llc Methods and systems for managing telecommunications
US7813929B2 (en) * 2007-03-30 2010-10-12 Nuance Communications, Inc. Automatic editing using probabilistic word substitution models
US8583746B1 (en) 2007-05-25 2013-11-12 Callwave Communications, Llc Methods and systems for web and call processing
US8145655B2 (en) * 2007-06-22 2012-03-27 International Business Machines Corporation Generating information on database queries in source code into object code compiled from the source code
DE102007042842A1 (en) * 2007-09-07 2009-04-09 Daimler Ag Method and device for recognizing alphanumeric information
US20110184736A1 (en) * 2010-01-26 2011-07-28 Benjamin Slotznick Automated method of recognizing inputted information items and selecting information items
US9081829B2 (en) 2011-10-05 2015-07-14 Cumulus Systems Incorporated System for organizing and fast searching of massive amounts of data
US20130091266A1 (en) 2011-10-05 2013-04-11 Ajit Bhave System for organizing and fast searching of massive amounts of data
US9081834B2 (en) 2011-10-05 2015-07-14 Cumulus Systems Incorporated Process for gathering and special data structure for storing performance metric data
SG11201508013YA (en) * 2013-03-29 2015-10-29 Cumulus Systems Inc Organizing and fast searching of data
US9361084B1 (en) 2013-11-14 2016-06-07 Google Inc. Methods and systems for installing and executing applications
US9606983B1 (en) * 2014-08-27 2017-03-28 Amazon Technologies, Inc. Human readable mechanism for communicating binary data
US9866393B1 (en) 2014-12-22 2018-01-09 Amazon Technologies, Inc. Device for creating reliable trusted signatures
US10110385B1 (en) 2014-12-22 2018-10-23 Amazon Technologies, Inc. Duress signatures
US9819673B1 (en) 2015-06-24 2017-11-14 Amazon Technologies, Inc. Authentication and authorization of a privilege-constrained application
US11062707B2 (en) 2018-06-28 2021-07-13 Hill-Rom Services, Inc. Voice recognition for patient care environment
US11881219B2 (en) 2020-09-28 2024-01-23 Hill-Rom Services, Inc. Voice control in a healthcare facility
US11829396B1 (en) * 2022-01-25 2023-11-28 Wizsoft Ltd. Method and system for retrieval based on an inexact full-text search

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5228110A (en) * 1989-09-15 1993-07-13 U.S. Philips Corporation Method for recognizing N different word strings in a speech signal
US5425128A (en) * 1992-05-29 1995-06-13 Sunquest Information Systems, Inc. Automatic management system for speech recognition processes
EP0602296A1 (en) * 1992-12-17 1994-06-22 International Business Machines Corporation Adaptive method for generating field dependant models for intelligent systems
US5384892A (en) * 1992-12-31 1995-01-24 Apple Computer, Inc. Dynamic language model for speech recognition
DE69326900T2 (en) * 1992-12-31 2000-07-20 Apple Computer VOICE RECOGNITION SYSTEM
US5390279A (en) * 1992-12-31 1995-02-14 Apple Computer, Inc. Partitioning speech rules by context for speech recognition
US5390073A (en) * 1993-01-11 1995-02-14 Maxwell Laboratories, Inc. Dielectric material containing dipolar molecules
EP0618565A3 (en) * 1993-04-02 1996-06-26 Ibm Interactive dynamic grammar constraint in speech recognition.

Also Published As

Publication number Publication date
AU4946696A (en) 1996-08-21
US5668928A (en) 1997-09-16
EP0807306A1 (en) 1997-11-19
AU690830B2 (en) 1998-04-30
EP0807306B1 (en) 2000-04-05
WO1996024129A1 (en) 1996-08-08
DE69607601D1 (en) 2000-05-11
JPH10513275A (en) 1998-12-15
DE69607601T2 (en) 2000-11-23

Similar Documents

Publication Publication Date Title
CA2211869A1 (en) Speech recognition system and method with automatic syntax generation
EP0681284B1 (en) Speech interpreter with a unified grammar compiler
US6023697A (en) Systems and methods for providing user assistance in retrieving data from a relational database
US5970448A (en) Historical database storing relationships of successively spoken words
US7529678B2 (en) Using a spoken utterance for disambiguation of spelling inputs into a speech recognition system
US7072837B2 (en) Method for processing initially recognized speech in a speech recognition session
US5748841A (en) Supervised contextual language acquisition system
US6937983B2 (en) Method and system for semantic speech recognition
US5819220A (en) Web triggered word set boosting for speech interfaces to the world wide web
US7783474B2 (en) System and method for generating a phrase pronunciation
US11776533B2 (en) Building a natural language understanding application using a received electronic record containing programming code including an interpret-block, an interpret-statement, a pattern expression and an action statement
US8086444B2 (en) Method and system for grammar relaxation
US7742924B2 (en) System and method for updating information for various dialog modalities in a dialog scenario according to a semantic context
CA2731013C (en) Integrated language model, related systems and methods
GB2355833A (en) Natural language input
JP3634863B2 (en) Speech recognition system
US20020116194A1 (en) Method for preserving contextual accuracy in an extendible speech recognition language model
KR20120052591A (en) Apparatus and method for error correction in a continuous speech recognition system
US20060136195A1 (en) Text grouping for disambiguation in a speech application
Di Fabbrizio et al. AT&T help desk.
US7548857B2 (en) Method for natural voice recognition based on a generative transformation/phrase structure grammar
Ferreiros et al. Increasing robustness, reliability and ergonomics in speech interfaces for aerial control systems
CA2498736A1 (en) System and method for generating a phrase pronunciation
Lajoie et al. Application of language technology to Lotus Notes based messaging for command and control
Lai A guided parsing and understanding system for spoken utterance inputs

Legal Events

Date Code Title Description
FZDE Discontinued