CA1335001C - Pattern matching system - Google Patents

Pattern matching system

Info

Publication number
CA1335001C
Authority
CA
Canada
Prior art keywords
time point
matching system
distance
time
pattern matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CA000561475A
Other languages
French (fr)
Inventor
Hiroaki Sakoe
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NEC Corp
Original Assignee
NEC Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP62061736A external-priority patent/JPS63226697A/en
Priority claimed from JP62061734A external-priority patent/JPS63226695A/en
Application filed by NEC Corp filed Critical NEC Corp
Application granted granted Critical
Publication of CA1335001C publication Critical patent/CA1335001C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/08Speech classification or search
    • G10L15/12Speech classification or search using dynamic programming techniques, e.g. dynamic time warping [DTW]

Abstract

A distance dn(i, j) between a feature ai of an input pattern and a feature bjn of a reference pattern is calculated for a time point (i, j). The minimum cumulative distance of the distances obtained previous to the time point (i, j) is calculated as an optimal cumulative value gn(i, j) in accordance with the recurrence formula of a dynamic programming method. A range of combinations (n, j) of words n and the time points j for which new optimal cumulative values gn(i, j) are to be calculated is restricted based on the previous optimal cumulative value for each time point i, and memory areas exclusively for the optimal cumulative values of the combinations (n, j) are produced.

Description

PATTERN MATCHING SYSTEM

BACKGROUND OF THE INVENTION:
The present invention relates to a pattern matching system available for various systems such as a speech recognition system.
There have been proposed various technologies of pattern matching in speech recognition; the DP (dynamic programming) matching method disclosed in U.S. Pat. No. 3,816,722 is one of the most popularly used methods. The clockwise DP method disclosed in U.S. Pat. No. 4,592,086 is a continuous or an isolated speech recognition method with syntax control. For simplicity's sake, description will be given herein only to the clockwise DP method of the type for isolated speech recognition.
A set of words is assumed to be an object of recognition, wherein the name of a word is designated by a number n:
{ n | n = 1, 2, ..., N }
A reference pattern is prepared for each word as below:

Bn = b1n, b2n, ..., bjn, ..., bJnn
wherein j denotes a time point, and bjn a feature of the reference pattern Bn at the time point j.

An input speech pattern is expressed similarly as below:
A = a1, a2, ..., ai, ..., aI

Speech recognition is performed by calculating the distance between the input pattern A and each reference pattern Bn, and determining the word n giving the minimum cumulative distance as the recognition result.
In the DP matching method, the above cumulative distance is calculated by the dynamic programming (DP) as follows:
o initial condition
gn(1, 1) = dn(1, 1) .... (1)
o recurrence formula
gn(i, j) = dn(i, j) + min { gn(i - 1, j), gn(i - 1, j - 1), gn(i - 1, j - 2) } .... (2)
i = 1, 2, ..., I
j = 1, 2, ..., Jn

o the distance D(A, Bn) between patterns is determined in accordance with the formula (3):
D(A, Bn) = gn(I, Jn) .... (3)
wherein dn(i, j) denotes the distance between the features ai and bjn, i.e., || ai - bjn ||. A cumulative distance is expressed by gn(i, j) and is called an optimal cumulative distance.
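The recurrence above can be sketched in a few lines of Python. This is a minimal illustration under the assumption of a Euclidean feature distance; the helper `dist` and the function name `dp_match` are ours, not the patent's.

```python
import math

INF = float("inf")

def dp_match(A, B):
    """Cumulative DP distance D(A, B) per formulas (1)-(3).

    A: input pattern, list of feature vectors a_1 .. a_I
    B: reference pattern, list of feature vectors b_1 .. b_J
    """
    def dist(a, b):  # stand-in for the feature distance ||a - b||
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    I, J = len(A), len(B)
    # g[i][j] holds the optimal cumulative value g(i, j); 1-based via padding
    g = [[INF] * (J + 1) for _ in range(I + 1)]
    g[1][1] = dist(A[0], B[0])                       # initial condition (1)
    for i in range(2, I + 1):
        for j in range(1, J + 1):
            prev = min(g[i - 1][j],                  # step along i only
                       g[i - 1][j - 1] if j >= 2 else INF,
                       g[i - 1][j - 2] if j >= 3 else INF)
            if prev < INF:
                g[i][j] = dist(A[i - 1], B[j - 1]) + prev   # recurrence (2)
    return g[I][J]                                   # distance (3): g(I, J)
```

The three candidate predecessors mirror the three terms under the min in formula (2), so the path slope along j is restricted to steps of 0, 1 or 2 per input frame.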
The DP matching process was initially executed for each word separately, but is improved in the clockwise DP method so as to be executed in parallel for plural words. More particularly, in the clockwise DP method the optimal cumulative value gn(i, j) is calculated for the n and j specified by all the combinations of n and j at each time i of the input pattern, in a space defined by i, j and n as shown in FIG. 1, with increasing time i to complete the process.
In practice, however, it is not necessary to prepare work areas for all the space. The calculation of the formula (2) can proceed with work areas for two time points, i.e., i and (i - 1). This clockwise DP method is excellent in real-time processing with a shorter response time, since the process can be executed in synchronization with the input of the feature ai of the input pattern.
However, in the DP matching method, the amount of the distance calculation still causes a problem: the features ai and bjn are generally vectors of ten or more dimensions, and it is an extremely heavy load for ordinary hardware to conduct the distance calculations Σ(n = 1 to N) Jn times within one clock (usually about 10 ms).
In order to solve the problem, L. R. Rabiner et al. have proposed a method using vector quantization in a paper entitled "On the Performance of Isolated Word Speech Recognizers Using Vector Quantization and Temporal Energy Contours", AT&T Bell Laboratories Technical Journal, Vol. 63, No. 7, September 1984, pp. 1245-1260. In the method, a set {Ck} of code vectors is prepared. The features {bjn} of the reference pattern are approximated by the code vectors. Namely, each reference pattern Bn is expressed as a time series of the numbers k = k(n, j) specifying the code vector Ck which is the most similar to each bjn. Then, the distance D(k) between the feature ai of the input pattern and each code vector is calculated and stored in a table during the DP matching processing. At the time of recurrence formula calculation, the formula (2) is calculated with reference to the equation (4).

dn(i, j) = D(k(n, j)) .... (4)
The distance calculation amount can be reduced by employing the vector quantization method, but not to a satisfactory extent.
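The table-lookup idea of equation (4) can be sketched as follows. The names `quantize` and `frame_distances`, and the tiny code book, are illustrative only, not from the patent.

```python
import math

def quantize(b, codebook):
    """k(n, j): index of the code vector most similar to feature b."""
    return min(range(len(codebook)),
               key=lambda k: math.dist(b, codebook[k]))

def frame_distances(a_i, codebook):
    """D(k): distance between input feature a_i and every code vector,
    computed once per input frame i and then shared by all words."""
    return [math.dist(a_i, c) for c in codebook]

# A reference pattern B^n is stored as its index series k(n, j):
codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
ref = [quantize(b, codebook) for b in [(0.1, 0.0), (0.9, 0.1)]]  # -> [0, 1]
D = frame_distances((1.0, 0.1), codebook)
d_n = [D[k] for k in ref]   # d^n(i, j) = D(k(n, j)), equation (4)
```

The per-frame cost is one distance per code vector instead of one per (n, j) pair, which is the saving the text describes.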
In general, the required number of code vectors is more than 256 for maintaining a good recognition rate. If it is assumed that one calculation of D(k) requires 40 µs, the calculation for 256 vectors would take about 10 ms. In other words, one clock for i (10 ms) is almost completely consumed by the calculation of vector distances, leaving no time for the recurrence formula calculation.

For this reason, high speed hardware specially designed for the purpose has heretofore been used for executing DP matching.

Moreover, when the above-mentioned clockwise DP method is used for recognition of large vocabulary speech, the work area inevitably becomes large in order to retain gn(i, j), and the calculation amount is enormous. More specifically, the recurrence formula (2) should be executed and the result stored for all the combinations of n and j within one cycle of i. Where the reference pattern length is Jn = 30 and N = 1000 words are to be recognized, the formula (2) should be calculated at as many as 3 x 10^4 points, and the results should be retained.

SUMMARY OF THE INVENTION:
An object of this invention is to provide a pattern matching system which can remarkably reduce the amount of calculation for DP matching.
Another object of this invention is to provide a pattern matching system which enables a high speed DP matching operation.
Still another object of this invention is to provide a pattern matching system which enables high speed operation for large vocabulary speech recognition.
Still another object of this invention is to provide a speech recognition system which enables recognition with a shorter response time.
Still another object of this invention is to provide a pattern matching system with smaller work areas.

According to one aspect of the present invention, there is provided a pattern matching system wherein, at each time i in the time series of the features of an input pattern, an optimal cumulative value indicating the minimum value of the cumulative distance between the input pattern and the reference patterns for all combinations of n and j is successively determined with increasing i, the n and j designating each reference pattern Bn and the time j in the time series of the features of each reference pattern; the system determines the optimal cumulative distance only for the combinations of n and j which are related to the optimal cumulative values smaller than a predetermined threshold value at the time i, and makes a memory area dynamically for the optimal cumulative value gn(i, j) for each such combination (n, j).
According to another aspect of the invention, there is provided a pattern matching system comprising: means for storing a reference pattern of each word n as a feature time series bjn for a time point j; means for temporarily storing an input speech pattern as a feature time series ai for a time point i; means for calculating a distance dn(i, j) between the features ai and bjn for a time point (i, j) and determining the minimum cumulative distance of the distances obtained previous to the time point (i, j) as an optimal cumulative value gn(i, j) in accordance with the recurrence formula of a dynamic programming method; restriction means for restricting a range of combinations (n, j) of the words n and the time points j for which new optimal cumulative values gn(i, j) are to be calculated, based on the previous optimal cumulative value for each time point i; and means for producing a memory area exclusively for the optimal cumulative values of the combinations (n, j).
In the above invention, the memory area generation is controlled by the mutual relation between the combination (n, j) at the time point i and the combination (n', j') of a word n' and a time point j' processed at the time point (i - 1) one time point previous to the time point i.
Other objects and features will be clarified from the following description with reference to the drawings.

BRIEF DESCRIPTION OF THE DRAWINGS:
FIG. 1 is a graph to explain the clockwise DP matching used in an embodiment of this invention;
FIG. 2 is a chart to explain the principle of this invention;
FIGs. 3A and 3B are structural views of work areas used in the embodiments of this invention;
FIGs. 4A through 4C are diagrams to show matching paths to explain the principle of this invention;
FIG. 5 is a structural view of an embodiment of the speech recognizer according to this invention;
FIGs. 6A through 6E are operational flow charts to describe the embodiment of FIG. 5;

FIG. 7 is a structural view of another embodiment of the speech recognizer according to this invention; and
FIG. 8 is an operational flow chart of the embodiment of FIG. 7.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
DP matching seeks an optimal path giving the minimum total sum of the distances dn(i, j) from the point (1, 1) to the point (I, Jn), i.e., the minimum cumulative distance D(A, Bn), for each word. Since the optimal cumulative value gn(i, j) represents a cumulative value of the distances dn(i, j) from the point (1, 1) to the point (i, j), a point (i, j) having a large gn(i, j) is less likely to be on the optimal path. The system of this invention increases the operation speed by omitting the recurrence formula calculation in DP matching when the value gn(i, j) is estimated to be large.
More specifically, as shown in FIG. 2, the optimal cumulative values gn(i - 1, j) calculated at the previous clock (time point) (i - 1) are evaluated by a specified standard, and a set ω (illustrated with the mark o in FIG. 2) of points (n, j) having small cumulative values is determined, so that the recurrence formula is calculated only for points in the proximity of these points.
Although this method reduces the amount of calculation, there still remains the problem that the memory area for gn(i, j) is large. The system according to the present invention produces a work area only for storing the gn(i, j) which are newly determined, eliminating the necessity of storing the gn(i, j) which need not be determined. FIGs. 3A and 3B show embodiments of the structure of such work areas. In the first area of FIG. 3A, gn(i, j) is stored in G1(k), and the corresponding n and j are stored in n1(k) and j1(k), respectively. In the second area of FIG. 3B, the information at the time point (i - 1) is stored: gn(i - 1, j) is stored in G2(k), and the corresponding n and j are stored in n2(k) and j2(k), respectively.
Under such a manner of storing gn(i - 1, j) and gn(i, j), it can become impossible to execute the recurrence formula (2), since a part or all of the right-side terms gn(i - 1, j), gn(i - 1, j - 1) and gn(i - 1, j - 2) might not be stored in G2(k), n2(k) and j2(k).
According to the invention, the recurrence formula calculation and the work area production for gn(i, j) are controlled on the basis of the mutual relation between the (n, j) which is to be processed and the (n', j') which was processed immediately before.
The principle of this invention will be described referring to the recurrence formula (2) of dynamic programming. A large optimal cumulative value gn(i - 1, j) would be excluded by the minimum value detection in the recurrence formula (2) and is least likely to contribute to the future determination of the optimal cumulative value. Therefore, a threshold Θ(i) is determined at each time point i to omit the processing which involves an optimal cumulative value of:

gn(i - 1, j) ≥ Θ(i) .... (5)
In other words, the processing is conducted for the (n, j) corresponding to n = n2(k) and j = j2(k) where G2(k) < Θ(i). Then, a combination (n', j') of n' = n2(k') and j' = j2(k'), where G2(k') < Θ(i), which was processed at the time immediately prior to k is considered. Registers R0, R1 and R2 are incorporated within a processor which conducts the recurrence formula calculation. At the time point when the processing at (n', j') has been completed, the relations R1 = gn'(i - 1, j') and R2 = gn'(i - 1, j' - 1) hold. Under this state, the processing at (n, j) can be classified as follows in accordance with the relation between (n, j) and (n', j').

(A) under n = n', j - j' = 1:
This condition corresponds to the state shown in FIG. 4A. More particularly, gn(i, j') has been calculated at the point (i, j'), and gn(i, j) is calculated at the next point j = j' + 1. The content of the register R1 is gn(i - 1, j') = gn(i - 1, j - 1), while that of the register R2 is gn(i - 1, j' - 1) = gn(i - 1, j - 2). G2(k) = gn(i - 1, j) is read out into the register R0, from which gn(i, j) is calculated as follows:

gn(i, j) = dn(i, j) + min (R0, R1, R2) .... (6)
This value is written in G1(k'), and n and j are also written in n1(k') and j1(k'), respectively. After new information has thus been written in G1(k'), n1(k') and j1(k'), k' is counted up by one. Then, if the contents of the registers are transferred as R1 → R2, R0 → R1, the relations R1 = gn(i - 1, j') and R2 = gn(i - 1, j' - 1) are established for the processing at the next (n, j).
In short, the operation under n = n' and j - j' = 1 becomes as below, wherein R0 has been set with G2(k) = gn(i - 1, j):

(1) dn(i, j) + min (R0, R1, R2) → G1(k')
(2) n → n1(k'), j → j1(k') .... (7)
(3) k' + 1 → k'
(4) R1 → R2, R0 → R1

(B) under n = n', j - j' = 2:
This condition corresponds to the state shown in FIG. 4B, wherein gn(i - 1, j' + 1) is not included in the G2(k) table. However, gn(i, j' + 1) is calculated with R1 = gn(i - 1, j') and R2 = gn(i - 1, j' - 1) as follows:
gn(i, j' + 1) = dn(i, j' + 1) + min (R1, R2)
Further, since gn(i - 1, j') = gn(i - 1, j - 2), the calculation is carried out as:
gn(i, j) = dn(i, j) + min (R0, R1).
The above operations can be summarized as follows:
(1) dn(i, j' + 1) + min (R1, R2) → G1(k')
(2) n → n1(k'), j' + 1 → j1(k')
(3) k' + 1 → k'
(4) dn(i, j) + min (R0, R1) → G1(k') .... (8)
(5) n → n1(k'), j → j1(k')
(6) k' + 1 → k'
(7) R0 → R1, ∞ → R2
In the operations (8), (∞ → R2) means that gn(i - 1, j - 1) is not yet defined.

(C) under n = n', j - j' > 2:
This condition corresponds to FIG. 4C and is analogous to the above (B). The following operations are to be conducted.
Due to gn(i, j' + 1):
(1) dn(i, j' + 1) + min (R1, R2) → G1(k')
(2) n' → n1(k'), j' + 1 → j1(k')
(3) k' + 1 → k'
Due to gn(i, j' + 2):
(4) dn(i, j' + 2) + R1 → G1(k')
(5) n' → n1(k'), j' + 2 → j1(k')
(6) k' + 1 → k'
Due to gn(i, j):
(7) dn(i, j) + R0 → G1(k')
(8) n → n1(k'), j → j1(k')
(9) k' + 1 → k'
For the preparation for the next (n, j):
(10) R0 → R1, ∞ → R2

(D) when n ≠ n':
Since the contents of the registers R1 and R2 do not affect gn(i, j), the same operation as the above (C) will suffice.
As described above, the operation is switched depending on the mutual relation between (n, j) and (n', j'). The above operation is conducted for all n = n2(k) and j = j2(k) wherein R0 = G2(k) < Θ(i), to complete the operation at the time i. Then G1(k), n1(k) and j1(k) are switched to G2(k), n2(k) and j2(k), respectively, and the operation proceeds to the next time as i + 1 → i.
This enables an operation equivalent to the conventional operation in the space (n, i, j) without deteriorating the performance, but with a smaller calculation amount and a smaller memory capacity.
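Taken together, the threshold pruning (5) and the dynamically produced work areas amount to a beam-pruned, frame-synchronous DP over all words. A minimal sketch follows; it keeps the surviving (n, j) entries in a dictionary per time point and switches the two tables each frame, rather than implementing the register scheme (A) through (D) literally, and the function and parameter names are ours.

```python
INF = float("inf")

def clockwise_dp(A, refs, dist, theta):
    """Frame-synchronous DP over all words with threshold pruning.

    refs:  list of reference patterns B^n (each a list of features)
    dist:  stand-in for the feature distance d^n(i, j)
    theta: threshold function theta(i); entries with g >= theta(i) are dropped
    Returns {n: D(A, B^n)} for the words whose end point (I, Jn) survived.
    """
    # table for i = 1: one entry (n, j = 0 zero-based) per word, as in eq. (9)
    g2 = {(n, 0): dist(A[0], B[0]) for n, B in enumerate(refs)}
    for i in range(1, len(A)):
        g1 = {}                          # work area produced anew for this i
        for (n, j), g in g2.items():
            if g >= theta(i):            # condition (5): prune large values
                continue
            for j2 in (j, j + 1, j + 2):         # points reachable at time i
                if j2 >= len(refs[n]):
                    continue
                cand = dist(A[i], refs[n][j2]) + g
                if cand < g1.get((n, j2), INF):  # min of the recurrence (2)
                    g1[(n, j2)] = cand
        g2 = g1                          # switch the two work areas
    return {n: g for (n, j), g in g2.items() if j == len(refs[n]) - 1}
```

Only the surviving combinations occupy memory, so the storage per frame shrinks with the beam, which is the point of the dynamically produced work area.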
FIG. 5 shows an embodiment structure of the speech recognizer operable by the above pattern matching principle.
A speech waveform inputted from a microphone 10 is analyzed in frequency by an analyzer 20, and the result is inputted to a microprocessor 30 as a time series of the features ai.

The microprocessor 30 incorporates registers R0, R1, R2, k, k' and n, and is connected to three types of memories 40, 50 and 60. The reference pattern memory 40 stores the reference patterns Bn = b1n, b2n, ..., bjn, ..., bJnn. The work memory 50 stores the data G2(k), n2(k) and j2(k) which were already calculated. The work memory 60 provides the memory area for G1(k), n1(k) and j1(k) which are to be calculated newly. In response to the input of a1 of the input pattern, the work memory 50 is initialized based on the equation (1) as follows:

G2(k) = dk(1, 1)
n2(k) = k .... (9)
j2(k) = 1
K = N

This corresponds to the initializing of gn(1, 1) = dn(1, 1) for n = 1, 2, ..., N. FIGs. 6A through 6E show the flow chart of the operation when ai is given at the time i.
The operations from the input of ai to the operation of the block 110 correspond to the operation for (n, j) wherein k = 1. In the operation of the block 113, G2(k) is transferred from the work memory 50 to the register R0, and is compared with the threshold Θ(i) in the judgement block 114. There may be various definitions for the threshold Θ(i). Noting that the quantity gn(i, j) is a value accumulated along with the progress of i, Θ(i) can be expressed as a monotonically increasing function:

Θ(i) = α·i + β .... (10)
wherein α and β are constants determinable dependent upon the speakers, word sets, surrounding noises, etc., and can be obtained experimentally. It is also possible to obtain gmin = min[gn(i, j)] at each time i, and to set Θ(i) = gmin + λ, wherein λ is a constant to give an allowance.
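The two definitions of the threshold can be sketched as plain functions; the constants α, β and λ below are placeholder values, to be tuned experimentally as the text notes.

```python
def theta_linear(i, alpha=2.0, beta=10.0):
    """Monotonically increasing threshold, formula (10): theta(i) = alpha*i + beta."""
    return alpha * i + beta

def theta_from_gmin(g_values, lam=5.0):
    """Adaptive threshold theta(i) = g_min + lambda over the cumulative
    values g^n(i, j) surviving at the current time point."""
    return min(g_values) + lam
```

The linear form needs no knowledge of the current frame's values, while the gmin form adapts the beam width to how well the best path is matching so far.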
When R0 ≥ Θ(i), the operation for this k is omitted. When R0 < Θ(i), n = n2(k) and j = j2(k) are read out from the work memory 50 and are compared to the n' and j' which have been determined in the operation of the block 111.
Depending on the comparison result, the next step is selected from the processes shown in FIGs. 6C through 6E and executed; one process corresponds to the operation under the condition (D), and the other processes to the conditions (A), (B) and (C), respectively. The step returns after all the steps are completed, and k is increased by one. The steps proceed as n → n', j → j' in the block 111, and the steps after the block 113 are repeated. The distance calculation between vectors, such as dn(i, j' + 1) of the block 130, is conducted by feeding n and (j' + 1) to the reference pattern memory 40 and reading out bj'+1n.
The operation of the block 112 judges whether all the data in the work memory 50 have already been processed; if so, the step goes to the process of FIG. 6B. The blocks 120 and 121 are the steps to execute the remaining operation for the last (n', j'). The data of G1, n1 and j1 are transferred to G2, n2 and j2 in the block 122. This is conducted not by a transfer of the data but by switching of the work memories 50 and 60. By the process in the block 123, all the processes at the time i are completed, and the step proceeds to the next time point (i + 1).
When the above processes have been completed up to i = I, the gn(I, Jn) data are included in G1(k), n1(k) and j1(k) of the work memory 60; G1(k) for the k where j1(k) = Jn under n = n1(k) indicates such data. They are used as the distances D(A, Bn) between the input pattern A and the reference patterns Bn, and the word n giving the minimum distance is outputted as the recognition result.
The above statement describes the principle of this invention referring to the preferred embodiment, but the description by no means limits the scope of this invention.
Thresholds Θ(i) other than those stated before can be used for the judgement block 114 shown in FIG. 6A. For instance, Θ(i) may be linked with the minimum of G2(k) = gn(i - 1, j). Alternatively, a predetermined number of the smallest values of G2(k) = gn(i - 1, j) may be selected, and the processing may be omitted for the others.
FIG. 7 shows an embodiment of an isolated word speech recognizer according to this invention. A speech waveform inputted via a microphone 110 is analyzed in frequency by an analyzer 20, converted into a time series of the feature vectors ai, and inputted to a microprocessor 130. Code vectors Ck are stored in a code book 140, and the reference pattern Bn of each word n is stored in a reference pattern memory 160 as a time series of the numbers k(n, j) which designate the code vector numbers. A D memory 150 temporarily stores the distances D(k) between the code vectors Ck and the input vector ai. The g memory 170 is the work memory for the recurrence formula calculation (2) and stores gn(i, j) and gn(i - 1, j) for the required n and j. These memories 140, 150, 160 and 170 may be areas on the main memory of the microprocessor 130.
In response to the input of the first feature vector a1 of the input pattern, the microprocessor 130 performs an initial setting as below for the area gn(i - 1, j) of the memory 170:
gn(1, 1) = D(k(n, 1))
In other words, k = k(n, 1) is read out from the reference pattern memory 160 for each word, and the code vector corresponding thereto is read out from the code book 140 for calculating the distance from the feature vector a1 and setting the distance as the initial value gn(1, 1). For the part where j ≠ 1, a sufficiently large numerical value is set.
The general operation at the time point i is shown in FIG. 8. In response to the input of a feature vector ai, all the contents D(k) of the D table (memory) 150 are reset with ∞. Then, the following processes are performed for j = 1, 2, ..., Jn. The optimal cumulative values at the previous time, gn(i - 1, j), gn(i - 1, j - 1) and gn(i - 1, j - 2), are read out from the g memory 170 to determine the minimum g thereof. The determined cumulative value g is compared with the threshold Θ(i) (in the block 200) to judge whether or not the time point (i, j) is on the optimal path. When g > Θ(i), it is judged that the point (i, j) is not on the optimal path; under that condition, the calculation of the recurrence formula and the distance is omitted, and ∞ is set as gn(i, j). On the other hand, when g ≤ Θ(i), the point (i, j) is judged to be on the optimal path, and the following recurrence formula calculation is performed.
In response to the value k = k(n, j) read out from the reference pattern memory 160, D(k) is read out from the D table 150. If D(k) is ∞, it is judged that the value D(k) has not yet been calculated, and the distance between the feature vector ai and the code vector Ck read out from the code book is calculated to determine the value D and to write it in the D table as D(k). When D(k) is not ∞, it is judged that the distance between the code vector Ck and the feature vector ai has already been calculated, and D = D(k) is established. These series of processings calculate the distance D between the code vector Ck designated by k = k(n, j) and the feature vector ai only for the points of n, i and j which are estimated to be on the optimal path, thereby avoiding unnecessary calculations. Thus, after the process (g + D) → gn(i, j), the recurrence formula is calculated similarly to the equations (2) and (4):

gn(i, j) = D(k(n, j)) + min { g(i - 1, j), g(i - 1, j - 1), g(i - 1, j - 2) } .... (11)

This new optimal cumulative value is written in the g memory 170. The same processes are repeated for j and n to complete the recurrence formula calculations at the time i.
In the operation of the block 120, the area gn(i, j) is exchanged with the area gn(i - 1, j) to provide the optimal cumulative values obtained at the time i as the past (previous) data, and the step proceeds to the next time point (i + 1).
After completion of the speech input, i.e., i = I, the pattern distance D(A, Bn) is stored in the g memory 170 for each word n. The distances thus obtained are compared to determine the word n giving the minimum distance, and that word is generated as the recognition result. As described above, according to this invention, the code vectors for which the distance calculation is required at each time point are defined by the past optimal cumulative values, whereby the number of distance calculations can be reduced.
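The lazy filling of the D table in FIG. 8, computing D(k) only the first time a surviving path needs code vector k within a frame, can be sketched as follows; the names and the tiny code book are illustrative only.

```python
import math

INF = float("inf")

def make_d_table(size):
    """Reset the D table to 'not yet calculated' at each input frame i."""
    return [INF] * size

def lazy_distance(k, a_i, codebook, d_table):
    """Return D(k), computing and caching it on first use only."""
    if d_table[k] == INF:                       # not yet calculated
        d_table[k] = math.dist(a_i, codebook[k])
    return d_table[k]

codebook = [(0.0,), (1.0,), (2.0,)]
D = make_d_table(len(codebook))
lazy_distance(1, (0.5,), codebook, D)   # computed on first request
lazy_distance(1, (0.5,), codebook, D)   # second request hits the cache
```

Code vectors never demanded by a surviving (n, j) point stay at ∞ and cost nothing, which is exactly the saving the paragraph above describes.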

Various modifications are possible for the above embodiments. The distance calculation between ai and Ck may be conducted in advance outside the loop of j and n.

Claims (13)

1. A pattern matching system comprising:
means for storing a reference pattern of each word n as a feature time series bjn for a time point j;
means for temporarily storing an input speech pattern as a feature time series ai for a time point i;
means for calculating a distance dn(i, j) between the features ai and bjn for a time point (i, j) and determining the minimum cumulative distance of the distances obtained previous to said time point (i, j) as an optimal cumulative value gn(i, j) in accordance with the recurrence formula of a dynamic programming method;
restriction means for restricting a range of combinations (n, j) of the words n and the time points j for new optimal cumulative values gn(i, j) which are to be calculated based on the previous optimal cumulative value for each time point i; and means for producing memory area exclusively for the optimal cumulative values of said combinations (n, j).
2. The pattern matching system as claimed in Claim 1, wherein said memory area generation is controlled by mutual relation between the combination (n, j) at the time point i and the combination (n', j') of a word n' and a time point j' processed at a time point (i - 1) one time point previous to the time point i.
3. The pattern matching system as claimed in Claim 1, wherein said memory area is only for two time points i and (i - 1).
4. The pattern matching system as claimed in Claim 2, wherein said mutual relation between (n', j') and (n, j) is as follows:
a) n = n', j - j' = 1
b) n = n', j - j' = 2
c) n = n', j - j' > 2
d) n ≠ n'
5. The pattern matching system as claimed in Claim 1, wherein said restriction means omits the calculation of the optimal cumulative values for the time points where the previous optimal cumulative value is larger than a predetermined threshold value Θ(i).
6. The pattern matching system as claimed in Claim 1, wherein said restriction means executes the calculation of said cumulative values only for the time points where the previous optimal cumulative value is smaller than a predetermined threshold value Θ(i).
7. The pattern matching system as claimed in Claim 6, wherein said threshold value Θ(i) is expressed as
Θ(i) = α·i + β
wherein α and β are constants determined by the speakers, word sets and surrounding noises, and can be determined experimentally in advance.
8. The pattern matching system as claimed in Claim 6, wherein said threshold value Θ(i) is determined by obtaining gmin = min[gn(i, j)] for each time point i and calculating in accordance with the expression Θ(i) = gmin + λ, wherein λ is a predetermined constant giving an allowance.
9. A pattern matching system comprising:
a code book for storing code vectors Ck (k denotes a code number) which represent the features of specified speech signals;
a reference pattern memory for storing a reference pattern Bn represented as a time series of the code numbers k(n, j), each of which designates one of the code vectors for the time point j of each word n;
an input buffer for temporarily storing the features ai of an input speech pattern;
a distance calculation means for calculating the distance D(k) between the feature ai and each code vector Ck;
a cumulative value calculating means for calculating an optimal cumulative value gn(i, j) of the distances D(k(n, j)) for each word n in accordance with a dynamic programming method; and a selection means for selecting the code vector sets for which the distance calculation is necessary at the time point i, based on the previous optimal cumulative values calculated for each time point previous to the time point i, and calculating the distance D(k) only for the selected sets.
10. The pattern matching system as claimed in Claim 9, wherein said selection means selects the code vector sets only for the time points where the optimal cumulative values calculated previously are smaller than a predetermined threshold value Θ(i).
11. The pattern matching system as claimed in Claim 10, wherein said threshold value Θ(i) is given in accordance with the following expression:
Θ(i) = α·i + β
wherein α and β are constants determined by the speakers, word sets and surrounding noises.
12. The pattern matching system as claimed in Claim 10, wherein said threshold value Θ(i) is obtained by calculating gmin = min[gn(i, j)] at each time point, and calculating in accordance with the expression below:
Θ(i) = gmin + λ
wherein λ is a predetermined constant giving an allowance.
13. A pattern matching system wherein, at each time i in the time series of the features of input patterns, an optimal cumulative value indicating the minimum value of the cumulative distance between said input patterns and the reference pattern for all combinations of n and j is successively determined with increasing i, said n and j designating each reference pattern Bn and the time j in the time series of the features of each reference pattern, said system determining the optimal cumulative distance only for the combinations of n and j which are related to the optimal cumulative values smaller than a predetermined threshold value at said time i, and making a memory area dynamically for the optimal cumulative value gn(i, j) for the combination (n, j).
CA000561475A 1987-03-16 1988-03-15 Pattern matching system Expired - Fee Related CA1335001C (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP62061736A JPS63226697A (en) 1987-03-16 1987-03-16 Highly efficient pattern matching system
JP61734/1987 1987-03-16
JP62061734A JPS63226695A (en) 1987-03-16 1987-03-16 High-speed pattern matching system
JP61736/1987 1987-03-16

Publications (1)

Publication Number Publication Date
CA1335001C true CA1335001C (en) 1995-03-28

Family

ID=26402795

Family Applications (1)

Application Number Title Priority Date Filing Date
CA000561475A Expired - Fee Related CA1335001C (en) 1987-03-16 1988-03-15 Pattern matching system

Country Status (4)

Country Link
US (1) US5121465A (en)
EP (1) EP0283902B1 (en)
CA (1) CA1335001C (en)
DE (1) DE3882062T2 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH04194999A (en) * 1990-11-27 1992-07-14 Sharp Corp Dynamic planning method using learning
JP2980420B2 (en) * 1991-07-26 1999-11-22 富士通株式会社 Dynamic programming collator
US5189709A (en) * 1991-08-26 1993-02-23 The United States Of America As Represented By The United States National Aeronautics And Space Administration Dynamic pattern matcher using incomplete data
US5388183A (en) * 1991-09-30 1995-02-07 Kurzweil Applied Intelligence, Inc. Speech recognition providing multiple outputs
JP2964881B2 (en) * 1994-09-20 1999-10-18 日本電気株式会社 Voice recognition device
IL113204A (en) * 1995-03-30 1999-03-12 Advanced Recognition Tech Pattern recognition system
US6195638B1 (en) * 1995-03-30 2001-02-27 Art-Advanced Recognition Technologies Inc. Pattern recognition system
JP3576272B2 (en) * 1995-06-22 2004-10-13 シャープ株式会社 Speech recognition apparatus and method
JP2980026B2 (en) * 1996-05-30 1999-11-22 日本電気株式会社 Voice recognition device
US6122757A (en) * 1997-06-27 2000-09-19 Agilent Technologies, Inc. Code generating system for improved pattern matching in a protocol analyzer
JP2007047575A (en) * 2005-08-11 2007-02-22 Canon Inc Pattern matching method and device therefor, and speech information retrieval system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5529803A (en) * 1978-07-18 1980-03-03 Nippon Electric Co Continuous voice discriminating device
JPS55157799A (en) * 1979-05-29 1980-12-08 Nippon Electric Co High efficiency pattern matching unit
US4277644A (en) * 1979-07-16 1981-07-07 Bell Telephone Laboratories, Incorporated Syntactic continuous speech recognizer
US4624008A (en) * 1983-03-09 1986-11-18 International Telephone And Telegraph Corporation Apparatus for automatic speech recognition

Also Published As

Publication number Publication date
DE3882062D1 (en) 1993-08-05
EP0283902A1 (en) 1988-09-28
US5121465A (en) 1992-06-09
DE3882062T2 (en) 1994-01-05
EP0283902B1 (en) 1993-06-30

Similar Documents

Publication Publication Date Title
CA1335001C (en) Pattern matching system
US4829575A (en) Apparatus and methods for analyzing transitions in finite state machines
US5073939A (en) Dynamic time warping (DTW) apparatus for use in speech recognition systems
US4326101A (en) System for recognizing a word sequence by dynamic programming and by the use of a state transition diagram
US5369728A (en) Method and apparatus for detecting words in input speech data
EP0086081B1 (en) Continuous speech recognition system
CA1226945A (en) Pattern matching method and apparatus therefor
EP0144689B1 (en) Pattern matching system
Brown et al. An adaptive, ordered, graph search technique for dynamic time warping for isolated word recognition
EP0525640B1 (en) Dynamic programming matching system for speech recognition
JPS6360919B2 (en)
CA2206505A1 (en) Speech recognition system
EP0215573B1 (en) Apparatus and methods for speech recognition
JP2964881B2 (en) Voice recognition device
EP0139875B1 (en) Pattern matching apparatus
EP0138166A1 (en) Pattern matching apparatus
JPS61145599A (en) Continuous voice recognition equipment
GB2179483A (en) Speech recognition
GB2209418A (en) Analysing transitions in finite state machines
US4620316A (en) Speech recognition system
EP0278528A2 (en) Area searching system
JPH0465395B2 (en)
JPS638797A (en) Pattern recognition
JPH0355836B2 (en)
JPH0134400B2 (en)

Legal Events

Date Code Title Description
MKLA Lapsed