WO2003036614A2 - System and apparatus for speech communication and speech recognition - Google Patents


Info

Publication number
WO2003036614A2
WO2003036614A2 (application PCT/SG2002/000149)
Authority
WO
WIPO (PCT)
Prior art keywords
signal
filter
signals
headset
headset system
Prior art date
Application number
PCT/SG2002/000149
Other languages
French (fr)
Other versions
WO2003036614A3 (en)
Inventor
Siew Kok Hui
Kok Heng Loh
Yean Ming Lau
Original Assignee
Bitwave Private Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bitwave Private Limited filed Critical Bitwave Private Limited
Priority to US10/487,229 priority Critical patent/US7346175B2/en
Priority to AU2002363054A priority patent/AU2002363054A1/en
Priority to EP02802082A priority patent/EP1425738A2/en
Publication of WO2003036614A2 publication Critical patent/WO2003036614A2/en
Publication of WO2003036614A3 publication Critical patent/WO2003036614A3/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/60Substation equipment, e.g. for use by subscribers including speech amplifiers
    • H04M1/6033Substation equipment, e.g. for use by subscribers including speech amplifiers for providing handsfree use or a loudspeaker mode in telephone sets
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/20Speech recognition techniques specially adapted for robustness in adverse environments, e.g. in noise, of stress induced speech
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208Noise filtering
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208Noise filtering
    • G10L21/0216Noise filtering characterised by the method used for estimating noise
    • G10L2021/02161Number of inputs available containing the signal or the noise to be suppressed
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208Noise filtering
    • G10L21/0216Noise filtering characterised by the method used for estimating noise
    • G10L2021/02168Noise filtering characterised by the method used for estimating noise the estimation exclusively taking place during speech pauses
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/26Devices for calling a subscriber
    • H04M1/27Devices whereby a plurality of signals may be stored simultaneously
    • H04M1/271Devices whereby a plurality of signals may be stored simultaneously controlled by voice recognition

Definitions

  • the present invention relates to a system and apparatus for speech communication and speech recognition. It further relates to signal processing methods which can be implemented in the system.
  • the present invention seeks to provide a headset system performing improved signal processing of audio signals and suitable for speech communication.
  • the present invention further seeks to provide signal processing methods and apparatus suitable for use in a speech communication and/or speech recognition system.
  • a first aspect of the present invention proposes a headset system including a base unit and a headset unit to be worn by a user (e.g. resting on the user's head or around the user's shoulders) and having a plurality of microphones, the headset unit and base unit being in mutual wireless communication, and at least one of the base unit and the headset unit having digital signal processing means arranged to perform signal processing in the time domain on audio signals generated by the microphones, the signal processing means including at least one adaptive filter to enhance a wanted signal in the audio signals and at least one adaptive filter to reduce an unwanted signal in the audio signals.
  • the digital signal processing means are part of the headset unit.
  • the headset can be used for communication with the base unit, and optionally with other individuals, especially via the base unit.
  • the headset system may comprise, or be in communication with, a speech recognition engine for recognizing speech of the user wearing the headset unit.
  • the signal processing may be as described in PCT/SG99/00119; more preferably, the signal processing is modified to distinguish between the noise and interference signals.
  • Signals received from the microphones (array of sensors) are processed using a first adaptive filter to enhance a target signal, and then divided and supplied to a second adaptive filter arranged to reduce interference signals and a third filter arranged to reduce noise.
  • the outputs of the second and third filters are combined, and may be subject to further processing in the frequency domain.
  • this concept provides a second, independent aspect of the invention which is a method of processing signals received from an array of sensors comprising the steps of sampling and digitising the received signals and processing the digitally converted signals, the processing including: filtering the digital signals using a first adaptive filter arranged to enhance a target signal in the digital signals, transmitting the output of the first adaptive filter to a second adaptive filter and to a third adaptive filter, the second filter being arranged to suppress unwanted interference signals, and the third filter being arranged to suppress noise signals; and combining the outputs of the second and third filters.
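As a rough illustration of this three-filter arrangement, the sketch below (pure Python, not the patented implementation; the filter length and the normalized-LMS step size are assumptions) shows an adaptive filter building block and how three instances could be routed:

```python
class LMSFilter:
    """Normalized-LMS adaptive FIR filter (illustrative building block)."""
    def __init__(self, taps, mu=0.5):
        self.w = [0.0] * taps       # filter coefficients
        self.x = [0.0] * taps       # tapped delay line
        self.mu = mu                # step size, 0 < mu < 2

    def step(self, sample, desired, adapt=True):
        self.x = [sample] + self.x[:-1]
        y = sum(wi * xi for wi, xi in zip(self.w, self.x))
        e = desired - y
        if adapt:                   # coefficients are frozen when adapt=False
            norm = sum(xi * xi for xi in self.x) + 1e-9
            self.w = [wi + (self.mu / norm) * e * xi
                      for wi, xi in zip(self.w, self.x)]
        return y, e

# Routing implied by the method: the first filter adapts only while a
# target is detected (enhancing it); its output then feeds two further
# filters that adapt only on non-target material, and their outputs are
# finally combined.
enhance_filter      = LMSFilter(taps=32)   # first filter: enhance target
interference_filter = LMSFilter(taps=32)   # second filter: suppress interference
noise_filter        = LMSFilter(taps=32)   # third filter: suppress noise
```

Freezing adaptation (adapt=False) is what lets each filter specialise: it only learns from the signal class it is meant to model.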
  • the invention further provides signal processing apparatus for performing such a method.
  • Fig.1 illustrates a general scenario in which an embodiment of the invention may operate.
  • Fig.2 is a schematic illustration of a general digital signal processing system which is an embodiment of present invention.
  • Fig.3 is a system level block diagram of the described embodiment of Fig.2.
  • Fig.4a-d is a flow chart illustrating the operation of the embodiment of Fig.3.
  • Fig.5 illustrates a typical plot of non-linear energy of a channel and the established thresholds.
  • Fig.6 (a) illustrates a wave front arriving from 40 degrees off the boresight direction.
  • Fig.6 (b) represents a time delay estimator using an adaptive filter.
  • Fig.6 (c) shows how the impulse response of the filter indicates a wave front from the boresight direction.
  • Fig.7 shows how the response of the time delay estimator indicates an interference signal together with a wave front from the boresight direction.
  • Fig.8 shows the schematic block diagram of the four-channel Adaptive Spatial Filter.
  • Fig.9 is a response curve of the S-shape transfer function (S function).
  • Fig.10 shows the schematic block diagram of the Adaptive Interference Filter.
  • Fig.11 shows the schematic block diagram of the Adaptive Ambient Noise Estimation Filter.
  • Fig.12 is a block diagram of the Adaptive Signal Multiplexer.
  • Fig.13 shows an input signal buffer.
  • Fig.14 shows the use of a Hanning Window on overlapping blocks of signals.
  • Fig.15 illustrates a sudden rise of noise level of the nonlinear energy plot.
  • Fig. 16 illustrates a specific embodiment of the invention schematically.
  • Fig. 17 illustrates a headset unit which is a component of the embodiment of Fig. 16.
  • Fig. 18, which is composed of Figs. 18(a) and 18(b), shows two ways of wearing the headset unit of Fig. 17.
  • FIG. 1 illustrates schematically the operating environment of a signal processing apparatus 5 of the described embodiment of the invention, shown in a simplified example of a room.
  • These unwanted signals cause interference and degrade the quality of the target signal "s" as received by the sensor array.
  • the actual number of unwanted signals depends on the number of sources and room geometry but only three reflected (echo) paths and three direct paths are illustrated for simplicity of explanation.
  • the sensor array 10 is connected to processing circuitry 20-60 and there will be a noise input q associated with the circuitry which further degrades the target signal.
  • An embodiment of the signal processing apparatus 5 is shown in FIG.2.
  • the apparatus observes the environment with an array of four sensors such as microphones 10a-10d.
  • Target and noise/interference sound signals are coupled when impinging on each of the sensors.
  • the signal received by each of the sensors is amplified by an amplifier 20a-d and converted to a digital bitstream using an analogue to digital converter 30a-d.
  • the bit streams are fed in parallel to the digital signal processor 40 to be processed digitally.
  • the processor provides an output signal to a digital to analogue converter 50 which is fed to a line amplifier 60 to provide the final analogue output.
  • FIG.3 shows the major functional blocks of the digital processor in more detail.
  • the multiple input coupled signals are received by the four-channel microphone array 10a-10d, each of which forms a signal channel, with channel 10a being the reference channel.
  • the received signals are passed to a receiver front end which provides the functions of amplifiers 20 and analogue to digital converters 30 in a single custom chip.
  • the four channel digitized output signals are fed in parallel to the digital signal processor 40.
  • the digital signal processor 40 comprises five sub-processors: (a) a Preliminary Signal Parameters Estimator and Decision Processor 42, (b) a Signal Adaptive Filter 44, (c) an Adaptive Interference Filter 46, (d) an Adaptive Noise Estimation Filter 48, and (e) an Adaptive Interference and Noise Cancellation and Suppression Processor 50.
  • the basic signal flow is from processor 42, to processor 44, to processor 46 and 48, to processor 50.
  • the output of processor 42 is referred to as "stage 1" in this process, the output of processor 44 as "stage 2", and the outputs of processors 46, 48 as "stage 3". These connections are represented by thick arrows in FIG.3.
  • the filtered signal S is output from processor 50.
  • processor 42 which receives information from processors 44-50, makes decisions on the basis of that information and sends instructions to processors 44-50, through connections represented by thin arrows in FIG.3.
  • the outputs I, S of the processor 40 are transmitted to a Speech recognition engine, 52.
  • the splitting of the processor 40 into the five component parts 42, 44, 46, 48 and 50 is essentially notional and is made to assist understanding of the operation of the processor.
  • the processor 40 would in reality be embodied as a single multi-function digital processor performing the functions described under control of a program with suitable memory and other peripherals.
  • the operation of the speech recognition engine 52 also could in principle be incorporated into the operation of the processor 40.
  • A flowchart illustrating the operation of the processors is shown in FIGs. 4a-d, and this will first be described generally. A more detailed explanation of aspects of the processor operation will then follow.
  • the front end 20,30 processes samples of the signals received from array 10 at a predetermined sampling frequency, for example 16kHz.
  • the apparatus includes an input buffer 43 that can hold N such samples for each of the four channels.
  • the apparatus collects a block of N/2 new signal samples for all the channels at step 500, so that the buffer holds a block of N/2 new samples and a block of N/2 previous samples.
  • the processor 42 then removes any DC offset from the new samples and pre-emphasizes or whitens the samples at step 502.
  • the total non-linear energy of a stage 1 signal sample Er1 and a stage 3 signal sample Er3 is calculated at step 504.
  • the samples from the reference channel 10a are used for this purpose although any other channel could be used.
  • There then follows a short initialization period at step 506, in which the first 20 blocks of N/2 samples of signal after start-up are used to estimate a Bark Scale system noise Bn at step 516 and a histogram Pb at step 518. During this short period, an assumption is made that no target signals are present.
  • the updated Pb is then used with the updated Pbs to estimate the environment noise energy En, and two detection thresholds, a noise threshold Tn1 and a larger signal threshold Tn2, are calculated by processor 42 from En using scaling factors.
  • the routine then moves to point B and point F.
  • Pbs and B n are updated when an update condition is fulfilled.
  • At step 508 it is determined whether the stage 3 signal energy Er3 is greater than the noise threshold Tn1. If not, the Bark Scale system noise Bn is updated at step 510 and the routine then proceeds to step 512; if so, the routine skips step 510 and proceeds directly to step 512. A test is made at step 512 to see if the signal energy Er1 is greater than the noise threshold Tn1.
  • Tn1 and Tn2 will follow the environment noise level closely.
  • the histogram is used to determine if the signal energy level shows a steady state increase which would indicate an increase in noise, since the speech target signal will show considerable variation over time and thus can be distinguished. This is illustrated in FIG.15 in which a signal noise level rises from an initial level to a new level which exceeds both thresholds.
  • a test is made at step 520 to see if the estimated energy Er1 in the reference channel 10a exceeds the second threshold Tn2. If so, a counter CL is reset and a candidate target signal is deemed to be present.
  • the apparatus only wishes to process candidate target signals that impinge on the array 10 from a known direction normal to the array, hereinafter referred to as the boresight direction, or from a limited angular departure there from, in this embodiment plus or minus 15 degrees. Therefore, the next stage is to check for any signal arriving from this direction.
  • At step 528, a correlation coefficient Cx, a correlation time delay Td and a filter coefficient peak ratio Pk are calculated, which together provide an indication of the direction from which the target signal arrived.
  • At step 530, three tests are conducted to determine if the candidate target signal is an actual target signal.
  • First, the cross correlation coefficient Cx must exceed a predetermined threshold Tc; second, the size of the delay coefficient Td must be less than a value δ, indicating that the signal has impinged on the array within the predetermined angular range; and lastly, the filter coefficient peak ratio Pk must exceed a predetermined threshold Tpk1. If these conditions are not met, the signal is not regarded as a target signal and the routine passes to step 534 (non-target signal filtering). If the conditions are met, the confirmed target signal is fed to step 532 (target signal filtering) of the Signal Adaptive Spatial Filter 44.
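The three direction checks amount to a simple conjunction; in the sketch below the threshold values are placeholders, not values taken from the patent:

```python
def is_target(cx, td, pk, tc=0.7, delta=2, tpk1=2.0):
    """Step 530 decision: a candidate is an actual target signal only if
    the cross correlation Cx exceeds Tc, the delay Td lies within the
    angular window (|Td| < delta), and the peak ratio Pk exceeds Tpk1.
    Threshold defaults here are illustrative."""
    return cx > tc and abs(td) < delta and pk > tpk1
```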
  • If at step 520 the estimated energy Er1 in the reference channel 10a is found not to exceed the second threshold Tn2, the target signal is considered not to be present and the routine passes to step 534 via steps 522-526, in which the counter CL is incremented.
  • At step 524, CL is checked against a threshold TCL. If the threshold is reached, block leak compensation is performed on the filter coefficients Wtd and the counter CL is reset at step 526. This block leak compensation step improves the adaptation speed of the filter coefficients Wtd to the direction of fast-changing target sources and environment. If the threshold is not reached, the program moves to step 534 described below.
  • the confirmed target signal is fed to step 532 at the Signal Adaptive Spatial Filter 44.
  • the filter is instructed to perform adaptive filtering at steps 532 and 536, in which the filter coefficients Wsu are adapted to provide a "target signal plus noise" signal in the reference channel and "noise only" signals in the remaining channels using the Least Mean Square (LMS) algorithm.
  • a running energy ratio Rsd is computed at every sample at step 532. This running energy ratio Rsd is used as a condition to test whether the filter coefficient corresponding to that particular sample should be updated or not.
  • the filter 44 output channel equivalent to the reference channel is for convenience referred to as the Sum Channel, and the filter 44 outputs from the other channels as the Difference Channels.
  • the signal so processed will be, for convenience, referred to as A'.
  • step 534 the routine passes to step 534 in which the signals are passed through filter 44 without the filter coefficients being adapted, to form the Sum and Difference channel signals.
  • the signals so processed will be referred to for convenience as B'.
  • the effect of the filter 44 is to enhance the signal if this is identified as a target signal but not otherwise.
  • a new filter coefficient peak ratio Pk2 is calculated based on the filter coefficients Wsu.
  • the routine passes to step 548.
  • the peak ratio Pk2 calculated at step 538 is compared with a best peak ratio BPk at step 540. If it is larger than the best peak ratio, the value of the best peak ratio is replaced by this new peak ratio Pk2 and all the filter coefficients Wsu are stored as the best filter coefficients at step 542. If it is not, the peak ratio Pk2 is compared with a threshold Tpk at step 544. If the peak ratio is below the threshold, a wrong update on the filter coefficients is deemed to have occurred and the filter coefficients are restored to the previously stored best filter coefficients at step 546. If it is above the threshold, the routine passes to step 548.
  • an energy ratio Rsd and a power ratio Prsd between the Sum Channel and the Difference Channels are estimated by processor 42. Besides these, two other coefficients are also established, namely an energy ratio factor Rsdf and a second stage non-linear signal energy Er2. Following this, the adaptive noise power threshold Tprsd is updated based on the calculated power ratio Prsd.
  • the signal is divided into two parallel paths, namely point C and point D. Following point C, the signal is subject to a further test at step 552 to determine whether noise or interference is present.
  • the routine passes to step 556.
  • the filter coefficient peak ratio Pk2 is compared to a threshold Tpk2. If it is higher than the threshold, this may indicate that there is a target signal, and the routine passes to step 556.
  • the Rsd and Prsd are compared to thresholds Trsd and Tprsd respectively.
  • the routine passes to step 556. For all other non-target signals, the routine passes to step 554.
  • the signals are processed by the Adaptive Interference Filter 46, the purpose of which is to reduce the unwanted signals.
  • the filter 46 at step 554 is instructed to perform adaptive filtering on the non-target signals with the intention of adapting the filter coefficients to reduce the unwanted signal in the Sum channel to some small error value ec1. This computed ec1 is also fed back to step 554 to prevent signal cancellation caused by wrong updating of the filter coefficients.
  • At step 556, the target signals are fed to the filter 46 but this time no adaptive filtering takes place, so the Sum and Difference signals pass through the filter.
  • the output signals from processor 46 are thus the Sum channel signal Sc1 and the filtered Difference signal Si.
  • At step 560, several tests are made. First, if the signals are A' signals from step 532, the routine passes to step 564. Second, if the signals are classified as non-target signals by step 552 (C signals), the routine passes to step 564. Third, the Rsdf and Prsd are compared to thresholds Trsdf and Tprsd respectively. If the ratios are both lower than their thresholds, this indicates a probable ambient noise signal, but if higher, this may indicate that there has been some leakage of the target signal into the Difference channel, indicating the presence of a target signal after all. Lastly, if the estimated energy Er2 is found to exceed the first threshold Tn1, signals are considered to be present. For such signals, the routine also passes to step 564. For all other ambient noise signals, the routine passes to step 562.
  • the signals are processed by the Adaptive Ambient noise Estimation Filter 48, the purpose of which is to reduce the unwanted ambient noise.
  • the filter 48 at step 562 is instructed to perform adaptive filtering on the ambient noise signals with the intention of adapting the filter coefficients to reduce the unwanted ambient noise in the Sum channel to some small error value ec2.
  • At step 564, the signals are fed to the filter 48 but this time no adaptive filtering takes place, so the Sum and Difference signals pass through the filter.
  • the output signals from processor 48 are thus the Sum channel signal Sc2 and the filtered Difference signal Sn.
  • the output signals from processor 46 (Sc1 and Si) and the output signals from processor 48 (Sc2 and Sn) are processed by an adaptive signal multiplexer.
  • those signals are multiplexed, and a weighted average error signal es(t), a sum signal Sc(t) and a weighted average interference signal Is(t) are produced.
  • These signals are then collected for the new N/2 samples and the last N/2 samples from the previous block, and a Hanning Window Hn is applied to the collected samples as shown in FIG.14 to form vectors Sh, Ih and Eh.
  • a Fast Fourier Transform is then performed on the vectors Sh, Ih and Eh to transform them into frequency domain equivalents Sf, If and Ef at step 570.
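The 50%-overlap Hanning windowing that feeds the FFT relies on the window's overlap-add property: two half-shifted windows sum to a constant, so blocks modified in the frequency domain can be recombined without amplitude ripple. A minimal sketch (block size N=8 chosen only for brevity):

```python
import math

def hanning(N):
    """Periodic Hanning window, w[n] = 0.5 - 0.5*cos(2*pi*n/N)."""
    return [0.5 - 0.5 * math.cos(2.0 * math.pi * n / N) for n in range(N)]

N = 8
w = hanning(N)
# With a hop of N/2 samples, the overlapping window values sum to 1,
# which is why processing N/2 new plus N/2 previous samples per block
# reconstructs cleanly:
ola = [w[n] + w[n + N // 2] for n in range(N // 2)]
```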
  • a modified spectrum is calculated for the transformed signals to provide "pseudo" spectrum values Ps and Pi.
  • a frequency scan is performed between Ps and Pi to look for peaks in the same frequency components at step 574. Attenuation is then performed on those peaks in Pi to reduce the signal cancellation effect.
  • Ps and Pi are then warped onto the same Bark Frequency Scale to provide Bark Frequency scaled values Bs and Bi at step 576.
  • a voiced/unvoiced detection is performed on Bs and Bi to reduce the signal cancellation on unvoiced signals.
  • a weighted combination By of Bn (through path F) and Bi is then made at step 580, and this is combined with Bs to compute the Bark Scale non-linear gain Gb at step 582.
  • Gb is then unwarped back to the normal frequency domain to provide a gain value G at step 584, and this is then used at step 586 to compute an output spectrum Sout using the signal spectra Sf and Ef from step 570.
  • This gain-adjusted spectrum suppresses the interference signals, the ambient noise and the system noise.
  • the system also provides a set of useful information indicated as I on Fig. 3.
  • This set of information may include any one or more of:
  • Target speech signal presence, A' (steps 530 and 532)
  • the reference signal is taken at a delay of half the tap-size.
  • the signal is delayed by Lsu/2 and Luq/2.
  • the signal energy calculations are performed at three junctions, resulting in three pairs of signal energies.
  • the first signal energy is calculated at no delay and is used by the time delay estimation and the stage 1 Adaptive Spatial Filter.
  • the second signal energy is calculated at a delay of half of the Adaptive Spatial Filter tap-size, Lsu/2.
  • the last signal energy is calculated at a delay of Lsu/2 + Luq/2 and is used by the noise updating.
  • the processor 42 estimates two thresholds Tn1 and Tn2 based on a statistical approach.
  • Two histograms, referred to as Pb and Pbs, are computed in the same way, except that Pbs is computed for every block of N/2 samples while Pb is computed only on the first 20 blocks of N/2 samples, or when En < Tn1, which means that neither a target signal nor an interference signal is present.
  • Er1 is used as the input sample of the histograms, and the length of the histograms is a number M (which may for example be 24).
  • Each histogram is found from the following equation:
  • Hi = α·Hi + (1 − α)·δ(i − D),  i = 1, ..., M   (B.1)
  • where Hi stands for either of Pb and Pbs, δ(·) is the unit impulse, D is the histogram bin index of the current input sample, and α is a forgetting factor.
  • For Pb, α is chosen empirically to be 0.9988, and for Pbs, α is equal to 0.9688.
  • the Emax values in Table 1 were chosen experimentally based on a statistical method. Samples (in this case, Er1) were collected in certain environments (office, car, supermarket, etc.) and a histogram was generated based on the collected samples. From the histogram, a probability density function was computed and from there the Emax values were decided.
  • the two signal detection thresholds Tn1 and Tn2 are established as follows:
  • Tn1 = α1·En   (B.5)
  • Tn2 = α2·En   (B.6)
  • where α1 and α2 are the scaling factors referred to above.
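The histogram update and threshold scaling can be sketched as follows; the bin mapping, the Emax full-scale value and the scaling factors α1, α2 below are illustrative assumptions rather than the values of Table 1:

```python
M = 24        # histogram length
EMAX = 1.0    # assumed full-scale energy for the bin mapping

def update_histogram(hist, er1, alpha):
    """Leaky histogram update with forgetting factor alpha; only the
    bin D holding the current energy sample Er1 is reinforced."""
    d = min(int(er1 / EMAX * M), M - 1)     # bin index of current sample
    return [alpha * h + ((1.0 - alpha) if i == d else 0.0)
            for i, h in enumerate(hist)]

def noise_thresholds(hist, alpha1=2.0, alpha2=4.0):
    """Estimate En from the most populated bin, then scale it into the
    two detection thresholds Tn1 and Tn2."""
    peak_bin = max(range(M), key=lambda i: hist[i])
    en = (peak_bin + 0.5) / M * EMAX        # bin-centre energy estimate
    return alpha1 * en, alpha2 * en
```

Because the histogram forgets slowly, a brief loud speech burst barely moves the peak bin, while a sustained rise in background noise eventually shifts it, which is how the thresholds track the environment.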
  • the noise level can be tracked more robustly yet faster.
  • a further motivation for the above algorithm for finding the thresholds is to distinguish between signal and noise in all environments, especially noisy ones (car, supermarket, etc.). This means that the user can use the embodiment anywhere.
  • FIG 6A illustrates a single wave front impinging on the sensor array.
  • the wave front impinges on sensor 10d first (A as shown) and at a later time impinges on sensor 10a (A' as shown), after a time delay td.
  • the filter has a delay element 600, having a delay z^(−L/2), connected to the reference channel 10a, and a tapped delay line filter 610 having filter coefficients Wtd connected to channel 10d.
  • Delay element 600 provides a delay equal to half of that of the tapped delay line filter 610.
  • the output from the delay element is d(k) and that from filter 610 is d'(k).
  • the Difference of these outputs is taken at element 620 providing an error signal e(k) (where k is a time index used for ease of illustration). The error is fed back to the filter 610.
  • Wtd(k + 1) = Wtd(k) + 2·μtd·X(k)·e(k)   (B.2)
  • where X(k) is the vector of input samples to filter 610 and μtd is a user-selected convergence factor, 0 < μtd < 2.
  • the impulse response of the tapped delay line filter 610 at the end of the adaptation is shown in Fig. 6c.
  • the impulse response is measured, and the position of the peak (the maximum value of the impulse response) relative to the origin O gives the time delay Td between the two sensors, which in turn gives the angle of arrival of the signal.
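A sketch of this estimator: the reference channel is delayed by half the filter length, a tapped delay line filter on the other channel adapts to cancel it, and the offset of the dominant tap from the centre gives Td. The filter length, step size and NLMS normalization below are assumptions:

```python
def estimate_delay(ref, other, taps=9, mu=0.5):
    """Adaptive time-delay estimation: adapt w so that filtering `other`
    predicts the half-length-delayed reference; the peak-tap offset from
    the centre tap is the inter-sensor delay Td (0 = boresight)."""
    w = [0.0] * taps
    half = taps // 2
    for k in range(half, len(other)):
        x = [other[k - i] if k - i >= 0 else 0.0 for i in range(taps)]
        d = ref[k - half]                     # d(k): delayed reference
        y = sum(wi * xi for wi, xi in zip(w, x))
        e = d - y                             # e(k), fed back to the filter
        norm = sum(xi * xi for xi in x) + 1e-9
        w = [wi + (mu / norm) * e * xi for wi, xi in zip(w, x)]
    peak = max(range(taps), key=lambda i: abs(w[i]))
    return peak - half                        # Td in samples
```

A negative or positive result corresponds to the wave front reaching one sensor before the other; a result of 0 indicates arrival from the boresight direction.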
  • the threshold δ at step 530 is selected depending upon the assumed possible degree of departure from the boresight direction from which the target signal might come. In this embodiment, δ is equivalent to ±15°.
  • Normalized Cross Correlation Estimation Cx (STEP 528)
  • the normalized cross correlation between the reference channel 10a and the most distant channel 10d is calculated as follows:
  • where T represents the transpose of the vector, ||·|| represents the norm of the vector, and l is the correlation lag.
  • l is selected to span the delay of interest. For a sampling frequency of 16kHz and a spacing between sensors 10a, 10d of 18cm, the lag l is selected to be five samples for an angle of interest of 15°.
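A lagged, normalized cross correlation of this kind could be computed as below; the peak magnitude gives Cx and its lag a coarse Td (a pure-Python sketch):

```python
def norm_xcorr(a, b, max_lag=5):
    """Return (Cx, Td): the peak normalized cross correlation between
    reference channel a and the most distant channel b, together with
    its lag, scanned over lags -max_lag..+max_lag."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    best_cx, best_lag = 0.0, 0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            u, v = a[lag:], b[:len(b) - lag]
        else:
            u, v = a[:len(a) + lag], b[-lag:]
        denom = (dot(u, u) * dot(v, v)) ** 0.5 or 1.0   # avoid divide-by-zero
        c = dot(u, v) / denom
        if abs(c) > abs(best_cx):
            best_cx, best_lag = c, lag
    return best_cx, best_lag
```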
  • the impulse response of the tapped delay line filter with filter coefficients Wtd at the end of the adaptation in the presence of both signal and interference sources is shown in FIG.7.
  • the filter coefficient peak ratio Pk is calculated from the filter coefficients Wtd as follows:
  • Pk = max{|Wtd(n)| : |n − n0| ≤ Δ} / max{|Wtd(n)| : |n − n0| > Δ}, where n0 is the centre tap
  • Δ is calculated based on the threshold δ at step 530. In this embodiment, with δ equal to ±15°, Δ is equivalent to 2.
  • a low Pk ratio indicates the presence of strong interference signals over the target signal, and a high Pk ratio shows a high target signal to interference ratio.
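One way to realise a peak ratio with this behaviour (an assumed formulation, not necessarily the patent's exact definition) is to compare the largest coefficient magnitude within ±Δ taps of the centre with the largest magnitude outside that window:

```python
def peak_ratio(w, delta=2):
    """Ratio of the in-boresight-window peak of |w| to the peak outside
    the window; high values suggest a dominant boresight source."""
    n0 = len(w) // 2                               # centre tap
    inside = max(abs(w[n]) for n in range(len(w)) if abs(n - n0) <= delta)
    outside = max((abs(w[n]) for n in range(len(w)) if abs(n - n0) > delta),
                  default=0.0)
    return inside / max(outside, 1e-12)            # floor avoids div-by-zero
```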
  • Adaptive Spatial Filter 44 (STEPS 532-536)
  • FIG.8 shows a block diagram of the Adaptive Linear Spatial Filter 44.
  • the function of the filter is to separate the coupled target, interference and noise signals into two types.
  • the objective is to adapt the filter coefficients of filter 44 in such a way as to enhance the target signal and output it in the Sum Channel, and at the same time eliminate the target signal from the coupled signals and output them into the Difference Channels.
  • the adaptive filter elements in filter 44 act as linear spatial prediction filters that predict the signal in the reference channel whenever the target signal is present.
  • the filter stops adapting when the signal is deemed to be absent.
  • the filter coefficients are updated whenever the conditions of steps are met, namely:
  • the adaptive threshold detector detects the presence of signal;
  • the peak ratio exceeds a certain threshold;
  • the running Rsd exceeds a certain threshold;
  • the digitized coupled signal X0 from sensor 10a is fed through a digital delay element 710 of delay z^(−Lsu/2).
  • the digitized coupled signals X1, X2, X3 from sensors 10b, 10c, 10d are fed to respective filter elements 712, 714, 716.
  • the outputs from elements 710, 712, 714, 716 are summed at Summing element 718, the output from the Summing element 718 being divided by four at the divider element 719 to form the Sum channel output signal.
  • the output from delay element 710 is also subtracted from the outputs of the filters 712, 714, 716 at respective Difference elements 720, 722, 724, the output from each Difference element forming a respective Difference channel output signal, which is also fed back to the respective filter 712, 714, 716.
  • the function of the delay element 710 is to time-align the signal from the reference channel 10a with the outputs from the filters 712, 714, 716.
  • the filter elements 712, 714, 716 adapt in parallel using the normalized LMS algorithm given by Equations E.1 to E.8 below, the output of the Sum Channel being given by equation E.1 and the output from each Difference Channel being given by equation E.6:
  • where T denotes the transpose of a vector.
  • Xm(k) and Wsu(k) are column vectors of dimension (Lsu × 1).
  • the weight Wsu(k) is updated using the normalized LMS algorithm as follows:
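Stitching the pieces of Fig. 8 together, a per-sample sketch of the spatial filter might look as follows (the filter length and step size are assumptions; adaptation is frozen whenever the update conditions above are not met):

```python
class SpatialFilter:
    """Sketch of the Fig. 8 structure: Sum = mean of the delayed
    reference and three filter outputs; Difference m = filter m output
    minus the delayed reference, fed back as the adaptation error."""
    def __init__(self, lsu=8, mu=0.5):
        self.lsu, self.mu = lsu, mu
        self.w = [[0.0] * lsu for _ in range(3)]       # NLMS coefficients
        self.lines = [[0.0] * lsu for _ in range(3)]   # delay lines, ch 1-3
        self.ref_line = [0.0] * (lsu // 2 + 1)         # z^(-Lsu/2), ch 0

    def step(self, x0, x1, x2, x3, adapt):
        self.ref_line = [x0] + self.ref_line[:-1]
        d = self.ref_line[-1]                          # delayed reference
        outs, diffs = [], []
        for m, xm in enumerate((x1, x2, x3)):
            self.lines[m] = [xm] + self.lines[m][:-1]
            y = sum(w * x for w, x in zip(self.w[m], self.lines[m]))
            e = y - d                                  # Difference channel m
            if adapt:                                  # predict ref from ch m
                norm = sum(x * x for x in self.lines[m]) + 1e-9
                self.w[m] = [w - (self.mu / norm) * e * x
                             for w, x in zip(self.w[m], self.lines[m])]
            outs.append(y)
            diffs.append(e)
        s = (d + sum(outs)) / 4.0                      # Sum channel
        return s, diffs
```

When the filters converge on a boresight target, the Difference channels carry only residual noise and interference, which is what the later stages rely on.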
  • the running Rsd is computed every N/2 samples, and it is used together with other conditions to test whether that particular sample should trigger an update or not.
  • the running Rsd is calculated as follows:
  • the coefficients of the filter could adapt to the wrong direction or sources.
  • a set of 'best coefficients' is kept and copied back to the beam-former coefficients when, after an update, the beam-former is detected to be pointing in a wrong direction.
  • the set of 'best weights' includes all three filter coefficient vectors (Wsu1 to Wsu3). They are saved based on the following condition: when the filter coefficients Wsu are updated, the calculated Pk2 ratio is compared with the previously stored BPk; if it is above BPk, this new set of filter coefficients becomes the new set of 'best weights' and the current Pk2 ratio is saved as the new BPk.
  • a second mechanism decides when the filter coefficients should be restored from the saved set of 'best weights'. This is done when the filter coefficients are updated and the calculated Pk2 ratio is below both BPk and the threshold TPk.
  • the value of TPk is equal to 0.65.
  • N/2 is the number of samples, in this embodiment 256.
  • ESUM is the sum channel energy and EDIF is the difference channel energy.
  • the energy ratio between the Sum Channel and Difference Channel (RSd) must not exceed a predetermined threshold.
  • the threshold is determined to be about 1.5.
  • the power ratio between the Sum Channel and Difference Channel must not exceed a dynamic threshold, TPrSd.
  • the Energy Ratio Factor RSdf is obtained by passing RSd through a non-linear S-shape transfer function as shown in FIG. 9. Certain ranges of the RSd value can be boosted or suppressed by changing the shape of the transfer function using different sets of threshold levels, SL and SH.
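The exact S-shape transfer function is shown only in FIG. 9; the logistic form below is an assumed stand-in that illustrates how the two threshold levels SL and SH reshape the mapping of RSd.

```python
import math

def s_shape(r_sd, sl=0.5, sh=1.5):
    """Map an energy ratio RSd through an S-shaped curve.

    Values of r_sd below SL are suppressed toward 0 and values above
    SH are boosted toward 1. The logistic form, the slope constant
    and the default SL/SH levels are illustrative assumptions.
    """
    mid = 0.5 * (sl + sh)       # centre of the transition region
    width = max(sh - sl, 1e-9)  # width of the transition region
    return 1.0 / (1.0 + math.exp(-8.0 * (r_sd - mid) / width))
```

Moving SL and SH moves the centre and stretch of the transition, which is how different ranges of RSd get boosted or suppressed.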
  • the dynamic noise power threshold TPrSd is updated based on the following conditions:
  • FIG.10 shows a schematic block diagram of the Adaptive Interference Filter 46. This filter adapts to interference signal and subtracts it from the Sum Channel so as to derive an output with reduced interference noise.
  • the filter 46 takes the outputs from the Sum and Difference Channels of the filter 44, feeds the Difference Channel signals in parallel to another set of adaptive filter elements 750, 752, 754 and feeds the Sum Channel signal to a corresponding delay element 756.
  • the outputs from the three filter elements 750, 752, 754 are subtracted from the output of delay element 756 at Difference element 758 to form an error output ec1, which is fed back to the filter elements 750, 752, 754.
  • the output from filter 46 is also passed to an Adaptive Signal Multiplexer, where it is mixed with the filter output from filter 48 and the result subtracted from the Sum Channel.
  • LMS Least Mean Square algorithm
  • μq is a user-selected factor, 0 < μq < 2, and m = 0, 1, 2, ..., M-1 indexes the channels, in this case 0 to 3.
  • FIG.11 shows a schematic block diagram of the Adaptive Ambient Noise Estimation Filter 48. This filter adapts to the environment noise and subtracts it from the Sum Channel so as to derive an output with reduced noise.
  • the filter 48 takes the outputs from the Sum and Difference Channels of the filter 44, feeds the Difference Channel signals in parallel to another set of adaptive filter elements 760, 762, 764 and feeds the Sum Channel signal to a corresponding delay element 766.
  • the outputs from the three filter elements 760, 762, 764 are subtracted from the output of delay element 766 at Difference element 768 to form an error output ec2, which is fed back to the filter elements 760, 762, 764.
  • the output from filter 48 is also passed to an Adaptive Signal Multiplexer, where it is mixed with the filter output from filter 46 and the result subtracted from the Sum Channel.
  • LMS Least Mean Square algorithm
  • Wz(k+1) = Wz(k) + 2 μz Ym(k) ec2(k)
  • FIG.12 shows a schematic block diagram of the Adaptive Signal Multiplexer. This multiplexer adaptively multiplexes the output Si from interference filter 46 and the output Sn from ambient noise filter 48 to produce two interference signals Ic and Is as follows:
  • the weights (We1, We2) and (Wn1, Wn2) can be changed based on different input signal environment conditions to minimize signal cancellation or improve unwanted signal suppression.
  • the weights are determined based on the following conditions:
  • Ic is subtracted from the Sum Channel Sc so as to derive an output es with reduced noise and interference.
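A minimal sketch of the multiplexing step, assuming simple fixed weight pairs (the patent switches the weights according to the environment conditions; the function name and values here are placeholders):

```python
def adaptive_mux(s_i, s_n, we1=0.7, we2=0.3, wn1=0.3, wn2=0.7):
    """Mix the interference estimate Si and noise estimate Sn.

    Returns (Ic, Is): Ic is the estimate subtracted from the Sum
    channel Sc, and Is feeds the non-linear suppression processor 50.
    The weight pairs (We1, We2) and (Wn1, Wn2) are placeholders;
    in the patent they depend on the signal environment.
    """
    ic = [we1 * a + we2 * b for a, b in zip(s_i, s_n)]
    is_ = [wn1 * a + wn2 * b for a, b in zip(s_i, s_n)]
    return ic, is_
```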
  • in an ideal situation this output es would be almost interference and noise free. In a realistic situation, however, this cannot be achieved: either signal cancellation degrades the target signal quality, or noise and interference feed through and degrade the output signal to noise and interference ratio.
  • the signal cancellation problem is reduced in the described embodiment by use of the Adaptive Spatial Filter 44, which reduces the target signal leakage into the Difference Channel. However, in cases where the signal to noise and interference is very high, some target signal may still leak into these channels.
  • the other output signal from Adaptive Signal Multiplexer / s is fed into the Adaptive Non-Linear Interference and Noise Suppression Processor 50.
  • This processor processes input signals in the frequency domain coupled with the well-known overlap add block-processing technique.
  • Sc(t), es(t) and Is(t) are buffered into a memory as illustrated in FIG.13.
  • the buffer consists of N/2 of new samples and N/2 of old samples from the previous block.
  • a Hanning Window is then applied to the N buffered samples, as illustrated in FIG.14, and expressed mathematically as follows:
  • (Hn) is a Hanning Window of dimension N, N being the dimension of the buffer.
  • the "dot” denotes point-by-point multiplication of the vectors.
  • T is a time index.
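The buffering and windowing just described amount to the following sketch (pure-Python; the Hanning window uses the standard raised-cosine definition, and an even buffer length N is assumed):

```python
import math

def hann(n):
    """Hanning window of length n (standard raised-cosine form)."""
    return [0.5 - 0.5 * math.cos(2.0 * math.pi * k / (n - 1))
            for k in range(n)]

def windowed_block(old_half, new_half):
    """Concatenate the previous N/2 samples with the new N/2 samples
    and multiply point by point with the Hanning window."""
    block = old_half + new_half
    return [b * w for b, w in zip(block, hann(len(block)))]
```

Each call consumes N/2 new samples together with the N/2 samples kept from the previous block, which is what produces the 50% overlap used later in the overlap-add reconstruction.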
  • A modified spectrum is then calculated, as illustrated in Equations H.9 and H.10:
  • the values of the scalars (rs and ri) control the tradeoff between unwanted signal suppression and signal distortion and may be determined empirically.
  • (rs and ri) are calculated as 1/(2^vs) and 1/(2^vi), where vs and vi are scalars.
  • Pi may contain some of the frequency components of Ps due to wrong estimation of Pi. Therefore, frequency scanning is applied to both Ps and Pi to look for peaks in the same frequency components. Those peaks in Pi are then multiplied by an attenuation factor, which is chosen to be 0.1 in this case.
  • the Spectra (Ps) and (Pi) are warped into (Nb) critical bands using the Bark Frequency Scale [See Lawrence Rabiner and Biing-Hwang Juang, Fundamentals of Speech Recognition, Prentice Hall 1993].
  • the warped Bark Spectra of (Ps) and (Pi) are denoted as (Bs) and (Bi).
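Warping a linear-frequency power spectrum into Bark critical bands amounts to summing the power bins that fall into each band. The analytic Bark formula below is a common approximation and an assumption here; the patent simply cites Rabiner and Juang for the scale.

```python
import math

def hz_to_bark(f):
    """Common analytic approximation of the Bark scale."""
    return 13.0 * math.atan(0.00076 * f) + 3.5 * math.atan((f / 7500.0) ** 2)

def warp_to_bark(power, fs, nb):
    """Sum linear-frequency power bins into nb Bark critical bands.

    power -- one-sided power spectrum (list of floats)
    fs    -- sampling frequency in Hz
    nb    -- number of critical bands (Nb)
    """
    n = len(power)
    bands = [0.0] * nb
    top = hz_to_bark(fs / 2.0)  # Bark value at the Nyquist frequency
    for k, p in enumerate(power):
        f = k * fs / (2.0 * (n - 1)) if n > 1 else 0.0
        b = min(int(hz_to_bark(f) / top * nb), nb - 1)
        bands[b] += p
    return bands
```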
  • an unvoiced penalty is applied if Unvoice_Ratio > Unvoice_Th.
  • the values of the voice band upper cutoff k, unvoiced band lower cutoff l, unvoiced threshold Unvoice_Th and amplification factor A are equal to 16, 18, 10 and 8 respectively.
  • a Bark Spectrum of the system noise and environment noise is similarly computed and is denoted as (Bn).
  • B n is updated as follows:
  • Bn = α1*Bn + (1-α1)*Bs;
  • Bn = α2*Bn + (1-α2)*Bs;
  • α1 and α2 are weights which can be chosen empirically so as to maximize unwanted signal and noise suppression with minimized signal distortion.
  • Rp0 and Rpp are column vectors of dimension (Nb x 1), Nb being the dimension of the Bark Scale Critical Frequency Band, and i is a column unity vector of dimension (Nb x 1) as shown below:
  • the division in Equation J.7 is element-by-element.
  • Rpr is also a column vector of dimension (Nb x 1).
  • the value of αi is given in Table 2 below:
  • the value i is set equal to 1 at the onset of a signal, so that αi is equal to 0.01625. The value of i then counts from 1 to 5 on each new block of N/2 samples processed and stays at 5 until the signal is off. It starts from 1 again at the next signal onset, with αi taken accordingly.
  • αi is made variable and starts at a small value at the onset of the signal to prevent suppression of the target signal, then increases, preferably exponentially, to smooth Rpr.
  • Rrr is calculated as follows:
  • the operations in Equation J.8 are again element-by-element.
  • Rrr is a column vector of dimension (Nb x 1).
  • Lx is a column vector of dimension (Nb x 1) as shown below:
  • Lx = [lx(1) lx(2) ... lx(Nb)]^T (J.10)
  • Ly of dimension (Nb x 1) is then defined as:
  • E(nb) is truncated to the desired accuracy.
  • Ly can be obtained using a look-up table approach to reduce computational load.
  • Gb is a column vector of dimension (Nb x 1) as shown: Gb = [g(1) g(2) ... g(Nb)]^T
  • since Gb is still in the Bark Frequency Scale, it is then unwarped back to the normal linear frequency scale of N dimensions.
  • the unwarped Gb is denoted as G.
  • the recovered time domain signal is given by:
  • IFFT denotes an Inverse Fast Fourier Transform, with only the Real part of the inverse transform being taken.
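After the inverse transform, the successive recovered blocks are recombined by overlap-add with a hop of N/2 samples. The sketch below shows only the block recombination (the IFFT itself is omitted; the function name is an assumption):

```python
def overlap_add(blocks, hop):
    """Overlap-add successive equal-length blocks with the given hop.

    blocks -- list of recovered time-domain blocks (each length N),
              e.g. the real part of the inverse FFT of each block
    hop    -- advance between blocks, N/2 in this scheme
    """
    if not blocks:
        return []
    n = len(blocks[0])
    out = [0.0] * (hop * (len(blocks) - 1) + n)
    for i, block in enumerate(blocks):
        for j, v in enumerate(block):
            out[i * hop + j] += v  # overlapping halves sum together
    return out
```

With Hanning-windowed 50%-overlapped blocks, the summed window halves add to a constant, so the recombined signal reproduces the input scale.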
  • the embodiment described is not to be construed as limitative. For example, there can be any number of channels from two upwards.
  • many steps of the method employed are essentially discrete and may be employed independently of the other steps or in combination with some but not all of the other steps.
  • the adaptive filtering and the frequency domain processing may be performed independently of each other, and the frequency domain processing steps, such as the use of the modified spectrum, warping into the Bark scale and use of the scaling factor α, can be viewed as a series of independent tools which need not all be used together.
  • Referring to Figs. 16 and 17, an embodiment of the invention is shown which is a headset system. As shown schematically in Fig. 16, the system has two units, namely a base station 71 and a mobile unit 72.
  • the base unit provides connection to any host system 73 (such as a PC) through a USB (universal serial bus). It acts as a router for streaming audio information between the host system and the mobile unit 72. It is formed with a cradle (not shown) for receiving and holding the mobile unit 72.
  • the cradle is preferably provided with a charging unit co-operating with a rechargeable power source which is part of the mobile unit 72. The charging unit charges the power source while the mobile unit 72 is held by the cradle.
  • the base unit 71 includes at least one aerial 74 for two-way wireless communication with at least one aerial 75 of the mobile unit 72.
  • the mobile unit includes a loudspeaker 76 (shown physically connected to the mobile unit 72 by a wire, though as explained below, this is not necessary), and at least two microphones (audio sensors) 77.
  • the wireless link between mobile unit 72 and base station 71 is a highly secure RF Bluetooth link.
  • Fig. 17 shows the mobile unit 72 in more detail. It has a structure defining an open loop 78 to be placed around the head or neck of a user, for example so as to be supported on the user's shoulders. At the two ends of the loop are multiple microphones 77 (normally 2 or 4 in total), to be placed in proximity to the user's mouth for receiving voice input. One or more batteries 79 may be provided near the microphones 77. In this case there are two antennas 75 embedded in the structure. Away from the antennas, the loop 78 is covered with RF absorbing material. A rear portion 80 of the loop is a flex-circuit containing digital signal processing and RF circuitry.
  • the system further includes an ear speaker (not shown) magnetically coupled to the mobile unit 72 by components (not shown) provided on the mobile unit 72.
  • the user wears the ear speaker in one of his ears, and it allows audio output from the host system 73. This enables two-way communication applications, such as internet telephony and other speech and audio applications.
  • the system includes digital circuitry carrying out a method according to the invention on audio signals received by the multiple microphones 77.
  • Some or all of the circuitry can be within the circuitry 80 and/or within the base unit 71.
  • Figures 18(a) and 18(b) show two ways in which a user can wear the mobile unit 72 having the shape illustrated in Fig. 17.
  • In Fig. 18(a) the user wears the mobile unit 72 resting on the top of his head with the microphones close to his mouth.
  • In Fig. 18(b) the user has chosen to wear the mobile unit 72 supported by his shoulders and with the two arms of the loop embracing his neck, again with the microphones close to his mouth.

Abstract

A headset system is proposed including a headset unit to be worn by a user and having two or more microphones, and a base unit in wireless communication with the headset. Signals received from the microphones are processed using a first adaptive filter to enhance a target signal, and then divided and supplied to a second adaptive filter arranged to reduce interference signals and a third filter arranged to reduce noise. The outputs of the second and third filters are combined, and may be subject to further processing in the frequency domain. The results are transmitted to a speech recognition engine.

Description

System and Apparatus for Speech Communication and Speech Recognition
Field of the invention
The present invention relates to a system and apparatus for speech communication and speech recognition. It further relates to signal processing methods which can be implemented in the system.
Background of the Invention
The present applicant's PCT application PCT/SG99/00119, the disclosure of which is incorporated herein by reference in its entirety, proposes a method of processing signals in which signals received from an array of sensors are subject to a first adaptive filter arranged to enhance a target signal, followed by a second adaptive filter arranged to suppress unwanted signals. The output of the second filter is converted into the frequency domain, and further digital processing is performed in that domain.
The present invention seeks to provide a headset system performing improved signal processing of audio signals and suitable for speech communication.
The present invention further seeks to provide signal processing methods and apparatus suitable for use in a speech communication and/or speech recognition system. Summary of the Invention
In general terms, a first aspect of the present invention proposes a headset system including a base unit and a headset unit to be worn by a user (e.g. resting on the user's head or around the user's shoulders) and having a plurality of microphones, the headset unit and base unit being in mutual wireless communication, and at least one of the base unit and the headset unit having digital signal processing means arranged to perform signal processing in the time domain on audio signals generated by the microphones, the signal processing means including at least one adaptive filter to enhance a wanted signal in the audio signals and at least one adaptive filter to reduce an unwanted signal in the audio signals.
Preferably the digital signal processing means are part of the headset unit.
The headset can be used for communication with the base unit, and optionally with other individuals, especially via the base unit. The headset system may comprise, or be in communication with, a speech recognition engine for recognizing speech of the user wearing the headset unit.
Although the signal processing may be as described in PCT/SG99/00119, more preferably, the signal processing is modified to distinguish between the noise and interference signals. Signals received from the microphones (array of sensors) are processed using a first adaptive filter to enhance a target signal, and then divided and supplied to a second adaptive filter arranged to reduce interference signals and a third filter arranged to reduce noise. The outputs of the second and third filters are combined, and may be subject to further processing in the frequency domain. In fact, this concept provides a second, independent aspect of the invention which is a method of processing signals received from an array of sensors comprising the steps of sampling and digitising the received signals and processing the digitally converted signals, the processing including: filtering the digital signals using a first adaptive filter arranged to enhance a target signal in the digital signals, transmitting the output of the first adaptive filter to a second adaptive filter and to a third adaptive filter, the second filter being arranged to suppress unwanted interference signals, and the third filter being arranged to suppress noise signals; and combining the outputs of the second and third filters.
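Structurally, the processing chain of this second aspect can be summarized as below. The function arguments stand in for the three adaptive filters and the combining step; they are placeholders, not the patent's actual algorithms.

```python
def process(samples, spatial, interference, noise, combine):
    """Skeleton of the claimed chain: first adaptive filter, then two
    parallel filters, then a combination of their outputs."""
    sum_ch, diff_ch = spatial(samples)     # enhance the target signal
    i_est = interference(sum_ch, diff_ch)  # suppress interference
    n_est = noise(sum_ch, diff_ch)         # suppress ambient noise
    return combine(sum_ch, i_est, n_est)
```

Any concrete filters with these shapes can be plugged in; the point is only the topology: one enhancement stage feeding two parallel suppression stages whose outputs are combined.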
The invention further provides signal processing apparatus for performing such a method.
Brief Description of the Drawings
An embodiment of the invention will now be described by way of example with reference to the accompanying drawings in which:
Fig.1 illustrates a general scenario in which an embodiment of the invention may operate.
Fig.2 is a schematic illustration of a general digital signal processing system which is an embodiment of the present invention. Fig.3 is a system level block diagram of the described embodiment of Fig.2.
Fig.4a-d is a flow chart illustrating the operation of the embodiment of Fig.3.
Fig.5 illustrates a typical plot of non-linear energy of a channel and the established thresholds.
Fig.6 (a) illustrates a wave front arriving from a 40 degree off-boresight direction. Fig.6 (b) represents a time delay estimator using an adaptive filter.
Fig.6 (c) shows the impulse response of the filter, indicating a wave front from the boresight direction.
Fig.7 shows the response of the time delay estimator of the filter, indicating an interference signal together with a wave front from the boresight direction.
Fig.8 shows the schematic block diagram of the four-channel Adaptive Spatial Filter.
Fig.9 is a response curve of the S-shape transfer function (S function).
Fig.10 shows the schematic block diagram of the Adaptive Interference Filter. Fig.11 shows the schematic block diagram of the Adaptive Ambient Noise
Estimator.
Fig.12 is a block diagram of Adaptive Signal Multiplexer.
Fig.13 shows an input signal buffer.
Fig.14 shows the use of a Hanning Window on overlapping blocks of signals. Fig.15 illustrates a sudden rise of noise level of the nonlinear energy plot.
Fig. 16 illustrates a specific embodiment of the invention schematically.
Fig. 17 illustrates a headset unit which is a component of the embodiment of
Fig. 16.
Fig. 18, which is composed of Figs. 18(a) and 18(b), shows two ways of wearing the headset unit of Fig. 17.
Detailed Description of the Embodiment of the Invention
Below, with reference to Figs. 16 and 17, we describe a specific embodiment of the invention. Before that, we describe in detail a digital signal processing technique which may be employed by the invention.
FIG. 1 illustrates schematically the operating environment of a signal processing apparatus 5 of the described embodiment of the invention, shown in a simplified example of a room. A target sound signal "s", emitted from a source s' in a known direction and impinging on a sensor array such as the microphone array 10 of the apparatus 5, is coupled with other unwanted signals, namely interference signals u1, u2 from other sources A, B, reflections of these signals u1r, u2r, and the target signal's own reflected signal sr. These unwanted signals cause interference and degrade the quality of the target signal "s" as received by the sensor array. The actual number of unwanted signals depends on the number of sources and the room geometry, but only three reflected (echo) paths and three direct paths are illustrated for simplicity of explanation. The sensor array 10 is connected to processing circuitry 20-60, and there will be a noise input q associated with the circuitry which further degrades the target signal.
An embodiment of signal processing apparatus 5 is shown in FIG.2. The apparatus observes the environment with an array of four sensors such as microphones 10a-10d. Target and noise/interference sound signals are coupled when impinging on each of the sensors. The signal received by each of the sensors is amplified by an amplifier 20a-d and converted to a digital bitstream using an analogue to digital converter 30a-d. The bit streams are fed in parallel to the digital signal processor 40 to be processed digitally. The processor provides an output signal to a digital to analogue converter 50, which is fed to a line amplifier 60 to provide the final analogue output.
FIG.3 shows the major functional blocks of the digital processor in more detail. The multiple input coupled signals are received by the four-channel microphone array 10a-10d, each of which forms a signal channel, with channel 10a being the reference channel. The received signals are passed to a receiver front end which provides the functions of amplifiers 20 and analogue to digital converters 30 in a single custom chip. The four channel digitized output signals are fed in parallel to the digital signal processor 40. The digital signal processor 40 comprises five sub-processors. They are (a) a
Preliminary Signal Parameters Estimator and Decision Processor 42, (b) a Signal Adaptive Filter 44, (c) an Adaptive Interference Filter 46, (d) an Adaptive Noise Estimation Filter 48, and (e) an Adaptive Interference and Noise Cancellation and Suppression Processor 50. The basic signal flow is from processor 42, to processor 44, to processors 46 and 48, to processor 50. The output of processor 42 is referred to as "stage 1" in this process, the output of processor 44 as "stage 2", and the output of processors 46, 48 as "stage 3". These connections are represented by thick arrows in FIG.3. The filtered signal S is output from processor 50. Decisions necessary for the operation of the processor 40 are generally made by processor 42, which receives information from processors 44-50, makes decisions on the basis of that information and sends instructions to processors 44-50, through connections represented by thin arrows in FIG.3. The outputs I, S of the processor 40 are transmitted to a Speech Recognition engine 52.
It will be appreciated that the splitting of the processor 40 into the five component parts 42, 44, 46, 48 and 50 is essentially notional and is made to assist understanding of the operation of the processor. The processor 40 would in reality be embodied as a single multi-function digital processor performing the functions described under control of a program with suitable memory and other peripherals. Furthermore, the operation of the speech recognition engine 52 also could in principle be incorporated into the operation of the processor 40.
A flowchart illustrating the operation of the processors is shown in FIG 4a-d and this will firstly be described generally. A more detailed explanation of aspects of the processor operation will then follow.
The front end 20,30 processes samples of the signals received from array 10 at a predetermined sampling frequency, for example 16kHz. The processor 42
includes an input buffer 43 that can hold N such samples for each of the four channels. Upon initialization, the apparatus collects a block of N/2 new signal samples for all the channels at step 500, so that the buffer holds a block of N/2 new samples and a block of N/2 previous samples. The processor 42 then removes any DC from the new samples and pre-emphasizes or whitens the samples at step 502.
Following this, the total non-linear energy of a stage 1 signal sample Er1 and a stage 3 signal sample Er3 is calculated at step 504. The samples from the reference channel 10a are used for this purpose, although any other channel could be used.
There then follows a short initialization period at step 506 in which the first 20 blocks of N/2 samples of signal after start-up are used to estimate a Bark Scale system noise Bn at step 516 and a histogram Pb at step 518. During this short period, an assumption is made that no target signals are present. The updated Pb is then used with the updated Pbs to estimate the environment noise energy En, and two detection thresholds, a noise threshold Tn1 and a larger signal threshold Tn2, are calculated by processor 42 from En using scaling factors. The routine then moves to point B and point F.
After this initialization period, Pbs and Bn are updated when an update condition is fulfilled.
At step 508, it is determined whether the stage 3 signal energy Er3 is greater than the noise threshold Tn1. If not, the Bark Scale system noise Bn is updated at step 510 and the routine proceeds to step 512. If so, the routine skips step 510 and proceeds to step 512. A test is made at step 512 to see if the signal energy Er1 is greater than the noise threshold Tn1. If so, Pb and Pbs are estimated at step 518 for computing En, Tn1 and Tn2. The routine then moves to point B and point F. If not, only Pbs is updated, and it is used with the previous Pb to compute En, Tn1 and Tn2 at step 514. Tn1 and Tn2 will follow the environment noise level closely. The histogram is used to determine whether the signal energy level shows a steady state increase, which would indicate an increase in noise, since the speech target signal will show considerable variation over time and thus can be distinguished. This is illustrated in FIG.15, in which the signal noise level rises from an initial level to a new level which exceeds both thresholds.
A test is made at step 520 to see if the estimated energy Er1 in the reference channel 10a exceeds the second threshold Tn2. If so, a counter CL is reset and a candidate target signal is deemed to be present. The apparatus only wishes to process candidate target signals that impinge on the array 10 from a known direction normal to the array, hereinafter referred to as the boresight direction, or from a limited angular departure therefrom, in this embodiment plus or minus 15 degrees. Therefore, the next stage is to check for any signal arriving from this direction.
At step 528, three coefficients are established, namely a correlation coefficient Cx, a correlation time delay Td and a filter coefficient peak ratio Pk which together provide an indication of the direction from which the target signal arrived.
At step 530, three tests are conducted to determine if the candidate target signal is an actual target signal. First, the cross correlation coefficient Cx must exceed a predetermined threshold Tc; second, the size of the delay coefficient must be less than a value θ, indicating that the signal has impinged on the array within the predetermined angular range; and lastly, the filter coefficient peak ratio Pk must exceed a predetermined threshold TPk1. If these conditions are not met, the signal is not regarded as a target signal and the routine passes to step 534 (non-target signal filtering). If the conditions are met, the confirmed target signal is fed to step 532 (target signal filtering) of the Signal Adaptive Spatial Filter 44.
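The three-way test of step 530 can be written as a single predicate. This is a sketch: the patent names the thresholds Tc, θ and TPk1 but does not give their values, so the defaults below are illustrative only.

```python
def is_target(cx, td, pk, tc=0.8, theta=4.0, tpk1=1.2):
    """Step-530 style target check.

    cx -- cross correlation coefficient (must exceed Tc)
    td -- correlation time delay (magnitude must stay below theta,
          i.e. within roughly +/-15 degrees of boresight)
    pk -- filter coefficient peak ratio (must exceed TPk1)
    All three conditions must hold for a confirmed target signal.
    """
    return cx > tc and abs(td) < theta and pk > tpk1
```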
If at step 520 the estimated energy Er1 in the reference channel 10a is found not to exceed the second threshold Tn2, the target signal is considered not to be present and the routine passes to step 534 via steps 522-526, in which the counter CL is incremented. At step 524, CL is checked against a threshold TCL. If the threshold is reached, block leak compensation is performed on the filter coefficient Wtd and the counter CL is reset at step 526. This block leak compensation step improves the adaptation speed of the filter coefficient Wtd to the direction of fast changing target sources and environment. If the threshold is not reached, the program moves to step 534 described below.
Following step 530, the confirmed target signal is fed to step 532 at the Signal Adaptive Spatial Filter 44. The filter is instructed to perform adaptive filtering at steps 532 and 536, in which the filter coefficients Wsu are adapted to provide a "target signal plus noise" signal in the reference channel and "noise only" signals in the remaining channels using the Least Mean Square (LMS) algorithm. In order to prevent the filter coefficients being updated wrongly, a running energy ratio RSd is computed at every sample at step 532. This running energy ratio RSd is used as a condition to test whether the filter coefficients corresponding to that particular sample should be updated or not. The filter 44 output channel equivalent to the reference channel is for convenience referred to as the Sum Channel, and the filter 44 outputs from the other channels as the Difference Channels. The signals so processed will be, for convenience, referred to as A'.
If the signal is considered to be a noise signal, the routine passes to step 534 in which the signals are passed through filter 44 without the filter coefficients being adapted, to form the Sum and Difference channel signals. The signals so processed will be referred to for convenience as B'.
The effect of the filter 44 is to enhance the signal if this is identified as a target signal but not otherwise.
At step 538, a new filter coefficient peak ratio Pk2 is calculated based on the filter coefficients Wsu. At step 539, if the signals are not A' signals from step 532, the routine passes to step 548. Else, the peak ratio calculated at step 538 is compared with a best peak ratio BPk at step 540. If it is larger than the best peak ratio, the value of the best peak ratio is replaced by this new peak ratio Pk2 and all the filter coefficients Wsu are stored as the best filter coefficients at step 542. If it is not, the peak ratio Pk2 is again compared with a threshold TPk at step 544. If the peak ratio is below the threshold, a wrong update of the filter coefficients is deemed to have occurred and the filter coefficients are restored to the previously stored best filter coefficients at step 546. If it is above the threshold, the routine passes to step 548.
At step 548, an energy ratio RSd and a power ratio PrSd between the Sum Channel and the Difference Channels are estimated by processor 42. Besides these, two other coefficients are also established, namely an energy ratio factor RSdf and a second stage non-linear signal energy Er2. Following this, the adaptive noise power threshold TPrSd is updated based on the calculated power ratio PrSd.
At this point, the signal is divided into two parallel paths, namely point C and point D. Following point C, the signal is subject to a further test at step 552 to determine whether noise or interference is present. First, if the signals are A' signals from step 532, the routine passes to step 556. Second, if the estimated energy Er2 is found not to exceed the second threshold Tn2, the signal is considered not to be present and the routine passes to step 556. Third, the filter coefficient peak ratio Pk2 is compared to a threshold TPk2. If it is higher than the threshold, this may indicate that there is a target signal and the routine passes to step 556. Lastly, RSd and PrSd are compared to thresholds TRSd and TPrSd respectively. If the ratios are both lower than their thresholds, this indicates probable noise, but if higher, this may indicate that there has been some leakage of the target signal into the Difference channel, indicating the presence of a target signal after all. For such target signals, the routine also passes to step 556. For all other non-target signals, the routine passes to step 554.
At steps 554-558, the signals are processed by the Adaptive Interference Filter 46, the purpose of which is to reduce the unwanted signals. The filter 46 at step 554 is instructed to perform adaptive filtering on the non-target signals with the intention of adapting the filter coefficients to reduce the unwanted signal in the Sum channel to some small error value ec1. This computed ec1 is also fed back to step 554 to prevent signal cancellation caused by wrong updating of the filter coefficients.
In the alternative, at step 556, the target signals are fed to the filter 46 but this time, no adaptive filtering takes place, so the Sum and Difference signals pass through the filter.
The output signals from processor 46 are thus the Sum channel signal Sc1 and the filtered Difference signal Si.
Following point D, the signals pass through a number of test conditions at step 560. First, if the signals are A' signals from step 532, the routine passes to step 564. Second, if the signals are classified as non-target signals by step 552 (C signals), the routine passes to step 564. Third, RSdf and PrSd are compared to the thresholds TrSdf and TPrSd respectively. If the ratios are both lower than their thresholds, this indicates a probable ambient noise signal; if higher, this may indicate that there has been some leakage of the target signal into the Difference channel, indicating the presence of a target signal after all. Lastly, if the estimated energy Er2 is found to exceed the first threshold Tn1, signals are considered to be present. For such signals, the routine also passes to step 564. For all other ambient noise signals, the routine passes to step 562.
At steps 562-566, the signals are processed by the Adaptive Ambient Noise Estimation Filter 48, the purpose of which is to reduce the unwanted ambient noise. The filter 48, at step 562, is instructed to perform adaptive filtering on the ambient noise signals with the intention of adapting the filter coefficients to reduce the unwanted ambient noise in the Sum channel to some small error value ec2.
In the alternative, at step 564, the signals are fed to the filter 48 but this time, no adaptive filtering takes place, so the Sum and Difference signals pass through the filter.
The output signals from processor 48 are thus the Sum channel signal Sc2 and the filtered Difference signal Sn.
At step 568, the output signals from processor 46 (Sc1 and Si) and the output signals from processor 48 (Sc2 and Sn) are processed by an adaptive signal multiplexer. Here, those signals are multiplexed and a weighted average error signal es(t), a sum signal Sc(t) and a weighted average interference signal ls(t) are produced. These signals are then collected for the new N/2 samples together with the last N/2 samples from the previous block, and a Hanning Window Hn is applied to the collected samples as shown in FIG.13 to form vectors Sh, Ih and Eh. This is an overlapping technique, with overlapping vectors Sh, Ih and Eh being formed from past and present blocks of N/2 samples continuously. This is illustrated in FIG.14. A Fast Fourier Transform is then performed on the vectors Sh, Ih and Eh to transform them into frequency domain equivalents Sf, If and Ef at step 570.
At step 572, a modified spectrum is calculated for the transformed signals to provide "pseudo" spectrum values Ps and Pi.
In order to reduce signal distortion due to wrong estimation of the noise spectra, a frequency scan is performed between Ps and Pi to look for peaks in the same frequency components at step 574. Attenuation is then performed on those peaks in Pi to reduce the signal cancellation effect. Ps and Pi are then warped onto the same Bark Frequency Scale to provide Bark Frequency scaled values Bs and Bi at step 576. At step 578, a voiced/unvoiced detection is performed on Bs and Bi to reduce the signal cancellation on the unvoiced signal.
A weighted combination Bi' of Bn (through path F) and Bi is then made at step 580, and this is combined with Bs to compute the Bark Scale non-linear gain Gb at step 582.
Gb is then unwarped back to the normal frequency domain to provide a gain value G at step 584, and this is then used at step 586 to compute an output spectrum Sout using the signal spectra Sf and Ef from step 570. This gain-adjusted spectrum suppresses the interference signals, the ambient noise and the system noise.
An inverse FFT is then performed on the spectrum Sout at step 588 and the output signal is then reconstructed from the overlapping signals using the overlap-add procedure at step 590. Hence, besides providing the Speech Recognition Engine 52 with a processed signal S, the system also provides a set of useful information indicated as I in Fig. 3. This set of information may include any one or more of:
1. The direction of speech signal, Td (step 528).
2. Signal Energy, Er1 (step 504).
3. Noise thresholds, Tn1 & Tn2 (steps 514 and 518).
4. Estimated SINR (signal to interference noise ratio) and SNR (signal to noise ratio), and RSd (step 548).
5. Target speech signal presence, A' (steps 530 and 532).
6. Spectrum of processed speech signal, Sout (step 586).
7. Potential speech start and end point.
8. Interference signal spectrum, If (step 570).
Major steps in the above described flowchart will now be described in more detail.
Non-Linear Energy Estimation (STEPS 504, 548)
At each stage of adaptive filtering, the reference signal is taken at a delay of half the filter tap-size. Thus, at the end of the two stages of adaptive filtering, the signal is delayed by Lsu/2 and Luq/2 respectively. In order for the decision-making mechanisms of the different stages to accurately follow these delays, the signal energy calculations are performed at three junctions, resulting in three pairs of signal energies.
The first signal energy is calculated at no delay and is used by the time delay estimation and the stage 1 Adaptive Spatial Filter.
Er1 = (1/(J − 2)) · Σ_{i=1}^{J−2} [x(i)² − x(i+1)·x(i−1)] ...A.1
The second signal energy is calculated at a delay of half of the Adaptive Spatial Filter tap-size, Lsu/2.
Er2 = (1/(J − 2)) · Σ_{i=Lsu/2}^{J−Lsu/2−2} [x(i)² − x(i+1)·x(i−1)] ...A.2
The last signal energy is calculated at a delay of Lsu/2 + Luq/2 and is used by noise updating.
Er3 = (1/(J − 2)) · Σ_{i=Lsu/2+Luq/2}^{J−(Lsu/2+Luq/2)−2} [x(i)² − x(i+1)·x(i−1)] ...A.3
These delays are implemented by means of buffering.
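The three delayed energy calculations above can be sketched as follows. This is an illustrative Python sketch of equations A.1-A.3; the function names and the example block length are assumptions, not part of the embodiment.

```python
def nonlinear_energy(x, start, end):
    # Sum of the non-linear (Teager-like) terms x(i)^2 - x(i+1)*x(i-1),
    # normalized by J - 2 as in equations A.1-A.3.
    J = len(x)
    acc = 0.0
    for i in range(start, end + 1):
        acc += x[i] * x[i] - x[i + 1] * x[i - 1]
    return acc / (J - 2)

def three_stage_energies(x, Lsu, Luq):
    # Er1: no delay (A.1); Er2: delayed by Lsu/2 (A.2);
    # Er3: delayed by Lsu/2 + Luq/2 (A.3).
    J = len(x)
    d = Lsu // 2 + Luq // 2
    Er1 = nonlinear_energy(x, 1, J - 2)
    Er2 = nonlinear_energy(x, Lsu // 2, J - Lsu // 2 - 2)
    Er3 = nonlinear_energy(x, d, J - d - 2)
    return Er1, Er2, Er3
```

For a linear ramp x(i) = i, every non-linear term equals 1, so Er1 evaluates to exactly 1; a constant signal gives zero energy, which is what makes this estimator insensitive to DC offsets.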
Threshold Estimation and Updating (STEPS 514, 518)
The processor 42 estimates two thresholds Tn1 and Tn2 based on a statistical approach. Two histograms, referred to as Pb and Pbs, are computed in the same way, except that Pbs is computed on every block of N/2 samples whereas Pb is computed only on the first 20 blocks of N/2 samples or when Er1 < Tn1, which means that neither a target signal nor an interference signal is present. Er1 is used as the input sample of the histograms, and the length of the histograms is a number M (which may for example be 24). Each histogram is found from the following equation:
Hi = α·Hi + (1 − α)·δ(i − D), i = 1 … M ...B.1
where Hi stands for the i-th element of either Pb or Pbs, and
δ(i − D) = 1 if i = D, and 0 otherwise ...B.4
Thus, α is a forgetting factor. For Pb, α is chosen empirically to be 0.9988 and for Pbs, α is equal to 0.9688.
The value of D which is used in Equation B.1 is determined using Table 1 below. Specifically, we find the value of Emax in Table 1 which is lowest but still above the input sample Er1, and the corresponding D is used in Equation B.1. Thus, each D labels a corresponding band of values for Er1. For example, if Er1 is 412, this falls in the band up to Emax = 424, i.e. the range corresponding to D = 13, and accordingly D = 13 is used in Equation B.1. Thus, if Er1 continues to stay at a certain level, say in the band up to Emax(D), the weight of the corresponding D value in the histogram will build up to become the maximum. This indicates that the current running average noise level is approximately Emax(D).
[Table 1, which maps each histogram index D to an energy band upper limit Emax(D), is reproduced as an image in the original document.]
Table 1
After computing Pb and Pbs, the peak positions of Pb and Pbs are labelled pp and pps respectively. pp is reset to be equal to (pps − 5) if (pps − pp) > 5.
Below is the pseudo-C code which uses pp to estimate Tn1 and Tn2:
Np = Emax[pp];
Rpp = En / (En + Np);
gamma = sfun(Rpp, 0, 0.8);
Ep = gamma*Ep + (1 - gamma)*En;
if (En >= Ep)
    En = 0.7*En + 0.3*Ep;
else if (En <= Er_old)
{
    En = 0.9995*En + 0.0005*Ep;
    Er_old = En;
}
else
    En = 0.995*En + 0.005*Ep;
The Emax values in Table 1 were chosen experimentally based on a statistical method. Samples (in this case, Er1) were collected in various environments (office, car, supermarket, etc.) and a histogram was generated from the collected samples. From the histogram, a probability density function was computed and from there the Emax values were decided.
Similarly, all the factors in the first order recursive filters and the lower and upper limits of the s-function above are chosen empirically. Once the noise energy En is obtained, the two signal detection thresholds Tn1 and Tn2 are established as follows:
Tn1 = δ1·En ...B.5
Tn2 = δ2·En ...B.6
δ1 and δ2 are scalar values that are used to set the thresholds so as to optimize signal detection and minimize false signal detection. As shown in FIG.5, Tn1 should be above the system noise level, with Tn2 sufficient to be generally breached by the potential target signal. These factors may be found by trial and error. In this embodiment, δ1 = 1.375 and δ2 = 1.675 have been found to give good results.
In comparison to the algorithms for setting Tn1 and Tn2 in PCT/SG99/00119, the noise level can be tracked more robustly and faster. A further motivation for the above algorithm for finding the thresholds is to distinguish between signal and noise in all environments, especially noisy environments (car, supermarket, etc.). This means that the user can use the embodiment anywhere.
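The histogram update of equation B.1 and the threshold rules B.5/B.6 can be sketched as below. The helper names are illustrative, and the Emax lookup of Table 1 (not legible in the source) is replaced here by a caller-supplied band index D.

```python
def update_histogram(H, D, alpha):
    # One step of equation B.1: leaky average with a unit impulse at band D.
    for i in range(len(H)):
        H[i] = alpha * H[i] + (1.0 - alpha) * (1.0 if i == D else 0.0)

def detection_thresholds(En, d1=1.375, d2=1.675):
    # Equations B.5 and B.6: detection thresholds as scaled noise energy,
    # using the delta values given for this embodiment.
    return d1 * En, d2 * En

# If the input energy keeps falling in the same band, that band's weight
# builds up and becomes the histogram peak (the running noise level).
H = [0.0] * 24
for _ in range(200):
    update_histogram(H, 13, 0.9688)
peak = max(range(24), key=lambda i: H[i])
```

The peak index `peak` plays the role of pp above; the corresponding Emax(peak) would then feed the pseudo-C noise-energy smoothing.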
Time Delay Estimation Td (STEP 528)
FIG 6A illustrates a single wave front impinging on the sensor array. The wave front impinges on sensor 10d first (A as shown) and at a later time impinges on sensor 10a (A' as shown), after a time delay td. This is because the signal originates at an angle of 40 degrees from the boresight direction. If the signal originated from the boresight direction, the time delay td would ideally be zero.
Time delay estimation is performed using a tapped delay line time delay estimator included in the processor 42, which is shown in Fig. 6B. The estimator has a delay element 600, having a delay Z^(−L0/2), connected to the reference channel 10a, and a tapped delay line filter 610 having filter coefficients Wtd connected to channel 10d. Delay element 600 provides a delay equal to half of that of the tapped delay line filter 610. The output from the delay element is d(k) and that from filter 610 is d'(k). The difference of these outputs is taken at element 620, providing an error signal e(k) (where k is a time index used for ease of illustration). The error is fed back to the filter 610. The Least Mean Squares (LMS) algorithm is used to adapt the filter coefficients Wtd as follows:
Wtd(k+1) = Wtd(k) + 2·μtd·S10d(k)·e(k) ...B.1
μtd = βtd / ||S10d(k)||² ...B.2
S10d(k) = [x3(k), x3(k−1), … x3(k−L0+1)]^T ...B.3
where βtd is a user selected convergence factor, 0 < βtd < 2, || || denotes the norm of a vector, k is a time index and L0 is the filter length.
e(k) = d(k) − d'(k) ...B.4
d'(k) = Wtd(k)^T · S10d(k) ...B.5
The impulse response of the tapped delay line filter 610 at the end of the adaptation is shown in Fig. 6C. The impulse response is measured, and the position of the peak (the maximum value of the impulse response) relative to the origin O gives the time delay Td between the two sensors, which in turn gives the angle of arrival of the signal. In the case shown, the peak lies at the centre, indicating that the signal comes from the boresight direction (Td = 0). The threshold θ at step 506 is selected depending upon the assumed possible degree of departure from the boresight direction from which the target signal might come. In this embodiment, θ is equivalent to ±15°.
Normalized Cross Correlation Estimation (STEP 528)
The normalized crosscorrelation between the reference channel 10a and the most distant channel 10d is calculated as follows:
Samples of the signals from the reference channel 10a and channel 10d are buffered into shift registers X and Y where X is of length J samples and Y is of length K samples, where J>K, to form two independent vectors Xr and Yr:
Xr = [x(1), x(2), … x(J)]^T and Yr = [y(1), y(2), … y(K)]^T
A time delay between the signals is assumed, and to capture this difference, J is made greater than K. The difference J − K is selected based on the angle of interest. The normalized cross-correlation is then calculated as follows:
C(l) = (Xrl^T · Yr) / (||Xrl|| · ||Yr||)
where Xrl = [xr(l+1), xr(l+2), … xr(K+l)]^T
where T represents the transpose of the vector, || || represents the norm of the vector and l is the correlation lag. l is selected to span the delay of interest. For a sampling frequency of 16kHz and a spacing between sensors 10a, 10d of 18cm, the lag l is selected to be five samples for an angle of interest of 15°.
The threshold Tc is determined empirically. Tc = 0.65 is used in this embodiment.
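A minimal sketch of the normalized cross-correlation scan follows, assuming the vector forms given above; the function names are illustrative.

```python
import math

def norm_xcorr(x, y, lag):
    # Correlate the K-sample vector y against the lag-shifted segment of x,
    # normalized by the product of the vector norms (Cauchy-Schwarz bounds
    # the result to [-1, 1]).
    K = len(y)
    xl = x[lag:lag + K]
    num = sum(a * b for a, b in zip(xl, y))
    den = math.sqrt(sum(a * a for a in xl)) * math.sqrt(sum(b * b for b in y))
    return num / den if den else 0.0

def max_xcorr(x, y, max_lag=5):
    # Scan lags 0..max_lag (five samples for 15 degrees at 16 kHz, 18 cm
    # spacing) and return the peak, to be compared against the threshold Tc.
    return max(norm_xcorr(x, y, l) for l in range(max_lag + 1))
```

When the reference buffer contains the distant-channel signal shifted by two samples, the correlation peaks at exactly 1.0 at lag 2, well above Tc = 0.65.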
Block Leak compensation LMS for Time Delay Estimation (STEP 526)
In the time delay estimation LMS algorithm, a modified leak compensation form is used. This is simply implemented by:
Wtd = α·Wtd (where α = forgetting_factor ≈ 0.98)
This leak compensation form has the property of adapting faster to the direction of fast-changing sources and environments.
Filter Coefficient Peak Ratio Pk (STEP 528)
The impulse response of the tapped delay line filter with filter coefficients Wtd at the end of the adaptation, with both signal and interference sources present, is shown in FIG.7. The filter coefficient vector Wtd is as follows:
Wtd = [w(0), w(1), … w(L0−1)]^T
When both signal and interference sources are present, there will be more than one peak in the tapped delay line filter coefficients. The Pk ratio is calculated as follows:
A = max |Wtd(n)|, for L0/2 − Δ ≤ n ≤ L0/2 + Δ
B = max |Wtd(n)|, for 0 ≤ n < L0/2 − Δ or L0/2 + Δ < n ≤ L0 − 1
Pk = A / (A + B)
Δ is calculated based on the threshold θ at step 530. In this embodiment, with θ equal to ±15°, Δ is equivalent to 2. A low Pk ratio indicates the presence of strong interference signals over the target signal, while a high Pk ratio indicates a high target signal to interference ratio.
Adaptive Spatial Filter 44 (STEPS 532-536)
FIG.8 shows a block diagram of the Adaptive Linear Spatial Filter 44. The function of the filter is to separate the coupled target, interference and noise signals into two types. The first, in a single output channel termed the Sum Channel, is an enhanced target signal with weakened interference and noise, i.e. weakened signals not from the target signal direction. The second, in the remaining channels termed the Difference Channels, which in the four channel case comprise three separate outputs, is intended to comprise the interference and noise signals alone.
The objective is to adapt the filter coefficients of filter 44 in such a way as to enhance the target signal and output it in the Sum Channel, and at the same time eliminate the target signal from the coupled signals and output them into the Difference Channels.
The adaptive filter elements in filter 44 act as linear spatial prediction filters that predict the signal in the reference channel whenever the target signal is present. The filter stops adapting when the signal is deemed to be absent.
The filter coefficients are updated whenever the following conditions are met: the adaptive threshold detector detects the presence of a signal; the peak ratio exceeds a certain threshold; and the running RSd exceeds a certain threshold.
As illustrated in FIG.8, the digitized coupled signal X0 from sensor 10a is fed through a digital delay element 710 of delay Z^(−Lsu/2). Digitized coupled signals X1, X2, X3 from sensors 10b, 10c, 10d are fed to respective filter elements 712, 714, 716. The outputs from elements 710, 712, 714, 716 are summed at summing element 718, the output from the summing element 718 being divided by four at the divider element 719 to form the Sum channel output signal. The output from delay element 710 is also subtracted from the outputs of the filters 712, 714, 716 at respective Difference elements 720, 722, 724, the output from each Difference element forming a respective Difference channel output signal, which is also fed back to the respective filter 712, 714, 716. The function of the delay element 710 is to time-align the signal from the reference channel 10a with the outputs from the filters 712, 714, 716.
The filter elements 712,4,6 adapt in parallel using the normalized LMS algorithm given by Equations E.1...E.8 below, the output of the Sum Channel being given by equation E.1 and the output from each Difference Channel being given by equation E.6:
Sc(k) = (1/M) · [X0(k − Lsu/2) + S(k)] ...E.1
where: S(k) = Σ_{m=1}^{M−1} Sm(k) ...E.2
Sm(k) = (Wsu^m(k))^T · Xm(k) ...E.3
where m is 0, 1, 2 … M−1, M being the number of channels, in this case 0…3, and T denotes the transpose of a vector.
Xm(k) = [xm(k), xm(k−1), … xm(k−Lsu+1)]^T ...E.4
Wsu^m(k) = [w0^m(k), w1^m(k), … w(Lsu−1)^m(k)]^T ...E.5
Where Xm(k) and Wsu (k) are column vectors of dimension (Lsu x 1).
The weight vector Wsu^m(k) is updated using the normalized LMS algorithm as follows:
dcm(k) = X0(k − Lsu/2) − Sm(k) ...E.6
Wsu^m(k+1) = Wsu^m(k) + 2·μsu^m·Xm(k)·dcm(k) ...E.7
where:
μsu^m = βsu / ||Xm(k)||² ...E.8
and where βsu is a user selected convergence factor, 0 < βsu ≤ 2, || || denotes the norm of a vector and k is a time index.
Running RSd within Adaptive Spatial Filter (STEP 532)
To prevent the filter coefficients being updated wrongly, testing the update conditions once per block of N/2 samples is insufficient. A running RSd is therefore computed within every block of N/2 samples and is used together with the other conditions to test whether a particular sample should trigger an update or not.
The running RSd is calculated as follows:
RSd = Esum / (Esum + Eerr) ...F.9
where:
Esum = 0.98·Esum + 0.02·abs[(Sc(k+1))² − Sc(k)·Sc(k+2)] ...F.10
Eerr = 0.98·Eerr + 0.02·abs[(dcm(k+1))² − dcm(k)·dcm(k+2)] ...F.11
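The running ratio of equations F.9-F.11 can be sketched as follows for the Sum channel and one Difference channel; the function name is illustrative, and the 0.98/0.02 smoothing constants are those given above.

```python
def running_rsd(Sc, dcm):
    # First-order recursive estimates of the non-linear energies of the Sum
    # channel (F.10) and one Difference channel (F.11), combined into the
    # running ratio of F.9 at every sample.
    esum = eerr = 0.0
    history = []
    for k in range(len(Sc) - 2):
        esum = 0.98 * esum + 0.02 * abs(Sc[k + 1] ** 2 - Sc[k] * Sc[k + 2])
        eerr = 0.98 * eerr + 0.02 * abs(dcm[k + 1] ** 2 - dcm[k] * dcm[k + 2])
        history.append(esum / (esum + eerr) if esum + eerr else 0.0)
    return history
```

A strong Sum-channel signal with only a small leakage into the Difference channel drives the ratio towards 1, which is the condition under which updating is allowed.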
Adaptive Spatial Filter Coefficient Restoration (STEPS 540-546)
In the event of wrong updating, the coefficients of the filter could adapt to the wrong direction or sources. To reduce this effect, a set of 'best coefficients' is kept and copied to the beam-former coefficients whenever, after an update, the beam-former is detected to be pointing in a wrong direction.
Two mechanisms are used for this. A set of 'best weights' includes all three filter coefficient vectors (Wsu^1 - Wsu^3). They are saved based on the following condition: when there is an update of the filter coefficients Wsu, the calculated Pk2 ratio is compared with the previously stored BPk; if it is above BPk, this new set of filter coefficients becomes the new set of 'best weights' and the current Pk2 ratio is saved as the new BPk.
A second mechanism is used to decide when the filter coefficients should be restored from the saved set of 'best weights'. This is done when the filter coefficients are updated and the calculated Pk2 ratio is below both BPk and the threshold TPk. In this embodiment, the value of TPk is equal to 0.65.
Calculation of Energy Ratio RSd (STEP 548)
This is performed as follows:
RSd = ESUM / EDIF
where ESUM = Σ_{k=1}^{J} (Sc(k))² is the Sum channel energy, EDIF = (1/(M−1)) · Σ_{m=1}^{M−1} Σ_{k=1}^{J} (dcm(k))² is the Difference channel energy, and J = N/2, the number of samples, in this embodiment 256.
The energy ratio between the Sum Channel and Difference Channel (RSd) must not exceed a predetermined threshold. In the four channel case illustrated here the threshold is determined to be about 1.5.
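As a sketch, the block energy ratio above might be computed as below; the function name is illustrative, and the averaging over the M−1 Difference channels is an assumption consistent with the four-channel description.

```python
def energy_ratio_rsd(Sc, diff_channels):
    # Sum-channel block energy divided by the average Difference-channel
    # block energy over one block of J = N/2 samples.
    e_sum = sum(s * s for s in Sc)
    e_dif = sum(sum(d * d for d in ch) for ch in diff_channels) / len(diff_channels)
    return e_sum / e_dif if e_dif else float("inf")
```

A value well above the ~1.5 threshold indicates that most of the block energy sits in the Sum channel, i.e. a probable target signal.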
Calculation of Power Ratio PrSd (STEP 548)
This is performed as follows:
PrSd = PSUM / PDIF
where PSUM = (1/J) · Σ_{k=1}^{J} (Sc(k))² is the Sum channel power, PDIF = (1/(M−1)) · (1/J) · Σ_{m=1}^{M−1} Σ_{k=1}^{J} (dcm(k))² is the Difference channel power, and J = N/2, the number of samples, in this embodiment 128.
The power ratio between the Sum Channel and the Difference Channels must not exceed a dynamic threshold, TprSd.
Calculation of Energy Ratio Factor RSdf (STEP 548)
The Energy Ratio Factor RSdf is obtained by passing RSd through a non-linear S-shaped transfer function as shown in FIG. 9. Certain ranges of the RSd value can be boosted or suppressed by changing the shape of the transfer function using different sets of threshold levels, SL and SH.
Dynamic Noise Power Threshold Updating TprSd (STEP 550)
This dynamic noise power threshold, TprSd, is updated based on the following conditions:
If the reference channel signal energy is more than 700 and the power ratio is less than 0.45 for 64 consecutive processing blocks, then TprSd = α1·TprSd + (1 − α1)·PrSd. Else, if the reference channel signal energy is less than 700, then TprSd = α2·TprSd + (1 − α2)·Max_Prsd.
In this embodiment, α1 = 0.67, α2 = 0.98 and Max_Prsd = 1.3 have been found to give good results.
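The update rules above can be sketched as follows; the counter argument is an assumption about how the "64 consecutive processing blocks" condition would be tracked by the caller.

```python
def update_tprsd(tprsd, er1, prsd, low_ratio_blocks,
                 a1=0.67, a2=0.98, max_prsd=1.3):
    # Track the threshold towards the measured power ratio only when the
    # reference energy is high and the ratio has stayed below 0.45 for 64
    # consecutive blocks; relax it back towards Max_Prsd at low energy.
    if er1 > 700 and low_ratio_blocks >= 64:
        return a1 * tprsd + (1 - a1) * prsd
    if er1 < 700:
        return a2 * tprsd + (1 - a2) * max_prsd
    return tprsd
```

The two smoothing constants make the threshold fall quickly when sustained interference is confirmed (α1 = 0.67) but recover only slowly in quiet conditions (α2 = 0.98).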
Adaptive Interference Filter 46 (STEPS 554-558)
FIG.10 shows a schematic block diagram of the Adaptive Interference Filter 46. This filter adapts to interference signal and subtracts it from the Sum Channel so as to derive an output with reduced interference noise.
The filter 46 takes the outputs from the Sum and Difference Channels of the filter 44, feeding the Difference Channel signals in parallel to another set of adaptive filter elements 750, 752, 754 and the Sum Channel signal to a corresponding delay element 756. The outputs from the three filter elements 750, 752, 754 are subtracted from the output of delay element 756 at Difference element 758 to form an error output ec1, which is fed back to the filter elements 750, 752, 754. The output from filter 46 is also passed to an Adaptive Signal Multiplexer, to be mixed with the filter output from filter 48 and subtracted from the Sum Channel.
Again, the Least Mean Square algorithm (LMS) is used to adapt the filter coefficients Wuq as follows:
ec1(k) = Sc(k − Luq/2) − Σ_{m=1}^{M−1} um(k) ...(I.1)
where um(k) = (Wuq^m(k))^T · Ym(k) ...(I.2)
Ym(k) = [dcm(k), dcm(k−1), … dcm(k−Luq+1)]^T ...(I.3)
Wuq^m(k+1) = Wuq^m(k) + 2·μuq^m·Ym(k)·ec1(k) ...(I.4)
μuq^m = βuq / (||Ym(k)||² + ||ec1(k)||²) ...(I.5)
and where βuq is a user selected convergence factor, 0 < βuq ≤ 2, and m is 0, 1, 2 … M−1, the number of channels, in this case 0…3.
When only the target signal is present and the Interference filter is updated wrongly, the error signal in equation I.1 will be very large and the norm of Ym will be very small. Hence, by including the norm of the error signal ||ec1|| in the step-size calculation of equation I.5, μ becomes very small whenever a wrong update of the Interference filter occurs. This helps to prevent wrong updating of the weight coefficients of the Interference filter and hence reduces the effect of signal cancellation.
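A single coefficient update with the error norm folded into the step size can be sketched as below. The exact form of equation I.5 is not legible in the source, so the denominator here is an assumption capturing the behaviour described above; the function name is illustrative.

```python
import numpy as np

def interference_update(W, Y, sc_delayed, beta=0.5):
    # e_c1(k) for one Difference channel; a large error inflates the
    # denominator and so shrinks the step, limiting wrong adaptation when
    # only the target signal is present.
    e = sc_delayed - float(W @ Y)
    mu = beta / (float(Y @ Y) + e * e + 1e-12)
    return W + 2.0 * mu * e * Y, e
```

With a quiet Difference channel and a large error, the effective step collapses, so the filter barely moves; with a strong interference-dominated Difference channel and small error, it adapts at close to the normalized LMS rate.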
Adaptive Ambient Noise Estimation Filter 48 (STEPS 562-566)
FIG.11 shows a schematic block diagram of the Adaptive Ambient Noise Estimation Filter 48. This filter adapts to the environment noise and subtracts it from the Sum Channel so as to derive an output with reduced noise.
The filter 48 takes the outputs from the Sum and Difference Channels of the filter 44, feeding the Difference Channel signals in parallel to another set of adaptive filter elements 760, 762, 764 and the Sum Channel signal to a corresponding delay element 766. The outputs from the three filter elements 760, 762, 764 are subtracted from the output of delay element 766 at Difference element 768 to form an error output ec2, which is fed back to the filter elements 760, 762, 764. The output from filter 48 is also passed to the Adaptive Signal Multiplexer, to be mixed with the filter output from filter 46 and subtracted from the Sum Channel.
Again, the Least Mean Square (LMS) algorithm is used to adapt the filter coefficients Wnq as follows:
ec2(k) = Sc(k − Lnq/2) − Sn(k)
where: Sn(k) = Σ_{m=1}^{M−1} dcm(k) and dcm(k) = (Wnq^m(k))^T · Ym(k)
Wnq^m(k+1) = Wnq^m(k) + 2·μnq^m·Ym(k)·ec2(k)
μnq^m = βnq / ||Ym(k)||²
and where βnq is a user selected convergence factor, 0 < βnq ≤ 2, and m is 0, 1, 2 … M−1, the number of channels, in this case 0…3.
Adaptive Signal Multiplexer (STEP 568)
FIG.12 shows a schematic block diagram of the Adaptive Signal Multiplexer. This multiplexer adaptively multiplexes the output Si from interference filter 46 and the output Sn from ambient noise filter 48 to produce two interference signals Ic and Is as follows:
Ic(t) = We1·Si(t) + We2·Sn(t)
Is(t) = Wn1·Si(t) + Wn2·Sn(t)
The weights (We1, We2) and (Wn1, Wn2) can be changed based on the different input signal environment conditions so as to minimize signal cancellation or improve unwanted signal suppression. In this embodiment, the weights are determined based on the following conditions:
If a target signal is detected and the updating conditions for filter 46 (552) and filter 48 (560) are false, then We1 = 0, We2 = 1.0, Wn1 = 0.8 and Wn2 = 1.0.
Else, if no target signal is detected and the updating condition for filter 46 (552) is true, then We1 = 1.0, We2 = 1.0, Wn1 = 1.0 and Wn2 = 1.0.
Else, if no target signal is detected, the updating condition for filter 46 (552) is false and the updating condition for filter 48 (560) is true, then We1 = 0, We2 = 1.0, Wn1 = 1.0 and Wn2 = 1.0.
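The three weight-selection rules can be sketched directly; the fall-through default when none of the three conditions matches is an assumption, as the text does not specify it.

```python
def multiplexer_weights(target_detected, update_46, update_48):
    # Returns (We1, We2, Wn1, Wn2) per the conditions above.
    if target_detected and not update_46 and not update_48:
        return 0.0, 1.0, 0.8, 1.0
    if not target_detected and update_46:
        return 1.0, 1.0, 1.0, 1.0
    if not target_detected and not update_46 and update_48:
        return 0.0, 1.0, 1.0, 1.0
    return 0.0, 1.0, 1.0, 1.0  # assumed default, not given in the text

def mux(Si, Sn, w):
    # Ic = We1*Si + We2*Sn ; Is = Wn1*Si + Wn2*Sn (element-wise).
    We1, We2, Wn1, Wn2 = w
    Ic = [We1 * a + We2 * b for a, b in zip(Si, Sn)]
    Is = [Wn1 * a + Wn2 * b for a, b in zip(Si, Sn)]
    return Ic, Is
```

Note how, when a target is present, We1 drops to zero so the interference-filter output Si is kept out of Ic, protecting the target from cancellation.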
Ic is subtracted from the Sum Channel Sc so as to derive an output es with reduced noise and interference. This output es would be almost interference and noise free in an ideal situation. In a realistic situation, however, this cannot be achieved: either signal cancellation will occur, degrading the target signal quality, or noise and interference will feed through, degrading the output signal to noise and interference ratio. The signal cancellation problem is reduced in the described embodiment by use of the Adaptive Spatial Filter 44, which reduces the target signal leakage into the Difference Channels. However, in cases where the signal to noise and interference ratio is very high, some target signal may still leak into these channels.
To further reduce the target signal cancellation problem and unwanted signal feed through to the output, the other output signal from Adaptive Signal Multiplexer /s is fed into the Adaptive Non-Linear Interference and Noise Suppression Processor 50.
Adaptive Non-Linear Interference and Noise Suppression Processor 50 (STEPS 570-590)
This processor processes input signals in the frequency domain coupled with the well-known overlap add block-processing technique.
Sc(t), es(t) and ls(t) are buffered into a memory as illustrated in FIG.13. The buffer consists of N/2 new samples and N/2 old samples from the previous block.
A Hanning Window is then applied to the N buffered samples as illustrated in FIG.14, expressed mathematically as follows:
Sh = Sc(t) • Hn ...(H.3)
Eh = es(t) • Hn ...(H.4)
Ih = ls(t) • Hn ...(H.5)
where Hn is a Hanning Window of dimension N, N being the dimension of the buffer. The "dot" denotes point-by-point multiplication of the vectors. T is a time index.
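The buffering and windowing can be sketched as follows, with np.hanning standing in for Hn and the 50% overlap framing following FIG.13; the function name is illustrative.

```python
import numpy as np

def windowed_block(prev_half, new_half):
    # Concatenate the last N/2 samples of the previous block with the N/2
    # new samples, then apply an N-point Hanning window (equations H.3-H.5).
    buf = np.concatenate([prev_half, new_half])
    return buf * np.hanning(len(buf))
```

Each of Sc(t), es(t) and ls(t) is framed this way before the FFT of equations H.6-H.8 below.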
The resultant vectors [Sh], [Eh] and [lh] are transformed into the frequency domain using Fast Fourier Transform algorithm as illustrated in equation H.6, H.7 and H.8 below:
Sf = FFT(Sh) (H.6)
Ef = FFT(Eh) (H.7)
If = FFT(Ih) (H.8)
A modified spectrum is then calculated, which is illustrated in Equations H.9 and H.10:
Ps = |Re(Sf)| + |Im(Sf)| + F(Sf)·rs (H.9)
Pi = |Re(If)| + |Im(If)| + F(If)·ri (H.10)
where "Re" and "Im" refer to taking the absolute values of the real and imaginary parts, rs and ri are scalars and F(Sf) and F(If) denote a function of Sf and If respectively.
One preferred function F, using a power function, is shown below in equations H.11 and H.12, where "conj" denotes the complex conjugate:
Ps = |Re(Sf)| + |Im(Sf)| + (Sf·conj(Sf))·rs (H.11)
Pi = |Re(If)| + |Im(If)| + (If·conj(If))·ri (H.12)
A second preferred function F, using a multiplication function, is shown below in equations H.13 and H.14:
Ps = |Re(Sf)| + |Im(Sf)| + |Re(Sf)|·|Im(Sf)|·rs (H.13)
Pi = |Re(If)| + |Im(If)| + |Re(If)|·|Im(If)|·ri (H.14)
The values of the scalars rs and ri control the tradeoff between unwanted signal suppression and signal distortion and may be determined empirically. rs and ri are calculated as 1/(2^vs) and 1/(2^vi), where vs and vi are scalars. In this embodiment, vs = vi is chosen as 8, giving rs = ri = 1/256. As vs and vi are reduced, the amount of suppression increases.
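A sketch of the power-function variant H.11 follows, assuming vs = 8 so that rs = 1/256; the function name is illustrative.

```python
import numpy as np

def modified_spectrum(F, v=8):
    # |Re| + |Im| plus the squared magnitude F*conj(F) scaled by
    # r = 1/2**v, per equation H.11 applied element-wise to an FFT vector.
    r = 1.0 / 2 ** v
    return np.abs(F.real) + np.abs(F.imag) + (F * F.conj()).real * r
```

Lowering v raises r, which weights the squared-magnitude term more heavily and increases suppression at the cost of more distortion, matching the tradeoff described above.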
Frequency Scan for Similar Peaks between Ps and Pi
Pi may contain some of the frequency components of Ps due to wrong estimation of Pi. Therefore, frequency scanning is applied to both Ps and Pi to look for peaks in the same frequency components. Those peaks in Pi are then multiplied by an attenuation factor, which is chosen to be 0.1 in this case.
The spectra Ps and Pi are warped into Nb critical bands using the Bark Frequency Scale [see Lawrence Rabiner and Biing-Hwang Juang, Fundamentals of Speech Recognition, Prentice Hall 1993]. The number of Bark critical bands depends on the sampling frequency used. For a sampling frequency of 16kHz, there are Nb = 22 critical bands. The warped Bark Spectra of Ps and Pi are denoted Bs and Bi.
Voiced/Unvoiced Detection and Amplification
This is used to detect voiced or unvoiced signals from the Bark critical bands of the sum signal and hence reduce the effect of signal cancellation on the unvoiced signal. It is performed as follows:
Vsum = Σ_{n=0}^{k} Bs(n), where k is the voiced band upper cutoff
UVsum = Σ_{n=l}^{Nb} Bs(n), where l is the unvoiced band lower cutoff
Unvoice_Ratio = UVsum / Vsum
If Unvoice_Ratio > Unvoice_Th:
Bs(n) = Bs(n) × A, where l ≤ n ≤ Nb
In this embodiment, the values of the voiced band upper cutoff k, the unvoiced band lower cutoff l, the unvoiced threshold Unvoice_Th and the amplification factor A are equal to 16, 18, 10 and 8 respectively.
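A sketch of the voiced/unvoiced test follows, using 0-based band indices; the exact indexing convention of the cutoffs k and l is an assumption, and the function name is illustrative.

```python
def unvoiced_boost(Bs, k=16, l=18, th=10.0, A=8.0):
    # Compare the energy below the voiced-band cutoff with the energy above
    # the unvoiced-band cutoff; boost the unvoiced bands when the latter
    # dominates, so the later gain stage does not cancel fricatives.
    Bs = list(Bs)
    vsum = sum(Bs[:k])
    uvsum = sum(Bs[l:])
    if vsum > 0 and uvsum / vsum > th:
        Bs[l:] = [b * A for b in Bs[l:]]
    return Bs
```

With the embodiment's values, a frame whose energy sits almost entirely in the top Bark bands trips the ratio test and the high bands are multiplied by 8.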
A Bark Spectrum of the system noise and environment noise is similarly computed and is denoted Bn. Bn is first established during system initialization as Bn = Bs, and continues to be updated whenever no target signal is detected by the system, i.e. during any silence period. Bn is updated as follows:
if ((Er3 < Tn1) || (loop_cnt < 20))
{
    if (Er3 < Tn1) α = 0.98; else α = 0.90;
    En = α*En + (1-α)*Er1;
    Bn = α*Bn + (1-α)*Bs;
}
Using Bs, Bi and Bn, a non-linear technique is used to estimate a gain Gb as follows:
First, the unwanted signal Bark Spectrum is combined with the system noise Bark Spectrum using an appropriate weighting function, as illustrated in Equation J.1:
Bi' = Ω1·Bi + Ω2·Bn (J.1)
Ω1 and Ω2 are weights which can be chosen empirically so as to maximize unwanted signal and noise suppression while minimizing signal distortion. In this embodiment, Ω1 = 1.0 and Ω2 = 0.25.
Following that, a post signal to noise ratio is calculated using Equations J.2 and J.3 below:
Rpo = Bs / Bi' (J.2)
The division in equation J.2 means element-by-element division and not vector division. Rp0 and Rpp are column vectors of dimension (Nb x1), Nb being the dimension of the Bark Scale Critical Frequency Band and i is a column unity vector of dimension (Nb x 1) as shown below:
Rpp = Rpo − i (J.3)
where Rpo = [rpo(1), … rpo(Nb)]^T, Rpp = [rpp(1), … rpp(Nb)]^T and i = [1, 1, … 1]^T.
If any of the rpp elements of Rpp are less than zero, they are set equal to zero.
Using the Decision Direct Approach [see Y. Ephraim and D. Malah: Speech Enhancement Using Optimal Non-Linear Spectrum Amplitude Estimation; Proc. IEEE International Conference Acoustics Speech and Signal Processing (Boston) 1983, pp1118-1121.], the a-priori signal to noise ratio Rpr is calculated as follows:
Rpr = (1 − βi)·Rpp + βi·(B0 / Bn) (J.7)
The division in Equation J.7 means element-by-element division. B0 is a column vector of dimension (Nb × 1) and denotes the output signal Bark Spectrum from the previous block, B0 = Gb × Bs (see Equation J.15); B0 is initially zero. Rpr is also a column vector of dimension (Nb × 1). The value of βi is given in Table 2 below:
[Table 2, giving the value of βi for each index i = 1…5, is reproduced as an image in the original document; at i = 1, βi = 0.01625.]
Table 2
The value of i is set equal to 1 at the onset of a signal, so the βi value is initially 0.01625. The value of i then counts from 1 to 5 over each new block of N/2 samples processed and stays at 5 until the signal is off. i starts from 1 again at the next signal onset and βi is taken accordingly.
Instead of βi being constant, in this embodiment βi is made variable: it starts at a small value at the onset of the signal, to prevent suppression of the target signal, and increases, preferably exponentially, to smooth Rpr.
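The decision-directed smoothing of equation J.7 can be sketched per band as below; the names are illustrative, and `noise` stands for the Bn term of the equation.

```python
def a_priori_snr(Rpp, B0, noise, beta):
    # Rpr = (1 - beta)*Rpp + beta*(B0/noise), element-wise, with negative
    # Rpp elements clamped to zero as specified for equation J.3.
    return [(1.0 - beta) * max(r, 0.0) + beta * (b / n if n else 0.0)
            for r, b, n in zip(Rpp, B0, noise)]
```

At signal onset a small beta makes the estimate follow the instantaneous post-SNR Rpp; as beta grows over the first five blocks, the previous-block term dominates and the estimate is smoothed.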
From this, Rrr is calculated as follows:
Rrr = Rpr / (i + Rpr) (J.8)
The division in Equation J.8 is again element-by-element. Rrr is a column vector of dimension (Nb × 1).
From this, Lx is calculated:
Lx = Rrr • Rpo (J.9)
The value of Lx is limited to π (≈3.14). The multiplication in Equation J.9 means element-by-element multiplication. Lx is a column vector of dimension (Nb × 1) as shown below:
Lx = [lx(1), lx(2), … lx(Nb)]^T (J.10)
A vector Ly of dimension (Nb × 1) is then defined as:
Ly = [ly(1), ly(2), … ly(Nb)]^T (J.11)
where nb = 1, 2 … Nb. Then ly is given as:
ly(nb) = exp(E(nb)/2) (J.12)
and
E(nb) = −0.57722 − log(lx(nb)) + lx(nb) − lx(nb)²/4 + lx(nb)³/18 − lx(nb)⁴/96 (J.13)
E(nb) is truncated to the desired accuracy. Ly can be obtained using a look-up table approach to reduce computational load.
Finally, the Gain Gb is calculated as follows:
Gb = Rrr • Ly    (J.14)
The "dot" again implies element-by-element multiplication. Gb is a column vector of dimension (Nb x 1) as shown:
Gb = [g(1), g(2), ..., g(Nb)]^T    (J.15)
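Purely as an illustrative sketch (not part of the original disclosure), Equations J.8 to J.14 can be collected into a single per-band gain computation. The function name and the small clipping guard for silent bands are additions here; the series for E(nb) is truncated at the fourth-order term as in Equation J.13.

```python
import numpy as np

EULER_GAMMA = 0.57722  # Euler's constant as truncated in Eq. J.13

def bark_gain(rpr, rpo):
    """Per-band gain Gb from Eqs. J.8-J.14; all operations element-by-element.

    rpr : a-priori SNR per Bark band (Eq. J.7), shape (Nb,)
    rpo : a-posteriori SNR per Bark band, shape (Nb,)
    """
    rrr = rpr / (1.0 + rpr)                    # Eq. J.8
    lx = np.minimum(rrr * rpo, np.pi)          # Eq. J.9, limited to pi
    lx = np.maximum(lx, 1e-6)                  # guard ln() for silent bands
    # Truncated series for the exponential integral E1(lx) (Eq. J.13)
    e = (-EULER_GAMMA - np.log(lx) + lx
         - lx**2 / 4.0 + lx**3 / 18.0 - lx**4 / 96.0)
    ly = np.exp(e / 2.0)                       # Eq. J.12
    return rrr * ly                            # Eq. J.14
```

As the text notes, in a real-time implementation Ly would normally come from a look-up table rather than being evaluated per block.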
As Gb is still in the Bark Frequency Scale, it is then unwrapped back to the normal linear frequency scale of N dimensions. The unwrapped Gb is denoted as G.
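The unwrapping step itself is not spelled out in the text; one common realisation, assumed here for illustration (the function name and band-edge layout are hypothetical), is to hold each Bark band's gain constant over the linear-frequency bins making up that band:

```python
import numpy as np

def unwrap_bark_gain(gb, band_edges, n_bins):
    """Expand the Bark-scale gain Gb (Nb values) to N linear frequency bins
    by repeating each band's gain over that band's bins.

    gb         : Bark-scale gain vector (length Nb)
    band_edges : Nb+1 bin indices delimiting the Bark bands (assumed layout)
    n_bins     : N, the linear-frequency dimension
    """
    g = np.empty(n_bins)
    for b in range(len(band_edges) - 1):
        g[band_edges[b]:band_edges[b + 1]] = gb[b]
    return g
```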
The output spectrum with unwanted signal suppression is given as:
Sf' = (1 - Rsdf)•(G•Sf) + Rsdf•Ef    (J.16)
The "•" again implies element-by-element multiplication. In Equation J.16, if Rsdf is high (implying high signal energy relative to interference energy) the output signal spectrum is weighted more towards Ef than towards the noise-suppressed part (G•Sf), to prevent signal cancellation caused by the noise suppression.
The recovered time domain signal is given by:
St = Re(IFFT(Sf'))    (J.17)
IFFT denotes an Inverse Fast Fourier Transform, with only the Real part of the inverse transform being taken.
Finally, the output time domain signal is obtained by overlap-adding with the previous block of output signal:
Sout(k) = St(k) + So(k),  k = 1, 2, ..., N/2    (J.18)
where So is the overlap buffer retained from the previous block:
So(k) = St(k + N/2)    (J.19)
with So initially zero.
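Putting Equations J.16 to J.19 together, one block of the reconstruction might look like the following sketch (illustrative only: the exact overlap-add indexing of Equations J.18 and J.19 is not legible in the source, so a standard arrangement with N/2-sample overlap is assumed, and the function and argument names are hypothetical):

```python
import numpy as np

def reconstruct_block(g, sf, ef, rsdf, s_old):
    """Output spectrum weighting (Eq. J.16), inverse transform (Eq. J.17)
    and overlap-add (Eqs. J.18/J.19, assumed layout) for one block.

    g     : unwrapped gain vector G (length N)
    sf    : desired-signal spectrum Sf (length N, complex)
    ef    : spectrum Ef favoured when Rsdf is high (length N, complex)
    rsdf  : scalar signal-to-interference weighting in [0, 1]
    s_old : last N/2 time samples retained from the previous block

    Returns (N/2 output samples for this block, new overlap state).
    """
    sf_out = (1.0 - rsdf) * (g * sf) + rsdf * ef   # Eq. J.16
    st = np.real(np.fft.ifft(sf_out))              # Eq. J.17, real part only
    n = len(st)
    out = st[: n // 2] + s_old                     # Eq. J.18: overlap-add
    return out, st[n // 2:]                        # Eq. J.19: next-block state
```

The returned state is simply fed back in as `s_old` on the next call, with an all-zero buffer at the start of processing.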
The embodiment described is not to be construed as limitative. For example, there can be any number of channels from two upwards. Furthermore, as will be apparent to one skilled in the art, many steps of the method employed are essentially discrete and may be employed independently of the other steps or in combination with some but not all of the other steps. For example, the adaptive filtering and the frequency domain processing may be performed independently of each other and the frequency domain processing steps such as the use of the modified spectrum, warping into the Bark scale and use of the scaling factor β, can be viewed as a series of independent tools which need not all be used together.
Turning now to Figs. 16 and 17, an embodiment of the invention is shown which is a headset system. As shown schematically in Fig. 16, the system has two units, namely a base station 71 and a mobile unit 72.
The base unit provides connection to any host system 73 (such as a PC) through a USB (universal serial bus). It acts as a router for streaming audio information between the host system and the mobile unit 72. It is formed with a cradle (not shown) for receiving and holding the mobile unit 72. The cradle is preferably provided with a charging unit co-operating with a rechargeable power source which is part of the mobile unit 72. The charging unit charges the power source while the mobile unit 72 is held by the cradle.
The base unit 71 includes at least one aerial 74 for two-way wireless communication with at least one aerial 75 of the mobile unit 72. The mobile unit includes a loudspeaker 76 (shown physically connected to the mobile unit 72 by a wire, though as explained below, this is not necessary), and at least two microphones (audio sensors) 77. The wireless link between the mobile unit 72 and the base station 71 is a highly secure RF Bluetooth link.
Fig. 17 shows the mobile unit 72 in more detail. It has a structure defining an open loop 78 to be placed around the head or neck of a user, for example so as to be supported on the user's shoulders. At the two ends of the loop are multiple microphones 77 (normally two or four in total), to be placed in proximity to the user's mouth for receiving voice input. One or more batteries 79 may be provided near the microphones 77. In this case there are two antennas 75 embedded in the structure. Away from the antennas, the loop 78 is covered with RF-absorbing material. A rear portion 80 of the loop is a flex-circuit containing digital signal processing and RF circuitry.
The system further includes an ear speaker (not shown) magnetically coupled to the mobile unit 72 by components (not shown) provided on the mobile unit 72. The user wears the ear speaker in one of his ears, and it allows audio output from the host system 73. This enables two-way communication applications, such as internet telephony and other speech and audio applications.
Preferably, the system includes digital circuitry carrying out a method according to the invention on audio signals received by the multiple microphones 77. Some or all of the circuitry can be within the circuitry 80 and/or within the base unit 71.
Figures 18(a) and 18(b) show two ways in which a user can wear the mobile unit 72 having the shape illustrated in Fig. 17. In Fig. 18(a) the user wears the mobile unit 72 resting on the top of his head with the microphones close to his mouth. In Fig. 18(b) the user has chosen to wear the mobile unit 72 supported by his shoulders and with the two arms of the loop embracing his neck, again with the microphones close to his mouth.
Use of first, second etc. in the claims should only be construed as a means of identification of the integers of the claims, not of process step order. Any novel feature or combination of features disclosed is to be taken as forming an independent invention whether or not specifically claimed in the appendant claims of this application as initially filed.

Claims
1. A headset system including a base unit and a headset unit to be worn by a user and having a plurality of microphones, the headset unit and base unit being in mutual wireless communication, and at least one of the base unit and the headset unit having digital signal processing means arranged to perform signal processing in the time domain on audio signals generated by the microphones, the digital signal processing means including at least one adaptive filter to enhance a wanted signal in the audio signals and at least one adaptive filter to reduce an unwanted signal in the audio signals.
2. A headset system according to claim 1 in which the base unit includes a cradle for holding the headset unit.
3. A headset system according to claim 1 or claim 2 in which the headset unit is associated with a loudspeaker operable by the headset unit for generating audio signals to the user.
4. A headset system according to claim 1 in which the digital signal processing means includes: a first adaptive filter arranged to enhance a target signal in the digital signals, and a second adaptive filter and a third adaptive filter each receiving the output of the first adaptive filter, the second filter being arranged to suppress unwanted interference signals, and the third filter being arranged to suppress noise signals.
5. A headset system according to claim 4 in which the digital processing means is adapted to combine the outputs of the second and third adaptive filters, convert the combined signal to the frequency domain and perform further processing in the frequency domain.
6. A headset system according to claim 5 in which an output Sj(t) of the second filter and an output Sn(t) of the third filter are linearly combined using weighting factors to derive two interference signals, a first of the interference signals Ic being subtracted from the output of the first filter, and a second of the interference signals Is being converted into the frequency domain.
7. A headset system according to any of claims 4 to 6 in which the second and third filter are not adapted if it is determined that a target signal is present.
8. A headset system according to any of claims 4 to 7 in which the second filter is not updated if it is determined that an interference signal is not present.
9. A headset system according to claim 7 or claim 8 in which the digital signal processing means is arranged at intervals to determine signal energy, and to derive at least one noise threshold from a plurality of values of the signal energy, said determination including determining whether a further signal energy is above the noise threshold.
10. A headset system according to claim 9 in which the derivation of said noise threshold includes using the plurality of signal energy values to derive a histogram representing the statistical frequencies of signal energy values in each of a number of bands, and deriving the noise threshold from a signal energy value Emax associated with the band having the highest histogram value.
11. A headset system according to any of claims 4 to 10 in which the digital signal processing means comprises a fourth adaptive filter for determining the direction of arrival of the target signal.
12. A headset system according to claim 11 in which the weights of the fourth adaptive filter are updated including repeatedly performing an update process which attenuates each existing weight value by a forgetting factor α.
13. A headset system according to claim 11 or 12, in which the digital signal processing means is adapted to determine a ratio Pk indicating the ratio of the highest central weight value A of the fourth adaptive filter to the sum of A and the highest peripheral weight value B, the digital signal processing means only adapting the first filter if the ratio Pk is above a given value TPk1.
14. A headset system according to claim 13 in which, following an adaptation of the first filter, the digital signal processing means calculates a new value Pk2 of the ratio, determines whether the value of Pk2 is below the previous maximum value of Pk2 and below a threshold TPk, and if so restores at least one of the first, second and third filters to its previous state.
15. A headset system according to claim 13 or claim 14 when dependent on claim 8 in which the determination that an interference signal is not present includes a determination that the value of said ratio is below a threshold TPk2.
16. A headset system according to any of claims 4 to 15 in which the weights of the second filter are adapted by a weight updating factor μ which varies inversely with an error output ec1 of the second filter.
17. A headset system according to claim 5 in which the combined signals are transformed into two frequency domain signals, a desired signal Sf and an interference signal If; Sf and If are transformed into respective modified spectra Ps and Pi; and the modified spectra are warped into respective Bark spectra Bs and Bi.
18. A headset system according to claim 17 in which, prior to said warping, frequency scanning is applied to the modified spectra Ps and Pi, and peaks which are found to be common to both are attenuated in Pi.
19. A headset system according to claim 17 or claim 18 in which a ratio is derived of the sum of the values of Bs over the Bark critical bands up to the voice band upper cutoff, and the sum of the values of Bs over the Bark critical bands at and above the unvoiced band lower cutoff.
20. A headset system according to claim 19 in which, if the ratio is above a given threshold, the values of Bs above the unvoiced band lower cutoff are amplified.
21. A headset system according to any preceding claim further including a speech recognition engine receiving the output of the digital signal processing means.
22. A headset system according to claim 21 in which the speech recognition engine receives from the digital signal processing means information indicating any one or more of: a) a direction of a target signal Td, b) a signal energy Er1, c) a noise threshold used by the digital signal processing means, d) an estimated SINR (target signal to interference ratio) and SNR (target signal to noise ratio), e) a signal A' indicating the presence of target speech, f) a spectrum of processed speech signal Sout, g) potential speech start and end points, and h) an interference signal spectrum, If (step 570).
23. A headset system according to any preceding claim in which the headset unit comprises two arms for location proximate the mouth of the user and for positioning to either side of the user's head.
24. A headset system according to claim 23 in which the headset is suitable for positioning supported by the user's shoulders with the arms embracing the user's neck.
25. A headset system according to claim 23 or claim 24 in which at least one microphone is provided on a free end of each of the arms.
26. A headset unit for use in the headset system of any preceding claim.
27. A method of processing signals received from an array of sensors comprising the steps of sampling and digitising the received signals and processing the digitally converted signals, the processing including: filtering the digital signals using a first adaptive filter arranged to enhance a target signal in the digital signals; transmitting the output of the first adaptive filter to a second adaptive filter and to a third adaptive filter, the second filter being arranged to suppress unwanted interference signals, and the third filter being arranged to suppress noise signals; and combining the outputs of the second and third filters.
28. Signal processing apparatus arranged to carry out a method according to claim 27.
29. A microphone headset comprising first and second microphones disposed at respective ends of a support, the support being adapted to be worn around the neck or head of a user.
PCT/SG2002/000149 2001-09-12 2002-07-02 System and apparatus for speech communication and speech recognition WO2003036614A2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US10/487,229 US7346175B2 (en) 2001-09-12 2002-07-02 System and apparatus for speech communication and speech recognition
AU2002363054A AU2002363054A1 (en) 2001-09-12 2002-07-02 System and apparatus for speech communication and speech recognition
EP02802082A EP1425738A2 (en) 2001-09-12 2002-07-02 System and apparatus for speech communication and speech recognition

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SG200105600-1 2001-09-12
SG200105600 2001-09-12

Publications (2)

Publication Number Publication Date
WO2003036614A2 true WO2003036614A2 (en) 2003-05-01
WO2003036614A3 WO2003036614A3 (en) 2004-03-18

Family

ID=20430832

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SG2002/000149 WO2003036614A2 (en) 2001-09-12 2002-07-02 System and apparatus for speech communication and speech recognition

Country Status (4)

Country Link
US (1) US7346175B2 (en)
EP (1) EP1425738A2 (en)
AU (1) AU2002363054A1 (en)
WO (1) WO2003036614A2 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1617419A2 (en) * 2004-07-15 2006-01-18 Bitwave Private Limited Signal processing apparatus and method for reducing noise and interference in speech communication and speech recognition
US6999541B1 (en) 1998-11-13 2006-02-14 Bitwave Pte Ltd. Signal processing apparatus and method
EP1729492A2 (en) 2005-05-31 2006-12-06 Bitwave PTE Ltd. System and apparatus for wireless communication with acoustic echo control and noise cancellation
EP2129168A1 (en) * 2008-05-28 2009-12-02 Yat Yiu Cheung Microphone neck supporting member for hearing aid
CN102142259A (en) * 2010-01-28 2011-08-03 三星电子株式会社 Signal separation system and method for automatically selecting threshold to separate sound source

Families Citing this family (60)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4195267B2 (en) 2002-03-14 2008-12-10 インターナショナル・ビジネス・マシーンズ・コーポレーション Speech recognition apparatus, speech recognition method and program thereof
US6910911B2 (en) 2002-06-27 2005-06-28 Vocollect, Inc. Break-away electrical connector
EA011361B1 (en) * 2004-09-07 2009-02-27 Сенсир Пти Лтд. Apparatus and method for sound enhancement
KR100677396B1 (en) * 2004-11-20 2007-02-02 엘지전자 주식회사 A method and a apparatus of detecting voice area on voice recognition device
US20070116300A1 (en) * 2004-12-22 2007-05-24 Broadcom Corporation Channel decoding for wireless telephones with multiple microphones and multiple description transmission
US20060135085A1 (en) * 2004-12-22 2006-06-22 Broadcom Corporation Wireless telephone with uni-directional and omni-directional microphones
US20060133621A1 (en) * 2004-12-22 2006-06-22 Broadcom Corporation Wireless telephone having multiple microphones
US20060147063A1 (en) * 2004-12-22 2006-07-06 Broadcom Corporation Echo cancellation in telephones with multiple microphones
US7983720B2 (en) * 2004-12-22 2011-07-19 Broadcom Corporation Wireless telephone with adaptive microphone array
US8509703B2 (en) * 2004-12-22 2013-08-13 Broadcom Corporation Wireless telephone with multiple microphones and multiple description transmission
EP1865745A4 (en) * 2005-04-01 2011-03-30 Panasonic Corp Handset, electronic device, and communication device
US7876856B2 (en) * 2005-06-23 2011-01-25 Texas Instrumentals Incorporated Quadrature receiver with correction engine, coefficient controller and adaptation engine
US20100130198A1 (en) * 2005-09-29 2010-05-27 Plantronics, Inc. Remote processing of multiple acoustic signals
US8417185B2 (en) 2005-12-16 2013-04-09 Vocollect, Inc. Wireless headset and method for robust voice data communication
US7885419B2 (en) 2006-02-06 2011-02-08 Vocollect, Inc. Headset terminal with speech functionality
US7773767B2 (en) * 2006-02-06 2010-08-10 Vocollect, Inc. Headset terminal with rear stability strap
WO2007127182A2 (en) * 2006-04-25 2007-11-08 Incel Vision Inc. Noise reduction system and method
US7764798B1 (en) * 2006-07-21 2010-07-27 Cingular Wireless Ii, Llc Radio frequency interference reduction in connection with mobile phones
US8331430B2 (en) * 2006-08-02 2012-12-11 Broadcom Corporation Channel diagnostic systems and methods
EP2070391B1 (en) * 2006-09-14 2010-11-03 LG Electronics Inc. Dialogue enhancement techniques
EP1933303B1 (en) * 2006-12-14 2008-08-06 Harman/Becker Automotive Systems GmbH Speech dialog control based on signal pre-processing
US20080181392A1 (en) * 2007-01-31 2008-07-31 Mohammad Reza Zad-Issa Echo cancellation and noise suppression calibration in telephony devices
US20080274705A1 (en) * 2007-05-02 2008-11-06 Mohammad Reza Zad-Issa Automatic tuning of telephony devices
US8767975B2 (en) * 2007-06-21 2014-07-01 Bose Corporation Sound discrimination method and apparatus
US8428661B2 (en) * 2007-10-30 2013-04-23 Broadcom Corporation Speech intelligibility in telephones with multiple microphones
KR101459317B1 (en) 2007-11-30 2014-11-07 삼성전자주식회사 Method and apparatus for calibrating the sound source signal acquired through the microphone array
WO2009076523A1 (en) 2007-12-11 2009-06-18 Andrea Electronics Corporation Adaptive filtering in a sensor array system
US9392360B2 (en) 2007-12-11 2016-07-12 Andrea Electronics Corporation Steerable sensor array system with video input
USD626949S1 (en) 2008-02-20 2010-11-09 Vocollect Healthcare Systems, Inc. Body-worn mobile device
US8611554B2 (en) * 2008-04-22 2013-12-17 Bose Corporation Hearing assistance apparatus
US8542843B2 (en) * 2008-04-25 2013-09-24 Andrea Electronics Corporation Headset with integrated stereo array microphone
US8498425B2 (en) * 2008-08-13 2013-07-30 Onvocal Inc Wearable headset with self-contained vocal feedback and vocal command
USD605629S1 (en) 2008-09-29 2009-12-08 Vocollect, Inc. Headset
US8386261B2 (en) 2008-11-14 2013-02-26 Vocollect Healthcare Systems, Inc. Training/coaching system for a voice-enabled work environment
US8160287B2 (en) 2009-05-22 2012-04-17 Vocollect, Inc. Headset with adjustable headband
US8582699B2 (en) * 2009-07-30 2013-11-12 Texas Instruments Incorporated Maintaining ADC input magnitude from digital par and peak value
US8842848B2 (en) * 2009-09-18 2014-09-23 Aliphcom Multi-modal audio system with automatic usage mode detection and configuration capability
US8438659B2 (en) 2009-11-05 2013-05-07 Vocollect, Inc. Portable computing device and headset interface
TWI429225B (en) * 2009-11-16 2014-03-01 Mstar Semiconductor Inc Target signal determination method and associated apparatus
US8565446B1 (en) 2010-01-12 2013-10-22 Acoustic Technologies, Inc. Estimating direction of arrival from plural microphones
US8725506B2 (en) * 2010-06-30 2014-05-13 Intel Corporation Speech audio processing
US8659397B2 (en) 2010-07-22 2014-02-25 Vocollect, Inc. Method and system for correctly identifying specific RFID tags
USD643400S1 (en) 2010-08-19 2011-08-16 Vocollect Healthcare Systems, Inc. Body-worn mobile device
USD643013S1 (en) 2010-08-20 2011-08-09 Vocollect Healthcare Systems, Inc. Body-worn mobile device
US9078077B2 (en) 2010-10-21 2015-07-07 Bose Corporation Estimation of synthetic audio prototypes with frequency-based input signal decomposition
US9685171B1 (en) * 2012-11-20 2017-06-20 Amazon Technologies, Inc. Multiple-stage adaptive filtering of audio signals
EP2877993B1 (en) * 2012-11-21 2016-06-08 Huawei Technologies Co., Ltd. Method and device for reconstructing a target signal from a noisy input signal
US9048942B2 (en) * 2012-11-30 2015-06-02 Mitsubishi Electric Research Laboratories, Inc. Method and system for reducing interference and noise in speech signals
US9589560B1 (en) * 2013-12-19 2017-03-07 Amazon Technologies, Inc. Estimating false rejection rate in a detection system
US10133702B2 (en) * 2015-03-16 2018-11-20 Rockwell Automation Technologies, Inc. System and method for determining sensor margins and/or diagnostic information for a sensor
KR102444061B1 (en) * 2015-11-02 2022-09-16 삼성전자주식회사 Electronic device and method for recognizing voice of speech
US9881630B2 (en) * 2015-12-30 2018-01-30 Google Llc Acoustic keystroke transient canceler for speech communication terminals using a semi-blind adaptive filter model
US10366701B1 (en) * 2016-08-27 2019-07-30 QoSound, Inc. Adaptive multi-microphone beamforming
CN110383700A (en) * 2017-03-10 2019-10-25 英特尔Ip公司 Spuious reduction circuit and device, radio transceiver, mobile terminal, for spuious reduced method and computer program
EP3429222B1 (en) 2017-07-14 2019-08-28 Hand Held Products, Inc. Adjustable microphone headset
CN108564963B (en) * 2018-04-23 2019-10-18 百度在线网络技术(北京)有限公司 Method and apparatus for enhancing voice
RU2716556C1 (en) * 2018-12-19 2020-03-12 Общество с ограниченной ответственностью "ПРОМОБОТ" Method of receiving speech signals
CN110060695A (en) * 2019-04-24 2019-07-26 百度在线网络技术(北京)有限公司 Information interacting method, device, server and computer-readable medium
US20220014280A1 (en) * 2020-06-18 2022-01-13 The Government Of The United States, As Represented By The Secretary Of The Navy Methods, apparatuses, and systems for noise removal
CN111798860B (en) * 2020-07-17 2022-08-23 腾讯科技(深圳)有限公司 Audio signal processing method, device, equipment and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000030264A1 (en) 1998-11-13 2000-05-25 Bitwave Private Limited Signal processing apparatus and method

Family Cites Families (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4025721A (en) 1976-05-04 1977-05-24 Biocommunications Research Corporation Method of and means for adaptively filtering near-stationary noise from speech
SE428167B (en) 1981-04-16 1983-06-06 Mangold Stephan PROGRAMMABLE SIGNAL TREATMENT DEVICE, MAINLY INTENDED FOR PERSONS WITH DISABILITY
US4589137A (en) 1985-01-03 1986-05-13 The United States Of America As Represented By The Secretary Of The Navy Electronic noise-reducing system
US4630305A (en) 1985-07-01 1986-12-16 Motorola, Inc. Automatic gain selector for a noise suppression system
US4630304A (en) 1985-07-01 1986-12-16 Motorola, Inc. Automatic background noise estimator for a noise suppression system
US4628529A (en) 1985-07-01 1986-12-09 Motorola, Inc. Noise suppression system
US4931977A (en) 1987-10-30 1990-06-05 Canadian Marconi Company Vectorial adaptive filtering apparatus with convergence rate independent of signal parameters
US4887299A (en) 1987-11-12 1989-12-12 Nicolet Instrument Corporation Adaptive, programmable signal processing hearing aid
US5225836A (en) 1988-03-23 1993-07-06 Central Institute For The Deaf Electronic filters, repeated signal charge conversion apparatus, hearing aids and methods
US5027410A (en) 1988-11-10 1991-06-25 Wisconsin Alumni Research Foundation Adaptive, programmable signal processing and filtering for hearing aids
US4956867A (en) 1989-04-20 1990-09-11 Massachusetts Institute Of Technology Adaptive beamforming for noise reduction
US5224170A (en) 1991-04-15 1993-06-29 Hewlett-Packard Company Time domain compensation for transducer mismatch
DE4121356C2 (en) * 1991-06-28 1995-01-19 Siemens Ag Method and device for separating a signal mixture
JP3279612B2 (en) 1991-12-06 2002-04-30 ソニー株式会社 Noise reduction device
US5412735A (en) 1992-02-27 1995-05-02 Central Institute For The Deaf Adaptive noise reduction circuit for a sound reproduction system
US5680467A (en) 1992-03-31 1997-10-21 Gn Danavox A/S Hearing aid compensating for acoustic feedback
JPH05316587A (en) 1992-05-08 1993-11-26 Sony Corp Microphone device
US5251263A (en) * 1992-05-22 1993-10-05 Andrea Electronics Corporation Adaptive noise cancellation and speech enhancement system and apparatus therefor
US5402496A (en) 1992-07-13 1995-03-28 Minnesota Mining And Manufacturing Company Auditory prosthesis, noise suppression apparatus and feedback suppression apparatus having focused adaptive filtering
US5737430A (en) 1993-07-22 1998-04-07 Cardinal Sound Labs, Inc. Directional hearing aid
DE4330143A1 (en) 1993-09-07 1995-03-16 Philips Patentverwaltung Arrangement for signal processing of acoustic input signals
US5687239A (en) * 1993-10-04 1997-11-11 Sony Corporation Audio reproduction apparatus
US5557682A (en) 1994-07-12 1996-09-17 Digisonix Multi-filter-set active adaptive control system
US5627799A (en) 1994-09-01 1997-05-06 Nec Corporation Beamformer using coefficient restrained adaptive filters for detecting interference signals
JP2758846B2 (en) 1995-02-27 1998-05-28 埼玉日本電気株式会社 Noise canceller device
US5764778A (en) * 1995-06-07 1998-06-09 Sensimetrics Corporation Hearing aid headset having an array of microphones
US5835608A (en) 1995-07-10 1998-11-10 Applied Acoustic Research Signal separating system
US6002776A (en) 1995-09-18 1999-12-14 Interval Research Corporation Directional acoustic signal processor and method therefor
US5694474A (en) 1995-09-18 1997-12-02 Interval Research Corporation Adaptive filter for signal processing and method therefor
US6072884A (en) 1997-11-18 2000-06-06 Audiologic Hearing Systems Lp Feedback cancellation apparatus and methods
US6127973A (en) * 1996-04-18 2000-10-03 Korea Telecom Freetel Co., Ltd. Signal processing apparatus and method for reducing the effects of interference and noise in wireless communication systems
US5793875A (en) 1996-04-22 1998-08-11 Cardinal Sound Labs, Inc. Directional hearing system
US5825898A (en) 1996-06-27 1998-10-20 Lamar Signal Processing Ltd. System and method for adaptive interference cancelling
US6097771A (en) * 1996-07-01 2000-08-01 Lucent Technologies Inc. Wireless communications system having a layered space-time architecture employing multi-element antennas
DE19635229C2 (en) 1996-08-30 2001-04-26 Siemens Audiologische Technik Direction sensitive hearing aid
US5991418A (en) 1996-12-17 1999-11-23 Texas Instruments Incorporated Off-line path modeling circuitry and method for off-line feedback path modeling and off-line secondary path modeling
AUPO714197A0 (en) 1997-06-02 1997-06-26 University Of Melbourne, The Multi-strategy array processor
JPH1183612A (en) 1997-09-10 1999-03-26 Mitsubishi Heavy Ind Ltd Noise measuring apparatus of moving body
US6098040A (en) * 1997-11-07 2000-08-01 Nortel Networks Corporation Method and apparatus for providing an improved feature set in speech recognition by performing noise cancellation and background masking
US6091813A (en) 1998-06-23 2000-07-18 Noise Cancellation Technologies, Inc. Acoustic echo canceller
US6049607A (en) 1998-09-18 2000-04-11 Lamar Signal Processing Interference canceling method and apparatus

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000030264A1 (en) 1998-11-13 2000-05-25 Bitwave Private Limited Signal processing apparatus and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DIRK VAN COMPERNOLLE: "Switching Adaptive Filters for Enhancing Noisy and Reverberant Speech From Microphone Array Recordings", PROC. IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, 3 April 1990 (1990-04-03), pages 833 - 836
OSAMU HOSHUYAMA: "A Robust Adaptive Beamformer for Microphone Arrays with a Blocking Matrix Using Constrained Adaptive Filters", IEEE TRANSACTIONS ON SIGNAL PROCESSING, October 1999 (1999-10-01)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6999541B1 (en) 1998-11-13 2006-02-14 Bitwave Pte Ltd. Signal processing apparatus and method
US7289586B2 (en) 1998-11-13 2007-10-30 Bitwave Pte Ltd. Signal processing apparatus and method
EP1617419A2 (en) * 2004-07-15 2006-01-18 Bitwave Private Limited Signal processing apparatus and method for reducing noise and interference in speech communication and speech recognition
US7426464B2 (en) * 2004-07-15 2008-09-16 Bitwave Pte Ltd. Signal processing apparatus and method for reducing noise and interference in speech communication and speech recognition
EP1617419A3 (en) * 2004-07-15 2008-09-24 Bitwave Private Limited Signal processing apparatus and method for reducing noise and interference in speech communication and speech recognition
EP1729492A2 (en) 2005-05-31 2006-12-06 Bitwave PTE Ltd. System and apparatus for wireless communication with acoustic echo control and noise cancellation
EP2129168A1 (en) * 2008-05-28 2009-12-02 Yat Yiu Cheung Microphone neck supporting member for hearing aid
CN102142259A (en) * 2010-01-28 2011-08-03 三星电子株式会社 Signal separation system and method for automatically selecting threshold to separate sound source
CN102142259B (en) * 2010-01-28 2015-07-15 三星电子株式会社 Signal separation system and method for automatically selecting threshold to separate sound source

Also Published As

Publication number Publication date
US20040193411A1 (en) 2004-09-30
EP1425738A2 (en) 2004-06-09
US7346175B2 (en) 2008-03-18
WO2003036614A3 (en) 2004-03-18
AU2002363054A1 (en) 2003-05-06

Similar Documents

Publication Publication Date Title
WO2003036614A2 (en) System and apparatus for speech communication and speech recognition
US10319392B2 (en) Headset having a microphone
US7426464B2 (en) Signal processing apparatus and method for reducing noise and interference in speech communication and speech recognition
EP1743323B1 (en) Adaptive beamformer, sidelobe canceller, handsfree speech communication device
US7174022B1 (en) Small array microphone for beam-forming and noise suppression
KR101449433B1 (en) Noise cancelling method and apparatus from the sound signal through the microphone
US8112272B2 (en) Sound source separation device, speech recognition device, mobile telephone, sound source separation method, and program
US7164620B2 (en) Array device and mobile terminal
US7289586B2 (en) Signal processing apparatus and method
KR100480404B1 (en) Methods and apparatus for measuring signal level and delay at multiple sensors
US7983907B2 (en) Headset for separation of speech signals in a noisy environment
US7206418B2 (en) Noise suppression for a wireless communication device
US20060147063A1 (en) Echo cancellation in telephones with multiple microphones
US20070230712A1 (en) Telephony Device with Improved Noise Suppression
CN110249637B (en) Audio capture apparatus and method using beamforming
CN110140359B (en) Audio capture using beamforming
US20140355775A1 (en) Wired and wireless microphone arrays
US10297245B1 (en) Wind noise reduction with beamforming
JP2019036917A (en) Parameter control equipment, method and program
Schwab et al. 3D Audio Capture and Analysis

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG US UZ VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LU MC NL PT SE SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 10487229

Country of ref document: US

WWE Wipo information: entry into national phase

Ref document number: 2002802082

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 2002802082

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP