CA1285071C - Voice coding process and device for implementing said process - Google Patents

Voice coding process and device for implementing said process

Info

Publication number
CA1285071C
CA1285071C CA000535921A
Authority
CA
Canada
Prior art keywords
signal
derive
band
sensitive
high frequency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CA000535921A
Other languages
French (fr)
Inventor
Claude Galand
Jean Menez
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Application granted granted Critical
Publication of CA1285071C publication Critical patent/CA1285071C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/06Determination or coding of the spectral characteristics, e.g. of the short-term prediction coefficients

Abstract

ABSTRACT

The voice signal is analyzed to derive therefrom a low frequency base band signal, linear prediction coefficients and HF descriptors. Said HF descriptors include HF energy indications as well as indications relative to the phase shift between the low frequency and the high frequency band. Said HF descriptors are used during the voice synthesis operation to provide an in-phase HF bandwidth component to be added to the base band prior to being used to drive a linear prediction synthesis filter tuned with said linear prediction parameters.
Fig. 2

Description

IMPROVED VOICE CODING PROCESS AND DEVICE FOR IMPLEMENTING SAID PROCESS.

TECHNICAL FIELD

This invention deals with voice coding and more particularly with a method and system for improving said coding when performed using base-band (or residual) coding techniques.

BACKGROUND OF INVENTION

Base-band or residual coding techniques involve processing the original signal to derive therefrom a low frequency bandwidth signal and a few parameters characterizing the high frequency bandwidth signal components. These low and high frequency components are then coded separately. At the other end of the process, the original voice signal is obtained by adequately recombining the coded data. The first set of operations is generally referred to as analysis, as opposed to synthesis for the recombining operations.
Obviously, any processing involving coding and decoding degrades the voice signal and is said to generate noise. This invention, further described with reference to one example of base-band coding technique, namely Residual-Excited Linear Prediction (RELP) vocoding, but valid for any base-band coding technique, is made to substantially lower said noise.

RELP analysis is made to generate, besides the low frequency bandwidth signal, parameters relating to the high frequency bandwidth energy contents and to the original voice signal spectral characteristics.



RELP methods enable reproducing the speech signal with communications quality at rates as low as 7.2 kbps. For example, such a coder has been described in a paper by D.Esteban, C.Galand, J.Menez, and D.Mauduit, at the 1978 ICASSP in Tulsa: '7.2/9.6 kbps Voice Excited Predictive Coder'. However, at this rate, some roughness remains in some synthesized speech segments, due to a non-ideal regeneration of the high-frequency signal. Indeed, this regeneration is implemented by a straight non-linear distortion of the analysis generated base-band signal, which spreads the harmonic structure over the high-frequency band. As a result, only the amplitude spectrum of the high-frequency part of the signal is well regenerated, while the phase spectrum of the reconstructed signal does not match the phase spectrum of the original signal. Although this mismatching is not critical in stationary portions of speech, like sustained vowels, it may produce audible distortions in transient portions of speech, like consonants.

It is an object of this invention to provide means for enabling in-phase regeneration of the HF bandwidth contents.

The foregoing and other objects, features and advantages of the invention will be made apparent from the following more particular description of the preferred embodiments of the invention as illustrated in the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS.

Figure 1 represents the general block diagram of a RELP
vocoder.

Figure 2 represents the general block diagram of the proposed improved process applied to a RELP vocoder.

Figure 3 shows typical signal wave-forms obtained with the proposed process.

Fig.3a: speech signal
Fig.3b: residual signal
Fig.3c: base-band signal x(n)
Fig.3d: high-band signal y(n)
Fig.3e: high-band signal synthesized by conventional RELP
Fig.3f: pulse train u(n)
Fig.3g: cleaned base-band pulse train z(n)
Fig.3h: windowing signal w(n)
Fig.3i: windowed high-band signal y''(n)
Fig.3j: high-band signal s(n) synthesized by the proposed method

Figure 4 represents a detailed block diagram of the proposed pulse/noise analysis of the upper-band signal.

Figure 5 represents a detailed block diagram of the proposed pulse/noise synthesis of the upper-band signal.

Figure 6 represents the block diagram of a preferred embodiment of the base-band pre-processing building block of Fig. 4 and Fig.5.


Figure 7 represents the block diagram of a preferred embodiment of the phase evaluation building block appearing in Fig. 4.

Figure 8 represents the block diagram of a preferred embodiment of the upper-band analysis building block appearing in Fig. 4.

Figure 9 represents the block diagram of a preferred embodiment of the upper-band synthesis building block appearing in Fig.5.

Figure 10 represents the block diagram of the base-band pulse train cleaning device (9).

Figure 11 represents the block diagram of the windowing device (11).

SUMMARY OF THE INVENTION.

A voice coding process wherein the original voice signal is analyzed to derive therefrom a low frequency bandwidth signal and parameters characterizing the high frequency bandwidth components of said voice signal, said parameters including energy indications about said high frequency bandwidth signal, said voice coding process being further characterized in that said analysis is made to provide additional parameters including information relative to the phase shift between low and high frequency bandwidth contents, whereby said voice signal may be synthesized with in-phase high and low frequency bandwidth contents.

DESCRIPTION OF A PREFERRED EMBODIMENT.

The following description will be made with reference to a residual-excited linear prediction vocoder (RELP), an example of which has been described both at the ICASSP conference cited above and in European Patent 0002998, which deals more particularly with a specific kind of RELP coding, i.e. Voice Excited Predictive Coding (VEPC). Figure 1 represents the general block diagram of such a conventional RELP vocoder including both devices, i.e. an analyzer and a synthesizer. In the analyzer the input speech signal is processed to derive therefrom the following set of speech descriptors:

(I) the spectral descriptors represented by a set of linear prediction parameters (see LP Analysis in Fig.1).

(II) the base-band signal obtained by band limiting (300-1000 Hz) and subsequently sub-sampling the residual (or excitation) signal resulting from the inverse filtering of the speech signal by its predictor (see BB Extraction in Fig.1), or by a conventional low frequency filtering operation.

(III) the energy of the upper band (or High-Frequency band) signal (1000 to 3400 Hz) which has been removed from the excitation signal by low-pass filtering (see HF Extraction and Energy Computation).

These speech descriptors are quantized and multiplexed to generate the coded speech data to be provided to the speech synthesizer whenever the speech signal needs to be reconstructed.

The synthesizer is made to perform the following operations:
- decoding and up-sampling to 8 kHz the Base-Band signal (see BB Decode in Fig.1);
- generating a high frequency signal (1000-3400 Hz) by non-linear distortion, high-pass filtering and energy adjustment of the base-band signal (see Non Linear Distortion, HP Filtering and Energy Adjustment), as sketched below;
- exciting an all-pole prediction filter corresponding to the vocal tract by the sum of the base-band signal and of the high-frequency signal.
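The conventional regeneration is controlled by the transmitted energy only; nothing constrains the phase of the regenerated band. As an illustration, a minimal Python sketch of such a conventional regeneration path, assuming a full-wave rectifier as the non-linear distortion and a fourth-order Butterworth high-pass filter (both choices are illustrative and not taken from the patent), could look as follows:

```python
# Illustrative sketch of the conventional RELP high-frequency regeneration:
# non-linear distortion of the decoded base band, high-pass filtering and
# energy adjustment to the transmitted upper-band energy.
import numpy as np
from scipy.signal import butter, lfilter

def conventional_hf_regeneration(base_band, fs=8000.0, hf_energy=1.0):
    # Non-linear distortion (full-wave rectification) spreads the harmonic
    # structure of the base band over the upper band.
    distorted = np.abs(base_band)
    # High-pass filtering keeps only the regenerated upper band (above ~1000 Hz).
    b, a = butter(4, 1000.0 / (fs / 2.0), btype="highpass")
    hf = lfilter(b, a, distorted)
    # Energy adjustment: scale the block to the transmitted HF energy.
    e = np.sum(hf ** 2)
    if e > 0.0:
        hf *= np.sqrt(hf_energy / e)
    return hf
```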

Figure 2 represents a block diagram of a RELP analyzer/synthesizer incorporating the invention. Some of the elements of a conventional RELP device have been kept unchanged. They have been given the same references or names as already used in connection with the device of figure 1.

In the analyzer the input speech is still processed to derive therefrom a set of coefficients (I) and a Base-Band BB (II). These data (I) and (II) are separately coded. But the third set of speech descriptors (III), derived through analysis of the high and low frequency bandwidth contents, differs from descriptor (III) of a conventional RELP as represented in figure 1. These new descriptors might be generated using different methods and vary a little from one method to another. They will however all include data characterizing to a certain extent the energy contained in the upper (HF) band as well as the phase relation (phase shift) between high and low bandwidth contents. In the preferred embodiment of figure 2 these new descriptors have been designated by K, A and E, respectively standing for phase, amplitude and energy. They will be used during the speech synthesis operations to synthesize the speech upper band contents.

A better understanding of the proposed new process, and more particularly of the significance of the considered parameters or speech descriptors, will be made easier with the help of figure 3 showing typical waveforms. For further details on these RELP coding techniques one may refer to the above mentioned references.


As already mentioned, some roughness still remains in the synthesized signal when processed as indicated above. The present invention enables avoiding said roughness by representing the high frequency signal in a more sophisticated way.

The advantage of the proposed method over the conventional method lies in the representation of the high-frequency signal by a pulse/noise model. The principle of the proposed method will be explained with the help of Fig.3 which shows typical wave-forms of a speech segment (Fig.3a) and the corresponding residual (Fig.3b), base-band (Fig.3c), and high-frequency (or upper-band) (Fig.3d) signals.

The problem faced with RELP vocoders is to derive at the receiver end (synthesizer) a synthetic high-frequency signal from the transmitted base-band signal. As recalled above, the classical way to reach this objective is to capitalize on the harmonic structure of the speech by making a non-linear distortion of the base-band signal followed by a high-pass filtering and a level adjustment according to the transmitted energy. The signal obtained through these operations in the example of figure 3 is shown on Fig.3e. The comparison of this signal with the original one (Fig.3d) shows in this example that the synthetic high-frequency signal exhibits some amplitude overshoots which furthermore result in clearly audible distortions in the reconstructed speech signal. Since both signals have very close amplitude spectra, the difference must come from the lack of phase spectra matching between both signals. The process proposed here makes use of a time domain modeling of the high-frequency signal, which allows reconstructing both amplitude and phase spectra more precisely than with the classical process. A careful comparison of the high-frequency (Fig.3d) and base-band signals (Fig.3c) reveals that although the high-frequency signal does not contain the fundamental frequency, it looks as if it contained it.



In other words, both the high-frequency and the base-band signals exhibit the same quasi-periodicity. Furthermore, most of the significant samples of the high-frequency signal are concentrated within this periodicity. So, the basic idea behind the proposed method is twofold: it first consists in coding only the most significant samples within each period of the high-frequency signal; then, since these samples are periodically concentrated at the pitch period which is carried by the base-band signal, in transmitting only these samples to the receiving end (synthesizer) and locating their positions with reference to the received base-band signal. The only information required for this task is the phase between the base-band and the high-frequency signals. This phase, which can be characterized by the delay between the pitch pulses of the base-band signal and the pitch pulses of the high-band signal, must be determined at the analysis and transmitted. So as to illustrate the proposed method, the next section describes a preferred embodiment of the Pulse/Noise Analysis (illustrated by Figure 4) and Synthesis (illustrated by Figure 5) means made to improve a VEPC coder according to the present invention. In the following, x(nT), or simply x(n), will denote the nth sample of the signal x(t) sampled at the frequency 1/T. Also it should be noted that the voice signal is processed by blocks of N consecutive samples as performed in the above cited reference, using BCPCM techniques.

Fig.4 shows a detailed block diagram of the pulse/noise analyzer in which the base-band signal x(n) and high-band signal y(n) are processed so as to determine, for each block of N samples of the speech signal, a set of enhanced high-frequency (HF) descriptors which are coded and transmitted:
- the phase K between the base-band signal and the high-frequency signal,
- the amplitudes A(i) of the significant pulses of the high-frequency signal,
- the energy E of the noise component of the high-frequency signal.
The derivation of these HF descriptors is implemented as follows.

The first processing task consists in the evaluation, in device (1) of figure 4, of the phase delay K between the base-band signal and the high-frequency signal. This is performed by computation of the cross-correlation between the base-band signal and the high-frequency signal. Then a peak picking of the cross-correlation function gives the phase delay K. Fig.7 shows a detailed block diagram of the phase evaluation device (1). In fact, the cross-correlation peak can be much sharpened by pre-processing both signals prior to the computation of the cross-correlation. The base-band signal x(n) is pre-processed in device (2) of figure 4, so as to derive the signal z(n) (see 3g in Figure 3) which ideally consists of a pulse train at the pitch frequency, with pulses located at the time positions corresponding to the extrema of the base-band signal x(n).

The pre-processing device (2) is shown in detail on Fig.6. A first evaluation of the pulse train is achieved in device (8) implementing the non-linear operation:

(1) c'(n) = sign (x(n) - x(n-1))
    c(n) = sign (c'(n) - c'(n-1))
(2) u(n) = c(n).x(n) if c(n) > 0
    u(n) = 0 if c(n) <= 0

for n = 1, ..., N, and where the values x(-1) and x(-2) needed in relation (1) for n = 1 and n = 2 correspond respectively to the x(N) and x(N-1) values of the previous block, which is memorized from one block to the next one. For reference, Fig.3f represents the signal u(n) obtained in our example.
The output pulse train is then modulated by the base-band signal x(n) to give the base-band pulse train v(n):

(3) v(n) = u(n).x(n)
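A minimal Python sketch of relations (1) to (3), assuming 0-based numpy indexing and using the last two samples of the previous block as the required block memory (the exact boundary handling is an assumption), is given below:

```python
import numpy as np

def baseband_pulse_train(x, x_prev_tail=(0.0, 0.0)):
    # x: current block of N base-band samples; x_prev_tail: the two samples
    # memorized from the previous block (block-to-block memory).
    x = np.asarray(x, dtype=float)
    xe = np.concatenate((np.asarray(x_prev_tail, dtype=float), x))
    c_prime = np.sign(np.diff(xe))      # (1) c'(n) = sign(x(n) - x(n-1))
    c = np.sign(np.diff(c_prime))       #     c(n)  = sign(c'(n) - c'(n-1))
    u = np.where(c > 0, c * x, 0.0)     # (2) keep the samples flagged by c(n) > 0
    v = u * x                           # (3) v(n) = u(n).x(n)
    return u, v
```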

The base-band pulse train v(n) contains pulses both at the fundamental frequency and at harmonic frequencies. Only fundamental pulses are retained in the cleaning device (9). For that purpose, another input to device (9) is an estimated value M of the periodicity of the input signal, obtained by using any conventional pitch detection algorithm implemented in device (10). For example, one can use a pitch detector as described in the paper entitled 'Real-Time Digital Pitch Detector' by J.J. Dubnowski, R.W. Schafer, and L.R. Rabiner in the IEEE Transactions on ASSP, Vol. ASSP-24, No.1, Feb 1976, pp.2-8.
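For illustration only, a generic autocorrelation-based pitch estimate over the 60-400 Hz range quoted further below could stand in for device (10); this is a hedged sketch, not the cited Dubnowski-Schafer-Rabiner detector:

```python
import numpy as np

def estimate_pitch_period(x, fs=8000.0, f_min=60.0, f_max=400.0):
    # Generic autocorrelation pitch estimate (illustrative stand-in for device 10).
    x = np.asarray(x, dtype=float) - np.mean(x)
    lag_min, lag_max = int(fs / f_max), int(fs / f_min)
    corr = np.array([np.dot(x[:-lag], x[lag:]) for lag in range(lag_min, lag_max + 1)])
    return lag_min + int(np.argmax(corr))   # estimated pitch period M, in samples
```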

Referring to Fig.6, the base-band pulse train v(n) is processed by the cleaning device (9) according to the following algorithm, depicted in Fig.10. The sequence v(n), (n=1,...,N), is first scanned so as to determine the positions and respective amplitudes of its non-null samples (or pulses). This information is stored in two buffers pos(i) and amp(i) with i=1,...,NP, where NP represents the number of non-null pulses. Each non-null value is then analyzed with reference to its neighbor. If their distance, obtained by subtracting their positions, is greater than a prefixed portion of the pitch period M (we took 2M/3 in our implementation), the next value is analyzed. In the other case, the amplitudes of the two values are compared and the lowest is eliminated. Then, the entire process is re-iterated with a lower number of pulses (NP-1), and so on until the cleaned base-band pulse train z(n) comprises remaining pulses spaced by more than the pre-fixed portion of M. The number of these pulses is now denoted NP0. Assuming a block of samples corresponding to a voiced segment of speech, the number of pulses is generally low. For example, assuming a block length of 20 ms, and given that the pitch frequency is always comprised between 60 Hz for male speakers and 400 Hz for female speakers, the number NP0 will range from 1 to 8. For unvoiced signals however, the estimated value of M may be such that the number of pulses becomes greater than 8. In this case, it is limited by retaining the first 8 pulses found. This limitation does not affect the proposed method since in unvoiced speech segments, the high-band signal does not exhibit significant pulses but only noisy signals. So, as described below, the noise component of our pulse/noise model is sufficient to ensure a good representation of the signal.
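A minimal sketch of this cleaning step, assuming 0-based indexing, the 2M/3 spacing quoted above and the 8-pulse limit:

```python
import numpy as np

def clean_pulse_train(v, M, min_spacing_ratio=2.0 / 3.0, max_pulses=8):
    # Prune pulses of v(n) closer than a fixed fraction of the pitch period M,
    # keeping the larger of each conflicting pair, until only pitch-spaced
    # pulses remain; limit the result to max_pulses pulses (unvoiced blocks).
    v = np.asarray(v, dtype=float)
    pos = np.flatnonzero(v)                 # positions of the non-null pulses
    amp = v[pos]                            # and their amplitudes
    min_spacing = min_spacing_ratio * M
    changed = True
    while changed and len(pos) > 1:
        changed = False
        for i in range(len(pos) - 1):
            if pos[i + 1] - pos[i] <= min_spacing:
                drop = i if abs(amp[i]) < abs(amp[i + 1]) else i + 1
                pos, amp = np.delete(pos, drop), np.delete(amp, drop)
                changed = True
                break
    pos, amp = pos[:max_pulses], amp[:max_pulses]
    z = np.zeros_like(v)
    z[pos] = amp                            # cleaned base-band pulse train z(n)
    return z, pos, amp                      # NP0 = len(pos)
```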

For reference purposes, the signal z(n) obtained in our example is shown on Fig.3g.

Coming back to the detailed block diagram of the phase evaluation device (1) shown on Fig.7, the upper band signal y(n) is pre-processed by a conventional center clipping device (5). For example, such a device is described in detail in the paper 'New methods of pitch extraction' by M.M. Sondhi, in IEEE Trans. Audio Electroacoustics, vol. AU-16, pp.262-266, June 1968.

The output signal y'(n) of this device is determined according to:
(4) y'(n) = y(n) if y(n) > a.Ymax
    y'(n) = 0 if y(n) <= a.Ymax

where

(5) Ymax = Max y(n), n = 1, ..., N

Ymax represents the peak value of the signal over the considered block and is computed in device (5). 'a' is a constant that we took equal to 0.8 in our implementation.
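A minimal sketch of relations (4) and (5), with a = 0.8 as in the implementation described above:

```python
import numpy as np

def center_clip(y, a=0.8):
    # (4)-(5): keep only the upper-band samples above a fraction 'a' of the
    # block peak value Ymax; all other samples are set to zero.
    y = np.asarray(y, dtype=float)
    y_max = y.max()                          # (5) Ymax over the considered block
    return np.where(y > a * y_max, y, 0.0)   # (4) y'(n)
```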



Then, the cross-correlation function R(k) between the pre-processed high-band signal y'(n) and the base-band pulse train z(n) is computed according to:

(6) R(k) = Σ y'(n).z(n+k), n = 1, ..., N-k; k = 0, ..., M

The lag K of the extremum R(K) of the R(k) function is then searched in device (7) and represents the phase shift between the base-band and the high-band:

(7) R(K) = Max R(k), k = 1, ..., M
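A minimal sketch of relations (6) and (7), assuming that y'(n) and z(n) are blocks of the same length N:

```python
import numpy as np

def phase_evaluation(y_clipped, z, M):
    # (6) cross-correlate the center-clipped upper band with the cleaned
    #     base-band pulse train over lags k = 0..M;
    # (7) the lag of the maximum is the phase shift K.
    N = len(z)
    R = np.array([np.dot(y_clipped[: N - k], z[k:N]) for k in range(M + 1)])
    K = int(np.argmax(R))
    return K, R
```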

Now referring back to the general block diagram of the proposed analyzer shown on Fig.4, the base-band pulse train is shifted by a delay equal to the previously determined phase K in the phase shifter circuit (3). This circuit contains a delay line with a selectable delay equal to phase K. The output of the circuit is the shifted base-band pulse train z(n-K).

Both the high-band y(n) and the shifted base-band pulse train z(n-K) are then forwarded to the upper-band analysis device (4), which derives the amplitudes A(i) (i=1,...,NP0) of the pulses and the energy E of the noise used in the pulse/noise modeling.

Fig.8 shows a detailed block diagram of device (4). The shifted base-band pulse train z(n-K) is processed in device (11) so as to derive a rectangular time window w(n-K) with windows of width M/2 centered on the pulses of the base-band pulse train.




The upper-band signal y(n) is then modulated by the windowing signal w(n-K):

(8) y''(n) = y(n).w(n-K)

For reference, Fig.3i shows the modulated signal y''(n) obtained in our example. This signal contains the significant samples of the high-frequency band located at the pitch frequency, and is forwarded to device (12) which actually implements the pulse modeling as follows. For each of the NP0 windows, the peak value of the signal is searched:

(9) Amax(i) = Max y''(i,n), n = -M/4, ..., M/4
(10) Amin(i) = Min y''(i,n), n = -M/4, ..., M/4

where y''(i,n) represents the samples of the signal y''(n) within the ith window, and n represents the time index of the samples within each window, with reference to the center of the window.

(11) A(i) = (Amax(i) + Amin(i)) / 2

The global energy Ep of the pulses is computed according to:

(12) Ep = Σ A²(i), i = 1, ..., NP0
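A minimal sketch of relations (8) to (12), assuming 0-based indexing; the half-sum form used for relation (11) follows the reconstruction above and is itself an assumption where the source is illegible:

```python
import numpy as np

def pulse_modeling(y, pulse_positions, M):
    # Rectangular windows of width M/2 are centered on the shifted base-band
    # pulses; the upper band y(n) is windowed and one amplitude A(i) is taken
    # per window, together with the global pulse energy Ep.
    y = np.asarray(y, dtype=float)
    N = len(y)
    half = int(M) // 4                       # windows span +/- M/4 around each pulse
    A = []
    for p in pulse_positions:                # positions of the z(n-K) pulses
        lo, hi = max(0, p - half), min(N, p + half + 1)
        seg = y[lo:hi]                       # (8) y''(i,n): windowed upper band
        a_max, a_min = seg.max(), seg.min()  # (9)-(10)
        A.append(0.5 * (a_max + a_min))      # (11), as reconstructed above
    A = np.asarray(A)
    Ep = float(np.sum(A ** 2))               # (12) global pulse energy
    return A, Ep
```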

The energy Ehf of the upper-band signal y(n) is computed over the considered block in device (14) according to:

(13) Ehf = Σ y²(n), n = 1, ..., N

These energies are subtracted in device (13) to give the noise energy descriptor E which will be used to adjust the energy of the remote pulse/noise model:

(14) E = Ehf - Ep

The various coding and decoding operations are respectively performed within the analyzer and synthesizer according to the following principles.

As described in the paper by D.Esteban et al. at the 1978 ICASSP in Tulsa, the base-band signal is encoded with the help of a sub-band coder using an adaptive allocation of the available bit resources. The same algorithm is used at the synthesis part, thus avoiding the transmission of the bit allocation.

The pulse amplitudes A(i), i=1,...,NP0, are encoded by a Block Companded PCM quantizer, as described in a paper by A.Croisier at the 1974 Zurich Seminar: 'Progress in PCM and Delta modulation: block companded coding of speech signals'. The noise energy E is encoded by using a non-uniform quantizer. In our implementation, we used the quantizer described in the above referenced VEPC paper on the Voice Excited Predictive Coder (VEPC).
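For illustration, a minimal sketch of Block Companded PCM quantization of the amplitudes A(i); the 4-bit sample width and the handling of the block characteristic are assumptions, not the allocation of the cited Croisier scheme:

```python
import numpy as np

def bcpcm_quantize(A, sample_bits=4):
    # Block Companded PCM: the block maximum (the "characteristic") serves as a
    # common scale factor, and each amplitude is quantized uniformly after
    # normalization by it.
    A = np.asarray(A, dtype=float)
    characteristic = float(np.max(np.abs(A))) if A.size else 0.0
    if characteristic == 0.0:
        return np.zeros_like(A), 0.0
    levels = 2 ** (sample_bits - 1)
    codes = np.clip(np.round(A / characteristic * levels), -levels, levels - 1)
    return codes / levels * characteristic, characteristic  # dequantized A(i) and scale
```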

The phase K is not encoded, but transmitted with 6 bits. Fig.5 shows a detailed block diagram of the pulse/noise synthesizer.


The synthetic high-frequency signal s(n) is generated using the data provided by the analyzer.

The decoded base-band signal is first pre-processed in device (2) of Fig.5, in the same way as it was processed at the analysis and described with reference to Fig.6, to derive a Base-Band pulse train z(n) therefrom; the K parameter is then used in a phase shifter (3), identical to the one used at the analysis, to generate a replica of the pulse components z(n-K) of the original high-frequency signal.

Finally, the z(n-K) signal, the A(i) parameters, and the E parameter are used to synthesize the upper band according to the pulse/noise model in device (15), as represented in Fig.9.

This high-frequency signal s(n) is then added to the delayed base-band signal to obtain the excitation signal of the predictor filter to be used for performing the LP Synthesis function of Fig.2.
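A minimal sketch of this recombination and LP synthesis step, assuming the usual convention A(z) = 1 - sum(a(k).z**-k) for the decoded prediction coefficients and omitting the delay compensation of device (20):

```python
import numpy as np
from scipy.signal import lfilter

def lp_synthesis(base_band, s_hf, lpc_coeffs):
    # Excitation: decoded base band plus the regenerated high-frequency signal s(n).
    excitation = np.asarray(base_band, dtype=float) + np.asarray(s_hf, dtype=float)
    # All-pole synthesis filter 1/A(z), with A(z) = 1 - sum(a_k z^-k).
    a = np.concatenate(([1.0], -np.asarray(lpc_coeffs, dtype=float)))
    return lfilter([1.0], a, excitation)     # synthesized speech block
```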
Fig.9 shows a detailed block diagram of the upper-band synthesis device (15). The synthetic high-band signal s(n) is obtained as the sum of a pulse signal and of a noise signal. The generation of each of these signals is implemented as follows.

The function of the pulse generator (18) is to create a pulse signal matching the positions and energy characteristics of the most significant samples of the original high-band signal. For that purpose, recall that the pulse train z(n-K) consists of NP0 pulses at the pitch period, located at the same time positions as the most significant samples of the original high-band signal. The shifted base-band pulse train z(n-K) is sent to the pulse generator device (18) where each pulse is replaced by a couple of pulses which is furthermore modulated by the corresponding window amplitude A(i), (i=1,...,NP0).

The noise component is generated as follows. A white noise generator (16) generates a sequence of noise samples e(n) with unitary variance. The energy of this sequence is then adjusted in device (17), according to the transmitted energy E. This adjustment is made by a simple multiplication of each noise sample by E**(1/2):

(15) e'(n) = e(n).E**(1/2)

In addition, the noise generator is reset at each pitch period so as to improve the periodicity of the full high-band signal s(n). This reset is achieved by the shifted pulse train z(n-K).

The pulse and noise signal components are then summed up and filtered by a high-pass filter (19) which removes the 0-1000 Hz band of the upper-band signal s(n). Note on Fig.5 that the delay introduced by the high-pass filter on the high-frequency band is compensated by a delay (20) on the base-band signal. For reference, Fig.3j shows the obtained upper-band signal s(n) in our example.
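A minimal sketch of the pulse/noise synthesis of device (15), assuming a (+A, -A) shape for each couple of pulses and a fourth-order Butterworth high-pass filter; both choices are assumptions, the patent does not fix them here:

```python
import numpy as np
from scipy.signal import butter, lfilter

def upper_band_synthesis(z_shifted, A, E, fs=8000.0, seed=0):
    N = len(z_shifted)
    positions = np.flatnonzero(z_shifted)
    # Pulse component: each pulse of z(n-K) becomes a couple of pulses scaled by A(i).
    pulses = np.zeros(N)
    for i, p in enumerate(positions[: len(A)]):
        pulses[p] += A[i]
        if p + 1 < N:
            pulses[p + 1] -= A[i]               # assumed (+A, -A) couple shape
    # Noise component: unit-variance white noise scaled by E**(1/2) (relation 15),
    # with the generator restarted at each pitch pulse to reinforce periodicity.
    noise = np.zeros(N)
    boundaries = np.concatenate((positions, [N])).astype(int)
    start = 0
    for end in boundaries:
        if end > start:
            rng = np.random.default_rng(seed)   # reset at each pitch period
            noise[start:end] = rng.standard_normal(end - start)
        start = end
    noise *= np.sqrt(max(float(E), 0.0))        # (15) e'(n) = e(n).E**(1/2)
    # High-pass filtering removes the 0-1000 Hz band from the synthetic upper band.
    b, a = butter(4, 1000.0 / (fs / 2.0), btype="highpass")
    return lfilter(b, a, pulses + noise)
```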

Although the invention was described with reference to a preferred embodiment, several alternatives may be used by a man skilled in the art without departing from the scope of the invention, bearing in mind that the basis of the method is to reconstruct the high-frequency component of the residual signal in a RELP coder with a correct phase with reference to the low frequency component (base-band). Several alternatives may be used to measure and transmit this phase. Here, the phase K has been measured with respect to the base-band signal itself. This choice allows aligning the regenerated high-frequency signal with the help of only the transmitted phase K. Another implementation could be based on the alignment of the high-frequency signal with respect to the block boundary. This implementation would be simpler but requires the transmission of more information: the phase with respect to the block boundary would require more bits than the phase with respect to the base-band signal.

Note also that instead of re-computing the pitch period M at the synthesis, this period could be transmitted to the receiver. This would save processing resources, at the price of an increase in the transmitted information.

Claims (18)

1. A process for coding voice signals wherein said voice signal is analyzed by being split into a low frequency (LF) bandwidth and a high frequency bandwidth, the signal contents of which are to be coded separately, said process being characterized in that it includes:
- coding said low frequency bandwidth signals;
- processing said high frequency-bandwidth contents to derive therefrom high frequency energy information;
- processing both said low frequency bandwidth and said high frequency bandwidth contents to derive therefrom information relative to the phase shift between said high frequency signal and said low frequency signal;
- coding separately said high frequency energy information and said phase shift information;
whereby said coded voice signal is represented by said coded low frequency signal, said coded high frequency energy information and said coded phase shift information.
2. A process according to claim 1 wherein said voice signal is processed by consecutive segments of signal of predetermined length, said segments being represented by blocks of samples.
3. A process according to claim 2 wherein said processing to derive high frequency bandwidth energy information includes:

- measuring the voice pitch period;

- defining a time window at the pitch rate;
- measuring the high frequency energy within said time window and generating data representing said HF energy within said time window; and
- generating noise energy data for each segment, by subtracting said high frequency energy over said time window from the high frequency energy over the segment.
4. A process according to claim 3 wherein said windowed HF
energy is represented by a predetermined number of samples within the time window.
5. A process for decoding a voice signal coded according to claim 1 using synthesis operations including :
- demultiplexing and decoding said coded data;
- shifting said low frequency bandwidth decoded data using said phase shift information;
- combining said shifted low frequency decoded data with said high frequency energy data to derive therefrom a synthesized upper band signal; and
- adding said low frequency signal and said synthesized band signal.
6. A process for coding voice signals according to any one of claims 1-3 based on Voice Excited Predictive coding techniques wherein said voice signal is also used to derive a linear set of prediction parameters, said parameters being also multiplexed with said coded data.
7. A decoding process according to claim 5 wherein said synthesis operations are made to synthesize a voice signal coded according to claim 6, said decoding process including:
- demultiplexing and decoding said linear parameters;
- using said decoded linear prediction parameters to adjust a synthesis filter fed with the signal provided by said adding operation.
8. A coding process according to claim 4 wherein said samples are limited to peak values through a center clipping operation using a self-adaptive threshold level.
9. A coding process according to claim 8 wherein said threshold is adjusted to eliminate a predetermined percentage of signal samples within the high frequency bandwidth contents.
10. A coding process according to any one of claims 1-3 wherein said low frequency bandwidth signal is coded using split band techniques, with dynamic allocation of quantizing resources throughout the split band contents.
11. A Voice Excited Predictive Coder (VEPC) including first means sensitive to the voice signal for generating spectral descriptors representing linear prediction parameters, second means for generating a low frequency or Base Band signal x(n), and third means for generating high frequency (HF) or upper band signal descriptors, said third means including:
- base band preprocessing means connected to said second means for generating a pitch parameter M and a base band pulse train z(n);

- phase evaluation means connected to said base band preprocessing means and sensitive to said upper band signal to derive therefrom a phase shift descriptor K;

- phase shifter means sensitive to said z(n) pulse train and to said phase shift descriptor K to derive therefrom a shifted pulse train z(n-k);

- upper band analysis means sensitive to said upper band signal, to said shifted pulse train and to said pitch parameter M, to derive therefrom noise energy information E and HF amplitude information A(i); and,
- coding means for coding said phase shift descriptor K, amplitude A(i), noise energy E and base band signal x(n).
12. A VEPC coder according to claim 11 wherein said base band preprocessing means include:

- digital derivative and sign means sensitive to said base-band signal x(n) to derive therefrom a signal u(n) according to the following expressions:

u(n) = c(n).x(n) if c(n) > 0 or u(n) = 0 if c(n) <= 0
with c(n) = sign (c'(n) - c'(n-1)) and c'(n) = sign (x(n) - x(n-1));
- modulating means sensitive to u(n) and x(n) to derive therefrom a signal v(n) = u(n).x(n);

- pitch evaluation means sensitive to said base band signal to derive therefrom the pitch parameter M; and,
- cleaning means sensitive to said v(n) signal and M parameter to derive therefrom a cleaned base band pulse train z(n) containing base band pulses spaced by more than a prefixed portion of M.
13. A VEPC according to claim 11 or 12 wherein said phase evaluation means include:
center clipping means sensitive to said upper band signal y(n) to derive therefrom a clipped signal y'(n), with:
y'(n) = y(n) if y(n) > a.Ymax
or y'(n) = 0 if y(n) <= a.Ymax
where Ymax = Max y(n), n = 1, ..., N,
N being a predetermined block number of samples and "a" a predetermined constant coefficient;
- cross correlation means, sensitive to said y'(n), base band pulse train z(n) and pitch M, to derive therefrom a cross correlation function R(k), with:
R(k) = Σ y'(n).z(n+k), n = 1, ..., N-k,
k = 0, ..., M;
- peak picking means sensitive to said R(k) and pitch M to derive the phase shift K indication through the extremum R(K), with:

R(K) = Max R(k), k = 1, ..., M.
14. A VEPC according to claim 13 wherein said phase shifter is a delay line adjustable to the K value to derive a shifted pulse train z(n-K).
15. A VEPC Coder according to claim 14, wherein said upper band analysis means include:
- windowing means sensitive to said shifted pulse train and to said pitch M to derive therefrom a w(n-K) train;
- modulating means sensitive to said w(n-K) train and to said upper band y(n) to derive a y"(n) train through y"(n) = y(n). w(n-K);
- a pulse modeling means sensitive to said y"(n) to derive A(i) pulse amplitudes through:
A(i) = (Amax(i) + Amin(i)) / 2
with:
Amax(i) = Max y"(i,n), n = -M/4, ..., M/4, and Amin(i) = Min y"(i,n), n = -M/4, ..., M/4,
where y"(i,n) represent the samples of y"(n) within the ith window, and n represents the time index of the samples within each window;

said pulse modeling means also providing the pulse energy Ep = Σ A²(i), i = 1, ..., NP0, where NP0 is the number of pulses within a cleaned base band train per predetermined block of voice samples;
- HF energy means sensitive to y(n) to derive Ehf = Σ y²(n), n = 1, ..., N; and,
- noise energy E generating means deriving E = Ehf - Ep.
16. A VEPC synthesizer for decoding a voice signal coded through a device according to claim 11, said synthesizer including:
- decoding means for decoding said LP parameters, said E, A(i), K and x(n);
- base-band preprocessing means sensitive to said x(n) train to derive a base-band train z(n);
- phase shifter means sensitive to z(n) and K to derive a shifted train z(n-K);
- upper band synthesis means sensitive to E, A(i) and z(n-K) to derive s(n);

- summing means for summing said upper band train s(n) and a delayed x(n) train;
- LP synthesis filter tuned by said decoded LP parameters and sensitive to the output of said summing means to derive the synthesized voice signal.
17. A VEPC synthesizer according to claim 16 wherein said base band preprocessing means include:
means sensitive to x(n) to derive z(n) according to claim 12.
18. A VEPC synthesizer according to claim 17 wherein said upper band synthesis means include :
- pulse generator means sensitive to A(i) and z(n-K) to derive a pulse signal component by replacing each pulse by a couple of pulses modulated by A(i);
- noise generator means sensitive to z(n-K) to derive a sequence of noise samples e(n);
- noise adjusting means sensitive to the noise energy E to derive a noise signal component e'(n) = e(n).E**(1/2);
- adding means for adding said noise component to said pulse signal component; and,
- high pass filter connected to said adding means to provide said s(n).
CA000535921A 1986-04-30 1987-04-29 Voice coding process and device for implementing said process Expired - Fee Related CA1285071C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP86430014A EP0243562B1 (en) 1986-04-30 1986-04-30 Improved voice coding process and device for implementing said process
EP86430014. 1986-04-30

Publications (1)

Publication Number Publication Date
CA1285071C true CA1285071C (en) 1991-06-18

Family

ID=8196395

Family Applications (1)

Application Number Title Priority Date Filing Date
CA000535921A Expired - Fee Related CA1285071C (en) 1986-04-30 1987-04-29 Voice coding process and device for implementing said process

Country Status (5)

Country Link
US (1) US5001758A (en)
EP (1) EP0243562B1 (en)
JP (1) JPS62261238A (en)
CA (1) CA1285071C (en)
DE (1) DE3683767D1 (en)

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE68916944T2 (en) * 1989-04-11 1995-03-16 Ibm Procedure for the rapid determination of the basic frequency in speech coders with long-term prediction.
US5261027A (en) * 1989-06-28 1993-11-09 Fujitsu Limited Code excited linear prediction speech coding system
JP2598159B2 (en) * 1990-08-28 1997-04-09 三菱電機株式会社 Audio signal processing device
DK46493D0 (en) * 1993-04-22 1993-04-22 Frank Uldall Leonhard METHOD OF SIGNAL TREATMENT FOR DETERMINING TRANSIT CONDITIONS IN AUDITIVE SIGNALS
BE1007617A3 (en) * 1993-10-11 1995-08-22 Philips Electronics Nv Transmission system using different codeerprincipes.
JPH07160299A (en) * 1993-12-06 1995-06-23 Hitachi Denshi Ltd Sound signal band compander and band compression transmission system and reproducing system for sound signal
FR2720849B1 (en) * 1994-06-03 1996-08-14 Matra Communication Method and device for preprocessing an acoustic signal upstream of a speech coder.
US5787387A (en) * 1994-07-11 1998-07-28 Voxware, Inc. Harmonic adaptive speech coding method and system
US5497337A (en) * 1994-10-21 1996-03-05 International Business Machines Corporation Method for designing high-Q inductors in silicon technology without expensive metalization
JPH08123494A (en) * 1994-10-28 1996-05-17 Mitsubishi Electric Corp Speech encoding device, speech decoding device, speech encoding and decoding method, and phase amplitude characteristic derivation device usable for same
EP0878790A1 (en) * 1997-05-15 1998-11-18 Hewlett-Packard Company Voice coding system and method
EP0945852A1 (en) * 1998-03-25 1999-09-29 BRITISH TELECOMMUNICATIONS public limited company Speech synthesis
SE9903553D0 (en) 1999-01-27 1999-10-01 Lars Liljeryd Enhancing conceptual performance of SBR and related coding methods by adaptive noise addition (ANA) and noise substitution limiting (NSL)
US6691082B1 (en) * 1999-08-03 2004-02-10 Lucent Technologies Inc Method and system for sub-band hybrid coding
US6704711B2 (en) * 2000-01-28 2004-03-09 Telefonaktiebolaget Lm Ericsson (Publ) System and method for modifying speech signals
SE0001926D0 (en) 2000-05-23 2000-05-23 Lars Liljeryd Improved spectral translation / folding in the subband domain
US20020128839A1 (en) * 2001-01-12 2002-09-12 Ulf Lindgren Speech bandwidth extension
AUPR433901A0 (en) 2001-04-10 2001-05-17 Lake Technology Limited High frequency signal construction method
SE0202159D0 (en) 2001-07-10 2002-07-09 Coding Technologies Sweden Ab Efficientand scalable parametric stereo coding for low bitrate applications
US8605911B2 (en) 2001-07-10 2013-12-10 Dolby International Ab Efficient and scalable parametric stereo coding for low bitrate audio coding applications
EP1423847B1 (en) 2001-11-29 2005-02-02 Coding Technologies AB Reconstruction of high frequency components
US20030116454A1 (en) * 2001-12-04 2003-06-26 Marsilio Ronald M. Lockable storage container for recorded media
US20030187663A1 (en) 2002-03-28 2003-10-02 Truman Michael Mead Broadband frequency translation for high frequency regeneration
US7447631B2 (en) * 2002-06-17 2008-11-04 Dolby Laboratories Licensing Corporation Audio coding system using spectral hole filling
SE0202770D0 (en) 2002-09-18 2002-09-18 Coding Technologies Sweden Ab Method of reduction of aliasing is introduced by spectral envelope adjustment in real-valued filterbanks
US7318027B2 (en) * 2003-02-06 2008-01-08 Dolby Laboratories Licensing Corporation Conversion of synthesized spectral components for encoding and low-complexity transcoding
US7318035B2 (en) * 2003-05-08 2008-01-08 Dolby Laboratories Licensing Corporation Audio coding systems and methods using spectral component coupling and spectral component regeneration
FR2865310A1 (en) * 2004-01-20 2005-07-22 France Telecom Sound signal partials restoration method for use in digital processing of sound signal, involves calculating shifted phase for frequencies estimated for missing peaks, and correcting each shifted phase using phase error
CN1989548B (en) * 2004-07-20 2010-12-08 松下电器产业株式会社 Audio decoding device and compensation frame generation method
US8219391B2 (en) * 2005-02-15 2012-07-10 Raytheon Bbn Technologies Corp. Speech analyzing system with speech codebook
WO2006089055A1 (en) * 2005-02-15 2006-08-24 Bbn Technologies Corp. Speech analyzing system with adaptive noise codebook
JP5807453B2 (en) * 2011-08-30 2015-11-10 富士通株式会社 Encoding method, encoding apparatus, and encoding program
US9236058B2 (en) * 2013-02-21 2016-01-12 Qualcomm Incorporated Systems and methods for quantizing and dequantizing phase information
EP2963645A1 (en) 2014-07-01 2016-01-06 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Calculator and method for determining phase correction data for an audio signal

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2412987A1 (en) * 1977-12-23 1979-07-20 Ibm France PROCESS FOR COMPRESSION OF DATA RELATING TO THE VOICE SIGNAL AND DEVICE IMPLEMENTING THIS PROCEDURE
US4330689A (en) * 1980-01-28 1982-05-18 The United States Of America As Represented By The Secretary Of The Navy Multirate digital voice communication processor
EP0070948B1 (en) * 1981-07-28 1985-07-10 International Business Machines Corporation Voice coding method and arrangment for carrying out said method
US4495620A (en) * 1982-08-05 1985-01-22 At&T Bell Laboratories Transmitting data on the phase of speech
US4535472A (en) * 1982-11-05 1985-08-13 At&T Bell Laboratories Adaptive bit allocator
US4667340A (en) * 1983-04-13 1987-05-19 Texas Instruments Incorporated Voice messaging system with pitch-congruent baseband coding
US4672670A (en) * 1983-07-26 1987-06-09 Advanced Micro Devices, Inc. Apparatus and methods for coding, decoding, analyzing and synthesizing a signal
US4704730A (en) * 1984-03-12 1987-11-03 Allophonix, Inc. Multi-state speech encoder and decoder

Also Published As

Publication number Publication date
JPH0575296B2 (en) 1993-10-20
JPS62261238A (en) 1987-11-13
DE3683767D1 (en) 1992-03-12
EP0243562B1 (en) 1992-01-29
US5001758A (en) 1991-03-19
EP0243562A1 (en) 1987-11-04

Similar Documents

Publication Publication Date Title
CA1285071C (en) Voice coding process and device for implementing said process
Tribolet et al. Frequency domain coding of speech
CA2140329C (en) Decomposition in noise and periodic signal waveforms in waveform interpolation
US4852169A (en) Method for enhancing the quality of coded speech
US5574823A (en) Frequency selective harmonic coding
EP0331857B1 (en) Improved low bit rate voice coding method and system
US6098036A (en) Speech coding system and method including spectral formant enhancer
US6067511A (en) LPC speech synthesis using harmonic excitation generator with phase modulator for voiced speech
JP4662673B2 (en) Gain smoothing in wideband speech and audio signal decoders.
US6377916B1 (en) Multiband harmonic transform coder
US6119082A (en) Speech coding system and method including harmonic generator having an adaptive phase off-setter
US4757517A (en) System for transmitting voice signal
US6081776A (en) Speech coding system and method including adaptive finite impulse response filter
US6138092A (en) CELP speech synthesizer with epoch-adaptive harmonic generator for pitch harmonics below voicing cutoff frequency
US20050252361A1 (en) Sound encoding apparatus and sound encoding method
US6094629A (en) Speech coding system and method including spectral quantizer
EP0865028A1 (en) Waveform interpolation speech coding using splines functions
KR100496670B1 (en) Speech analysis method and speech encoding method and apparatus
CA2412449C (en) Improved speech model and analysis, synthesis, and quantization methods
EP0865029B1 (en) Efficient decomposition in noise and periodic signal waveforms in waveform interpolation
JP3191926B2 (en) Sound waveform coding method
Esteban et al. 9.6/7.2 kbps voice excited predictive coder (VEPC)
EP0987680B1 (en) Audio signal processing
Viswanathan et al. Voice-excited LPC coders for 9.6 kbps speech transmission
Shoham Low complexity speech coding at 1.2 to 2.4 kbps based on waveform interpolation

Legal Events

Date Code Title Description
MKLA Lapsed