ACQUIRING LOCAL SIGNAL BEHAVIOR PARAMETERS
CROSS-REFERENCE TO A RELATED APPLICATION This application claims the benefit under 35 U.S.C. section 119(e) of U.S.
Provisional application Ser. No. 60/143,074, filed on July 9, 1999, entitled SIGNAL PROCESSING WITH LOCAL SIGNAL BEHAVIOR, naming Aleksandar Ignjatovic and Nicholas A. Carlin as inventors, which provisional application discloses an exemplary embodiment of the present invention and which is incorporated herein by reference in its entirety.
FIELD OF THE INVENTION
The invention relates to signal representation and processing, particularly to signal representation and processing using local signal behavior parameters.
BACKGROUND
The standard method of signal processing uses representation of band-limited signals by Nyquist rate samples. Unique determination of a band-limited signal requires a sequence of Nyquist rate samples from the entire interval (−∞, ∞). Moreover, such a sequence contains no redundant information, because by Nyquist's Theorem, for an arbitrary sequence of real numbers {a_k}_{k∈Z} with Σ_k a_k² < ∞ there exists a π-band-limited signal f(t) such that f(k) = a_k. Of course, in practice operators act (essentially) on a sequence of approximations of the input signal obtained using a suitable moving window and setting the values of the signal to 0 at all sampling points outside of the support of the window.
Historically, numerical signal processing is older than the use of digital computers.
For example, fast Fourier transforms (FFT) were first performed by hand. Also, early hardware for data acquisition (A/D conversion) and for numerical data processing was very limited in speed and capacity. In such circumstances it was important that the signal be represented without any redundancy and that the implementations of signal processing operators require a minimal number of basic arithmetical operations. This determined the direction of the development of signal processing algorithms, which resulted in an extremely elaborate and powerful signal processing paradigm centered around the "minimalist" representation of signals by the Nyquist rate samples. This paradigm is usually referred to as "signal processing based on harmonic analysis." However, such a "minimalist" approach to signal representation, free of any redundancy, need not always be optimal. Specifically, the standard signal processing operators act on sequences of values of the input signal at consecutive (Nyquist rate spaced) sampling points which are within the support of an appropriate window. These values are stored in some form of a shift register, and the output value is then obtained either from these samples alone (FIR procedures) or from these samples together with the previously computed value of the signal (IIR procedures). Both types of procedures necessarily produce significant delays and/or phase-shifts in the output signal.
Moreover, although present-day hardware is capable of accessing the information contained between the Nyquist rate sampling points, no approach based on harmonic analysis has been able to encode this information without producing a troublesome proliferation of numerical data which must be stored. The concept of "Signal Processor with Local Signal Behavior" introduced a method for using the information contained between the Nyquist rate sampling points without producing such a proliferation of stored numerical data, and without significantly increasing the computational complexity of the algorithms for subsequent processing of the data. The present invention significantly improves on that method by increasing accuracy, reducing the requirements for data storage, and allowing simpler algorithms for signal processing.
A need exists for signal representation and processing that does not necessarily produce delays and/or phase-shifts in the output signal. Also, a need exists for signal representation and processing that is not limited by the standard approaches. In particular, a need exists for making use of the information contained between Nyquist rate points
without producing troublesome proliferation of numerical data which must be stored. Moreover, given the present-day hardware, a need exists to access this information without increasing the computational complexity of the algorithms. The present invention provides methods for the above tasks significantly improving on those described in the invention "Signal Processor With Local Signal Behavior".
SUMMARY
The invention provides signal representation and processing that does not necessarily produce delays and/or phase-shifts in the output signal, and that is much superior to the one described in the invention "Signal Processor with Local Signal Behavior". Also, the invention provides superior signal representation and processing that go beyond the standard approaches of harmonic analysis. In particular, the invention makes use of the information contained between Nyquist rate points and represents it in an extremely efficient way. Moreover, given present-day hardware, the invention provides means for signal processing with low computational complexity of the algorithms.
Specifically, the invention is drawn to a method and a system for facilitated acquisition of local signal behavior parameters (LSBPs) of a band-limited (BL) signal, wherein the LSBPs encode the signal's local behavior in between Nyquist rate points. Present-day hardware is capable of accessing the information contained between the Nyquist rate sampling points. This information can be encoded by LSBPs in a way which neither produces troublesome proliferation of numerical data which must be stored, nor does it increase the computational complexity of the algorithms for subsequent data processing.
More specifically, the values of the signal between the Nyquist rate points can be accessed using an array (or a matrix) of suitable analog, digital, or mixed signal pre-processing stages. This access can be done in a discrete format (i.e., as discrete voltages as in CCD type devices, numerical digital values provided by an oversampling A/D converter). This access can also be done in a continuous format (by various analog multipliers-integrators or an analog or mixed signal multiplier-integrator circuit as described in this invention). The values of these LSBPs can be obtained from the pre-processing devices at Nyquist rate or even sub-Nyquist rate. These LSBPs are obtained in parallel by some suitable hardware. The LSBPs describe the local signal
behavior around a Nyquist rate sampling point. Additionally, the LSBPs encode local signal behavior of the signal between Nyquist rate points.
Preferably, a section of a BL signal within a sampling window is represented as a truncated series of order n at a sampling moment within the sampling window. The truncated series has n + 1 LSBPs as its coefficients, encoding the signal's local behavior between Nyquist rate points. Compared to the conventional approach that encodes a signal's behavior by signal samples taken at Nyquist rate points, the invention encodes more accurately and more completely the behavior of the input signal. The LSBPs are solved numerically such that the interpolated values of the truncated series provide the best fit with the input signal. The LSBPs are respectively the values of chromatic differential operators of order 0 to order n evaluated at the sampling moment. As understood herein, the present embodiment is not restricted to the least-square fitting technique. In an alternative embodiment, curve fitting techniques other than the least-square fit technique can also be implemented. Also as understood herein, LSBPs need not be solved by using discrete signal samples. For example, in another embodiment, LSBPs are solved by using a continuous signal section.
Alternatively, when a sampling window is constituted by two local sampling windows sharing an overlap, a section of a BL analog signal has a first subsection in the first local window and a second subsection in the second local window. In this scenario, the first subsection is first represented as a first truncated series at a first sampling moment in the first local window. This first truncated series is parametrized by n + 1 coefficients that are n + 1 LSBPs adapted for characterizing local signal behavior of the first subsection between Nyquist points. The second subsection is represented as a second truncated series at a second sampling moment in the second local window. This second truncated series is parametrized by m + 1 coefficients that are m + 1 LSBPs adapted for characterizing local signal behavior of the second subsection between Nyquist points. Numerical values of the n + 1 LSBPs and the m + 1 LSBPs are then obtained by solving for k LSBPs, for some k > n and k > m. These k LSBPs provide a single curve fit over both local sampling windows. As understood herein, the present embodiment is not restricted to the least-square fitting technique. In an alternative embodiment, curve fitting techniques other than the least-square fit technique are implemented. Also as understood herein, LSBPs need not be solved by using discrete signal samples. For example, in another embodiment, LSBPs are solved
by using a continuous signal section.
In another embodiment, the invention is drawn to an oversampling analog/digital (A/D) converter followed by an array of multiplier-accumulator circuits, which together comprise a data acquisition unit. The data acquisition unit is adapted for obtaining LSBPs to characterize an input signal. In particular, the data acquisition performs steps that include representing the input signal as a truncated series (at a time t) having the LSBPs as coefficients, obtaining discrete signal samples or a continuous signal section from the input signal, and solving for the LSBPs by an approximation technique such as, for example, least-square fitting. Typically, the truncated series is formed by fundamental basis functions. Moreover, the LSBPs are values of chromatic linear operators of various orders evaluated at the time t.
In another embodiment, the invention is drawn to an analog or mixed signal multiplier-integrator circuit that includes a data acquisition unit for obtaining LSBPs to characterize an input signal. In particular, the data acquisition performs steps that include representing the input signal as a truncated series (at a time t) having the LSBPs as coefficients, obtaining discrete signal samples or a continuous signal section from the input signal, and solving for the LSBPs by an approximation technique such as, for example, least-square fitting. Typically, the truncated series is formed by fundamental basis functions. Moreover, the LSBPs are values of chromatic linear operators of various orders evaluated at the time t. In particular, this circuit can be obtained by combining a multiplying digital-to-analog converter followed by an analog integrator. More specifically, the input signal is used as a variable reference voltage of the multiplying D/A converter, whose digital input is supplied with properly clocked digital values of a fixed function. These digital values can be stored in a memory from which they are sent to the above-mentioned D/A converter, timed with a clock whose speed corresponds to an oversampling frequency with respect to the input signal. The output of this D/A converter is then fed to the analog integrator circuit, whose output is sampled at the end of each integration period. The length of this integration period depends on the number of LSBPs being acquired. After sampling, the integrator is reset, and the cycle is repeated. In yet another embodiment, the invention is drawn to a signal processing system for processing LSBPs obtained by steps that include representing the input signal as a truncated series having the LSBPs as coefficients, obtaining discrete signal samples or
a continuous signal section from the input signal, and solving for the LSBPs by an approximation technique such as, for example, least-square fitting. Typically, the truncated series is formed by fundamental basis functions. Moreover, the LSBPs are values of chromatic linear operators of various orders evaluated at the time t. The present invention significantly improves on the aforementioned invention by increasing accuracy, reducing the requirements for data storage, and allowing simpler algorithms for signal processing.
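The multiply-then-integrate acquisition stage described above can be modeled in discrete time. The sketch below is only illustrative (the function names, step count, and test signal are our assumptions, not part of the invention): it approximates the integral of the product of the input signal with a fixed reference function by a midpoint Riemann sum, which is what the multiplying D/A converter followed by an analog integrator computes in the continuous domain.

```python
import math

def multiply_integrate(f, g, t_start, t_end, n_steps=10000):
    """Discrete-time model of the multiplying-DAC-plus-integrator stage:
    approximates the integral of f(t)*g(t) over one integration period
    with a midpoint Riemann sum (illustrative only)."""
    dt = (t_end - t_start) / n_steps
    total = 0.0
    for i in range(n_steps):
        t = t_start + (i + 0.5) * dt
        total += f(t) * g(t) * dt
    return total

# Example: the "integral of signal times reference" for f = g = sin
# over a full period [0, 2*pi] equals pi exactly.
acc = multiply_integrate(math.sin, math.sin, 0.0, 2.0 * math.pi)
```

In hardware, resetting the integrator after each sampled output corresponds to starting a fresh call of this accumulation with a new integration period.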
BRIEF DESCRIPTION OF THE FIGURES

Figures 1A-1C are graphs plotted at three different resolutions for comparing two multiplicative factors associated with an ordinary derivative differential operator.

Figures 1D-1E are graphs plotted at three different resolutions for comparing two multiplicative factors associated with a chromatic differential operator.
Figure 2 A depicts an A/D converter in accordance with one embodiment of the invention.
Figure 2B depicts a schematic diagram of analog/mixed-signal circuitry for performing curve fitting in accordance with one embodiment of the invention.
Figure 3 is a flow chart outlining steps for coding a signal section within a sampling window in accordance with one embodiment of the invention. Figure 4 is a flow chart outlining steps for coding a signal section within two overlapping sampling windows in accordance with one embodiment of the invention.
Figure 5 is a flow chart outlining steps for performing signal processing transformation in accordance with one embodiment of the invention.
Figure 6 is a flow chart outlining steps for coding a signal section from local signal behavior parameters of two overlapping subsections that constitute the signal section in accordance with one embodiment of the invention.
Figure 7 is a flow chart outlining steps for obtaining a chromatic derivative from lower order chromatic derivatives in accordance with one embodiment of the invention.
DETAILED DESCRIPTION
The Chromatic Signal Processing (CSP) method is based on differential operators of a special kind. It allows signal processing based on local signal behavior, i.e., behavior of the signal over a relatively short period of time, provided that the signal is given either by its sufficiently oversampled values (with respect to its corresponding Nyquist rate) or in its analog format. If the signal is given by Nyquist rate samples, the method is still applicable, but some of its features are lost. CSP not only naturally integrates with methods based on harmonic analysis but also greatly enhances these methods, adding complementary means based on a different mathematical paradigm (genuine differential calculus, rather than finite differences).
Here the fundamental procedures and systems of CSP are disclosed together with the underlying basic mathematical principles.
Signal Representation
The method of harmonic analysis represents the signal by the Nyquist interpolation formula,

f[t] = Σ_{k=−∞}^{∞} f[k] · sin[π(t − k)] / (π(t − k)),

which is a linear combination of all integer shifts of a single function sin[πt]/(πt), with coefficients equal to the values f[k] of the input signal f at these integers.
The method of wavelets, on the other hand, represents the input signal as a linear combination of integer shifts and binary scalings ψ[2^j t − k] of a single "mother wavelet" ψ[t].
The CSP method uses a representation of the signal obtained using a single operator CD[., .] such that for each k, CD[k, .] is a linear differential operator of order k. The signal is represented in the form of a sum of the outputs of the operator CD[k, .] applied to a single basis function B_0[t], with coefficients obtained by applying the same differential operator CD[k, .] to the input signal and then evaluating these outputs at the central point of expansion t_0. This results in a "Taylor-type" expansion whose convergence properties make such representation highly local, with the error of approximation minimal around the central point of expansion t_0. Such expansion has the following form:
f[t] = Σ_{k=0}^{∞} A_k CD[k, f][t_0] CD[k, B_0][t − t_0]
with functions CD[k, B_0][t − t_0] replacing the polynomials (t − t_0)^k / k!, and with coefficients CD[k, f][t_0] replacing the values f^(k)(t_0) of the standard operator of differentiation in the standard Taylor's formula. However, the error of approximation at a point t moving away from the central point of expansion t_0 has different features than the error of the standard Taylor's expansion. These coefficients CD[k, f][t_0] can be considered as local signal behavior parameters (LSBPs) that encode the local behavior of f[t] between Nyquist rate points. Moreover, the differential operators CD[n, f] have the property that, unlike the standard differentiation operators, they faithfully encode the spectral content of the signal without accumulating noise. For this reason the family of linear differential operators {CD[n, f]}_{n∈N} obtained from the operator CD[., .] defined by recursion is called a family of chromatic derivatives. Chromatic derivatives of high order (16 and above) can be evaluated with accuracy better than 10^−3 of their full scale, in the presence of out-of-band noise exceeding in amplitude the signal itself. The functions CD[k, B_0][t − t_0] replace the polynomials used in the invention "Signal Processor with Local Signal Behavior" and represent a very significant improvement, providing much higher accuracy, as well as simplified and novel signal processing methods.
In the following sections, global approximations (or representations) of a signal using harmonic analysis technique and local approximations of a signal using the chromatic derivative technique are introduced in turn. Then a specific type of local approximation using chromatic derivatives associated with Chebyshev polynomials are introduced to demonstrate the chromatic derivative technique.
Global Approximations
Assume that f[t] is a signal of finite energy and that its Fourier transform H_f[ω] is equal to 0 outside a finite interval. We can assume that the frequency variable ω is chosen so that H_f[ω] = 0 outside the interval [−π, π]. Then

f[t] = (1/2π) ∫_{−π}^{π} H_f[ω] e^{iωt} dω

and

H_f[ω] = ∫_{−∞}^{∞} f[t] e^{−iωt} dt.

To represent a band-limited function one can decompose its Fourier transform into a series. The most fundamental such decomposition results in Nyquist's Theorem. In this case the Fourier transform is represented by a Fourier series. Since

f[k] = (1/2π) ∫_{−π}^{π} H_f[ω] e^{ikω} dω

and since H_f[ω] = 0 outside the interval [−π, π], H_f[ω] can be expanded into a Fourier series:

H_f[ω] = Σ_{k=−∞}^{∞} f[k] e^{−ikω}  for ω ∈ [−π, π].

Thus,

f[t] = Σ_{k=−∞}^{∞} f[k] · sin[π(t − k)] / (π(t − k)).
It can be shown that the convergence in the sense of L²(R) implies also pointwise convergence. For f[t] = Σ_{k=−∞}^{∞} f[k] sin[π(t − k)]/(π(t − k)), the integrability condition ∫ |f[t]|² dt < ∞, due to the orthonormality of the base {sin[π(t − k)]/(π(t − k))}_{k∈Z}, is equivalent to the condition Σ_{k=−∞}^{∞} f[k]² < ∞. Notice that if Σ_{k=−∞}^{∞} f[k]² does not converge, then Σ_{k=−∞}^{∞} f[k] sin[π(t − k)]/(π(t − k)) might not converge, even if f[t] is bounded. For example, let g[k] = (−1)^k; then

Σ_{k=−∞}^{∞} (−1)^k sin[π(t − k)]/(π(t − k))

does not converge. This shows the truly global nature of the Nyquist interpolation formula: the truncation error can be bounded only in terms of the energy of the signal outside the truncation interval.
In the above representation, the Fourier transform H_f[ω] of f was expanded into a series. Thus, the expansion is valid for every t.
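The slow, energy-limited convergence of the truncated Nyquist interpolation formula can be observed numerically. The sketch below is our illustration (the test signal and truncation lengths are arbitrary choices): it reconstructs a π-band-limited signal from its integer samples and shows how large a truncation window is needed for even modest accuracy, reflecting the 1/k decay of the interpolation kernel.

```python
import math

def sinc_pi(t):
    """sin(pi*t)/(pi*t), the interpolation kernel for pi-band-limited signals."""
    if t == 0.0:
        return 1.0
    return math.sin(math.pi * t) / (math.pi * t)

def nyquist_interpolate(samples_fn, t, k_range):
    """Truncated Nyquist interpolation sum over integer sampling points |k| <= k_range."""
    return sum(samples_fn(k) * sinc_pi(t - k) for k in range(-k_range, k_range + 1))

# A pi-band-limited test signal: a shifted interpolation kernel.
f = lambda t: sinc_pi(t - 0.5)

approx = nyquist_interpolate(f, 0.3, 200)   # 401 samples used
exact = f(0.3)
```

Even with 401 samples, the truncation error is only on the order of 10^−3 here; widening the window from 50 to 200 sampling points shrinks the error roughly in proportion to the tail energy, as the text asserts.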
Local Approximations
Generalized Representation of Signals for CSP
We now consider chromatic derivatives associated with an arbitrary family of polynomials of appropriate type. Let

f[t] = (1/2π) ∫_{−π}^{π} H_f[ω] e^{iωt} dω

be a representation of a signal f by its Fourier transform. Then the usual differentiation gives

f'[t] = (1/2π) ∫_{−π}^{π} iω H_f[ω] e^{iωt} dω.

Thus an application of a linear differential operator D[n, .] = Σ_{m=0}^{n} a_{n,m} (d/dt)^m of order n to the input signal produces a multiplicative factor in the Fourier transform of the signal which is a polynomial P_n[iω/π]:

D[n, f][t] = (1/2π) ∫_{−π}^{π} P_n[iω/π] H_f[ω] e^{iωt} dω.
A family of differential operators is a chromatic differentiation family if the corresponding family of polynomial multiplicative factors P_n[iω/π] satisfies some special properties.
We say that a system of polynomials {P_n} feasibly separates points on [−π, π] if, for every two numbers x and y such that x ≠ y and x, y ∈ [−π, π], there exist polynomials P_n in the system of arbitrarily large degree such that, for M = max{|P_n[iω/π]| : −π ≤ ω ≤ π}, it holds that

|P_n[ix/π] − P_n[iy/π]| > kM

for a nonzero, yet reasonably small positive number k. Thus it is not the case that all polynomials in the system with high index attain small values at all points within an interval inside the interval [−π, π].
A family of differential operators is a chromatic differentiation family if the corresponding family of polynomial multiplicative factors P_n[iω/π] feasibly separates points. This ensures that the family preserves the spectral features of the signal, since at no part of the spectrum do all of the polynomials have excessively small values. The main examples of polynomials which feasibly separate points are orthogonal polynomials.
Thus, we choose a continuous, non-negative, even function w[t] (i.e., such that w[t] = w[−t]) which has finitely many zeros in [−π, π] and such that ∫_{−π}^{π} w[t] P[t/π]² dt converges for every polynomial P. Then

⟨f, g⟩ = ∫_{−π}^{π} w[t] f[t] g[t] dt

is a scalar product on the vector space of all functions f such that ∫_{−π}^{π} w[t] f[t]² dt exists. Linear independence of the polynomials 1, x, x², ..., x^i, ... and the Gram-Schmidt procedure guarantee the existence of a unique sequence of polynomials SP = {P_i}_{i∈N} such that ∫_{−π}^{π} w[t] P_i[t/π] P_j[t/π] dt = 0 for all i ≠ j and such that ∫_{−π}^{π} w[t] P_i[t/π]² dt = h_i ≠ 0 for each i ≥ 0. As is well known, any family of orthogonal polynomials of the form P_n[t] = k_n t^n + k'_n t^{n−1} + ... satisfies a three-term recursion of the form P_{n+1}[t] = (b_n t + a_n) P_n[t] − c_n P_{n−1}[t].
A system of polynomials on [−π, π] is regular if:
i. it contains exactly one polynomial of each degree n (thus also a constant function);
ii. polynomials P_n[iω] which belong to a regular family and are of even index contain only even powers of (iω), while polynomials P_n[iω] which belong to a regular family and are of odd index contain only odd powers of (iω). Thus, the polynomials containing only even powers are real, while the polynomials containing only odd powers are of the form iQ[ω], where Q is a real polynomial in ω.
The significance of regular systems comes from the fact that the Fourier transform of a real function has an even real part and an odd imaginary part.
In the case of orthogonal polynomials, if a_n = 0, i.e., if k'_{n+1}/k_{n+1} = k'_n/k_n for all n ≥ 0, and if in addition k'_1 = 0, these polynomials form a regular system, satisfying P_0[t] = k_0, P_1[t] = k_1 t, and P_{n+1}[t] = b_n t P_n[t] − c_n P_{n−1}[t].
To any regular set of orthogonal polynomials SP we associate a class SD of linear differential operators defined by the following recursion scheme. Let P_0[t] = k_0 and P_1[t] = k_1 t. We set:

Δ_0[f] = k_0 f;
Δ_1[f] = (k_1/π) (df/dt);
Δ_{n+1}[f] = (b_n/π) (d/dt)(Δ_n[f]) + c_n Δ_{n−1}[f], for n ≥ 1.
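On the frequency side, the operator recursion above mirrors the three-term recursion P_{n+1}(x) = b_n x P_n(x) − c_n P_{n−1}(x) for the multiplier polynomials. A minimal sketch follows (the helper names are ours; the choice b_n = 2, c_n = 1 is just one instance, yielding the Chebyshev T polynomials used later in this disclosure):

```python
def poly_mul_x(p):
    """Multiply a polynomial (coefficient list, lowest degree first) by x."""
    return [0.0] + p

def poly_axpy(a, p, q):
    """Return a*p + q for coefficient lists p, q."""
    n = max(len(p), len(q))
    p = p + [0.0] * (n - len(p))
    q = q + [0.0] * (n - len(q))
    return [a * pc + qc for pc, qc in zip(p, q)]

def multiplier_polys(n_max, b, c, k0=1.0, k1=1.0):
    """Multiplier polynomials P_n generated by the three-term recursion
    P_{n+1}(x) = b_n * x * P_n(x) - c_n * P_{n-1}(x),
    mirroring the recursion scheme for the operators Delta_n."""
    polys = [[k0], [0.0, k1]]           # P_0 = k0, P_1 = k1 * x
    for n in range(1, n_max):
        xp = poly_mul_x([b(n) * coef for coef in polys[n]])
        polys.append(poly_axpy(-c(n), polys[n - 1], xp))
    return polys

# Chebyshev T polynomials: b_n = 2, c_n = 1 for n >= 1.
cheb = multiplier_polys(4, b=lambda n: 2.0, c=lambda n: 1.0)
# cheb[2] is T_2(x) = 2x^2 - 1, cheb[4] is T_4(x) = 8x^4 - 8x^2 + 1.
```

Substituting d/dt for multiplication by x (up to the factors of π and i discussed below) turns each coefficient list into the corresponding differential operator Δ_n.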
A family of differential operators SD is a chromatic differentiation family, for example, if the corresponding family of polynomial multiplicative factors P_n[iω/π] is a regular family of orthogonal polynomials. As described in the patent disclosure "Signal Processing with Local Signal Behavior", Chebyshev and Legendre polynomials are typical examples of such polynomials, and there it is also explained why such polynomials encode the spectrum of the signal. From the same disclosure it is clear that the family 1, (iω), (iω)², ..., (iω)^n, ... does NOT feasibly separate points. The last condition ensures that the polynomials efficiently encode the spectral features of the signal rather than, e.g., accumulating the noise from the ends of the spectrum, as is the case with 1, (iω), (iω)², ..., (iω)^n, ...
Let B_0[t] = (1/2π) ∫_{−π}^{π} w[ω] e^{iωt} dω; then we call B_0[t] the fundamental basis function of order 0 associated with a regular family of polynomials SP, orthogonal with respect to the scalar product ⟨f, g⟩ = ∫_{−π}^{π} w[t] f[t] g[t] dt. Let also B_n[t] = (−1)^n Δ_n[B_0]. Then B_n[t] is called the fundamental basis function of order n.
An example of a regular system, heavily used in the present embodiment of the Chromatic Signal Processing method, is the Chebyshev-type chromatic derivative system, which relies on the properties of this system listed below.
Chebyshev Polynomials and Chebyshev-type Chromatic Derivative System
Decomposing the Fourier transform of a function into a Fourier series resulted in Nyquist's Theorem. Instead, in the equality f[t] = (1/2π) ∫_{−π}^{π} H_f[ω] e^{iωt} dω one can expand the factor e^{iωt} into a series. As an example of the mathematics behind our signal processing method, we use the fundamental basis of functions generated by the well-known regular series of orthogonal polynomials called the Chebyshev T polynomials.

Let T_k(x) (or equivalently T[k, x]) be the k-th Chebyshev polynomial, and let J_m(t) (or equivalently J[m, t]) be the m-th Bessel function of the first kind. Then the well-known equalities

∫_0^1 T_{2k}(ω) cos[aω] dω/√(1 − ω²) = (−1)^k (π/2) J_{2k}(a)

∫_0^1 T_{2k+1}(ω) sin[aω] dω/√(1 − ω²) = (−1)^k (π/2) J_{2k+1}(a)

hold for integers k ≥ 0 and imply that

cos[ωt] = J_0(πt) + 2 Σ_{k=1}^{∞} (−1)^k J_{2k}(πt) T_{2k}(ω/π)

and

sin[ωt] = 2 Σ_{k=0}^{∞} (−1)^k J_{2k+1}(πt) T_{2k+1}(ω/π).
The Chebyshev polynomials T_k are orthogonal over [−π, π] with the weight 1/√(1 − (ω/π)²). Thus, the factors above are, after normalization, the coefficients of the expansions of sin[ωt] and cos[ωt] in the series of Chebyshev T polynomials. Hence, we can write:

e^{iωt} = J_0(πt) + 2 Σ_{k=1}^{∞} i^k J_k(πt) T_k(ω/π).

This can be shown to imply that for any fixed parameter t_0 and every t,

f[t + t_0] = (1/2π) ∫_{−π}^{π} H_f[ω] e^{iω(t + t_0)} dω
           = J_0(πt) f[t_0] + 2 Σ_{k=1}^{∞} J_k(πt) · (1/2π) ∫_{−π}^{π} i^k T_k(ω/π) H_f[ω] e^{iωt_0} dω.
Since the T_m(ω/π) are polynomials, the above factors

(1/2π) ∫_{−π}^{π} i^m T_m(ω/π) H_f[ω] e^{iωt_0} dω

are values of linear differential operators applied to f[t] and then evaluated at t = t_0.
Due to their properties we call these differential operators chromatic derivatives of Chebyshev type.
The stability of the expansion follows from the standard theory of orthogonal functions (e.g., the uniqueness of approximation coefficients) because the Bessel functions form an orthogonal system: {J_m[t] : m ∈ N} is an orthogonal system in the space of all π-band-limited functions f[t] such that

∫_{−π}^{π} |H_f[ω]|² √(1 − (ω/π)²) dω

converges, where again H_f[ω] is the Fourier transform of f, and with the scalar product defined by

⟨f, g⟩ = (1/π) ∫_{−π}^{π} H_f[ω] conj(H_g[ω]) √(1 − (ω/π)²) dω.
It can be shown that this is a correct definition of a scalar product. This space properly extends the space of 7T band-limited square integrable functions; obviously it includes all the Bessel functions {Jm[t] : m € N} .
The orthogonality, in particular, implies that expansions of lower order are just the truncations of expansions of higher order. We will find that the Chebyshev polynomials (frequency domain) and the Bessel functions (time domain) form a very tightly-linked pair for performing signal processing on band-limited signals. In practice we will find a series of useful corresponding time domain basis functions tied to any regular, orthogonal frequency domain polynomials that we choose.
Chromatic Derivatives associated with the Family of Chebyshev Polynomials

These differential operators were introduced in the patent "Signal Processor with Local Signal Behavior".
Let f(t) be an arbitrary function; then the values of the operators on N × C^∞ → C^∞ defined below are called the CHROMATIC DERIVATIVES of f(t) associated with the family of Chebyshev polynomials T_n(ω/π):

CD_Chebyshev[0, f] = f;
CD_Chebyshev[1, f] = (1/π)(d CD_Chebyshev[0, f]/dt);
CD_Chebyshev[k, f] = (2/π)(d CD_Chebyshev[k − 1, f]/dt) + CD_Chebyshev[k − 2, f] for k ≥ 2.

(CD_Chebyshev[k, f])[t] will be denoted by CD_Chebyshev[k, f, t], and simply by CD[k, f, t] if it is clear what the input function f is and what family of polynomials is used.
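Since d/dt multiplies e^{iat} by ia, the recursion above can be checked numerically against its claimed spectral multiplier i^k T_k(a/π). The sketch below is ours (function names and the frequency a = 2 are illustrative choices; any |a| < π works):

```python
import math

def chromatic_multiplier(k, a):
    """Apply the Chebyshev chromatic-derivative recursion to e^{i a t}:
    CD[k, e^{iat}] = m_k * e^{iat} with m_0 = 1, m_1 = i*a/pi,
    m_k = (2*i*a/pi) * m_{k-1} + m_{k-2}."""
    m_prev, m = 1.0 + 0.0j, 1j * a / math.pi
    if k == 0:
        return m_prev
    for _ in range(k - 1):
        m_prev, m = m, (2j * a / math.pi) * m + m_prev
    return m

def chebyshev_T(k, x):
    """T_k(x) by the standard three-term recursion."""
    t_prev, t = 1.0, x
    if k == 0:
        return t_prev
    for _ in range(k - 1):
        t_prev, t = t, 2.0 * x * t - t_prev
    return t

# The multiplier of CD[k, .] acting on e^{iat} should be i^k T_k(a/pi).
I_POW = (1, 1j, -1, -1j)  # exact powers of i
a = 2.0
vals = [chromatic_multiplier(k, a) for k in range(9)]
refs = [I_POW[k % 4] * chebyshev_T(k, a / math.pi) for k in range(9)]
```

For even k the multiplier is real and for odd k purely imaginary, reflecting the regularity (even/odd) structure of the Chebyshev system.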
Properties of Chromatic Derivatives associated with Chebyshev Polynomials Chromatic derivatives associated with Chebyshev polynomials have several important properties. These properties are summarized below.
The formula relating the chromatic derivatives of a signal to the signal's spectrum is: if f[t] = (1/2π) ∫_{−π}^{π} H_f[ω] e^{iωt} dω, then

(CD[k, f])[t] = (1/2π) ∫_{−π}^{π} i^k T_k(ω/π) H_f[ω] e^{iωt} dω.
Expansions of Band-Limited Signals in terms of Chromatic Derivatives associated with Chebyshev Polynomials
The following list of formulas' exact forms is special to the Chebyshev-Bessel expansion, but analogous equations can be computed for any such orthogonal polynomial-basis function pair by using simple properties particular to each such pair. The expansion is a novelty first presented herein.
We can formulate a "Taylor-Chebyshev-Bessel" type expansion formula. Since, as shown above,

f[t + t_0] = J_0(πt) · (1/2π) ∫_{−π}^{π} H_f[ω] e^{iωt_0} dω
  + 2 Σ_{k=1}^{∞} J_{2k}(πt) · (1/2π) ∫_{−π}^{π} i^{2k} T_{2k}(ω/π) H_f[ω] e^{iωt_0} dω
  + 2 Σ_{k=0}^{∞} J_{2k+1}(πt) · (1/2π) ∫_{−π}^{π} i^{2k+1} T_{2k+1}(ω/π) H_f[ω] e^{iωt_0} dω,

we obtain, once more, that

f[t + t_0] = f[t_0] J_0(πt) + 2 Σ_{k=1}^{∞} CD[k, f, t_0] J_k(πt).
Setting t + t_0 = τ, we get the more usual form

f[τ] = f[t_0] J_0(π(τ − t_0)) + 2 Σ_{k=1}^{∞} CD[k, f, t_0] J_k(π(τ − t_0)).
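The expansion can be verified numerically for a simple in-band test signal. The sketch below is our illustration (we choose f(s) = e^{ias} with |a| < π, for which CD[k, f, t_0] = i^k T_k(a/π) e^{iat_0} by the spectral formula above); J_n is implemented by its power series:

```python
import cmath
import math

def bessel_j(n, x, terms=40):
    """J_n(x) via its power series; adequate for moderate |x|."""
    total = 0.0
    for m in range(terms):
        total += (-1) ** m / (math.factorial(m) * math.factorial(m + n)) \
                 * (x / 2.0) ** (2 * m + n)
    return total

def chebyshev_T(k, x):
    """T_k(x) by the standard three-term recursion."""
    t_prev, t = 1.0, x
    if k == 0:
        return t_prev
    for _ in range(k - 1):
        t_prev, t = t, 2.0 * x * t - t_prev
    return t

def chromatic_expansion(a, t0, t, order):
    """Truncated Taylor-Chebyshev-Bessel expansion of f(s) = exp(i*a*s):
    f(t0+t) ~ f(t0) J_0(pi t) + 2 sum_k CD[k,f,t0] J_k(pi t),
    with CD[k,f,t0] = i^k T_k(a/pi) exp(i*a*t0)."""
    f_t0 = cmath.exp(1j * a * t0)
    total = f_t0 * bessel_j(0, math.pi * t)
    for k in range(1, order + 1):
        cd_k = (1j ** k) * chebyshev_T(k, a / math.pi) * f_t0
        total += 2.0 * cd_k * bessel_j(k, math.pi * t)
    return total

a, t0, t = 2.0, 0.7, 0.4
approx = chromatic_expansion(a, t0, t, order=16)
exact = cmath.exp(1j * a * (t0 + t))
```

At order 16 the truncation error is far below any practical quantization noise, illustrating the highly local convergence around t_0 claimed in the text.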
We now briefly review basic properties of chromatic differentiation.
Composition of Chromatic Derivatives associated with Chebyshev Polynomials
CD[m, CD[n, f]] = (1/2) (CD[m + n, f] + (−1)^{min(m, n)} CD[|m − n|, f]).
Chromatic Derivatives of Bessel Functions

(1) CD[m, J_0[πt]] = (−1)^m J_m[πt] = J_{−m}[πt];

(2) CD[m, J_n[πt]] = (−1)^m (J_{m+n}[πt] + (−1)^{min(m, n)} J_{|m−n|}[πt]) / 2.
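Formulas (1) and (2) can be reproduced mechanically by applying the defining recursion to functions represented in the Bessel basis, using the classical identity J_n'(x) = (J_{n−1}(x) − J_{n+1}(x))/2 together with J_{−n} = (−1)^n J_n. The dictionary encoding below is our own device, not part of the disclosure:

```python
from collections import defaultdict

def deriv_over_pi(c):
    """(1/pi) d/dt applied to sum_n c[n]*J_n(pi*t), via
    J_n'(x) = (J_{n-1}(x) - J_{n+1}(x))/2, folding negative
    indices with J_{-n} = (-1)^n J_n."""
    out = defaultdict(float)
    for n, v in c.items():
        for idx, s in ((n - 1, 0.5), (n + 1, -0.5)):
            if idx < 0:
                s *= (-1) ** (-idx)
                idx = -idx
            out[idx] += s * v
    return {n: v for n, v in out.items() if v != 0.0}

def chromatic(m, c):
    """CD[m, .] in the Bessel basis, by the Chebyshev recursion
    CD[0,f] = f, CD[1,f] = (1/pi) f', CD[k,f] = (2/pi)(CD[k-1,f])' + CD[k-2,f]."""
    if m == 0:
        return dict(c)
    prev, cur = dict(c), deriv_over_pi(c)
    for _ in range(m - 1):
        nxt = defaultdict(float)
        for n, v in prev.items():
            nxt[n] += v
        for n, v in deriv_over_pi(cur).items():
            nxt[n] += 2.0 * v
        prev, cur = cur, {n: v for n, v in nxt.items() if v != 0.0}
    return cur
```

Starting from {0: 1.0} (the function J_0(πt)) recovers formula (1); starting from {n: 1.0} recovers instances of formula (2), e.g. CD[2, J_1[πt]] = (J_3[πt] − J_1[πt])/2.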
Thus, we can show that for any band-limited function f,

f[t] = f[t_0] J_0[π(t − t_0)] + 2 Σ_{k=1}^{∞} CD[k, f, t_0] J_k[π(t − t_0)]
     = f[t_0] J_0[π(t − t_0)] + 2 Σ_{k=1}^{∞} (−1)^k CD[k, f, t_0] CD[k, J_0[π(t − t_0)]].
While the global representation of f[t] is obtained by representing the Fourier transform H_f[ω] by a Fourier series, the local representation is obtained by representing e^{iωt} as a series of Chebyshev polynomials in the frequency variable ω, with multiples of Bessel functions of time as coefficients:

e^{iωt} = J_0(πt) + 2 Σ_{k=1}^{∞} i^k J_k(πt) T_k(ω/π),

i.e., by an expansion in the base consisting of Chebyshev polynomials.
The difference in local and global representation is manifested in the nature of the convergence of the series. The error of truncated local approximation can be shown to be estimated as follows.
Err[f, t, t_0, m] ≡ | f[t] − f[t_0] J_0(π(t − t_0)) − 2 Σ_{k=1}^{m} CD[k, f, t_0] J_k(π(t − t_0)) |

  ≤ A √( 1 − J_0[π(t − t_0)]² − Σ_{s=1}^{m} 2 J_s[π(t − t_0)]² ),

where A is a constant which depends upon the energy of the signal. Analogous Taylor-type approximation estimates can be derived for any series of basis functions we choose to use.
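Because J_0(x)² + 2 Σ_{s≥1} J_s(x)² = 1, the bracketed factor in the bound above tends to 0 as m grows, which is what makes the truncated expansion accurate near t_0. A numerical sketch (helper names and the choice t − t_0 = 0.8 are ours; the power-series J_n is adequate for moderate arguments):

```python
import math

def bessel_j(n, x, terms=40):
    """J_n(x) via its power series."""
    total = 0.0
    for m in range(terms):
        total += (-1) ** m / (math.factorial(m) * math.factorial(m + n)) \
                 * (x / 2.0) ** (2 * m + n)
    return total

def bound_factor(x, m):
    """sqrt(1 - J_0(x)^2 - 2*sum_{s=1}^m J_s(x)^2), the m-dependent factor
    in the truncation-error estimate; clamped at 0 against rounding."""
    s = 1.0 - bessel_j(0, x) ** 2 \
        - 2.0 * sum(bessel_j(k, x) ** 2 for k in range(1, m + 1))
    return math.sqrt(max(s, 0.0))

x = math.pi * 0.8  # corresponds to t - t0 = 0.8
factors = [bound_factor(x, m) for m in (2, 4, 8, 16)]
```

The factor drops steeply with m, mirroring the rapid decay of high-order Bessel functions near the center of expansion.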
Expansions of Chromatic Derivatives of Band-Limited Signals
There are two ways of expanding chromatic derivatives. These two ways are summarized in the two formulas below:
Formula A (obtained by chromatic differentiation of the chromatic expansion of the signal): Let f[t] be a band-limited signal of finite energy, i.e., an absolutely integrable function whose Fourier transform is 0 outside the interval [−π, π]. Then

CD[n, f, t] ≈ (−1)^n ( f[t_0] J_n[π(t − t_0)]
  + Σ_{m=1}^{M} CD[m, f, t_0] (J_{n+m}[π(t − t_0)] + (−1)^{min(m, n)} J_{|m−n|}[π(t − t_0)]) ).
Formula B (obtained by chromatic differentiation of the signal and then forming its chromatic expansion): Let f[t] be as in the previous formula. Then

CD[n, f, t] ≈ CD[n, f, t_0] J_0[π(t − t_0)] + 2 Σ_{m=1}^{M} CD[m, CD[n, f], t_0] J_m[π(t − t_0)].
It is easy to see that reordering of terms in the expression

CD[n, f, t] ≈ (−1)^n ( f[t_0] J_n[π(t − t_0)]
  + Σ_{m=1}^{M} CD[m, f, t_0] (J_{m+n}[π(t − t_0)] + (−1)^{min(m, n)} J_{|m−n|}[π(t − t_0)]) )

results in an expression of the form of Formula B truncated at order M − n, plus a remainder R[t], where R[t] contains only Bessel functions of high degree. This demonstrates (recall that Bessel functions of high degree have small values around t = 0) that the approximation

f[t] ≈ f[t_0] J_0[π(t − t_0)] + 2 Σ_{m=1}^{M} CD[m, f, t_0] J_m[π(t − t_0)]

has the property that its chromatic derivatives of order n are good approximations, of order M − n, of the chromatic derivatives of order n of the function f.
Chromatic Signal Processing
In the following sections, fundamental procedures and systems of the Chromatic Signal Processing methods are described in detail. These procedures can be classified as either (I) Data Acquisition Procedures or (II) Signal Processing Procedures.
Data Acquisition Procedures
This data acquisition stage (the stage by which we obtain the local signal behavior parameters) preferably but not necessarily takes place in a dedicated data acquisition unit. The reason is that the data acquisition unit (hereafter, DAU) performs the same operation for any subsequent type of processing, and thus the operation should be performed in a unit which needs no programming specific to the subsequent processing. The only possible programming of the DAU is adaptation with respect to the bandwidth of the input signal and the out-of-band noise level present in the signal.
The unit can be embodied in either of the following ways:
(i) by first sampling the analog signal at higher than the Nyquist rate, and then obtaining local signal behavior parameters from either a) the numerical values of the samples (if an oversampling A/D converter is used); or b) discrete but analog voltages (if, for example, a CCD device is used);
(ii) by using analog or mixed signal circuitry to perform an operation needed to obtain the values of the local signal behavior parameters from the continuous form of the signal (e.g., by analog or mixed signal multiplication of the input signal with a specific function (as described herein) followed by analog integration) and then sampling the output of the circuit at the appropriate sampling moments, usually at a sub-Nyquist rate. The signal may be given in the presence of severe additive out of band noise (for example, the switching noise of a pulse width modulator), in which case f(t) represents the signal plus the noise; the approximations are intrinsically noise robust and will pick out the π-band-limited part of f(t), as desired, filtering out the out of band noise.
Referring now to Figure 2A, a DAU 200 is depicted in accordance with one embodiment of the present invention. An input signal 210 as the input of DAU 200 is processed by DAU 200. The output for this example is 16 local signal behavior parameters 270-285. In this case, each of these output parameters 270-285 is a value of a chromatic derivative evaluated at a sampling moment. As understood herein, a DAU need not be limited to 16 outputs.
The data acquisition procedures employed can be broadly classified by the approximation technique they use. In one embodiment of the invention, the approximation technique is specialized to least square fitting. However, in an alternative embodiment, other types of curve fitting can be implemented.
Basic Least Square Fit Approximations
Basic least square fit approximations use finite linear combinations of interpolation functions {a_1[t], a_2[t], ..., a_k[t]} which, within a given neighborhood [t0 - δ, t0 + δ] of the central point of approximation t0, satisfy:
1) Each a_i[t] "locally resembles" a band-limited function, i.e., for some band-limited function b[t], |a_i[t] - b[t]| < ε for some small given ε and all t such that t0 - δ ≤ t ≤ t0 + δ.
2) The functions {a_1[t], a_2[t], ..., a_k[t]} allow stable numerical differentiation of any degree.
3) The set {a_1[t], a_2[t], ..., a_k[t]} is sufficient as a "local basis", i.e., for every band-limited function f[t] there exists a linear combination A_1 a_1[t] + A_2 a_2[t] + ... + A_k a_k[t] such that |A_1 a_1[t] + A_2 a_2[t] + ... + A_k a_k[t] - f[t]| ≤ ξ[t0 - t] for all t such that t0 - δ ≤ t ≤ t0 + δ, where ξ[t] is an error function, associated with the family {a_1[t], a_2[t], ..., a_k[t]}, which has small values around 0. As such, the interpolation functions a_1[t], a_2[t], ..., a_k[t] can also be considered basis functions.
Since only BL functions can be approximated, the approximations can serve to remove noise from a noisy signal. These basic approximations can be made using various basis functions; the patent disclosure "Signal Processing with Local Signal Behavior" describes the application of Lagrangian-type polynomials. Specifically, for the case of Lagrangian-type interpolation polynomials in the neighborhood [-1, 1] a good error estimate is ε/√(1 - (t/N)²) for a suitably chosen N depending on the interpolation polynomials used, as described in the disclosure "Signal Processing with Local Signal Behavior". As another example of local, interpolatory basis functions, the present embodiment uses Bessel functions, which are band-limited functions but of infinite energy. Specifically, for interpolations which use Bessel functions of order up to N, an error estimate can be written in the form c √(1 - J_0[t]² - 2 Σ_{i=1}^{N} J_i[t]²). Thus, the present embodiment improves on the previous by having an error bound which remains very small for a larger interval near the center of approximation (in the above case, 0). Again, in comparison with the procedure of the patent disclosure "Signal Processing with Local Signal Behavior", for a signal f, if the size N of the Lagrangian approximation is large enough, we get for all sufficiently small t an error bound whose constant A depends upon the energy of the signal f. Numerically comparing the two kinds of approximations makes it clear that approximations using Bessel functions with Chebyshev-type chromatic derivatives converge at a much faster rate than approximations using Lagrangian functions (interpolating shifted sinc functions) with Chebyshev-type chromatic derivatives.
In addition to having a superior convergence rate, using Bessel basis function approximations of band-limited signals instead of Lagrangian interpolations of those signals is superior for the following reasons:
a) Bessel expansions of a band-limited signal are more accurate; fewer degrees of expansion and fewer centers for expansion are needed to obtain the same accuracy in approximating an extended slice of a signal.
b) Bessel approximations (and other fundamental basis functions) are stable as one moves out from the center of the approximation; for example, asymptotically (x → ∞) Bessel functions converge to 0 as 1/√x, whereas polynomials asymptotically are governed by the largest term in the polynomial, x^N: they increase rapidly and without bound.
c) More accurate higher order derivatives are available using Bessel expansion approximations than using the Lagrangian interpolations of sinc functions.
d) Lower order chromatic derivatives of a signal at a point are approximately the same whether or not one is also obtaining much higher order chromatic derivatives during differentiation; Lagrangian interpolation parameters do not have this "degree-of-differentiation invariance".
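Point b) can be checked directly. The following sketch (illustrative only; the helper name is ours) computes J_0 from its standard integral representation and compares its magnitude with the asymptotic envelope √(2/(πx)):

```python
import math

def bessel_j(n, z, steps=4000):
    # J_n(z) via (1/pi) * int_0^pi cos(n*tau - z*sin(tau)) dtau (trapezoidal rule).
    h = math.pi / steps
    total = 0.5 * (1.0 + math.cos(n * math.pi))
    for i in range(1, steps):
        tau = i * h
        total += math.cos(n * tau - z * math.sin(tau))
    return total * h / math.pi

# Bessel functions stay within (a small multiple of) the decaying
# envelope sqrt(2/(pi*x)) ...
for x in (10.0, 50.0, 200.0):
    envelope = math.sqrt(2.0 / (math.pi * x))
    assert abs(bessel_j(0, x)) <= envelope * 1.1
# ... while a degree-N polynomial is eventually dominated by its
# leading term x**N, which grows without bound.
print(abs(bessel_j(0, 200.0)), 200.0 ** 4)
```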
The error estimates we have provided when representing a signal using fundamental basis functions of order 0 to n can be used to define suitable windowing (weight) functions for the least square fit approximations. We provide the details below for the two special cases of the Bessel function interpolants and the sinc function interpolants, although any class of fundamental basis functions could be used.
Type 1: Bessel functions as interpolation functions
Suppose that one begins with 8-times oversampled data. To obtain a least square fit, one considers the following sum:
S = v Σ_{k=1}^{N} A(k)² + u Σ_{j=-M}^{M} W[j/8]² ( A(0) J_0[π j/8] + 2 Σ_{k=1}^{N} (-1)^k A(k) J_k[π j/8] - f[j/8] )².
Here v, u are constants (v possibly equal to 0) and W[t] is the windowing function obtained from the error estimate for approximations using Bessel functions.
The first sum provides noise robustness by reducing aliasing between Bessel functions of different order. Bessel functions are linearly independent on (-∞, ∞), but on a finite interval an aliasing phenomenon is clearly inevitable: one Bessel function can be approximated by a linear combination of other Bessel functions with small error over the interval of interest. However, if all the coefficients A(k) of the linear combination are kept small, the properties of Bessel functions around 0 ensure that only a very small error will be due to such aliasing.
The second sum is the error of an interpolation evaluated at 8-times oversampled points in the interval [-M/8, M/8], consisting of M/4 Nyquist rate unit intervals. M is the greatest integer such that M/8 < m, where W[m] is the first local minimum of W[t] for t > 0 (i.e., M/8 is the last oversampling point inside the monotonically decreasing part of W[t]). Thus, the second sum corresponds to the sum of squares of the differences between the interpolated values and the (8-times over-) sampled values of the signal f[t], windowed by W[t] in accordance with the error of the interpolation, thereby "equalizing" the numerical significance of the differences between the sampled and interpolated values. In this way, by allowing correspondingly larger errors away from the center in the same way that the accuracy of the interpolation decreases, the least squares fit produces the best approximations A(j) of the values of the corresponding chromatic derivatives CD[j,f][0]. Of course, one can use any oversampling rate: 8 is just an example. Also, the sampling interval does not need to be symmetric about 0.
The total sum is then minimized using standard numerical techniques: differentiating the sum S with respect to the variables A(i) (here i is an index, not a variable) and setting all partial derivatives equal to 0, thus obtaining a system of linear equations which can be solved for the variables A(i). The resulting expression for each A(i) is of the form Σ_j c_j f(j/8). The coefficients c_j are obtained from the values of Bessel functions at the sampling points and do not depend on the input signal.
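A minimal discrete sketch of this procedure (illustrative only: a small degree N = 4, the window W taken to be 1 for simplicity, and an exactly representable test signal are our choices). For f[t] = J_0[πt] the fit recovers A(0) ≈ 1 with the remaining coefficients near 0:

```python
import math

def bessel_j(n, z, steps=2000):
    # J_n(z) via (1/pi) * int_0^pi cos(n*tau - z*sin(tau)) dtau (trapezoidal rule).
    h = math.pi / steps
    total = 0.5 * (1.0 + math.cos(n * math.pi))
    for i in range(1, steps):
        tau = i * h
        total += math.cos(n * tau - z * math.sin(tau))
    return total * h / math.pi

def solve(mat, rhs):
    # Gaussian elimination with partial pivoting for the small normal system.
    n = len(rhs)
    a = [row[:] + [rhs[i]] for i, row in enumerate(mat)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, n):
            fac = a[r][col] / a[col][col]
            for c in range(col, n + 1):
                a[r][c] -= fac * a[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (a[r][n] - sum(a[r][c] * x[c] for c in range(r + 1, n))) / a[r][r]
    return x

# 8-times oversampled points j/8 on [-2, 2]; model
#   psi(t) = A(0) J_0(pi*t) + 2 * sum_k (-1)^k A(k) J_k(pi*t)
N = 4
samples = [j / 8.0 for j in range(-16, 17)]
f = lambda t: bessel_j(0, math.pi * t)      # test signal: exactly representable
design = [[bessel_j(0, math.pi * t)] +
          [2.0 * (-1) ** k * bessel_j(k, math.pi * t) for k in range(1, N + 1)]
          for t in samples]
# Normal equations: differentiate the sum of squares w.r.t. each A(k), set to 0.
gram = [[sum(row[i] * row[j] for row in design) for j in range(N + 1)]
        for i in range(N + 1)]
rhs = [sum(row[i] * f(t) for row, t in zip(design, samples)) for i in range(N + 1)]
A = solve(gram, rhs)
print(A)   # A(0) close to 1, higher coefficients close to 0
```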
If one wishes to perform the curve fitting with the continuous signal (rather than with samples), the above summation is simply replaced by an integral. We take [-m, m] to be the integration interval, measured in Nyquist intervals, where m can be taken such that W[m] is the first local minimum of W[t] for t > 0. (Of course, this is just one possible choice.)
S is minimized exactly as in the previous method, resulting again in a set of linear equations which can be solved for the values of the chromatic derivatives. Each value will now be of the form ∫ f(t)J(t)dt for some function J, as previously. This integral can be computed using analog or mixed signal circuitry; its value will then need to be sampled at the appropriate point in time to obtain the value of the chromatic derivative. This sample only needs to be taken once per integration interval. An example is given in Figure 2B.
Referring now to Figure 2B, a schematic diagram of analog/mixed-signal circuitry 250 is shown for performing curve fitting in accordance with one embodiment of the invention. As shown, a multiplier 260 takes an analog signal input f(t) 255 as a reference voltage. In the present embodiment, multiplier 260 is a D/A converter. In addition, a table 257 of predetermined numbers is coupled to multiplier 260. Table 257 provides these predetermined numbers as a set 265 of digital input coefficients to be combined with analog signal input f(t) 255 by multiplier 260 for producing an analog output 267. An integrator 268 is coupled to multiplier 260. Integrator 268 takes analog output 267 from multiplier 260 as an input and produces therefrom a signal 270. In turn, signal 270 is received by an A/D converter 275 coupled to integrator 268. CD_j, the value of the j-th order chromatic derivative of f(t) 255 evaluated at a sampling moment t0, is output from A/D converter 275.
Referring still to Figure 2B, one CD (i.e., CD_j) of a particular order is produced as the output of circuitry 250. However, as understood herein, circuitry 250 need not be limited to producing a CD of just one particular order such as j. For example, in another embodiment, by combining delays with circuitry 250, different orders of CDs can be produced as sequential outputs of circuitry 250. Moreover, multiple copies of circuitry 250 can also be implemented. For example, in an alternative embodiment, multiple copies of circuitry 250 are arranged in parallel to provide multiple CDs as parallel outputs.
Referring now to Figure 3, a flow chart 300 is shown outlining steps for digitizing a section of a BL analog signal within a sampling window in accordance with one embodiment of the invention. In particular, flow chart 300 outlines steps for implementing the introduced mathematical concepts in accordance with the present embodiment.
In step 301 , n + 1 , the number of LSBPs that will be used to characterize the local behavior of the section in between Nyquist rate points, is used to determine the width of a sampling window.
In step 305, the section is represented as a truncated series at a sampling moment within said sampling window. This truncated series has n + 1 variable LSBPs as its coefficients to be solved numerically. This truncated series also has n + 1 basis functions that are fundamental basis functions of order 0 to order n. Typically, these fundamental basis functions can be Bessel functions of order 0 to order n. Also, these fundamental basis functions can be sinc functions.
In step 325, discrete signal samples are obtained by over-sampling the section within said sampling window. The over-sampling process can also be performed within an interval lying within the sampling window. In another embodiment, the over-sampling process is performed in the entire sampling window.
In step 345, by using an approximation technique for the truncated series and the discrete signal samples, the n + 1 variable LSBPs are solved numerically to obtain n + 1 numerical LSBPs. Specifically, in one embodiment where the approximation technique is least square fitting, the numerical LSBPs are solved by the least square fitting of the truncated series with the discrete signal samples.
Also in step 345, the n + 1 numerical LSBPs are values of n + 1 chromatic differential operators of order 0 to order n evaluated at the sampling moment. These n+ 1 chromatic differential operators of order 0 to order n are associated respectively with regular polynomials of order 0 to order n, wherein said n+ 1 regular polynomials feasibly separate points. In one embodiment, the regular polynomials are Chebyshev polynomials of order 0 to order n. In another embodiment, the regular polynomials are Legendre polynomials of order 0 to order n.
Referring still to Figure 3, Step 345 can be further refined into smaller steps such as step 351-354. Specifically, in step 351, squares are formed by squaring differences between the discrete signal samples and interpolated values of the truncated series.
In step 352, the squares are summed.
In step 353, a system of n + 1 linear equations with said n + 1 variable LSBPs as variables is formed by minimizing the sum of the squares respectively with said n+1 variable LSBPs.
In step 354, solving for said n + 1 variable LSBPs to obtain said n+1 numerical LSBPs that satisfy said system of n + 1 linear equations.
The following facts may be used to improve the accuracy of the above method. We present them here for the continuous case (using integration), but they can be used for discrete samples as well. Also, the method is not restricted to Bessel functions as interpolants, but can also be used with other fundamental basis functions.
Let us denote by K_m[t] the mth chromatic derivative of the sinc[t] function, i.e., for sinc[t] = sin[πt]/(πt) = K_0[t] we have K_m[t] = CD[m, K_0][t]. Let also f[t] = (1/2π) ∫_{-π}^{π} F[ω] e^{iωt} dω; then
∫_{-∞}^{∞} f[u] K_m[u] du = (-1)^m CD[m,f][0].
Let now Θ[t] = CD[0,f][0] J_0[t] + 2 Σ_{k=1}^{∞} CD[k,f][0] J_k[t]. The orthogonality properties of these functions imply that (K_m[t] * J_n[t])[0] = 0 if m ≠ n, and (K_m[t] * J_m[t])[0] = λ_m. Here λ_m = 1 if m = 0 and λ_m = 1/2 otherwise. Consequently,
(K_m[t] * Θ[t])[0] = CD[0,f][0] (J_0(t) * K_m[t])[0] + 2 Σ_{k=1}^{∞} CD[k,f][0] (J_k(t) * K_m[t])[0].
And so
(K_0[t] * Θ[t])[0] = CD[0,f][0] (J_0(t) * K_0[t])[0] = CD[0,f][0], and
(K_m[t] * Θ[t])[0] = 2 CD[m,f][0] (J_m(t) * K_m[t])[0] = 2 × (1/2) CD[m,f][0] = CD[m,f][0],
i.e., in both cases, (K_m[t] * Θ[t])[0] = CD[m,f][0]. Thus,
∫_{-∞}^{∞} K_m[u] Θ[u] du = (-1)^m ∫_{-∞}^{∞} K_m[0 - u] Θ[u] du = (-1)^m (K_m[t] * Θ[t])[0] = (-1)^m CD[m,f][0],
and hence
∫_{-∞}^{∞} K_m[u] Θ[u] du = ∫_{-∞}^{∞} f[u] K_m[u] du = (-1)^m CD[m,f][0].
These facts can now be used to improve evaluations of chromatic derivatives using either the oversampled data or the analog format of the signal. First, the factor √(1 - J_0[t]² - 2 Σ_{k=1}^{N} J_k[t]²) in the truncation error
| f[t] - ( f[t0] J_0(t - t0) + 2 Σ_{k=1}^{N} CD[k,f,t0] J_k(t - t0) ) |
is very steep, yet bounded by 1.
Thus the associated window W[t] is very steep. We choose the approximation interval based on the function W[t]. For a given degree of approximation N we denote by M the largest sampling point such that the window W[t] is monotone in the interval [-M, 0]. Let ψ[t] = X[0] J_0[t] + 2 Σ_{k=1}^{N} X[k] J_k[t]. Then we consider the sum:
S = ∫_{-M}^{M} ( W[u] (ψ[u] - f[u]) )² du + Σ_m ∫_{-M}^{M} ( K_m[u] W[u] (ψ[u] - f[u]) )² du + w Σ_{m=0}^{N} X[m]².
Here ∫_{-M}^{M} ( W[u] (ψ[u] - f[u]) )² du replaces the discrete least squares fit; the part Σ_m ∫_{-M}^{M} ( K_m[u] W[u] (ψ[u] - f[u]) )² du is designed to reduce an "aliasing" phenomenon, and is the "finite simulation" of the fact that ∫_{-∞}^{∞} K_m[u] Θ[u] du = ∫_{-∞}^{∞} f[u] K_m[u] du = (-1)^m CD[m,f][0]. Thus, this part restricts the possibility of one Bessel function being "replaced" by a linear combination of others, with small error within the FINITE interval [-M, M].
One now proceeds with the standard quadratic minimization techniques, which consist in differentiating with respect to the variables X[i], setting all these derivatives equal to zero, and solving the corresponding system of linear equations, exactly as above.
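In miniature, the same quadratic minimization can be sketched as follows (illustrative only: a small generic matrix A and data b stand in for the Bessel-function system, and the w Σ X[m]² stability term becomes a ridge-style penalty):

```python
def solve(mat, rhs):
    # Gaussian elimination with partial pivoting (small dense systems).
    n = len(rhs)
    a = [row[:] + [rhs[i]] for i, row in enumerate(mat)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, n):
            fac = a[r][col] / a[col][col]
            for c in range(col, n + 1):
                a[r][c] -= fac * a[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (a[r][n] - sum(a[r][c] * x[c] for c in range(r + 1, n))) / a[r][r]
    return x

def penalized_least_squares(A, b, w):
    # Minimize ||A x - b||^2 + w * ||x||^2: setting the partial derivatives
    # to zero yields the linear system (A^T A + w I) x = A^T b.
    m, n = len(A), len(A[0])
    gram = [[sum(A[r][i] * A[r][j] for r in range(m)) + (w if i == j else 0.0)
             for j in range(n)] for i in range(n)]
    rhs = [sum(A[r][i] * b[r] for r in range(m)) for i in range(n)]
    return solve(gram, rhs)

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 3.0]
print(penalized_least_squares(A, b, 0.0))   # close to [1.0, 2.0] (exact fit)
print(penalized_least_squares(A, b, 10.0))  # the penalty shrinks the coefficients
```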
Type 2: sinc function as interpolation function
One can use the standard basis used in the Nyquist interpolation formula, consisting of several integer shifts of the sinc[t] function, to replace the Bessel functions in this first stage of the data acquisition unit. One first approximates locally the Bessel function J_m[πt] using the Nyquist interpolation formula with, say, 17 shifts of sin[π(t - j)]/(π(t - j)):
J_m[πt] ≈ Σ_{j=-8}^{8} A[j,m] sin[π(t - j)]/(π(t - j)).
Notice that we are not using for A[j,m] the values J_m[πj], but choose them so that the interpolation formula provides the best local approximation. This is accomplished by choosing the A[j,m] so that the interpolation has Chebyshev-type chromatic derivatives at 0 equal to the values of the chromatic derivatives of the corresponding Bessel function J_m[πt]. The coefficients A[j,m] must be computed (e.g., with Mathematica software) using limits and with high precision, due to the tendency to error accumulation of evaluations of higher order derivatives of sin[π(t - j)]/(π(t - j)). For example, the Bessel function J_0[πt] can be approximated by J_0[πt] ≈ Σ_{j=-8}^{8} A[j,0] sin[π(t - j)]/(π(t - j)) with the following values of A[j,0], -8 ≤ j ≤ 8:
A[-8,0] = 7.0499449940993569
A[-7,0] = 9.1083068045593241
A[-6,0] = 4.2205228655115949
A[-5,0] = 0.5490843317607693
A[-4,0] = 0.1968047186334627
A[-3,0] = -0.1806903578086363
A[-2,0] = 0.2202775461799985
A[-1,0] = -0.3042421776406465
A[0,0] = 1.0000000000000000
A[1,0] = -0.30424217764064650
A[2,0] = 0.22027754617999856
A[3,0] = -0.18069035780863606
A[4,0] = 0.19680471863346274
A[5,0] = 0.54908433176076931
A[6,0] = 4.22052286551159488
A[7,0] = 9.10830680455932409
A[8,0] = 7.04994499409935698
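The tabulated coefficients can be checked mechanically (illustrative sketch; the helper names are ours): at integer points the shifted sinc functions form a Kronecker delta, so the interpolant simply returns the tabulated coefficient there, and the symmetry of the table makes the interpolant an even function.

```python
import math

# Values A[j,0] from the table above, j = -8..8 (symmetric).
A0 = [7.0499449940993569, 9.1083068045593241, 4.2205228655115949,
      0.5490843317607693, 0.1968047186334627, -0.1806903578086363,
      0.2202775461799985, -0.3042421776406465, 1.0000000000000000,
      -0.30424217764064650, 0.22027754617999856, -0.18069035780863606,
      0.19680471863346274, 0.54908433176076931, 4.22052286551159488,
      9.10830680455932409, 7.04994499409935698]

def sinc(t):
    return 1.0 if t == 0.0 else math.sin(math.pi * t) / (math.pi * t)

def B0(t):
    # Local approximation of J_0(pi*t) by 17 shifted sinc functions.
    return sum(A0[j + 8] * sinc(t - j) for j in range(-8, 9))

# At integer points the shifted sincs reduce to a Kronecker delta:
assert abs(B0(0.0) - 1.0) < 1e-12       # J_0(0) = 1
assert abs(B0(1.0) - A0[9]) < 1e-12     # value at t = 1 is A[1,0]
# The coefficient table is (numerically) symmetric, hence so is B0:
assert abs(B0(0.37) - B0(-0.37)) < 1e-9
print(B0(0.25))
```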
Thus, instead of using the Bessel functions themselves, one uses their approximations of the form J_m[πt] ≈ B[m,t] = Σ_{j=-8}^{8} A[j,m] sin[π(t - j)]/(π(t - j)), with the values for A[j,m] determined as described above. One now constructs a sum with unknown parameters C[m], 0 ≤ m ≤ k, and weights u, v, which plays the same role as in the previous case.
The value of the interpolation at an oversampling point j/8 is B[0, j/8] C[0] + 2 Σ_{m=1}^{k} B[m, j/8] C[m]; the corresponding part of the sum is the weighted (windowed) sum of squares of the differences between these interpolated values and the sampled values f[j/8]. However, the term which ensures the stability of the least square fit is significantly different. Instead of limiting the sum of squares of the Bessel-type chromatic derivatives, one now limits the energy of the approximation, which is, in this case, finite. Note that Σ_m B[m,j] C[m] represents the value of the interpolation at an integer point j, and thus Σ_{j=-8}^{8} ( Σ_m B[m,j] C[m] )² is the energy of the approximation (all non-zero sampled values are between -8 and 8 by construction). Thus, if the average energy of the noise is known, it is easy to choose the parameters u, v in a way that ensures optimal performance and stability of the algorithm. Also, u, v can be changed dynamically, in order to obtain an adaptive algorithm. Again, we can replace the summation over oversampled points with an integration over the same time interval, allowing for analog or mixed signal processing, as above.
Also, using the shifts of the sinc[t] function provides filtering of a different type than the filtering provided by the Bessel functions which are used in the subsequent processing, thus adding noise robustness.
Clearly the above procedure immediately generalizes to any family of fundamental functions associated with a regular family of orthogonal polynomials: the Bessel functions are just one example.
Zero delay zero phase-shift filter-differentiators and predictors
The same technique applied to least square fits to obtain filters without delay or phase-shifts applies here as well. We minimize the same sum as above, including the stability term w Σ_{m=0}^{N} X[m]², but with the data restricted to [-M, L] for L < M, where the point L represents the end of the available data stream.
Approximations by a Complex
In order to avoid the excessive aliasing which large degree expansions would inevitably produce, and still be able to extend the interval of approximation, we employ a construction vaguely resembling (in idea) polynomial splines. A complex is a sequence of basic approximations of any of the kinds described above, parametrized (simultaneously) by a set of sequences of variables, one sequence for each basic approximation. To the usual conditions associated with every basic approximation one adds conditions ensuring that adjacent basic approximations "agree" about the values of the chromatic derivatives evaluated at the same point. This technique has been applied to polynomial approximations as described in the disclosure "Signal Processing with Local Signal Behavior". We first explain this on the example of a complex with Bessel functions as interpolants. Using properties of the Chebyshev-type chromatic derivatives, one can introduce approximations for the ath derivative of a band-limited function, associated with each basic approximation of a 3-complex (i.e., a complex involving three basic approximations) and a slightly modified windowing function W_N[m, t]; for the second basic approximation, for example, the approximation involves sums of unknowns of the form X_2[a + i] + (-1)^{min[i,a]} X_2[|a - i|], windowed by W_N[a, t].
Thus, the parameter m in W_N[m, t] reduces the degree of approximation. This is necessary because it can be shown that, if the signal is interpolated using the Bessel functions J_0, J_1, ..., J_N, then its mth derivative, interpolated in this manner, is essentially an interpolation of order N - m.
Subsequently, one defines the following three sums, each corresponding to the appropriate basic approximation, with z equal to the oversampling factor. The interval [-M, M] is chosen on the basis of the corresponding window W_N[0, t], with M the smallest integer larger than the first point p > 0 at which the first derivative of W_N[0, t] equals 0, i.e., beyond which W_N[0, t] is no longer monotonically decreasing.
Notice that these approximations are centered at 0, -1, -2. This explains the shifted z-times oversampled values f[j], f[j - 1], f[j - 2] of the signal f. The sums E1, E2, E3 play the usual role as in the simple approximations:
E1 = Σ_s X_1[s]², E2 = Σ_s X_2[s]², E3 = Σ_s X_3[s]².
Finally, D1 and D2 correspond to an approximation of the RMS value of the sum of differences of the chromatic derivatives calculated using each of the three interpolations. Thus, D1 and D2 ensure (if multiplied by a large weight w) that the three basic approximations "agree" about the values of the chromatic derivatives within the error given by the appropriate windowing function W_N[k, t].
The weighted sum S = u (S1 + S2 + S3) + v (E1 + E2 + E3) + w (D1 + D2) is now minimized using the standard procedure. Again, oversampling can be replaced by integration.
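The "agreement" mechanism of a complex can be illustrated in miniature (sketch only: simple quadratics stand in for the Bessel interpolants, the agreement term is folded into the least squares system as an extra row weighted by √w, and all names and data are ours):

```python
import math

def solve(mat, rhs):
    # Gaussian elimination with partial pivoting.
    n = len(rhs)
    a = [row[:] + [rhs[i]] for i, row in enumerate(mat)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, n):
            fac = a[r][col] / a[col][col]
            for c in range(col, n + 1):
                a[r][c] -= fac * a[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (a[r][n] - sum(a[r][c] * x[c] for c in range(r + 1, n))) / a[r][r]
    return x

# Two overlapping local models p1, p2 fitted jointly to samples of f,
# with a heavily weighted "agreement" term w*(p1(c) - p2(c))^2 at the
# shared point c = 0.5 of their windows.
f = lambda t: math.cos(1.5 * t)
ts1 = [i / 10.0 for i in range(0, 11)]      # window of p1: [0, 1]
ts2 = [i / 10.0 for i in range(5, 16)]      # window of p2: [0.5, 1.5]
c, w = 0.5, 1e6
basis = lambda t: [1.0, t, t * t]

# Unknowns x = (a0, a1, a2, b0, b1, b2); build the rows of the joint
# least squares problem, with the agreement penalty as a sqrt(w)-scaled row.
rows, targets = [], []
for t in ts1:
    rows.append(basis(t) + [0.0, 0.0, 0.0]); targets.append(f(t))
for t in ts2:
    rows.append([0.0, 0.0, 0.0] + basis(t)); targets.append(f(t))
rows.append([math.sqrt(w) * v for v in basis(c)] +
            [-math.sqrt(w) * v for v in basis(c)])
targets.append(0.0)

gram = [[sum(r[i] * r[j] for r in rows) for j in range(6)] for i in range(6)]
rhs = [sum(r[i] * y for r, y in zip(rows, targets)) for i in range(6)]
x = solve(gram, rhs)
p1c = sum(v * q for v, q in zip(basis(c), x[:3]))
p2c = sum(v * q for v, q in zip(basis(c), x[3:]))
print(abs(p1c - p2c))   # tiny: the two local fits agree at the shared point
```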
A similar procedure can be applied to produce a complex in which the Bessel functions are replaced by their Nyquist-type approximations, as in the basic case described above, or by any other class of basis functions.
Referring now to Figure 4, a flow chart 400 is shown outlining steps for digitizing a section of a BL analog signal in accordance with one embodiment of the invention. Specifically, within a sampling window constituted by a first and a second window having an overlap, the section has a first subsection in the first local window and a second subsection in the second local window. Moreover, flow chart 400 demonstrates a way to implement the introduced mathematical concepts in accordance with the present embodiment. As understood herein, the steps outlined in flow chart 400 are not limited to the case of a sampling window having two overlapping constituent windows. In another embodiment, the steps in flow chart 400 are applied to a case of a sampling window having more than two overlapping constituent windows.
In step 402, n+1 , the number of LSBPs for characterizing the local behavior of the first subsection, is used to determine the width of the first local window. Also, m+1, the number of LSBPs for characterizing the local behavior of the second subsection, is used to determine the width of the second local window.
In step 405, the first subsection is represented as a first truncated series at a first sampling moment in the first local window. The first truncated series is parametrized by n+1 coefficients that are n+1 LSBPs adapted for characterizing the local signal behavior of the first subsection between Nyquist points. The first truncated series is formed by n+1 basis functions, with the n+1 LSBPs as coefficients respectively for the n+1 basis functions. The n+1 LSBPs are values of n+1 chromatic differential operators of order 0 to order n evaluated at the first sampling moment. These n+1 chromatic differential operators of order 0 to order n are associated respectively with regular polynomials of order 0 to order n. These regular polynomials feasibly separate points. In one embodiment, these regular polynomials are Chebyshev polynomials. In another embodiment, these regular polynomials are Legendre polynomials.
In step 410, a first set of discrete signal samples is obtained by oversampling the first subsection within a first interval disposed within the first local window. In some embodiments, the first interval can be as large as the first local window. Moreover, the basis functions are fundamental functions of order 0 to order n. In one embodiment, the fundamental functions are Bessel functions. In yet another embodiment, the fundamental functions are sinc functions.
In step 415, the second subsection is represented as a second truncated series at a second sampling moment in the second local window. The second truncated series is parametrized by m+1 coefficients that are m+1 LSBPs adapted for characterizing the local signal behavior of the second subsection between Nyquist points. The second truncated series is formed by m+1 basis functions, with the m+1 LSBPs as coefficients respectively for the m+1 basis functions. Moreover, the basis functions are fundamental functions of order 0 to order m. In one embodiment, the fundamental functions are Bessel functions. In another embodiment, the fundamental functions are chromatic derivatives of the sinc function. The m+1 LSBPs are values of m+1 chromatic differential operators of order 0 to order m evaluated at the second sampling moment. These m+1 chromatic differential operators of order 0 to order m are associated respectively with regular polynomials of order 0 to order m. These regular polynomials separate points. In one embodiment, these regular polynomials are Chebyshev polynomials. In another embodiment, these regular polynomials are Legendre polynomials.
In step 420, a second set of discrete signal samples is obtained by over-sampling the second subsection within a second interval disposed within the second local window. In some embodiments, the second interval can be as large as the second local window.
In step 425, numerical values of said n+1 and m+1 LSBPs are solved together such that the numerical values of the (n+1)+(m+1) LSBPs provide curve fitting for the first truncated series with the first plurality of discrete signal samples, and for the second truncated series with the second plurality of discrete signal samples. In one embodiment, the curve fitting used is least-square fitting. In another embodiment, numerical values of the n+1 and m+1 LSBPs are solved using approximation techniques other than least-square fitting.
The 3-complex was presented here merely for ease of explication: any number of basic approximations can be used. In general, of course, the basic approximations need not all be of the same order. They need not be evenly spaced, and they need not be symmetric. The domain of each application of the window function WN[k, t] may also be varied, as may the window function itself. The number of derivatives for which "agreement" is ensured may be varied, and "agreement" may be ensured also for non-adjacent basic approximations. The particulars presented above are simply the most
easily read.
Zero delay zero phase-shift filter-differentiators and predictors
In another embodiment, the above procedure can be applied to obtain a basic method for predictions, by using a 5-complex (a complex with 5 basic interpolations) for which the parts S1-S5 do not have oversampled data for the entire windows. This also provides a basic zero delay zero phase-shift filter, applicable in low-noise environments, providing good approximations for the one-sided (left-sided) first and second order chromatic derivatives at the very end of the data stream. As understood herein, the above procedure need not be restricted to a 5-complex. In an alternative embodiment, a complex with more than 5 interpolations is used.
Chromatic Signal Processing Procedures
Chromatic Signal Processing Procedures can be classified as:
(i) Signal Transformation Procedures;
(ii) Error Minimization Procedures; and
(iii) Data Extraction Procedures.
Signal Transformation Procedures act on signals locally represented by chromatic derivatives by applying a linear operator L (e.g., a filtering operator) and providing a representation of the output in the same format.
Error Minimization Procedures prevent error accumulation and are of auxiliary nature.
Data Extraction Procedures output certain features of the signal which go beyond its mere local representation, but do not change the signal itself.
Signal Transformation Procedures: Applying an arbitrary linear operator to a signal
Let f[t + t0] = f[t0] B_0[t] + 2 Σ_{s=1}^{∞} Δ[s,f,t0] B_s[t]. Here the function B_0 is the basic function of order 0 which corresponds to a family of regular polynomials Π, and Δ[s,f,t0] is the sth chromatic derivative of f with respect to the differential operators corresponding to the family of regular polynomials Π. For example, if Π is the family of Chebyshev polynomials, then the Δ[s,f,t0] are the chromatic derivatives defined recursively as described above, i.e.,
CD_Chebyshev[0,f] = f;
CD_Chebyshev[1,f] = (1/π)(d CD_Chebyshev[0,f] / dt);
CD_Chebyshev[k,f] = (2/π)(d CD_Chebyshev[k-1,f] / dt) + CD_Chebyshev[k-2,f].
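This recursion can be checked on pure exponentials (illustrative sketch; the function name is ours): applied to e^{iωt}, differentiation becomes multiplication by iω, and the recursion produces exactly the factors i^k T_k(ω/π), with T_k the Chebyshev polynomials, which is what ties these operators to the Chebyshev family.

```python
import math

def chebyshev_chromatic_factors(x, order):
    # Apply the recursion CD[k,f] = (2/pi) d/dt CD[k-1,f] + CD[k-2,f]
    # to f(t) = e^{i*omega*t} with x = omega/pi.  Differentiation multiplies
    # by i*omega, so the scalar factors satisfy c_k = 2i*x*c_{k-1} + c_{k-2},
    # starting from c_0 = 1 and c_1 = i*x.
    c = [complex(1.0), complex(0.0, x)]
    for _ in range(2, order + 1):
        c.append(2j * x * c[-1] + c[-2])
    return c

x = 0.6
for k, ck in enumerate(chebyshev_chromatic_factors(x, 8)):
    # The recursion reproduces i^k * T_k(x), T_k(x) = cos(k*arccos(x)):
    tk = math.cos(k * math.acos(x))
    assert abs(ck - (1j ** k) * tk) < 1e-12
print("CD[k] acts on e^{i omega t} as i^k T_k(omega/pi), verified for k <= 8")
```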
Then for an arbitrary continuous, bounded, shift invariant, linear operator L,
(L[f])[t] = f[t0] (L[B_0])[t - t0] + 2 Σ_{s=1}^{∞} Δ[s,f,t0] (L[B_s])[t - t0].
However, the values h_s[t] = (L[B_s])[t] are independent of f and can be calculated in advance, and their local representation obtained in the form
h_s[t] = h_s[0] B_0[t] + 2 Σ_{r=1}^{∞} Δ[r,h_s,0] B_r[t].
All the linear operators of any practical interest have the property that ||L(f)|| ≤ k ||f|| for a relatively small number k. Thus, truncation of the above formulas at the (M+1)th term gives a small truncation error. Thus, if the signal is represented by
f[t + t0] = f[t0] B_0[t] + 2 Σ_{s=1}^{M} Δ[s,f,t0] B_s[t],
the output can be represented by
(L[f])[t] = (L[f])[t0] B_0[t - t0] + 2 Σ_{s=1}^{M} Δ[s,L[f],t0] B_s[t - t0],
where the values of (L[f])[t0] and Δ[s,L[f],t0] are obtained by substituting
(L[B_s])[t - t0] = h_s[t - t0] = h_s[0] B_0[t - t0] + 2 Σ_{r=1}^{M} Δ[r,h_s,0] B_r[t - t0]
into
(L[f])[t] = f[t0] (L[B_0])[t - t0] + 2 Σ_{s=1}^{M} Δ[s,f,t0] (L[B_s])[t - t0].
This implies that the new values Δ[s,L[f],t0] can be obtained by a simple matrix multiplication: multiplying the vector (Δ[s,f,t0])_{s<M+1} by a square matrix corresponding to the substitution mentioned above. The coefficients of this matrix do not depend on the input signal.
Referring now to Figure 5, a flow chart 500 is shown outlining steps for signal transformation performed on an input signal in accordance with one embodiment of the invention. In particular, flow chart 500 demonstrates a way to implement the introduced mathematical concepts in accordance with the present embodiment.
In step 510, the signal processing transformation is represented as an r + 1 by r + 1 matrix. In step 520, the input signal section is represented as a vector of said r + 1 LSBPs.
In step 530, the vector is multiplied by the matrix to obtain an output vector of r + 1 entries, wherein said r + 1 entries of the output vector are r + 1 output LSBPs of the output signal section. This output signal section is represented as a truncated series parametrized by said r + 1 output LSBPs, wherein these output LSBPs characterize the behavior of the output signal section between Nyquist rate points. As understood herein, the matrix need not be restricted to a square matrix. For example, in another embodiment, a matrix of r + 1 by q is used, where q < r + 1.
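Steps 510-530 in miniature (illustrative sketch only: polynomial coefficients and the differentiation operator stand in for the LSBP vector and the precomputed transformation matrix, which likewise does not depend on the input signal):

```python
def diff_matrix(r):
    # (r+1) x (r+1) matrix mapping the coefficients of a degree-r polynomial
    # to the coefficients of its derivative; precomputed once, signal-independent.
    m = [[0.0] * (r + 1) for _ in range(r + 1)]
    for k in range(1, r + 1):
        m[k - 1][k] = float(k)
    return m

def apply(mat, vec):
    # Step 530: multiply the coefficient vector by the operator matrix.
    return [sum(m * v for m, v in zip(row, vec)) for row in mat]

coeffs = [1.0, 2.0, 3.0, 0.0]          # f(t) = 1 + 2t + 3t^2
out = apply(diff_matrix(3), coeffs)    # L[f] = f' = 2 + 6t
print(out)                             # -> [2.0, 6.0, 0.0, 0.0]
```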
Error Minimization Procedures: Uniformization
The aim of the data uniformization procedure is to prevent error accumulation. The new values X[k] of the chromatic derivatives at 0 are chosen to minimize
SU = ∑_{k=0}^{N} (CD[k,0] - X[k])²
   + ∑_{k=0}^{N} ∑_{z=-R}^{R} ∫ ( X[k]B_0[πt] + ∑_{i=1}^{M} (X[k+i] + (-1)^{min[k,i]} X[|k-i|]) B_i[πt]
   - ( CD[k,z]B_0[π(t-z)] + ∑_{i=1}^{M} (CD[k+i,z] + (-1)^{min[k,i]} CD[|k-i|,z]) B_i[π(t-z)] ) )² dt
Thus, minimization of SU results in an approximation which minimizes the RMS value of the difference between the new approximation, defined by the values X[k] of the chromatic derivatives of f[t], and the old values CD[k, z], for 2R + 1 consecutive Nyquist rate spaced points -R, ..., 0, ..., R. The P highest order derivatives are treated as auxiliary data whose role is only to provide a good interpolation of the derivatives of order up to N - P. The expansion of the derivative of order N - P then replaces the oversampled values of this derivative in an interval depending on the value of P, essentially in a window whose width is determined by P.
One can perform several layers of uniformization, each extending the interval from which the data is used and thus reducing the error due to out-of-band noise, round-off, and the intrinsic errors of the approximations used.
Referring now to Figure 6, a flow chart 600 is shown outlining (according to one embodiment) steps for forming a signal representation of a signal section from LSBPs of two or more signal subsections that make up the signal section. Specifically, the LSBPs of multiple subsections are used together to generate the section's LSBPs. Moreover, flow chart 600 demonstrates a way to implement the introduced mathematical concepts in accordance with the present embodiment.
In step 610, the section is represented as a truncated series at a sampling moment in a sampling window supporting the section. The truncated series is parametrized by r + 1 coefficients that are LSBPs adapted for encoding local signal behavior of the section between Nyquist rate points.
In step 620, s chromatic derivatives of order 1 to order s of said section are represented as s truncated series parametrized by said r + 1 LSBPs.
In step 630, values of said r + 1 LSBPs are solved for to provide curve fitting of: 1) said truncated series with the first truncated series (of the first subsection); 2) said truncated series with the second truncated series (of the second subsection); 3) said s truncated series respectively with chromatic derivatives of order 1 to order s of the first truncated series; and 4) said s truncated series respectively with chromatic derivatives of order 1 to order s of the second truncated series. In one embodiment, the curve fitting implemented is least-square fitting.
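In one possible numerical rendering of steps 610-630 (a sketch, not the disclosure's exact procedure; the matrices below are synthetic placeholders), the least-square fitting reduces to an ordinary linear least-squares problem whose unknowns are the r + 1 LSBPs:

```python
import numpy as np

# Each row of `design` stands for one fitting condition: the section's
# truncated series, or one of its s chromatic derivatives, evaluated at a
# point of the first or second subsection.  The entries are synthetic
# placeholders for the basis-function values B_s[t].
rng = np.random.default_rng(1)
r, conditions = 4, 30
design = rng.standard_normal((conditions, r + 1))
target = rng.standard_normal(conditions)      # placeholder subsection data

# Least-square fitting: solve for the r + 1 LSBPs of the whole section.
lsbp, residual, rank, _ = np.linalg.lstsq(design, target, rcond=None)
print(lsbp.shape)  # -> (5,)
```

Because the conditions from both subsections enter one overdetermined system, the solution blends their local information into a single set of section LSBPs.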
Data Extraction Procedures

Evaluation of High Order Derivatives
Higher order derivatives are evaluated from the values of the lower order derivatives as follows. Assume that the second layer processing has produced highly accurate values of N - P chromatic derivatives of f[t]. The derivative of order N - P is now treated as a new band-limited signal, represented NOT by its oversampled values but by an expansion of order P. Now any of the first layer techniques are applicable, except that the oversampled values of the chromatic derivative of order N - P are replaced by the expansion of order P. The new chromatic derivatives are now, if necessary, subjected to the second layer processing, i.e., uniformization, producing N - P chromatic derivatives of CD[N - P, f] at t0. Finally, one uses the formulas for the composition of chromatic derivatives to obtain:
CD[m, CD[N - P, f]] = ½ (CD[N - P + m, f] + (-1)^m CD[N - P - m, f])
i.e., CD[N - P + m, f] = 2 CD[m, CD[N - P, f]] - (-1)^m CD[N - P - m, f], to evaluate the chromatic derivatives CD[N - P + m, f] of the input signal for all m such that m ≤ N - P. Repeating the procedure k times results in the values of k(N - P) derivatives. This clearly allows evaluation of derivatives of very high order, by using data from intervals whose length increases appropriately. This also explains the stability of our methods for evaluating high order derivatives: limit procedures at a single point are replaced by procedures requiring data from longer intervals, bridging the gap between local and global signal behavior. The ultimate application of this integration of the local and global signal behavior paradigms is best exemplified in the procedures which approximate the spectrum of the signal f using chromatic differentiation and "global" signal behavior, as explained below.
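The composition formula can be checked numerically. The sketch below uses arbitrary placeholder numbers for the derivative values (nothing here is data from the disclosure); it verifies that inverting the composition formula reproduces the higher order chromatic derivatives from the lower order ones.

```python
# Given cd[0..n], the chromatic derivatives CD[0,f],...,CD[n,f] at a point
# (n = N - P), and cd_of_g[m] = CD[m, CD[n,f]] at the same point, recover
# CD[n+m, f] via  CD[n+m,f] = 2*CD[m,CD[n,f]] - (-1)^m * CD[n-m,f].
def extend_chromatic_derivatives(cd, cd_of_g):
    """Append CD[n+1,f], ..., CD[2n,f] to the list cd of CD[0..n, f]."""
    n = len(cd) - 1
    out = list(cd)
    for m in range(1, n + 1):
        out.append(2.0 * cd_of_g[m] - (-1.0) ** m * cd[n - m])
    return out

# Consistency check with placeholder values: pretend CD[0..4, f] are known,
# build cd_of_g from the composition formula, then invert it.
full = [3.0, 1.0, -2.0, 0.5, 4.0]          # placeholder CD[0..4, f], n = 2
n = 2
cd = full[: n + 1]
cd_of_g = [0.5 * (full[n + m] + (-1.0) ** m * full[n - m]) for m in range(n + 1)]
print(extend_chromatic_derivatives(cd, cd_of_g))  # -> [3.0, 1.0, -2.0, 0.5, 4.0]
```

Repeating the extension, as the text describes, doubles the range of available derivative orders at each pass.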
Referring now to Figure 7, a flow chart 700 is shown outlining steps of evaluating a signal's order k chromatic derivative from the signal's lower order chromatic derivatives in accordance with one embodiment of the invention. In particular, flow chart 700 demonstrates a way to implement the introduced mathematical concepts in accordance with the present embodiment.
In step 710, the chromatic derivative of order j of the truncated series is represented as a new truncated series of order h parametrized by coefficients that are h LSBPs of said new truncated series, where j + h ≥ k > r. In step 720, the h LSBPs are solved for such that they provide curve fitting of the chromatic derivative of order j with the new truncated series. In one embodiment, said curve fitting is least-square fitting.
In step 730, the h LSBPs are used to express said chromatic derivative of order k.
Chromatic Derivatives and the Spectrum
Assume that f[t] is a π band-limited signal of finite energy whose Fourier transform is H_f[ω], i.e.,

f[t] = (1/2π) ∫_{-π}^{π} H_f[ω] e^{iωt} dω
Then H_f[ω] is a continuous complex function; its real part is an even function and its imaginary part is an odd function. Both parts are continuous real functions and can be approximated with any set of orthogonal polynomials. We recall that all band-limited real functions f have a Fourier transform with an even real part and an odd imaginary part, and that all even degree Chebyshev polynomials are even functions (containing only even powers of the variable) while all odd degree Chebyshev polynomials are odd functions. Thus, by expanding both the real and the imaginary part of H_f[ω] into a series of Chebyshev polynomials, and taking the above facts into account, we get, for an infinite sequence of real numbers A[k]:
f[t] = (1/2π) ∫_{-π}^{π} H_f[ω] e^{iωt} dω = (1/2π) ∫_{-π}^{π} ∑_{k=0}^{∞} A[k] i^k T_k[ω/π] e^{iωt} dω
Since the convergence of polynomial approximations over a finite closed interval is uniform, we can exchange the order of integration and summation to get:

f[t] = (1/2π) ∑_{k=0}^{∞} A[k] ∫_{-π}^{π} i^k T_k[ω/π] e^{iωt} dω = (1/2π) ∑_{k=0}^{∞} A[k] K_k[t]

where K_k[t] = ∫_{-π}^{π} i^k T_k[ω/π] e^{iωt} dω.
This provides a GLOBAL representation of the function f[t] = (1/2π) ∫_{-π}^{π} H_f[ω] e^{iωt} dω, because the series ∑_{k=0}^{∞} A[k] K_k[t] is uniformly convergent, unlike the representation which uses Bessel functions and chromatic derivatives, i.e.,

f[t] = f[0]B_0[t] + 2 ∑_{k=1}^{∞} CD[k, f][0] B_k[t],

which provides a highly local, non-uniformly converging approximation.
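The basis functions K_k[t] can be evaluated by straightforward quadrature. The following sketch (an illustration with a midpoint rule, not a procedure from the disclosure) checks that K_0[t] = 2 sin(πt)/t and that every K_k[t] is real-valued:

```python
import numpy as np

def K(k, t, n=200_000):
    """Midpoint-rule estimate of K_k[t] = ∫_{-π}^{π} i^k T_k[ω/π] e^{iωt} dω."""
    dw = 2 * np.pi / n
    w = -np.pi + (np.arange(n) + 0.5) * dw          # midpoints avoid ±π
    Tk = np.cos(k * np.arccos(w / np.pi))           # Chebyshev T_k on [-1, 1]
    return np.sum((1j ** k) * Tk * np.exp(1j * w * t)) * dw

print(abs(K(0, 0.5) - 4.0) < 1e-6)   # K_0[0.5] = 2 sin(π/2)/0.5 = 4
print(abs(K(3, 0.7).imag) < 1e-9)    # the imaginary parts cancel by parity
```

The real-valuedness follows from the parity matching between i^k T_k and the even/odd parts of e^{iωt}, which is the same cancellation the text uses for H_f[ω].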
For practical purposes of signal processing, the expansion is truncated and the coefficients A[k] are replaced by γ[k] such that A[k] = 2πγ[k]. Then

H_f[ω] ≈ ∑_{k=0}^{C} 2πγ[k] i^k T_k[ω/π]   and   f_C[t] = ∑_{k=0}^{C} γ[k] K_k[t]
This implies that an approximation of the spectrum of the signal by Chebyshev polynomials of the form

H_f[ω] ≈ ∑_{k=0}^{C} 2πγ[k] i^k T_k[ω/π]

produces an approximation of the signal in the time domain which is a linear combination of the functions K_k[t] with the SAME coefficients γ[k]. Thus, these coefficients provide BOTH an approximation of the spectrum of the function (frequency domain) AND a uniform approximation of the function (in the time domain).
One possible method for obtaining the coefficients γ[k] is by using a least square fit, in several ways. First, we note that

f_C[t] = ∑_{j=0}^{C} γ[j] K_j[t] = 2π ∑_{j=0}^{C} γ[j] CD[j, sinc][t]

implies that

CD[k, f_C][t] = 2π ∑_{j=0}^{C} γ[j] CD[k, CD[j, sinc]][t] = ½ ∑_{j=0}^{C} γ[j] (K_{k+j}[t] + (-1)^{min[k,j]} K_{|k-j|}[t])
The correctness of the expansion follows from the fact that {K_m[t] : m ∈ N} is an orthogonal system in the space of all π band-limited square integrable functions (signals of finite energy), with the scalar product defined by

⟨f, g⟩ = ∫_{-π}^{π} H_f[ω] H̄_g[ω] / √(1 - (ω/π)²) dω

where H_f[ω] is the Fourier transform of f and H̄_g[ω] is the conjugate of the Fourier transform H_g[ω] of g[t]. Since

∫_{-π}^{π} dω / √(1 - (ω/π)²) = π²

is convergent, and since all π band-limited square integrable functions have continuous Fourier transforms on [-π, π], so that H_f[ω]H̄_f[ω] = |H_f[ω]|² is bounded by some positive number Q, we get that

⟨f, f⟩ = ∫_{-π}^{π} |H_f[ω]|² / √(1 - (ω/π)²) dω

is convergent, and it is also positive for all nonzero π band-limited square integrable functions. Thus, again, the correctness of the expansion follows from the general theory of orthogonal functions, including the uniqueness, and the fact that an expansion of lower order is a truncation of the expansion of higher order. As shown before,

∫_{-π}^{π} T_n[ω/π] T_m[ω/π] / √(1 - (ω/π)²) dω = 0 for n ≠ m.

This implies ⟨K_n, K_m⟩ = 0 for n ≠ m, and λ_m = ⟨K_m, K_m⟩ > 0 otherwise.
The above exposition used the Chebyshev polynomials merely as an example. In general, for any system of orthogonal polynomials P_m[ω], orthogonal with respect to the scalar product ∫_{-π}^{π} w[ω]P_n[ω]P_m[ω]dω, the fundamental basis functions

B_n[t] = ∫_{-π}^{π} w[ω]P_n[ω]e^{iωt}dω

have a corresponding set of basis functions providing a global approximation, defined by

S_n[t] = ∫_{-π}^{π} i^n P_n[ω]e^{iωt}dω.

The global approximation using the functions S_n[t] comes from the corresponding approximation of the spectrum H_f[ω] using the family of orthogonal polynomials {P_n[ω] : n ∈ N}, with the same coefficients providing both an approximation in the frequency domain (the spectrum of the signal) and in the time domain.
Thus, if the signal is given by its values at a certain set S of Nyquist rate sampling points (not necessarily consecutive) AND N of its chromatic derivatives are also known at these Nyquist rate sampling points, the coefficients γ[j] must provide not only the best least square fit for f[t], but they must also satisfy the requirement that

CD[k, f_C][p] = ½ ∑_{j=0}^{C} γ[j] (K_{k+j}[p] + (-1)^{min[k,j]} K_{|k-j|}[p])

is the best least square fit of the values CD[k, f][p] for all k up to N and all of the sampling points p in the set S. Since the value of the signal and N of its chromatic derivatives at a point p determine the signal in an interval around p, if N is sufficiently large such a least square fit extracts information from a very precise description of the ENTIRE waveform containing the set S of sampling points. Thus, the values of the chromatic derivatives produce a condition on the γ[j]'s which reduces the number of Nyquist rate points needed for a faithful representation of the Fourier transform of f[t].
Assume that the Nyquist rate values f[i] of f[t] and its chromatic derivatives CD[m, f][i] for m ≤ N are given for integers i such that |i| ≤ M. The γ[j]'s can be obtained by determining values for γ[j], 0 ≤ j ≤ C, which minimize the following expression:

∑_{i=-M}^{M} ∑_{m=0}^{N} (CD[m, f][i] - CD[m, f_C][i])²
The minimization is again performed by differentiating with respect to the γ[j] and setting the resulting partial derivatives to 0. The resulting system of linear equations is then solved for all γ[j].
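Numerically, this differentiation step amounts to forming and solving the normal equations of a linear least-squares problem. In the sketch below (illustrative only), the matrix G plays the role of the values K_j[i] and the derivative combinations at the sampling points |i| ≤ M; its entries are synthetic placeholders.

```python
import numpy as np

# Illustrative only: rows of G stand for the values K_j[i] (and the
# corresponding combinations for the chromatic-derivative conditions) at
# the 2M + 1 sampling points.  Setting the partial derivatives of the
# quadratic error to zero gives the normal equations (Gᵀ G) γ = Gᵀ d.
rng = np.random.default_rng(0)
M, C = 8, 4
G = rng.standard_normal((2 * M + 1, C + 1))
gamma_true = np.array([1.0, -0.5, 0.25, 0.0, 2.0])
d = G @ gamma_true                         # synthetic sample/derivative data
gamma = np.linalg.solve(G.T @ G, G.T @ d)  # solve the normal equations
print(np.allclose(gamma, gamma_true))      # -> True
```

With consistent (noise-free) data the normal equations recover the coefficients exactly; with noisy data they return the least-squares minimizer described in the text.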
Approximation of the spectrum H_f[ω] of a signal f[t] of the form H_f[ω] ≈ ∑_{k=0}^{C} 2πγ[k] i^k T_k[ω/π] has one important property: it does NOT require any windowing of the signal. Rather, it produces an approximation which provides the best fit to the entire waveform (rather than just the Nyquist rate sampling points), allowing arbitrary behavior outside the time interval containing the samples. Thus, there are no artifacts of windowing, with a very accurate match of the waveform between the sampling points. Such an approximation of the spectrum can be more informative than the standard one. The above methods and systems represent the core procedures, which are called "the engine" of Chromatic Signal Processing. As understood herein, these fundamental procedures do not depend in any essential way on any particular choice of a family of regular polynomials. Any method based on, for example, Chebyshev polynomials is adapted to any other family of regular polynomials by changing the base function B_0[t] defined as described in this disclosure and by replacing the recursive relations particular to the Chebyshev polynomials with the corresponding relations which the new family satisfies. What is essential and universal is the duality of operations on polynomials and on differential operators, which allows the representation of band-limited functions using differential operators as described in this disclosure.