CA1278867C - Method of and a device for digital signal coding by vector quantization - Google Patents

Method of and a device for digital signal coding by vector quantization

Info

Publication number
CA1278867C
Authority
CA
Canada
Prior art keywords
dimensional space
coordinates
values
code book
coded
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
CA000533883A
Other languages
French (fr)
Inventor
Garibaldi Conte
Mario Guglielmo
Fabrizio Oliveri
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Telecom Italia SpA
Original Assignee
CSELT Centro Studi e Laboratori Telecomunicazioni SpA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CSELT Centro Studi e Laboratori Telecomunicazioni SpA filed Critical CSELT Centro Studi e Laboratori Telecomunicazioni SpA
Application granted granted Critical
Publication of CA1278867C publication Critical patent/CA1278867C/en
Anticipated expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/94Vector quantisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • G06T9/008Vector quantisation
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • H03M7/3082Vector coding

Abstract

A system for digital signal coding by vector quantization uses blocks of n simultaneously coded signal samples, whose coordinates are mapped from an n-dimensional to a one dimensional space, for example by using Hilbert's curve, in a manner that preserves adjacency properties. The one dimensional space is divided into intervals, each associated with an index forming the coded value of points falling in that interval; during decoding these indices are used to look up the mean values of the vectors falling in the respective intervals. The code book may be compiled by sorting the coordinates of points in the one dimensional space produced during a training or other signal sequence, grouping the sorted coordinates into intervals, and calculating the mean values of the coordinates in those intervals.

Description

The present invention relates to digital signal coding by vector quantization, particularly but not exclusively for redundancy reduction in transmission and/or storage systems.

Vector quantization is a quantization in which a set of samples extracted from a signal to be coded is processed instead of a single sample. Thus, in the case of image signals, a matrix formed by samples obtained from the signal by orthogonal sampling is scanned by rows, and samples extracted from a number of rows which are successive in time (e.g. two or four samples for each of two successive rows, or four samples for each of four successive rows) are quantized together. Blocks of 2x2, 2x4 or 4x4 samples, respectively, are thus obtained. Considering for example the 2x4 case, then if the samples are represented by 8 bits (corresponding to 256 quantization levels per sample) as proposed in CCIR Recommendation 601, the number of possible configurations is 256^8, which would normally require representation by 64 bits. By vector quantization techniques, the number of possible configurations may be reduced and becomes variable, typically between 1024 and 128; if these configurations are equiprobable, then just 10 and 7 bits respectively are required to represent them.
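As a quick check of the figures quoted above, the following throw-away Python snippet (an illustration, not part of the patent) reproduces the arithmetic for the 2x4 case:

```python
import math

samples_per_block = 8      # a 2x4 block
bits_per_sample = 8        # 256 quantization levels per sample

raw_configurations = 256 ** samples_per_block   # 256^8 possible blocks
raw_bits = samples_per_block * bits_per_sample  # 64 bits without vector quantization

for codebook_size in (1024, 128):
    index_bits = math.ceil(math.log2(codebook_size))  # 10 and 7 bits respectively
    print(f"{codebook_size} code vectors -> {index_bits} bits per block instead of {raw_bits}")
```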

Vector quantization is potentially advantageous, since not only does it permit a considerable reduction in redundancy, but it also allows exploitation of statistical dependence amongst the variables to be coded (in this example the image points) as well as correlation.
It does however present two problems in implementation.
The first problem relates to code book generation, i.e.
the identification, once a particular distortion minimization criterion is defined, of the most probable values of a set of variables; in other words, identification of those regions of an n-dimensional space in which higher value densities exist; in this particular case 1024 or 128 configurations (or code vectors) must be obtained representing the possible 256^8 quantized signal configurations. The second problem relates to selection of a vector to be used for representation of a generic block, taking into account the requirement for distortion minimization.

A known solution to the code book generation problem is described by Y. Linde, A. Buzo and R.M. Gray in the paper entitled "An Algorithm for Vector Quantizer Design", IEEE
Transactions on Communications, Vol. COM-28, No. 1, January 1980. This describes an iterative algorithm which, in the most usual application, computes in a first step the centroid of a set of the vectors relating to a sequence of training signals. This centroid is then multiplied by a scalar quantity to identify non-optimized representative points of two hypothetical classes in such a set. The set is partitioned by use of these two representative points, allotting vectors to one class or the other. Next the actual centroid of each of the two classes is calculated. The same operations are repeated for the two new centroids, and so on until the desired number of representative vectors is obtained. Once the code book is built, actual coding can be effected either by comparison of each vector to be coded with all the representative vectors and consequent choice of the one which minimizes the distortion, or by selective tree techniques.

This known method can be applied regardless of the distortion criterion to be satisfied, but:

1) the code book generation method is very inefficient because of the large number of operations to be performed on the training sequence, and it is also rather complicated since the analysis on the set is performed in an n-dimensional space;
2) the coding phase, if a comparison technique is chosen, is very lengthy due to the necessity of making comparisons with the whole code book, and does not allow direct identification of the vector representative of the block to be coded; in the case of selective tree techniques, the number of comparisons is reduced, but optimal representation is not ensured.

Where the distortion criterion adopted is that of minimizing the Euclidean distortion (i.e. the mean square error), the foregoing disadvantages can be overcome by the method and apparatus of the present invention, which permits simplified code book generation without iterative methods, since analysis of a set of variables is effected in a one-dimensional space, and which also allows direct identification of a vector representative of a block to be coded. The limitation inherent in a technique applicable only to a particular distortion criterion is not particularly severe, since the criterion concerned is that most generally used in signal processing.

According to the invention there is provided a method of coding a digital signal by vector quantization, comprising subdividing the signal into blocks for sequential coding, the blocks each comprising a predetermined number of samples which are coded simultaneously, each block being representable as a vector identifying a point in an n-dimensional space, where n is the number of samples in a block, and having n components whose values represent the values of the individual samples, wherein the coordinates of the point in the n-dimensional space are transformed into coordinates of a point in a one dimensional space by a mapping which preserves the adjacency properties of the n-dimensional space in the one dimensional space; the coordinates so obtained are compared with the values of coordinates of end points of a plurality of adjacent intervals in said one dimensional space, each interval being associated with an index forming a coded value of points falling within that interval; and the appropriate index is selected, such that upon decoding each such index selects a vector out of a previously compiled code book consisting of the mean values of vectors falling in that interval.

Further features of the invention will become apparent from the following description with reference to the accompanying drawings, wherein:

Figures 1 - 4 are diagrams depicting code book generation by the method of the invention;

Figure 5 is a block diagram of apparatus according to the invention;

Figures 6 and 7 are flow charts of code book generation and coding operations;

Figure 8 is a schematic diagram of a practical embodiment of the device of Figure 5.

The generation of a code book will be discussed first, assuming such generation is carried out using a training image sequence. Code book generation takes place in several steps. Firstly, a mapping from an n-dimensional space to a one dimensional space is calculated, and a histogram of the distribution of the representative points in the one dimensional space is determined; then a quantization law is determined for such a one dimensional space, which minimizes the mean square error; and finally the actual code book is calculated.

To determine the histogram, a sequence of training images I1, I2 ... In is submitted to orthogonal sampling in order to obtain a sample matrix, and samples extracted from a certain number of successive rows are quantized together.
More specifically, s samples from each of r successive rows are quantized together, thus obtaining a block of n = r·s samples. By way of example, reference will be made as necessary to blocks of 4 samples per row from 4 successive rows, so that a 16 dimensional space is considered. This sampling is performed by a sampler STB, which, for each block, calculates the values of the image elements forming the block and organizes these values into sequential form according to a predetermined law.
A point, and hence a vector, in such an n-dimensional space is associated with each such block. Assuming for simplicity that the value of each image element is only the luminance value and that this value is coded by 8 bits, each coordinate of such a point in the n-dimensional space can have a value 0 ... 255. The values utilized can be either actual values or values from which the mean luminance value of the block has been subtracted, coded in any suitable manner, e.g. in 4 bits.
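Purely as an illustration, the following minimal Python/NumPy sketch shows one way the block splitting performed by sampler STB could be expressed; the function name, the row-major "predetermined law" and the optional mean removal are assumptions made for the example, not the circuit actually described.

```python
import numpy as np

def split_into_blocks(image, r=4, s=4, remove_mean=False):
    """Split a 2-D luminance image into r x s blocks and flatten each
    block into an n = r*s component vector (row-major order assumed)."""
    h, w = image.shape
    vectors, means = [], []
    for y in range(0, h - r + 1, r):
        for x in range(0, w - s + 1, s):
            block = image[y:y + r, x:x + s].astype(float)
            m = block.mean()
            if remove_mean:
                block = block - m      # block mean coded separately (e.g. 4 bits)
            vectors.append(block.reshape(-1))
            means.append(m)
    return np.array(vectors), np.array(means)
```

Each returned row is then an n-component vector identifying a point in the n-dimensional space.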

Mapping from the n-dimensional space to a one dimensional space is performed in mapper MH for the vector built by sampler STB for each block. This mapping is the essential step in the coding method of the invention; it requires ease of computation, and hence simple circuit implementation, with good preservation of adjacency, i.e. points which are near one another in a region of the n-dimensional space remain so in the one dimensional space. This is essential because mapping from n dimensions to one dimension might otherwise give rise to random dispersion of points from the same region through the one dimensional space, thus making coding inefficient. These requirements are satisfied by the so-called "space filling curves", known also as Peano's curves. Of these curves, Hilbert's curve has been selected since, being on a binary base, it is well suited to implementation by electronic binary logic circuits. By associating each point of the n-dimensional space with a curve point, the latter can be identified by a single coordinate representing its distance from the origin of the curve. Mapping then consists in determining, for each point of the n-dimensional space, the coordinate value of the corresponding point on the curve, obtained by suitable permutation of the individual bits of the words representing the coordinates of the point in the n-dimensional space. Advantageously, the mapping is an inverse of the mapping described by A.R. Butz in the paper entitled "Alternative Algorithm for Hilbert's Space Filling Curve", IEEE Transactions on Computers, Vol. C-20, 1971, pages 424-426, which discloses mapping from one to n dimensions.
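The patent relies on the inverse of Butz's n-dimensional algorithm; as a much simpler illustration of the underlying idea, the sketch below computes the Hilbert index of a point in two dimensions using the classic bit-manipulation formulation. This 2-D form is an assumption chosen for clarity, not the bit-permutation circuit actually described.

```python
def hilbert_index_2d(order, x, y):
    """Distance along a 2-D Hilbert curve of point (x, y) on a
    2**order x 2**order grid (classic bit-twiddling formulation)."""
    d = 0
    s = 1 << (order - 1)
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate the quadrant so each sub-curve is traversed in
        # canonical orientation.
        if ry == 0:
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        s >>= 1
    return d
```

Points whose (x, y) coordinates are close tend to receive close indices, which is the adjacency-preservation property mapper MH requires.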

After mapping, a histogram is generated of the coordinates of points on the Hilbert's curve. If each coordinate in the n-dimensional space is coded by m bits, then each curvilinear coordinate value would be represented by m·n bits, and histogram generation would require the use of a memory with 2^(m·n) positions. Since typical values for m and n are as previously stated (8 bits, 16 dimensions), it is clear that a memory of this size (2^128 positions) cannot be utilized in practice and hence that the number of values that the curvilinear coordinate can assume must be reduced.

This reduction is carried out by quantizer LIM which performs a uniform quantization by splitting the Hilbert's curve into equal length intervals. If L is the number of intervals to be used for coding, the quantization performed in quantizer LIM may result, for example, in splitting of the curve into K = 10·L intervals. For the values stated above for n and m, L may be 1024 and K may be about 10,000, in which case the histogram memory needs 2^13 - 2^14 positions. This uniform quantization can be performed by simple truncation of the words supplied by mapper MH, i.e. by keeping only the x = ⌈log2 K⌉ most significant bits of each word (the symbol ⌈ ⌉ indicates the upper integer of the quantity contained inside it). A possibly more efficient method involves calculating the maximum value HM of the coordinates supplied by mapper MH and a normalized value Ĥi of the generic coordinate Hi by the relationship:
Ĥi = K · Hi / HM

where K is the histogram dimension. If this method is chosen, the values obtained during mapping from n to 1 dimensions should be stored to prevent unnecessary repetitions of the operations described.

Once the information has been reduced to acceptable levels, the actual histogram is generated by block HIST; to this end the normalized or limited value for each image block causes a unit increment of the content of the histogram memory, at an address corresponding to the value itself.
The histogram so produced will consist of a series of peaks separated by zones corresponding to quantization intervals in which few if any points have fallen. These peaks correspond to dominant configurations in the training images and their number indicates a minimum possible number of coding levels for the images.
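A minimal sketch of the LIM and HIST stages, assuming Python with NumPy; the function names and the clamping of the normalized value into the last histogram position are illustrative choices, not prescribed by the patent.

```python
import numpy as np

def truncate_coordinate(h, total_bits, K):
    """LIM, truncation variant: keep only the ceil(log2 K) most
    significant bits of the m*n-bit curvilinear coordinate h.
    The histogram then needs 2**ceil(log2 K) positions."""
    keep = int(np.ceil(np.log2(K)))
    return h >> (total_bits - keep)

def normalize_coordinate(h, h_max, K):
    """LIM, normalization variant: H^_i = K * H_i / H_M,
    clamped so the maximum coordinate falls in the last position."""
    return min(K - 1, (K * h) // h_max)

def build_histogram(indices, size):
    """HIST: each limited coordinate causes a unit increment of the
    histogram position addressed by its value."""
    hist = np.zeros(size, dtype=int)
    for i in indices:
        hist[i] += 1
    return hist
```

With the truncation variant the histogram has 2^⌈log2 K⌉ positions; with the normalization variant it has exactly K.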

In a second stage (see Figure 2) the histogram is quantized in block QG according to a law which minimizes the mean square error. This quantization corresponds to finding the abscissae bounding the various peaks and, if the number of peaks is different from the desired number of intervals, partitioning those peaks giving rise to the highest mean square error (if the peak number is less than the interval number) or grouping together peaks with the lowest mean square error, until the desired number of intervals is obtained. Quantizer QG stores the end points of the quantization intervals in block QL, together with values representative of the intervals, although, as later explained, the latter values are not utilized.
Various algorithms can be applied to obtain these values;
examples are the direct resolution of the system of non-linear equations defining error minimization conditions, as described by J. Max in the paper entitled "Quantizing for Minimum Distortion", IRE Transactions on Information Theory, Vol. IT-6, March 1960, pages 7-12, and iterative computation of the centroids of successive partitions, starting from an arbitrary partition.
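The following sketch illustrates the second of these approaches, a Lloyd-type iteration on the one dimensional histogram. It is one possible reading of the "iterative computation of the centroids of successive partitions", with assumed function and variable names.

```python
import numpy as np

def lloyd_intervals(hist, L, iterations=50):
    """Iteratively partition the positions 0..K-1 of a 1-D histogram
    into L intervals that (locally) minimize the mean square error:
    boundaries are midpoints of adjacent centroids, centroids are the
    count-weighted means of the positions inside each interval."""
    hist = np.asarray(hist)
    K = len(hist)
    positions = np.arange(K)
    centroids = np.linspace(0, K - 1, L)      # arbitrary initial partition
    for _ in range(iterations):
        boundaries = (centroids[:-1] + centroids[1:]) / 2.0
        labels = np.searchsorted(boundaries, positions)
        for j in range(L):
            mask = (labels == j) & (hist > 0)
            if mask.any():
                centroids[j] = np.average(positions[mask], weights=hist[mask])
        centroids = np.sort(centroids)
    boundaries = (centroids[:-1] + centroids[1:]) / 2.0
    return boundaries, centroids
```

The returned boundaries are the interval end points stored in QL; as explained in the next paragraph, the centroids themselves need not be retained.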

A mapping inverse to that applied by mapper MH could be applied to the values of the points representative of the intervals, to obtain the desired code book from the stored values. Since only a portion of the word representing the curvilinear coordinate is available, and not the whole word, such an operation could however result in inaccuracy. Code book vectors are preferably therefore directly computed by an averaging operation, which only requires knowledge of the interval end points; this renders unnecessary the storage in store QL of the points representing the single intervals.

In a third stage (see Figure 3), the values of the coordinates of the individual blocks of the training images, supplied by quantizer LIM, are compared by comparator Qi with the values of the coordinates of the end points of the various intervals obtained from quantizer QG and stored in store QL. The interval in which a given image block falls is thus detected, the interval being identified by a serial number or index i. Using such an index and the corresponding n-component vector from STB, a code book is built up in block LUT, which incorporates a matrix with as many rows as there are vectors forming the code book and as many columns as there are vector components from sampler STB, and a vector with as many components as there are matrix rows. The matrix and vector contents are reset to 0 at the commencement of code book generation. For each vector from sampler STB, Qi supplies LUT with the index i of the interval in which the one dimensional image of the vector, as determined by mapper MH and quantizer LIM, falls, and the values of the vector components are added to the contents of row i of the matrix, while the content of position i of the vector is incremented by a unit. When the training image blocks are exhausted, the content of each matrix row is divided by the content of the corresponding vector position, thus obtaining the mean values of the vectors whose one dimensional images fall in each quantization interval. These computed mean values form the code book, which is then transferred in any suitable manner to a memory of a decoder CBR; for example, if the coding system is part of an image transmission system, the code book can be transferred to decoder CBR using a system transmission line.
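A sketch of the accumulation performed in block LUT, assuming Python/NumPy and illustrative names:

```python
import numpy as np

def build_code_book(vectors, indices, L):
    """LUT: accumulate, for each interval index i, the sum of the
    n-component vectors whose one-dimensional image falls in interval i
    and the number of such vectors, then divide to obtain the mean
    vector (one code book row per interval)."""
    n = vectors.shape[1]
    acc = np.zeros((L, n))      # matrix: one row per code vector
    count = np.zeros(L)         # vector: blocks counted per interval
    for v, i in zip(vectors, indices):
        acc[i] += v
        count[i] += 1
    nonempty = count > 0
    acc[nonempty] /= count[nonempty][:, None]
    return acc                  # rows of empty intervals remain zero
```

How rows corresponding to intervals in which no training block fell are handled is an implementation choice left outside this sketch.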

Once the code book is obtained and transferred to the decoder, images can then be coded using mapping from an n-dimensional to a one dimensional space by Hilbert's curve, coding being carried out by determining the index i of the interval in which the one dimensional coordinate of the block falls, with this index forming a coded signal to be transmitted or stored. During decoding, index i will act as an address to read the corresponding vector from the code book.
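Per block, coding and decoding then reduce to an interval search and a table look-up. A minimal sketch, assuming the interval end points are kept as a sorted Python list and the code book as a NumPy matrix:

```python
import bisect

def encode_block(curvilinear_coord, interval_ends):
    """Qi: find the index i of the interval in which the (limited)
    one-dimensional coordinate of the block falls; interval_ends is
    the sorted list of end points stored in QL."""
    return bisect.bisect_right(interval_ends, curvilinear_coord)

def decode_index(i, code_book):
    """CBR: the received index addresses the i-th code book row, i.e.
    the mean vector of the blocks that fell in that interval."""
    return code_book[i]
```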

Figure 5 is a block diagram of a coder/decoder, in which the same symbols are used as in the previous Figures.
Blocks STB, MH, LIM, Qi, QL form a coder COD; the decoder consists of block CBR and of a further block BTS described hereinafter. The sampler STB splits into blocks, and re-organizes, the samples of an image IC to be coded. A first logic network forming mapper MH computes the coordinate value on Hilbert's curve for each block, and supplies the value to the quantizer LIM. A logic network forming comparator Qi receives values from quantizer LIM and compares them with the end points of the quantization intervals stored in a memory forming store QL, so as to determine the index i of the current image block. Still assuming the case of an image transmission system, the index i is then transmitted to a memory in decoder CBR where it causes the reading of the vector contained in the i-th row. The vector is supplied to reconstituter BTS, which subjects the vectors to operations inverse to those carried out by sampler STB and provides a reconstructed image block ID. If STB subtracts the mean value of the blocks from the actual value of the image samples, sampler STB must supply the decoder CBR with the relevant information, as indicated by the broken line connection between these blocks.

Chrominance data, for example in the form of U and V components, can be similarly coded.

The code book generation and coding operations described above are also summarized in the flow charts of Figures 6 and 7, which are self-explanatory.

Coding based on vector quantization is seriously affected by a non-stationary source, and experimental tests have in fact demonstrated that coder performance differs greatly according to whether the vectors to be coded belong to the training sequence. Consequently, the same operations performed to obtain the quantization law and code book in blocks HIST, QG and LUT and to transfer the code book to the decoder should also be carried out during the actual coding phase. In other words, the coder should be made adaptive and should also contain blocks HIST, QG and LUT, which are enabled only during code book computation. Various ways of rendering the coder adaptive are possible according to the nature of the signal to be coded. If individual images are to be coded, for example, statistical properties can be assumed to vary from image to image and hence each image should have its own code book. On the other hand, if an image sequence is to be coded, the code book can be updated at fixed intervals, or continuously calculated with updating carried out in the decoder when the differences with respect to the code book previously sent to the decoder become significant.

Figure 8 shows a preferred practical embodiment of apparatus in accordance with the invention, which makes use of a microcomputer CPU that carries out the operations of the blocks DEC, COD in Figure 5. An image signal source VS of any known type, e.g. a television camera or flying spot camera, supplies analog video signals to an input interface IIV, comprising an analog-to-digital converter and means selecting only the actual information signals from the signal flow. The source VS and input interface IIV are driven by computer CPU through a control interface IC controlling the source VS and providing timing signals controlling image transfer to interface IIV, image conversion into digital data and other signal processing functions.

Digitized video information signals are stored in a video frame memory SU in a manner dependent on the type of image and the image components utilized; this memory is not a part of the invention and is not described further.
Memory SU can be read from or written to at addresses supplied by computer CPU, which selects each block to be coded; memory SU is additionally connected to an output interface IUV, also controlled by interface IC, which reverses the functions of interface IIV and allows the display of coded or decoded images on a monitor MO.

Computer CPU is associated with a mass memory or data base

BD, in which coded images and code book(s) are stored.
An interface IL connects computer CPU to a transmission line for image transfer to a similar remote device. A keyboard TA allows the selection of various operating modes of computer CPU, and more particularly the following operations utilized during coding, namely code book computation, image coding, code book transmission, image transmission, reading and writing of the code book in the data base, and reading and writing of an image in a data base. The last two operations are usually required in the event that the invention is applied to an image storage system. Further operations may concern image monitoring facilities as well as facilities for providing service to customers. The functions actually provided depend upon the service required. For instance, when applying the invention to a transmission system, at least image coding and transmission will be required, assuming that a standard code book is used which requires no updating during coding. A conventional printer ST may be provided allowing an operator to record information as to device operation.

The above description is only by way of non-limiting example, and variations and modifications are possible within the scope of the invention. For example, instead of creating a histogram of coordinates in the one dimensional space, a list of the coordinate values can be created and accumulation zones can be detected; code vectors are identified on the basis of said zones, as before. This operation is slower than histogram creation, but it avoids a problem inherent in the use of a histogram, due to the truncation in quantizer LIM, namely that of obtaining a reduced number of very high peaks, which gives rise to code vectors which may be only marginally satisfactory as to quantity and quality. As a further variant, creation of an ordered list of the coordinates of points of the training image, in the one dimensional space, is followed by subdivision of these points into L groups (L being the number of intervals to be used for the coding); the coordinates of the end points of each interval will be the mean values of the coordinates of the last (first) point of a group and of the first (last) point of the subsequent (preceding) group, as in the sketch below.
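A minimal sketch of this equal-count partitioning, assuming Python with NumPy, at least L training coordinates, and illustrative names:

```python
import numpy as np

def equal_count_intervals(coords, L):
    """Sort the one-dimensional coordinates of the training blocks,
    split them into L groups of (nearly) equal size, and take as each
    interval end point the mean of the last coordinate of one group
    and the first coordinate of the next."""
    ordered = np.sort(np.asarray(coords, dtype=float))
    groups = np.array_split(ordered, L)
    ends = [(g_prev[-1] + g_next[0]) / 2.0
            for g_prev, g_next in zip(groups[:-1], groups[1:])]
    return ends      # L - 1 end points separating the L intervals
```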

Claims (15)

1. A method of coding a digital signal by vector quantization, comprising subdividing the signal into blocks for sequential coding, the blocks each comprising a predetermined number of samples which are coded simultaneously, each block being representable as a vector identifying a point in an n-dimensional space, where n is the number of samples in a block, and having n components whose values represent the values of the individual samples, wherein the coordinates of the point in the n-dimensional space are transformed into coordinates of a point in a one dimensional space by a mapping which preserves the adjacency properties of the n-dimensional space in the one dimensional space; the coordinates so obtained are compared with the values of coordinates of end points of a plurality of adjacent intervals in said one dimensional space, each interval being associated with an index forming a coded value of points falling within that interval; and the appropriate index is selected, such that upon decoding each such index selects a vector out of a previously compiled code book consisting of the mean values of vectors falling in that interval.
2. A method according to Claim 1, wherein said mapping of the n-dimensional space into a one dimensional space is performed by using Hilbert's curve to define the one dimensional space.
3. A method according to Claim 1 or 2, wherein the signals to be coded are image signals and the point coordinates are defined by luminance signal values and chrominance signal values.
4. A method according to Claim 1, wherein the code book is compiled by using digital signals from a training sequence split into blocks of n samples; the coordinates of the representative point in the n-dimensional space being transformed by said mapping into the coordinates of a point in a one dimensional space, so as to preserve adjacency properties; wherein coordinates of points in the one dimensional space are sorted and the sorted coordinate set is quantized according to a least squares law to determine said intervals; the values of the coordinates of end points of the intervals are memorized; and mean values of coordinates of blocks falling in each interval are calculated.
5. A method according to Claim 4, wherein the coordinates in the one dimensional space are ordered by the creation of a histogram in which the number of values a coordinate can take up is limited.
6. A method according to Claim 5, wherein limitation of the number of positions is effected by truncation of words representing the coordinates.
7. A method according to Claim 5, wherein limitation of the number of positions is performed by normalizing a value of the generic coordinate with respect to the maximum value a coordinate can take up, according to the relation Ĥi = K · Hi / HM, where Ĥi is the normalized value of the generic coordinate Hi, HM is the maximum value, and K is the number of intervals used in the histogram.
8. A method according to Claim 4, wherein the coordinates are ordered by creating a list of coordinate values and identifying accumulation points of said coordinates.
9. A method according to Claim 4, wherein the coordinates are ordered by creating a list of the coordinate values and by distributing the points extracted from the training sequence into a number of groups equal to said predetermined number of intervals in such a way that all of the intervals contain the same number of points, the coordinates of the end points of an interval being the mean values of the coordinates of the last or first point of a group and of the first or last point of the adjacent group.
10. A method according to Claim 4, wherein the code book is periodically updated during coding.
11. A method according to Claim 4, wherein the code book is calculated for each coding operation.
12. Apparatus for coding and decoding a digital signal using vector quantization, in which the signal is subdivided into blocks to be sequentially coded, each block comprising a predetermined number of simultaneously coded samples which can be represented as a vector identifying a point in an n-dimensional space, where n is the number of samples in a block, and having n components whose values represent the values of individual samples, said apparatus comprising a coder including sampler means for subdividing the signal to be coded into blocks and calculating, for each block, the coordinates of a representative point in the n-dimensional space, and a decoder comprising means for reconstituting blocks of samples from vectors and for combining the blocks into a decoded signal, wherein the coder further comprises:

a) a mapper which, for each block, calculates the coordinate of a point in a one dimensional space which corresponds to the point representative of the block in the n-dimensional space;

b) a first memory which stores the values of coordinates of end points of a plurality of intervals into which the one dimensional space is divided, and an index associated with each interval; and c) a comparator which receives the one dimensional coordinate of each signal block to be coded, compares said coordinate with those of the end points of said intervals to detect the interval in which the coordinate falls, and reads the index associated with the interval from said first memory and supplies it as a coded signal;

and wherein the decoder comprises also a second memory, addressable by said indices and storing a code book consisting of the mean values of the signal blocks falling in each interval.
13. Apparatus according to Claim 12, wherein said coder comprises means for computing code book entries, comprising:

a) means for ordering the values of the one dimensional coordinates supplied by said mapper;

b) a quantizer receiving the ordered coordinate set and supplying to said first memory the values of end points of said intervals in the one dimensional space;
and c) a code book entry generator which, for each signal block, receives the values of the coordinates of such a block in the n-dimensional space from the sampler means, and receives said indices from said comparator, and calculates for each interval the mean value of the blocks whose coordinates in the one dimensional space fall in said interval, such mean values being transferred into the second memory when code book generation or updating is required.
14. Apparatus according to Claim 13, wherein the means for ordering the coordinate values comprise a circuit limiting the number of values the coordinates can take up, and a third memory presenting a number of locations, equal to the number so limited, each location storing the number of blocks having that determined coordinate value.
15. Apparatus according to any of Claims 11 to 13, wherein the coder and the decoder are implemented by a microprocessor associated with a mass memory for storing the code book and/or coded signals, the microprocessor being associated with control means allowing selection of one of a plurality of different operation modes, including code book generation; coding of a signal; reading from or writing to the mass memory; and transmission of the code book and/or coded signals to a remote device.
CA000533883A 1986-04-07 1987-04-06 Method of and a device for digital signal coding by vector quantization Expired - Lifetime CA1278867C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IT67273/86A IT1190565B (en) 1986-04-07 1986-04-07 PROCEDURE AND CODING DEVICE FOR NUMBERED SIGNALS BY VECTOR QUANTIZATION
IT67273-A/86 1986-04-07

Publications (1)

Publication Number Publication Date
CA1278867C true CA1278867C (en) 1991-01-08

Family

ID=11301042

Family Applications (1)

Application Number Title Priority Date Filing Date
CA000533883A Expired - Lifetime CA1278867C (en) 1986-04-07 1987-04-06 Method of and a device for digital signal coding by vector quantization

Country Status (7)

Country Link
US (1) US4807298A (en)
EP (1) EP0240948B1 (en)
JP (1) JPH0681104B2 (en)
CA (1) CA1278867C (en)
DE (2) DE240948T1 (en)
DK (1) DK169287A (en)
IT (1) IT1190565B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110489605A (en) * 2019-07-31 2019-11-22 云南师范大学 A kind of Hilbert coding and decoding methods under data skew distribution

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
IL83549A (en) * 1987-08-16 1992-08-18 Yossi Matias Video scrambling apparatus and method based on space filling curves
GB2210236B (en) * 1987-09-24 1991-12-18 Newbridge Networks Corp Speech processing system
US5010574A (en) * 1989-06-13 1991-04-23 At&T Bell Laboratories Vector quantizer search arrangement
JPH0638274B2 (en) * 1989-07-31 1994-05-18 工業技術院長 Image recognition apparatus and image recognition method
FR2657695B1 (en) * 1990-01-30 1992-04-17 Elf Aquitaine METHOD FOR POINTING SURFACES IN A 3D VOLUME.
US5061924B1 (en) * 1991-01-25 1996-04-30 American Telephone & Telegraph Efficient vector codebook
TW256010B (en) * 1991-04-18 1995-09-01 Ampex
WO1992021101A1 (en) * 1991-05-17 1992-11-26 The Analytic Sciences Corporation Continuous-tone image compression
DE69223850T2 (en) * 1991-05-30 1998-05-14 Canon Kk Compression increase in graphic systems
US5267332A (en) * 1991-06-19 1993-11-30 Technibuild Inc. Image recognition system
US5315670A (en) * 1991-11-12 1994-05-24 General Electric Company Digital data compression system including zerotree coefficient coding
US5416856A (en) * 1992-03-30 1995-05-16 The United States Of America As Represented By The Secretary Of The Navy Method of encoding a digital image using iterated image transformations to form an eventually contractive map
US5596659A (en) * 1992-09-01 1997-01-21 Apple Computer, Inc. Preprocessing and postprocessing for vector quantization
EP1139289B1 (en) * 1992-09-01 2011-03-09 Apple Inc. Improved vector quantization
US5349545A (en) * 1992-11-24 1994-09-20 Intel Corporation Arithmetic logic unit dequantization
US5468069A (en) * 1993-08-03 1995-11-21 University Of So. California Single chip design for fast image compression
US5440652A (en) * 1993-09-10 1995-08-08 Athena Design Systems, Inc. Method and apparatus for preparing color separations based on n-way color relationships
US5592227A (en) * 1994-09-15 1997-01-07 Vcom, Inc. Method and apparatus for compressing a digital signal using vector quantization
US6738058B1 (en) * 1997-04-30 2004-05-18 Ati Technologies, Inc. Method and apparatus for three dimensional graphics processing
EP1228453A4 (en) * 1999-10-22 2007-12-19 Activesky Inc An object oriented video system
US7453936B2 (en) * 2001-11-09 2008-11-18 Sony Corporation Transmitting apparatus and method, receiving apparatus and method, program and recording medium, and transmitting/receiving system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4541012A (en) * 1982-01-04 1985-09-10 Compression Labs, Inc. Video bandwidth reduction system employing interframe block differencing and transform domain coding
US4670851A (en) * 1984-01-09 1987-06-02 Mitsubishi Denki Kabushiki Kaisha Vector quantizer

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110489605A (en) * 2019-07-31 2019-11-22 云南师范大学 A kind of Hilbert coding and decoding methods under data skew distribution
CN110489605B (en) * 2019-07-31 2023-06-06 云南师范大学 Hilbert coding and decoding method under data skew distribution

Also Published As

Publication number Publication date
JPH0681104B2 (en) 1994-10-12
IT8667273A0 (en) 1986-04-07
DE240948T1 (en) 1990-09-06
IT1190565B (en) 1988-02-16
DE3786412D1 (en) 1993-08-12
EP0240948B1 (en) 1993-07-07
IT8667273A1 (en) 1987-10-07
DK169287D0 (en) 1987-04-02
DK169287A (en) 1987-10-08
EP0240948A3 (en) 1990-05-02
US4807298A (en) 1989-02-21
JPS62239728A (en) 1987-10-20
EP0240948A2 (en) 1987-10-14
DE3786412T2 (en) 1993-11-11

Similar Documents

Publication Publication Date Title
CA1278867C (en) Method of and a device for digital signal coding by vector quantization
JP3978478B2 (en) Apparatus and method for performing fixed-speed block-unit image compression with estimated pixel values
US5450562A (en) Cache-based data compression/decompression
AU700265B2 (en) Method and system for representing a data set with a data transforming function and data mask
Stevens et al. Manipulation and presentation of multidimensional image data using the Peano scan
US6072910A (en) Method and apparatus for coding image information, and method of creating code book
US5463701A (en) System and method for pattern-matching with error control for image and video compression
US6658146B1 (en) Fixed-rate block-based image compression with inferred pixel values
US5124791A (en) Frame-to-frame compression of vector quantized signals and other post-processing
Li et al. A fast vector quantization encoding method for image compression
US6683978B1 (en) Fixed-rate block-based image compression with inferred pixel values
US5535311A (en) Method and apparatus for image-type determination to enable choice of an optimum data compression procedure
JP2000165678A (en) Method and device for improving transmission speed and efficiency of electronic data
US5594503A (en) Image information compressing method, compressed image information recording medium and compressed image information reproducing apparatus
CA1292320C (en) Pattern processing
EP1009167B1 (en) Method and apparatus for electronic data compression
Lo et al. Subcodebook searching algorithm for efficient VQ encoding of images
EP1225543A2 (en) HVQ-based filtering method
JP3170312B2 (en) Image processing device
Panchanathan et al. Indexing and retrieval of color images using vector quantization
JPH1013842A (en) Markov model image coder
Quweider et al. Use of space filling curves in fast encoding of VQ images
Lo et al. New fast VQ encoding algorithm for image compression
JP3146092B2 (en) Encoding device and decoding device
JP2693557B2 (en) Encoding device

Legal Events

Date Code Title Description
MKLA Lapsed