CA2642491A1 - Video encoding/decoding method and apparatus and program - Google Patents

Video encoding/decoding method and apparatus and program

Info

Publication number
CA2642491A1
CA2642491A1 (application CA002642491A)
Authority
CA
Canada
Prior art keywords
quantization matrix
generation
quantization
parameter
function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002642491A
Other languages
French (fr)
Inventor
Akiyuki Tanizawa
Takeshi Chujoh
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Corp
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Publication of CA2642491A1
Legal status: Abandoned

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/174: adaptive coding characterised by the coding unit, the unit being an image region, the region being a slice, e.g. a line of blocks or a group of blocks
    • H04N19/103: adaptive coding, selection of coding mode or of prediction mode
    • H04N19/124: adaptive coding, quantisation
    • H04N19/126: details of normalisation or weighting functions, e.g. normalisation matrices or variable uniform quantisers
    • H04N19/147: data rate or code amount at the encoder output according to rate distortion criteria
    • H04N19/172: adaptive coding, the coding unit being a picture, frame or field
    • H04N19/176: adaptive coding, the coding unit being a block, e.g. a macroblock
    • H04N19/46: embedding additional information in the video signal during the compression process
    • H04N19/60: transform coding
    • H04N19/61: transform coding in combination with predictive coding
    • H04N19/70: characterised by syntax aspects related to video coding, e.g. related to compression standards

Abstract

A moving image encoding method that quantizes transform coefficients using a quantization matrix of weights corresponding to respective frequency positions, comprising: a step of generating a quantization matrix using a generation function and generation parameters; a step of quantizing the transform coefficients using the generated quantization matrix; and a step of encoding the quantized transform coefficients to generate an encoded signal.

Description

D E S C R I P T I O N
VIDEO ENCODING/DECODING METHOD
AND APPARATUS AND PROGRAM

Technical Field
The present invention relates to a video encoding/decoding method and apparatus using quantization matrices.

Background Art
There has been proposed a system that quantizes DCT coefficients by allocating bits to each frequency position, using the frequency characteristic of the DCT coefficients obtained by subjecting a video to an orthogonal transform, for example the discrete cosine transform (DCT) (W. H. Chen and C. H. Smith, "Adaptive Coding of Monochrome and Color Images", IEEE Trans. on Comm., Vol. 25, No. 11, Nov. 1977). In this conventional system, many bits are allocated to the low frequency domain to preserve coefficient information, whereas few bits are allocated to the high frequency domain, so that the DCT coefficients are quantized efficiently. However, this system needs to prepare an allocation table according to the coarseness of quantization, and is therefore not always effective in terms of robust quantization.

ITU-T T.81 and ISO/IEC 10918-1 (hereinafter referred to as JPEG: Joint Photographic Experts Group), recommended by ITU-T and ISO/IEC, quantize transform coefficients equally over the entire frequency range with the same quantization scale. However, human vision is comparatively insensitive to the high frequency region. For this reason, the following system is adopted: in JPEG, each frequency domain is weighted to change the quantization scale, so that many bits are assigned to the visually sensitive low frequency domain and the bit rate is decreased in the high frequency domain, improving subjective picture quality. This system performs quantization for every transform/quantization block. The table used for this weighting is referred to as a quantization matrix.
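The frequency-weighted quantization just described can be sketched as follows. This is a minimal illustration, not the JPEG-normative math: the scaling rule (step size = matrix entry × scale / 16) and all sample values are assumptions made for the example.

```python
# A minimal sketch of quantization with a per-frequency quantization matrix.
# The step-size rule (entry * scale / 16) and the sample values below are
# illustrative assumptions, not the normative JPEG computation.

def quantize_block(coeffs, qmatrix, qscale):
    """Divide each transform coefficient by a per-frequency step size."""
    return [[round(c / (q * qscale / 16.0)) for c, q in zip(crow, qrow)]
            for crow, qrow in zip(coeffs, qmatrix)]

def dequantize_block(levels, qmatrix, qscale):
    """Reconstruct coefficients; the same matrix is needed on this side."""
    return [[lv * (q * qscale / 16.0) for lv, q in zip(lrow, qrow)]
            for lrow, qrow in zip(levels, qmatrix)]

# Larger matrix entries toward the high-frequency corner mean coarser steps
# there, spending fewer bits where vision is less sensitive.
coeffs  = [[80.0, 40.0], [40.0, 8.0]]
qmatrix = [[16, 24], [24, 40]]          # DC fine, high frequency coarse
levels  = quantize_block(coeffs, qmatrix, 16)
```

Note how the small high-frequency coefficient quantizes to zero while the DC component survives with full precision.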

Further, in recent years, a video encoding method that largely improves encoding efficiency over the conventional methods has been recommended jointly by ITU-T and ISO/IEC as ITU-T Rec. H.264 and ISO/IEC 14496-10 (referred to as H.264). Conventional encoding systems such as ISO/IEC MPEG-1, 2, 4 and ITU-T H.261, H.263 quantize DCT coefficients after the orthogonal transform to reduce the number of encoded bits of the transform coefficients. In the H.264 main profile, since the relation between the quantization parameter and the quantization scale is designed so that they lie at equal intervals on a log scale, the quantization matrix is not introduced. However, in the H.264 high profile, the quantization matrix is newly introduced to improve subjective image quality for high-resolution images (refer to Jiuhuai Lu, "Proposal of quantization weighting for H.264/MPEG-4 AVC Professional Profiles", JVT of ISO/IEC MPEG & ITU-T VCEG, JVT-K029, March 2004).

In the H.264 high profile, a total of eight different quantization matrices can be established, corresponding to the two transform/quantization block sizes (a 4x4 pixel block and an 8x8 pixel block), for each encoding mode (intra-frame prediction or inter-frame prediction) and for each signal (a luminance signal or a color-difference signal).
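The count of eight follows directly from the three independent binary choices. A one-line enumeration (the label strings are purely illustrative) makes this concrete:

```python
from itertools import product

# Two block sizes x two prediction modes x two signal types = eight slots.
# The label strings are illustrative stand-ins, not H.264 syntax names.
slots = list(product(("4x4", "8x8"), ("intra", "inter"), ("luma", "chroma")))
```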

Since the quantization matrix weights each frequency component position at the time of quantization, the same quantization matrix is also necessary at the time of dequantization. In an encoder of the H.264 high profile, the quantization matrices used are encoded, multiplexed, and transmitted to the decoder. Concretely, difference values are calculated in zigzag-scan or field-scan order, starting from the DC component of the quantization matrix, and the obtained difference data is subjected to variable length encoding and multiplexed as code data.
On the other hand, a decoder of the H.264 high profile decodes the code data according to a logic similar to the encoder's, reconstructing the quantization matrix to be used at the time of dequantization. In this case, the number of encoded bits of the quantization matrix requires a minimum of 8 bits and a maximum of not less than 1500 bits on the syntax.
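The transmission scheme described above can be sketched as follows. This is a simplified model: the scan below is a generic zigzag, and the function names are assumptions; the real H.264 scaling-list syntax adds signalling details omitted here.

```python
# Simplified sketch: scan the matrix from the DC component in zigzag order
# and transmit first-order differences; the decoder mirrors the logic.
# Scan details and names are assumptions, not the normative H.264 syntax.

def zigzag_order(n):
    """Visit (row, col) positions diagonal by diagonal from the DC corner."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else -p[0]))

def encode_matrix(m):
    """DC value followed by differences between successive scanned entries."""
    vals = [m[r][c] for r, c in zigzag_order(len(m))]
    return [vals[0]] + [b - a for a, b in zip(vals, vals[1:])]

def decode_matrix(diffs, n):
    """Decoder side: cumulative sums, then place values back by the scan."""
    vals = [diffs[0]]
    for d in diffs[1:]:
        vals.append(vals[-1] + d)
    m = [[0] * n for _ in range(n)]
    for v, (r, c) in zip(vals, zigzag_order(n)):
        m[r][c] = v
    return m
```

Because quantization matrices usually vary smoothly with frequency, the differences are small and cheap to entropy-code.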

In applications used at low bit rates, such as cellular phones and mobile devices, the H.264 high-profile method of transmitting the quantization matrix may increase the overhead for encoding the quantization matrix and thus largely decrease the encoding efficiency.

A method has been proposed for adjusting the values of a quantization matrix with a small overhead: a base quantization matrix is transmitted first, and then a coefficient k indicating the degree of change from that base quantization matrix is transmitted to the decoder (refer to JP-A 2003-189308 (KOKAI)).

JP-A 2003-189308 (KOKAI), "Video encoding apparatus, encoding method, decoding apparatus and decoding method, and video code string transmitting method", aims to update the quantization matrix for every picture type with a small number of encoded bits, and makes it possible to update the base quantization matrix with about 8 bits at most. However, since this system sends only a degree of change from the base quantization matrix, the amplitude of the quantization matrix can be changed but its characteristic cannot. Further, the base quantization matrix itself must be transmitted, so the number of encoded bits may largely increase depending on the encoding situation.

Disclosure of Invention
When a quantization matrix is encoded by the method prescribed by the H.264 high profile and transmitted to the decoder, the number of encoded bits for encoding the quantization matrix increases. When the quantization matrix is retransmitted for every picture, the number of encoded bits increases further. Further, when only a degree of change of the quantization matrix is transmitted, the degrees of freedom for changing the quantization matrix are largely limited. These results make it difficult to utilize the quantization matrix effectively.

An aspect of the present invention provides a video encoding method comprising: generating a quantization matrix using a function concerning generation of the quantization matrix and a parameter relative to the function; quantizing a transform coefficient concerning an input image signal using the quantization matrix to generate a quantized transform coefficient; and encoding the parameter and the quantized transform coefficient to generate a code signal.

Brief Description of Drawings
FIG. 1 is a block diagram illustrating a structure of a video encoding apparatus according to a first embodiment.

FIG. 2 is a block diagram illustrating a structure of a quantization matrix generator according to the first embodiment.

FIG. 3 is a flow chart of the video encoding apparatus according to the first embodiment.

FIG. 4A is a schematic diagram of a prediction order/block shape related to the first embodiment.

FIG. 4B is a diagram illustrating a block shape of 16x16 pixels.

FIG. 4C is a diagram illustrating a block shape of 4x4 pixels.

FIG. 4D is a diagram illustrating a block shape of 8x8 pixels.

FIG. 5A is a diagram illustrating a quantization matrix corresponding to a 4x4 pixel block related to the first embodiment.

FIG. 5B is a diagram illustrating a quantization matrix corresponding to a 8x8 pixel block.
FIG. 6A is a diagram for explaining a quantization matrix generation method related to the first embodiment.

FIG. 6B is a diagram for explaining another quantization matrix generation method.

FIG. 6C is a diagram for explaining another quantization matrix generation method.

FIG. 7 is a schematic diagram of a syntax structure according to the first embodiment.

FIG. 8 is a diagram of a data structure of a sequence parameter set syntax according to the first embodiment.

FIG. 9 is a diagram of a data structure of a picture parameter set syntax according to the first embodiment.

FIG. 10 is a diagram of a data structure of a picture parameter set syntax according to the first embodiment.

FIG. 11 is a diagram of a data structure of a supplemental syntax according to the first embodiment.

FIG. 12 is a flow chart of multi-pass encoding according to a second embodiment.

FIG. 13 is a diagram illustrating a syntax structure in a slice header syntax.

FIG. 14 is a diagram illustrating a slice header syntax.
FIG. 15 is a diagram illustrating a slice header syntax.

FIG. 16 is a diagram illustrating an example of a CurrSliceType.

FIG. 17 is a diagram illustrating a slice header syntax.

FIG. 18 is a block diagram illustrating a structure of a video decoding apparatus according to the third embodiment of the present invention.

FIG. 19 is a flow chart of the video decoding apparatus according to the third embodiment of the present invention.

Best Mode for Carrying Out the Invention
Embodiments of the present invention will now be described in detail in conjunction with the drawings.

(First embodiment: encoding)
According to the first embodiment shown in FIG. 1, a video signal is divided into a plurality of pixel blocks and input to a video encoding apparatus 100 as an input image signal 116. The video encoding apparatus 100 has, as modes executed by a predictor 101, a plurality of prediction modes that differ in block size or in predictive signal generation method. In the present embodiment it is assumed that encoding proceeds from the upper left of the frame to the lower right, as shown in FIG. 4A.
The input image signal 116 input to the video encoding apparatus 100 is divided into a plurality of blocks each containing 16x16 pixels, as shown in FIG. 4B. A part of the input image signal 116 is input to the predictor 101 and encoded by an encoder 111 through a mode determination unit 102, a transformer 103 and a quantizer 104. The encoded image signal is stored in an output buffer 120 and then output as coded data 115 at an output timing controlled by an encoding controller 110.

The 16x16 pixel block shown in FIG. 4B is referred to as a macroblock and is the basic process block size for the following encoding process. The video encoding apparatus 100 reads the input image signal 116 in units of blocks and encodes it. The macroblock may alternatively be a 32x32 pixel block or an 8x8 pixel block.

The predictor 101 generates a predictive image signal 118 for all modes selectable in the macroblock, using an encoded reference image stored in a reference image memory 107. The predictor 101 generates predictive image signals for all encoding modes in which the object pixel block can be encoded.
However, when the next prediction cannot be done without generating a local decoded image within the macroblock, as in the intra-frame prediction of H.264 (4x4 pixel prediction (FIG. 4C) or 8x8 pixel prediction (FIG. 4D)), the predictor 101 may also perform orthogonal transformation and quantization, and dequantization and inverse transformation.

The predictive image signal 118 generated by the predictor 101 is input to the mode determination unit 102 along with the input image signal 116. The mode determination unit 102 inputs the predictive image signal 118 to an inverse transformer 106, generates a predictive error signal 119 by subtracting the predictive image signal 118 from the input image signal 116, and inputs it to the transformer 103. At the same time, the mode determination unit 102 determines a mode based on the mode information predicted by the predictor 101 and the predictive error signal 119. More concretely, in this embodiment the mode is determined using a cost K given by the following equation (1).

K = SAD + λ × OH    (1)

where OH indicates the mode information (overhead), SAD is the absolute sum of the predictive error signals, and λ is a constant determined based on the value of the quantization width or the quantization parameter. In this way, the mode is determined based on the cost K: the mode for which K is smallest is selected as the optimum mode.
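The decision rule of equation (1) can be sketched in a few lines. The candidate tuples (mode name, SAD, overhead OH) and the λ values are made-up numbers for illustration only:

```python
# A sketch of mode selection by equation (1): K = SAD + lambda * OH.
# Candidate names and numbers are illustrative assumptions.

def select_mode(candidates, lam):
    """candidates: (mode name, SAD, overhead OH); pick the smallest K."""
    return min(candidates, key=lambda m: m[1] + lam * m[2])[0]

candidates = [("intra4x4", 1200, 12),
              ("intra16x16", 1400, 4),
              ("inter", 1000, 30)]
```

A larger λ (coarser quantization) penalizes overhead more heavily, shifting the choice toward modes with cheaper mode information.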

In this embodiment, the mode information and the absolute sum of the predictive error signal are used.
In another embodiment, the mode may be determined using only the mode information or only the absolute sum of the predictive error signal. Alternatively, these parameters may be subjected to a Hadamard transform, or approximate values of them may be obtained and used. Further, the cost may be calculated using an activity of the input image signal, or a cost function may be calculated using the quantization width and the quantization parameter.

According to another embodiment, a provisional encoder is prepared for calculating the cost. A predictive error signal is generated based on the encoding mode of the provisional encoder, and is actually encoded to produce code data. Local decoded image data 113 is produced by local-decoding the code data. The mode may then be determined using the number of encoded bits of the code data and the square error between the local decoded image signal 113 and the input video signal 116. The mode determination equation of this case is expressed by the following equation (2).

J = D + λ × R    (2)

where J indicates the cost, D indicates the encoding distortion, representing the square error between the input video signal 116 and the local decoded image signal 113, and R represents the number of encoded bits estimated by the provisional encoding. When this cost J is used, the circuit scale increases, because provisional encoding and local decoding (dequantization and inverse transform) are necessary for every encoding mode.
However, the accurate number of encoded bits and the accurate encoding distortion can be used, and high encoding efficiency can be maintained. The cost may also be calculated using only the number of encoded bits or only the encoding distortion, and the cost function may be calculated using approximations of these parameters.
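Equation (2) can be sketched with trial-encoding results: each candidate mode carries a measured squared-error distortion D and an actual bit count R from its provisional encode. All numbers below are illustrative assumptions:

```python
# A sketch of rate-distortion mode selection by equation (2): J = D + lambda * R.
# The mode names and (D, R) pairs are illustrative, not measured values.

def rd_cost(distortion, bits, lam):
    """J = D + lambda * R."""
    return distortion + lam * bits

def best_mode(trial_results, lam):
    """trial_results: mode -> (D, R) from provisional encoding; minimise J."""
    return min(trial_results, key=lambda m: rd_cost(*trial_results[m], lam))

results = {"skip": (9000, 2), "inter": (2500, 60), "intra": (1800, 150)}
```

At small λ the accurate but expensive modes win; at large λ the cheap skip mode wins, which is exactly the trade-off the equation encodes.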
The mode determination unit 102 is connected to the transformer 103 and the inverse transformer 106. The mode information selected by the mode determination unit 102 and the predictive error signal 119 are input to the transformer 103. The transformer 103 transforms the input predictive error signal 119 into transform coefficients and generates transform coefficient data. The predictive error signal 119 is subjected to an orthogonal transform, for example a discrete cosine transform (DCT). As a modification, the transform coefficients may be generated using a technique such as a wavelet transform or independent component analysis.
The transform coefficients provided by the transformer 103 are sent to the quantizer 104 and quantized there. The quantization parameter necessary for quantization is set by the encoding controller 110. The quantizer 104 quantizes the transform coefficients using the quantization matrix 114 input from the quantization matrix generator 109 and generates a quantized transform coefficient 112.

The quantized transform coefficient 112 is input to the encoding processor 111 along with information on the prediction method, such as the mode information and the quantization parameter. The encoding processor 111 subjects the quantized transform coefficient 112, along with the input mode information, to entropy encoding (Huffman encoding or arithmetic encoding). The code data 115 provided by the entropy encoding of the encoding processor 111 is output from the video encoding apparatus 100 to the output buffer 120 and multiplexed; the multiplexed code data is then transmitted from the output buffer 120.

When the quantization matrix 114 to be used for quantization is generated, instruction information indicating use of the quantization matrix is provided to the generation parameter generator 108 by the encoding controller 110. The generation parameter generator 108 sets a quantization matrix generation parameter 117 according to the instruction information, and outputs it to the quantization matrix generator 109 and the encoding processor 111.

The quantization matrix generation parameter 117 may be set by an external parameter setting unit (not shown) controlled by the encoding controller 110.

Also, it may be updated in units of blocks of the coded image, in units of slices, or in units of pictures. The generation parameter generator 108 has a function for controlling the setting timing of the quantization matrix generation parameter 117.

The quantization matrix generator 109 generates a quantization matrix 114 by the method designated by the quantization matrix generation parameter 117 and outputs it to the quantizer 104 and the dequantizer 105. At the same time, the quantization matrix generation parameter 117 input to the encoding processor 111 is subjected to entropy coding along with the mode information and the quantized transform coefficient 112 input from the quantizer 104.

The dequantizer 105 dequantizes the transform coefficient 112 quantized with the quantizer 104 according to the quantization parameter set by the encoding controller 110 and the quantization matrix 114 input from the quantization matrix generator 109. The dequantized transform coefficient is sent to the inverse transformer 106. The inverse transformer 106 subjects the dequantized transform coefficient to inverse transform (for example, inverse discrete cosine transform) to decode a predictive error signal.

The predictive error signal decoded by the inverse transformer 106 is added to the predictive image signal 118 for the determined mode, which is supplied from the mode determination unit 102. The sum of the decoded predictive error signal and the predictive image signal 118 becomes a local decoded signal 113 and is input to the reference image memory 107.
The reference image memory 107 stores the local decoded signal 113 as a reconstructed image. The image stored in the reference image memory 107 in this way becomes the reference image referred to when the predictor 101 generates a predictive image signal.

When the encoding loop (the process executed in the order predictor 101 -> mode determination unit 102 -> transformer 103 -> quantizer 104 -> dequantizer 105 -> inverse transformer 106 -> reference image memory 107 in FIG. 1) has been executed for all modes selectable for the object macroblock, one loop is completed. When the encoding loop is completed for the macroblock, the input image signal 116 of the next block is input and encoded. The quantization matrix generator 109 need not generate a quantization matrix for every macroblock; the generated quantization matrix is held unless the quantization matrix generation parameter 117 set by the generation parameter generator 108 is updated.

The encoding controller 110 performs feedback control of the number of encoded bits, quantization characteristic control, mode determination control, and so on. The encoding controller 110 also performs rate control for controlling the number of encoded bits, control of the predictor 101, and control of external input parameters. At the same time, the encoding controller 110 controls the output buffer 120 to output code data to the outside at an appropriate timing.

The quantization matrix generator 109 shown in FIG. 2 generates the quantization matrix 114 based on the input quantization matrix generation parameter 117. The quantization matrix is a matrix such as the one shown in FIG. 5A or in FIG. 5B; in quantization and dequantization, each frequency point is weighted by the corresponding weighting factor. FIG. 5A shows a quantization matrix corresponding to a 4x4 pixel block and FIG. 5B shows one corresponding to an 8x8 pixel block. The quantization matrix generator 109 comprises a generated parameter deciphering unit 201, a switch 202 and one or more matrix generators 203. The generated parameter deciphering unit 201 deciphers the input quantization matrix generation parameter 117 and outputs changeover information for the switch 202 according to the indicated matrix generation method. This changeover information is set by the quantization matrix generation controller 210 and changes the output terminal of the switch 202.
The switch 202 is thus switched according to the switch information provided by the generated parameter deciphering unit 201 and set by the quantization matrix generation controller 210. For example, when the matrix generation type of the quantization matrix generation parameter 117 is the first type, the switch 202 connects the output terminal of the generated parameter deciphering unit 201 to the first matrix generator 203. On the other hand, when the matrix generation type is the N-th type, the switch 202 connects the output terminal of the generated parameter deciphering unit 201 to the N-th matrix generator 203.

When the matrix generation type of the quantization matrix generation parameter 117 is a M-th type (N < M) and the M-th matrix generator 203 is not included in the quantization matrix generator 109, the switch 202 is connected to the corresponding matrix generator by a method in which the output terminal of the generated parameter deciphering unit 201 is determined beforehand. For example, when a quantization matrix generation parameter of the type that does not exist in the quantization matrix generator 109 is input, the switch 202 always connects the output terminal to the first matrix generator.
When a similar matrix generation type is known, the output may be connected to the L-th matrix generator nearest to the input M-th type. In any case, the quantization matrix generator 109 connects the output terminal of the generation parameter deciphering unit 201 to one of the first to N-th matrix generators 203 according to the input quantization matrix generation parameter 117 by a predetermined connection method.
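By way of an illustrative sketch only (none of the following identifiers appear in the specification), the switching behavior described above can be mirrored by a registry that dispatches on the matrix generation type, with unknown types falling back to the first registered generator, as in the predetermined connection rule:

```python
from typing import Callable, Dict, List, NamedTuple

class QMP(NamedTuple):
    """Parameter set of equation (3): generation type, change degree,
    distortion degree and correction item."""
    T: int
    A: int
    B: int
    C: int

# Registry playing the role of the switch 202: generation type -> generator.
GENERATORS: Dict[int, Callable[[QMP], List[List[int]]]] = {}

def register(matrix_type: int):
    def decorator(fn):
        GENERATORS[matrix_type] = fn
        return fn
    return decorator

def generate_matrix(qmp: QMP) -> List[List[int]]:
    # Dispatch on qmp.T; a type with no registered generator falls back to
    # the lowest-numbered one, mirroring "always connects the output terminal
    # to the first matrix generator" above.
    fn = GENERATORS.get(qmp.T) or GENERATORS[min(GENERATORS)]
    return fn(qmp)
```

The registry stands in for the physical switch 202; each concrete generator corresponds to one of the first to N-th matrix generators 203.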

Each matrix generator 203 generates the quantization matrix 114 according to information of the corresponding quantization matrix generation parameter.
Concretely, the quantization matrix generation parameter information 117 is composed of parameter information of a matrix generation type (T), a change degree (A) of the quantization matrix, a distortion degree (B) and a correction item (C). These parameters are given different names, but may be used in any way. These parameters are defined as a parameter set expressed by the following equation (3):
QMP = (T, A, B, C)   (3)

QMP represents the quantization matrix generation parameter information. The matrix generation type (T) indicates which matrix generator 203 should be used. On the other hand, how the change degree (A), distortion degree (B) and correction item (C) are used can be freely defined for each matrix generation type. The first matrix generation type is explained referring to FIG. 6A.

A matrix generation function when the matrix generation type is 1 is represented by the following equations (4), (5) and (6):
[formula 1]

r = x + y   (4)
Q4x4(x, y) = a*r + c   (5)
Q8x8(x, y) = (a/2)*r + c   (6)

Further, table conversion examples of the change degree (A), distortion degree (B) and correction item (C) used for the first matrix generation type are shown by the following equations (7), (8) and (9):

a = 0.1*A   (7)
b = 0   (8)
c = 16 + C   (9)

where the change degree (A) represents the degree of change when the distance from the DC component to the frequency position of the quantization matrix is assumed to be r. For example, if the change degree (A) is a positive value, the value of the matrix increases as the distance r increases. In this case, the high-frequency band can be set to a large value. In contrast, if the change degree (A) is a negative value, the value of the matrix decreases as the distance r increases. In this case, the quantization step can be set coarsely in the low-frequency band. In the first matrix generation type, a value of 0 is always set without using the distortion degree (B). On the other hand, the correction item (C) represents the intercept of the straight line whose gradient is expressed by the change degree (A). Because the first matrix generation function can be processed by only multiplication, addition, subtraction and shift operations, it is advantageous in that the hardware cost can be decreased.

The quantization matrix generated based on equations (7), (8) and (9) in the case of QMP = (1, 40, 0, 0) is expressed by the following equation (10):

[formula 2]

Q4x4(x, y) =
16 20 24 28
20 24 28 32
24 28 32 36
28 32 36 40   (10)

Since the precision of each of the change degree (A), distortion degree (B) and correction item (C) variables influences the hardware scale, it is important to prepare a table having good efficiency over a decided range. In equation (7), when the change degree (A) is assumed to be a nonnegative 6-bit integer, a gradient from 0 to 6.3 can be obtained; however, a negative value cannot be obtained. Accordingly, a range from -6.3 to 6.4 can be obtained by using a 7-bit translation table as indicated by the following equation (11):

a = 0.1*(A - 63)   (11)

If a translation table of the change degree (A), distortion degree (B) and correction item (C) corresponding to a matrix generation type (T) is provided, and the precision of the change degree (A), distortion degree (B) and correction item (C) is acquired for every matrix generation type (T), it is possible to set an appropriate quantization matrix generation parameter according to the encoding situation and use environment. In the first matrix generation type expressed by equations (4), (5) and (6), the distortion degree (B) is always 0.
Therefore, it is not necessary to transmit a parameter corresponding to the distortion degree (B). Depending on the matrix generation type, the number of parameters to be used may be decreased; in this case, the unused parameters are not encoded.
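As a hedged illustration (the function name is ours, not the specification's), the first generation type of equations (4), (5), (7) and (9) can be sketched in a few lines:

```python
def gen_type1_4x4(A, C):
    # First matrix generation type: Q4x4(x, y) = a*r + c with r = x + y
    # (equations (4) and (5)), using the table conversions a = 0.1*A and
    # c = 16 + C (equations (7) and (9)); the distortion degree b is fixed
    # to 0 (equation (8)) and is therefore not a parameter here.
    a = 0.1 * A
    c = 16 + C
    return [[int(round(a * (x + y) + c)) for y in range(4)] for x in range(4)]
```

For QMP = (1, 40, 0, 0) this yields a = 4, c = 16 and reproduces the matrix of equation (10), using only multiplication and addition as noted above.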

Subsequently, a quantization matrix generation function using a quadratic function is shown as the second matrix generation type. The schematic diagram of this matrix generation type is shown in FIG. 6C.
[formula 3]

Q4x4(x, y) = (a/4)*r^2 + (b/2)*r + c   (12)
Q8x8(x, y) = (a/16)*r^2 + (b/4)*r + c   (13)

Parameters (A), (B) and (C), related to the coefficients a, b and c respectively, represent the change degree, distortion degree and correction value of the quadratic function. These functions tend to increase greatly in value, particularly as the distance increases.
When the quantization matrix in the case of QMP = (2, 10, 1, 0) is calculated using, for example, equations (4) and (12), a quantization matrix of the following equation (14) can be generated.
[formula 4]

Q4x4(x, y) =
16 17 18 20
17 18 20 22
18 20 22 25
20 22 25 28   (14)

Further, the following equations (15) and (16) represent examples of matrix generation functions of the third matrix generation type.

[formula 5]
Q4x4(x, y) = a*r + b*sin((π/16)*r) + c   (15)
Q8x8(x, y) = a*r + b*sin((π/32)*r) + c   (16)

The distortion item shown in FIG. 6B is added to the first matrix generation type. The distortion amplitude (B) represents the magnitude of the amplitude of the sine function. When b is a positive value, the effect that the straight line is warped on the downside emerges. On the other hand, when b is a negative value, an effect that the straight line is warped on the upper side emerges. It is necessary to change the corresponding phase between a 4x4 pixel block and an 8x8 pixel block. Various distortions can be generated by changing the phase.

When the quantization matrix in the case of QMP = (3, 32, 7, -6) is calculated using the equations (4) and (15), the quantization matrix of the following equation (17) can be generated.

[formula 6]

Q4x4(x, y) =
10 14 19 23
14 19 23 27
19 23 27 31
23 27 31 35   (17)

Although a sine function is used in this embodiment, a cosine function and other functions may be used, and the phase or period may be changed. Various functions, such as a sigmoid function, Gaussian function, logarithmic function and N-dimensional function, can be used for the distortion. Further, when the variables of the change degree (A), the distortion amplitude (B) and the correction item (C) are integer values, a translation table may be prepared beforehand to avoid computation processes of high processing load such as sine functions.
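As an illustrative sketch only, the second and third generation types can be written side by side. The table conversions a = 0.1*A, b = B, c = 16 + C and the integer rounding (round-to-nearest for the quadratic type, truncation for the sine type) are assumptions chosen so that the examples reproduce the matrixes of equations (14) and (17); the excerpt does not spell the type-2 and type-3 tables out:

```python
import math

def gen_type2_4x4(A, B, C):
    # Second type (equation (12)): Q4x4(x, y) = (a/4)*r^2 + (b/2)*r + c,
    # with r = x + y. Table conversions are assumed as noted in the lead-in.
    a, b, c = 0.1 * A, B, 16 + C
    return [[int(round((a / 4) * (x + y) ** 2 + (b / 2) * (x + y) + c))
             for y in range(4)] for x in range(4)]

def gen_type3_4x4(A, B, C):
    # Third type (equation (15)): Q4x4(x, y) = a*r + b*sin((pi/16)*r) + c,
    # with the result truncated to an integer.
    a, b, c = 0.1 * A, B, 16 + C
    return [[int(a * (x + y) + b * math.sin(math.pi * (x + y) / 16) + c)
             for y in range(4)] for x in range(4)]
```

Under these assumptions, gen_type2_4x4(10, 1, 0) reproduces equation (14) and gen_type3_4x4(32, 7, -6) reproduces equation (17).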

The function used for a matrix generation type may involve real-number calculation. Accordingly, when a sine function is calculated at every encoding, the calculation process increases. Further, hardware for performing the sine function calculation must be prepared. Thus, a translation table according to the precision of the parameters to be used may be provided.

Since floating-point calculation is costly in comparison with integer calculation, the quantization matrix generation parameters are each defined by integer values, and a corresponding value is extracted from an individual translation table corresponding to the matrix generation type.
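Since r = x + y takes only the values 0..6 for a 4x4 block, such a translation table can be precomputed once in integer arithmetic. The table below is an illustrative choice (the scale factor 256 is ours, not a value from the specification):

```python
import math

# Sine term of equation (15) tabulated for r = 0..6, scaled by 256 so that
# the per-block computation needs only a multiply and a shift.
SIN_TABLE_4X4 = [round(256 * math.sin(math.pi * r / 16)) for r in range(7)]

def sine_term(b, r):
    """Integer approximation of b * sin((pi/16)*r) via the table."""
    return (b * SIN_TABLE_4X4[r]) >> 8
```

The same idea extends to the 8x8 table (r = 0..14 with phase π/32); only the table contents change, not the per-sample cost.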

When calculation of real number precision is possible, the distance may be computed by the following equation (18).

[formula 7]

r = √(x^2 + y^2)   (18)

Further, it is possible to change values in vertical and lateral directions of the quantization matrix by weighting it according to the distance.
When great importance is placed on, for example, the vertical direction, a distance function as indicated by the following equation (19) is used.
[formula 8]

r = (2x + y) / 2   (19)

When a quantization matrix in the case of QMP = (2, 1, 2, 8) is generated by the above equation, a quantization matrix expressed by the following equation (20) is provided.

[formula 9]

Q4x4(x, y) =
10 14 20 28
12 17 24 33
14 20 28 38
17 24 33 44   (20)

The quantization matrixes 204 generated with the first to N-th matrix generators 203 are selectively output from the quantization matrix generator 109. The quantization matrix generation controller 210 controls the switch 202 to switch the output terminal of the switch 202 according to the matrix generation type deciphered with the generation parameter deciphering unit 201. Further, the quantization matrix generation controller 210 checks whether the quantization matrix corresponding to the quantization matrix generation parameter is generated properly.
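The three distance measures used above can be written side by side; the function names are ours, chosen for illustration:

```python
import math

def distance_linear(x, y):
    return x + y                      # equation (4): sum of frequency indices

def distance_euclidean(x, y):
    return math.sqrt(x * x + y * y)   # equation (18): real-number precision

def distance_weighted(x, y):
    return (2 * x + y) / 2            # equation (19): one axis weighted double
```

Any of these can be substituted for r in the generation functions of equations (5), (12) or (15); only the weighted variant produces an asymmetric matrix such as equation (20).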

The configurations of the video encoding apparatus 100 and quantization matrix generator 109 according to the embodiment are explained hereinbefore. An example of carrying out a video encoding method with the video encoding apparatus 100 and quantization matrix generator 109 will be described referring to the flow chart of FIG. 3.

At first, an image signal of one frame is read from an external memory (not shown), and input to the video encoding apparatus 100 as the input image signal 116 (step S001). The input image signal 116 is divided into macroblocks each composed of 16x16 pixels. A quantization matrix generation parameter 117 is set to the video encoding apparatus 100 (step S002). That is, the encoding controller 110 sends information indicating that a quantization matrix is to be used for the current frame to the parameter generator 108. When receiving this information, the parameter generator 108 sends the quantization matrix generation parameter to the quantization matrix generator 109. The quantization matrix generator 109 generates a quantization matrix according to the type of the input quantization matrix generation parameter.

When the input image signal 116 is input to the video encoding apparatus 100, encoding is started in units of a block (step S003). When one macroblock of the input image signal 116 is input to the predictor 101, the mode determination unit 102 initializes an index indicating an encoding mode and a cost (step S004). A predictive image signal 118 of one prediction mode selectable in units of block is generated by the predictor 101 using the input image signal 116 (step S005). A difference between this predictive image signal 118 and the input image signal 116 is calculated whereby a predictive error signal 119 is generated. A
cost is calculated from the absolute value sum SAD of this predictive error signal 119 and the number of encoded bits OH of the prediction mode (step S006).
Otherwise, local decoding is done to generate a local decoded signal 113, and the cost is calculated from the distortion D of the error signal, indicating a differential value between the local decoded signal 113 and the input image signal 116, and the number of encoded bits R of an encoded signal obtained by temporarily encoding the input image signal.
The mode determination unit 102 determines whether the calculated cost is smaller than the smallest cost min_cost (step S007). When it is smaller (the determination is YES), the smallest cost is updated by the calculated cost, and the encoding mode corresponding to the calculated cost is held as a best mode index (step S008). At the same time, a predictive image is stored (step S009). When the calculated cost is larger than the smallest cost min_cost (the determination is NO), the index indicating a mode number is incremented, and it is determined whether the index after the increment is the last mode (step S010).

When the index is larger than MAX indicating the number of the last mode (the determination is YES), the encoding mode information of the best mode and the predictive error signal 119 are sent to the transformer 103 and the quantizer 104 to be transformed and quantized (step S011). The quantized transform coefficient 112 is input to the encoding processor 111 and entropy-encoded along with prediction information by the encoding processor 111 (step S012). On the other hand, when the index is smaller than MAX indicating the number of the last mode (the determination is NO), the predictive image signal 118 of the encoding mode indicated by the next index is generated (step S005).

When encoding is done in the best mode, the quantized transform coefficient 112 is input to the dequantizer 105 and the inverse transformer 106 to be dequantized and inverse-transformed (step S013), whereby the predictive error signal is decoded. This decoded predictive error signal is added to the predictive image signal of the best mode provided from the mode determination unit 102 to generate a local decoded signal 113. This local decoded signal 113 is stored in the reference image memory 107 as a reference image (step S014).

Whether encoding of one frame is finished is determined (step S015). When the process is completed (the determination is YES), an input image signal of the next frame is read, and then the process returns to step S002 for encoding. On the other hand, when the encoding process of one frame is not completed (the determination is NO), the process returns to step S003, and then the next pixel block is input and the encoding process is continued.

The above is a brief description of the video encoding apparatus 100 and video encoding method in the embodiment of the present invention.

In the above embodiment, the quantization matrix generator 109 generates and uses one quantization matrix to encode one frame. However, a plurality of quantization matrixes may be generated for one frame by setting a plurality of quantization matrix generation parameters. In this case, since a plurality of quantization matrixes generated in different matrix generation types with the first to N-th matrix generators 203 can be switched within one frame, flexible quantization becomes possible. Concretely, the first matrix generator generates a quantization matrix having a uniform weight, and the second matrix generator generates a quantization matrix having large values in the high-frequency band. Control of quantization in a smaller range is enabled by switching between these two matrixes for every to-be-encoded block. Because the number of encoded bits transmitted for generating a quantization matrix is only several bits, high encoding efficiency can be maintained.
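As a minimal sketch (the names and the selection predicate are hypothetical, not from the specification), such per-block switching between two prepared matrixes might look like:

```python
def select_matrix_per_block(num_blocks, flat_qm, hf_qm, is_detailed):
    """Choose one of two prepared quantization matrixes for each block.

    flat_qm:     matrix with a uniform weight (first generator's output)
    hf_qm:       matrix with large high-frequency values (second generator)
    is_detailed: caller-supplied predicate deciding which matrix a block
                 uses; signalling the choice costs one bit per block.
    """
    return [hf_qm if is_detailed(b) else flat_qm for b in range(num_blocks)]
```

Because both matrixes are generated once from a few transmitted parameters, only the one-bit selection travels per block, which is why the encoding efficiency remains high.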

In the embodiment of the present invention, a quantization matrix generation technique for a 4x4 pixel block size and an 8x8 pixel block size is explained for generation of a quantization matrix concerning the luminance component. However, generation of a quantization matrix is possible by a similar scheme for the color difference component. Then, in order to avoid an increase of overhead for multiplexing the quantization matrix generation parameter of the color difference component into the syntax, the same quantization matrix as that of the luminance component may be used, or a quantization matrix with an offset corresponding to each frequency position may be made and used.

In the embodiment of the present invention, a quantization matrix generation method using a trigonometric function (a sine function) in the N-th matrix generator 203 is explained.
However, the function to be used may be a sigmoid function or a Gaussian function. It is possible to make a more complicated quantization matrix according to the function type. Further, when the matrix generation type (T) among the quantization matrix generation parameters QMP provided from the quantization matrix generation controller 210 cannot be used in the video encoding apparatus, it is possible to make a quantization matrix by substituting a matrix generation type closely resembling the matrix generation type (T). Concretely, the third matrix generation type is a function in which a distortion degree using a sine function is added to the first matrix generation type, and the tendency of the generated quantization matrix is similar. Therefore, when T = 3 is input but the third matrix generator cannot be used in the encoding apparatus, the first matrix generator is used.
In the embodiment of the present invention, four parameters are used: the matrix generation type (T), the change degree (A) of the quantization matrix, the distortion degree (B) and the correction item (C). However, parameters other than these may be used, and the number of parameters decided by the matrix generation type (T) can be used. Further, a translation table of parameters decided beforehand by the matrix generation type (T) may be provided. The number of encoded bits for encoding the quantization matrix generation parameters decreases as the number of quantization matrix generation parameters to be transmitted decreases and their precision lowers.
However, since the degree of freedom of the quantization matrix lowers at the same time, the number of quantization matrix generation parameters and their precision have only to be selected in consideration of the balance between the profile to be applied and the hardware scale.

In the embodiment of the present invention, a to-be-processed frame is divided into rectangular blocks of 16x16 pixel size, and then the blocks are encoded from an upper left of a screen to a lower right thereof, sequentially. However, the sequence of processing may be another sequence. For example, the blocks may be encoded from the lower-right to the upper left, or in a scroll shape from the middle of the screen. Further, the blocks may be encoded from the upper right to the lower left, or from the peripheral part of the screen to the center part thereof.

In the embodiment of the present invention, the frame is divided into macroblocks of a 16x16 pixel block size, and an 8x8 pixel block or a 4x4 pixel block is used as the processing unit for intra-frame prediction. However, the to-be-processed block need not have a uniform block shape, and may have a pixel block size such as a 16x8, 8x16, 8x4 or 4x8 pixel block size. For example, an 8x4 pixel block and a 2x2 pixel block are available under a similar framework. Further, it is not necessary to take a uniform block size within one macroblock, and different block sizes may be selected. For example, an 8x8 pixel block and a 4x4 pixel block may coexist in the macroblock. In this case, although the number of encoded bits for encoding the divided blocks increases with the number of divided blocks, prediction of higher precision is possible, resulting in a reduced prediction error. Accordingly, a block size has only to be selected in consideration of the balance between the number of encoded bits of the transform coefficients and the quality of the local decoded image.

In the embodiment of the present invention, the transformer 103, quantizer 104, dequantizer 105 and inverse transformer 106 are provided. However, the predictive error signal need not always be subjected to transformation, quantization, inverse transformation and dequantization; the predictive error signal may be encoded with the encoding processor 111 as it is, and the quantization and dequantization may be omitted. Similarly, the transformation and inverse transformation need not be done.

(Second embodiment: encoding) Multipath encoding concerning the second embodiment is explained referring to the flow chart of FIG. 12. In this embodiment, the detailed description of the encoding flow having the same function as the first embodiment of FIG. 3, that is, steps S002 - S015, is omitted. When the optimal quantization matrix is set for every picture, the quantization matrix must be optimized. For this reason, multipath encoding is effective. According to this multipath encoding, the quantization matrix generation parameter can be selected effectively.

In this embodiment, for multipath encoding, steps S101 - S108 are added before step S002 of the first embodiment as shown in FIG. 12. In other words, at first, the input image signal 116 of one frame is input to the video encoding apparatus 100 (step S101), and encoded by being divided into macroblocks of 16x16 pixel size. Then, the encoding controller 110 initializes the index of the quantization matrix generation parameter used for the current frame to 0, and also initializes min_costQ representing the minimum cost (step S102). Then, the quantization matrix generation controller 210 selects the index of the quantization matrix generation parameter indicated by PQM_idx from a quantization matrix generation parameter set, and sends it to the quantization matrix generator 109. The quantization matrix generator 109 generates the quantization matrix according to the scheme of the input quantization matrix generation parameter (step S103). One frame is encoded using the quantization matrix generated this time (step S104). A cost is accumulated for every macroblock to calculate an encoding cost of one frame (step S105).

It is determined whether the calculated cost is smaller than the smallest cost min_costQ (step S106).
When the calculated cost is smaller than the smallest cost (the determination is YES), the smallest cost is updated by the calculated cost. At this time, the index of the quantization matrix generation parameter is held as a Best_PQM_idx index (step S107). When the calculated cost is larger than the smallest cost min_costQ (the determination is NO), PQM_idx is incremented and it is determined whether the incremented PQM_idx is the last one (step S108). If the determination is NO, the index of the quantization matrix generation parameter is updated, and encoding is continued. On the other hand, if the determination is YES, Best_PQM_idx is input to the quantization matrix generator 109 again, and the main encoding flow, that is, steps S002 - S015 of FIG. 3, is executed. When the code data encoded with Best_PQM_idx at the time of the multipath process is held, the main encoding flow need not be executed, and thus it is possible to finish encoding of the frame by updating the code data.
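The selection loop of steps S101 - S108 can be sketched as follows. The helper encode_cost stands in for one full encoding pass accumulating the cost over all macroblocks (steps S103 - S105); it is a placeholder of ours, not an interface defined by the specification:

```python
def choose_generation_parameter(frame, parameter_set, encode_cost):
    """Multipath selection: try each candidate quantization matrix
    generation parameter and keep the index with the smallest frame cost."""
    best_pqm_idx = 0
    min_cost_q = float("inf")           # initialization of step S102
    for pqm_idx, qmp in enumerate(parameter_set):
        cost = encode_cost(frame, qmp)  # steps S103-S105: encode, accumulate
        if cost < min_cost_q:           # steps S106-S107: keep the best index
            min_cost_q = cost
            best_pqm_idx = pqm_idx
    return best_pqm_idx                 # Best_PQM_idx for the main pass
```

If the code data produced for the winning index is retained during the loop, the main encoding pass can be skipped, as noted above.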

In the second embodiment, when encoding is done in multipath, it is not necessary to always encode the whole frame. An available quantization matrix generation parameter can be determined from the transform coefficient distribution obtained in units of a block.

For example, when the transform coefficients generated at a low rate are almost 0, the process can be largely reduced, because the property of the code data does not change even if the quantization matrix is not used.

There will be explained an encoding method of a quantization matrix generation parameter. As shown in FIG. 7, the syntax is mainly composed of three parts.
A high-level syntax (401) is packed with syntax information of layers higher than the slice level.
A slice level syntax (402) describes necessary information for every slice. A macroblock level syntax (403) describes a change value of the quantization parameter or mode information needed for every macroblock. These syntaxes are configured by further detailed syntaxes. In other words, the high-level syntax (401) is composed of sequence and picture level syntaxes such as the sequence parameter set syntax (404) and the picture parameter set syntax (405). The slice level syntax (402) is composed of a slice header syntax (406), a slice data syntax (407), etc. Further, the macroblock level syntax (403) is composed of a macroblock header syntax (408), a macroblock data syntax (409), etc.

The above syntaxes are absolutely essential components for decoding. When this syntax information is missing, it becomes impossible to reconstruct data correctly at the time of decoding. On the other hand, there is a supplementary syntax for multiplexing information that is not always needed at the time of decoding. This syntax describes statistical data of an image, camera parameters, etc., and serves a role of filtering and adjusting data at the time of decoding.

In this embodiment, necessary syntax information is the sequence parameter set syntax (404) and picture parameter set syntax (405). Each syntax is described hereinafter.

ex_seq_scaling_matrix_flag shown in the sequence parameter set syntax of FIG. 8 is a flag indicating whether the quantization matrix is used. When this flag is TRUE, the quantization matrix can be changed in units of a sequence. On the other hand, when the flag is FALSE, the quantization matrix cannot be used in the sequence. When ex_seq_scaling_matrix_flag is TRUE, ex_matrix_type, ex_matrix_A, ex_matrix_B and ex_matrix_C are further sent. These correspond to the matrix generation type (T), change degree (A) of the quantization matrix, distortion degree (B) and correction item (C), respectively.

ex_pic_scaling_matrix_flag shown in the picture parameter set syntax of FIG. 9 is a flag indicating whether the quantization matrix is changed for every picture. When this flag is TRUE, the quantization matrix can be changed in units of a picture. On the other hand, when the flag is FALSE, the quantization matrix cannot be changed for every picture. When ex_pic_scaling_matrix_flag is TRUE, ex_matrix_type, ex_matrix_A, ex_matrix_B and ex_matrix_C are further transmitted. These correspond to the matrix generation type (T), change degree (A) of the quantization matrix, distortion degree (B) and correction item (C), respectively.

An example in which a plurality of quantization matrix generation parameters are sent is shown in FIG. 10 as another example of the picture parameter set syntax.

ex_pic_scaling_matrix_flag shown in the picture parameter set syntax is a flag indicating whether the quantization matrix is changed for every picture. When the flag is TRUE, the quantization matrix can be changed in units of a picture. On the other hand, when the flag is FALSE, the quantization matrix cannot be changed for every picture. When ex_pic_scaling_matrix_flag is TRUE, ex_num_of_matrix_type is further sent. This value represents the number of sets of quantization matrix generation parameters. A plurality of quantization matrixes can be sent by the combination of sets.

ex_matrix_type, ex_matrix_A, ex_matrix_B and ex_matrix_C, which are sent successively, are each sent as many times as the value of ex_num_of_matrix_type. As a result, a plurality of quantization matrixes can be provided in a picture. Further, when the quantization matrix is to be changed in units of a block, bits corresponding to the number of quantization matrixes may be transmitted for every block, and the matrixes switched accordingly. For example, if ex_num_of_matrix_type is 2, a syntax of 1 bit is added to the macroblock header syntax. The quantization matrix is changed according to whether this value is TRUE or FALSE.
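The element order of this syntax can be sketched as follows. The flat list of syntax elements stands in for an entropy-coded bitstream and is purely illustrative; the function name is ours:

```python
def write_pic_scaling_syntax(params):
    """Sketch of the picture parameter set syntax of FIG. 10.

    params is a list of (T, A, B, C) tuples, or None/empty when no
    quantization matrix is used for the picture.
    """
    if not params:
        return [0]                      # ex_pic_scaling_matrix_flag = FALSE
    out = [1, len(params)]              # flag = TRUE, ex_num_of_matrix_type
    for t, a, b, c in params:           # ex_matrix_type, ex_matrix_A, _B, _C
        out.extend([t, a, b, c])
    return out
```

Because each matrix costs only four small integers, several matrixes per picture add just a few bytes of overhead.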

In the present embodiment, when a plurality of quantization matrix generation parameters are held in one frame as described above, they may be multiplexed on the supplementary syntax. An example in which a plurality of quantization matrix generation parameters are sent using the supplementary syntax is shown in FIG. 11.

ex_sei_scaling_matrix_flag shown in the supplementary syntax is a flag indicating whether a plurality of quantization matrixes are changed. When this flag is TRUE, the quantization matrixes can be changed. On the other hand, when the flag is FALSE, the quantization matrixes cannot be changed. When ex_sei_scaling_matrix_flag is TRUE, ex_num_of_matrix_type is further sent. This value indicates the number of sets of quantization matrix generation parameters. A plurality of quantization matrixes can be sent by the combination of sets. ex_matrix_type, ex_matrix_A, ex_matrix_B and ex_matrix_C, which are sent successively, are each sent as many times as the value of ex_num_of_matrix_type. As a result, a plurality of quantization matrixes can be provided in the picture.

In this embodiment, the quantization matrix can be retransmitted by the slice header syntax in the slice level syntax shown in FIG. 7. An example of such a case will be explained using FIG. 13. FIG. 13 shows the syntax structure in the slice header syntax. slice_ex_scaling_matrix_flag shown in the slice header syntax of FIG. 13 is a flag indicating whether a quantization matrix can be used in the slice. When the flag is TRUE, the quantization matrix can be changed in the slice. When the flag is FALSE, the quantization matrix cannot be changed in the slice. slice_ex_matrix_type is transmitted when slice_ex_scaling_matrix_flag is TRUE. This syntax corresponds to the matrix generation type (T).

Successively, slice_ex_matrix_A, slice_ex_matrix_B and slice_ex_matrix_C are transmitted. These correspond to the change degree (A), distortion degree (B) and correction item (C) of the quantization matrix, respectively. NumOfMatrix in FIG. 13 represents the number of available quantization matrixes in the slice. When the quantization matrix is changed in a smaller region at slice level, changed between the luminance component and color component, changed with the quantization block size, changed for every encoding mode, and so on, the number of available quantization matrixes can be transmitted together with the modeling parameters of the quantization matrixes corresponding to that number. For purposes of example, when there are two kinds of quantization blocks, of a 4x4 pixel block size and an 8x8 pixel block size, in the slice, and different quantization matrixes can be used for the quantization blocks, the NumOfMatrix value is set to 2.
In this embodiment of the present invention, the quantization matrix can be changed at slice level using the slice header syntax shown in FIG. 14. In FIG. 14, three modeling parameters to be transmitted are prepared, compared with FIG. 13. When a quantization matrix is generated with the use of, for example, equation (5), the distortion degree (B) parameter need not be transmitted because it is always set to 0.
Therefore, the encoder and decoder can generate the identical quantization matrix by holding an initial value of 0 as an internal parameter.

In this embodiment, the parameter can be transmitted using the slice header syntax expressed in FIG. 15. In FIG. 15, PrevSliceExMatrixType, PrevSliceExMatrix_A and PrevSliceExMatrix_B (further, PrevSliceExMatrix_C) are added to FIG. 13. Explaining more concretely, slice_ex_scaling_matrix_flag is a flag indicating whether or not the quantization matrix is used in the slice, and when this flag is TRUE, a modeling parameter is transmitted to the decoder as shown in FIGS. 13 and 14. On the other hand, when the flag is FALSE, PrevSliceExMatrixType, PrevSliceExMatrix_A and PrevSliceExMatrix_B (further, PrevSliceExMatrix_C) are set. These are interpreted as follows.

PrevSliceExMatrixType indicates the generation type (T) used at the time when data was encoded in the same slice type as that of the slice immediately before the current slice in order of encoding. This variable is updated immediately before encoding of the slice is finished. The initial value is set to 0.
PrevSliceExMatrix_A indicates the change degree (A) used at the time when data was encoded in the same slice type as that of the slice immediately before the current slice in order of encoding. This variable is updated immediately before encoding of the slice is finished. The initial value is set to 0.

PrevSliceExMatrix_B indicates the distortion degree (B) used at the time when data was encoded in the same slice type as that of the slice immediately before the current slice in order of encoding. This variable is updated immediately before encoding of the slice is finished. The initial value is set to 0.

PrevSliceExMatrix_C indicates the correction item (C) used at the time when data was encoded in the same slice type as that of the slice immediately before the current slice in order of encoding. This variable is updated immediately before encoding of the slice is finished. The initial value is set to 16.

CurrSliceType indicates the slice type of the currently encoded slice, and a corresponding index is assigned to each of, for example, I-Slice, P-Slice and B-Slice. An example of CurrSliceType is shown in FIG. 16. A value is assigned to each of the respective slice types. For example, 0 is assigned to I-Slice, which uses only intra-picture prediction. Further, 1 is assigned to P-Slice, which can use single directional prediction from a previously encoded frame in order of time as well as intra-picture prediction. On the other hand, 2 is assigned to B-Slice, which can use bidirectional prediction, single directional prediction and intra-picture prediction.

In this way, the modeling parameter of the quantization matrix used for the most recently encoded slice of the same slice type as the current slice is referenced and set again. As a result, it is possible to reduce the number of encoded bits necessary for transmitting the modeling parameter.
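The per-slice-type bookkeeping described above can be sketched as follows. This is a minimal illustration, not the patent's actual syntax processing: the dictionary layout, the function names, and the decision of when to send parameters explicitly are all assumptions made for the example; only the syntax element names and the initial values (T = A = B = 0, C = 16) come from the text.

```python
# Illustrative sketch of reusing the previous same-type slice's
# modeling parameters to save header bits.

I_SLICE, P_SLICE, B_SLICE = 0, 1, 2  # CurrSliceType values (FIG. 16)

# Per-slice-type state, initialized as the text specifies:
# T = 0, A = 0, B = 0, C = 16.
prev_params = {t: {"T": 0, "A": 0, "B": 0, "C": 16}
               for t in (I_SLICE, P_SLICE, B_SLICE)}

def encode_slice_header(slice_type, params, send_explicitly):
    """Return the syntax elements written for one slice header.

    If send_explicitly is False, the decoder falls back to the
    parameters of the previous slice of the same type, so only the
    flag is written; this is the bit saving the text describes.
    """
    if send_explicitly:
        written = {"slice_ex_scaling_matrix_flag": 1, **params}
    else:
        written = {"slice_ex_scaling_matrix_flag": 0}
        params = prev_params[slice_type]  # reuse previous values
    # Update the state "immediately before encoding of the slice
    # is finished", keyed by slice type.
    prev_params[slice_type] = dict(params)
    return written, params

# The first P slice sends its parameters; the next P slice reuses them.
hdr1, used1 = encode_slice_header(P_SLICE, {"T": 1, "A": 4, "B": 0, "C": 16}, True)
hdr2, used2 = encode_slice_header(P_SLICE, None, False)
```

The second header carries only the one-bit flag, yet both slices end up with identical modeling parameters.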

In this embodiment of the present invention, FIG. 17 can be used. FIG. 17 shows a structure in which NumOfMatrix is removed from FIG. 5. When only one quantization matrix is available for the encoded slice, this syntax, which is simpler than that of FIG. 15, is used. This syntax yields approximately the same operation as the case where NumOfMatrix is 1 in FIG. 15.

When a plurality of quantization matrixes can be held in the same picture by the decoder, the quantization matrix generation parameter is read from the supplemental syntax to generate a corresponding quantization matrix. On the other hand, when a plurality of quantization matrixes cannot be held in the same picture, the quantization matrix generated from the quantization matrix generation parameter described in the picture parameter set syntax is used without decoding the supplemental syntax.
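This branch can be sketched as a small selection routine. The capability flag and the stand-in parameter containers below are illustrative assumptions; the real decision is driven by the decoder's profile and the multiplexed syntax, which are not detailed in this chunk.

```python
def choose_generation_params(can_hold_multiple, supplemental, pps_params):
    """Pick where quantization matrix generation parameters come from.

    can_hold_multiple: whether the decoder can hold several quantization
    matrixes in the same picture. supplemental / pps_params stand in for
    the parameter sets carried by the supplemental syntax and by the
    picture parameter set syntax; all names here are illustrative.
    """
    if can_hold_multiple and supplemental is not None:
        return supplemental   # read from the supplemental syntax
    return [pps_params]       # fall back to the picture parameter set

multi = choose_generation_params(True, [{"T": 0}, {"T": 1}], {"T": 2})
single = choose_generation_params(False, [{"T": 0}, {"T": 1}], {"T": 2})
```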

In the embodiment discussed above, the quantization matrix is generated according to a corresponding matrix generation type. Because only the generation parameter of the quantization matrix is encoded, the number of encoded bits used for sending the quantization matrix can be reduced. Further, it becomes possible to select quantization matrixes adaptively within the picture. Encoding that can handle various uses, such as quantization in consideration of subjective picture quality and encoding in consideration of encoding efficiency, becomes possible. In other words, encoding suited to the contents of a pixel block can be performed.
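The core idea, expanding a full quantization matrix from a small parameter set, can be sketched as below. The patent's actual generation functions (its numbered equations, not reproduced in this chunk) are not shown here; the linear and sigmoid-like formulas in this sketch are placeholders invented for illustration. Only the roles of the parameters (generation type T, change degree A, distortion degree B, correction item C) come from the text.

```python
import math

def generate_qmatrix(t, a, b, c, n=8):
    """Expand modeling parameters (T, A, B, C) into an n-by-n matrix.

    The formulas below are illustrative stand-ins for the document's
    generation functions. The point is the scheme itself: the matrix is
    fully determined by a few scalars, so only T, A, B, C need to be
    encoded instead of all n*n matrix entries.
    """
    qm = [[0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            r = x + y                      # distance from the DC position
            if t == 0:                     # illustrative "linear" type
                v = c + a * r
            else:                          # illustrative "sigmoid-like" type
                v = c + a * n / (1.0 + math.exp(-(r - n)))
            v += b * (x - y)               # illustrative distortion term
            qm[y][x] = max(1, int(round(v)))
    return qm

qm = generate_qmatrix(t=0, a=2, b=0, c=16, n=4)
```

With A = 2, B = 0, C = 16, the DC entry is the correction item and the weights grow linearly toward high frequencies, which is the qualitative behavior a change degree is meant to control.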

As mentioned above, when encoding is performed in a selected mode, a decoded image signal need be generated only for the selected mode. It need not always be generated in the loop for determining a prediction mode.

The video decoding apparatus corresponding to the video encoding apparatus is explained hereinafter.
(Third embodiment: Decoding) According to a video decoding apparatus 300 concerning the present embodiment shown in FIG. 18, an input buffer 309 temporarily stores code data sent from the video encoding apparatus 100 of FIG. 1 via a transmission medium or recording medium. The stored code data is read out from the input buffer 309 and input to a decoding processor 301, separated frame by frame on the basis of the syntax. The decoding processor 301 decodes a code string of each syntax of the code data for each of a high-level syntax, a slice level syntax and a macroblock level syntax according to the syntax structure shown in FIG. 7. Owing to this decoding, the quantized transform coefficient, quantization matrix generation parameter, quantization parameter, prediction mode information, prediction switching information, etc. are reconstructed.

The decoding processor 301 produces, from the decoded syntax, a flag indicating whether a quantization matrix is used for the corresponding frame, and inputs it to a generation parameter setting unit 306. When this flag is TRUE, a quantization matrix generation parameter 311 is input to the generation parameter setting unit 306 from the decoding processor 301. The generation parameter setting unit 306 has an update function for the quantization matrix generation parameter 311, and inputs a set of the quantization matrix generation parameters 311 to a quantization matrix generator 307 based on the syntax decoded by the decoding processor 301. The quantization matrix generator 307 generates a quantization matrix 318 corresponding to the input quantization matrix generation parameter 311, and outputs it to a dequantizer 302.

The quantized transform coefficient output from the decoding processor 301 is input to the dequantizer 302, and dequantized thereby using the quantization matrix 318, the quantization parameter, etc. based on the decoded information. The dequantized transform coefficient is input to an inverse transformer 303.
The inverse transformer 303 subjects the dequantized transform coefficient to an inverse transform (for example, an inverse discrete cosine transform) to generate an error signal 313. The inverse orthogonal transformation is used here. However, when the encoder performs wavelet transformation or independent component analysis, the inverse transformer 303 may perform inverse wavelet transformation or inverse independent component analysis. The coefficient subjected to the inverse transformation by the inverse transformer 303 is sent to an adder 308 as the error signal 313. The adder 308 adds the predictive signal 315 output from the predictor 305 and the error signal 313, and inputs the addition signal to a reference memory 304 as a decoded signal 314. The decoded image 314 is sent from the video decoding apparatus 300 to the outside and stored in an output buffer (not shown). The decoded image stored in the output buffer is read out at the timing managed by the decoding controller 310.
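The dequantizer and adder stages can be sketched as follows. The weighting rule (level times matrix entry, normalized by 16) is a simplified H.264-style assumption, not the document's exact dequantization equation, and the inverse transform itself is omitted; both function names are invented for this example.

```python
def dequantize(levels, qm, qp_scale=1):
    """Dequantize transform coefficient levels with a quantization matrix.

    A simplified weighting is assumed here: level * matrix weight,
    normalized by 16. Real codecs define their own rounding; note that
    Python's // floors toward negative infinity for negative levels.
    """
    n = len(qm)
    return [[levels[y][x] * qm[y][x] * qp_scale // 16 for x in range(n)]
            for y in range(n)]

def reconstruct(prediction, error, lo=0, hi=255):
    """Add the prediction signal and the (inverse-transformed) error
    signal, clipping to the valid sample range, as the adder 308 does
    in FIG. 18. The inverse transform stage is omitted in this sketch."""
    n = len(prediction)
    return [[min(hi, max(lo, prediction[y][x] + error[y][x]))
             for x in range(n)] for y in range(n)]

levels = [[2, 1], [0, -1]]
qm = [[16, 18], [18, 20]]
err = dequantize(levels, qm)
dec = reconstruct([[128, 128], [128, 128]], err)
```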

On the other hand, the prediction information 316 and mode information which are decoded with the decoding processor 301 are input to the predictor 305.
The reference signal 317 already decoded is supplied from the reference memory 304 to the predictor 305.
The predictor 305 generates the predictive signal 315 based on input mode information, etc. and supplies it to the adder 308.

The decoding controller 310 controls the input buffer 309, output timing, decoding timing, etc.
The video decoding apparatus 300 of the third embodiment is configured as described above, and the video decoding method executed with the video decoding apparatus 300 is explained referring to the flowchart of FIG. 19.

The code data of one frame is read from the input buffer 309 (step S201), and decoded according to the syntax structure (step S202). Based on the decoded syntax, it is determined by a flag whether the quantization matrix is used for the readout frame (step S203).
When this determination is YES, a quantization matrix generation parameter is set to the quantization matrix generator 307 (step S204). The quantization matrix generator 307 generates a quantization matrix corresponding to the generation parameter (step S205).
For this quantization matrix generation, a quantization matrix generator 307 having the same configuration as the quantization matrix generator 109 of the video encoding apparatus shown in FIG. 2 is employed, and it performs the same process as the video encoding apparatus to generate a quantization matrix.
The generation parameter setting unit 306, which supplies a generation parameter to the quantization matrix generator 307, has the same configuration as the generation parameter generator 108 of the encoding apparatus.

In other words, in the generation parameter setting unit 306, the syntax is mainly formed of three parts, that is, a high-level syntax (401), a slice level syntax (402) and a macroblock level syntax (403), as shown in FIG. 7. These syntaxes are composed of further detailed syntaxes, like those of the encoding apparatus.
The above-mentioned syntaxes are components that are absolutely indispensable at the time of decoding. If this syntax information is missing, data cannot be decoded correctly. On the other hand, there is a supplementary syntax for multiplexing information that is not always needed at the time of decoding.

The syntax information which is necessary in this embodiment contains a sequence parameter set syntax (404) and a picture parameter set syntax (405). These are configured as shown in FIGS. 8 and 9, respectively, like those of the video encoding apparatus.

As another example of the picture parameter set syntax, the picture parameter set syntax for sending a plurality of quantization matrix generation parameters shown in FIG. 10 can be used, as described for the video encoding apparatus. However, if the quantization matrix is changed in units of blocks, bits corresponding to the number of quantization matrixes have only to be transmitted for each block, and the matrix exchanged accordingly. When, for example, ex_num_of_matrix_type is 2, a syntax of 1 bit is added to the macroblock header syntax, and the quantization matrix is changed according to whether this value is TRUE or FALSE.
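The per-macroblock switch described above can be sketched as a one-bit selection. The function name, the argument layout, and the exact bit semantics are assumptions for illustration; only the idea that a single macroblock-header bit chooses between two available matrixes when ex_num_of_matrix_type is 2 comes from the text.

```python
def select_block_matrix(ex_num_of_matrix_type, mb_flag, matrices):
    """Per-macroblock quantization matrix switching (illustrative).

    When ex_num_of_matrix_type is 2, one extra bit in the macroblock
    header (mb_flag) selects between the two available matrixes.
    When only one matrix is available, no bit is needed at all.
    """
    if ex_num_of_matrix_type == 2:
        return matrices[1] if mb_flag else matrices[0]
    return matrices[0]

m_a = [[16] * 4 for _ in range(4)]
m_b = [[18] * 4 for _ in range(4)]
chosen = select_block_matrix(2, mb_flag=1, matrices=[m_a, m_b])
```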
When, in this embodiment, a plurality of quantization matrix generation parameters are held in one frame as described above, data multiplexed with the supplementary syntax can be used. As described for the video encoding apparatus, the plurality of quantization matrix generation parameters can be used by means of the supplemental syntaxes shown in FIG. 11. In this embodiment of the present invention, re-receiving of a quantization matrix can be done by means of the slice header syntax in the slice level syntax shown in FIG. 7. An example of such a case is explained using FIG. 13. FIG. 13 shows the syntax structure of a slice header syntax. The slice_ex_scaling_matrix_flag shown in the slice header syntax of FIG. 13 is a flag indicating whether a quantization matrix is used in the slice. When the flag is TRUE, the quantization matrix can be changed in the slice. On the other hand, when the flag is FALSE, the quantization matrix cannot be changed in the slice.

When the slice_ex_scaling_matrix_flag is TRUE, slice_ex_matrix_type is received further. This syntax corresponds to a matrix generation type (T).


Successively, slice_ex_matrix_A, slice_ex_matrix_B and slice_ex_matrix_C are received. These correspond to the change degree (A), the distortion degree (B) and the correction item (C) of the quantization matrix, respectively. NumOfMatrix in FIG. 13 represents the number of available quantization matrixes in the slice.
When the quantization matrix is changed in a region smaller than the slice level, for example between the luminance and color components, between quantization block sizes, or for each encoding mode, the number of available quantization matrixes can be received together with a modeling parameter of the quantization matrix for each of them. For example, when there are two kinds of quantization blocks in the slice, of 4x4 and 8x8 pixel block sizes, and different quantization matrixes can be used for them, the NumOfMatrix value is set to 2.
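The NumOfMatrix case can be sketched as receiving one parameter set per matrix and then picking the set that matches the block being processed. The mapping from received parameter sets to block sizes, and both function names, are assumptions for this example; the text only states that NumOfMatrix parameter sets are received.

```python
def read_matrix_params(num_of_matrix, param_sets):
    """Associate each received modeling-parameter set with a block size.

    Illustrative: assumes the sets arrive in (4x4, 8x8) order when
    NumOfMatrix is 2, which is one possible convention, not the
    document's specified one.
    """
    assert len(param_sets) == num_of_matrix
    block_sizes = [4, 8][:num_of_matrix]
    return dict(zip(block_sizes, param_sets))

def select_params(table, block_size):
    """Pick the parameter set for the block being (de)quantized."""
    return table[block_size]

table = read_matrix_params(2, [{"T": 0, "A": 1, "B": 0, "C": 16},
                               {"T": 1, "A": 3, "B": 0, "C": 16}])
```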

In this embodiment of the present invention, the quantization matrix can be changed at slice level using the slice header syntax shown in FIG. 14. In FIG. 14, three modeling parameters to be transmitted are prepared, compared with FIG. 13. When the quantization matrix is generated with the use of, for example, equation (5), the parameter need not be received because the distortion degree (B) is always set to 0. Therefore, the encoder and decoder can generate the identical quantization matrix by holding an initial value of 0 as an internal parameter.
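The reduced three-parameter syntax can be sketched as below: B is never written to the bitstream, and both sides substitute the shared internal default so the reconstructed parameter sets still match. The header key names and function names are illustrative; only the idea of omitting B and defaulting it to 0 comes from the text.

```python
DEFAULT_B = 0  # shared internal parameter, never transmitted

def write_reduced_header(t, a, c):
    """Illustrative reduced slice header: only T, A and C are sent."""
    return {"slice_ex_matrix_type": t,
            "slice_ex_matrix_A": a,
            "slice_ex_matrix_C": c}

def read_reduced_header(header):
    """Reconstruct the full (T, A, B, C) set, filling in the default B."""
    return (header["slice_ex_matrix_type"],
            header["slice_ex_matrix_A"],
            DEFAULT_B,
            header["slice_ex_matrix_C"])

t, a, b, c = read_reduced_header(write_reduced_header(2, 5, 16))
```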

In this embodiment of the present invention, the parameter can be received using the slice header syntax expressed in FIG. 15. In FIG. 15, PrevSliceExMatrixType, PrevSliceExMatrix A and PrevSliceExMatrix B (further, PrevSliceExMatrix C) are added to FIG. 13. More concretely, slice_ex_scaling_matrix_flag is a flag indicating whether or not the quantization matrix is used in the slice. When this flag is TRUE, a modeling parameter is received as shown in FIGS. 13 and 14. On the other hand, when the flag is FALSE, PrevSliceExMatrixType, PrevSliceExMatrix A and PrevSliceExMatrix B (further, PrevSliceExMatrix C) are set. These are interpreted as follows.

PrevSliceExMatrixType indicates the generation type (T) used for the most recently decoded slice of the same slice type as the current slice, in decoding order. This variable is updated immediately before decoding of that slice is finished. The initial value is set to 0.

PrevSliceExMatrix A indicates the change degree (A) used for the most recently decoded slice of the same slice type as the current slice, in decoding order. This variable is updated immediately before decoding of that slice is finished. The initial value is set to 0.

PrevSliceExMatrix B indicates the distortion degree (B) used for the most recently decoded slice of the same slice type as the current slice, in decoding order. This variable is updated immediately before decoding of that slice is finished. The initial value is set to 0.

PrevSliceExMatrix C indicates the correction item (C) used for the most recently decoded slice of the same slice type as the current slice, in decoding order. This variable is updated immediately before decoding of that slice is finished. The initial value is set to 16.
CurrSliceType indicates the slice type of the current slice. An index is assigned to each of, for example, I-Slice, P-Slice and B-Slice.

An example of CurrSliceType is shown in FIG. 16.
A value is assigned to each slice type. For example, 0 is assigned to I-Slice, which uses only intra-picture prediction. Further, 1 is assigned to P-Slice, which can use single directional prediction from a previously encoded frame in time order as well as intra-picture prediction. Meanwhile, 2 is assigned to B-Slice, which can use bidirectional prediction, single directional prediction and intra-picture prediction.

In this way, the modeling parameter of the quantization matrix used for the most recently decoded slice of the same slice type as the current slice is referenced and set again. As a result, it is possible to reduce the number of encoded bits necessary for receiving the modeling parameter.

In this embodiment of the present invention, FIG. 17 can be used. FIG. 17 shows a structure in which NumOfMatrix is removed from FIG. 5. When only one quantization matrix is available for the slice, this syntax, which is simpler than that of FIG. 16, is used. This syntax yields approximately the same operation as the case where NumOfMatrix is 1 in FIG. 15.

When the quantization matrix is generated as described above, the decoded transform coefficient 312 is dequantized using the quantization matrix (step S206), and subjected to inverse transformation by the inverse transformer 303 (step S207). As a result, the error signal is reproduced. Then, a predictive image is generated by the predictor 305 based on the prediction information 316 (step S208). This predictive image and the error signal are added to reproduce decoded image data (step S209). This decoded picture signal is stored in the reference memory 304, and output to an external device.
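The overall decoding flow of the flowchart can be sketched end to end. Every stage below is a deliberately trivial stub (the matrix formula, the identity "inverse transform", and the constant prediction are all assumptions); the point is the control flow: the quantization matrix is regenerated from the small parameter set before dequantization, rather than being read entry by entry from the bitstream.

```python
def decode_frame(code_data, prev_params):
    """Illustrative walk through steps S201-S209 with stubbed stages."""
    syntax = code_data                         # S201/S202: read + parse
    if syntax["matrix_flag"]:                  # S203: matrix used?
        params = syntax["gen_params"]          # S204: set parameters
    else:
        params = prev_params                   # reuse previous parameters
    qm = [[params["C"] + params["A"] * (x + y) for x in range(4)]
          for y in range(4)]                   # S205: generate matrix (stub formula)
    coeff = [[syntax["levels"][y][x] * qm[y][x] // 16 for x in range(4)]
             for y in range(4)]                # S206: dequantize
    error = coeff                              # S207: inverse transform (identity stub)
    pred = [[128] * 4 for _ in range(4)]       # S208: prediction (constant stub)
    return [[pred[y][x] + error[y][x] for x in range(4)]
            for y in range(4)]                 # S209: reconstruct

frame = decode_frame(
    {"matrix_flag": 1,
     "gen_params": {"A": 0, "B": 0, "C": 16},
     "levels": [[16, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]},
    prev_params=None)
```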

In this embodiment as discussed above, a quantization matrix is generated based on the input code data according to the corresponding matrix generation type and used in dequantization, whereby the number of encoded bits of the quantization matrix can be reduced.

A function of each part described above can be realized by a program stored in a computer.

In the above embodiments, video encoding is explained. However, the present invention can be applied to still image encoding.

According to the present invention, a plurality of quantization matrixes are generated using one or more parameters such as an index of the generation function for generating a quantization matrix, a change degree indicating a degree of change of the quantization matrix, a distortion degree and a correction item.

Quantization and dequantization are performed using the quantization matrix. The optimum set of quantization matrix generation parameters is encoded and transmitted.
As a result, the present invention can realize an encoding efficiency higher than that of the conventional quantization matrix transmission method.
According to the present invention, there is provided a video encoding/decoding method and apparatus making it possible to improve the encoding efficiency at low bit rates.

Industrial Applicability The invention can be applied to encoding and decoding of motion pictures, still pictures, audio, etc. in fields such as video and audio devices, mobile equipment, broadcasting, information terminals, and networks.

Claims (35)

1. (Deleted)
2. (Deleted)
3. (Deleted)
4. (Deleted)
5. A video encoding method of quantizing a transform coefficient using a quantization matrix, the video encoding method comprising:

a step of selecting a generation type of a quantization matrix;

a step of obtaining a plurality of generation functions by setting a parameter to a plurality of functions prepared beforehand in correspondence with the generation type;

a quantization matrix generation step of generating a plurality of quantization matrixes using the plurality of generating functions;

a quantization step of producing a quantized transform coefficient by quantizing a transform coefficient concerning an input image signal using the plurality of quantization matrixes; and an encoding step of producing an encoded signal by multiplexing and encoding the quantized transform coefficient, information indicating the generation type and information of the parameter.
6. The video encoding method according to claim 5, wherein the parameter includes at least one of a change degree representing a degree of change of the quantization matrix, a distortion degree and a correction item.
7. The video encoding method according to claim 5, wherein the step of obtaining the generation function obtains the generation function by setting the parameter to the function defined using any one of a sine function, a cosine function, an N-dimensional function, a sigmoid function and a Gaussian function.
8. The video encoding method according to claim 5, wherein the quantization matrix generation step includes a step of changing an operation precision in producing the quantization matrix in correspondence with a precision of the parameter set to the generation function.
9. The video encoding method according to claim 5, wherein the step of obtaining the generation function selects a table in which calculation values of the plural generation functions corresponding to the parameter are stored, and the quantization matrix generation step performs a calculation for generating the plural quantization matrixes referring to the selected table.
10. (Deleted)
11. (Deleted)
12. (Deleted)
13. (Deleted)
14. The video encoding method according to claim 5, wherein the encoding step encodes information indicating the quantization matrix of the plural quantization matrixes, which is used in the quantization step as header information of any one of an encoded sequence, an encoded picture or an encoded slice.
15. (Deleted)
16. (Deleted)
17. A video decoding method of dequantizing a transform coefficient using a quantization matrix corresponding to each frequency position of the transform coefficient, the video decoding method comprising:

a decoding step of acquiring a quantized transform coefficient, information indicating a generation type of a quantization matrix and information of a parameter of a generation function for generating a quantization matrix;

a step of obtaining plural generation functions by setting the parameter of the generating function to a plurality of functions prepared beforehand in correspondence with the generation type;

a quantization matrix generation step of generating a plurality of quantization matrixes using the plural generating functions;

a dequantization step of obtaining a transform coefficient by dequantizing the quantized transform coefficient using the plurality of quantization matrixes generated; and

a decoded image generation step of generating a decoded image based on the transform coefficient.
18. (Deleted)
19. (Deleted)
20. (Deleted)
21. The video decoding method according to claim 17, wherein the parameter includes at least one of a change degree representing a degree of change of a quantization matrix, a distortion degree and a correction item.
22. The video decoding method according to claim 17, wherein the step of obtaining the generation function obtains the generation function by setting the parameter to the function defined using any one of a sine function, a cosine function, an N-dimensional function, a sigmoid function and a Gaussian function.
23. The video decoding method according to claim 17, wherein the quantization matrix generation step includes a step of changing an operation precision in producing the quantization matrix in correspondence with a precision of the parameter set to the generation function.
24. The video decoding method according to claim 17, wherein the step of obtaining the generation function selects a table in which calculation values of the plural generation functions corresponding to the parameter are stored, and the quantization matrix generation step performs a calculation for generating the plural quantization matrixes referring to the selected table.
25. (Deleted)
26. The method according to claim 17, wherein the step of generating the quantization matrix includes a step of generating a quantization matrix by substituting an available generation function when a generation function corresponding to a generation function index within a generation parameter of a decoded quantization matrix is not available in decoding.
27. (Deleted)
28. (Deleted)
29. The video decoding method according to claim 17, wherein the decoding step decodes information indicating the quantization matrix of the plural quantization matrixes, which is used in the dequantization step, as header information of any one of an encoded sequence, an encoded picture and an encoded slice.
30. (Deleted)
31. (Deleted)
32. A video encoding apparatus of quantizing a transform coefficient using a quantization matrix, the video encoding apparatus comprising:

a selection unit to select a generation type of a quantization matrix;

a generation function acquirement unit to acquire a plurality of generation functions by setting a parameter to a plurality of functions prepared beforehand in correspondence with the generation type;

a quantization matrix generation unit to generate a plurality of quantization matrixes using the plurality of generating functions;

a quantization unit to produce a quantized transform coefficient by quantizing a transform coefficient concerning an input image signal using the plurality of quantization matrixes; and an encoding unit to produce an encoded signal by multiplexing and encoding the quantized transform coefficient, information indicating the generation type and information of the parameter.
33. A video decoding apparatus of dequantizing a transform coefficient using a quantization matrix corresponding to each frequency position of the transform coefficient; the video decoding apparatus comprising:

a decoding unit to acquire a quantized transform coefficient, information indicating a generation type of a quantization matrix and information of a parameter of a generation function for generating a quantization matrix;

a generation function acquirement unit to acquire plural generation functions by setting the parameter of the generating function to a plurality of functions prepared beforehand in correspondence with the generation type; a quantization matrix generation unit to generate a plurality of quantization matrixes using the plural generating functions;

a dequantization unit to obtain a transform coefficient by dequantizing the quantized transform coefficient using the plurality of quantization matrixes generated; and

a decoded image generation unit to generate a decoded image based on the transform coefficient.
34. (Deleted)
35. (Deleted)
CA002642491A 2006-02-13 2006-10-19 Video encoding/decoding method and apparatus and program Abandoned CA2642491A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2006035319 2006-02-13
JP2006-035319 2006-02-13
PCT/JP2006/320875 WO2007094100A1 (en) 2006-02-13 2006-10-19 Moving image encoding/decoding method and device and program

Publications (1)

Publication Number Publication Date
CA2642491A1 true CA2642491A1 (en) 2007-08-23

Family

ID=38368548

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002642491A Abandoned CA2642491A1 (en) 2006-02-13 2006-10-19 Video encoding/decoding method and apparatus and program

Country Status (10)

Country Link
US (1) US20070189626A1 (en)
EP (1) EP1986440A4 (en)
JP (1) JPWO2007094100A1 (en)
KR (1) KR101035754B1 (en)
CN (1) CN101401435A (en)
AU (1) AU2006338425B2 (en)
BR (1) BRPI0621340A2 (en)
CA (1) CA2642491A1 (en)
RU (1) RU2414093C2 (en)
WO (1) WO2007094100A1 (en)

Families Citing this family (61)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8422546B2 (en) * 2005-05-25 2013-04-16 Microsoft Corporation Adaptive video encoding using a perceptual model
WO2007104265A1 (en) * 2006-03-16 2007-09-20 Huawei Technologies Co., Ltd. A method and device for realizing quantization in coding-decoding
US8059721B2 (en) * 2006-04-07 2011-11-15 Microsoft Corporation Estimating sample-domain distortion in the transform domain with rounding compensation
US8503536B2 (en) 2006-04-07 2013-08-06 Microsoft Corporation Quantization adjustments for DC shift artifacts
US8130828B2 (en) * 2006-04-07 2012-03-06 Microsoft Corporation Adjusting quantization to preserve non-zero AC coefficients
US7995649B2 (en) 2006-04-07 2011-08-09 Microsoft Corporation Quantization adjustment based on texture level
US8711925B2 (en) 2006-05-05 2014-04-29 Microsoft Corporation Flexible quantization
WO2008044511A1 (en) * 2006-10-12 2008-04-17 Kabushiki Kaisha Toshiba Method and apparatus for encoding image
US8238424B2 (en) 2007-02-09 2012-08-07 Microsoft Corporation Complexity-based adaptive preprocessing for multiple-pass video compression
US20080240257A1 (en) * 2007-03-26 2008-10-02 Microsoft Corporation Using quantization bias that accounts for relations between transform bins and quantization bins
US8498335B2 (en) 2007-03-26 2013-07-30 Microsoft Corporation Adaptive deadzone size adjustment in quantization
US8243797B2 (en) 2007-03-30 2012-08-14 Microsoft Corporation Regions of interest for quality adjustments
US20080253449A1 (en) * 2007-04-13 2008-10-16 Yoji Shimizu Information apparatus and method
JPWO2008132890A1 (en) * 2007-04-16 2010-07-22 株式会社東芝 Method and apparatus for image encoding and image decoding
US8442337B2 (en) 2007-04-18 2013-05-14 Microsoft Corporation Encoding adjustments for animation content
US8331438B2 (en) 2007-06-05 2012-12-11 Microsoft Corporation Adaptive selection of picture-level quantization parameters for predicted video pictures
US8213502B2 (en) * 2007-12-31 2012-07-03 Ceva D.S.P. Ltd. Method and system for real-time adaptive quantization control
US8189933B2 (en) 2008-03-31 2012-05-29 Microsoft Corporation Classifying and controlling encoding quality for textured, dark smooth and smooth video content
US8897359B2 (en) 2008-06-03 2014-11-25 Microsoft Corporation Adaptive quantization for enhancement layer video coding
JP5680283B2 (en) * 2008-09-19 2015-03-04 株式会社Nttドコモ Moving picture encoding apparatus, moving picture decoding apparatus, moving picture encoding method, moving picture decoding method, moving picture encoding program, and moving picture decoding program
JP4697557B2 (en) * 2009-01-07 2011-06-08 ソニー株式会社 Encoding apparatus, encoding method, recording medium, and image processing apparatus
JP2011029956A (en) * 2009-07-27 2011-02-10 Sony Corp Image encoding device and image encoding method
KR20110045949A (en) * 2009-10-28 2011-05-04 삼성전자주식회사 Method and apparatus for encoding and decoding image by using rotational transform
TW201138477A (en) * 2009-10-30 2011-11-01 Panasonic Corp Image decoding method, image encoding method, and devices, programs, and integrated circuits therefor
CA2778280C (en) * 2009-10-30 2018-04-24 Panasonic Corporation Decoding method, decoding apparatus, coding method, and coding apparatus using a quantization matrix
RU2012116555A (en) * 2009-10-30 2013-12-10 Панасоник Корпорэйшн METHOD FOR DECODING IMAGES, METHOD FOR CODING IMAGES, DEVICE FOR DECODING IMAGES, APPARATUS FOR ENCODING IMAGES, PROGRAM AND INTEGRATED DIAGRAM
US9313526B2 (en) * 2010-02-19 2016-04-12 Skype Data compression for video
JP2011259362A (en) * 2010-06-11 2011-12-22 Sony Corp Image processing system and method of the same
JP5741076B2 (en) 2010-12-09 2015-07-01 ソニー株式会社 Image processing apparatus and image processing method
SG10201400975QA (en) 2011-02-10 2014-07-30 Sony Corp Image Processing Device And Image Processing Method
AU2015202011B2 (en) * 2011-02-10 2016-10-20 Sony Group Corporation Image Processing Device and Image Processing Method
US9363509B2 (en) * 2011-03-03 2016-06-07 Electronics And Telecommunications Research Institute Method for determining color difference component quantization parameter and device using the method
WO2012118359A2 (en) * 2011-03-03 2012-09-07 한국전자통신연구원 Method for determining color difference component quantization parameter and device using the method
US20120230395A1 (en) * 2011-03-11 2012-09-13 Louis Joseph Kerofsky Video decoder with reduced dynamic range transform with quantization matricies
CA2770799A1 (en) * 2011-03-11 2012-09-11 Research In Motion Limited Method and system using prediction and error correction for the compact representation of quantization matrices in video compression
JP5874725B2 (en) 2011-05-20 2016-03-02 ソニー株式会社 Image processing apparatus and image processing method
JP5907367B2 (en) * 2011-06-28 2016-04-26 ソニー株式会社 Image processing apparatus and method, program, and recording medium
JP2013038768A (en) * 2011-07-13 2013-02-21 Canon Inc Image encoder, image encoding method, program, image decoder, image decoding method and program
US9131245B2 (en) 2011-09-23 2015-09-08 Qualcomm Incorporated Reference picture list construction for video coding
JP5698644B2 (en) * 2011-10-18 2015-04-08 株式会社Nttドコモ Video predictive encoding method, video predictive encoding device, video predictive encoding program, video predictive decoding method, video predictive decoding device, and video predictive decode program
JP6120490B2 (en) * 2011-11-07 2017-04-26 キヤノン株式会社 Image encoding device, image encoding method and program, image decoding device, image decoding method and program
KR20130050149A (en) * 2011-11-07 2013-05-15 오수미 Method for generating prediction block in inter prediction mode
CN102395031B (en) * 2011-11-23 2013-08-07 清华大学 Data compression method
US9648321B2 (en) * 2011-12-02 2017-05-09 Qualcomm Incorporated Coding picture order count values identifying long-term reference frames
WO2013086724A1 (en) * 2011-12-15 2013-06-20 Mediatek Singapore Pte. Ltd. Method of clippling transformed coefficients before de-quantization
CA2856348C (en) * 2011-12-19 2021-06-08 Sony Corporation Image processing device and method
CN105519108B (en) * 2012-01-09 2019-11-29 华为技术有限公司 The weight predicting method and device of quantization matrix coding
US20130188691A1 (en) * 2012-01-20 2013-07-25 Sony Corporation Quantization matrix design for hevc standard
WO2013126130A1 (en) 2012-02-23 2013-08-29 Northwestern University Improved suture
RU2658174C1 (en) * 2012-09-06 2018-06-19 Сан Пэтент Траст Image encoding method, image decoding method, image encoding device, image decoding device and apparatus for encoding and decoding images
KR20150092119A (en) * 2012-11-30 2015-08-12 소니 주식회사 Image processing device and method
US20160360237A1 (en) 2013-12-22 2016-12-08 Lg Electronics Inc. Method and apparatus for encoding, decoding a video signal using additional control of quantizaton error
US10142642B2 (en) * 2014-06-04 2018-11-27 Qualcomm Incorporated Block adaptive color-space conversion coding
TWI561060B (en) * 2015-01-15 2016-12-01 Mstar Semiconductor Inc Signal processing apparatus and signal processing method including quantization or inverse-quantization process
EP3510775A4 (en) * 2016-09-13 2020-03-04 MediaTek Inc. Method of multiple quantization matrix sets for video coding
CN110166781B (en) * 2018-06-22 2022-09-13 Tencent Technology (Shenzhen) Co., Ltd. Video coding method and device, readable medium and electronic equipment
JP7090490B2 (en) * 2018-06-28 2022-06-24 Canon Inc. Image coding device, image decoding device, image coding method, image decoding method
US11457214B2 (en) * 2018-08-23 2022-09-27 Interdigital Vc Holdings France, Sas Coding of quantization matrices using parametric models
US11394973B2 (en) 2019-07-06 2022-07-19 Hfi Innovation Inc. Signaling of quantization matrices
JP7402016B2 (en) * 2019-11-06 2023-12-20 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image decoding device and image encoding device
CN114745107A (en) * 2022-03-22 2022-07-12 西安电子科技大学 Encoding layer secret communication method based on matrix coding

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3012698B2 (en) * 1991-01-29 2000-02-28 Olympus Optical Co., Ltd. Image data encoding apparatus and encoding method
US5493513A (en) * 1993-11-24 1996-02-20 Intel Corporation Process, apparatus and system for encoding video signals using motion estimation
KR100355375B1 (en) * 1995-11-01 2002-12-26 Samsung Electronics Co., Ltd. Method and circuit for deciding quantizing interval in video encoder
US6031929A (en) * 1996-07-18 2000-02-29 University Of Pittsburgh Image facsimile with real time image segmentation
JPH10150659A (en) * 1996-11-18 1998-06-02 Sony Corp Picture encoder
JP2955266B2 (en) * 1998-03-05 1999-10-04 ATR Adaptive Communications Research Laboratories Method and apparatus for optimizing quantization table for image encoding and recording medium
ATE333758T1 (en) * 1998-05-04 2006-08-15 Gen Instrument Corp METHOD AND DEVICE FOR INVERSE QUANTIZATION OF MPEG-4 VIDEO
JP3395892B2 (en) * 1999-05-06 2003-04-14 NEC Corporation Video encoding device
DE60039689D1 (en) * 2000-07-10 2008-09-11 St Microelectronics Srl Method of compressing digital images
US6944226B1 (en) * 2000-10-03 2005-09-13 Matsushita Electric Corporation Of America System and associated method for transcoding discrete cosine transform coded signals
US20030031371A1 (en) * 2001-08-02 2003-02-13 Shinichi Kato Image encoding apparatus and image decoding apparatus
JP2003046789A (en) * 2001-08-02 2003-02-14 Canon Inc Image coding apparatus and image decoding apparatus
JP3948266B2 (en) * 2001-12-14 2007-07-25 Victor Company of Japan, Ltd. Moving picture coding apparatus, coding method, decoding apparatus, decoding method, and moving picture code string transmission method
KR101134220B1 (en) * 2004-06-02 2012-04-09 Panasonic Corporation Picture coding apparatus and picture decoding apparatus
JP4146444B2 (en) * 2005-03-16 2008-09-10 Toshiba Corporation Video encoding method and apparatus

Also Published As

Publication number Publication date
WO2007094100A1 (en) 2007-08-23
EP1986440A1 (en) 2008-10-29
EP1986440A4 (en) 2010-11-17
AU2006338425B2 (en) 2010-12-09
RU2008136882A (en) 2010-03-20
JPWO2007094100A1 (en) 2009-07-02
CN101401435A (en) 2009-04-01
AU2006338425A1 (en) 2007-08-23
KR101035754B1 (en) 2011-05-20
BRPI0621340A2 (en) 2011-12-06
RU2414093C2 (en) 2011-03-10
KR20080085909A (en) 2008-09-24
US20070189626A1 (en) 2007-08-16

Similar Documents

Publication Publication Date Title
AU2006338425B2 (en) Moving image encoding/decoding method and device and program
JP6780097B2 (en) Multidimensional quantization technology for video coding / decoding systems
KR100977101B1 (en) Image encoding/image decoding method and image encoding/image decoding apparatus
US7792193B2 (en) Image encoding/decoding method and apparatus therefor
KR101538704B1 (en) Method and apparatus for coding and decoding using adaptive interpolation filters
EP2136566A1 (en) Image encoding and image decoding method and device
DK2681914T3 (en) Quantized pulse code modulation in video encoding
JP4844449B2 (en) Moving picture encoding apparatus, method, program, moving picture decoding apparatus, method, and program
EP1877959A2 (en) System and method for scalable encoding and decoding of multimedia data using multiple layers
WO2013009896A1 (en) Pixel-based intra prediction for coding in hevc
JPWO2010001999A1 (en) Video encoding / decoding method and apparatus
KR102110227B1 (en) Method And Apparatus For Video Encoding And Decoding
KR20090103675A (en) Method for coding/decoding a intra prediction mode of video and apparatus for the same
JP2007266861A (en) Image encoding device
Takamura et al. Lossless scalable video coding with H.264 compliant base layer
MX2008010316A (en) Moving image encoding/decoding method and device and program
WO2012173449A2 (en) Video encoding and decoding method and apparatus using same
JP2021520698A (en) Video coding and decryption.
Joshi et al. Proposed H.264/AVC for Real Time Applications in DVB-H Server

Legal Events

Date Code Title Description
EEER Examination request
FZDE Discontinued

Effective date: 20130723