US20070183505A1 - Motion vector predictive encoding method, motion vector decoding method, predictive encoding apparatus and decoding apparatus, and storage media storing motion vector predictive encoding and decoding programs - Google Patents

Motion vector predictive encoding method, motion vector decoding method, predictive encoding apparatus and decoding apparatus, and storage media storing motion vector predictive encoding and decoding programs Download PDF

Info

Publication number
US20070183505A1
US20070183505A1 (application US11/726,971)
Authority
US
United States
Prior art keywords
motion vector
motion
vector
small block
representative
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/726,971
Inventor
Atsushi Shimizu
Hirohisa Jozawa
Kazuto Kamikura
Hiroshi Watanabe
Atsushi Sagata
Seishi Takamura
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nippon Telegraph and Telephone Corp
Original Assignee
Nippon Telegraph and Telephone Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Telegraph and Telephone Corp filed Critical Nippon Telegraph and Telephone Corp
Priority to US11/726,971 priority Critical patent/US20070183505A1/en
Assigned to NIPPON TELEGRAPH AND TELEPHONE CORPORATION reassignment NIPPON TELEGRAPH AND TELEPHONE CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: JOZAWA, HIROHISA, KAMIKURA, KAZUTO, SAGATA, ATSUSHI, SHIMIZU, ATSUSHI, TAKAMURA, SEISHI, WATANABE, HIROSHI
Publication of US20070183505A1 publication Critical patent/US20070183505A1/en
Priority to US13/738,539 priority patent/US9154789B2/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/513 Processing of motion vectors
    • H04N19/517 Processing of motion vectors by encoding
    • H04N19/52 Processing of motion vectors by encoding by predictive encoding
    • H04N19/527 Global motion vector estimation
    • H04N19/537 Motion estimation other than block-based

Definitions

  • the present invention relates to motion vector predictive encoding and decoding methods, predictive encoding and decoding apparatuses, and storage media storing motion vector predictive encoding and decoding programs. These methods, apparatuses, and storage media are used for motion-compensating interframe prediction for motion picture encoding.
  • the interframe predictive coding method for coding motion pictures (i.e., video data) is known, in which an already-encoded frame is used as a prediction signal so as to reduce temporal redundancy.
  • a motion-compensating interframe prediction method is used in which a motion-compensated picture signal is used as a prediction signal.
  • the number and the kinds of components of the motion vector used for the motion compensation are determined depending on the assumed motion model used as a basis. For example, in a motion model in which only translational movement is considered, the motion vector consists of components corresponding to horizontal and vertical motions. In another motion model in which extension and contraction are also considered in addition to the translational movement, the motion vector consists of components corresponding to horizontal and vertical motions, and a component corresponding to the extending or contracting motion.
  • the motion compensation is executed for each small area obtained by dividing a picture into a plurality of areas such as small blocks, and each divided area has an individual motion vector. It is known that the motion vectors belonging to neighboring areas including adjacent small areas have a higher correlation. Therefore, in practice, the motion vector of an area to be encoded is predicted based on the motion vector of an area which neighbors the area to be encoded, and a prediction error generated at the prediction is variable-length-encoded so as to reduce the redundancy of the motion vector.
  • the picture to be encoded is divided into small blocks so as to motion-compensate each small block, and the motion vector of a small block to be encoded (hereinbelow, called the “target small block”) is predicted based on the motion vector of a small block which has already been encoded.
  • An encoding method for avoiding such an increase of the amount of generated codes is known, in which the motion-vector encoding is performed using a method, selected from a plurality of motion-compensating methods, which minimizes the prediction error with respect to the target block.
  • the following is an example of such an encoding method in which two motion-compensating methods are provided, one method corresponding to a translational motion model, the other corresponding to a translational motion and extending/contracting motion model, and one of the two motion-compensating methods is chosen.
  • FIG. 9 shows a translational motion model (see part (a)) and a translational motion and extending/contracting motion model (see part (b)).
  • In the translational motion model, the motion of a target object is represented using a translational motion component (x, y).
  • In the translational motion and extending/contracting motion model, the motion of a target object is represented using a component (x, y, z), in which parameter z, indicating the amount of extension or contraction of the target object, is added to the translational motion component (x, y).
  • When the target object contracts, parameter z has a value corresponding to the contraction (see part (b)).
  • x, y, and z respectively indicate horizontal, vertical, and extending/contracting direction components.
  • Here, the unit for motion compensation is a small block: the active motion-compensating method may be switched for each small block in accordance with the present prediction efficiency, and the motion vector is predicted based on the motion vector of an already-encoded small block.
  • the prediction error of the motion vector is calculated by the following equations (1) and (2):
      d_x,y = v1_x,y(i) − v1_x,y(i−1)   (1)
      d_x,y,z = v2_x,y,z(i) − v2_x,y,z(i−1)   (2)
    where v1_x,y(i) and v2_x,y,z(i) mean the components of the motion vector of the target small block, and v1_x,y(i−1) and v2_x,y,z(i−1) mean the components of the motion vector of a small block of the previous frame.
  • the prediction errors d_x,y and d_x,y,z are calculated and encoded so as to transmit the encoded data to the decoding side. Even if the size of each small block differs between the motion-compensating methods, the motion vector predictive encoding is similarly performed as long as the motion model is the same.
  • the predicted value for each component is set to 0 and the original values of each component of the target small block are transmitted to the decoding side.
  • the redundancy of the motion vector with respect to the motion-compensating interframe predictive encoding can be reduced and the amount of generated codes of the motion vector can be reduced.
  • the motion vector which has been encoded using the above-described encoding method is decoded in a manner such that the prediction error is extracted from the encoded data sequence, and the motion vector of the small block to be decoded (i.e., the target small block) is decoded by adding the prediction error to the motion vector which has already been decoded; see the following equations (3) and (4):
      v1_x,y(i) = v1_x,y(i−1) + d_x,y   (3)
      v2_x,y,z(i) = v2_x,y,z(i−1) + d_x,y,z   (4)
    where v1_x,y(i) and v2_x,y,z(i) mean the components of the motion vector of the target small block, and v1_x,y(i−1) and v2_x,y,z(i−1) mean the components of the already-decoded motion vector.
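  • As an informal illustration of relations (1)-(4) (a minimal Python sketch; function and variable names are illustrative and not part of the patent), the encoder transmits the componentwise difference between the current and previous motion vectors, and the decoder adds the difference back:

        def predict_encode(v_curr, v_prev):
            # Prediction error: componentwise difference (equations (1), (2)).
            return tuple(c - p for c, p in zip(v_curr, v_prev))

        def predict_decode(d, v_prev):
            # Reconstruction: add the error to the already-decoded vector
            # (equations (3), (4)).
            return tuple(e + p for e, p in zip(d, v_prev))

        # Translational model (x, y):
        v1_prev, v1_curr = (3.0, -1.5), (4.0, -2.0)
        d1 = predict_encode(v1_curr, v1_prev)          # (1.0, -0.5)
        assert predict_decode(d1, v1_prev) == v1_curr

        # Translational plus extending/contracting model (x, y, z):
        v2_prev, v2_curr = (3.0, -1.5, 1.25), (4.0, -2.0, 1.5)
        d2 = predict_encode(v2_curr, v2_prev)          # (1.0, -0.5, 0.25)
        assert predict_decode(d2, v2_prev) == v2_curr
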
  • the MPEG-4 adopts a global motion-compensating method for predicting the general change or movement of a picture caused by panning, tilting and zooming operations of the camera (refer to “MPEG-4 Video Verification Model Version 7.0”, ISO/IEC JTC1/SC29/WG11N1682, MPEG Video Group, April, 1997).
  • A picture to be encoded (i.e., target picture) 31 and reference picture 33 stored in frame memory 32 are input into global motion detector 34 so as to determine global motion parameters 35 with respect to the entire picture.
  • the projective transformation and the affine transformation may be used as the motion model.
  • the projective transformation can be represented using the following equations (5) and (6):
      x′ = (ax + by + tx) / (px + qy + s)   (5)
      y′ = (cx + dy + ty) / (px + qy + s)   (6)
  • the affine transformation can be represented using the following equations (7) and (8):
      x′ = ax + by + tx   (7)
      y′ = cx + dy + ty   (8)
  • Parameters “tx” and “ty” respectively represent the amounts of translational motion in the horizontal and vertical directions.
  • Parameter “a” represents extension/contraction or inversion in the horizontal direction, and
  • parameter “d” represents extension/contraction or inversion in the vertical direction.
  • Parameter “b” represents shear in the horizontal direction, and
  • parameter “c” represents shear in the vertical direction.
  • the affine transformation used as the motion model enables the representation of various motions such as translational movement, extension/contraction, inversion, shear, and rotation, and any combination of these motions.
  • a projective transformation having eight or nine parameters can represent more complicated motions or deformations.
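  • For illustration, the per-pixel motion vector implied by the six-parameter affine model of equations (7) and (8) is the displacement (x′ − x, y′ − y), as in the following Python sketch (the function name is illustrative):

        def affine_motion_vector(x, y, a, b, c, d, tx, ty):
            # Displaced position under equations (7) and (8).
            x_new = a * x + b * y + tx
            y_new = c * x + d * y + ty
            # The motion vector of pixel (x, y) is its displacement.
            return (x_new - x, y_new - y)

        # Pure translation (a = d = 1, b = c = 0) moves every pixel equally.
        assert affine_motion_vector(8, 4, 1, 0, 0, 1, 2.5, -1.0) == (2.5, -1.0)
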
  • the global motion parameters 35 determined by the global motion detector 34 , and reference picture 33 stored in the frame memory 32 are input into global motion compensator 36 .
  • the global motion compensator 36 generates a global motion-compensating predicted picture 37 by making the motion vector of each pixel, determined based on the global motion parameters 35 , act on the reference picture 33 .
  • the reference picture 33 stored in the frame memory 32 , and the input picture 31 are input into local motion detector 38 .
  • the local motion detector 38 detects, for each macro block (16 pixels × 16 lines), motion vector 39 between input picture 31 and reference picture 33.
  • the local motion compensator 40 generates a local motion-compensating predicted picture 41 based on the motion vector 39 of each macro block and the reference picture 33 .
  • This method is the same as the conventional motion-compensating method used in MPEG or the like.
  • one of the global motion-compensating predicted picture 37 and the local motion-compensating predicted picture 41 is chosen by the encoding mode selector 42 for each macro block. If the global motion compensation is chosen, the local motion compensation is not performed in the relevant macro block; thus, motion vector 39 is not encoded.
  • the predicted picture 43 chosen via the encoding mode selector 42 is input into subtracter 44 , and picture 45 corresponding to the difference between the input picture 31 and the predicted picture 43 is converted into DCT (discrete cosine transformation) coefficient 47 by DCT section 46 .
  • the DCT coefficient 47 is then converted into quantized index 49 in quantizer 48 .
  • the quantized index 49 is encoded by quantized-index encoder 57
  • encoded-mode choice information 56 is encoded by encoded-mode encoder 58
  • motion vector 39 is encoded by motion-vector encoder 59
  • global motion parameters 35 are encoded by global-motion-parameter encoder 60 .
  • the quantized index 49 is inverse-converted into a quantization representative value 51 by inverse quantizer 50 , and is further inverse-converted into difference picture 53 by inverse-DCT section 52 .
  • the difference picture 53 and predicted picture 43 are added to each other by adder 54 so that local decoded picture 55 is generated.
  • This local decoded picture 55 is stored in the frame memory 32 and is used as a reference picture at the encoding of the next frame.
  • On the decoding side, the multiplexed and encoded bit stream is separated into its elements, and the elements are respectively decoded.
  • the quantized-index decoder 61 decodes quantized index 49
  • encoded-mode decoder 62 decodes encoded-mode choice information 56
  • motion-vector decoder 63 decodes motion vector 39
  • global-motion-parameter decoder 64 decodes global motion parameters 35 .
  • the reference picture 33 stored in the frame memory 68 and global motion parameters 35 are input into global motion compensator 69 so that global motion-compensated picture 37 is generated.
  • the reference picture 33 and motion vector 39 are input into local motion compensator 70 so that local motion-compensating predicted picture 41 is generated.
  • the encoded-mode choice information 56 activates switch 71 so that one of the global motion-compensated picture 37 and the local motion-compensated picture 41 is output as predicted picture 43 .
  • the quantized index 49 is inverse-converted into quantization representative value 51 by inverse-quantizer 65 , and is further inverse-converted into difference picture 53 by inverse-DCT section 66 .
  • the difference picture 53 and predicted picture 43 are added to each other by adder 67 so that local decoded picture 55 is generated.
  • This local decoded picture 55 is stored in the frame memory 68 and is used as a reference picture when decoding the next frame.
  • one of the predicted pictures of the global motion compensation and the local motion compensation, whichever has the smaller error, is chosen for each macro block so that the prediction efficiency of the entire frame is improved.
  • the motion vector is not encoded in the macro block to which the global motion compensation is adopted; thus, the generated codes can be reduced by the amount necessary for conventional encoding of the motion vector.
  • In FIG. 10, it is assumed that in the motion compensation of target small blocks Boa and Bob, the target small block Boa is motion-compensated using the method corresponding to the translational motion model and referring to small block Bra included in the reference frame, while the target small block Bob is motion-compensated using the method corresponding to the translational motion and extending/contracting motion model and referring to small block Brb included in the reference frame.
  • In the latter case, small block Brb in the reference frame to be referred to is extended. Therefore, the translational motion components of the motion vectors va and vb in FIG. 10 have almost the same values, and redundancy exists between them.
  • The operations of motion-vector encoder 59 in FIG. 11 are as follows. As shown in FIG. 13, three motion vectors, i.e., motion vector MV1 of the left block, motion vector MV2 of the block immediately above, and motion vector MV3 of the block diagonally above to the right, are referred to so as to obtain a median thereof as a predicted value of the motion vector MV of the present block.
  • the predicted value PMV of the motion vector MV of the present block is defined using the following equation (9):
      PMV = median(MV1, MV2, MV3)   (9)
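  • A minimal Python sketch of equation (9), taking the median per component (names are illustrative):

        def median3(a, b, c):
            # Middle value of three numbers.
            return sorted((a, b, c))[1]

        def predict_pmv(mv1, mv2, mv3):
            # PMV = median(MV1, MV2, MV3), applied componentwise.
            return (median3(mv1[0], mv2[0], mv3[0]),
                    median3(mv1[1], mv2[1], mv3[1]))

        # Left, immediately-above, and diagonally-above-right block vectors:
        assert predict_pmv((2, 3), (5, -1), (4, 0)) == (4, 0)
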
  • the following seven kinds of ranges are defined with respect to the size of the local motion vector, and the used range is communicated to the decoder by using a codeword “fcode” included in the bit stream.
  • List 1:
      fcode   Range of motion vector
        1     −16 to +15.5 pixels
        2     −32 to +31.5 pixels
        3     −64 to +63.5 pixels
        4     −128 to +127.5 pixels
        5     −256 to +255.5 pixels
        6     −512 to +511.5 pixels
        7     −1024 to +1023.5 pixels
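  • The ranges in List 1 follow a simple doubling rule: each increment of fcode doubles the representable range, with the positive end shortened by one half-pel. A small Python sketch (illustrative function name) reproduces the table:

        def mv_range(fcode):
            # fcode 1 -> +/-16, fcode 2 -> +/-32, ..., fcode 7 -> +/-1024.
            half = 16 * (1 << (fcode - 1))
            return (-half, half - 0.5)

        assert mv_range(1) == (-16, 15.5)
        assert mv_range(7) == (-1024, 1023.5)
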
  • the global motion parameters used in MPEG-4 may have a wide range of −2048 to +2047.5; thus, the motion vector determined based on the global motion vector may have a value from −2048 to +2047.5.
  • if such a global-motion-derived vector is used as the predicted vector of a local motion vector (Vx, Vy), the resulting prediction error (MVDx, MVDy) can have absolute values larger than those of the motion vector (Vx, Vy) itself.
  • the smaller the absolute values of the prediction error (MVDx, MVDy), the shorter the length of the codeword assigned to the prediction error. Therefore, there is a disadvantage in that the amount of code is increased due to the prediction of the motion vector.
  • The objective of the present invention is to provide a motion vector predictive encoding method, a motion vector decoding method, a predictive encoding apparatus, a decoding apparatus, and computer-readable storage media storing motion vector predictive encoding and decoding programs, which reduce the amount of generated code with respect to the motion vector and improve the efficiency of the motion-vector prediction.
  • the motion vector predictive encoding method, predictive encoding apparatus, and motion vector predictive encoding program stored in a computer-readable storage medium relate to a motion vector predictive encoding method in which a target frame to be encoded is divided into small blocks and a motion-compensating method to be applied to each target small block to be encoded is selectable from among a plurality of motion-compensating methods.
  • the motion vector of the target small block is predicted based on the motion vector of an already-encoded small block
  • the motion model of the motion vector of the target small block differs from the motion model of the motion vector of the already-encoded small block
  • the motion vector of the target small block is predicted by converting the motion vector of the already-encoded small block for the prediction into one suitable for the motion model of the motion vector used in the motion-compensating method of the target small block and by calculating a predicted vector, and a prediction error of the motion vector is encoded.
  • When decoding, the motion vector of the already-decoded small block is converted into one suitable for the motion model of the motion vector used in the motion-compensating method of the target small block, a predicted vector is calculated based on the converted motion vector of the already-decoded small block, and the motion vector is decoded by adding the prediction error to the predicted vector.
  • There are two types of motion-compensating methods, as described above: global motion compensation (for compensating a global motion over a plurality of areas) and local motion compensation (for compensating a local motion for each area).
  • the present invention provides another motion vector predictive encoding method, predictive encoding apparatus, and motion vector predictive encoding program stored in a computer-readable storage medium.
  • This case relates to a motion vector predictive encoding method in which a global motion-compensating method and a local motion-compensating method are switchable for each target small block to be encoded, wherein when the motion vector of the target small block is predicted based on the motion vector of an already-encoded small block, and if the motion-compensating method of the target small block differs from the motion-compensating method of the already-encoded small block, then a predicted vector is calculated by converting the format of the motion vector of the already-encoded small block for the prediction into a format of the motion vector used in the motion-compensating method of the target small block, and the motion vector of the target small block is predicted and a prediction error of the motion vector is encoded.
  • the motion-compensating method used for the target small block is the local motion-compensating method and the motion-compensating method used for the already-encoded small block for the prediction is the global motion-compensating method, then a local motion vector of the already-encoded small block is calculated based on the global motion vector, the motion vector of the target small block is predicted, and the prediction error of the motion vector is encoded.
  • if the predicted vector of the target small block for which the local motion compensation is chosen is predicted based on the global motion parameters and its value is not within a predetermined range, then the predicted vector is clipped to have a value within the predetermined range.
  • alternatively, if the motion vector of the block for which the local motion compensation is chosen is predicted based on the global motion parameters and the predicted value is not within the predetermined range, then the value of the predicted vector is set to 0.
  • In the decoding method, decoding apparatus, and decoding program stored in a computer-readable storage medium for decoding the motion vector encoded using the above-mentioned motion vector predictive encoding methods, predictive encoding apparatuses, and motion vector predictive encoding programs stored in computer-readable storage media, if the motion-compensating method of the target small block to be decoded differs from the motion-compensating method of the already-decoded small block, then the format of the motion vector of the already-decoded small block is converted into the format of the motion vector used in the motion-compensating method of the target small block, a predicted vector of the motion vector of the target small block is calculated, and the motion vector is decoded by adding the prediction error to the predicted vector.
  • if the motion-compensating method used for the target small block to be decoded is the local motion-compensating method and the motion-compensating method used for the already-decoded small block for the prediction is the global motion-compensating method, then a local motion vector of the already-decoded small block is calculated based on the global motion vector, a predicted vector with respect to the motion vector of the target small block is calculated, and the motion vector is decoded by adding the prediction error to the predicted vector.
  • In the predictive encoding apparatus, when the motion vector of the small block in any motion model is converted into a motion vector in a translational motion model, a representative motion vector of the small block is calculated based on the motion vector in the translational motion model determined for each pixel in the small block.
  • each component of the representative motion vector is set to a statistic (one of the average, intermediate value, median, mode, maximum value, and minimum value).
  • When decoding, if the predicted value of the local motion vector which is calculated based on the global motion parameters is not within a predetermined range, the predicted vector is clipped to have a value within the predetermined range. Alternatively, if the predicted value of the local motion vector which is calculated based on the global motion parameters is not within a predetermined range, the value of the predicted vector is set to 0.
  • the motion vector can be predicted between motion vectors of different motion models, and also between motion vectors of the global motion compensation and the local motion compensation. Therefore, the amount of generated code with respect to the motion vector can be reduced.
  • the prediction error (MVDx, MVDy) is “(−15.5, −27)” and the absolute values thereof are smaller than those obtained by the above-mentioned conventional method, in which the prediction error is “(−54, −38.5)”.
  • the smaller the absolute values of the prediction error, the shorter the length of the codeword assigned to a difference between two motion vectors; thus, the total amount of code can be reduced.
  • the prediction error (MVDx, MVDy) is “(+48, +36.5)” and the absolute values thereof are larger than those obtained by the above clipping method, in which the prediction error is “(−15.5, −27)”, but smaller than those obtained by the above-mentioned conventional method, in which the prediction error is “(−54, −38.5)”.
  • the predicted vector (PVx, PVy) is “(0, 0)” and the prediction error (MVDx, MVDy) is “(+3, +1.5)”. Therefore, in this example, the absolute values in the “0”-setting method can be smaller than those in the clipping method.
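  • The two fallback policies compared above can be sketched as follows (Python; the numeric values below are illustrative and are not the patent's worked example):

        def clip_predictor(pv, mv_min, mv_max):
            # Clipping method: force each component into [mv_min, mv_max].
            return tuple(min(max(c, mv_min), mv_max) for c in pv)

        def zero_predictor(pv, mv_min, mv_max):
            # "0"-setting method: discard an out-of-range predictor entirely.
            if all(mv_min <= c <= mv_max for c in pv):
                return pv
            return (0.0,) * len(pv)

        pv = (57.0, 40.0)                        # GMC-derived, out of range
        print(clip_predictor(pv, -16.0, 15.5))   # (15.5, 15.5)
        print(zero_predictor(pv, -16.0, 15.5))   # (0.0, 0.0)
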
  • FIG. 1 is a flowchart showing the operations of the motion vector predictive encoding method of the first embodiment according to the present invention.
  • FIG. 2 is a flowchart showing the operations of the motion vector decoding method of the first embodiment according to the present invention.
  • FIG. 3 is a block diagram showing the functional structure of the motion vector predictive encoding apparatus of the first embodiment according to the present invention.
  • FIG. 4 is a block diagram showing the functional structure of the motion vector decoding apparatus of the first embodiment according to the present invention.
  • FIG. 5 is a flowchart showing the operations of the motion vector predictive encoding method of the second embodiment according to the present invention.
  • FIG. 6 is a flowchart showing the operations of the motion vector decoding method of the second embodiment according to the present invention.
  • FIG. 7 is a block diagram showing the functional structure of the motion vector predictive encoding apparatus of the second embodiment according to the present invention.
  • FIG. 8 is a block diagram showing the functional structure of the motion vector decoding apparatus of the second embodiment according to the present invention.
  • FIG. 9 is a diagram showing examples of the motion model, part (a) showing a translational motion model and part (b) showing a translational motion and extending/contracting motion model.
  • FIG. 10 is a diagram showing motion vectors of the motion model.
  • FIG. 11 is a diagram showing the structure of an example of the encoder for encoding moving pictures (the MPEG-4 encoder).
  • FIG. 12 is a diagram showing the structure of the decoder corresponding to the encoder in FIG. 11 (the MPEG-4 decoder).
  • FIG. 13 is a diagram showing the arrangement of reference blocks in the motion vector prediction of the MPEG-4.
  • the global motion-compensating method (GMC) and the local motion-compensating method (LMC) are switchable for each small block.
  • the motion model used in the GMC is determined in consideration of translational movement and extension/contraction, while the motion model used in the LMC is a translational motion model.
  • In the GMC, the motion vector is not assigned to each small block. Therefore, prediction is performed with respect to a motion vector used for the local motion compensation.
  • the format of the motion vector is converted into that of the translational motion model.
  • a representative vector of the small block is calculated. In the calculation, the average of the motion vectors which were calculated for each pixel of the small block is determined as the representative vector.
  • the size of each small block is M × N pixels.
  • In step S1, the motion-compensating mode of the target small block is determined. If the mode corresponds to the GMC, then the predictive encoding of the motion vector is not executed and the operation of the motion-vector encoding is terminated.
  • In step S2, the encoding mode of a small block which was already encoded is determined. If it is determined that the encoding mode is the intraframe coding mode, then the operation shifts to step S3.
  • Here, GMV denotes the global motion vector, and (m, n) indicates the position of each pixel in the small block.
  • the representative motion vector is calculated based on the average of the motion vectors v_i-1(m, n) of each pixel.
  • dx_i = x_i − x_vp   (12)
  • dy_i = y_i − y_vp   (13)
  • In step S8, the prediction errors dx_i and dy_i calculated in step S7 are subjected to reversible encoding such as Huffman encoding, and the motion-vector encoding operation is finished.
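  • The encoder-side path of steps S5-S8 can be sketched as follows (Python; an affine global motion model is assumed for illustration, and all names are illustrative): a translational vector is derived for every pixel of the already-encoded small block, averaged into the representative vector vp, and the componentwise differences of equations (12) and (13) are what get reversibly encoded:

        def representative_vector(gmv, x0, y0, M, N):
            # Average of the per-pixel translational vectors over an
            # M x N small block whose top-left pixel is (x0, y0).
            a, b, c, d, tx, ty = gmv
            sx = sy = 0.0
            for n in range(N):
                for m in range(M):
                    x, y = x0 + m, y0 + n
                    sx += (a * x + b * y + tx) - x
                    sy += (c * x + d * y + ty) - y
            return (sx / (M * N), sy / (M * N))

        def prediction_error(mv, vp):
            # Equations (12) and (13): dx = x_i - x_vp, dy = y_i - y_vp.
            return (mv[0] - vp[0], mv[1] - vp[1])

        vp = representative_vector((1.0, 0.0, 0.0, 1.0, 2.0, -1.0),
                                   16, 16, 16, 16)   # pure translation
        dx, dy = prediction_error((3.0, -0.5), vp)   # (1.0, 0.5) -> encoded
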
  • In step S11, the motion-compensating mode of the target small block is determined. If the determined mode corresponds to the GMC, then it is determined that the predictive encoding of the motion vector was not executed, and the decoding operation is terminated.
  • If the mode corresponds to the LMC in step S11, then the operation shifts to step S12, where the prediction errors dx_i and dy_i, which were reversibly encoded, are decoded. The operation then shifts to step S13, where the encoding mode of the already-decoded small block is determined. If it is determined that the encoding mode is the intraframe coding mode, then the predicted vector vp is set to (0, 0) in step S14, and the operation shifts to step S18 (explained later).
  • Here, GMV denotes the global motion vector, and (m, n) indicates the position of each pixel in the small block.
  • the representative motion vector is calculated based on the average of the motion vectors v_i-1(m, n) of each pixel.
  • When the process of step S18 is finished, the motion-vector decoding operation is also finished.
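  • The decoder mirror of steps S12-S18 reduces to a single addition once the predicted vector vp has been derived from already-decoded data only (the GMV or the previously decoded local motion vector), which keeps encoder and decoder in lockstep. A minimal sketch (illustrative names), continuing the encoder example above:

        def decode_mv(dmv, vp):
            # mv = vp + dmv, per component (step S18).
            return (vp[0] + dmv[0], vp[1] + dmv[1])

        # vp = (2.0, -1.0); decoded prediction error dmv = (1.0, 0.5).
        assert decode_mv((1.0, 0.5), (2.0, -1.0)) == (3.0, -0.5)
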
  • In step S6 in FIG. 1 and step S17 in FIG. 2 in the above embodiment, the representative motion vector is calculated using the average; however, the following statistics may be used.
  • a set of components x_i-1(m, n) of the motion vector for each pixel is indicated by X_i-1, and a set of components y_i-1(m, n) of the motion vector for each pixel is indicated by Y_i-1.
  • the frequency distribution is examined for each component of the motion vector, and the value corresponding to the maximum frequency is determined as the relevant component of the representative motion vector.
  • each component of the motion vector is arranged in order of size:
      xs_1 ≤ xs_2 ≤ … ≤ xs_n-1 ≤ xs_n   (32)
      ys_1 ≤ ys_2 ≤ … ≤ ys_n-1 ≤ ys_n   (33)
    where n means the number of pixels of the small block, and xs and ys are respectively obtained by rearranging the components X_i-1 and Y_i-1 (of the motion vector for each pixel) in order of size.
  • the representative motion vector is calculated based on the above rearranged order of the components of the motion vector; for example, the minimum is (xs_1, ys_1), the maximum is (xs_n, ys_n), and the median is the central value of each rearranged sequence.
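  • Each statistic is applied to the component sets X_i-1 and Y_i-1 independently, as in this Python sketch (names are illustrative):

        from statistics import mean, median, mode

        def reduce_component(values, stat):
            # Reduce one component set (X_i-1 or Y_i-1) to a scalar.
            if stat == "average":
                return mean(values)
            if stat == "intermediate":      # midpoint of the range
                return (max(values) + min(values)) / 2
            if stat == "median":
                return median(values)
            if stat == "mode":              # value with the maximum frequency
                return mode(values)
            if stat == "max":
                return max(values)
            if stat == "min":
                return min(values)
            raise ValueError(stat)

        xs = [1.0, 1.0, 2.0, 3.0]   # x components of the per-pixel vectors
        ys = [0.5, 0.5, 0.5, 1.0]   # y components
        rep = (reduce_component(xs, "mode"),
               reduce_component(ys, "mode"))   # (1.0, 0.5)
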
  • the motion vector of any small block which was encoded before the target small block was encoded can be used as the motion vector of the already-encoded small block.
  • the predicted vector may be calculated using the motion vectors of a plurality of already-encoded small blocks.
  • FIG. 3 shows the functional structure of the motion vector encoding apparatus for performing the motion-vector encoding according to the above-explained motion vector encoding method.
  • the global motion vector is input once for a picture (plane), and the local motion vector is input only when the target small block is local-compensated.
  • the motion vector memory 1 stores the input local motion vector mv of the target small block. When the next local motion vector mv is input, the motion vector memory 1 outputs the stored local motion vector as motion vector mv_t-1 of the already-encoded small block.
  • the representative-motion-vector calculating section 2 has an internal memory.
  • the calculating section 2 reads out the already-encoded global motion vector “gmv” from the internal memory, and calculates a representative motion vector for each small block based on the global motion vector gmv. The calculated result is output as the representative motion vector of the already-encoded small block when the next local motion vector mv is input.
  • Selector 3 operates in one of the following manners.
  • the selector 3 outputs motion vector mv t-1 of the already-encoded small block output from the motion vector memory 1 , or the representative motion vector output from the representative-motion-vector calculating section 2 , according to the motion-compensating mode of the target small block, the encoding mode of the already-encoded small block, and the motion-compensating mode of the already-encoded small block.
  • Alternatively, the selector 3 outputs no motion vector; that is, neither of the two motion vectors is chosen (this state is called a “neutral state” hereinbelow).
  • Subtracter 4 subtracts the output from the selector 3 from the local motion vector mv t of the target small block, and outputs prediction error dmv t .
  • the motion-vector encoding section 5 executes the variable-length encoding of the prediction error dmv t which is output from the subtracter 4 .
  • the encoding section 5 outputs the encoded results as local motion vector information.
  • the encoding section 6 executes the variable-length encoding of the global motion vector gmv, and outputs the encoded results as global motion vector information.
  • the operations of the encoding apparatus having the above-explained structure will be explained below.
  • the supplied global motion vector gmv is input into both the representative-motion-vector calculating section 2 and the motion-vector encoding section 6 .
  • In the motion-vector encoding section 6, the input data is variable-length-encoded and then output to the outside.
  • In the representative-motion-vector calculating section 2, the above global motion vector gmv is stored into the internal memory.
  • In this case, the local motion vector is not supplied from an external device, and the selector 3 detects that the supplied motion vector is the global motion vector and thus enters the neutral state. Accordingly, no local motion vector information is output from the motion-vector encoding section 5.
  • When local motion vector mv_t is supplied from an external device after the above operations are executed, the supplied local motion vector mv_t is input into motion vector memory 1 and subtracter 4.
  • the motion-compensating method of the target block is the LMC
  • m and n indicate the relevant position of the pixel in the small block.
  • a representative motion vector is calculated from the calculated motion vector v t-1 (m, n), and is output as the motion vector of the already-encoded small block (this process corresponds to step S 6 in FIG. 1 ).
  • the representative motion vector is calculated based on the average of the motion vector v t-1 (m, n) for each pixel.
  • the selector 3 determines the encoding mode of the already-encoded small block. If the mode is the intraframe coding mode, the selector is in the neutral state and no signal is output from the selector (this process corresponds to step S 3 in FIG. 1 ). Therefore, the supplied local motion vector mv t passes through the subtracter 4 (this process corresponds to the operation using the above-described equations (14) and (15), performed in step S 7 in FIG. 1 ), and the motion vector is variable-length-encoded in the motion-vector encoding section 5 (this process corresponds to step S 8 in FIG. 1 ) and is output to the outside.
  • the motion-compensating mode of the already-encoded small block is then determined (this process corresponds to step S 4 in FIG. 1 ).
  • If the motion-compensating mode of the already-encoded small block is the GMC, then the selector 3 chooses the representative motion vector output from the representative-motion-vector calculating section 2, and outputs the vector into subtracter 4.
  • In the subtracter 4, the representative motion vector is subtracted from the motion vector mv_t of the target small block (this process corresponds to the operation using the above-described equations (12) and (13), performed in step S7 in FIG. 1), and the result of the subtraction is output as the prediction error dmv_t into the motion-vector encoding section 5, where the prediction error is variable-length-encoded (this process corresponds to step S8 in FIG. 1) and is output to the outside.
  • the supplied local motion vector mv t is input into motion vector memory 1 and subtracter 4 .
  • the motion vector memory 1 outputs the local motion vector mv t-1 which was the previously-input vector.
  • the selector 3 determines the encoding mode of the already-encoded small block. If the mode is the intraframe coding mode, the selector 3 enters the neutral state and no signal is output from the selector (this process corresponds to step S 3 in FIG. 1 ). Therefore, the supplied local motion vector mv t passes through the subtracter 4 (this process corresponds to the operation using the above-described equations (14) and (15), performed in step S 7 in FIG. 1 ), and the motion vector is variable-length-encoded in the motion-vector encoding section 5 (this process corresponds to step S 8 in FIG. 1 ) and is output to the outside.
  • the motion-compensating mode of the already-encoded small block is then determined (this process corresponds to step S 4 in FIG. 1 ).
  • If the motion-compensating mode of the already-encoded small block is the LMC, then the selector 3 chooses the local motion vector mv_t-1 of the already-encoded small block output from motion vector memory 1, and outputs the vector into subtracter 4.
  • In the subtracter 4, the already-encoded local motion vector mv_t-1 is subtracted from the motion vector mv_t of the target small block (this process corresponds to the operation using the above-described equations (16) and (17), performed in step S7 in FIG. 1), and the result of the subtraction is output as the prediction error dmv_t into the motion-vector encoding section 5, where the prediction error is variable-length-encoded (this process corresponds to step S8 in FIG. 1) and is output to the outside.
  • FIG. 4 shows the functional structure of the decoding apparatus for decoding the motion vector according to the above-explained motion vector decoding method.
  • motion-vector decoding section 10 decodes the global motion vector information which is output from the motion vector predictive encoding apparatus as shown in FIG. 3 .
  • the decoding section 10 outputs the decoded result as global motion vector gmv into representative-motion-vector calculating section 12 and the outside.
  • the motion-vector decoding section 11 decodes the local motion vector information which was output from the motion vector predictive encoding apparatus as shown in FIG. 3 .
  • the decoding section 11 outputs the decoded result as the prediction error dmv t into adder 15 .
  • the representative-motion-vector calculating section 12 has an internal memory.
  • the calculating section 12 reads out the already-decoded global motion vector gmv from the internal memory, and calculates a representative motion vector for each small block based on the global motion vector gmv.
  • the calculated result is output as the representative motion vector of the already-decoded small block when the next local motion vector information is decoded by the motion-vector decoding section 11 and the prediction error dmv_t is output.
  • Selector 13 operates in one of the following manners.
  • the selector 13 outputs the representative motion vector output from the representative-motion-vector calculating section 12 , or the motion vector mv t-1 of the already-decoded small block output from the motion vector memory 14 , according to the motion-compensating mode of the target small block, the encoding mode of the already-decoded small block, and the motion-compensating mode of the already-decoded small block.
  • the selector 13 enters the neutral state and outputs no motion vector.
  • Motion vector memory 14 stores the local motion vector mv of the already-decoded small block, which is output from the adder 15.
  • the adder 15 adds the output from selector 13 to the prediction error dmv t output from the motion-vector decoding section 11 , and outputs the added result as local motion vector mv t into motion vector memory 14 and the outside.
  • The operations of the motion vector decoding apparatus having the above-explained structure will be explained below.
  • the supplied global motion vector information is decoded into global motion vector gmv by motion-vector decoding section 10 , and is output into the representative-motion-vector calculating section 12 and the outside. Accordingly, the global motion vector gmv is stored into the internal memory in the representative-motion-vector calculating section 12 .
  • the supplied local motion vector information is decoded into the prediction error dmv t in the motion-vector decoding section 11 (this process corresponds to step S 12 in FIG. 2 ), and is output into adder 15 .
  • m and n indicate the relevant position of the pixel in the small block.
  • the representative motion vector is then calculated from the calculated motion vector v t-1 (m, n), and is output as the motion vector of the already-decoded small block (this process corresponds to step S 17 in FIG. 2 ).
  • the representative motion vector is calculated based on the average of the motion vector v t-1 (m, n) for each pixel.
  • the selector 13 determines the encoding mode of the already-decoded small block. If the mode is the intraframe coding mode, the selector is in the neutral state and no signal is output from the selector (this process corresponds to step S 14 in FIG. 2 ). Therefore, no element is added to the decoded prediction error dmv t , and the prediction error is output as the local motion vector mv t into motion vector memory 14 and the outside (this process corresponds to the operation using the above-described equations (22) and (23), performed in step S 18 in FIG. 2 ).
  • the motion-compensating mode of the already-decoded small block is then determined (this process corresponds to step S 15 in FIG. 2 ).
  • If the motion-compensating mode of the already-decoded small block is the GMC, then the selector 13 chooses the representative motion vector output from the representative-motion-vector calculating section 12, and outputs the vector into adder 15.
  • In the adder 15, the prediction error dmv_t which was decoded in the motion-vector decoding section 11 and the representative motion vector are added to each other (this process corresponds to the operation using the above-described equations (20) and (21), performed in step S18 in FIG. 2), and the added result is output as the local motion vector mv_t into the motion vector memory 14 and the outside.
  • the supplied local motion vector information is decoded into the prediction error dmv t in the motion-vector decoding section 11 and is output into adder 15 .
  • the selector 13 determines the encoding mode of the already-decoded small block again. If the mode is the intraframe coding mode, the selector 13 enters the neutral state and no signal is output from the selector (this process corresponds to step S 14 in FIG. 2 ).
  • If the selector 13 determines that the encoding mode of the already-decoded small block is the interframe coding mode, then the motion-compensating mode of the already-decoded small block is determined (this process corresponds to step S15 in FIG. 2).
  • If the motion-compensating mode of the already-decoded small block is the LMC, then the selector 13 chooses the local motion vector mv_t-1 of the already-decoded small block output from motion vector memory 14, and outputs the vector into adder 15.
  • In the calculation of the representative motion vector, a statistic such as the maximum value, the minimum value, the intermediate value, the mode, or the median may be used, as described in the above items (1-1: encoding method) and (1-2: decoding method).
  • the motion vector predictive encoding method and motion vector decoding method of the second embodiment according to the present invention will be explained.
  • the present encoding and decoding methods differ from those of the first embodiment in an additional operation in which, if the value of the predicted vector which is obtained based on the global motion parameters is not within the range of the local motion vector, the predicted vector is clipped to the minimum value or the maximum value of the range.
  • FIG. 5 shows a flowchart explaining the motion vector predictive encoding method of the second embodiment.
  • steps identical to those in the operations of the motion vector predictive encoding method as shown in FIG. 1 are given identical reference numbers, and detailed explanations are omitted here.
  • the motion vector predictive encoding method as shown in FIG. 5 differs from that as shown in FIG. 1 in the point that after the representative motion vector is calculated in step S 6 , it is determined whether the value of the calculated representative motion vector is within a predetermined range, and if the value is not within the range, the clipping of the value of the representative motion vector is performed.
  • In step S5, the motion vector of the translational motion model is calculated for each pixel of the already-encoded small block, based on the global motion vector GMV.
  • In step S6, the average of the motion vectors (calculated in the above step S5) of each pixel of the already-encoded small block is calculated, and the calculated average is determined as the representative motion vector.
  • the representative motion vector may be not only the average of the motion vectors of each pixel of the already-encoded small block, but also a statistic such as the maximum value, the minimum value, the intermediate value, the mode, or the median.
  • In step S20, it is determined whether the value of the representative motion vector calculated in step S6 is within a predetermined range. If the value is not within the range, the operation shifts to step S21, where the above representative motion vector is clipped so that its value falls within this range.
  • The possible range for representing the motion vector is from MV_min to MV_max. If the value of the representative motion vector is less than MV_min, then the representative motion vector is clipped so as to have value MV_min. If the value of the representative motion vector exceeds MV_max, then the representative motion vector is clipped so as to have value MV_max.
  • If it is determined in step S20 that the value of the representative motion vector is within the predetermined range, then the representative motion vector calculated in step S6 is determined as predicted vector vp.
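  • The clipping rule of steps S20 and S21 fits in one function (Python; illustrative names):

        def clip_representative(v, mv_min, mv_max):
            # Clamp each component into [MV_min, MV_max].
            return tuple(min(max(c, mv_min), mv_max) for c in v)

        assert clip_representative((40.0, -20.0), -16.0, 15.5) == (15.5, -16.0)
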
  • In step S7, a difference between the motion vector of the target small block and the predicted vector (i.e., the prediction error) is calculated.
  • In the MPEG-4, three blocks, i.e., the left, immediately-above, and right-and-diagonally-above blocks, are referred to as shown in FIG. 13. Therefore, the processes of steps S2-S6, S20, S21, and S7 are performed for each block, and the median of the three candidates is determined as the prediction error.
  • In step S8, the prediction error determined in step S7 is encoded, and the encoding operation of the second embodiment is finished.
  • FIG. 6 shows the flowchart explaining the motion vector decoding method of the second embodiment.
  • steps identical to those in the operations of the motion vector decoding method as shown in FIG. 2 are given identical reference numbers, and detailed explanations are omitted here.
  • the motion vector decoding method as shown in FIG. 6 differs from that as shown in FIG. 2 in the point that after the representative motion vector is calculated in step S17, it is determined whether the value of the calculated representative motion vector is within a predetermined range, and if the value is not within the range, the clipping of the value of the representative motion vector is performed.
  • In step S12, the prediction error is decoded. If (i) the encoding mode of the already-decoded small block is determined as the interframe coding mode in step S13, and (ii) the motion-compensating mode of the already-decoded small block is the GMC, then in step S16, the motion vector of the translational motion model is calculated for each pixel of the already-decoded small block, based on the global motion vector GMV.
  • In step S17, the average of the motion vectors (calculated in the above step S16) of each pixel of the already-decoded small block is calculated, and the calculated average is determined as the representative motion vector.
  • the representative motion vector may be not only the average of the motion vectors of each pixel of the already-decoded small block, but also a statistic such as the maximum value, the minimum value, the intermediate value, the mode, or the median.
  • In step S22, it is determined whether the value of the representative motion vector calculated in step S17 is within a predetermined range. If the value is not within the range, the operation shifts to step S23, where the above representative motion vector is clipped so that its value falls within the range.
  • The possible range for representing the motion vector is from MV_min to MV_max. If the value of the representative motion vector is less than MV_min, then the representative motion vector is clipped so as to have value MV_min. If the value of the representative motion vector exceeds MV_max, then the representative motion vector is clipped so as to have value MV_max.
  • If it is determined in step S22 that the value of the representative motion vector is within the predetermined range, then the representative motion vector calculated in step S17 is determined as predicted vector vp.
  • In step S18, the prediction error of the target small block decoded in step S12 and the predicted vector are added.
  • In the MPEG-4, three blocks, i.e., the left, immediately-above, and right-and-diagonally-above blocks, are referred to as shown in FIG. 13. Therefore, the processes of steps S12-S17, S22, S23, and S18 are performed for each block, and the median of the three candidates is determined as the predicted vector. The decoding operation of the second embodiment is then finished.
  • The clipping operation in step S21 uses the maximum or minimum value of a predetermined range; however, the clipping may instead be performed at value 0.
  • the motion vector predictive encoding apparatus for performing the motion-vector predictive encoding according to the motion vector predictive encoding method (refer to FIG. 5 ) of the second embodiment will be explained with reference to FIG. 7 .
  • In FIG. 7, parts identical to those of the motion vector predictive encoding apparatus shown in FIG. 3 are given identical reference numbers, and explanations thereof are omitted here.
  • the motion vector predictive encoding apparatus shown in FIG. 7 differs from that in FIG. 3 in the point that representative-motion-vector clipping section 20 is provided between representative-motion-vector calculating section 2 and one of the input terminals of selector 3 .
  • the representative-motion-vector clipping section 20 determines whether the value of the representative motion vector output from the representative-motion-vector calculating section 2 is within a predetermined range. If the value is not within the range, the clipping section 20 clips the value of the representative motion vector to the maximum or minimum value of the range.
  • In the motion vector predictive encoding apparatus having the above-explained structure, when a representative motion vector is calculated and output from the representative-motion-vector calculating section 2, that is, when processes corresponding to steps S5 and S6 in FIGS. 1 and 5 are performed in the representative-motion-vector calculating section 2, the representative-motion-vector clipping section 20 determines whether the value of the representative motion vector calculated in the representative-motion-vector calculating section 2 is within a predetermined range (this process corresponds to step S20 in FIG. 5).
  • If it is not, the representative motion vector is clipped so as to have a value within this range (this process corresponds to step S21 in FIG. 5), and the representative motion vector after the clipping is output into the selector 3. If the value of the representative motion vector is within the predetermined range, the original representative motion vector calculated in the representative-motion-vector calculating section 2 is output into the selector 3.
  • The possible range for representing the motion vector is from MV_min to MV_max. If the value of the representative motion vector is less than MV_min, then the representative motion vector is clipped so as to have value MV_min. If the value of the representative motion vector exceeds MV_max, then the representative motion vector is clipped so as to have value MV_max.
  • the clipping operation performed in the representative-motion-vector clipping section 20 uses the maximum or minimum value of a predetermined range; however, the clipping may be performed at value 0.
  • the motion vector predictive encoding apparatus for performing the motion-vector predictive encoding according to the motion vector predictive encoding method as shown in the flowchart of FIG. 5 can be realized.
  • the motion vector decoding apparatus for performing the motion-vector decoding according to the motion vector decoding method (refer to FIG. 6 ) of the second embodiment will be explained with reference to FIG. 8 .
  • In FIG. 8, parts identical to those of the motion vector decoding apparatus shown in FIG. 4 are given identical reference numbers, and explanations thereof are omitted here.
  • the motion vector decoding apparatus shown in FIG. 8 differs from that in FIG. 4 in the point that representative-motion-vector clipping section 21 is provided between representative-motion-vector calculating section 12 and one of the input terminals of selector 13 .
  • the representative-motion-vector clipping section 21 determines whether the value of the representative motion vector output from the representative-motion-vector calculating section 12 is within a predetermined range. If the value is not within the range, the clipping section 21 clips the value of the representative motion vector to the maximum or minimum value of the range.
  • In the motion vector decoding apparatus having the above-explained structure, when a representative motion vector is calculated and output from the representative-motion-vector calculating section 12, that is, when processes corresponding to steps S16 and S17 in FIGS. 2 and 6 are performed in the representative-motion-vector calculating section 12, the representative-motion-vector clipping section 21 determines whether the value of the representative motion vector calculated in the representative-motion-vector calculating section 12 is within a predetermined range (this process corresponds to step S22 in FIG. 6).
  • If the value is not within the range, the representative motion vector is clipped so as to have a value within this range (this process corresponds to step S23 in FIG. 6), and the clipped representative motion vector is output to the selector 13. If the value of the representative motion vector is within the predetermined range, the original representative motion vector calculated in the representative-motion-vector calculating section 12 is output to the selector 13.
  • Here, the possible range for representing the motion vector is from MVmin to MVmax. If the value of the representative motion vector is less than MVmin, the representative motion vector is clipped to the value MVmin; if it exceeds MVmax, it is clipped to the value MVmax.
  • In the above operation, the clipping performed in the representative-motion-vector clipping section 21 uses the maximum or minimum value of the predetermined range; alternatively, an out-of-range predicted vector may be set to the value 0.
  • Accordingly, the motion vector decoding apparatus for performing the motion-vector decoding according to the motion vector decoding method shown in the flowchart of FIG. 6 can be realized.
  • Programs for executing the following operations may be stored in a computer-readable storage medium such as a CD-ROM or a floppy disk, and each program stored in the storage medium may be loaded and executed by a computer so as to perform the motion vector predictive encoding: the motion-vector predictive encoding operations shown in the flowcharts of FIGS. 1 and 5, the operations of the motion vector memory 1, representative-motion-vector calculating section 2, selector 3, subtracter 4, and motion-vector encoding sections 5 and 6 in the block diagram of FIG. 3, and the operation of the representative-motion-vector clipping section 20 in the block diagram of FIG. 7.
  • Similarly, programs for executing the following operations may be stored in a computer-readable storage medium such as a CD-ROM or a floppy disk, and each program stored in the storage medium may be loaded and executed by a computer so as to perform the motion vector decoding: the motion-vector decoding operations shown in the flowcharts of FIGS. 2 and 6, and the operations of the motion-vector decoding sections 10 and 11, representative-motion-vector calculating section 12, selector 13, motion vector memory 14, adder 15, and representative-motion-vector clipping section 21 in the block diagram of FIG. 8.

Abstract

A motion vector predictive encoding method, a motion vector decoding method, a predictive encoding apparatus, a decoding apparatus, and storage media storing motion vector predictive encoding and decoding programs are provided, thereby reducing the amount of generated code with respect to the motion vector and improving the efficiency of the motion-vector prediction. If the motion-compensating mode of the target small block to be encoded is the local motion compensation, the encoding mode of an already-encoded small block is the interframe coding mode, and the motion-compensating mode of the already-encoded small block is the global motion compensation, then the motion vector of the translational motion model is determined for each pixel of the already-encoded small block, based on the global motion vector (steps S1-S5). Next, the representative motion vector is calculated as the predicted vector, based on the motion vector of each pixel of the already-encoded small block (step S6). Finally, the prediction error is calculated for each component of the motion vector and each prediction error is encoded (steps S7 and S8).

Description

    TECHNICAL FIELD
  • The present invention relates to motion vector predictive encoding and decoding methods, predictive encoding and decoding apparatuses, and storage media storing motion vector predictive encoding and decoding programs. These methods, apparatuses, and storage media are used for motion-compensating interframe prediction for motion picture encoding.
  • BACKGROUND ART
  • The interframe predictive coding method for coding motion pictures (i.e., video data) is known, in which an already-encoded frame is used as a prediction signal so as to reduce temporal redundancy. In order to improve the efficiency of the time-based prediction, a motion-compensating interframe prediction method is used in which a motion-compensated picture signal is used as a prediction signal. The number and the kinds of components of the motion vector used for the motion compensation are determined depending on the assumed motion model used as a basis. For example, in a motion model in which only translational movement is considered, the motion vector consists of components corresponding to horizontal and vertical motions. In another motion model in which extension and contraction are also considered in addition to the translational movement, the motion vector consists of components corresponding to horizontal and vertical motions, and a component corresponding to the extending or contracting motion.
  • Generally, the motion compensation is executed for each small area obtained by dividing a picture into a plurality of areas such as small blocks, and each divided area has an individual motion vector. It is known that the motion vectors of neighboring areas, such as adjacent small areas, are highly correlated. Therefore, in practice, the motion vector of the area to be encoded is predicted based on the motion vector of a neighboring area, and the prediction error generated by this prediction is variable-length-encoded so as to reduce the redundancy of the motion vector.
  • In the moving-picture coding method ISO/IEC 11172-2 (MPEG-1), the picture to be encoded is divided into small blocks so as to motion-compensate each small block, and the motion vector of a small block to be encoded (hereinbelow, called the “target small block”) is predicted based on the motion vector of a small block which has already been encoded.
  • In the above MPEG-1, only translational motions can be compensated. More complicated motions cannot be compensated with a simple model, such as that of MPEG-1, which has few motion-vector components. Accordingly, the efficiency of the interframe prediction can be improved by using a motion-compensating method corresponding to a more complicated model having a greater number of motion-vector components. However, when each small block is motion-compensated using such a complicated motion model, the amount of code generated when encoding the relevant motion vectors increases.
  • An encoding method for avoiding such an increase in the amount of generated code is known, in which the motion-vector encoding is performed using a method, selected from a plurality of motion-compensating methods, which minimizes the prediction error with respect to the target block. The following is an example of such an encoding method in which two motion-compensating methods are provided, one corresponding to a translational motion model and the other to a translational motion and extending/contracting motion model, and one of the two is chosen for each block.
  • FIG. 9 shows a translational motion model (see part (a)) and a translational motion and extending/contracting motion model (see part (b)). In the translational motion model of part (a), the motion of a target object is represented by a translational motion component (x, y). In the translational motion and extending/contracting motion model of part (b), the motion is represented by a component (x, y, z), in which a parameter z indicating the amount of extension or contraction of the target object is added to the translational motion component (x, y). In the example shown in FIG. 9, z has a value corresponding to contraction (see part (b)).
  • Accordingly, motion vector v1 of the translational motion model is represented by:
    v1 = (x, y)
    while motion vector v2 of the translational motion and extending/contracting motion model is represented by:
    v2 = (x, y, z)
  • In the above formulas, x, y, and z respectively indicate the horizontal, vertical, and extending/contracting direction components. Here, the unit for motion compensation is a small block, the active motion-compensating method may be switched for each small block in accordance with the prediction efficiency at that time, and the motion vector is predicted based on the motion vector of an already-encoded small block.
  • If the motion-compensating method chosen for the target small block is the same as that adopted for the already-encoded small block, the prediction error of the motion vector is calculated by the following equations.
  • For the translational motion model:
    d1x,y=v1x,y(i)−v1x,y(i−1)  (1)
  • For the translational motion and extending/contracting motion model:
    d2x,y,z=v2x,y,z(i)−v2x,y,z(i−1)  (2)
  • Here, v1x,y(i) and v2x,y,z(i) denote the components of the motion vector of the target small block, while v1x,y(i−1) and v2x,y,z(i−1) denote the components of the motion vector of the already-encoded small block used for the prediction.
  • As explained above, prediction errors d1x,y and d2x,y,z are calculated and encoded, and the encoded data are transmitted to the decoding side. Even if the small-block size differs between the motion-compensating methods, the motion vector predictive encoding is performed in the same way as long as the motion model is the same.
  • If the motion-compensating method chosen for the target small block differs from that adopted for the already-encoded small block, or if intraframe coding is performed, then the predicted value of each component is set to 0 and the original value of each component of the target small block is transmitted to the decoding side.
  • By using such an encoding method, the redundancy of the motion vector with respect to the motion-compensating interframe predictive encoding can be reduced and the amount of generated codes of the motion vector can be reduced.
  • On the other hand, the motion vector which has been encoded using the above-described encoding method is decoded in a manner such that the prediction error is extracted from the encoded data sequence, and the motion vector of the small block to be decoded (i.e., the target small block) is decoded by adding the prediction error to the motion vector which has already been decoded. See the following equations.
  • For the translational motion model:
    v1x,y(i)=v1x,y(i−1)+d1x,y  (3)
  • For the translational motion and extending/contracting motion model:
    v2x,y,z(i)=v2x,y,z(i−1)+d2x,y,z  (4)
  • Here, v1 x, y (i) and v2 x, y, z (i) mean components of the motion vector of the target small block, while v1 x, y (i−1) and v2 x, y, z (i−1) mean components of the already-decoded motion vector.
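  • The following Python sketch illustrates equations (1) through (4) as a round trip: the encoder forms the per-component prediction error, and the decoder adds it back to the predictor to recover the motion vector. The helper names and sample values are illustrative assumptions.

```python
# A minimal round-trip sketch of equations (1)-(4), assuming the target
# and reference small blocks use the same motion model.
def predict_error(v_cur, v_prev):
    """Equations (1)/(2): per-component prediction error."""
    return tuple(c - p for c, p in zip(v_cur, v_prev))

def reconstruct(v_prev, d):
    """Equations (3)/(4): add the prediction error back to the predictor."""
    return tuple(p + e for p, e in zip(v_prev, d))

# Translational motion model: v = (x, y)
d1 = predict_error((4.0, -2.5), (3.0, -2.0))          # (1.0, -0.5)
assert reconstruct((3.0, -2.0), d1) == (4.0, -2.5)

# Translational motion and extending/contracting motion model: v = (x, y, z)
d2 = predict_error((4.0, -2.5, 1.5), (3.0, -2.0, 1.0))
assert reconstruct((3.0, -2.0, 1.0), d2) == (4.0, -2.5, 1.5)
```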
  • In ISO/IEC 14496-2 (MPEG-4), which was under testing for international standardization as of January 1999, a similar motion-compensating method is adopted. MPEG-4 adopts a global motion-compensating method for predicting the overall change or movement of a picture caused by panning, tilting, and zooming operations of the camera (refer to “MPEG-4 Video Verification Model Version 7.0”, ISO/IEC JTC1/SC29/WG11 N1682, MPEG Video Group, April 1997). Hereinafter, the structure and the operational flow of an encoder using the global motion compensation will be explained with reference to FIG. 11.
  • First, a picture to be encoded (i.e., target picture) 31 is input into global motion detector 34 so as to determine global motion parameters 35 with respect to the entire picture. In the MPEG-4, the projective transformation and the affine transformation may be used in the motion model.
  • With a target point (x, y) and a corresponding point (x′, y′) relating to the transformation, the projective transformation can be represented using the following equations (5) and (6).
    x′=(ax+by+tx)/(px+qy+s)  (5)
    y′=(cx+dy+ty)/(px+qy+s)  (6)
  • Generally, the case of “s=1” belongs to the projective transformation. The projective transformation is a general representation of two-dimensional transformations, and the affine transformation, represented by the following equations (7) and (8), is obtained under the conditions “p=q=0” and “s=1”.
    x′=ax+by+tx  (7)
    y′=cx+dy+ty  (8)
  • In the above equations, “tx” and “ty” respectively represent the amounts of translational motion in the horizontal and vertical directions. Parameter “a” represents extension/contraction or inversion in the horizontal direction, while parameter “d” represents extension/contraction or inversion in the vertical direction. Parameter “b” represents shear in the horizontal direction, while parameter “c” represents shear in the vertical direction. In addition, the conditions “a=cos θ, b=sin θ, c=−sin θ, and d=cos θ” correspond to a rotation by angle θ, and the conditions “a=d=1” and “b=c=0” reduce to the conventional translational motion model.
  • As explained above, the affine transformation used as the motion model enables the representation of various motions such as translational movement, extension/contraction, inversion, shear, and rotation, and any combination of these. A projective transformation, having eight or nine parameters, can represent even more complicated motions or deformations.
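  • As a brief illustration of equations (7) and (8), the following Python sketch applies the affine motion model to a point, showing the pure-translation case (a=d=1, b=c=0) and the rotation case (a=cos θ, b=sin θ, c=−sin θ, d=cos θ). The function name and sample values are illustrative assumptions.

```python
import math

# A sketch of the affine motion model of equations (7) and (8).
def affine(point, a, b, c, d, tx, ty):
    """Map (x, y) to (a*x + b*y + tx, c*x + d*y + ty)."""
    x, y = point
    return (a * x + b * y + tx, c * x + d * y + ty)

# a = d = 1, b = c = 0: pure translation by (tx, ty).
print(affine((10.0, 20.0), 1.0, 0.0, 0.0, 1.0, 3.0, -2.0))  # (13.0, 18.0)

# a = cos th, b = sin th, c = -sin th, d = cos th: rotation by angle th.
th = math.pi / 2
print(affine((1.0, 0.0), math.cos(th), math.sin(th),
             -math.sin(th), math.cos(th), 0.0, 0.0))        # ~(0.0, -1.0)
```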
  • The global motion parameters 35, determined by the global motion detector 34, and reference picture 33 stored in the frame memory 32 are input into global motion compensator 36. The global motion compensator 36 generates a global motion-compensating predicted picture 37 by making the motion vector of each pixel, determined based on the global motion parameters 35, act on the reference picture 33.
  • The reference picture 33 stored in the frame memory 32 and the input picture 31 are input into local motion detector 38. The local motion detector 38 detects, for each macro block (16 pixels × 16 lines), motion vector 39 between input picture 31 and reference picture 33. The local motion compensator 40 generates a local motion-compensating predicted picture 41 based on the motion vector 39 of each macro block and the reference picture 33. This is the same as the conventional motion-compensating method used in MPEG and similar standards.
  • Next, for each macro block, the encoding mode selector 42 chooses whichever of the global motion-compensating predicted picture 37 and the local motion-compensating predicted picture 41 has the smaller error with respect to the input picture 31. If the global motion compensation is chosen, the local motion compensation is not performed for the relevant macro block; thus, motion vector 39 is not encoded. The predicted picture 43 chosen via the encoding mode selector 42 is input into subtracter 44, and picture 45, corresponding to the difference between the input picture 31 and the predicted picture 43, is converted into DCT (discrete cosine transformation) coefficient 47 by DCT section 46. The DCT coefficient 47 is then converted into quantized index 49 in quantizer 48. The quantized index 49 is encoded by quantized-index encoder 57, encoded-mode choice information 56 is encoded by encoded-mode encoder 58, motion vector 39 is encoded by motion-vector encoder 59, and the global motion parameters 35 are encoded by global-motion-parameter encoder 60. These encoded data are multiplexed and output as the encoder output.
  • In order for the encoder to also acquire the same decoded picture as acquired in the decoder, the quantized index 49 is inverse-converted into a quantization representative value 51 by inverse quantizer 50, and is further inverse-converted into difference picture 53 by inverse-DCT section 52. The difference picture 53 and predicted picture 43 are added to each other by adder 54 so that local decoded picture 55 is generated. This local decoded picture 55 is stored in the frame memory 32 and is used as a reference picture at the encoding of the next frame.
  • Next, relevant decoding operations of the MPEG-4 decoder will be explained with reference to FIG. 12. The multiplexed and encoded bit stream is divided into each element, and the elements are respectively decoded. The quantized-index decoder 61 decodes quantized index 49, encoded-mode decoder 62 decodes encoded-mode choice information 56, motion-vector decoder 63 decodes motion vector 39, and global-motion-parameter decoder 64 decodes global motion parameters 35.
  • The reference picture 33 stored in the frame memory 68 and global motion parameters 35 are input into global motion compensator 69 so that global motion-compensated picture 37 is generated. In addition, the reference picture 33 and motion vector 39 are input into local motion compensator 70 so that local motion-compensating predicted picture 41 is generated. The encoded-mode choice information 56 activates switch 71 so that one of the global motion-compensated picture 37 and the local motion-compensated picture 41 is output as predicted picture 43.
  • The quantized index 49 is inverse-converted into quantization representative value 51 by inverse quantizer 65, and is further inverse-converted into difference picture 53 by inverse-DCT section 66. The difference picture 53 and predicted picture 43 are added to each other by adder 67 so that local decoded picture 55 is generated. This local decoded picture 55 is stored in the frame memory 68 and is used as a reference picture when decoding the next frame.
  • In the above-explained global motion-compensating predictive method adopted in MPEG-4, whichever of the global motion-compensated and local motion-compensated predicted pictures has the smaller error is chosen for each macro block, so that the prediction efficiency of the entire frame is improved. In addition, the motion vector is not encoded for any macro block to which the global motion compensation is applied; thus, the amount of generated code can be reduced by the amount that conventional encoding of the motion vector would require.
  • On the other hand, in the conventional method in which the active motion-compensating method is switched among a plurality of motion-compensating methods corresponding to different motion models, no prediction is performed between motion vectors belonging to different motion models. For example, in the encoding method that switches between the motion-compensating method corresponding to a translational motion model and the one corresponding to a translational motion and extending/contracting motion model, a shift from the motion vector of the translational motion and extending/contracting motion model to the motion vector of the translational motion model cannot simply be predicted using a difference, because the number of motion-vector parameters differs between the two methods.
  • However, redundancy of the motion vector may also occur between different motion models. Therefore, correlation between the motion vector of the translational motion model and the motion vector of the translational motion and extending/contracting motion model will be examined with reference to motion vectors shown in FIG. 10. In FIG. 10, it is assumed that in the motion compensation of target small blocks Boa and Bob, the target small block Boa is motion-compensated using the method corresponding to the translational motion model and referring to small block Bra included in the reference frame, while the target small block Bob is motion-compensated using the method corresponding to the translational motion and extending/contracting motion model and referring to small block Brb included in the reference frame.
  • In this case, motion vector va = (xa, ya) in FIG. 10 follows the translational motion model, while motion vector vb = (xb, yb, zb) in FIG. 10 follows the translational motion and extending/contracting motion model. In the motion compensation of the small block Bob, the referenced small block Brb in the reference frame is extended. The translational components of the motion vectors va and vb in FIG. 10 therefore have almost the same values, and redundancy exists between them.
  • However, in the conventional method, such redundancy between motion vectors of different motion models cannot be reduced, because no motion vector of a motion model which differs from the present motion model is predicted based on the motion vector of the present model.
  • In the above MPEG-4, predictive encoding is adopted so as to efficiently encode the motion vector. For example, the operations of motion-vector encoder 59 in FIG. 11 are as follows. As shown in FIG. 13, three motion vectors are referred to, namely motion vector MV1 of the left block, motion vector MV2 of the block immediately above, and motion vector MV3 of the block diagonally above to the right, and their median is used as the predicted value of the motion vector MV of the present block. The predicted value PMV of the vector MV of the present block is defined by the following equation (9).
    PMV=median(MV1, MV2, MV3)  (9)
  • If the reference block corresponds to the intraframe-coding mode, no motion vector exists; the median is therefore calculated with vector value 0 at the relevant position. If the reference block has been predicted using the global motion compensation, no motion vector exists either, and the median is likewise calculated with vector value 0 at the relevant position. For example, if the left block was predicted using the local motion compensation, the block immediately above was predicted using the global motion compensation, and the block diagonally above to the right was encoded using the intraframe coding method, then MV2=MV3=0. In addition, if the three reference blocks were all predicted using the global motion compensation, then MV1=MV2=MV3=0. In this case the median, and thus the predicted value, is 0. This is equivalent to not subjecting the motion vector of the target block to predictive encoding at all, and the encoding efficiency is degraded. A sketch of this prediction follows.
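  • A minimal Python sketch of the median prediction of equation (9) follows, with the zero-vector substitution for intra-coded or globally compensated reference blocks. The names are illustrative assumptions.

```python
# A sketch of equation (9): component-wise median of the three
# neighbouring motion vectors; pass (0, 0) for a reference block that is
# intra-coded or predicted by global motion compensation.
def median3(a, b, c):
    return sorted((a, b, c))[1]

def predict_pmv(mv1, mv2, mv3):
    return tuple(median3(a, b, c) for a, b, c in zip(mv1, mv2, mv3))

# Left block uses local MC; block above uses GMC; diagonal block is intra:
print(predict_pmv((4.0, -1.0), (0.0, 0.0), (0.0, 0.0)))  # (0.0, 0.0)
```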
  • In the MPEG-4, the following seven kinds of ranges (see List 1) are defined with respect to the size of the local motion vector, and the used range is communicated to the decoder by using a codeword “fcode” included in the bit stream.
    List 1
    fcode Range of motion vector
    1 −16 to +15.5 pixels
    2 −32 to +31.5 pixels
    3 −64 to +63.5 pixels
    4 −128 to +127.5 pixels
    5 −256 to +255.5 pixels
    6 −512 to +511.5 pixels
    7 −1024 to +1023.5 pixels
  • The global motion parameters used in MPEG-4 may have a wide range of −2048 to +2047.5; thus, the motion vector determined based on the global motion parameters may have a value from −2048 to +2047.5. However, the range of the local motion vector is smaller than this, so the prediction may produce a large error. For example, if fcode=3, the motion vector of the target block is (Vx, Vy)=(+48, +36.5), and the predicted vector determined based on the global motion vector is (PVx, PVy)=(+102, +75), then the prediction error is (MVDx, MVDy)=(−54, −38.5). The absolute values of this error are thus larger than those of the motion vector (Vx, Vy) itself. The smaller the absolute values of the prediction error (MVDx, MVDy), the shorter the codeword assigned to it; therefore, there is a disadvantage in that the amount of code is increased due to the prediction of the motion vector.
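  • The following sketch reproduces this worked example in Python. The closed-form expression for the fcode ranges is an observation that matches the seven entries of List 1; all names are illustrative assumptions.

```python
# List 1 as a closed form: for fcode f, the representable range is
# [-16 * 2**(f - 1), +16 * 2**(f - 1) - 0.5] pixels.
FCODE_RANGE = {f: (-16.0 * 2 ** (f - 1), 16.0 * 2 ** (f - 1) - 0.5)
               for f in range(1, 8)}

mv = (+48.0, +36.5)    # motion vector (Vx, Vy) of the target block
pv = (+102.0, +75.0)   # predicted vector (PVx, PVy) from the global motion
mvd = tuple(m - p for m, p in zip(mv, pv))

print(FCODE_RANGE[3])  # (-64.0, 63.5)
print(mvd)             # (-54.0, -38.5): larger in magnitude than mv itself
```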
  • Therefore, the objective of the present invention is to provide a motion vector predictive encoding method, a motion vector decoding method, a predictive encoding apparatus, a decoding apparatus, and computer-readable storage media storing motion vector predictive encoding and decoding programs, which reduce the amount of generated code with respect to the motion vector and improve the efficiency of the motion-vector prediction.
  • DISCLOSURE OF INVENTION
  • The motion vector predictive encoding method, predictive encoding apparatus, and motion vector predictive encoding program stored in a computer-readable storage medium according to the present invention relate to a motion vector predictive encoding method in which a target frame to be encoded is divided into small blocks and a motion-compensating method applied to each target small block to be encoded is selectable from among a plurality of motion-compensating methods. In the present invention, when the motion vector of the target small block is predicted based on the motion vector of an already-encoded small block and the motion model of the motion vector of the target small block differs from that of the already-encoded small block, the motion vector of the already-encoded small block used for the prediction is converted into one suitable for the motion model of the motion vector used in the motion-compensating method of the target small block, a predicted vector is calculated from the converted motion vector, and the prediction error of the motion vector is encoded.
  • In the decoding method, decoding apparatus, and decoding program stored in a computer-readable storage medium for decoding the motion vector encoded using the above motion vector predictive encoding method, if the motion model of the motion vector of a target small block to be decoded differs from the motion model of the motion vector of the already-decoded small block, then the motion vector of the already-decoded small block is converted into one suitable for the motion model of the motion vector used in the motion-compensating method of the target small block, and a predicted vector is calculated based on the converted motion vector of the already-decoded small block, and the motion vector is decoded by adding the prediction error to the predicted vector.
  • There are two types of motion-compensating method, i.e., global motion compensation (for compensating a global motion over a plurality of areas) and local motion compensation (for compensating a local motion for each area), as described above. In the global motion compensation, the motion vector representing a global motion over a plurality of areas is transmitted, while in the local motion compensation, the motion vector for each area is transmitted.
  • The present invention provides another motion vector predictive encoding method, predictive encoding apparatus, and motion vector predictive encoding program stored in a computer-readable storage medium. This case relates to a motion vector predictive encoding method in which a global motion-compensating method and a local motion-compensating method are switchable for each target small block to be encoded, wherein when the motion vector of the target small block is predicted based on the motion vector of an already-encoded small block, and if the motion-compensating method of the target small block differs from the motion-compensating method of the already-encoded small block, then a predicted vector is calculated by converting the format of the motion vector of the already-encoded small block for the prediction into a format of the motion vector used in the motion-compensating method of the target small block, and the motion vector of the target small block is predicted and a prediction error of the motion vector is encoded.
  • In the encoding of the motion vector using the above motion vector predictive encoding method, predictive encoding apparatus, and motion vector predictive encoding program stored in a computer-readable storage medium, if the motion-compensating method used for the target small block is the local motion-compensating method and the motion-compensating method used for the already-encoded small block for the prediction is the global motion-compensating method, then a local motion vector of the already-encoded small block is calculated based on the global motion vector, the motion vector of the target small block is predicted, and the prediction error of the motion vector is encoded.
  • When the motion vector of the target small block for which the local motion compensation is chosen is predicted based on the global motion parameters, if the value of the predicted vector is not within a predetermined range, the predicted vector is clipped to have a value within the predetermined range. On the other hand, when the motion vector of the block for which the local motion compensation is chosen is predicted based on the global motion parameters, if the value of the predicted vector is not within a predetermined range, the value of the predicted vector is set to 0.
  • In the decoding method, decoding apparatus, and decoding program stored in a computer-readable storage medium for decoding the motion vector encoded using the above-mentioned motion vector predictive encoding methods, predictive encoding apparatuses, and motion vector predictive encoding programs stored in computer-readable storage media, if the motion-compensating method of the target small block to be decoded differs from the motion-compensating method of the already-decoded small block, then the format of the motion vector of the already-decoded small block is converted into a format of the motion vector used in the motion-compensating method of the target small block, and a predicted vector of the motion vector of the target small block is calculated, and the motion vector is decoded by adding the prediction error to the predicted vector.
  • In addition, if the motion-compensating method used for the target small block to be decoded is the local motion-compensating method and the motion-compensating method used for the already-decoded small block for the prediction is the global motion-compensating method, then a local motion vector of the already-decoded small block is calculated based on the global motion vector, a predicted vector with respect to the motion vector of the target small block is calculated, and the motion vector is decoded by adding the prediction error to the predicted vector.
  • In the above motion vector encoding and decoding methods, predictive encoding apparatus, decoding apparatus, motion vector predictive encoding program, and decoding program, when the motion vector of the small block in any motion model is converted into a motion vector in a translational motion model, a representative motion vector of the small block is calculated based on the motion vector in the translational motion model determined for each pixel in the small block.
  • In the calculation of the representative motion vector, each component of the representative motion vector is set to a statistic (one of the average, intermediate value, median, mode, maximum value, and minimum value).
  • When decoding, if the predicted value of the local motion vector calculated based on the global motion parameters is not within a predetermined range, the predicted vector is clipped to have a value within the predetermined range. Alternatively, if the predicted value of the local motion vector calculated based on the global motion parameters is not within a predetermined range, the value of the predicted vector is set to 0.
  • According to the motion vector predictive encoding method, decoding method, predictive encoding apparatus, decoding apparatus, motion vector predictive encoding program, and decoding program of the present invention, the motion vector can be predicted between motion vectors of different motion models, and also between motion vectors of the global motion compensation and the local motion compensation. Therefore, the amount of generated code with respect to the motion vector can be reduced.
  • In addition, if the predicted vector determined based on the global motion parameters is not within the range of the local motion vector, the predicted vector can be clipped to have the maximum value or the minimum value of the range. Therefore, if “fcode=3” (the motion vector range is from −64 to +63.5 pixels), the motion vector of the present block: (Vx, Vy)=(+48, +36.5), and the predicted vector calculated based on the global motion parameters: (PVx, PVy)=(+102, +75), then the predicted vector (PVx, PVy) is clipped to “(+63.5, +63.5)”. Accordingly, the prediction error (MVDx, MVDy) is “(−15.5, −27)” and the absolute values thereof are smaller than those obtained by the above-mentioned conventional method in which the prediction error is “(−54, −38.5)”. The smaller the absolute values of the prediction error, the shorter the length of the codeword assigned to a difference between two motion vectors; thus, the total amount of code can be reduced.
  • In the method of setting the value of the predicted vector to 0, if “fcode=3” (the motion vector range is from −64 to +63.5 pixels), the motion vector of the present block: (Vx, Vy)=(+48, +36.5), and the predicted vector calculated based on the global motion parameters: (PVx, PVy)=(+102, +75), then the predicted vector (PVx, PVy) is set to “(0, 0)”. Accordingly, the prediction error (MVDx, MVDy) is “(+48, +36.5)” and the absolute values thereof are larger than those obtained by the above clipping method in which the prediction error is “(−15.5, −27)”, but smaller than those obtained by the above-mentioned conventional method in which the prediction error is “(−54, −38.5)”.
  • The following is another example, in which “fcode=1” (the motion vector range is from −16 to +15.5 pixels), the motion vector of the present block: (Vx, Vy)=(+3, +1.5), and the predicted vector calculated based on the global motion parameters: (PVx, PVy)=(+102, +75). In the clipping method, the predicted vector (PVx, PVy) is “(+15.5, +15.5)” and the prediction error (MVDx, MVDy) is “(−12.5, −14)”. In the “0” setting method, the predicted vector (PVx, PVy) is “(0, 0)” and the prediction error (MVDx, MVDy) is “(+3, +1.5)”. Therefore, in this example, the absolute values in the “0” setting method can be smaller than those in the clipping method.
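  • The following Python sketch reproduces both worked examples, comparing the clipping policy and the “0” setting policy. The helper names are illustrative assumptions.

```python
# A sketch comparing the two out-of-range policies on the examples above.
def prediction_error(mv, pv):
    return tuple(m - p for m, p in zip(mv, pv))

def clip_vec(v, lo, hi):
    return tuple(max(lo, min(c, hi)) for c in v)

def zero_vec(v, lo, hi):
    return v if all(lo <= c <= hi for c in v) else (0.0,) * len(v)

# fcode = 3 (range -64 to +63.5 pixels): clipping gives the smaller error.
mv, pv = (+48.0, +36.5), (+102.0, +75.0)
print(prediction_error(mv, clip_vec(pv, -64.0, 63.5)))  # (-15.5, -27.0)
print(prediction_error(mv, zero_vec(pv, -64.0, 63.5)))  # (48.0, 36.5)

# fcode = 1 (range -16 to +15.5 pixels): the "0" setting wins instead.
mv, pv = (+3.0, +1.5), (+102.0, +75.0)
print(prediction_error(mv, clip_vec(pv, -16.0, 15.5)))  # (-12.5, -14.0)
print(prediction_error(mv, zero_vec(pv, -16.0, 15.5)))  # (3.0, 1.5)
```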
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a flowchart showing the operations of the motion vector predictive encoding method of the first embodiment according to the present invention.
  • FIG. 2 is a flowchart showing the operations of the motion vector decoding method of the first embodiment according to the present invention.
  • FIG. 3 is a block diagram showing the functional structure of the motion vector predictive encoding apparatus of the first embodiment according to the present invention.
  • FIG. 4 is a block diagram showing the functional structure of the motion vector decoding apparatus of the first embodiment according to the present invention.
  • FIG. 5 is a flowchart showing the operations of the motion vector predictive encoding method of the second embodiment according to the present invention.
  • FIG. 6 is a flowchart showing the operations of the motion vector decoding method of the second embodiment according to the present invention.
  • FIG. 7 is a block diagram showing the functional structure of the motion vector predictive encoding apparatus of the second embodiment according to the present invention.
  • FIG. 8 is a block diagram showing the functional structure of the motion vector decoding apparatus of the second embodiment according to the present invention.
  • FIG. 9 is a diagram showing examples of the motion model, part (a) showing a translational motion model and part (b) showing a translational motion and extending/contracting motion model.
  • FIG. 10 is a diagram showing motion vectors of the motion model.
  • FIG. 11 is a diagram showing the structure of an example of the encoder for encoding moving pictures (the MPEG-4 encoder).
  • FIG. 12 is a diagram showing the structure of the decoder corresponding to the encoder in FIG. 11 (the MPEG-4 decoder).
  • FIG. 13 is a diagram showing the arrangement of reference blocks in the motion vector prediction of the MPEG-4.
  • MODES FOR CARRYING OUT THE INVENTION
  • Hereinbelow, embodiments according to the present invention will be explained with reference to the drawings. In the embodiments, the global motion-compensating method (GMC) and the local motion-compensating method (LMC) are switchable for each small block. The motion model used in the GMC is determined in consideration of translational movement and extension/contraction, while the motion model used in the LMC is a translational motion model.
  • In the GMC, the motion vector is not assigned to each small block. Therefore, prediction is performed with respect to a motion vector used for the local motion compensation. At the time of predicting the motion vector, the format of the motion vector is converted into that of the translational motion model. When the motion vector of the global motion compensation is converted into the motion vector of the translational motion model, a representative vector of the small block is calculated. In the calculation, the average of the motion vectors which were calculated for each pixel of the small block is determined as the representative vector. The size of each small block is M×N pixels.
  • First Embodiment
  • (1) Motion Vector Predictive Encoding Method and Motion Vector Decoding Method
  • Hereinbelow, the motion vector predictive encoding method and motion vector decoding method of the first embodiment according to the present invention will be explained with reference to flowcharts in FIGS. 1 and 2.
  • (1-1) Motion Vector Predictive Encoding Method
  • In the first step S1 in FIG. 1, the motion-compensating mode of the target small block is determined. If the mode corresponds to the GMC, then the predictive encoding of the motion vector is not executed and the operation of the motion-vector encoding is terminated.
  • If the mode corresponds to the LMC in step S1, then the operation shifts to step S2, where the encoding mode of a small block which was already encoded is determined. If it is determined that the encoding mode is the intraframe coding mode, then the operation shifts to step S3. In step S3, the predicted vector vp is set to 0 (that is, vp=(0, 0)), and then the operation shifts to step S7 (explained later).
  • If it is determined that the encoding mode is the interframe coding mode in step S2, then the operation shifts to step S4 where the motion-compensating mode of the already-encoded small block is determined. If it is determined that the motion-compensating mode corresponds to the LMC, then the motion vector of the small block vi-1=(xi-1, yi-1) is determined as the predicted vector vp and the operation shifts to step S7.
  • If it is determined that the motion-compensating mode corresponds to the GMC in step S4, then the operation shifts to step S5, where the motion vector of the translational motion model, that is, vi-1(m,n)=(xi-1(m,n), yi-1(m,n)), is determined for each pixel of the already-encoded small block, based on the global motion vector GMV (=(X, Y, Z)). Here, (m, n) indicates the position of each pixel in the small block.
  • The operation then shifts to step S6, where the representative motion vector is calculated based on the motion vectors vi−1(m,n) determined in step S5 for each pixel of the already-encoded small block, and the calculated result is determined as the predicted vector vp=(xvp, yvp). Here, the representative motion vector is calculated based on the average of the motion vectors vi−1(m,n) of each pixel:

    xvp = (1 / (M·N)) Σn=1..N Σm=1..M xi−1(m, n)  (10)
    yvp = (1 / (M·N)) Σn=1..N Σm=1..M yi−1(m, n)  (11)
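  • A minimal Python sketch of equations (10) and (11) follows. It assumes the per-pixel translational vectors of the M×N block have already been derived from the global motion vector; the function name and sample values are illustrative.

```python
# A sketch of equations (10)/(11): the representative motion vector as
# the component-wise average of the per-pixel translational vectors.
def representative_vector(per_pixel):
    """per_pixel: list of (x, y) vectors, one per pixel of the M x N block."""
    count = len(per_pixel)
    x_vp = sum(v[0] for v in per_pixel) / count
    y_vp = sum(v[1] for v in per_pixel) / count
    return (x_vp, y_vp)

# A 2 x 2 block whose per-pixel vectors drift apart slightly (e.g. zooming):
print(representative_vector([(1.0, 2.0), (1.5, 2.0),
                             (1.0, 2.5), (1.5, 2.5)]))  # (1.25, 2.25)
```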
  • The operation then shifts to step S7, and prediction errors dxi and dyi for each component of the motion vector are calculated based on the motion vector vi=(xi, yi) of the target small block and the predicted vector vp, by using the following equations.
    dx i =x i −x vp  (12)
    dy i =y i −y vp  (13)
  • If it is determined that the encoding mode of the already-encoded small block is the intraframe coding mode in step S2, then the predicted vector vp=(0, 0); thus the prediction errors dxi and dyi are determined as follows:
    dx i =x i−0  (14)
    dy i =y i−0  (15)
  • If it is determined that the motion-compensating mode of the already-encoded small block is the LMC in step S4, then the motion vector of the already-encoded small block vi-1=(xi-1, yi-1) is determined as the predicted vector vp. Therefore, the prediction errors dxi and dyi are determined as follows:
    dx i =x i −x i-1  (16)
    dy i =y i −y i-1  (17)
  • The operation then shifts to step S8, where the prediction errors dxi and dyi calculated in step S7 are subjected to reversible encoding such as the Huffman encoding, and the motion-vector encoding operation is finished.
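  • The decision flow of steps S1 through S7 can be summarized by the following Python sketch. The mode flags, the dictionary representation of the already-encoded small block, and the conversion callback standing in for steps S5 and S6 are illustrative assumptions; the reversible (e.g., Huffman) encoding of step S8 is omitted.

```python
# A condensed sketch of the FIG. 1 decision flow; not the patent's
# normative interfaces.
def predict_and_encode(mv_cur, prev_block, gmc_to_translational):
    """Return the prediction errors (dx_i, dy_i) of equations (12)-(17).

    Assumes the target small block is locally compensated; a GMC target
    skips the motion-vector predictive encoding entirely (step S1).
    """
    if prev_block["coding_mode"] == "intra":        # S2 -> S3: vp = (0, 0)
        vp = (0.0, 0.0)
    elif prev_block["mc_mode"] == "lmc":            # S4: previous local MV
        vp = prev_block["mv"]
    else:                                           # S4 -> S5, S6: from GMV
        vp = gmc_to_translational(prev_block["gmv"])
    return (mv_cur[0] - vp[0], mv_cur[1] - vp[1])   # S7

prev = {"coding_mode": "inter", "mc_mode": "lmc", "mv": (2.0, -1.0)}
print(predict_and_encode((3.0, -1.5), prev, None))  # (1.0, -0.5)
```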
  • (1-2) Motion Vector Decoding Method
  • Hereinbelow, the method of decoding the motion vector which was encoded by the above-explained encoding method will be explained with reference to FIG. 2.
  • In the first step S11 in FIG. 2, the motion-compensating mode of the target small block is determined. If the determined mode corresponds to the GMC, then it is determined that the predictive encoding of the motion vector was not executed, and the decoding operation is terminated.
  • If the mode corresponds to the LMC in step S11, then the operation shifts to step S12, where the prediction errors dxi and dyi, which were reversibly encoded, are decoded. The operation then shifts to step S13, where the encoding mode of the already-decoded small block is determined. If it is determined that the encoding mode is the intraframe coding mode, then the predicted vector vp is set to (0, 0) in step S14, and the operation shifts to step S18 (explained later).
  • If it is determined that the encoding mode is the interframe coding mode in step S13, then the operation shifts to step S15 where the motion-compensating mode of the already-decoded small block is determined. If it is determined that the motion-compensating mode of the already-decoded small block corresponds to the LMC, then the motion vector of the already-decoded small block vi-1=(xi-1, yi-1) is determined as the predicted vector vp and the operation shifts to step S18.
  • If it is determined that the motion-compensating mode of the already-decoded small block corresponds to the GMC in step S15, then the operation shifts to step S16, where the motion vector of the translational motion model, that is, vi-1(m,n)=(xi-1(m,n), yi-1(m,n)), is determined for each pixel of the already-decoded small block, based on the global motion vector GMV (=(X, Y, Z)). Here, (m, n) indicates the position of each pixel in the small block.
  • The operation then shifts to step S17, where the representative motion vector is calculated based on the motion vectors vi−1(m,n) determined for each pixel of the already-decoded small block, and the calculated result is determined as the predicted vector vp=(xvp, yvp). Here, the representative motion vector is calculated based on the average of the motion vectors vi−1(m,n) of each pixel:

    xvp = (1 / (M·N)) Σn=1..N Σm=1..M xi−1(m, n)  (18)
    yvp = (1 / (M·N)) Σn=1..N Σm=1..M yi−1(m, n)  (19)
  • The operation then shifts to step S18, where the components xvp and yvp of the predicted vector vp are respectively added to the prediction errors dxi and dyi so that the motion vector vi=(xi, yi) of the target small block is calculated as follows:
    x i =x vp +dx i  (20)
    y i =y vp +dy i  (21)
  • If it is determined that the encoding mode of the already-decoded small block is the intraframe coding mode in step S13, then the predicted vector vp=(0, 0) in step S14; thus the motion vector vi=(xi, yi) of the target small block is calculated as follows:
    x i=0+dx i  (22)
    y i=0+dy i  (23)
  • If it is determined that the motion-compensating mode of the already-decoded small block is the LMC in step S15, then the motion vector of the already-decoded small block vi-1=(xi-1, yi-1) is determined as the predicted vector vp. Therefore, the motion vector vi=(xi, yi) of the target small block is calculated as follows:
    x i =x i-1 +dx i  (24)
    y i =y i-1 +dy i  (25)
  • When the process of step S18 is finished, the motion-vector decoding operation is also finished.
  • In step S6 in FIG. 1 and step S17 in FIG. 2 in the above embodiment, the representative motion vector is calculated using the average; however, the following statistics may be used. Here, a set of components xi-1(m,n) of the motion vector for each pixel is indicated by Xi-1, while a set of components yi-1(m,n) of the motion vector for each pixel is indicated by Yi-1.
  • Maximum Value
  • The representative motion vector is calculated using the following equations.
    x vp=max(X i-1)  (26)
    y vp=max(Y i-1)  (27)
    Minimum Value
  • The representative motion vector is calculated using the following equations.
    x vp=min(X i-1)  (28)
    y vp=min(Y i-1)  (29)
    Intermediate Value
  • The representative motion vector is calculated using the following equations.
    x vp=(max(X i-1)+min(X i-1))/2  (30)
    y vp=(max(Y i-1)+min(Y i-1))/2  (31)
    Mode
  • The frequency distribution is examined for each component of the motion vector, and the value corresponding to the maximum frequency is determined as the relevant component of the representative motion vector.
  • Median
  • First, each component of the motion vector is arranged in order of size:
    xs1≦xs2≦ . . . ≦xsn-1≦xsn  (32)
    ys1≦ys2≦ . . . ≦ysn-1≦ysn  (33)
    where n is the number of pixels of the small block, and xs and ys are obtained by rearranging the components Xi−1 and Yi−1 (of the motion vector for each pixel) in order of size. The representative motion vector is calculated from this ordering of the components. That is:
  • When the number of pixels n is an odd number:
    x vp =xs (n+1)/2  (34)
    y vp =ys (n+1)/2  (35)
  • When the number of pixels n is an even number:
    x vp=(xs n/2 +xs n/2+1)/2  (36)
    y vp=(ys n/2 +ys n/2+1)/2  (37)
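  • The following Python sketch gathers these alternative statistics for one component (the x components Xi−1; the y components are handled identically). The function name and sample data are illustrative assumptions.

```python
# A sketch of the statistics of equations (26)-(37), applied to one
# component of the per-pixel motion vectors.
def representative_component(values, statistic="average"):
    xs = sorted(values)
    n = len(xs)
    if statistic == "average":
        return sum(xs) / n
    if statistic == "maximum":                 # eq. (26)/(27)
        return xs[-1]
    if statistic == "minimum":                 # eq. (28)/(29)
        return xs[0]
    if statistic == "intermediate":            # eq. (30)/(31)
        return (xs[0] + xs[-1]) / 2
    if statistic == "mode":                    # value of maximum frequency
        return max(set(xs), key=xs.count)
    if statistic == "median":                  # eq. (34)-(37)
        mid = n // 2
        return xs[mid] if n % 2 else (xs[mid - 1] + xs[mid]) / 2
    raise ValueError(statistic)

xs = [1.0, 1.0, 2.0, 4.0]
for s in ("average", "maximum", "minimum", "intermediate", "mode", "median"):
    print(s, representative_component(xs, s))
# average 2.0, maximum 4.0, minimum 1.0, intermediate 2.5, mode 1.0, median 1.5
```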
  • In the above embodiment, the motion vector of any small block which was encoded before the target small block was encoded, such as the motion vector of the small block which was encoded immediately before, can be used as the motion vector of the already-encoded small block. In addition, the predicted vector may be calculated using the motion vectors of a plurality of already-encoded small blocks.
  • (2) Motion Vector Predictive Encoding and Decoding Apparatuses
  • Hereinbelow, the motion vector predictive encoding and decoding apparatuses according to the above-explained motion vector encoding and decoding methods will be explained with reference to FIGS. 3 and 4.
  • (2-1) Motion Vector Predictive Encoding Apparatus
  • FIG. 3 shows the functional structure of the motion vector encoding apparatus for performing the motion-vector encoding according to the above-explained motion vector encoding method. In this figure, the global motion vector is input once per picture (plane), and the local motion vector is input only when the target small block is locally motion-compensated.
  • The motion vector memory 1 stores the input local motion vector mv of the target small block. When the next local motion vector mv is input, the motion vector memory 1 outputs the stored local motion vector as motion vector mvt-1 of the already-encoded small block.
  • The representative-motion-vector calculating section 2 has an internal memory. When the motion-compensating method applied to the input target small block is the LMC and the motion-compensating method applied to the already-encoded small block is the GMC, the calculating section 2 reads out the already-encoded global motion vector “gmv” from the internal memory, and calculates a representative motion vector for each small block based on it. The calculated result is output as the representative motion vector of the already-encoded small block when the next local motion vector mv is input.
  • Selector 3 operates in one of the following manners. In the first case, the selector 3 outputs motion vector mvt-1 of the already-encoded small block output from the motion vector memory 1, or the representative motion vector output from the representative-motion-vector calculating section 2, according to the motion-compensating mode of the target small block, the encoding mode of the already-encoded small block, and the motion-compensating mode of the already-encoded small block. In the second case, the selector 3 outputs no motion vector, that is, neither of the two motion vectors is chosen (this state is called a “neutral state” hereinbelow).
  • Subtracter 4 subtracts the output from the selector 3 from the local motion vector mvt of the target small block, and outputs prediction error dmvt. The motion-vector encoding section 5 executes the variable-length encoding of the prediction error dmvt which is output from the subtracter 4. The encoding section 5 outputs the encoded results as local motion vector information. The encoding section 6 executes the variable-length encoding of the global motion vector gmv, and outputs the encoded results as global motion vector information.
  • The operations of the encoding apparatus having the above-explained structure will be explained below. When a global motion vector is supplied from an external device, the supplied global motion vector gmv is input into both the representative-motion-vector calculating section 2 and the motion-vector encoding section 6. In the motion-vector encoding section 6, the input data is variable-length-encoded and then output to the outside. In the representative-motion-vector calculating section 2, the above global motion vector gmv is stored into the internal memory.
  • At this time, the local motion vector is not supplied from an external device, and the selector 3 detects that the supplied motion vector is the global motion vector and thus enters the neutral state. Accordingly, no local motion vector information is output from the motion-vector encoding section 5.
  • When local motion vector mvt is supplied from an external device after the above operations are executed, the supplied local motion vector mvt is input into motion vector memory 1 and subtracter 4. In this case, the motion-compensating method of the target block is the LMC and the motion-compensating method of the already-encoded small block is the GMC. Therefore, in the representative-motion-vector calculating section 2, motion vector “vt−1(m, n)=(xt−1(m,n), yt−1(m,n))” of the translational motion model is calculated for each pixel of the already-encoded small block, based on the global motion vector gmv stored in the internal memory (this process corresponds to step S5 in FIG. 1). Here, m and n indicate the position of the relevant pixel in the small block.
  • In the representative-motion-vector calculating section 2, a representative motion vector is calculated from the calculated motion vector vt-1(m, n), and is output as the motion vector of the already-encoded small block (this process corresponds to step S6 in FIG. 1). Here, the representative motion vector is calculated based on the average of the motion vector vt-1(m, n) for each pixel.
  • The selector 3 determines the encoding mode of the already-encoded small block. If the mode is the intraframe coding mode, the selector is in the neutral state and no signal is output from the selector (this process corresponds to step S3 in FIG. 1). Therefore, the supplied local motion vector mvt passes through the subtracter 4 (this process corresponds to the operation using the above-described equations (14) and (15), performed in step S7 in FIG. 1), and the motion vector is variable-length-encoded in the motion-vector encoding section 5 (this process corresponds to step S8 in FIG. 1) and is output to the outside.
  • On the other hand, if selector 3 determines that the encoding mode of the already-encoded small block is the interframe coding mode, the motion-compensating mode of the already-encoded small block is then determined (this process corresponds to step S4 in FIG. 1). Here, the motion-compensating mode of the already-encoded small block is the GMC; thus, the selector 3 chooses the representative motion vector output from the representative-motion-vector calculating section 2, and outputs the vector into subtracter 4.
  • Accordingly, in the subtracter 4, the representative motion vector is subtracted from the target small block (this process corresponds to the operation using the above-described equations (12) and (13), performed in step S7 in FIG. 1), and the result of the subtraction is output as the prediction error dmvt into the motion-vector encoding section 5, where the prediction error is variable-length-encoded (this process corresponds to step S8 in FIG. 1) and is output to the outside.
  • If another local motion vector mvt is supplied from an external device after the above operations are executed, the supplied local motion vector mvt is input into motion vector memory 1 and subtracter 4. The motion vector memory 1 outputs the previously input vector as local motion vector mvt−1.
  • The selector 3 determines the encoding mode of the already-encoded small block. If the mode is the intraframe coding mode, the selector 3 enters the neutral state and no signal is output from the selector (this process corresponds to step S3 in FIG. 1). Therefore, the supplied local motion vector mvt passes through the subtracter 4 (this process corresponds to the operation using the above-described equations (14) and (15), performed in step S7 in FIG. 1), and the motion vector is variable-length-encoded in the motion-vector encoding section 5 (this process corresponds to step S8 in FIG. 1) and is output to the outside.
  • On the other hand, if selector 3 determines that the encoding mode of the already-encoded small block is the interframe coding mode, the motion-compensating mode of the already-encoded small block is then determined (this process corresponds to step S4 in FIG. 1). Here, the motion-compensating mode of the already-encoded small block is the LMC; thus, the selector 3 chooses the local motion vector mvt-1 of the already-encoded small block output from motion vector memory 1, and outputs the vector into subtracter 4.
  • Accordingly, in the subtracter 4, the already-encoded local motion vector mvt-1 is subtracted from the motion vector mvt of the target small block (this process corresponds to the operation using the above-described equations (16) and (17), performed in step S7 in FIG. 1), and the result of subtraction is output as the prediction error dmvt into the motion-vector encoding section 5, where the prediction error is variable-length-encoded (this process corresponds to step S8 in FIG. 1) and is output to the outside.
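  • The data path just described, with motion vector memory 1, selector 3, and subtracter 4, can be sketched as a small stateful Python class. The class and method names, the mode flags, and the callback standing in for the representative-motion-vector calculating section 2 are illustrative assumptions.

```python
# A stateful sketch of the FIG. 3 data path; the entropy coding of
# sections 5 and 6 is omitted.
class MotionVectorPredictiveEncoder:
    def __init__(self, representative_from_gmv):
        self.prev_mv = None        # motion vector memory 1
        self.gmv = None            # internal memory of calculating section 2
        self.representative_from_gmv = representative_from_gmv

    def put_global_vector(self, gmv):
        self.gmv = gmv             # selector 3 stays neutral; nothing emitted

    def put_local_vector(self, mv, prev_coding_mode, prev_mc_mode):
        if prev_coding_mode == "intra":       # neutral selector: vp = (0, 0)
            vp = (0.0, 0.0)
        elif prev_mc_mode == "gmc":           # representative vector chosen
            vp = self.representative_from_gmv(self.gmv)
        else:                                 # memory output mv_{t-1} chosen
            vp = self.prev_mv
        self.prev_mv = mv
        return (mv[0] - vp[0], mv[1] - vp[1])  # prediction error dmv_t

enc = MotionVectorPredictiveEncoder(lambda gmv: (gmv[0], gmv[1]))
enc.put_global_vector((5.0, 3.0, 0.0))
print(enc.put_local_vector((6.0, 2.0), "inter", "gmc"))  # (1.0, -1.0)
print(enc.put_local_vector((6.5, 2.0), "inter", "lmc"))  # (0.5, 0.0)
```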
  • (2-2) Motion Vector Decoding Apparatus
  • FIG. 4 shows the functional structure of the decoding apparatus for decoding the motion vector according to the above-explained motion vector decoding method. In this figure, motion-vector decoding section 10 decodes the global motion vector information which is output from the motion vector predictive encoding apparatus as shown in FIG. 3. The decoding section 10 outputs the decoded result as global motion vector gmv into representative-motion-vector calculating section 12 and the outside. The motion-vector decoding section 11 decodes the local motion vector information which was output from the motion vector predictive encoding apparatus as shown in FIG. 3. The decoding section 11 outputs the decoded result as the prediction error dmvt into adder 15.
  • The representative-motion-vector calculating section 12 has an internal memory. When the motion-compensating method applied to the input target small block is the LMC and the motion-compensating method applied to the already-decoded small block is the GMC, the calculating section 12 reads out the already-decoded global motion vector gmv from the internal memory, and calculates a representative motion vector for each small block based on it. The calculated result is output as the representative motion vector of the already-decoded small block when the next local motion vector information is decoded by the motion-vector decoding section 11 and the prediction error dmvt is output.
  • Selector 13 operates in one of the following manners. In the first case, the selector 13 outputs the representative motion vector output from the representative-motion-vector calculating section 12, or the motion vector mvt-1 of the already-decoded small block output from the motion vector memory 14, according to the motion-compensating mode of the target small block, the encoding mode of the already-decoded small block, and the motion-compensating mode of the already-decoded small block. In the second case, the selector 13 enters the neutral state and outputs no motion vector.
  • Motion vector memory 14 stores the local motion vector mvt of the already-decoded small block, which is output from the adder 15. When next the prediction error dmvt is output from the motion-vector decoding section 11, the stored local motion vector is output as the local motion vector mvt-1 of the already-decoded small block. In addition, the adder 15 adds the output from selector 13 to the prediction error dmvt output from the motion-vector decoding section 11, and outputs the added result as local motion vector mvt into motion vector memory 14 and the outside.
  • The operations of the motion vector decoding apparatus having the above-explained structure will be explained below. When global motion vector information is supplied from the motion vector predictive encoding apparatus as shown in FIG. 3, the supplied global motion vector information is decoded into global motion vector gmv by motion-vector decoding section 10, and is output into the representative-motion-vector calculating section 12 and the outside. Accordingly, the global motion vector gmv is stored into the internal memory in the representative-motion-vector calculating section 12.
  • In this phase, no local motion vector information is supplied from the motion vector predictive encoding apparatus as shown in FIG. 3; the selector 13 recognizes that the supplied motion vector information is the global motion vector information and enters the neutral state. Therefore, no local motion vector mvt is output from the adder 15.
  • When local motion vector information is supplied from the motion vector predictive encoding apparatus as shown in FIG. 3 after the above operations, the supplied local motion vector information is decoded into the prediction error dmvt in the motion-vector decoding section 11 (this process corresponds to step S12 in FIG. 2), and is output into adder 15. In this case, the motion-compensating method of the target small block is the LMC and the motion-compensating method of the already-decoded small block is the GMC; thus, in the representative-motion-vector calculating section 12, the motion vector “vt-1(m, n) = (xt-1(m, n), yt-1(m, n))” of the translational motion model is calculated for each pixel of the already-decoded small block, based on the global motion vector “gmv” stored in the internal memory (this process corresponds to step S16 in FIG. 2). Here, m and n indicate the position of the relevant pixel in the small block.
  • The representative motion vector is then calculated from the calculated motion vector vt-1(m, n), and is output as the motion vector of the already-decoded small block (this process corresponds to step S17 in FIG. 2). Here, the representative motion vector is calculated based on the average of the motion vector vt-1(m, n) for each pixel.
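  • The conversion performed in steps S16 and S17 can be illustrated with a short sketch. The six-parameter affine form assumed below for the global motion vector gmv is only an example, since the patent leaves the global motion model general; the block size of 16 is likewise an assumption.

        def representative_mv(gmv, x0, y0, block_size=16):
            # gmv = (a, b, c, d, e, f): assumed affine global motion parameters.
            # (x0, y0): top-left pixel of the already-decoded small block.
            a, b, c, d, e, f = gmv
            sx = sy = 0.0
            for n in range(block_size):
                for m in range(block_size):
                    x, y = x0 + m, y0 + n
                    # Position of the pixel after global motion compensation.
                    wx = a * x + b * y + c
                    wy = d * x + e * y + f
                    # Translational motion vector v_{t-1}(m, n) of this pixel.
                    sx += wx - x
                    sy += wy - y
            n_pix = block_size * block_size
            return (sx / n_pix, sy / n_pix)  # average over all pixels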
  • The selector 13 determines the encoding mode of the already-decoded small block. If the mode is the intraframe coding mode, the selector is in the neutral state and no signal is output from the selector (this process corresponds to step S14 in FIG. 2). Therefore, no element is added to the decoded prediction error dmvt, and the prediction error is output as the local motion vector mvt into motion vector memory 14 and the outside (this process corresponds to the operation using the above-described equations (22) and (23), performed in step S18 in FIG. 2).
  • On the other hand, if selector 13 determines that the encoding mode of the already-decoded small block is the interframe coding mode, the motion-compensating mode of the already-decoded small block is then determined (this process corresponds to step S15 in FIG. 2). Here, the motion-compensating mode of the already-decoded small block is the GMC; thus, the selector 13 chooses the representative motion vector output from the representative-motion-vector calculating section 12, and outputs the vector into adder 15.
  • Accordingly, in the adder 15, the prediction error dmvt which was decoded in the motion-vector decoding section 11 and the representative motion vector are added to each other (this process corresponds to the operation using the above-described equations (20) and (21), performed in step S18 in FIG. 2), and the added result is output as the local motion vector mvt into the motion vector memory 14 and the outside.
  • If another local motion vector information is supplied from the motion vector predictive encoding apparatus as shown in FIG. 3 after the above operations are executed, the supplied local motion vector information is decoded into the prediction error dmvt in the motion-vector decoding section 11 and is output into adder 15. The selector 13 determines the encoding mode of the already-decoded small block again. If the mode is the intraframe coding mode, the selector 13 enters the neutral state and no signal is output from the selector (this process corresponds to step S14 in FIG. 2). Therefore, no element is added to the decoded prediction error dmvt in adder 15, and the prediction error is output as local motion vector mvt into motion vector memory 14 and the outside (this process corresponds to the operation using the above-described equations (22) and (23), performed in step S18 in FIG. 2).
  • On the other hand, if selector 13 determines that the encoding mode of the already-decoded small block is the interframe coding mode, then the motion-compensating mode of the already-decoded small block is determined (this process corresponds to step S15 in FIG. 2). Here, the motion-compensating mode of the already-decoded small block is the LMC; thus, the selector 13 chooses the local motion vector mvt-1 of the already-decoded small block output from motion vector memory 14, and outputs the vector into adder 15.
  • Accordingly, in the adder 15, the local motion vector mvt-1 of the already-decoded small block, which was output from the motion vector memory 14, and the prediction error dmvt are added to each other (this process corresponds to the operation using the above-described equations (24) and (25), performed in step S18 in FIG. 2), and the added result is output as local motion vector mvt into motion vector memory 14 and the outside.
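  • The adder-and-selector behavior just described mirrors the encoder side and can be sketched as follows. Again, this is only an illustrative reconstruction; the names (decode_mv, coding_mode, mc_mode) are assumptions.

        def decode_mv(dmv_t, prev_block):
            """Reconstruct the local motion vector mv_t of the target small block.

            dmv_t      -- decoded prediction error (x, y)
            prev_block -- already-decoded small block info, or None
            """
            if prev_block is None or prev_block["coding_mode"] == "intra":
                # Selector 13 in the neutral state: nothing is added,
                # so the decoded prediction error is already mv_t.
                return dmv_t
            if prev_block["mc_mode"] == "LMC":
                px, py = prev_block["mv"]                 # mv_{t-1} from motion vector memory 14
            else:                                         # GMC
                px, py = prev_block["representative_mv"]  # from the representative-MV calculator
            # The adder's role: mv_t = dmv_t + predictor, component-wise.
            return (dmv_t[0] + px, dmv_t[1] + py)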
  • As the representative motion vector calculated in each of the representative-motion-vector calculating sections 2 and 12 in the motion vector predictive encoding apparatus and motion vector decoding apparatus as shown in FIGS. 3 and 4, instead of the average of the motion vectors of each pixel of the relevant small block, a statistic such as the maximum value, the minimum value, the intermediate value, the mode, or the median may be used, as described in the above items (1-1: encoding method) and (1-2: decoding method); a sketch of this choice of statistic follows.
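  • As a hedged illustration only, the per-component statistic could be selected as below. Reading “intermediate value” as the midpoint of the minimum and maximum is an assumption; the patent does not define it further.

        import statistics

        def representative_component(values, stat="mean"):
            # values: one component (x or y) of the per-pixel motion
            # vectors of the small block.
            if stat == "mean":
                return statistics.mean(values)
            if stat == "median":
                return statistics.median(values)
            if stat == "mode":
                return statistics.mode(values)
            if stat == "max":
                return max(values)
            if stat == "min":
                return min(values)
            if stat == "intermediate":  # midpoint of the range (assumed meaning)
                return (max(values) + min(values)) / 2.0
            raise ValueError(stat)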
  • Second Embodiment
  • (1) Motion Vector Predictive Encoding Method and Motion Vector Decoding Method
  • Hereinbelow, the motion vector predictive encoding method and motion vector decoding method of the second embodiment according to the present invention will be explained. The present encoding and decoding methods differ from those of the first embodiment in an additional operation: if the value of the predicted vector obtained based on the global motion parameters is not within the representable range of the local motion vector, the predicted vector is clipped to the minimum or maximum value of that range.
  • (1-1) Motion Vector Predictive Encoding Method
  • Hereinafter, the motion vector predictive encoding method of the second embodiment will be explained with reference to the flowchart in FIG. 5. FIG. 5 shows a flowchart explaining the motion vector predictive encoding method of the second embodiment. In this figure, steps identical to those in the operations of the motion vector predictive encoding method as shown in FIG. 1 are given identical reference numbers, and detailed explanations are omitted here.
  • The motion vector predictive encoding method as shown in FIG. 5 differs from that as shown in FIG. 1 in the point that after the representative motion vector is calculated in step S6, it is determined whether the value of the calculated representative motion vector is within a predetermined range, and if the value is not within the range, the clipping of the value of the representative motion vector is performed.
  • That is, as shown in the first embodiment, if (i) the motion-compensating mode of the target small block is determined as the LMC in step S1, (ii) the encoding mode of the already-encoded small block is determined as the interframe coding mode in step S2, and (iii) the motion-compensating mode of the already-encoded small block is determined as the GMC in step S4, then in step S5, the motion vector of the translational motion model is calculated for each pixel of the already-encoded small block, based on the global motion vector GMV.
  • In the next step S6, the average of the motion vectors (calculated in the above step S5) of each pixel of the already-encoded small block is calculated, and the calculated average is determined as the representative motion vector. As in the motion vector predictive encoding method in the first embodiment, the representative motion vector may be not only the average of the motion vectors of each pixel of the already-encoded small block, but also a statistic such as the maximum value, the minimum value, the intermediate value, the mode, or the median.
  • In the next step S20, it is determined whether the value of the representative motion vector calculated in step S6 is within a predetermined range. If the value is not within the range, the operation shifts to step S21 where the above representative motion vector is clipped so as to make the value thereof within this range.
  • Here, it is assumed that the possible range for representing the motion vector (for example, the range defined by the “fcode” in the MPEG-4 as shown in List 1) is from MVmin to MVmax. If the value of the representative motion vector is less than MVmin, then the representative motion vector is clipped so as to have value MVmin. If the value of the representative motion vector exceeds MVmax, then the representative motion vector is clipped so as to have value MVmax. For example, if the motion vector range is from −64 to +63.5 in the case of “fcode=3” (refer to List 1) as defined in the MPEG-4, and the predicted vector vp determined based on the global motion parameters is (xPV, yPV) = (+102, +75), then these values are forcibly clipped to (63.5, 63.5), as sketched below.
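  • A minimal sketch of this saturation, using the fcode=3 range of MPEG-4 as an example (the names are illustrative, not from the patent):

        MV_MIN, MV_MAX = -64.0, 63.5  # example range: fcode = 3 in MPEG-4

        def clip_predictor(vp):
            # Saturate each component of the predicted vector to [MV_MIN, MV_MAX].
            return tuple(min(max(c, MV_MIN), MV_MAX) for c in vp)

        # clip_predictor((+102.0, +75.0)) -> (63.5, 63.5)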
  • If it is determined in step S20 that the value of the representative motion vector is within the predetermined range, then the representative motion vector calculated in step S6 is determined as predicted vector vp.
  • The operation then shifts to step S7, where the difference between the motion vector of the target small block and the predicted vector (i.e., the prediction error) is calculated. In the MPEG-4, three blocks, namely, the block to the left, the block immediately above, and the block diagonally above and to the right, are referred to, as shown in FIG. 13. Therefore, the processes of steps S2-S6, S20, S21, and S7 are performed for each of these blocks, and the median of the three candidate predicted vectors is determined as the predicted vector (see the sketch after the next paragraph).
  • The operation then shifts to step S8 where the prediction error determined in step S7 is encoded, and the encoding operation of the second embodiment is finished.
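  • The MPEG-4-style median prediction referred to above can be sketched as follows; the candidate predicted vectors are assumed to have already been derived (and, where necessary, clipped) per steps S2-S6, S20, and S21 for each of the three neighboring blocks.

        def median3(a, b, c):
            # Median of three scalars.
            return sorted((a, b, c))[1]

        def median_predictor(cand_left, cand_above, cand_above_right):
            # Component-wise median of the three candidate predicted vectors.
            return (median3(cand_left[0], cand_above[0], cand_above_right[0]),
                    median3(cand_left[1], cand_above[1], cand_above_right[1]))

        # The prediction error of step S7 is then computed component-wise as
        # (mv_t[0] - vp[0], mv_t[1] - vp[1]) with vp = median_predictor(...).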
  • (1-2) Motion Vector Decoding Method
  • Hereinbelow, the motion vector decoding method of the second embodiment will be explained with reference to the flowchart as shown in FIG. 6. FIG. 6 shows the flowchart explaining the motion vector decoding method of the second embodiment. In this figure, steps identical to those in the operations of the motion vector decoding method as shown in FIG. 2 are given identical reference numbers, and detailed explanations are omitted here.
  • The motion vector decoding method as shown in FIG. 6 differs from that as shown in FIG. 2 in the point that after the representative motion vector is calculated in step S27, it is determined whether the value of the calculated representative motion vector is within a predetermined range, and if the value is not within the range, the clipping of the value of the representative motion vector is performed.
  • That is, as shown in the first embodiment, if the motion-compensating mode of the target small block is determined as the LMC in step S11, then in step S12, the prediction error is decoded. If (i) the encoding mode of the already-decoded small block is determined as the interframe coding mode in step S13, and (ii) the motion-compensating mode of the already-decoded small block is the GMC, then in step S16, the motion vector of the translational motion model is calculated for each pixel of the already-decoded small block, based on the global motion vector GMV.
  • In the next step S17, the average of the motion vectors (calculated in the above step S16) of each pixel of the already-decoded small block is calculated, and the calculated average is determined as the representative motion vector. As in the motion vector decoding method in the first embodiment, the representative motion vector may be not only the average of the motion vectors of each pixel of the already-decoded small block, but also a statistic such as the maximum value, the minimum value, the intermediate value, the mode, or the median.
  • In the next step S22, it is determined whether the value of the representative motion vector calculated in step S17 is within a predetermined range. If the value is not within the range, the operation shifts to step S23, where the above representative motion vector is clipped so as to make the value thereof within the range.
  • Here, it is assumed that the possible range for representing the motion vector (for example, the range defined by the “fcode” in the MPEG-4 as shown in List 1) is from MVmin to MVmax. If the value of the representative motion vector is less than MVmin, then the representative motion vector is clipped so as to have value MVmin. If the value of the representative motion vector exceeds MVmax, then the representative motion vector is clipped so as to have value MVmax. For example, if the motion vector range is from −64 to +63.5 in the case of “fcode=3” (refer to List 1) as defined in the MPEG-4, and the predicted vector vp determined based on the global motion parameters is (xPV, yPV) = (+102, +75), then these values are forcibly clipped to (63.5, 63.5).
  • If it is determined in step S22 that the value of the representative motion vector is within the predetermined range, then the representative motion vector calculated in step S17 is determined as predicted vector vp.
  • The operation then shifts to step S18, where the prediction error of the target small block decoded in step S12 and the predicted vector are added. In the MPEG-4, three blocks, namely, the block to the left, the block immediately above, and the block diagonally above and to the right, are referred to, as shown in FIG. 13. Therefore, the processes of steps S12-S17, S22, S23, and S18 are performed for each of these blocks, and the median of the three candidate predicted vectors is determined as the predicted vector. The decoding operation of the second embodiment is then finished.
  • In the above-described motion vector predictive encoding method and motion vector decoding method, the clipping operation performed in step S21 (see FIG. 5) and step S23 (see FIG. 6) uses the maximum or minimum value of a predetermined range; however, the clipping may be performed at value 0.
  • Here, it is assumed that the possible range for representing the motion vector is from MVmin to MVmax. If the value of the representative motion vector is less than MVmin, then the representative motion vector is clipped so as to have value 0. If the value of the representative motion vector exceeds MVmax, then the representative motion vector is also clipped so as to have value 0. For example, if the motion vector range is from −64 to +63.5 in the case of “fcode=3” (refer to List 1) as defined in the MPEG-4, and the predicted vector vp determined based on the global motion parameters is (xPV, yPV) = (+102, +75), then these values are forcibly set to (0, 0), as sketched below.
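  • A hedged sketch of this clip-to-zero variant, treated per component here, which is an assumption; in the example above both components are out of range, so the result is (0, 0) either way:

        def clip_predictor_to_zero(vp, mv_min=-64.0, mv_max=63.5):
            # Any component outside [mv_min, mv_max] is forced to 0.
            return tuple(0.0 if (c < mv_min or c > mv_max) else c for c in vp)

        # clip_predictor_to_zero((+102.0, +75.0)) -> (0.0, 0.0)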
  • (2) Motion Vector Predictive Encoding Apparatus and Decoding Apparatus
  • (2-1) Motion Vector Predictive Encoding Apparatus
  • The motion vector predictive encoding apparatus for performing the motion-vector predictive encoding according to the motion vector predictive encoding method (refer to FIG. 5) of the second embodiment will be explained with reference to FIG. 7. In FIG. 7, parts identical to those of the motion vector predictive encoding apparatus shown in FIG. 3 are given identical reference numbers, and explanations thereof are omitted here.
  • The motion vector predictive encoding apparatus shown in FIG. 7 differs from that in FIG. 3 in the point that representative-motion-vector clipping section 20 is provided between representative-motion-vector calculating section 2 and one of the input terminals of selector 3. The representative-motion-vector clipping section 20 determines whether the value of the representative motion vector output from the representative-motion-vector calculating section 2 is within a predetermined range. If the value is not within the range, the clipping section 20 clips the value of the representative motion vector to the maximum or minimum value of the range.
  • According to the motion vector predictive encoding apparatus having the above-explained structure, when a representative motion vector is calculated and output from the representative-motion-vector calculating section 2, that is, when processes corresponding to steps S5 and S6 in FIGS. 1 and 5 are performed in the representative-motion-vector calculating section 2, then in the representative-motion-vector clipping section 20, it is determined whether the value of the representative motion vector calculated in the representative-motion-vector calculating section 2 is within a predetermined range (this process corresponds to step S20 in FIG. 5).
  • If the value of the representative motion vector is not within the predetermined range, the representative motion vector is clipped so as to have a value within this range (this process corresponds to step S21 in FIG. 5), and the representative motion vector after the clipping is output into the selector 3. If the value of the representative motion vector is within the predetermined range, the original representative motion vector calculated in the representative-motion-vector calculating section 2 is output into the selector 3.
  • Here, it is assumed that the possible range for representing the motion vector (for example, the range defined by the “fcode” in the MPEG-4 as shown in List 1) is from MVmin to MVmax. If the value of the representative motion vector is less than MVmin, then the representative motion vector is clipped so as to have value MVmin. If the value of the representative motion vector exceeds MVmax, then the representative motion vector is clipped so as to have value MVmax. For example, if the motion vector range is from −64 to +63.5 in the case of “fcode=3” (refer to List 1) as defined in the MPEG-4, and the predicted vector vp determined based on the global motion parameters is (xPV, yPV) = (+102, +75), then these values are forcibly clipped to (63.5, 63.5).
  • The clipping operation performed in the representative-motion-vector clipping section 20 uses the maximum or minimum value of a predetermined range; however, the clipping may be performed at value 0.
  • Here, it is assumed that the possible range for representing the motion vector is from MVmin to MVmax. If the value of the representative motion vector is less than MVmin, then the representative motion vector is clipped so as to have value 0. If the value of the representative motion vector exceeds MVmax, then the representative motion vector is also clipped so as to have value 0. For example, if the motion vector range is from −64 to +63.5 in the case of “fcode=3” (refer to List 1) as defined in the MPEG-4, and the predicted vector vp determined based on the global motion parameters is (xPV, yPV) = (+102, +75), then these values are forcibly set to (0, 0).
  • Accordingly, the motion vector predictive encoding apparatus for performing the motion-vector predictive encoding according to the motion vector predictive encoding method as shown in the flowchart of FIG. 5 can be realized.
  • (2-2) Motion Vector Decoding Apparatus
  • The motion vector decoding apparatus for performing the motion-vector decoding according to the motion vector decoding method (refer to FIG. 6) of the second embodiment will be explained with reference to FIG. 8. In FIG. 8, parts identical to those of the motion vector decoding apparatus shown in FIG. 4 are given identical reference numbers, and explanations thereof are omitted here.
  • The motion vector decoding apparatus shown in FIG. 8 differs from that in FIG. 4 in the point that representative-motion-vector clipping section 21 is provided between representative-motion-vector calculating section 12 and one of the input terminals of selector 13. The representative-motion-vector clipping section 21 determines whether the value of the representative motion vector output from the representative-motion-vector calculating section 12 is within a predetermined range. If the value is not within the range, the clipping section 21 clips the value of the representative motion vector to the maximum or minimum value of the range.
  • According to the motion vector decoding apparatus having the above-explained structure, when a representative motion vector is calculated and output from the representative-motion-vector calculating section 12, that is, when processes corresponding to steps S16 and S17 in FIGS. 2 and 6 are performed in the representative-motion-vector calculating section 12, then in the representative-motion-vector clipping section 21, it is determined whether the value of the representative motion vector calculated in the representative-motion-vector calculating section 12 is within a predetermined range (this process corresponds to step S22 in FIG. 6).
  • If the value of the representative motion vector is not within the predetermined range, the representative motion vector is clipped so as to have a value within this range (this process corresponds to step S23 in FIG. 6), and the representative motion vector after the clipping is output into the selector 13. If the value of the representative motion vector is within the predetermined range, the original representative motion vector calculated in the representative-motion-vector calculating section 12 is output into the selector 13.
  • Here, it is assumed that the possible range for representing the motion vector (for example, the range defined by the “fcode” in the MPEG-4 as shown in List 1) is from MVmin to MVmax. If the value of the representative motion vector is less than MVmin, then the representative motion vector is clipped so as to have value MVmin. If the value of the representative motion vector exceeds MVmax, then the representative motion vector is clipped so as to have value MVmax. For example, if the motion vector range is from −64 to +63.5 in the case of “fcode=3” (refer to List 1) as defined in the MPEG-4, and the predicted vector vp determined based on the global motion parameters is (xPV, yPV) = (+102, +75), then these values are forcibly clipped to (63.5, 63.5).
  • The clipping operation performed in the representative-motion-vector clipping section 21 uses the maximum or minimum value of a predetermined range; however, the clipping may be performed at value 0.
  • Here, it is assumed that the possible range for representing the motion vector is from MVmin to MVmax. If the value of the representative motion vector is less than MVmin, then the representative motion vector is clipped so as to have value 0. If the value of the representative motion vector exceeds MVmax, then the representative motion vector is also clipped so as to have value 0. For example, if the motion vector range is from −64 to +63.5 in the case of “fcode=3” (refer to List 1) as defined in the MPEG-4, and the predicted vector vp determined based on the global motion parameters is (xPV, yPV) = (+102, +75), then these values are forcibly set to (0, 0).
  • Accordingly, the motion vector decoding apparatus for performing the motion-vector decoding according to the motion vector decoding method as shown in the flowchart of FIG. 6 can be realized.
  • In the above first and second embodiments, programs for executing the following operations may be stored in a computer-readable storage medium such as a CD-ROM or a floppy disk, and each program stored in the storage medium may be loaded and executed by a computer so as to perform the motion vector predictive encoding: the motion-vector predictive encoding operations as shown in the flowcharts of FIG. 1 and FIG. 5, and operations of motion vector memory 1, representative-motion-vector calculating section 2, selector 3, subtracter 4, motion-vector encoding sections 5 and 6 in the block diagram of FIG. 3, and representative-motion-vector clipping section 20 in the block diagram of FIG. 7.
  • Similarly, in order to perform the motion vector decoding, programs for executing the following operations may be stored in a computer-readable storage medium such as a CD-ROM or a floppy disk, and each program stored in the storage medium may be loaded and executed by a computer: the motion-vector decoding operations as shown in the flowcharts of FIG. 2 and FIG. 6, and operations of motion-vector decoding sections 10 and 11, representative-motion-vector calculating section 12, selector 13, motion vector memory 14, adder 15, and representative-motion-vector clipping section 21 in the block diagram of FIG. 8.
  • The present invention is not limited to the above-described embodiments, but various variations and applications are possible in the scope of the claimed invention.

Claims (41)

1-96. (canceled)
97. A motion vector predictive encoding method in which a target frame to be encoded is divided into target small blocks and a motion-compensating method is applied to each target small block to be encoded, a motion vector of a target small block is predicted and calculated (S7) based on a motion vector of already-encoded small blocks to produce a predicted vector, and a prediction error of the motion vector is encoded,
characterized in that if the motion-compensating method used for the target small block is a local motion-compensating method (LMC) and the motion-compensating method used for the already-encoded small blocks for the prediction is a global motion-compensating method (GMC), then the motion vector of the target small block is predicted by converting (S6) a global motion vector used in the global motion-compensating method and obtained from the already-encoded small blocks for the prediction into a local motion vector used in the local motion-compensating method to produce the predicted vector, clipping (S21) the predicted vector to a value within a predetermined range if the predicted vector is not within the predetermined range, and producing the prediction error of the motion vector using the clipped predicted vector.
98. A motion vector predictive encoding method as claimed in claim 97, wherein if the value of the predicted vector with respect to the motion vector of the target small block is not within a predetermined range, the value of the predicted vector is set to 0.
99. A motion vector decoding method for decoding a motion vector which was encoded using a motion vector predictive encoding method, in which a target frame to be encoded is divided into target small blocks and a motion-compensating method is applied to each target small block to be encoded, a motion vector of the target small block is predicted and calculated (S17) based on a motion vector of already-encoded small blocks to produce a predicted vector and a prediction error of the motion vector is encoded,
characterized in that if the motion-compensating method used for the target small blocks is the local motion-compensating method (LMC) and the motion-compensating method used for the already-encoded small blocks for the prediction is the global motion-compensating method (GMC), then the motion vector of the target small block is predicted by converting a global motion vector used in the global motion-compensating method and obtained from the already-encoded small blocks into a local motion vector used in the local motion-compensating method to produce the predicted vector, the predicted vector is clipped (S23) to a predetermined value if the predicted vector is not within a predetermined range, and the motion vector is decoded by adding the prediction error to the predicted vector (S18).
100. A motion vector decoding method as claimed in claim 99, wherein if the value of the predicted vector is not within a predetermined range, the predicted vector is clipped so as to have a value within the predetermined range.
101. A motion vector decoding method as claimed in claim 99, wherein if the value of the predicted vector is not within a predetermined range, the value of the predicted vector is set to 0.
102. A motion vector predictive encoding method as claimed in claim 97, wherein when the motion vector of the small block in any motion model is converted into a motion vector in another motion model having fewer parameters with respect to the motion vector than parameters of the above motion model, a representative motion vector of the small block is calculated based on the motion vector in the other motion model determined for each pixel in the small block.
103. A motion vector predictive encoding method as claimed in claim 98, wherein when the motion vector of the small block in any motion model is converted into a motion vector in another motion model having fewer parameters with respect to the motion vector than parameters of the above motion model, a representative motion vector of the small block is calculated based on the motion vector in the other motion model determined for each pixel in the small block.
104. A motion vector decoding method as claimed in claim 99, wherein when the motion vector of the small block in any motion model is converted into a motion vector in another motion model having fewer parameters with respect to the motion vector than parameters of the above motion model, a representative motion vector of the small block is calculated based on the motion vector in the other motion model determined for each pixel in the small block.
105. A motion vector decoding method as claimed in claim 100, wherein when the motion vector of the small block in any motion model is converted into a motion vector in another motion model having fewer parameters with respect to the motion vector than parameters of the above motion model, a representative motion vector of the small block is calculated based on the motion vector in the other motion model determined for each pixel in the small block.
106. A motion vector decoding method as claimed in claim 101, wherein when the motion vector of the small block in any motion model is converted into a motion vector in another motion model having fewer parameters with respect to the motion vector than parameters of the above motion model, a representative motion vector of the small block is calculated based on the motion vector in the other motion model determined for each pixel in the small block.
107. A motion vector predictive encoding method as claimed in claim 97, wherein when the motion vector of the small block in any motion model is converted into a motion vector in a translational motion model, a representative motion vector of the small block is calculated based on the motion vector in the translational motion model determined for each pixel in the small block.
108. A motion vector predictive encoding method as claimed in claim 98, wherein when the motion vector of the small block in any motion model is converted into a motion vector in a translational motion model, a representative motion vector of the small block is calculated based on the motion vector in the translational motion model determined for each pixel in the small block.
109. A motion vector decoding method as claimed in claim 99, wherein when the motion vector of the small block in any motion model is converted into a motion vector in a translational motion model, a representative motion vector of the small block is calculated based on the motion vector in the translational motion model determined for each pixel in the small block.
110. A motion vector decoding method as claimed in claim 100, wherein when the motion vector of the small block in any motion model is converted into a motion vector in a translational motion model, a representative motion vector of the small block is calculated based on the motion vector in the translational motion model determined for each pixel in the small block.
111. A motion vector decoding method as claimed in claim 101, wherein when the motion vector of the small block in any motion model is converted into a motion vector in a translational motion model, a representative motion vector of the small block is calculated based on the motion vector in the translational motion model determined for each pixel in the small block.
112. A motion vector predictive encoding method as claimed in claim 107, wherein in the calculation of the representative motion vector, each component of the representative motion vector is set to one of the average, intermediate value, median, mode, maximum value, and minimum value, which is calculated for each component of the motion vector in the translational motion model of each pixel in the small block.
113. A motion vector predictive encoding method as claimed in claim 108, wherein in the calculation of the representative motion vector, each component of the representative motion vector is set to one of the average, intermediate value, median, mode, maximum value, and minimum value, which is calculated for each component of the motion vector in the translational motion model of each pixel in the small block.
114. A motion vector decoding method as claimed in claim 109, wherein in the calculation of the representative motion vector, each component of the representative motion vector is set to one of the average, intermediate value, median, mode, maximum value, and minimum value, which is calculated for each component of the motion vector in the translational motion model of each pixel in the small block.
115. A motion vector decoding method as claimed in claim 110, wherein in the calculation of the representative motion vector, each component of the representative motion vector is set to one of the average, intermediate value, median, mode, maximum value, and minimum value, which is calculated for each component of the motion vector in the translational motion model of each pixel in the small block.
116. A motion vector decoding method as claimed in claim 111, wherein in the calculation of the representative motion vector, each component of the representative motion vector is set to one of the average, intermediate value, median, mode, maximum value, and minimum value, which is calculated for each component of the motion vector in the translational motion model of each pixel in the small block.
117. A computer-readable storage medium storing a motion vector predictive encoding program for executing a motion vector predictive encoding method in which a target frame to be encoded is divided into small blocks and a motion-compensating method is applied to each target small block to be encoded, a motion vector of a target small block is predicted and calculated based on a motion vector of already-encoded small blocks to produce a predicted vector, and a prediction error of the motion vector is encoded,
the encoding program being characterized in that, the motion-compensating method used for the target small block being a local motion-compensating method (LMC) and the motion-compensating method used for the already-encoded small blocks for the prediction being a global motion-compensating method (GMC), it includes the steps of:
calculating a predicted motion vector of the target small block by converting a global motion vector used in the global motion-compensating method and obtained from the already-encoded small blocks for the prediction into a local motion vector; and
clipping the predicted vector to have a value within a predetermined range if the value of the predicted vector is not within the predetermined range.
118. A computer-readable storage medium storing a motion vector predictive encoding program as claimed in claim 117, including the step of:
setting the value of the predicted vector with respect to the motion vector of the target small block to 0 if the value of the predicted vector is not within a predetermined range.
119. A computer-readable storage medium storing a motion vector decoding program for decoding the motion vector which was encoded by executing a motion vector predictive encoding program, in which a target frame to be encoded is divided into target small blocks and a motion-compensating method is applied to each target small block to be encoded, a motion vector of the target small block is predicted and calculated based on a motion vector of already-encoded small blocks to produce a predicted vector, and a prediction error of the motion vector is encoded,
the decoding program being characterized in that, the motion-compensating method used for the target small blocks being the local motion-compensating method (LMC) and the motion-compensating method used for the already-encoded small blocks for the prediction being the global motion-compensating method (GMC), it includes the steps of:
calculating (S17) a predicted vector with respect to the motion vector of the target small block;
clipping (S23) the predicted vector to a predetermined value if the predicted vector is not within a predetermined range; and
decoding the motion vector by adding the prediction error to the predicted vector (S18).
120. A computer-readable storage medium storing a motion vector decoding program as claimed in claim 119 including the step of:
clipping the predicted vector so as to have a value within a predetermined range if the value of the predicted vector is not within the predetermined range.
121. A computer-readable storage medium storing a motion vector decoding program as claimed in claim 119, including the step of:
setting the value of the predicted vector to 0 if the value of the predicted vector is not within a predetermined range.
122. A computer-readable storage medium storing a motion vector predictive encoding program as claimed in claim 117, including the step of:
calculating a representative motion vector of the small block when the motion vector of the small block in any motion model is converted into a motion vector in another motion model having fewer parameters of the motion vector than parameters of the above motion model, wherein the representative motion vector is calculated based on the motion vector in the other motion model determined for each pixel in the small block.
123. A computer-readable storage medium storing a motion vector predictive encoding program as claimed in claim 118, including the step of:
calculating a representative motion vector of the small block when the motion vector of the small block in any motion model is converted into a motion vector in another motion model having fewer parameters of the motion vector than parameters of the above motion model, wherein the representative motion vector is calculated based on the motion vector in the other motion model determined for each pixel in the small block.
124. A computer-readable storage medium storing a motion vector decoding program as claimed in claim 119, including the step of:
calculating a representative motion vector of the small block when the motion vector of the small block in any motion model is converted into a motion vector in another motion model having fewer parameters of the motion vector than parameters of the above motion model, wherein the representative motion vector is calculated based on the motion vector in the other motion model determined for each pixel in the small block.
125. A computer-readable storage medium storing a motion vector decoding program as claimed in claim 120, including the step of:
calculating a representative motion vector of the small block when the motion vector of the small block in any motion model is converted into a motion vector in another motion model having fewer parameters of the motion vector than parameters of the above motion model, wherein the representative motion vector is calculated based on the motion vector in the other motion model determined for each pixel in the small block.
126. A computer-readable storage medium storing a motion vector decoding program as claimed in claim 121, including the step of:
calculating a representative motion vector of the small block when the motion vector of the small block in any motion model is converted into a motion vector in another motion model having fewer parameters of the motion vector than parameters of the above motion model, wherein the representative motion vector is calculated based on the motion vector in the other motion model determined for each pixel in the small block.
127. A computer-readable storage medium storing a motion vector predictive encoding program as claimed in claim 117, including the step of:
calculating a representative motion vector of the small block when the motion vector of the small block in any motion model is converted into a motion vector in a translational motion model, wherein the representative motion vector is calculated based on the motion vector in the translational motion model determined for each pixel in the small block.
128. A computer-readable storage medium storing a motion vector predictive encoding program as claimed in claim 118, including the step of:
calculating a representative motion vector of the small block when the motion vector of the small block in any motion model is converted into a motion vector in a translational motion model, wherein the representative motion vector is calculated based on the motion vector in the translational motion model determined for each pixel in the small block.
129. A computer-readable storage medium storing a motion vector decoding program as claimed in claim 119, including the step of:
calculating a representative motion vector of the small block when the motion vector of the small block in any motion model is converted into a motion vector in a translational motion model, wherein the representative motion vector is calculated based on the motion vector in the translational motion model determined for each pixel in the small block.
130. A computer-readable storage medium storing a motion vector decoding program as claimed in claim 120, including the step of:
calculating a representative motion vector of the small block when the motion vector of the small block in any motion model is converted into a motion vector in a translational motion model, wherein the representative motion vector is calculated based on the motion vector in the translational motion model determined for each pixel in the small block.
131. A computer-readable storage medium storing a motion vector decoding program as claimed in claim 121, including the step of:
calculating a representative motion vector of the small block when the motion vector of the small block in any motion model is converted into a motion vector in a translational motion model, wherein the representative motion vector is calculated based on the motion vector in the translational motion model determined for each pixel in the small block.
132. A computer-readable storage medium storing a motion vector predictive encoding program as claimed in claim 127, including the step of:
calculating the representative motion vector by setting each component thereof to one of the average, intermediate value, median, mode, maximum value, and minimum value, which is calculated for each component of the motion vector in the translational motion model of each pixel in the small block.
133. A computer-readable storage medium storing a motion vector predictive encoding program as claimed in claim 128, including the step of:
calculating the representative motion vector by setting each component thereof to one of the average, intermediate value, median, mode, maximum value, and minimum value, which is calculated for each component of the motion vector in the translational motion model of each pixel in the small block.
134. A computer-readable storage medium storing a motion vector decoding program as claimed in claim 129, including the step of:
calculating the representative motion vector by setting each component thereof to one of the average, intermediate value, median, mode, maximum value, and minimum value, which is calculated for each component of the motion vector in the translational motion model of each pixel in the small block.
135. A computer-readable storage medium storing a motion vector decoding program as claimed in claim 130, including the step of:
calculating the representative motion vector by setting each component thereof to one of the average, intermediate value, median, mode, maximum value, and minimum value, which is calculated for each component of the motion vector in the translational motion model of each pixel in the small block.
136. A computer-readable storage medium storing a motion vector decoding program as claimed in claim 131, including the step of:
calculating the representative motion vector by setting each component thereof to one of the average, intermediate value, median, mode, maximum value, and minimum value, which is calculated for each component of the motion vector in the translational motion model of each pixel in the small block.
US11/726,971 1997-06-25 2007-03-22 Motion vector predictive encoding method, motion vector decoding method, predictive encoding apparatus and decoding apparatus, and storage media storing motion vector predictive encoding and decoding programs Abandoned US20070183505A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US11/726,971 US20070183505A1 (en) 1997-06-25 2007-03-22 Motion vector predictive encoding method, motion vector decoding method, predictive encoding apparatus and decoding apparatus, and storage media storing motion vector predictive encoding and decoding programs
US13/738,539 US9154789B2 (en) 1997-06-25 2013-01-10 Motion vector predictive encoding and decoding method using prediction of motion vector of target block based on representative motion vector

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
JP09-168947 1997-06-25
JP16894797 1997-06-25
JP09-189985 1997-07-15
JP18998597 1997-07-15
JPPCT/JP98/02839 1998-06-25
US25411699A 1999-02-25 1999-02-25
US10/354,663 US7206346B2 (en) 1997-06-25 2003-01-30 Motion vector predictive encoding method, motion vector decoding method, predictive encoding apparatus and decoding apparatus, and storage media storing motion vector predictive encoding and decoding programs
US11/726,971 US20070183505A1 (en) 1997-06-25 2007-03-22 Motion vector predictive encoding method, motion vector decoding method, predictive encoding apparatus and decoding apparatus, and storage media storing motion vector predictive encoding and decoding programs

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/354,663 Division US7206346B2 (en) 1997-06-25 2003-01-30 Motion vector predictive encoding method, motion vector decoding method, predictive encoding apparatus and decoding apparatus, and storage media storing motion vector predictive encoding and decoding programs

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/738,539 Division US9154789B2 (en) 1997-06-25 2013-01-10 Motion vector predictive encoding and decoding method using prediction of motion vector of target block based on representative motion vector

Publications (1)

Publication Number Publication Date
US20070183505A1 true US20070183505A1 (en) 2007-08-09

Family

ID=28046013

Family Applications (3)

Application Number Title Priority Date Filing Date
US10/354,663 Expired - Lifetime US7206346B2 (en) 1997-06-25 2003-01-30 Motion vector predictive encoding method, motion vector decoding method, predictive encoding apparatus and decoding apparatus, and storage media storing motion vector predictive encoding and decoding programs
US11/726,971 Abandoned US20070183505A1 (en) 1997-06-25 2007-03-22 Motion vector predictive encoding method, motion vector decoding method, predictive encoding apparatus and decoding apparatus, and storage media storing motion vector predictive encoding and decoding programs
US13/738,539 Expired - Fee Related US9154789B2 (en) 1997-06-25 2013-01-10 Motion vector predictive encoding and decoding method using prediction of motion vector of target block based on representative motion vector

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10/354,663 Expired - Lifetime US7206346B2 (en) 1997-06-25 2003-01-30 Motion vector predictive encoding method, motion vector decoding method, predictive encoding apparatus and decoding apparatus, and storage media storing motion vector predictive encoding and decoding programs

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/738,539 Expired - Fee Related US9154789B2 (en) 1997-06-25 2013-01-10 Motion vector predictive encoding and decoding method using prediction of motion vector of target block based on representative motion vector

Country Status (1)

Country Link
US (3) US7206346B2 (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060182352A1 (en) * 2005-02-04 2006-08-17 Tetsuya Murakami Encoding apparatus and method, decoding apparatus and method, recording medium, and image processing system and method
US20070104275A1 (en) * 2005-11-02 2007-05-10 Heyward Simon N Motion estimation
US20080240247A1 (en) * 2007-03-29 2008-10-02 Samsung Electronics Co., Ltd. Method of encoding and decoding motion model parameters and video encoding and decoding method and apparatus using motion model parameters
US20090097565A1 (en) * 2007-07-23 2009-04-16 Huawei Technologies Co., Ltd. Vector coding/decoding apparatus and stream media player
US20090257498A1 (en) * 2008-04-15 2009-10-15 Sony Corporation Image processing apparatus and image processing method
US20090323808A1 (en) * 2008-06-25 2009-12-31 Micron Technology, Inc. Method and apparatus for motion compensated filtering of video signals
US20100049777A1 (en) * 2008-08-25 2010-02-25 Kabushiki Kaisha Toshiba Representation converting apparatus, arithmetic apparatus, representation converting method, and computer program product
US20100194932A1 (en) * 2009-02-03 2010-08-05 Sony Corporation Image processing device, image processing method, and capturing device
US20100283892A1 (en) * 2009-05-06 2010-11-11 Samsung Electronics Co., Ltd. System and method for reducing visible halo in digital video with covering and uncovering detection
US20120189167A1 (en) * 2011-01-21 2012-07-26 Sony Corporation Image processing device, image processing method, and program
US20130177082A1 (en) * 2011-12-16 2013-07-11 Panasonic Corporation Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus, and moving picture coding and decoding apparatus
US8929451B2 (en) 2011-08-04 2015-01-06 Imagination Technologies, Limited External vectors in a motion estimation system
US20150379683A1 (en) * 2014-06-26 2015-12-31 Lg Display Co., Ltd. Data processing apparatus for organic light emitting display device
CN105681808A (en) * 2016-03-16 2016-06-15 同济大学 Rapid decision-making method for SCC interframe coding unit mode
US11669007B2 (en) 2015-09-29 2023-06-06 Fujifilm Corporation Projection lens and projector

Families Citing this family (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7206346B2 (en) * 1997-06-25 2007-04-17 Nippon Telegraph And Telephone Corporation Motion vector predictive encoding method, motion vector decoding method, predictive encoding apparatus and decoding apparatus, and storage media storing motion vector predictive encoding and decoding programs
US7170941B2 (en) * 1999-08-13 2007-01-30 Patapsco Designs Inc. Temporal compression
US7050500B2 (en) * 2001-08-23 2006-05-23 Sharp Laboratories Of America, Inc. Method and apparatus for motion vector coding with global motion parameters
US7227896B2 (en) * 2001-10-04 2007-06-05 Sharp Laboratories Of America, Inc. Method and apparatus for global motion estimation
US7453940B2 (en) * 2003-07-15 2008-11-18 Lsi Corporation High quality, low memory bandwidth motion estimation processor
US20050105621A1 (en) * 2003-11-04 2005-05-19 Ju Chi-Cheng Apparatus capable of performing both block-matching motion compensation and global motion compensation and method thereof
TWI268715B (en) * 2004-08-16 2006-12-11 Nippon Telegraph & Telephone Picture encoding method, picture decoding method, picture encoding apparatus, and picture decoding apparatus
US8588304B2 (en) 2005-03-31 2013-11-19 Panasonic Corporation Video decoding device, video decoding method, video decoding program, and video decoding integrated circuit
JP4304528B2 (en) * 2005-12-01 2009-07-29 ソニー株式会社 Image processing apparatus and image processing method
KR101356735B1 (en) * 2007-01-03 2014-02-03 삼성전자주식회사 Mothod of estimating motion vector using global motion vector, apparatus, encoder, decoder and decoding method
WO2009066284A2 (en) * 2007-11-20 2009-05-28 Ubstream Ltd. A method and system for compressing digital video streams
JP5422168B2 (en) * 2008-09-29 2014-02-19 株式会社日立製作所 Video encoding method and video decoding method
TWI398169B (en) 2008-12-23 2013-06-01 Ind Tech Res Inst Motion vector coding mode selection method and related coding mode selection apparatus thereof, and machine readable medium thereof
US9456111B2 (en) * 2010-06-15 2016-09-27 Mediatek Inc. System and method for content adaptive clipping
EP2424243B1 (en) * 2010-08-31 2017-04-05 OCT Circuit Technologies International Limited Motion estimation using integral projection
KR20120088488A (en) * 2011-01-31 2012-08-08 한국전자통신연구원 method for storing temporal motion vector and apparatus using the same
US9083983B2 (en) * 2011-10-04 2015-07-14 Qualcomm Incorporated Motion vector predictor candidate clipping removal for video coding
US9628795B2 (en) * 2013-07-17 2017-04-18 Qualcomm Incorporated Block identification using disparity vector in video coding
CN105684441B (en) 2013-10-25 2018-09-21 微软技术许可有限责任公司 The Block- matching based on hash in video and image coding
EP3061233B1 (en) 2013-10-25 2019-12-11 Microsoft Technology Licensing, LLC Representing blocks with hash values in video and image coding and decoding
WO2015131325A1 (en) 2014-03-04 2015-09-11 Microsoft Technology Licensing, Llc Hash table construction and availability checking for hash-based block matching
US10368092B2 (en) 2014-03-04 2019-07-30 Microsoft Technology Licensing, Llc Encoder-side decisions for block flipping and skip mode in intra block copy prediction
KR102287779B1 (en) 2014-06-23 2021-08-06 마이크로소프트 테크놀로지 라이센싱, 엘엘씨 Encoder decisions based on results of hash-based block matching
MX2017004210A (en) 2014-09-30 2017-11-15 Microsoft Technology Licensing Llc Hash-based encoder decisions for video coding.
CN104539966B (en) 2014-09-30 2017-12-22 华为技术有限公司 Image prediction method and relevant apparatus
CN108293128A (en) * 2015-11-20 2018-07-17 联发科技股份有限公司 The method and device of global motion compensation in video coding and decoding system
WO2017157259A1 (en) 2016-03-15 2017-09-21 Mediatek Inc. Method and apparatus of video coding with affine motion compensation
US10390039B2 (en) 2016-08-31 2019-08-20 Microsoft Technology Licensing, Llc Motion estimation for screen remoting scenarios
US11095877B2 (en) 2016-11-30 2021-08-17 Microsoft Technology Licensing, Llc Local hash-based motion estimation for screen remoting scenarios
JP2021529462A (en) 2018-06-29 2021-10-28 ヴィド スケール インコーポレイテッド Selection of adaptive control points for video coding based on affine motion model
TW202017377A (en) 2018-09-08 2020-05-01 大陸商北京字節跳動網絡技術有限公司 Affine mode in video coding and decoding
CN113170111B (en) * 2018-12-08 2024-03-08 Beijing Bytedance Network Technology Co., Ltd. Video processing method, apparatus and computer readable storage medium
US11202085B1 (en) 2020-06-12 2021-12-14 Microsoft Technology Licensing, Llc Low-cost hash table construction and hash-based block matching for variable-size blocks

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4999705A (en) * 1990-05-03 1991-03-12 At&T Bell Laboratories Three dimensional motion compensated video coding
US5262854A (en) * 1992-02-21 1993-11-16 Rca Thomson Licensing Corporation Lower resolution HDTV receivers
US5424779A (en) * 1991-05-31 1995-06-13 Kabushiki Kaisha Toshiba Video coding apparatus
US5430480A (en) * 1992-06-30 1995-07-04 Ricoh California Research Center Sensor driven global motion compensation
US5657087A (en) * 1994-06-15 1997-08-12 Samsung Electronics Co., Ltd. Motion compensation encoding method and apparatus adaptive to motion amount
US5764803A (en) * 1996-04-03 1998-06-09 Lucent Technologies Inc. Motion-adaptive modelling of scene content for very low bit rate model-assisted coding of video sequences
US5784115A (en) * 1996-12-31 1998-07-21 Xerox Corporation System and method for motion compensated de-interlacing of video frames
US5819213A (en) * 1996-01-31 1998-10-06 Kabushiki Kaisha Toshiba Speech encoding and decoding with pitch filter range unrestricted by codebook range and preselecting, then increasing, search candidates from linear overlap codebooks
US5912991A (en) * 1997-02-07 1999-06-15 Samsung Electronics Co., Ltd. Contour encoding method using error bands
US6008852A (en) * 1996-03-18 1999-12-28 Hitachi, Ltd. Video coder with global motion compensation
US6084912A (en) * 1996-06-28 2000-07-04 Sarnoff Corporation Very low bit rate video coding/decoding method and apparatus
US6278736B1 (en) * 1996-05-24 2001-08-21 U.S. Philips Corporation Motion estimation
US20020102027A1 (en) * 1992-06-30 2002-08-01 Nobutaka Miyake Image encoding method and apparatus
US7206346B2 (en) * 1997-06-25 2007-04-17 Nippon Telegraph And Telephone Corporation Motion vector predictive encoding method, motion vector decoding method, predictive encoding apparatus and decoding apparatus, and storage media storing motion vector predictive encoding and decoding programs

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0810936B2 (en) * 1989-03-31 1996-01-31 Matsushita Electric Industrial Co., Ltd. Motion vector detection device
EP0419752B1 (en) 1989-09-25 1995-05-10 Rai Radiotelevisione Italiana System for encoding and transmitting video signals comprising motion vectors
JP3263960B2 (en) * 1991-10-22 2002-03-11 Sony Corporation Motion vector encoder and decoder
US5235419A (en) 1991-10-24 1993-08-10 General Instrument Corporation Adaptive motion compensation using a plurality of motion compensators
WO1995004432A1 (en) * 1993-07-30 1995-02-09 British Telecommunications Plc Coding image data
US6052414A (en) * 1994-03-30 2000-04-18 Samsung Electronics, Co. Ltd. Moving picture coding method and apparatus for low bit rate systems using dynamic motion estimation
JPH08228351A (en) 1995-02-20 1996-09-03 Nippon Telegr & Teleph Corp <Ntt> Motion-compensated predictive encoding method for moving images
EP2129133A3 (en) * 1995-08-29 2012-02-15 Sharp Kabushiki Kaisha Video coding device and video decoding device with a motion compensated interframe prediction
JPH0984017A (en) 1995-09-14 1997-03-28 Nippon Telegr & Teleph Corp <Ntt> Motion-compensated predictive coding method for moving images
US5623313A (en) * 1995-09-22 1997-04-22 Tektronix, Inc. Fractional pixel motion estimation of video signals
US5929940A (en) * 1995-10-25 1999-07-27 U.S. Philips Corporation Method and device for estimating motion between images, system for encoding segmented images
US6002802A (en) * 1995-10-27 1999-12-14 Kabushiki Kaisha Toshiba Video encoding and decoding apparatus
US5748247A (en) * 1996-04-08 1998-05-05 Tektronix, Inc. Refinement of block motion vectors to achieve a dense motion field

Cited By (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060182352A1 (en) * 2005-02-04 2006-08-17 Tetsuya Murakami Encoding apparatus and method, decoding apparatus and method, recording medium, and image processing system and method
US20070104275A1 (en) * 2005-11-02 2007-05-10 Heyward Simon N Motion estimation
US8743959B2 (en) * 2005-11-02 2014-06-03 Simon Nicholas Heyward Motion estimation
US20080240247A1 (en) * 2007-03-29 2008-10-02 Samsung Electronics Co., Ltd. Method of encoding and decoding motion model parameters and video encoding and decoding method and apparatus using motion model parameters
US7746932B2 (en) 2007-07-23 2010-06-29 Huawei Technologies Co., Ltd. Vector coding/decoding apparatus and stream media player
US20090097587A1 (en) * 2007-07-23 2009-04-16 Huawei Technologies Co., Ltd. Vector coding method and apparatus and computer program
US7738559B2 (en) 2007-07-23 2010-06-15 Huawei Technologies Co., Ltd. Vector decoding method and apparatus and computer program
US20090097565A1 (en) * 2007-07-23 2009-04-16 Huawei Technologies Co., Ltd. Vector coding/decoding apparatus and stream media player
US20090097595A1 (en) * 2007-07-23 2009-04-16 Huawei Technologies Co., Ltd. Vector decoding method and apparatus and computer program
US7738558B2 (en) 2007-07-23 2010-06-15 Huawei Technologies Co., Ltd. Vector coding method and apparatus and computer program
US20090257498A1 (en) * 2008-04-15 2009-10-15 Sony Corporation Image processing apparatus and image processing method
US8446957B2 (en) * 2008-04-15 2013-05-21 Sony Corporation Image processing apparatus and method using extended affine transformations for motion estimation
US8184705B2 (en) * 2008-06-25 2012-05-22 Aptina Imaging Corporation Method and apparatus for motion compensated filtering of video signals
US20090323808A1 (en) * 2008-06-25 2009-12-31 Micron Technology, Inc. Method and apparatus for motion compensated filtering of video signals
US8533243B2 (en) * 2008-08-25 2013-09-10 Kabushiki Kaisha Toshiba Representation converting apparatus, arithmetic apparatus, representation converting method, and computer program product
US20100049777A1 (en) * 2008-08-25 2010-02-25 Kabushiki Kaisha Toshiba Representation converting apparatus, arithmetic apparatus, representation converting method, and computer program product
US20100194932A1 (en) * 2009-02-03 2010-08-05 Sony Corporation Image processing device, image processing method, and capturing device
US8363130B2 (en) * 2009-02-03 2013-01-29 Sony Corporation Image processing device, image processing method, and capturing device
US20100283892A1 (en) * 2009-05-06 2010-11-11 Samsung Electronics Co., Ltd. System and method for reducing visible halo in digital video with covering and uncovering detection
US8289444B2 (en) * 2009-05-06 2012-10-16 Samsung Electronics Co., Ltd. System and method for reducing visible halo in digital video with covering and uncovering detection
US20120189167A1 (en) * 2011-01-21 2012-07-26 Sony Corporation Image processing device, image processing method, and program
US8818046B2 (en) * 2011-01-21 2014-08-26 Sony Corporation Image processing device, image processing method, and program
US8929451B2 (en) 2011-08-04 2015-01-06 Imagination Technologies, Limited External vectors in a motion estimation system
US9094682B2 (en) 2011-12-16 2015-07-28 Panasonic Intellectual Property Corporation Of America Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus, and moving picture coding and decoding apparatus
US20150271519A1 (en) * 2011-12-16 2015-09-24 Panasonic Intellectual Property Corporation Of America Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus, and moving picture coding and decoding apparatus
US20140341295A1 (en) * 2011-12-16 2014-11-20 Panasonic Intellectual Property Corporation Of America Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus, and moving picture coding and decoding apparatus
US8867620B2 (en) * 2011-12-16 2014-10-21 Panasonic Intellectual Property Corporation Of America Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus, and moving picture coding and decoding apparatus
US8917773B2 (en) * 2011-12-16 2014-12-23 Panasonic Intellectual Property Corporation Of America Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus, and moving picture coding and decoding apparatus
US20130177082A1 (en) * 2011-12-16 2013-07-11 Panasonic Corporation Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus, and moving picture coding and decoding apparatus
CN103650507A (en) * 2011-12-16 2014-03-19 Panasonic Corporation Video image coding method, video image coding device, video image decoding method, video image decoding device and video image coding/decoding device
US8885722B2 (en) 2011-12-16 2014-11-11 Panasonic Intellectual Property Corporation Of America Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus, and moving picture coding and decoding apparatus
US20230328254A1 (en) * 2011-12-16 2023-10-12 Sun Patent Trust Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus, and moving picture coding and decoding apparatus
US11711521B2 (en) * 2011-12-16 2023-07-25 Sun Patent Trust Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus, and moving picture coding and decoding apparatus
US20220377348A1 (en) * 2011-12-16 2022-11-24 Velos Media, Llc Moving Picture Coding Method, Moving Picture Coding Apparatus, Moving Picture Decoding Method, Moving Picture Decoding Apparatus, and Moving Picture Coding and Decoding Apparatus
US20170054985A1 (en) * 2011-12-16 2017-02-23 Sun Patent Trust Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus, and moving picture coding and decoding apparatus
US11356669B2 (en) * 2011-12-16 2022-06-07 Velos Media, Llc Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus, and moving picture coding and decoding apparatus
US10321133B2 (en) * 2011-12-16 2019-06-11 Velos Media, Llc Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus, and moving picture coding and decoding apparatus
US20190253715A1 (en) * 2011-12-16 2019-08-15 Velos Media, Llc Moving Picture Coding Method, Moving Picture Coding Apparatus, Moving Picture Decoding Method, Moving Picture Decoding Apparatus, and Moving Picture Coding and Decoding Apparatus
US10757418B2 (en) * 2011-12-16 2020-08-25 Velos Media, Llc Moving picture coding method, moving picture coding apparatus, moving picture decoding method, moving picture decoding apparatus, and moving picture coding and decoding apparatus
KR102184884B1 (en) * 2014-06-26 2020-12-01 LG Display Co., Ltd. Data processing apparatus for organic light emitting diode display
US9715716B2 (en) * 2014-06-26 2017-07-25 Lg Display Co., Ltd. Data processing apparatus for organic light emitting display device
KR20160007751A (en) * 2014-06-26 2016-01-21 LG Display Co., Ltd. Data processing apparatus for organic light emitting diode display
US20150379683A1 (en) * 2014-06-26 2015-12-31 Lg Display Co., Ltd. Data processing apparatus for organic light emitting display device
US11669007B2 (en) 2015-09-29 2023-06-06 Fujifilm Corporation Projection lens and projector
CN105681808A (en) * 2016-03-16 2016-06-15 同济大学 Rapid decision-making method for SCC interframe coding unit mode

Also Published As

Publication number Publication date
US20130128980A1 (en) 2013-05-23
US9154789B2 (en) 2015-10-06
US20030174776A1 (en) 2003-09-18
US7206346B2 (en) 2007-04-17

Similar Documents

Publication number Title
US9154789B2 (en) Motion vector predictive encoding and decoding method using prediction of motion vector of target block based on representative motion vector
KR100658181B1 (en) Video decoding method and apparatus
RU2307478C2 (en) Method for compensating global motion in video images
KR100950743B1 (en) Image information coding device and method and image information decoding device and method
EP0877530B1 (en) Digital image encoding and decoding method
KR101182977B1 (en) Motion prediction compensation method and motion prediction compensation device
EP0762776B1 (en) A method and apparatus for compressing video information using motion dependent prediction
US6542642B2 (en) Image coding process and motion detecting process using bidirectional prediction
US20060120455A1 (en) Apparatus for motion estimation of video data
JP2006279573A (en) Encoder and encoding method, and decoder and decoding method
WO2006035584A1 (en) Encoder, encoding method, program of encoding method and recording medium wherein program of encoding method is recorded
US20100158120A1 (en) Reference Picture Selection for Sub-Pixel Motion Estimation
EP1819173B1 (en) Motion vector predictive encoding apparatus and decoding apparatus
KR100238893B1 (en) Motion vector coding method and apparatus
JP2914448B2 (en) Motion vector prediction encoding method and motion vector decoding method, prediction encoding device and decoding device, and recording medium recording motion vector prediction encoding program and decoding program
WO2003054795A2 (en) Image coding with block dropping
KR100602148B1 (en) Method for motion picture encoding using a quarter-pixel motion vector in an MPEG system
JP2003348595A (en) Image processor and image processing method, recording medium and program
KR100617598B1 (en) Method for compressing moving picture using 1/4 pixel motion vector
KR100240620B1 (en) Method and apparatus to form symmetric search windows for bidirectional half pel motion estimation
KR100757832B1 (en) Method for compressing moving picture using 1/4 pixel motion vector
KR100293445B1 (en) Method for coding motion vector
KR100617177B1 (en) Motion estimation method
JP4061505B2 (en) Image coding apparatus and method
KR100242649B1 (en) Method for estimating motion using mesh structure

Legal Events

Date Code Title Description
AS Assignment

Owner name: NIPPON TELEGRAPH AND TELEPHONE CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SHIMIZU, ATSUSHI;JOZAWA, HIROHISA;KAMIKURA, KAZUTO;AND OTHERS;REEL/FRAME:019168/0095

Effective date: 19990215

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION