USRE46468E1 - Methods for motion estimation with adaptive motion accuracy - Google Patents

Methods for motion estimation with adaptive motion accuracy

Info

Publication number
USRE46468E1
Authority
US
United States
Prior art keywords
motion vector
motion
searching
criteria
accuracy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime, expires
Application number
US14/170,134
Inventor
Jordi Ribas-Corbera
Jiandong Shen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sharp Corp
Original Assignee
Sharp Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sharp Corp filed Critical Sharp Corp
Priority to US14/170,134
Application granted
Publication of USRE46468E1
Adjusted expiration
Expired - Lifetime (current legal status)

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/513Processing of motion vectors
    • H04N19/517Processing of motion vectors by encoding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117Filters, e.g. for pre-processing or post-processing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146Data rate or code amount at the encoder output
    • H04N19/147Data rate or code amount at the encoder output according to rate distortion criteria
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146Data rate or code amount at the encoder output
    • H04N19/149Data rate or code amount at the encoder output by estimating the code amount by means of a model, e.g. mathematical model or statistical model
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/172Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46Embedding additional information in the video signal during the compression process
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/523Motion estimation or motion compensation with sub-pixel accuracy
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/533Motion estimation using multistep search, e.g. 2D-log search or one-at-a-time search [OTS]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/56Motion estimation with initialisation of the vector search, e.g. estimating a good candidate to initiate a search
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/567Motion estimation based on rate distortion criteria
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding

Definitions

  • the present invention relates generally to a method of compressing or coding digital video with bits and, specifically, to an effective method for estimating and encoding motion vectors in motion-compensated video coding.
  • the current frame to be encoded is decomposed into image blocks of the same size, typically blocks of 16×16 pixels, called “macroblocks.”
  • the encoder searches for the block in a previously encoded frame (the “reference frame”) that best matches the current macroblock.
  • the coordinate shift between a current macroblock and its best match in the reference frame is represented by a two-dimensional vector (the “motion vector”) of the macroblock.
  • Each component of the motion vector is measured in pixel units.
  • the motion vector for the current macroblock is (0,0). If the best match is found two pixels to the right and three pixels up from the coordinates of the current macroblock, the motion vector is (2,3).
  • Such motion vectors are said to have integer pixel (or “integer-pel” or “full-pel”) accuracy, since their horizontal X and vertical Y components are integer pixel values.
  • Moving objects in a video scene do not move in integer pixel increments from frame to frame.
  • True motion can take any real value along the X and Y directions. Consequently, a better match for a current macroblock can often be found by interpolating the previous frame by a factor N×N and then searching for the best match in the interpolated frame.
  • the motion vectors can then take values in increments of 1/N pixel along X and Y and are said to have 1/N pixel (or “1/N-pel”) accuracy.
  • the Telenor encoder estimates the best motion vector in two steps: it first searches for the best integer-pel vector V 1 and then searches for the best 1 ⁄ 3-pixel accurate vector V 1/3 near V 1 .
  • the Telenor encoder has several problems. First, it uses a sub-optimal fast-search strategy and a complex cubic filter (at all stages) to compute the 1 ⁇ 3-pel accurate motion vectors.
  • the Telenor encoder uses rate-distortion criteria whose accuracy is fixed at 1 ⁄ 3-pixel and, therefore, does not adapt to select better motion accuracies.
  • the Telenor encoder variable-length code (“VLC”) table has an accuracy fixed at 1 ⁄ 3-pixel and, therefore, is not adapted to be interpreted differently for different accuracies.
  • the Girod work is the first fundamental analysis on the benefits of using sub-pixel motion accuracy for video coding.
  • Girod used a simple, hierarchical strategy to search for the best motion vector in sub-pixel space.
  • He also used simple mean absolute difference (“MAD”) criteria to select the best motion vector for a given accuracy.
  • the best accuracy was selected using a formula that is not useful in practice since it is based on idealized assumptions, is very complex, and restricts all motion vectors to have the same accuracy within a frame.
  • Girod focused only on prediction error energy and did not address how to use bits to encode the motion vectors.
  • the Gupta work presented a method for computing, selecting, and encoding motion vectors with sub-pixel accuracy for video compression.
  • the Gupta work disclosed a formula based on mean squared error (“MSE”) and bilinear interpolation, used this formula to find an ideal motion vector, and then quantized such vector to the desired motion accuracy.
  • the best motion vector for a given accuracy was found using the sub-optimal MSE criteria and the best accuracy was selected using the largest decrease in difference energy per distortion bit, which is a greedy (sub-optimal) criteria.
  • a given motion vector was coded by first encoding that vector with 1 ⁄ 2-pel accuracy and then encoding the higher accuracy with refinement bits. Coarse-to-fine coding tends to require significant bit overhead.
  • Benzler did consider different interpolation filters, but proposed a complex filter at the first stage and a simpler filter at the second stage and interpolated one macroblock at a time. This approach does not require much cache memory, but it is computationally expensive because of its complexity and because all motion vectors are computed with 1 ⁄ 4-pel accuracy for all the possible modes in a macroblock (e.g., 16×16, four-8×8, sixteen-4×4, etc.) and then the best mode is determined. Benzler used the MAD criteria to find the best motion vector which was fixed to 1 ⁄ 4-pel accuracy for the whole sequence, and hence he did not address how to select the best motion accuracy. Finally, Benzler encoded the motion vectors with a variable-length code (“VLC”) table that could be used for encoding 1 ⁄ 2- and 1 ⁄ 4-pixel accurate vectors.
  • the references discussed above do not estimate the motion vectors using optimized rate-distortion criteria and do not exploit the convexity properties of such criteria to reduce computational complexity. Further, these references do not use effective strategies to encode motion vectors and their accuracies.
  • One preferred embodiment of the present invention addresses the problems of the prior art by computing motion vectors of high pixel accuracy (also denoted as “fractional” or “sub-pixel” accuracy) with a minor increase in computation.
  • a video encoder can achieve significant compression gains (e.g., up to thirty percent in bit rate savings over the classical choices of motion accuracy) using similar levels of computation. Since the motion accuracies are adaptively computed and selected, the present invention may be described as adaptive motion accuracy (“AMA”).
  • One preferred embodiment of the present invention uses fast-search strategies in sub-pixel space that smartly search for the best motion vectors.
  • This technique estimates motion vectors in motion-compensated video coding by finding a best motion vector for a macroblock.
  • the first step is searching a first set of motion vector candidates in a grid of sub-pixel resolution of a predetermined square radius centered on V 1 to find a best motion vector V 2 .
  • a second set of motion vector candidates in a grid of sub-pixel resolution of a predetermined square radius centered on V 2 is searched to find a best motion vector V 3 .
  • a third set of motion vector candidates in a grid of sub-pixel resolution of a predetermined square radius centered on V 3 is searched to find the best motion vector of the macroblock.
  • a technique for estimating high-accurate motion vectors may use different interpolation filters at different stages in order to reduce computational complexity.
  • Another alternate preferred embodiment of the present invention selects the best vectors and accuracies in a rate-distortion (“RD”) sense.
  • This embodiment uses rate-distortion criteria that adapts according to the different motion accuracies to determine both the best motion vectors and the best motion accuracies.
  • another alternate preferred embodiment of the present invention encodes the motion vector and accuracies with an effective VLC approach.
  • This technique uses a VLC table that is interpreted differently at different coding units, according to the associated motion vector accuracy.
  • FIG. 1 is a diagram of exemplary full-pel and 1 ⁄ 3-pel locations in velocity space.
  • FIG. 2 is a flowchart illustrating a prior art method for estimating the best motion vector.
  • FIG. 3 is a diagram of an exemplary location of motion vector candidates for full-search in sub-pixel velocity space.
  • FIG. 4 is a flowchart illustrating a full-search preferred embodiment of the method for estimating the best motion vector of the present invention.
  • FIG. 5 is a diagram of an exemplary location of motion vector candidates for fast-search in sub-pixel velocity space.
  • FIG. 6 is a flowchart illustrating a fast-search preferred embodiment of the method for estimating the best motion vector of the present invention.
  • FIG. 7 is a detail flowchart illustrating an alternate preferred embodiment of step 114 of FIG. 6 .
  • FIG. 8 is a graphical representation of experimental performance results of the Telenor encoder with and without AMA in the “Container” video sequence, with QCIF resolution, and at the frame rate of 10 frames per second.
  • FIG. 9 is a graphical representation of experimental performance results of the Telenor encoder with and without AMA in the “News” video sequence, with QCIF resolution, and at the frame rate of 10 frames per second.
  • FIG. 10 is a graphical representation of experimental performance results of the Telenor encoder with and without AMA in the “Mobile” video sequence, with QCIF resolution, and at the frame rate of 10 frames per second.
  • FIG. 11 is a graphical representation of experimental performance results of the Telenor encoder with and without AMA in the “Garden” video sequence, with SIF resolution, and at the frame rate of 15 frames per second.
  • FIG. 12 is a graphical representation of experimental performance results of the Telenor encoder with and without AMA in the “Garden” video sequence, with QCIF resolution, and at the frame rate of 15 frames per second.
  • FIG. 13 is a graphical representation of experimental performance results of the Telenor encoder with and without AMA in the “Tempete” video sequence, with SIF resolution, and at the frame rate of 15 frames per second.
  • FIG. 14 is a graphical representation of experimental performance results of the Telenor encoder with and without AMA in the “Tempete” video sequence, with QCIF resolution, and at the frame rate of 15 frames per second.
  • FIG. 15 is a graphical representation of experimental performance results of the Telenor encoder with and without AMA in the “Paris shaked” video sequence, with QCIF resolution, and at the frame rate of 10 frames per second.
  • FIG. 16 is a graphical representation of experimental performance results of fast-search (“Telenor FSAMA+c”) and full-search (“Telenor AMA+c”) strategies in the “Mobile” video sequence, with QCIF resolution, and at the frame rate of 10 frames per second.
  • FIG. 17 is a graphical representation of experimental performance results of fast-search (“Telenor FSAMA+c”) and full-search (“Telenor AMA+c”) strategies in the “Container” video sequence, with QCIF resolution, and at the frame rate of 10 frames per second.
  • FIG. 18 is a graphical representation of experimental performance results of tests using only one reference frame for motion compensation as compared to tests using multiple reference frames for motion compensation in the “Mobile” video sequence, with QCIF resolution, and at the frame rate of 10 frames per second.
  • the methods of the present invention are described herein in terms of the motion accuracy being modified at each image block. These methods, however, may be applied when the accuracy is fixed for the whole sequence or modified on a frame-by-frame basis.
  • the present invention is also described as using Telenor's video encoders (and particularly the Telenor encoder) as described in the Background of the Invention. Although described in terms of Telenor's video encoders, the techniques described herein are applicable to any other motion-compensated video coder.
  • Telenor's encoder estimates the best motion vector in two steps shown in FIG. 2 .
  • the Telenor encoder searches for the best integer-pel vector V 1 ( FIG. 1 ) 100 .
  • the Telenor encoder searches for the best 1 ⁇ 3-pixel accurate vector V 1/3 ( FIG. 1 ) near V 1 102 .
  • This second step is shown graphically in FIG. 1 where a total of eight blocks (each having an array of 16×16 pixels) in the 3×3 interpolated reference frame are checked to find the best match.
  • the motion vectors for these eight blocks are represented by the eight solid dots in the grid centered on V 1 .
  • the technology of the present invention allows the encoder to choose between any set of motion accuracies (for example, 1 ⁇ 2, 1 ⁇ 3, and 1 ⁇ 6-pel accurate motion vectors) using either a full search strategy or a fast search strategy.
  • the encoder searches all the motion vector candidates in a grid of 1 ⁇ 6-pixel resolution and a “square radius” (defined herein as a square block defined by a number of pixels up, a number of pixels down, and a number of pixels to both sides) of five pixels as shown in FIG. 3 .
  • FIG. 4 shows that the first step of the full-search AMA is to search for the best integer-pel vector V 1 ( FIG. 1 ) 104 .
  • the encoder searches for the best 1 ⁇ 6-pixel accurate vector V 1/6 ( FIG. 3 ) near V 1 106 .
  • the full-search AMA modifies the second step of the Telenor's process so that the encoder also searches for motion vector candidates in other sub-pixel locations in the velocity space.
  • the objective is to find the best motion vector in the grid, i.e., the vector that points to the block (in the interpolated reference frame) that best matches the current macroblock.
  • although the full-search strategy is computationally complex since it searches 120 sub-pixel candidates, it shows the full potential of this preferred method of the present invention.
  • typical video coders find the best match for a macroblock by selecting the motion vector that produces either the smallest MSE or the smallest MAD.
  • the block associated to the best motion vector is the one closest to the given macroblock in an MSE or MAD sense.
  • MSE and MAD distortion measures do not take into account the cost in bits of actually encoding the vector. For example, a given motion vector may minimize the MSE, but it may be very costly to encode with bits, so it may not be the best choice from a coding standpoint.
  • the value of “distortion” is typically the MSE or MAD
  • L is a constant that depends on the compression level (i.e., the quantization step size)
  • Bits is the number of bits required to code the motion vector.
  • any RD criteria of this type would work with the present invention.
  • “Bits” include the bits needed for encoding the vector and those for encoding the accuracy of the vector.
  • some candidates can have several “Bits” values, because they can have several accuracy modes. For example, the candidate at location (1 ⁄ 2, −1 ⁄ 2) can be thought of as having 1 ⁄ 2- or 1 ⁄ 6-pixel accuracy.
  • the encoder checks only a small set of the motion vector candidates.
  • the encoder checks the eight motion vector candidates in a grid of 1 ⁇ 2-pixel resolution of square radius 1 , which is centered on V 1 108 .
  • V 2 is then set to denote the candidate that has the smallest RD cost (i.e., the best of the eight previous vectors and V 1 ) 110 .
  • the encoder checks the eight motion vector locations in a grid of 1 ⁇ 6-pixel resolution of square radius 1 that is now centered on V 2 112 .
  • V 2 has the smallest RD cost 114
  • the encoder stops its search and selects V 2 as the motion vector for the block. Otherwise, V 3 is set to denote the best motion vector of the eight 116 .
  • the encoder searches for a new motion vector candidate in the grid of 1 ⁇ 6-pixel resolution of square radius 1 that is centered on V 3 118 . It should be noted that some of the candidates in this grid have already been tested and can be skipped.
  • the candidate with the smallest RD cost in this last step is selected as the motion vector for the block 120 .
  • Alternate embodiments of the invention replace one or more of the steps 108 - 120 . These embodiments have also been effective and have further reduced the number of motion vector candidates to check in the sub-pixel velocity space.
  • FIG. 7 checks candidates of 1 ⁇ 3-pel accuracy.
  • step 112 is replaced by one of three possible scenarios. First, if the best motion vector candidate from step 110 is at the center of V 1 (the “integer-pel vector”) 130 , then the encoder checks three candidates of 1 ⁇ 3-pel accuracy between the center vector and the 1 ⁇ 2-pel location with the next lowest RD cost 132 . Second, if the best motion vector candidate from step 110 is a corner vector 134 , then, the encoder checks the four vector candidates of 1 ⁇ 3-pel accuracy that are closest to such corner 136 .
  • third, if the best motion vector candidate from step 110 is between two corners 138 , then the encoder determines which of these two corners has lower RD cost and checks the four vector candidates of 1 ⁄ 3-pel accuracy that are closest to the line between such corner and the best candidate from step 110 140 . It should be noted that in implementing this process step 138 may be unnecessary because if V 2 is neither at the center nor a corner vector, then it would necessarily be between two corners. If the encoder is set to find motion vectors with 1 ⁄ 3-pixel accuracy, FIG. 7 could be modified to end rather than continuing with step 114 .
  • step 108 checks only motion vector candidates of 1 ⁇ 2-pixel accuracy, the computation and memory requirements for the hardware or software implementation are significantly reduced.
  • the reference frame is interpolated by 2×2 in order to obtain the RD costs for the 1 ⁄ 2-pel vector candidates.
  • a significant amount of fast (or cache) memory for a hardware or software encoder is saved as compared to Telenor's approach that needed to interpolate the reference frame by 3×3. In comparison to the Telenor encoder, this is a cache memory savings of 9/4, or a factor of 2.25. The few additional interpolations can be done later on a block-by-block basis.
  • since the interpolations in step 108 are used to direct the search towards the lower values of the RD cost function, a complex filter is not needed for these interpolations. Accordingly, computation power may be saved by using a simple bilinear filter for step 108 .
  • the encoder encodes both the motion vector and accuracy values with bits.
  • One approach is to encode the motion vector with a given accuracy (e.g., half-pixel accuracy) and then add some extra bits for refining the vector to the higher motion accuracy. This is the strategy suggested by B. Girod, but it is sub-optimal in a rate-distortion sense.
  • the accuracy of the motion vector for a macroblock is first encoded using a simple code such as the one given in Table 1. Any other table with code lengths {1, 2, 2} could be used as well.
  • the bit rate could be further reduced using a typical DPCM approach.
  • the method of the present invention can be used for encoding vectors of any motion accuracy and the table can be interpreted differently at each frame and macroblock. Further, the general method of the present invention can be used for any motion accuracy, not necessarily those that are multiples of each other or those that are of the type 1/n (with n an integer). The number of increments in the given sub-pixel space is simply counted and the bits in the associated entry of the table are used as the code.
  • the motion vector can also be easily decoded.
  • the associated block in the previous frame is reconstructed using a typical 4-tap cubic interpolator. There is a different 4-tap filter for each motion accuracy.
  • the AMA does not increase decoding complexity, because the number of operations needed to reconstruct the predicted block are the same, regardless of the motion accuracy.
  • FIGS. 8-18 show test results of the Telenor encoder with and without AMA in a variety of video sequences, resolutions, and frame rates, as described in Table 2. These figures show rate-distortion (“RD”) plots for each case.
  • the “Anchor” curve shows RD points from optimized H.263+ ( FIGS. 8 and 9 only).
  • the “Telenor 1 ⁇ 2+b” curve shows Telenor with 1 ⁇ 2-pel vectors and bilinear interpolation (the “classical case”).
  • the “Telenor 1 ⁇ 3” curve shows the current Telenor proposal (the “Telenor encoder”).
  • the “Telenor+AMA+c” curve shows the Telenor encoder with the full-search strategy of the present invention.
  • TABLE 2 - Video sequences, resolutions, and frame rates used in the experiments:
    Sequence      FIG. #   Resolution  Frame rate
    Container     FIG. 8   QCIF        10
    News          FIG. 9   QCIF        10
    Mobile        FIG. 10  QCIF        10
    Garden        FIG. 11  SIF         15
    Garden        FIG. 12  QCIF        15
    Tempete       FIG. 13  SIF         15
    Tempete       FIG. 14  QCIF        15
    Paris Shaked  FIG. 15  QCIF        10
  • the video sequences are commonly used by the video coding community, except for “Paris Shaked.”
  • the latter is a synthetic sequence obtained by shifting the well-known sequence “Paris” by a motion vector whose X and Y components take a random value within [−1,1]. This synthetic sequence simulates small movements caused by a hand-held camera in a typical video phone scene.
  • the experiments show that the gains with AMA add to those obtained using multiple reference frames.
  • the gain from AMA in the one-reference case can be measured by comparing the curve labeled with a “+” (Telenor AMA+c+1r) with the curve labeled with an “x” (Telenor 1 ⁄ 3+1r), and the gain in the five-reference case can be measured by comparing the curve labeled with a “diamond” (Telenor AMA+c+5r) with the curve labeled with a “*” (Telenor 1 ⁄ 3+5r).
  • the present invention may be implemented at the frame level so that different frames could use different motion accuracies, but within a frame all motion vectors would use the same accuracy. Preferably in this embodiment the motion vector accuracy would then be signaled only once at the frame layer. Experiments have shown that using the best, fixed motion accuracy for the whole frame should also produce compression gains comparable to those presented here for the macroblock-adaptive case.
  • the encoder could do motion compensation on the entire frame with the different vector accuracies and then select the best accuracy according to the RD criteria.
  • This approach is not suitable for pipeline, one-pass encoders, but it could be appropriate for software-based or more complex encoders.
  • the encoder could use previous statistics and/or formulas to predict what will be the best accuracy for a given frame (e.g., the formulas set forth in the Ribas work or a variation thereof can be used). This approach would be well-suited for one-pass encoders, although the performance gains would depend on the precision of the formulas used for the prediction.

Abstract

Methods for motion estimation with adaptive motion accuracy of the present invention include several techniques for computing motion vectors of high pixel accuracy with a minor increase in computation. One technique uses fast-search strategies in sub-pixel space that smartly search for the best motion vectors. An alternate technique estimates high-accurate motion vectors using different interpolation filters at different stages in order to reduce computational complexity. Yet another technique uses rate-distortion criteria that adapts according to the different motion accuracies to determine both the best motion vectors and the best motion accuracies. Still another technique uses a VLC table that is interpreted differently at different coding units, according to the associated motion vector accuracy.

Description

CROSS REFERENCE TO RELATED APPLICATIONS
This application is a Continuation Reissue Application of co-pending U.S. application Ser. No. 11/984,661 filed on Nov. 20, 2007. U.S. application Ser. No. 11/984,661 is a Reissue of U.S. application Ser. No. 09/615,791, filed on Jul. 13, 2000, now U.S. Pat. No. 6,968,008, which claims the benefit of priority of U.S. Provisional Application No. 60/146,102, filed on Jul. 27, 1999, now expired. The entire contents of all of the above applications are incorporated herein by reference.
This application claims the benefit of Provisional Application No. 60/146,102, filed Jul. 27, 1999.
BACKGROUND OF THE INVENTION
The present invention relates generally to a method of compressing or coding digital video with bits and, specifically, to an effective method for estimating and encoding motion vectors in motion-compensated video coding.
In classical motion estimation the current frame to be encoded is decomposed into image blocks of the same size, typically blocks of 16×16 pixels, called “macroblocks.” For each current macroblock, the encoder searches for the block in a previously encoded frame (the “reference frame”) that best matches the current macroblock. The coordinate shift between a current macroblock and its best match in the reference frame is represented by a two-dimensional vector (the “motion vector”) of the macroblock. Each component of the motion vector is measured in pixel units.
For example, if the best match for a current macroblock happens to be at the same location, as is the typical case in stationary background, the motion vector for the current macroblock is (0,0). If the best match is found two pixels to the right and three pixels up from the coordinates of the current macroblock, the motion vector is (2,3). Such motion vectors are said to have integer pixel (or “integer-pel” or “full-pel”) accuracy, since their horizontal X and vertical Y components are integer pixel values. In FIG. 1, the vector V1=(1,1) represents the full-pel motion vector for a given current macroblock.
Moving objects in a video scene do not move in integer pixel increments from frame to frame. True motion can take any real value along the X and Y directions. Consequently, a better match for a current macroblock can often be found by interpolating the previous frame by a factor N×N and then searching for the best match in the interpolated frame. The motion vectors can then take values in increments of 1/N pixel along X and Y and are said to have 1/N pixel (or “1/N-pel”) accuracy.
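As a minimal sketch of the block-matching procedure described above, the following Python fragment performs an integer-pel search with a sum-of-absolute-differences criterion. The frame sizes, search range, and function names are illustrative assumptions introduced here, not values specified by the patent.

    import numpy as np

    def sad(block_a, block_b):
        # Sum of absolute differences (the MAD criterion up to a constant factor).
        return int(np.abs(block_a.astype(np.int64) - block_b.astype(np.int64)).sum())

    def full_pel_motion_vector(cur, ref, top, left, size=16, search_range=7):
        # Search a +/- search_range window in the reference frame for the
        # integer-pel motion vector that best matches the current macroblock.
        macroblock = cur[top:top + size, left:left + size]
        best_cost, best_mv = None, (0, 0)
        for dy in range(-search_range, search_range + 1):
            for dx in range(-search_range, search_range + 1):
                y, x = top + dy, left + dx
                if y < 0 or x < 0 or y + size > ref.shape[0] or x + size > ref.shape[1]:
                    continue
                cost = sad(macroblock, ref[y:y + size, x:x + size])
                if best_cost is None or cost < best_cost:
                    best_cost, best_mv = cost, (dx, dy)
        return best_mv, best_cost

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
        cur = np.roll(ref, shift=(3, 2), axis=(0, 1))  # reference content shifted by a few pixels
        print(full_pel_motion_vector(cur, ref, 16, 16))

Searching for 1/N-pel vectors proceeds in the same way, except that the reference frame (or the neighborhood of interest) is first interpolated by N×N and the candidate offsets are expressed in 1/N-pel units.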
In “Response to Call for Proposals for H.26L,” ITU-Telecommunications Standardization Sector, Q.15/SG16, doc. Q15-F-11, Seoul, Nov. 98, and “Enhancement of the Telenor proposal for H.26L,” ITU-Telecommunications Standardization Sector, Q.15/SG16, doc. Q15-G-25, Monterey, Feb. 99, Gisle Bjontegaard proposed using ⅓-pel accurate motion vectors and cubic-like interpolation for the H26L video coding standard (the “Telenor encoder”). To do so this, the Telenor encoder interpolates or “up-samples” the reference frame by 3×3 using a cubic-like interpolation filter. This interpolated version requires nine times more memory than the reference frame. At a given macroblock, the Telenor encoder estimates the best motion vector in two steps: the encoder first searches for the best integer-pel vector and then the Telenor encoder searches for the best ⅓-pixel accurate vector V1/3 near V1. Using FIG. 1 as an example, a total of eight blocks (of 16×16 pixels) in the 3×3 interpolated reference frame are checked to find the best match which, as shown is the block associated to the motion vector (VX, VY)=(1+⅓,1). The Telenor encoder has several problems. First, it uses a sub-optimal fast-search strategy and a complex cubic filter (at all stages) to compute the ⅓-pel accurate motion vectors. As a result, the computed motion vectors are not optimal and the memory and computation requirements are very expensive. Further, the Telenor encoder uses an accuracy of the effective rate-distortion criteria that is fixed at ⅓-pixel and, therefore, does not adapt to select better motion accuracies. Similarly, the Telenor encoder variable-length code (“VLC”) table has an accuracy fixed at ⅓-pixel and, therefore, is not adapted and interpreted differently for different accuracies.
Most known video compression methods estimate and encode motion vectors with ½-pixel accuracy, because early studies suggested that higher or adaptive motion accuracies would increase computational complexity without providing additional compression gains. These early studies, however, did not estimate the motion vectors using optimized rate-distortion criteria, did not exploit the convexity properties of such criteria to reduce computational complexity, and did not use effective strategies to encode the motion vectors and their accuracies.
One such early study was Bernd Girod's “Motion-Compensating Prediction with Fractional-Pel Accuracy,” IEEE Transactions on Communications, Vol. 41, No. 4, pp. 604-612, April 1993 (the “Girod work”). The Girod work is the first fundamental analysis on the benefits of using sub-pixel motion accuracy for video coding. Girod used a simple, hierarchical strategy to search for the best motion vector in sub-pixel space. He also used simple mean absolute difference (“MAD”) criteria to select the best motion vector for a given accuracy. The best accuracy was selected using a formula that is not useful in practice since it is based on idealized assumptions, is very complex, and restricts all motion vectors to have the same accuracy within a frame. Finally, Girod focused only on prediction error energy and did not address how to use bits to encode the motion vectors.
Another early study was Smita Gupta's and Allen Gersho's “On Fractional Pixel Motion Estimation,” Proc. SPIE VCIP, Vol. 2094, pp. 408-419, Cambridge, November 1993 (the “Gupta work”). The Gupta work presented a method for computing, selecting, and encoding motion vectors with sub-pixel accuracy for video compression. The Gupta work disclosed a formula based on mean squared error (“MSE”) and bilinear interpolation, used this formula to find an ideal motion vector, and then quantized such vector to the desired motion accuracy. The best motion vector for a given accuracy was found using the sub-optimal MSE criteria and the best accuracy was selected using the largest decrease in difference energy per distortion bit, which is a greedy (sub-optimal) criteria. A given motion vector was coded by first encoding that vector with ½-pel accuracy and then encoding the higher accuracy with refinement bits. Coarse-to-fine coding tends to require significant bit overhead.
In “On the Optimal Motion Vector Accuracy for Block-Based Motion-Compensated Video Coders,” Proc. IST/SPIE Digital Video Compression: Algorithms and Technologies, pp. 302-314, San Jose, February 1996 (the “Ribas work”), Jordi Ribas-Corbera and David L. Neuhoff, modeled the effect of motion accuracy on bit rate and proposed several methods to estimate the optimal accuracies that minimize bit rate. The Ribas work set forth a full-search approach for computing motion vectors for a given accuracy and considered only bilinear interpolation. The best motion vector was found by minimizing MSE and the best accuracy was selected using some formulas derived from a rate-distortion optimization. The motion vectors and accuracies were encoded with frame-adaptive entropy coders, which are complex to implement in real-time applications.
In “Proposal for a new core experiment on prediction enhancement at higher bitrates,” ISO/IEC JTC1/SC29/WG11 Coding of Moving Pictures and Audio, MPEG 97/1827, Sevilla, February 1997 and “Performance Evaluation of a Reduced Complexity Implementation for Quarter Pel Motion Compensation,” ISO/IEC JTC1/SC29/WG11 Coding of Moving Pictures and Audio, MPEG 97/3146, San Jose, January 1998, Ulrich Benzler proposed using ¼-pel accurate motion vectors for the video sequence and more advanced interpolation filters for the MPEG4 video coding standard. Benzler, however, used the Girod's fast-search technique to find the ¼-pel motion vectors. Benzler did consider different interpolation filters, but proposed a complex filter at the first stage and a simpler filter at the second stage and interpolated one macroblock at a time. This approach does not require much cache memory, but it is computationally expensive because of its complexity and because all motion vectors are computed with ¼-pel accuracy for all the possible modes in a macroblock (e.g., 16×16, four-8×8, sixteen-4×4, etc.) and then the best mode is determined. Benzler used the MAD criteria to find the best motion vector which was fixed to ¼-pel accuracy for the whole sequence, and hence he did not address how to select the best motion accuracy. Finally, Benzler encoded the motion vectors with a variable-length code (“VLC”) table that could be used for encoding ½ and ¼ pixel ½- and ¼-pixel accurate vectors.
The references discussed above do not estimate the motion vectors using optimized rate-distortion criteria and do not exploit the convexity properties of such criteria to reduce computational complexity. Further, these references do not use effective strategies to encode motion vectors and their accuracies.
BRIEF SUMMARY OF THE INVENTION
One preferred embodiment of the present invention addresses the problems of the prior art by computing motion vectors of high pixel accuracy (also denoted as “fractional” or “sub-pixel” accuracy) with a minor increase in computation.
Experiments have demonstrated that, by using the search strategy of the present invention, a video encoder can achieve significant compression gains (e.g., up to thirty percent in bit rate savings over the classical choices of motion accuracy) using similar levels of computation. Since the motion accuracies are adaptively computed and selected, the present invention may be described as adaptive motion accuracy (“AMA”).
One preferred embodiment of the present invention uses fast-search strategies in sub-pixel space that smartly search for the best motion vectors. This technique estimates motion vectors in motion-compensated video coding by finding a best motion vector for a macroblock. The first step is searching a first set of motion vector candidates in a grid of sub-pixel resolution of a predetermined square radius centered on V1 to find a best motion vector V2. Next, a second set of motion vector candidates in a grid of sub-pixel resolution of a predetermined square radius centered on V2 is searched to find a best motion vector V3. Then, a third set of motion vector candidates in a grid of sub-pixel resolution of a predetermined square radius centered on V3 is searched to find the best motion vector of the macroblock.
In an alternate preferred embodiment of the present invention, a technique for estimating high-accurate motion vectors may use different interpolation filters at different stages in order to reduce computational complexity.
Another alternate preferred embodiment of the present invention selects the best vectors and accuracies in a rate-distortion (“RD”) sense. This embodiment uses rate-distortion criteria that adapts according to the different motion accuracies to determine both the best motion vectors and the best motion accuracies.
Still further, another alternate preferred embodiment of the present invention encodes the motion vector and accuracies with an effective VLC approach. This technique uses a VLC table that is interpreted differently at different coding units, according to the associated motion vector accuracy.
The foregoing and other objectives, features, and advantages of the invention will be more readily understood upon consideration of the following detailed description of the invention, taken in conjunction with the accompanying drawings.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING
FIG. 1 is a diagram of exemplary full-pel and ⅓-pel locations in velocity space.
FIG. 2 is a flowchart illustrating a prior art method for estimating the best motion vector.
FIG. 3 is a diagram of an exemplary location of motion vector candidates for full-search in sub-pixel velocity space.
FIG. 4 is a flowchart illustrating a full-search preferred embodiment of the method for estimating the best motion vector of the present invention.
FIG. 5 is a diagram of an exemplary location of motion vector candidates for fast-search in sub-pixel velocity space.
FIG. 6 is a flowchart illustrating a fast-search preferred embodiment of the method for estimating the best motion vector of the present invention.
FIG. 7 is a detail flowchart illustrating an alternate preferred embodiment of step 114 of FIG. 6.
FIG. 8 is a graphical representation of experimental performance results of the Telenor encoder with and without AMA in the “Container” video sequence, with QCIF resolution, and at the frame rate of 10 frames per second.
FIG. 9 is a graphical representation of experimental performance results of the Telenor encoder with and without AMA in the “News” video sequence, with QCIF resolution, and at the frame rate of 10 frames per second.
FIG. 10 is a graphical representation of experimental performance results of the Telenor encoder with and without AMA in the “Mobile” video sequence, with QCIF resolution, and at the frame rate of 10 frames per second.
FIG. 11 is a graphical representation of experimental performance results of the Telenor encoder with and without AMA in the “Garden” video sequence, with SIF resolution, and at the frame rate of 15 frames per second.
FIG. 12 is a graphical representation of experimental performance results of the Telenor encoder with and without AMA in the “Garden” video sequence, with QCIF resolution, and at the frame rate of 15 frames per second.
FIG. 13 is a graphical representation of experimental performance results of the Telenor encoder with and without AMA in the “Tempete” video sequence, with SIF resolution, and at the frame rate of 15 frames per second.
FIG. 14 is a graphical representation of experimental performance results of the Telenor encoder with and without AMA in the “Tempete” video sequence, with QCIF resolution, and at the frame rate of 15 frames per second.
FIG. 15 is a graphical representation of experimental performance results of the Telenor encoder with and without AMA in the “Paris shaked” video sequence, with QCIF resolution, and at the frame rate of 10 frames per second.
FIG. 16 is a graphical representation of experimental performance results of fast-search (“Telenor FSAMA+c”) and full-search (“Telenor AMA+c”) strategies in the “Mobile” video sequence, with QCIF resolution, and at the frame rate of 10 frames per second.
FIG. 17 is a graphical representation of experimental performance results of fast-search (“Telenor FSAMA+c”) and full-search (“Telenor AMA+c”) strategies in the “Container” video sequence, with QCIF resolution, and at the frame rate of 10 frames per second.
FIG. 18 is a graphical representation of experimental performance results of tests using only one reference frame for motion compensation as compared to tests using multiple reference frames for motion compensation in the “Mobile” video sequence, with QCIF resolution, and at the frame rate of 10 frames per second.
DETAILED DESCRIPTION OF THE INVENTION
The methods of the present invention are described herein in terms of the motion accuracy being modified at each image block. These methods, however, may be applied when the accuracy is fixed for the whole sequence or modified on a frame-by-frame basis. The present invention is also described as using Telenor's video encoders (and particularly the Telenor encoder) as described in the Background of the Invention. Although described in terms of Telenor's video encoders, the techniques described herein are applicable to any other motion-compensated video coder.
Most video coders use motion vectors with half pixel (or “½-pel”) accuracy and bilinear interpolation. The first version of Telenor's encoder also used ½-pel motion vectors and bilinear interpolation. The latest version of Telenor's encoder, however, incorporated ⅓-pel vectors and cubic-like interpolation because of the additional compression gains. Specifically, at a given macroblock, Telenor's encoder estimates the best motion vector in two steps shown in FIG. 2. First, the Telenor encoder searches for the best integer-pel vector V1 (FIG. 1) 100. Second, the Telenor encoder searches for the best ⅓-pixel accurate vector V1/3 (FIG. 1) near V 1 102. This second step is shown graphically in FIG. 1 where a total of eight blocks (each having an array of 16×16 pixels) in the 3×3 interpolated reference frame are checked to find the best match. The motion vectors for these eight blocks are represented by the eight solid dots in the grid centered on V1. In FIG. 1 the best match is the block associated to the motion vector V1/3=(Vx, Vy)=(1+⅓, 1).
The technology of the present invention allows the encoder to choose between any set of motion accuracies (for example, ½, ⅓, and ⅙-pel accurate motion vectors) using either a full search strategy or a fast search strategy.
Full-Search AMA Search Strategy
As shown in FIGS. 3 and 4, in the full-search adaptive motion accuracy (“AMA”) search strategy the encoder searches all the motion vector candidates in a grid of ⅙-pixel resolution and a “square radius” (defined herein as a square block defined by a number of pixels up, a number of pixels down, and a number of pixels to both sides) of five pixels as shown in FIG. 3. FIG. 4 shows that the first step of the full-search AMA is to search for the best integer-pel vector V1 (FIG. 1) 104. In the second step of the full-search AMA, the encoder searches for the best ⅙-pixel accurate vector V1/6 (FIG. 3) near V 1 106. In other words, the full-search AMA modifies the second step of the Telenor's process so that the encoder also searches for motion vector candidates in other sub-pixel locations in the velocity space. The objective is to find the best motion vector in the grid, i.e., the vector that points to the block (in the interpolated reference frame) that best matches the current macroblock. Although the full-search strategy is computationally complex since it searches 120 sub-pixel candidates, it shows the full potential of this preferred method of the present invention.
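A minimal sketch of this full-search step follows, assuming the integer-pel vector and all candidates are expressed in ⅙-pel units and that an rd_cost(mv) callable returning the rate-distortion cost of a candidate is available; both are placeholders introduced for illustration, not interfaces defined by the patent.

    def full_search_ama(v1_sixth, rd_cost, radius=5):
        # v1_sixth: best integer-pel vector expressed in 1/6-pel units, e.g. (6*x, 6*y).
        # Scan the 11x11 grid of 1/6-pel positions of "square radius" 5 centered on
        # v1_sixth (the 120 sub-pixel candidates of FIG. 3 plus the center itself).
        best_mv, best_cost = v1_sixth, rd_cost(v1_sixth)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                if dx == 0 and dy == 0:
                    continue  # the center was already evaluated
                mv = (v1_sixth[0] + dx, v1_sixth[1] + dy)
                cost = rd_cost(mv)
                if cost < best_cost:
                    best_mv, best_cost = mv, cost
        return best_mv, best_cost

Working in ⅙-pel integer units keeps ½-, ⅓-, and ⅙-pel candidates on a single grid, which is one simple way to represent the mixed-accuracy candidates discussed below.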
A critical issue in the motion vector search is the choice of a measure or criterion for establishing which block is the best match for the given macroblock. In practice, most methods use either the mean squared error (“MSE”) or mean absolute difference (“MAD”) criteria. The MSE between two blocks consists of subtracting the pixel values of the two blocks, squaring the pixel differences, and then taking the average. The MAD difference between two blocks is a similar distortion measure, except that the absolute value of the pixel differences is computed instead of the squares. If two image blocks are similar to each other, the MSE and MAD values will be small. If, however, the image blocks are dissimilar, these values will be large. Hence, typical video coders find the best match for a macroblock by selecting the motion vector that produces either the smallest MSE or the smallest MAD. In other words, the block associated to the best motion vector is the one closest to the given macroblock in an MSE or MAD sense.
Unfortunately, the MSE and MAD distortion measures do not take into account the cost in bits of actually encoding the vector. For example, a given motion vector may minimize the MSE, but it may be very costly to encode with bits, so it may not be the best choice from a coding standpoint.
To deal with this, advanced encoders such as those described by Telenor use rate-distortion (“RD”) criteria of the type “distortion+L*Bits” to select the best motion vector. The value of “distortion” is typically the MSE or MAD, “L” is a constant that depends on the compression level (i.e., the quantization step size), and “Bits” is the number of bits required to code the motion vector. In general, any RD criteria of this type would work with the present invention. However, in the present invention “Bits” include the bits needed for encoding the vector and those for encoding the accuracy of the vector. In fact, some candidates can have several “Bits” values, because they can have several accuracy modes. For example, the candidate at location (½, −½) can be thought of as having ½- or ⅙-pixel accuracy.
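The distortion measures and the rate-distortion cost described above can be written compactly as in the sketch below; the helper names, the way the motion-vector and accuracy bits are passed in, and the lambda value are illustrative assumptions (in practice the constant would depend on the quantization step size).

    import numpy as np

    def mse(a, b):
        # Mean squared error between two equally sized blocks.
        d = a.astype(np.float64) - b.astype(np.float64)
        return float(np.mean(d * d))

    def mad(a, b):
        # Mean absolute difference between two equally sized blocks.
        return float(np.mean(np.abs(a.astype(np.float64) - b.astype(np.float64))))

    def rd_cost(macroblock, prediction, mv_bits, accuracy_bits, lam):
        # "distortion + L * Bits", where Bits covers both the bits for the motion
        # vector itself and the bits that signal its accuracy mode.
        return mse(macroblock, prediction) + lam * (mv_bits + accuracy_bits)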
Fast-Search AMA Search Strategy
As shown in FIGS. 5 and 6, in the fast-search adaptive motion accuracy (“AMA”) search strategy the encoder checks only a small set of the motion vector candidates. In the first step of the fast-search AMA, the encoder checks the eight motion vector candidates in a grid of ½-pixel resolution of square radius 1, which is centered on V 1 108. V2 is then set to denote the candidate that has the smallest RD cost (i.e., the best of the eight previous vectors and V1) 110. Next, the encoder checks the eight motion vector locations in a grid of ⅙-pixel resolution of square radius 1 that is now centered on V 2 112. If V2 has the smallest RD cost 114, the encoder stops its search and selects V2 as the motion vector for the block. Otherwise, V3 is set to denote the best motion vector of the eight 116. The encoder then searches for a new motion vector candidate in the grid of ⅙-pixel resolution of square radius 1 that is centered on V 3 118. It should be noted that some of the candidates in this grid have already been tested and can be skipped. The candidate with the smallest RD cost in this last step is selected as the motion vector for the block 120.
Experimental data has shown that, on average, this simple fast search strategy typically checks the RD cost of about eighteen locations in sub-pixel space (ten more than Telenor's search strategy), and hence the overall computational complexity is only moderately increased.
The experimental data discussed below in connection with FIGS. 8-18 show that there is practically no loss in compression performance from using this fast-search version of AMA. This is because the fast-search AMA search strategy exploits the convexity of the “distortion+L*Bits” curve (c.f., “distortion” is known to be convex), by creating a path that smartly follows the RD cost from higher to lower levels.
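The fast-search steps 108-120 can be sketched as below, again with motion vectors expressed in ⅙-pel units (so a ½-pel step is 3 units) and an rd_cost(mv) placeholder; the candidate cache reflects the note above that positions already tested can be skipped. This is only one illustrative reading of the flowchart, not a reference implementation.

    import itertools

    def _best(candidates, rd_cost, cache):
        # Return the candidate with the smallest RD cost, caching evaluations.
        best_mv, best_cost = None, None
        for mv in candidates:
            if mv not in cache:
                cache[mv] = rd_cost(mv)
            if best_cost is None or cache[mv] < best_cost:
                best_cost, best_mv = cache[mv], mv
        return best_mv, best_cost

    def _ring(center, step):
        # The eight neighbours of `center` on a grid of spacing `step`
        # ("square radius 1"), in 1/6-pel units.
        return [(center[0] + dx * step, center[1] + dy * step)
                for dx, dy in itertools.product((-1, 0, 1), repeat=2)
                if (dx, dy) != (0, 0)]

    def fast_search_ama(v1_sixth, rd_cost):
        cache = {}
        # Steps 108/110: the eight 1/2-pel neighbours (step 3 in 1/6-pel units) plus V1.
        v2, _ = _best([v1_sixth] + _ring(v1_sixth, 3), rd_cost, cache)
        # Steps 112/114: the eight 1/6-pel neighbours of V2; stop if V2 is still best.
        v3, _ = _best([v2] + _ring(v2, 1), rd_cost, cache)
        if v3 == v2:
            return v2
        # Steps 118/120: 1/6-pel neighbours of V3 (already-tested candidates are cached).
        best, _ = _best([v3] + _ring(v3, 1), rd_cost, cache)
        return best

The cache is one simple way to skip candidates that were already evaluated in an earlier step, which is where much of the complexity saving comes from.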
Alternate embodiments of the invention replace one or more of the steps 108-120. These embodiments have also been effective and have further reduced the number of motion vector candidates to check in the sub-pixel velocity space.
FIG. 7, for example, checks candidates of ⅓-pel accuracy. In this embodiment step 112 is replaced by one of three possible scenarios. First, if the best motion vector candidate from step 110 is at the center of V1 (the “integer-pel vector”) 130, then the encoder checks three candidates of ⅓-pel accuracy between the center vector and the ½-pel location with the next lowest RD cost 132. Second, if the best motion vector candidate from step 110 is a corner vector 134, then, the encoder checks the four vector candidates of ⅓-pel accuracy that are closest to such corner 136. Third, if the best motion vector candidate from step 110 is between two corners 138, then, the encoder determines which of these two corners has lower RD cost and checks the four vector candidates of ⅓-pel accuracy that are closest to the line between such corner and the best candidate from step 110 140. It should be noted that in implementing this process step 138 may be unnecessary because if V2 is neither at the center nor a corner vector, then it would necessarily be between two corners. If the encoder is set to find motion vectors with ⅓-pixel accuracy, FIG. 7 could be modified to end rather than continuing with step 114.
Computation And Memory Savings
Because step 108 checks only motion vector candidates of ½-pixel accuracy, the computation and memory requirements for the hardware or software implementation are significantly reduced. To be specific, in a smart implementation embodiment of this fast-search the reference frame is interpolated by 2×2 in order to obtain the RD costs for the ½-pel vector candidates. A significant amount of fast (or cache) memory for a hardware or software encoder is saved as compared to Telenor's approach that needed to interpolate the reference frame by 3×3. In comparison to the Telenor encoder, this is a cache memory savings of 9/4, or a factor of 2.25. The few additional interpolations can be done later on a block-by-block basis.
Additionally, since the interpolations in step 108 are used to direct the search towards the lower values of the RD cost function, a complex filter is not needed for these interpolations. Accordingly, computation power may be saved by using a simple bilinear filter for step 108.
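As one illustration of such a bilinear step, the 2×2 interpolation of the reference frame might be sketched as below; the exact rounding and border handling are not specified by the patent, so this is an assumption for illustration only.

# Half-pel samples as simple averages of the neighboring integer-pel samples.
import numpy as np

def interpolate_2x2_bilinear(frame):
    """Return a (2H-1) x (2W-1) array holding integer- and half-pel samples."""
    frame = frame.astype(np.float64)
    h, w = frame.shape
    up = np.zeros((2 * h - 1, 2 * w - 1))
    up[0::2, 0::2] = frame                                   # integer-pel positions
    up[0::2, 1::2] = (frame[:, :-1] + frame[:, 1:]) / 2      # horizontal half-pel
    up[1::2, 0::2] = (frame[:-1, :] + frame[1:, :]) / 2      # vertical half-pel
    up[1::2, 1::2] = (frame[:-1, :-1] + frame[:-1, 1:]
                      + frame[1:, :-1] + frame[1:, 1:]) / 4  # diagonal half-pel
    return up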
Also, other key coding decisions such as selecting the mode of a macroblock (e.g., 16×16, four-8×8, etc.) can be done using the ½-pel vectors because such decisions do not benefit significantly from using higher accuracies. Then, the encoder can use a more complex cubic filter to interpolate the required sub-pixel values for the few additional vector candidates to check in the remaining steps. Since the macroblock mode has already been chosen, these final interpolations only need to be done for the chosen mode.
Using multiple filters yielded computation savings of over twenty percent in running time on a Sparc Ultra 10 workstation in comparison to Telenor's approach, which uses cubic interpolation throughout. Additionally, the fast-memory requirements were reduced by nearly half, with little or no loss in compression performance. Comparing one preferred embodiment of the fast-search with Benzler's technique, the latter requires about 70 interpolations per pixel in the Telenor encoder, whereas the present invention requires only about 7 interpolations per pixel.
Coding The Motion Vector And Accuracies With Bits
Once the best motion vector and accuracy are determined, the encoder encodes both the motion vector and the accuracy values with bits. One approach is to encode the motion vector with a given accuracy (e.g., half-pixel accuracy) and then add some extra bits to refine the vector to the higher motion accuracy. This is the strategy suggested by B. Girod, but it is sub-optimal in a rate-distortion sense.
In one preferred embodiment of the present invention, the accuracy of the motion vector for a macroblock is first encoded using a simple code such as the one given in Table 1. Any other table with code lengths {1, 2, 2} could be used as well. The bit rate could be further reduced using a typical DPCM approach.
TABLE 1
VLC table to indicate the accuracy mode for a given macroblock.
Code    Motion Accuracy
 1      ½-pel
01      ⅓-pel
11      ⅙-pel

Next, the value of the vector(s) in the respective accuracy space is encoded. These bits can be obtained from entries of a single VLC table such as the one used in the H26L codec. The key idea is that these bits are interpreted differently depending on the motion accuracy for the macroblock. For example, if the motion accuracy is ⅓ and the code bits for the X component of the difference motion vector are 00001 (observe that this code is the fourth entry (code number 3) of H26L's VLC table in [6]), the X component of the vector is Vx=⅔. If the accuracy is ½, the same code corresponds to Vx=1.
Compared to the Benzler method for encoding the motion vectors with a variable length code ("VLC") table that could be used for encoding ½ and ¼ pixel accurate vectors, the method of the present invention can be used for encoding vectors of any motion accuracy, and the table can be interpreted differently at each frame and macroblock. Further, the general method of the present invention can be used for any motion accuracy, not necessarily accuracies that are multiples of each other or of the type 1/n (with n an integer). The number of increments in the given sub-pixel space is simply counted, and the bits in the associated entry of the table are used as the code.
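A sketch of this reinterpretation, assuming a code-number-to-signed-increment mapping (0→0, 1→+1, 2→−1, 3→+2, ...) chosen to be consistent with the worked example above; the actual H26L table entries are not reproduced here, so this mapping is an assumption for illustration.

from fractions import Fraction

def code_number_to_increments(code_number):
    # 0 -> 0, 1 -> +1, 2 -> -1, 3 -> +2, 4 -> -2, ...
    if code_number == 0:
        return 0
    magnitude = (code_number + 1) // 2
    return magnitude if code_number % 2 else -magnitude

def decode_component(code_number, accuracy):
    """Scale the signed increment count by the macroblock's motion accuracy."""
    return code_number_to_increments(code_number) * accuracy

# Code number 3 means +2 increments: Vx = 2/3 at 1/3-pel accuracy, Vx = 1 at 1/2-pel.
assert decode_component(3, Fraction(1, 3)) == Fraction(2, 3)
assert decode_component(3, Fraction(1, 2)) == 1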
From the decoder's viewpoint, once the motion accuracy is decoded, the motion vector can also be easily decoded. After that, the associated block in the previous frame is reconstructed using a typical 4-tap cubic interpolator. There is a different 4-tap filter for each motion accuracy.
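For illustration only, a 1-D 4-tap interpolation might look like the sketch below; the patent does not give the filter coefficients, so the Catmull-Rom weights used here are a stand-in rather than the decoder's actual filters.

def cubic_taps(phase):
    """4-tap Catmull-Rom weights for a fractional phase in [0, 1) (illustrative only)."""
    t = phase
    return (
        -0.5 * t + t * t - 0.5 * t ** 3,
        1.0 - 2.5 * t * t + 1.5 * t ** 3,
        0.5 * t + 2.0 * t * t - 1.5 * t ** 3,
        -0.5 * t * t + 0.5 * t ** 3,
    )

def interpolate_1d(samples, index, phase):
    """Interpolate between samples[index] and samples[index + 1] at the given phase.

    Assumes 1 <= index <= len(samples) - 3 so the 4-sample window stays in range.
    """
    weights = cubic_taps(phase)
    window = samples[index - 1:index + 3]
    return sum(w * s for w, s in zip(weights, window))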
The AMA does not increase decoding complexity, because the number of operations needed to reconstruct the predicted block is the same, regardless of the motion accuracy.
Experimental Results
FIGS. 8-18 show test results of the Telenor encoder with and without AMA for a variety of video sequences, resolutions, and frame rates, as described in Table 2. These figures show rate-distortion ("RD") plots for each case. The "Anchor" curve shows RD points from optimized H.263+ (FIGS. 8 and 9 only). The "Telenor ½+b" curve shows Telenor with ½-pel vectors and bilinear interpolation (the "classical case"). The "Telenor ⅓" curve shows the current Telenor proposal (the "Telenor encoder"). The "Telenor+AMA+c" curve shows the Telenor encoder with the full-search strategy of the present invention. The "Telenor+FSAMA+c" curve, shown in FIGS. 15-17, shows the current Telenor encoder with the fast-search strategy. (Unless otherwise specified, the full-search version of AMA was the encoder strategy used in the experiments.) All of the test results were cross-checked at the encoder and decoder. These results show that with AMA the gains in peak signal-to-noise ratio ("PSNR") can be as high as 1 dB over H26L, and even higher over the classical case.
TABLE 2
Description of the Experiments
Video sequence   FIG. #    Resolution   Frame rate
Container        FIG. 8    QCIF         10
News             FIG. 9    QCIF         10
Mobile           FIG. 10   QCIF         10
Mobile           FIG. 11   SIF          15
Garden           FIG. 12   QCIF         15
Tempete          FIG. 13   SIF          15
Tempete          FIG. 14   QCIF         15
Paris Shaked     FIG. 15   QCIF         10
The video sequences are commonly used by the video coding community, except for “Paris Shaked.” The latter is a synthetic sequence obtained by shifting the well-known sequence “Paris” by a motion vector whose X and Y components take a random value within [−1,1]. This synthetic sequence simulates small movements caused by a hand-held camera in a typical video phone scene.
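A sketch of how such a sequence could be generated; the per-frame application of the random shift and the use of scipy.ndimage.shift are assumptions made for illustration, since the patent does not specify how the shift was implemented.

import numpy as np
from scipy import ndimage

def shake_sequence(frames, rng=None):
    """Shift each frame by a random sub-pel vector with components in [-1, 1]."""
    rng = rng or np.random.default_rng()
    shaken = []
    for frame in frames:
        dy, dx = rng.uniform(-1.0, 1.0, size=2)   # random shift in pixels
        shaken.append(ndimage.shift(frame, (dy, dx), order=1, mode='nearest'))
    return shaken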
Comparison Of Full-Search And Fast-Search AMA
The experimental results shown in FIGS. 16 and 17 demonstrate that the encoder performance with fast-search (“Telenor FSAMA+c”) and full-search (“Telenor AMA+c”) strategies for AMA is practically the same. This is true because the fast-search strategies exploit the convexity of the RD cost curve in the sub-pixel velocity space. In other words, since the shape of the RD cost follows a smooth convex curve, its minimum should be easy to find with some smart fast-search schemes that descend down the curve.
Combining AMA And Multiple Reference Frames
In the plot shown in FIG. 18, the curves labeled “1r” used only one reference frame for the motion compensation, so these curves are the same as those presented in FIG. 10. The curves labeled “5r” used five reference frames.
The experiments show that the gains with AMA add to those obtained using multiple reference frames. The gain from AMA in the one-reference case can be measured by comparing the curve labeled with a "+" (Telenor AMA+c+1r) with the curve labeled with an "x" (Telenor ⅓+1r), and the gain in the five-reference case can be measured by comparing the curve labeled with a "diamond" (Telenor AMA+c+5r) with the curve labeled with a "*" (Telenor ⅓+5r).
It should be noted that the present invention may be implemented at the frame level so that different frames could use different motion accuracies, but within a frame all motion vectors would use the same accuracy. In this embodiment the motion vector accuracy would preferably be signaled only once at the frame layer. Experiments have shown that using the best, fixed motion accuracy for the whole frame should also produce compression gains similar to those presented here for the macroblock-adaptive case.
In another frame-based embodiment the encoder could do motion compensation on the entire frame with the different vector accuracies and then select the best accuracy according to the RD criteria. This approach is not suitable for pipelined, one-pass encoders, but it could be appropriate for software-based or more complex encoders. In still another frame-based embodiment, the encoder could use previous statistics and/or formulas to predict what the best accuracy will be for a given frame (e.g., the formulas set forth in the Ribas work, or a variation thereof, can be used). This approach would be well-suited for one-pass encoders, although the performance gains would depend on the precision of the formulas used for the prediction.
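A minimal sketch of this frame-level selection, assuming a frame_rd_cost() callable that motion-compensates the whole frame at a given accuracy and returns its total RD cost; the interface and the candidate accuracy set are assumptions, not part of the patent.

from fractions import Fraction

def select_frame_accuracy(frame_rd_cost,
                          accuracies=(Fraction(1, 2), Fraction(1, 3), Fraction(1, 6))):
    """Return (best_accuracy, best_cost) over the candidate accuracies."""
    costs = {acc: frame_rd_cost(acc) for acc in accuracies}
    best = min(costs, key=costs.get)
    return best, costs[best]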
The terms and expressions which have been employed in the foregoing specification are used therein as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding equivalents of the features shown and described or portions thereof, it being recognized that the scope of the invention is defined and limited only by the claims that follow.

Claims (29)

What is claimed is:
1. A fast-search adaptive motion accuracy search method for estimating motion vectors in motion-compensated video coding by finding a best motion vector for a macroblock, said method comprising the steps of:
(a) searching a first set of motion vector candidates in a grid of sub-pixel resolution of a predetermined square radius centered on V1, to find a best motion vector V2 using a first criteria;
(b) searching a second set of motion vector candidates in a grid of sub-pixel resolution of a predetermined square radius centered on V2 to find a best motion vector V3 using a second criteria;
(c) searching a third set of motion vector candidates in a grid of sub-pixel resolution of a predetermined square radius centered on V3 to find said best motion vector of said macroblock using a third criteria, and
(d) wherein at least one of said first criteria, said second criteria, and said third criteria is a rate-distortion criteria.
2. The method of claim 1, said step of searching a first set of motion vector candidates in a grid of sub-pixel resolution of a predetermined square radius centered on V1, to find a best motion vector V2 further comprising the step of searching a first set of eight motion vector candidates in a grid of ½-pixel resolution of square radius 1 centered on V1 to find a best motion vector V2.
3. The method of claim 1, said step of searching a second set of motion vector candidates in a grid of sub-pixel resolution of a predetermined square radius centered on V2 to find a best motion vector V3 further comprising the step of searching a second set of eight motion vector candidates in a grid of ⅙-pixel resolution of square radius 1 centered on V2 to find a best motion vector V3.
4. The method of claim 1 further comprising the steps of using V2 as the motion vector for the macroblock if V2 has the smallest rate-distortion cost and skipping step (c) of claim 1.
5. The method of claim 1, said step of searching a third set of motion vector candidates in a grid of sub-pixel resolution of a predetermined square radius centered on V3 to find said best motion vector of said macroblock further comprising the step of searching a third set of eight motion vector candidates in a grid of ⅙-pixel resolution of square radius 1 centered on V3 to find said best motion vector of said macroblock.
6. The method of claim 1, said step of searching a third set of motion vector candidates in a grid of sub-pixel resolution of a predetermined square radius centered on V3 to find said best motion vector of said macroblock further comprising the step of skipping motion vector candidates of said third set of motion vector candidates that have already been tested.
7. The method of claim 1 further wherein said step of searching said first set of motion vector candidates further comprises the step of searching said first set of motion vector candidates using a first filter to do a first interpolation, said step of searching said second set of motion vector candidates further comprises the step of searching said second set of motion vector candidates using a second filter to do a second interpolation, and said step of searching said third set of motion vector candidates further comprises the step of searching said third set of motion vector candidates using a third filter to do a third interpolation.
8. The method of claim 1, said step of searching a second set of motion vector candidates in a grid of sub-pixel resolution of a predetermined square radius centered on V2 to find a best motion vector V3 further comprising the steps of:
(a) searching three candidates of ⅓-pel accuracy V2 and a ½-pel location with the next lowest rate-distortion cost if V2 is at the center;
(b) searching four vector candidates of ⅓-pel accuracy that are closest to V2 if V2 is a corner vector; and
(c) determining which of two corners has lower rate-distortion cost and searching four vector candidates of ⅓-pel accuracy that are closest to a line between said corner with lower rate-distortion cost, if V2 is between two corners vectors.
9. An adaptive motion accuracy search method for estimating motion vectors in motion-compensated video coding by finding a best motion vector for a macroblock, said method comprising the steps of:
(a) searching a first set of motion vector candidates in a grid centered on V1 using a first criteria to find a best motion vector V2 using a first filter to do a first interpolation;
(b) searching a second set of motion vector candidates in a grid centered on V2 using a second criteria to find a best motion vector V3 using a second filter to do a second interpolation; and
(c) searching a third set of motion vector candidates in a grid centered on V3 using a third criteria to find said best motion vector of said macroblock using a third filter to do a third interpolation;
(d) wherein at least one of said first criteria, said second criteria, and said third criteria is a rate-distortion criteria.
10. The method of claim 9 wherein said step of searching using a first filter to do a first interpolation further comprises using a simple filter to do a coarse interpolation.
11. The method of claim 9 wherein said step of searching using a first filter to do a first interpolation further comprises using a simple filter to do a coarse interpolation and said step of searching using a second filter to do a second interpolation further comprises using a complex filter to do a fine interpolation.
12. The method of claim 11 wherein said step of searching using a third filter to do a third interpolation further comprises using a complex filter to do a fine interpolation.
13. The method of claim 9 wherein said step of searching using a first filter to do a first interpolation further comprises using a bilinear filter to interpolate the reference frame by 2×2.
14. The method of claim 9 wherein said step of searching using a first filter to do a first interpolation further comprises using a bilinear filter to interpolate the reference frame by 2×2 and said step of searching using a second filter to do a second interpolation further comprises using a cubic filter to do a fine interpolation.
15. The method of claim 14 wherein said step of searching using a third filter to do a third interpolation further comprises using a cubic filter to do a fine interpolation.
16. An adaptive motion accuracy search method for estimating motion vectors in motion-compensated video coding by finding a best motion vector for a macroblock, said method comprising the steps of:
(a) searching at a first motion accuracy for a first best motion vector of said macroblock;
(b) encoding said first best motion vector and said first motion accuracy;
(c) searching for at least one second best motion vector of said macroblock at an at least one second motion accuracy;
(d) encoding said at least one second best motion vector and said at least one second motion accuracy; and
(e) selecting the best motion vector of said first and at least one second best motion vectors using rate-distortion criteria.
17. The method of claim 16 wherein said step of selecting the best motion vector using rate-distortion criteria further comprises the step of said rate-distortion criteria adapting according to the different motion accuracies to determine both the best motion vectors and the best motion accuracies.
18. The method of claim 16, said step of searching for at least one second best motion vector at an at least one second motion accuracy further comprising the step of searching for at least one second best motion vector of said macroblock at an at least one second motion accuracy that is finer than said first motion accuracy.
19. The method of claim 16 wherein said step of selecting the best motion vector using rate-distortion criteria further comprises the step of using rate-distortion criteria of the type “distortion+L*Bits” to select the best motion vector.
20. An adaptive motion accuracy search method for estimating motion vectors in motion-compensated video coding by finding a best motion vector for a macroblock, said method comprising the steps of:
(a) searching at a motion accuracy for a best motion vector of said macroblock using rate-distortion criteria;
(b) encoding said motion accuracy using a code from a VLC table that is interpreted differently at different coding units according to the associated motion vector accuracy; and
(c) encoding said best motion vector in the respective accuracy space.
21. A system for estimating motion vectors in motion-compensated video coding by finding a best motion vector for a macroblock, said system comprising:
(a) a first encoder for searching a first set of motion vector candidates in a grid of sub-pixel resolution of a predetermined square radius centered on V1 using a first criteria to find a best motion vector V2;
(b) a second encoder for searching a second set of motion vector candidates in a grid of sub-pixel resolution of a predetermined square radius centered on V2 using a second criteria to find a best motion vector V3; and (c) a third encoder for searching a third set of motion vector candidates in a grid of sub-pixel resolution of a predetermined square radius centered on V3 using a third criteria to find said best motion vector of said macroblock;
(d) wherein at least one of said first criteria, said second criteria, and said third criteria is a rate-distortion criteria.
22. The system of claim 21 wherein said first, second, and third encoders are a single encoder.
23. A fast-search adaptive motion accuracy search method for estimating motion vectors in motion-compensated video coding by finding a best motion vector for a macroblock, said method comprising the steps of:
(a) searching a first set of motion vector candidates in a grid of sub-pixel resolution of a predetermined square radius centered on V1 to find a best motion vector V2;
(b) searching a second set of motion vector candidates in a grid of sub-pixel resolution of a predetermined square radius centered on V2 to find a best motion vector V3;
(c) searching a third set of motion vector candidates in a grid of sub-pixel resolution of a predetermined square radius centered on V3 to find said best motion vector of said macroblock, and
(d) using V2 as the motion vector for the macroblock if V2 has the smallest rate-distortion cost and skipping step (c).
24. The method of claim 1, wherein said first criteria, said second criteria, and said third criteria are all rate-distortion criteria.
25. The method of claim 9, wherein said first criteria, said second criteria, and said third criteria are all rate-distortion criteria.
26. The system of claim 21, wherein said first criteria, said second criteria, and said third criteria are all rate-distortion criteria.
27. A motion compensated video encoding apparatus comprising:
a motion compensator that compensates a motion using a motion vector having a fractional accuracy level; and
an encoder that encodes the motion vector and a fractional accuracy level which indicates two or more levels of a fractional accuracy expressed by 1/N pel (N is an arbitrary integer) of the motion vector, wherein
the motion compensation is performed by interpolation with a filter corresponding to the fractional accuracy level,
the fractional accuracy level is set frame-by-frame so that different frames could use different motion accuracies and is sent frame-by-frame,
the encoder encodes a variable length fractional accuracy level which indicates the fractional accuracy level, separately from encoding the motion vector, and
the encoder encodes the motion vector for each block in a block by block manner.
28. A motion compensated video encoding method comprising:
performing a motion compensation using a motion vector having a fractional accuracy level; and
encoding the motion vector and a fractional accuracy level which indicates two or more levels of a fractional accuracy expressed by 1/N pel (N is an arbitrary integer) of the motion vector, wherein
the motion compensation is performed by interpolation with a filter corresponding to the fractional accuracy level,
the fractional accuracy level is set frame-by-frame so that different frames could use different motion accuracies and is sent frame-by-frame,
encoding a variable length fractional accuracy level which indicates the fractional accuracy level, separately from encoding the motion vector, and
encoding the motion vector for each block in a block by block manner.
29. A video processing method comprising:
performing a motion compensation using a motion vector having a fractional accuracy level; and
computing the motion vector and a fractional accuracy level which indicates two or more levels of a fractional accuracy expressed by 1/N pel (N is an arbitrary integer) of the motion vector, wherein
the motion compensation is performed by interpolation with a filter corresponding to the fractional accuracy level,
the fractional accuracy level is set frame-by-frame so that different frames could use different motion accuracies and is computed frame-by-frame,
computing the fractional accuracy level separately from the motion vector by using a variable length code, and
computing the motion vector for each block in a block by block manner.
US14/170,134 1999-07-27 2014-01-31 Methods for motion estimation with adaptive motion accuracy Expired - Lifetime USRE46468E1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/170,134 USRE46468E1 (en) 1999-07-27 2014-01-31 Methods for motion estimation with adaptive motion accuracy

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US14610299P 1999-07-27 1999-07-27
US09/615,791 US6968008B1 (en) 1999-07-27 2000-07-13 Methods for motion estimation with adaptive motion accuracy
US11/984,661 USRE45014E1 (en) 1999-07-27 2007-11-20 Methods for motion estimation with adaptive motion accuracy
US14/170,134 USRE46468E1 (en) 1999-07-27 2014-01-31 Methods for motion estimation with adaptive motion accuracy

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/615,791 Reissue US6968008B1 (en) 1999-07-27 2000-07-13 Methods for motion estimation with adaptive motion accuracy

Publications (1)

Publication Number Publication Date
USRE46468E1 true USRE46468E1 (en) 2017-07-04

Family

ID=26843579

Family Applications (4)

Application Number Title Priority Date Filing Date
US09/615,791 Ceased US6968008B1 (en) 1999-07-27 2000-07-13 Methods for motion estimation with adaptive motion accuracy
US11/984,661 Expired - Lifetime USRE45014E1 (en) 1999-07-27 2007-11-20 Methods for motion estimation with adaptive motion accuracy
US13/289,902 Expired - Lifetime USRE44012E1 (en) 1999-07-27 2011-11-04 Methods for motion estimation with adaptive motion accuracy
US14/170,134 Expired - Lifetime USRE46468E1 (en) 1999-07-27 2014-01-31 Methods for motion estimation with adaptive motion accuracy

Family Applications Before (3)

Application Number Title Priority Date Filing Date
US09/615,791 Ceased US6968008B1 (en) 1999-07-27 2000-07-13 Methods for motion estimation with adaptive motion accuracy
US11/984,661 Expired - Lifetime USRE45014E1 (en) 1999-07-27 2007-11-20 Methods for motion estimation with adaptive motion accuracy
US13/289,902 Expired - Lifetime USRE44012E1 (en) 1999-07-27 2011-11-04 Methods for motion estimation with adaptive motion accuracy

Country Status (4)

Country Link
US (4) US6968008B1 (en)
EP (4) EP2373036B1 (en)
JP (4) JP4614512B2 (en)
HK (1) HK1161948A1 (en)

Families Citing this family (71)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7269219B1 (en) 1997-02-14 2007-09-11 At&T Corp. Non-linear quantizer for video coding
EP1359766A3 (en) * 1997-02-14 2005-02-16 AT&T Corp. A method of generating a dequantized dc luminance or dc chrominance coefficient
WO1998046005A2 (en) * 1997-04-07 1998-10-15 At & T Corp. System and method for processing object-based audiovisual information
US6968008B1 (en) * 1999-07-27 2005-11-22 Sharp Laboratories Of America, Inc. Methods for motion estimation with adaptive motion accuracy
US6940903B2 (en) * 2001-03-05 2005-09-06 Intervideo, Inc. Systems and methods for performing bit rate allocation for a video data stream
CN1976472A (en) * 2001-09-18 2007-06-06 松下电器产业株式会社 Image decoding method
JP3861698B2 (en) 2002-01-23 2006-12-20 ソニー株式会社 Image information encoding apparatus and method, image information decoding apparatus and method, and program
US8284844B2 (en) 2002-04-01 2012-10-09 Broadcom Corporation Video decoding system supporting multiple standards
KR100474285B1 (en) * 2002-04-08 2005-03-08 엘지전자 주식회사 Method for finding motion vector
US7620109B2 (en) * 2002-04-10 2009-11-17 Microsoft Corporation Sub-pixel interpolation in motion estimation and compensation
US7305034B2 (en) * 2002-04-10 2007-12-04 Microsoft Corporation Rounding control for multi-stage interpolation
US7224731B2 (en) * 2002-06-28 2007-05-29 Microsoft Corporation Motion estimation/compensation for screen capture video
JP4724351B2 (en) 2002-07-15 2011-07-13 三菱電機株式会社 Image encoding apparatus, image encoding method, image decoding apparatus, image decoding method, and communication apparatus
JP4841101B2 (en) * 2002-12-02 2011-12-21 ソニー株式会社 Motion prediction compensation method and motion prediction compensation device
US7408988B2 (en) * 2002-12-20 2008-08-05 Lsi Corporation Motion estimation engine with parallel interpolation and search hardware
US20050013498A1 (en) 2003-07-18 2005-01-20 Microsoft Corporation Coding of motion vector information
US7724827B2 (en) 2003-09-07 2010-05-25 Microsoft Corporation Multi-layer run level encoding and decoding
US8064520B2 (en) 2003-09-07 2011-11-22 Microsoft Corporation Advanced bi-directional predictive coding of interlaced video
US7567617B2 (en) 2003-09-07 2009-07-28 Microsoft Corporation Predicting motion vectors for fields of forward-predicted interlaced video frames
US7253374B2 (en) * 2003-09-15 2007-08-07 General Motors Corporation Sheet-to-tube welded structure and method
US7983835B2 (en) 2004-11-03 2011-07-19 Lagassey Paul J Modular intelligent transportation system
DE102004019339A1 (en) * 2004-04-21 2005-11-24 Siemens Ag Prediction method and associated method for decoding a prediction method, associated encoding device and decoding device
US8130825B2 (en) * 2004-05-10 2012-03-06 Nvidia Corporation Processor for video data encoding/decoding
US8018463B2 (en) * 2004-05-10 2011-09-13 Nvidia Corporation Processor for video data
EP1617672A1 (en) * 2004-07-13 2006-01-18 Matsushita Electric Industrial Co., Ltd. Motion estimator/compensator including a 16-bit 1/8 pel interpolation filter
TWI256844B (en) * 2004-11-16 2006-06-11 Univ Nat Kaohsiung Applied Sci Flat hexagon-based search method for fast block moving detection
JP4736456B2 (en) * 2005-02-15 2011-07-27 株式会社日立製作所 Scanning line interpolation device, video display device, video signal processing device
EP1886502A2 (en) * 2005-04-13 2008-02-13 Universität Hannover Method and apparatus for enhanced video coding
US20060233258A1 (en) * 2005-04-15 2006-10-19 Microsoft Corporation Scalable motion estimation
TWI280805B (en) * 2005-07-20 2007-05-01 Novatek Microelectronics Corp Method and apparatus for cost calculation in decimal motion estimation
US8165205B2 (en) * 2005-09-16 2012-04-24 Sony Corporation Natural shaped regions for motion compensation
US8208548B2 (en) 2006-02-09 2012-06-26 Qualcomm Incorporated Video encoding
US8494052B2 (en) * 2006-04-07 2013-07-23 Microsoft Corporation Dynamic selection of motion estimation search ranges and extended motion vector ranges
US8155195B2 (en) * 2006-04-07 2012-04-10 Microsoft Corporation Switching distortion metrics during motion estimation
US20070268964A1 (en) * 2006-05-22 2007-11-22 Microsoft Corporation Unit co-location-based motion estimation
US9307122B2 (en) * 2006-09-27 2016-04-05 Core Wireless Licensing S.A.R.L. Method, apparatus, and computer program product for providing motion estimation for video encoding
US20080111923A1 (en) * 2006-11-09 2008-05-15 Scheuermann W James Processor for video data
KR101369746B1 (en) 2007-01-22 2014-03-07 삼성전자주식회사 Method and apparatus for Video encoding and decoding using adaptive interpolation filter
US8358699B2 (en) * 2007-04-09 2013-01-22 Cavium, Inc. Method and system for selection of reference picture and mode decision
US9118927B2 (en) * 2007-06-13 2015-08-25 Nvidia Corporation Sub-pixel interpolation and its application in motion compensated encoding of a video signal
US8213515B2 (en) * 2008-01-11 2012-07-03 Texas Instruments Incorporated Interpolated skip mode decision in video compression
JP4824712B2 (en) * 2008-02-29 2011-11-30 日本電信電話株式会社 Motion estimation accuracy estimation method, motion estimation accuracy estimation device, motion estimation accuracy estimation program, and computer-readable recording medium recording the program
US20090323807A1 (en) * 2008-06-30 2009-12-31 Nicholas Mastronarde Enabling selective use of fractional and bidirectional video motion estimation
US8345996B2 (en) * 2008-07-07 2013-01-01 Texas Instruments Incorporated Determination of a field referencing pattern
JP4793424B2 (en) * 2008-11-04 2011-10-12 三菱電機株式会社 Image encoding apparatus, image encoding method, image decoding apparatus, image decoding method, and communication apparatus
EP3633996A1 (en) * 2009-10-14 2020-04-08 InterDigital Madison Patent Holdings Methods and apparatus for adaptive coding of motion information
JP5875989B2 (en) 2010-01-19 2016-03-02 トムソン ライセンシングThomson Licensing Method and apparatus for low complexity template matching prediction for video encoding and decoding
WO2011097199A2 (en) * 2010-02-04 2011-08-11 Sony Corporation Recursive adaptive interpolation filters (raif)
US9237355B2 (en) * 2010-02-19 2016-01-12 Qualcomm Incorporated Adaptive motion resolution for video coding
JP2012004615A (en) * 2010-06-14 2012-01-05 Nippon Telegr & Teleph Corp <Ntt> Motion vector search method, motion vector search apparatus and program therefor
TWI521950B (en) 2010-07-21 2016-02-11 財團法人工業技術研究院 Method and apparatus for motion estimation for video processing
US10327008B2 (en) 2010-10-13 2019-06-18 Qualcomm Incorporated Adaptive motion vector resolution signaling for video coding
CN102710934B (en) * 2011-01-22 2015-05-06 华为技术有限公司 Motion predicting or compensating method
US9319716B2 (en) * 2011-01-27 2016-04-19 Qualcomm Incorporated Performing motion vector prediction for video coding
US9143799B2 (en) 2011-05-27 2015-09-22 Cisco Technology, Inc. Method, apparatus and computer program product for image motion prediction
US9131239B2 (en) * 2011-06-20 2015-09-08 Qualcomm Incorporated Unified merge mode and adaptive motion vector prediction mode candidates selection
JP5649524B2 (en) * 2011-06-27 2015-01-07 日本電信電話株式会社 Video encoding method, apparatus, video decoding method, apparatus, and program thereof
CN104641644A (en) * 2012-05-14 2015-05-20 卢卡·罗萨托 Encoding and decoding based on blending of sequences of samples along time
CN103413217A (en) * 2013-08-30 2013-11-27 国家电网公司 Control method and control device for prepayment system
US9942560B2 (en) 2014-01-08 2018-04-10 Microsoft Technology Licensing, Llc Encoding screen capture data
US9774881B2 (en) 2014-01-08 2017-09-26 Microsoft Technology Licensing, Llc Representing motion vectors in an encoded bitstream
US9749642B2 (en) 2014-01-08 2017-08-29 Microsoft Technology Licensing, Llc Selection of motion vector precision
CN106331722B (en) 2015-07-03 2019-04-26 华为技术有限公司 Image prediction method and relevant device
CN106331703B (en) 2015-07-03 2020-09-08 华为技术有限公司 Video encoding and decoding method, video encoding and decoding device
US10715818B2 (en) * 2016-08-04 2020-07-14 Intel Corporation Techniques for hardware video encoding
US10602174B2 (en) 2016-08-04 2020-03-24 Intel Corporation Lossless pixel compression for random video memory access
SG11201913272SA (en) * 2017-06-30 2020-01-30 Huawei Tech Co Ltd Search region for motion vector refinement
US10291925B2 (en) 2017-07-28 2019-05-14 Intel Corporation Techniques for hardware video encoding
US11025913B2 (en) 2019-03-01 2021-06-01 Intel Corporation Encoding video using palette prediction and intra-block copy
TWI810596B (en) * 2019-03-12 2023-08-01 弗勞恩霍夫爾協會 Encoders, decoders, methods, and video bit streams, and computer programs for hybrid video coding
US10855983B2 (en) 2019-06-13 2020-12-01 Intel Corporation Encoding video using two-stage intra search

Citations (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4864393A (en) 1987-06-09 1989-09-05 Sony Corp. Motion vector estimation in television images
US4937666A (en) 1989-12-04 1990-06-26 Bell Communications Research, Inc. Circuit implementation of block matching algorithm with fractional precision
EP0420653A2 (en) 1989-09-29 1991-04-03 Victor Company Of Japan, Ltd. Motion picture data coding/decoding system having motion vector coding unit and decoding unit
JPH04264889A (en) 1991-02-19 1992-09-21 Victor Co Of Japan Ltd Inter-motion-compensating-frame encoder
JPH0795585A (en) 1993-09-17 1995-04-07 Sony Corp Moving vector detector
US5408269A (en) 1992-05-29 1995-04-18 Sony Corporation Moving picture encoding apparatus and method
US5489949A (en) 1992-02-08 1996-02-06 Samsung Electronics Co., Ltd. Method and apparatus for motion estimation
JPH08116532A (en) 1994-10-14 1996-05-07 Graphics Commun Lab:Kk Image decoding system and device therefor
US5610658A (en) 1994-01-31 1997-03-11 Sony Corporation Motion vector detection using hierarchical calculation
GB2305569A (en) 1995-09-21 1997-04-09 Innovision Res Ltd Motion compensated interpolation
US5623313A (en) 1995-09-22 1997-04-22 Tektronix, Inc. Fractional pixel motion estimation of video signals
JPH09153820A (en) 1995-11-29 1997-06-10 Sharp Corp Encoding/decoding device
US5682205A (en) * 1994-08-19 1997-10-28 Eastman Kodak Company Adaptive, global-motion compensated deinterlacing of sequential video fields with post processing
US5694179A (en) 1994-12-23 1997-12-02 Electronics And Telecommunications Research Institute Apparatus for estimating a half-pel motion in a video compression method
JPH1042295A (en) 1996-07-19 1998-02-13 Sony Corp Video signal encoding method and video signal encoder
US5754240A (en) 1995-10-04 1998-05-19 Matsushita Electric Industrial Co., Ltd. Method and apparatus for calculating the pixel values of a block from one or two prediction blocks
US5767907A (en) * 1994-10-11 1998-06-16 Hitachi America, Ltd. Drift reduction methods and apparatus
WO1998041011A1 (en) 1997-03-12 1998-09-17 Matsushita Electric Industrial Co., Ltd. Hdtv downconversion system
US5844616A (en) 1993-06-01 1998-12-01 Thomson Multimedia S.A. Method and apparatus for motion compensated interpolation
DE19730305A1 (en) 1997-07-15 1999-01-21 Bosch Gmbh Robert Method for generating an improved image signal in the motion estimation of image sequences, in particular a prediction signal for moving images with motion-compensating prediction
JPH1146364A (en) 1997-07-28 1999-02-16 Victor Co Of Japan Ltd Motion compensation coding and decoding device, and coding and decoding device
JPH1155673A (en) 1997-07-31 1999-02-26 Victor Co Of Japan Ltd Motion vector decoder, coding method and decoding method
US5987181A (en) 1995-10-12 1999-11-16 Sharp Kabushiki Kaisha Coding and decoding apparatus which transmits and receives tool information for constructing decoding scheme
US6005509A (en) 1997-07-15 1999-12-21 Deutsches Zentrum Fur Luft-Und Raumfahrt E.V. Method of synchronizing navigation measurement data with S.A.R radar data, and device for executing this method
EP1073276A2 (en) 1999-07-27 2001-01-31 Sharp Kabushiki Kaisha Methods for motion estimation with adaptive motion accuracy
US6249318B1 (en) * 1997-09-12 2001-06-19 8×8, Inc. Video coding/decoding arrangement and method therefor
US6269174B1 (en) 1997-10-28 2001-07-31 Ligos Corporation Apparatus and method for fast motion estimation
US6275532B1 (en) * 1995-03-18 2001-08-14 Sharp Kabushiki Kaisha Video coding device and video decoding device with a motion compensated interframe prediction
US6714593B1 (en) * 1997-10-21 2004-03-30 Robert Bosch Gmbh Motion compensating prediction of moving image sequences

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6442295A (en) 1987-08-10 1989-02-14 Seiko Epson Corp Memory card mounting structure

Patent Citations (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4864393A (en) 1987-06-09 1989-09-05 Sony Corp. Motion vector estimation in television images
EP0420653A2 (en) 1989-09-29 1991-04-03 Victor Company Of Japan, Ltd. Motion picture data coding/decoding system having motion vector coding unit and decoding unit
US5105271A (en) 1989-09-29 1992-04-14 Victor Company Of Japan, Ltd. Motion picture data coding/decoding system having motion vector coding unit and decoding unit
US4937666A (en) 1989-12-04 1990-06-26 Bell Communications Research, Inc. Circuit implementation of block matching algorithm with fractional precision
JPH04264889A (en) 1991-02-19 1992-09-21 Victor Co Of Japan Ltd Inter-motion-compensating-frame encoder
US5489949A (en) 1992-02-08 1996-02-06 Samsung Electronics Co., Ltd. Method and apparatus for motion estimation
US5408269A (en) 1992-05-29 1995-04-18 Sony Corporation Moving picture encoding apparatus and method
US5844616A (en) 1993-06-01 1998-12-01 Thomson Multimedia S.A. Method and apparatus for motion compensated interpolation
JPH0795585A (en) 1993-09-17 1995-04-07 Sony Corp Moving vector detector
US5610658A (en) 1994-01-31 1997-03-11 Sony Corporation Motion vector detection using hierarchical calculation
US5682205A (en) * 1994-08-19 1997-10-28 Eastman Kodak Company Adaptive, global-motion compensated deinterlacing of sequential video fields with post processing
US5767907A (en) * 1994-10-11 1998-06-16 Hitachi America, Ltd. Drift reduction methods and apparatus
JPH08116532A (en) 1994-10-14 1996-05-07 Graphics Commun Lab:Kk Image decoding system and device therefor
US5694179A (en) 1994-12-23 1997-12-02 Electronics And Telecommunications Research Institute Apparatus for estimating a half-pel motion in a video compression method
US6275532B1 (en) * 1995-03-18 2001-08-14 Sharp Kabushiki Kaisha Video coding device and video decoding device with a motion compensated interframe prediction
GB2305569A (en) 1995-09-21 1997-04-09 Innovision Res Ltd Motion compensated interpolation
US20010017889A1 (en) 1995-09-21 2001-08-30 Timothy John Borer Motion compensated interpolation
US5623313A (en) 1995-09-22 1997-04-22 Tektronix, Inc. Fractional pixel motion estimation of video signals
US5754240A (en) 1995-10-04 1998-05-19 Matsushita Electric Industrial Co., Ltd. Method and apparatus for calculating the pixel values of a block from one or two prediction blocks
US5987181A (en) 1995-10-12 1999-11-16 Sharp Kabushiki Kaisha Coding and decoding apparatus which transmits and receives tool information for constructing decoding scheme
JPH09153820A (en) 1995-11-29 1997-06-10 Sharp Corp Encoding/decoding device
JPH1042295A (en) 1996-07-19 1998-02-13 Sony Corp Video signal encoding method and video signal encoder
WO1998041011A1 (en) 1997-03-12 1998-09-17 Matsushita Electric Industrial Co., Ltd. Hdtv downconversion system
DE19730305A1 (en) 1997-07-15 1999-01-21 Bosch Gmbh Robert Method for generating an improved image signal in the motion estimation of image sequences, in particular a prediction signal for moving images with motion-compensating prediction
US6005509A (en) 1997-07-15 1999-12-21 Deutsches Zentrum Fur Luft-Und Raumfahrt E.V. Method of synchronizing navigation measurement data with S.A.R radar data, and device for executing this method
US7224733B1 (en) 1997-07-15 2007-05-29 Robert Bosch Gmbh Interpolation filtering method for accurate sub-pixel motion assessment
WO1999004574A1 (en) 1997-07-15 1999-01-28 Robert Bosch Gmbh Interpolation filtering method for accurate sub-pixel motion assessment
JPH1146364A (en) 1997-07-28 1999-02-16 Victor Co Of Japan Ltd Motion compensation coding and decoding device, and coding and decoding device
US6205176B1 (en) 1997-07-28 2001-03-20 Victor Company Of Japan, Ltd. Motion-compensated coder with motion vector accuracy controlled, a decoder, a method of motion-compensated coding, and a method of decoding
JPH1155673A (en) 1997-07-31 1999-02-26 Victor Co Of Japan Ltd Motion vector decoder, coding method and decoding method
US6249318B1 (en) * 1997-09-12 2001-06-19 8×8, Inc. Video coding/decoding arrangement and method therefor
US6714593B1 (en) * 1997-10-21 2004-03-30 Robert Bosch Gmbh Motion compensating prediction of moving image sequences
US6269174B1 (en) 1997-10-28 2001-07-31 Ligos Corporation Apparatus and method for fast motion estimation
JP2001189934A (en) 1999-07-27 2001-07-10 Sharp Corp Motion estimating method with motion precision having adaptability
US6968008B1 (en) 1999-07-27 2005-11-22 Sharp Laboratories Of America, Inc. Methods for motion estimation with adaptive motion accuracy
EP1073276A2 (en) 1999-07-27 2001-01-31 Sharp Kabushiki Kaisha Methods for motion estimation with adaptive motion accuracy
JP2011035928A (en) 1999-07-27 2011-02-17 Sharp Corp Motion compensation moving image coder and motion compensation moving image decoder
JP2012075175A (en) 1999-07-27 2012-04-12 Sharp Corp Motion compensation moving image coder and motion compensation moving image decoder
USRE44012E1 (en) * 1999-07-27 2013-02-19 Sharp Kabushiki Kaisha Methods for motion estimation with adaptive motion accuracy
USRE45014E1 (en) * 1999-07-27 2014-07-15 Sharp Kabushiki Kaisha Methods for motion estimation with adaptive motion accuracy

Non-Patent Citations (27)

* Cited by examiner, † Cited by third party
Title
ARUN NETRAVALI, ET AL.: "A CODEC FOR HDTV.", IEEE TRANSACTIONS ON CONSUMER ELECTRONICS, IEEE SERVICE CENTER, NEW YORK, NY, US, vol. 38., no. 03., 1 August 1992 (1992-08-01), NEW YORK, NY, US, pages 325 - 340., XP000311862, ISSN: 0098-3063, DOI: 10.1109/30.156704
Bernd Girod, Motion-Compensating Prediction with Fractional-Pel Accuracy, IEEE Transactions on Communications, vol. 41, No. 4, pp. 604-612, (Apr. 1993).
CHAN E., PANCHANATHAN S.: "Review of block matching based motion estimation algorithms for video compression", ELECTRICAL AND COMPUTER ENGINEERING, 1993. CANADIAN CONFERENCE ON VANCOUVER, BC, CANADA 14-17 SEPT. 1993, NEW YORK, NY, USA,IEEE, 14 September 1993 (1993-09-14) - 17 September 1993 (1993-09-17), pages 151 - 154, XP010117942, ISBN: 978-0-7803-1443-6, DOI: 10.1109/CCECE.1993.332213
Chan et al., "Review of Block Matching Based Motion Estimation Algorithms for Video Compression," Electrical and Computer Engineering, Canadian Conference on Vancouver, BC, Canada 14-17, Sep. 14, 1993, New York, NY, USA, IEEE, pp. 151-154, XP010117942.
Ebrahimi et al., "A video codec based on perceptually derived and localized wavelet transform for mobile applications," Signal Processing Theories and Applications, Brussels, Aug. 24-27, 1992, vol. 3, pp. 1361-1364, XP000356495.
EBRAHIMI T., KUNT M.: "A VIDEO CODEC BASED ON PERCEPTUALLY DERIVED AND LOCALIZED WAVELET TRANSFORM FOR MOBILE APPLICATIONS.", SIGNAL PROCESSING THEORIES AND APPLICATIONS. BRUSSELS, AUG. 24 - 27, 1992., AMSTERDAM, ELSEVIER., NL, vol. 03., 24 August 1992 (1992-08-24) - 27 August 1992 (1992-08-27), NL, pages 1361 - 1364., XP000356495
Enhancement for the Telenor proposal for H.26L, ITU-Telecommunications Standardization Section, Q. 15/SG16, doc. Q15-G-25, Monterey, (Feb. 1999).
Extended European Search Report, dated May 24, 2011, for European Application No. 10013511.0.
Fujiwara, "Point illustrated Newest MPEG Textbook," ASCII Corporation, 1994, p. 114.
Jordi Ribas-Corbera and David L. Neuhoff, On the Optimal Motion Vector Accuracy for Block-Based Motion-Compensated Video Coders, Proc. IST/SPIE Digital Video Compression: Algorithms and Technologies, pp. 302-314, San Jose, (Feb. 1996).
Joshi et al., "Lossy Encoding of Motion Vectors Using Entropy-Constrained Vector Quantization," IEEE Comp. Soc. Press, US, Proceedings of the International Conference on Image Processing (ICIP). Washington, Oct. 23-26, 1995, vol. 3, Los Alamitos, pp. 109-112, XP010197142.
JOSHI R.L., FISCHER T.R., BAMBERGER R.H.: "Lossy encoding of motion vectors using entropy-constrained vector quantization", PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING. (ICIP). WASHINGTON, OCT. 23 - 26, 1995., LOS ALAMITOS, IEEE COMP. SOC. PRESS, US, vol. 3, 23 October 1995 (1995-10-23) - 26 October 1995 (1995-10-26), US, pages 109 - 112, XP010197142, ISBN: 978-0-7803-3122-8, DOI: 10.1109/ICIP.1995.537592
LEE J.: "RATE-DISTORTION OPTIMIZED MOTION SMOOTHING FOR MPEG-2 ENCODING.", PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING. ICIP 1997. SANTA BARBARA, CA, OCT. 26 - 29, 1997., LOS ALAMITOS, CA : IEEE., US, vol. 02., 1 January 1997 (1997-01-01), US, pages 45 - 48., XP000914163, ISBN: 978-0-8186-8184-4
Lee, "Rate-Distortion Optimized Motion Smoothing for MPEG-2 Encoding," Proceedings of the International Conference on Image Processing (ICIP), Los Alamitos, CA, vol. 2, Jan. 1, 1997, pp. 45-48, XP000914163.
Netravali et al., "A Codec for HDTV," IEEE Transactions on Consumer Electronics, IEEE Service Center, New York, NY, US, vol. 38, No. 3, Aug. 1, 1992, pp. 325-340, XP000311862.
OHM J.-R.: "Motion-compensated 3-D subband coding with multiresolution representation of motion parameters", PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP) AUSTIN, NOV. 13 - 16, 1994., LOS ALAMITOS, IEEE COMP. SOC. PRESS, US, vol. 3, 13 November 1994 (1994-11-13) - 16 November 1994 (1994-11-16), US, pages 250 - 254, XP010146425, ISBN: 978-0-8186-6952-1, DOI: 10.1109/ICIP.1994.413849
Ohm, "Motion-Compensated 3-D Subband Coding with Multiresolution Representation of Motion Parameters," IEEE, Proceedings of the International Conference on Image Processing (ICIP), Nov. 13-16, 1994, vol. 3, Conf. 1, pp. 250-254, XP010146425.
Pang et al., "Optimum Loop Filter in Hybrid Coders", IEEE Transactions on Circuits and Systems for Video Technology, vol. 4, No. 2, pp. 158-167, Apr. 1994, XP000489688.
PANG K. K., TAN T. K.: "OPTIMUM LOOP FILTER IN HYBRID CODERS.", IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY., IEEE SERVICE CENTER, PISCATAWAY, NJ., US, vol. 04., no. 02., 1 April 1994 (1994-04-01), US, pages 158 - 167., XP000489688, ISSN: 1051-8215, DOI: 10.1109/76.285622
Response to Call for Proposals for H.26L, ITU-Telecommunications Standardization Section, Q.15/SG16, doc. Q15-F-11, Seoul, (Nov. 1998).
Shen et al., "Adaptive motion accuracy (AMA) in Telenor's proposal", ITU—Telecommunications Standardization Section, Study Group 16, Video Coding Experts Group (Question 15), Q15-H-20, Eight Meeting, Berlin, Aug. 3-6, 1999, XP030002958.
Smita Gupta and Allen Gersho, On Fractional Pixel Motion Estimation, Proc. SPIE VCIP, vol. 2094, pp. 408-419, Cambridge, (Nov. 1993).
Ulrich Benzler, Performance Evaluation of a Reduced Complexity Implementation for Quarter Pel Motion Compensation, ISO/IEC JTC1/SC29/WG11 Coding of Moving Pictures and Audio, MPEG 97/3146, San Jose, (Jan. 1998).
Ulrich Benzler, Proposal for a new core experiment on prediction enhancement at higher bitrates, ISO/IEC JTC1/SC29/WG11 Coding of Moving Pictures and Audio, MPEG 97/1827, Sevilla, (Feb. 1997).
WEDI T: "Results of core experiment on Adaptive Motion Accuracy (AMA) with 1/2, 1/4 and 1/8-pel accuracy", ITU STUDY GROUP 16 - VIDEO CODING EXPERTS GROUP, XX, XX, 16 May 2000 (2000-05-16) - 18 May 2000 (2000-05-18), XX, pages 1 - 9, XP002301984
Wedi, "Results of core experiment on Adaptive Motion Accuracy (AMA) with ½, ¼ and ⅛-pel accuracy," ITU Study Group 16—Video Coding Experts Group, May 16, 2000, pp. 1-9, XP002301984.
Xiaoming Li and Cesar Gonzales, A Locally Quadratic Model of the Motion Estimation Error Criterion Function and Its Application to Subpixel Interpolations, IEEE Transactions on Circuits and Systems for Video Technology, vol. 6, No. 1, (Feb. 1996).

Also Published As

Publication number Publication date
EP2373036B1 (en) 2013-01-09
USRE45014E1 (en) 2014-07-15
JP5269023B2 (en) 2013-08-21
JP4614512B2 (en) 2011-01-19
JP2012075175A (en) 2012-04-12
JP2011035928A (en) 2011-02-17
EP2026582A2 (en) 2009-02-18
EP1073276A3 (en) 2007-03-14
EP2026582A3 (en) 2009-10-21
USRE44012E1 (en) 2013-02-19
US6968008B1 (en) 2005-11-22
JP2009273157A (en) 2009-11-19
EP2373036A1 (en) 2011-10-05
JP2001189934A (en) 2001-07-10
EP2051531A1 (en) 2009-04-22
HK1161948A1 (en) 2012-08-10
EP1073276A2 (en) 2001-01-31

Similar Documents

Publication Publication Date Title
USRE46468E1 (en) Methods for motion estimation with adaptive motion accuracy
KR101403343B1 (en) Method and apparatus for inter prediction encoding/decoding using sub-pixel motion estimation
US9078007B2 (en) Digital video coding with interpolation filters and offsets
US8155195B2 (en) Switching distortion metrics during motion estimation
US20060233258A1 (en) Scalable motion estimation
US20030156646A1 (en) Multi-resolution motion estimation and compensation
US20070268964A1 (en) Unit co-location-based motion estimation
US20040156437A1 (en) Method for encoding and decoding video information, a motion compensated video encoder and a corresponding decoder
US20040076333A1 (en) Adaptive interpolation filter system for motion compensated predictive video coding
US20050135484A1 (en) Method of encoding mode determination, method of motion estimation and encoding apparatus
US6785333B2 (en) Motion vector coding method
US20050190977A1 (en) Method and apparatus for video encoding
US20060120455A1 (en) Apparatus for motion estimation of video data
CA2449048A1 (en) Methods and apparatus for sub-pixel motion estimation
Ribas-Corbera et al. Optimizing motion-vector accuracy in block-based video coding
US7433407B2 (en) Method for hierarchical motion estimation
Flierl et al. Video Coding with Superimposed Motion-Compensated Signals: Applications to H. 264 and Beyond
US20030067988A1 (en) Fast half-pixel motion estimation using steepest descent
KR20040070490A (en) Method and apparatus for encoding/decoding video signal in interlaced video
Ribas-Corbera et al. Optimal block size for block-based motion-compensated video coders
Suzuki et al. Block-based reduced resolution inter frame coding with template matching prediction
Pientka et al. Deep video coding with gradient-descent optimized motion compensation and Lanczos filtering
Shen et al. Benefits of adaptive motion accuracy in H. 26L video coding
KR100617177B1 (en) Motion estimation method
Wang et al. An efficient dual-interpolator architecture for sub-pixel motion estimation