US20100027667A1 - Motion estimation for uncovered frame regions - Google Patents

Motion estimation for uncovered frame regions

Info

Publication number
US20100027667A1
US20100027667A1 (application US12/524,281)
Authority
US
United States
Prior art keywords
group
motion
frame
uncovered
representation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/524,281
Inventor
Jonatan Samuelsson
Kenneth Andersson
Clinton Priddle
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Telefonaktiebolaget LM Ericsson AB
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to US12/524,281
Publication of US20100027667A1
Assigned to TELEFONAKTIEBOLAGET LM ERICSSON (PUBL). Assignors: SAMUELSSON, JONATAN; PRIDDLE, CLINTON; ANDERSSON, KENNETH
Status: Abandoned

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/527Global motion vector estimation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/215Motion-based segmentation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/38Circuits or arrangements for blanking or otherwise eliminating unwanted parts of pictures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/387Composing, repositioning or otherwise geometrically modifying originals
    • H04N1/3872Repositioning or masking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/132Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/172Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/513Processing of motion vectors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/553Motion estimation dealing with occlusions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/587Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal sub-sampling or interpolation, e.g. decimation or subsequent interpolation of pictures in a video sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows

Description

  • the present invention generally relates to image processing, and in particular to motion estimation for uncovered regions in images.
  • the motivation for frame rate up-conversion is that a video sequence with a higher frame rate is generally considered to give a higher quality experience than a video sequence with a lower frame rate.
  • the frame rate of a video sequence can be increased by inserting predicted frames in between existing frames.
  • a good approach is to predict the in-between frame using bi-directional block-based motion estimation, searching for linear motions between the previous frame and the next frame in the input video sequence. It is possible to use non-linear approaches that can represent acceleration, but the linear approach is used because of its simplicity and low complexity.
  • the in-between frame is divided into blocks and to each of these a motion vector must be assigned in some way.
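  • As an illustration only (not part of the patent text), the following Python/NumPy sketch shows one way such a bi-directional block-matching search could look; the frame names, block size and search range are assumptions made for the example.

```python
import numpy as np

def bidirectional_block_motion(prev, nxt, block=16, search=8):
    """Sketch: for each block position of the in-between frame, find the linear
    motion v minimizing the SAD between prev sampled at -v/2 and nxt sampled
    at +v/2. Assumes grayscale frames of equal size and integer-pel accuracy."""
    h, w = prev.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            best_sad, best_v = np.inf, (0, 0)
            for vy in range(-search, search + 1, 2):      # even steps keep v/2 integer
                for vx in range(-search, search + 1, 2):
                    y0, x0 = by - vy // 2, bx - vx // 2   # block position in previous frame
                    y1, x1 = by + vy // 2, bx + vx // 2   # block position in next frame
                    if (min(y0, x0, y1, x1) < 0 or y0 + block > h or x0 + block > w
                            or y1 + block > h or x1 + block > w):
                        continue
                    sad = np.abs(prev[y0:y0 + block, x0:x0 + block].astype(int)
                                 - nxt[y1:y1 + block, x1:x1 + block].astype(int)).sum()
                    if sad < best_sad:
                        best_sad, best_v = sad, (vx, vy)
            vectors[by // block, bx // block] = best_v
    return vectors
```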
  • a problem or shortcoming with many frame rate up-conversion algorithms is the handling of panning, rotating or zooming images.
  • in FIG. 1, the camera pans to the left when going from image 20 to image 10, and thus a new area becomes revealed along the left border of the image 10. There are parts of the audience and almost a whole commercial sign in the bottom image 10 that are not part of the top image 20.
  • These “new” areas generally do not have any accurate references to the previous image 20.
  • pixel blocks in the new areas are typically encoded according to intra mode or, if inter encoded, have motion vectors pointing to areas in the previous image 20 that look similar but will not represent the actual motion (the camera pan). The lack of accurate motion vectors for these pixel blocks makes frame rate up-conversion harder, possibly leading to visual artefacts in interpolated images.
  • Document [1] discusses the identification of a block B_i as an uncovered region, when it can be seen in a frame F_t to be determined and in a following frame F_t+1 but not in a previous frame F_t−1.
  • Such a block is encoded as an intra block and has not been motion compensated by other blocks.
  • Document [1] handles uncovered blocks but assumes an intra coding for the uncovered pixels. This means that the uncovered blocks do not have any motion vectors that can be used during frame rate up-conversion.
  • the present invention overcomes these and other drawbacks of the prior art arrangements. It is a general object of the present invention to provide an identification of image elements in an uncovered region of a video frame, and another object to provide a motion estimation of identified uncovered groups of image elements. These and other objects are met by the invention as defined by the accompanying patent claims.
  • the present invention involves identification and motion estimation for groups of image elements in an uncovered region of a frame in a video sequence.
  • This uncovered region comprises image elements or pixels that are not present in a previous frame of the video sequence, such as due to camera panning, zooming or rotation.
  • a representation of a global motion of image element property values from at least a portion of a reference frame, typically a previous frame, in the video sequence to at least a portion of a current frame is determined.
  • the determined global motion representation is used for identifying uncovered groups in the current frame, i.e. those groups comprising at least one image element present in the uncovered region of the frame.
  • an uncovered group is identified as a group in the current frame that does not have any associated group in the reference frame when applying the global motion from the group in the frame towards the reference frame.
  • the global motion instead points outside of the border of the reference image.
  • the motion estimation of the present invention then assigns the determined global motion as motion representation for the identified uncovered groups. This means that also these groups that traditionally are not assigned any “true” motion vectors will have motion representations that can be used, for instance, during frame rate up-conversion.
  • a border uncovered group present on the border between the uncovered region of the frame and the remaining frame regions is investigated for the purpose of re-assigning a local motion instead of the global motion.
  • the motion representations of neighboring groups present in the remaining frame portion are compared to the global motion and preferably each other. If certain criteria are fulfilled, i.e. at least a minimum number of the neighboring motion representations differ significantly from the global motion and this at least minimum number of neighboring motion representations do not significantly differ from each other, the uncovered group is re-assigned a local motion representation determined based on the neighboring motion representation(s).
  • the present invention therefore allows assigning motion representations to also uncovered groups in a frame. These motion representations are useful during frame rate up-conversion for the purpose of identifying reference frames that are used when determining property values of the image elements of a group in a frame to be constructed.
  • FIG. 1 is a drawing illustrating two image frames in a video sequence;
  • FIG. 2 is a flow diagram of a motion estimation method according to an embodiment of the present invention;
  • FIG. 3 is a diagram illustrating adjacent frames in a video sequence;
  • FIG. 4 is a flow diagram illustrating additional steps to the estimating method of FIG. 2;
  • FIG. 5 is a flow diagram illustrating additional steps to the estimating method of FIG. 2;
  • FIG. 6 is a diagram schematically illustrating assignment of local motion representations according to an embodiment of the present invention;
  • FIG. 7 is a flow diagram illustrating a method of estimating property values according to an embodiment of the present invention;
  • FIG. 8 is a flow diagram illustrating an embodiment of the estimating step in the estimating method of FIG. 7;
  • FIG. 9 is a diagram illustrating adjacent frames in a video sequence;
  • FIG. 10 is a picture illustrating the gains of employing the present invention in frame rate up-conversion and the problems of prior art techniques;
  • FIG. 11 is a schematic block diagram of a motion estimating device according to the present invention; and
  • FIG. 12 is a schematic block diagram of a group estimating device according to the present invention.
  • the present invention generally relates to image processing and in particular to methods and devices for handling groups of image elements in uncovered regions of images and frames in a video sequence.
  • a video or frame sequence comprises multiple, i.e. at least two, frames or images.
  • a frame can in turn be regarded as composed of a series of one or more slices, where such a slice consists of one or more macroblocks of image elements or pixels.
  • the expression “image element” is used to denote a smallest element of a frame or image in a sequence.
  • Such an image element has associated image element properties, such as color (in the red, green, blue, RGB, space) or luminance (Y) and chrominance (Cr, Cb or sometimes denoted U, V).
  • a typical example of an image element is a pixel of a frame or picture.
  • the present invention is particularly adapted to a video sequence comprising multiple consecutive frames at a given frame rate.
  • the image elements are organized into groups of image elements.
  • the expression “group of image elements” denotes any of the prior art known partitions of frames and slices into collections of image elements that are handled together during decoding and encoding.
  • a group is a rectangular (M ⁇ N) or square (M ⁇ M) group of image elements.
  • An example of such a grouping is a macroblock in the video compression standard.
  • Such a macroblock generally has a size of 16 ⁇ 16 image elements.
  • a macroblock can consist of multiple so-called sub-macroblock partitions, such as 16×8, 8×16, 8×8, 8×4, 4×8 and 4×4 image elements.
  • the 8 ⁇ 8 sub-macroblock partition is often denoted as a sub-macroblock or sub-block, whereas a 4 ⁇ 4 partition is often denoted block.
  • So-called uncovered regions in an image or a frame correspond to image elements that have no correspondences in a previous frame of the video sequence. Uncovered regions typically occur during panning, zooming and/or rotation in video recording causing new objects to be captured in the video sequence.
  • in order to reduce the bit size of a video sequence, the frames are encoded according to well-known techniques, such as intra or inter coding [2].
  • Inter coding generally leads to a more efficiently coded block in terms of the number of bits spent on the inter-encoded block as compared to intra coding.
  • inter coding presumes that there is a correspondence or at least a (closely) matching reference in another frame of the video sequence that can be used as starting reference for a current block. If no such match can be found, the block is generally intra coded, thereby requiring a comparatively larger amount of bits.
  • Image elements in uncovered regions do not have any correspondences in previous frames. As a consequence, these may be intra coded, or a best-effort inter coding can be conducted even though no correct matches are available. This generally works well and gives a visually acceptable result during subsequent decoding and rendering. However, if the video sequence is subsequently subjected to frame rate up-conversion, serious problems in terms of unacceptable visual appearance of the constructed intermediate frames can occur in the case of inter coding utilizing “untrue” motion representations for uncovered image elements.
  • FIG. 10 illustrates a determined frame 30 that is constructed to be intermediate of two frames in a video sequence.
  • the right portion of the frame 30 illustrates a linesman that is further illustrated in the magnified portion. As is seen in the left figure, portions of this linesman have been assigned colors that are clearly incorrect to the viewer. Such problems may occur when utilizing inter coding according to prior art techniques together with frame rate up-conversion.
  • the present invention reduces the risk of such visual errors by providing a method for performing motion estimation for a group of at least one image element in a frame of a video sequence.
  • the operation steps of the method are illustrated in the flow diagram of FIG. 2 .
  • the method starts in step S1, which determines a representation of a global motion of image element property values from at least a reference portion of a reference frame to at least a portion of a current frame in the frame/video sequence.
  • This global motion representation is indicative of the global or overall movement of pixels when going from the reference frame to the current frame in the sequence.
  • a next step S2 uses the determined global motion representation for identifying at least one so-called uncovered group of at least one image element each. These uncovered groups are present in an uncovered region of the frame. Thus, the at least one group comprises image elements that are not present in a previous frame of the video sequence.
  • a preferred implementation of this step S2 identifies an uncovered group as a group in the current frame that does not have an associated group in a previous frame when applying the global motion from the group in the current frame to the previous frame. In other words, if the global motion is to be utilized as a motion or displacement vector for the uncovered group in the previous frame, the global motion will indeed point towards an (imaginary) group present outside of the boundaries of the previous frame. This situation is illustrated in FIG. 3.
  • the leftmost frame 20 corresponds to a previous frame of a previous time instance in the video sequence 1 relative a current frame 10 .
  • the picture has been panned when going from the previous 20 to the current 10 frame, thereby uncovering a region 13 of image elements 11 not present in the previous frame 20 .
  • in this case, the global motion 50 is a horizontal motion from the right to the left.
  • In order to determine whether a current group 12 of image elements 11 is an uncovered group belonging to the uncovered region 13 of the frame 10, it is checked whether the determined global motion representation 50, as applied to the current group 12, points outside the border of the previous frame 20.
  • In such a case, the current group 12 belongs to the uncovered region 13; otherwise it corresponds to a remaining portion 15 of the current frame 10.
  • Alternatively, the group 22 of image elements 21 occupying the same position in the previous frame 20 as the current group 12 occupies in the current frame 10 is identified.
  • the group 22 is then moved according to the determined global motion representation 50 to reach a final group position 24. If this position 24 falls outside of the frame boundary 20, the current group 12 belongs to the uncovered region 13; otherwise it corresponds to the remaining portion 15 of the current frame 10.
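  • A minimal sketch of this positional test, assuming the affine global-motion form v(x, y) = A(x, y)^T + b that is determined below; frame dimensions and block size are illustrative parameters, not values from the patent.

```python
import numpy as np

def is_uncovered(x, y, A, b, ref_w, ref_h, block=16):
    """Apply the global motion at the group position (x, y) of the current frame
    and check whether the displaced position falls outside the reference frame."""
    v = A @ np.array([x, y], dtype=float) + b   # global motion evaluated at this group
    ref_x, ref_y = x + v[0], y + v[1]           # group position pointed to in the reference frame
    inside = (0 <= ref_x <= ref_w - block) and (0 <= ref_y <= ref_h - block)
    return not inside                           # outside the boundary: uncovered group
```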
  • In step S3, the determined global motion representation is assigned as motion or displacement estimation for the uncovered group.
  • the uncovered group thereby becomes assigned a displacement representation that can subsequently be used for different purposes, such as when constructing a new frame during frame rate up-conversion, which is further described herein.
  • all or at least a portion of the image element groups in the frame can be tested by utilizing the global motion representation in order to determine whether the groups belong to an uncovered frame region or a remaining region.
  • all groups identified as uncovered groups in step S2 are preferably assigned the global motion representation as their motion estimation. This means that steps S2 and S3 are preferably performed multiple times, either in series or in parallel, for different groups in the frame.
  • the global motion representation of the present invention can take any vector value.
  • FIG. 4 is a flow diagram illustrating a preferred embodiment of determining the global motion representation of the present invention.
  • the method starts in step S10, where a vector set is provided.
  • This vector set comprises, for each image element group in at least a portion of the current frame, a respective associated displacement or motion vector referring to a reference group of at least one image element in the reference frame.
  • Thus, each group in at least a portion of the current frame, preferably each group in the frame, has an assigned displacement vector that points to or is associated with a reference group in the reference frame.
  • the displacement vectors can be provided from a coded motion vector field of a video codec, such as H.264.
  • Such motion vectors are traditionally used in inter coding of frames and can be re-used but for another purpose according to the invention. If no such motion vectors are available from the video codec, they can be determined from a motion estimation search. In such a case, a dedicated motion estimation search is conducted, preferably according to prior art algorithms but for the purpose of generating a motion vector set that can be used for determining the global motion representation of the invention.
  • each image element group in the remaining portion of the current frame can have an associated motion vector generated by the video codec or from the motion estimation.
  • some of the groups, such as those belonging to the uncovered region, might not have an assigned motion vector as these could be coded as intra blocks by the video codec. In such a case, such groups can be omitted from the processing of the motion vectors of the invention. This means that only a portion (though a major portion) of the groups in the current frame and their assigned motion/displacement vectors are utilized in the following step S11 for calculating the global motion representation.
  • the next step S11 uses the displacement vectors from the provided (fetched or calculated) vector set from step S10 to determine a global motion vector.
  • step S11 utilizes the following representation of the global motion: v(x, y) = A(x, y)^T + b, where A is a 2×2 matrix and b is a 2×1 vector that are to be estimated based on the displacement vectors provided in step S10.
  • a least-squares method is preferably used on the provided displacement vectors.
  • the matrix and vector that give a best result, in terms of minimizing a squared difference between the displacement vectors and the global motion representation, are estimated in step S11.
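  • As a concrete illustration (a sketch under the assumption that each group contributes its center position and its displacement vector), the six parameters of A and b can be estimated with an ordinary linear least-squares fit:

```python
import numpy as np

def fit_global_motion(positions, displacements):
    """Least-squares fit of v(x, y) = A (x, y)^T + b from per-group displacement
    vectors; positions and displacements are (N, 2) arrays."""
    x, y = positions[:, 0].astype(float), positions[:, 1].astype(float)
    ones, zeros = np.ones_like(x), np.zeros_like(x)
    # Unknown parameter vector: [a11, a12, a21, a22, b1, b2]
    rows_x = np.stack([x, y, zeros, zeros, ones, zeros], axis=1)  # equations for v_x
    rows_y = np.stack([zeros, zeros, x, y, zeros, ones], axis=1)  # equations for v_y
    M = np.vstack([rows_x, rows_y])
    rhs = np.concatenate([displacements[:, 0], displacements[:, 1]])
    p, *_ = np.linalg.lstsq(M, rhs, rcond=None)   # minimizes the squared difference
    return p[:4].reshape(2, 2), p[4:]             # matrix A, vector b
```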
  • The determined representation of the global motion is then applied in step S2 of FIG. 2, using the group coordinates x and y of the current group to calculate the global motion at that point and determine whether the group is an uncovered group or not.
  • Using displacement vectors from the video codec or from a dedicated motion estimation search is a particular embodiment of obtaining a displacement vector set that is used for determining a global motion representation of the present invention.
  • Other embodiments can instead be used and are contemplated by the invention.
  • a motion estimation that is based on phase correlation can be used to obtain a representation of the global motion.
  • Another example of motion estimation for the global motion is pel-recursive, i.e. pixel-based, motion estimation.
  • FIG. 5 is a flow diagram illustrating additional steps of a preferred embodiment of the motion estimating method of FIG. 2 .
  • the method continues from step S3 of FIG. 2.
  • a next step S20 is performed for a group identified as belonging to the uncovered region of the frame.
  • the uncovered group has a set of neighboring groups in the frame identified as not belonging to the uncovered region.
  • the uncovered group is present on the border between the uncovered frame region and the remaining region, where the neighboring groups of the set are found in the remaining frame region.
  • the groups present in the remaining portion of the frame are preferably each associated with a displacement or motion representation. These motion representations can be available from a coded motion vector field of the video codec or from a dedicated motion estimation search.
  • Step S20 compares the motion representations associated with the neighboring groups with the determined global motion representation.
  • a next step S21 determines whether there is a local motion diverging from the global motion in the vicinity of the uncovered group. Thus, step S21 determines whether at least a minimum number of the motion representations differ, with at least a minimum difference, from the global motion representation as applied at these group positions.
  • FIG. 6 is an illustration of a portion of a frame showing the border between an uncovered region 13 and a remaining frame region 15. A current uncovered group 12 is under investigation.
  • This group has three neighboring groups 16, 18 not present in the uncovered region 13 (and five neighboring groups 14 present in the uncovered region 13). Each of these non-uncovered neighboring groups 16, 18 has an associated motion representation 50, 52.
  • One of the neighboring groups 18 has a motion representation 50 that is identical or near-identical to the global motion representation for the frame when applied at the position of the group 18.
  • The remaining two neighboring groups 16 have motion representations 52 that significantly differ from the global motion representation 50. This means that two-thirds of the neighboring groups present a local motion divergence.
  • the minimum number of neighboring groups that must have motion representations differing from the global motion in order to have a local motion divergence in step S21 is preferably more than half of the neighboring groups.
  • an uncovered group 12 typically has three or two neighboring groups 16, 18 in the remaining portion. In the former case, at least two of them must have motion representations differing from the global motion in order to have a local motion divergence; in the latter case, all of the neighboring groups should present this divergence.
  • a difference between the motion representations of the neighboring groups and the global motion can be determined according to different embodiments.
  • In a first case, only the relative directions of the vectors are investigated.
  • an angle θ between the global motion v and a motion representation d can be determined as: θ = arccos( (v · d) / (‖v‖ ‖d‖) ).
  • This angle θ can then be compared to a reference angle θref, and if the difference between the two angles exceeds a minimum threshold, or if the quotient of the angles exceeds (or is below) a threshold, the motion representation d is regarded as differing from the global motion representation v with at least a minimum difference.
  • Another implementation would be to calculate a difference vector q between the motion representation and the global motion: q = d − v.
  • If the length of this difference vector q exceeds a minimum threshold, the motion representation d is regarded as differing from the global motion representation v with at least a minimum difference. This can also be implemented by comparing the X and Y components separately. If both the X component and the Y component of d differ by less than the minimum threshold from the corresponding components of v, the motion representation d is regarded as describing the same motion as the global motion representation v; otherwise the motion representation differs significantly from the global motion representation.
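  • Both difference tests could be implemented along the following lines (a sketch; the threshold values are illustrative assumptions only):

```python
import numpy as np

def differs_by_angle(d, v, max_angle_deg=20.0):
    """Angle test: theta = arccos(v.d / (|v||d|)); d differs from v if theta
    exceeds the threshold. Degenerate zero-length vectors are compared directly."""
    d, v = np.asarray(d, dtype=float), np.asarray(v, dtype=float)
    denom = np.linalg.norm(d) * np.linalg.norm(v)
    if denom == 0.0:
        return np.linalg.norm(d - v) > 0.0
    theta = np.degrees(np.arccos(np.clip(np.dot(d, v) / denom, -1.0, 1.0)))
    return theta > max_angle_deg

def differs_by_components(d, v, min_diff=2.0):
    """Component test on q = d - v: d differs from v if either the X or the Y
    component of q deviates by at least min_diff."""
    q = np.abs(np.asarray(d, dtype=float) - np.asarray(v, dtype=float))
    return bool(q[0] >= min_diff or q[1] >= min_diff)
```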
  • Preferably, it is not sufficient that a minimum number of the neighboring groups have motion representations differing significantly from the global motion in order to have a local motion divergence in step S21.
  • a further preferred condition is that motion representations that significantly differ from the global motion should not significantly differ from each other. This is also illustrated in FIG. 6 .
  • the two neighboring groups 16 with motion representations 52 differing from the global motion 50 have indeed parallel and equal respective motion representations 52. The difference therebetween is therefore zero degrees or the zero difference vector, depending on the particular difference-investigating embodiment used.
  • the angles between pairwise tested motion representations should not exceed a maximum angle, or the difference vector between pairwise tested motion representations should not have a vector length exceeding a maximum length. If at least a minimum number of the representations fulfill these conditions, there is a local motion divergence in step S21; otherwise not. The minimum number could be more than half of the motion representations that differ significantly from the global motion, i.e. the two motion representations 52 in the example of FIG. 6 should not differ significantly from each other. Alternatively, none of the motion representations may significantly differ from any other tested motion representation.
  • If there is a local motion divergence in step S21 that fulfills the one or preferably the two criteria listed above, i) a first minimum number of motion vectors differ significantly from the global motion and ii) of these motion vectors at least a second minimum number must not differ significantly from each other, the method continues to step S22; otherwise it ends.
  • Step S22 assigns a motion representation provided from the motion vector associated with a neighboring group as motion estimation for the tested uncovered group.
  • the motion representation could be any of the representations of the relevant neighboring groups fulfilling the single criterion i) or, in the case of requiring the two criteria i) and ii), fulfilling both criteria. More elaborate embodiments can also be utilized, such as determining an average motion representation based on the neighboring motion representations fulfilling criterion i) or criteria i) and ii). This average local motion representation is then assigned to the uncovered group in step S22.
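  • Combining the two criteria, the re-assignment decision for one border uncovered group could be sketched as follows; the differs predicate can be, for instance, differs_by_components from the sketch above, and the majority rules follow the preferred embodiment described in the text.

```python
import numpy as np

def local_motion_or_global(neighbor_motions, global_motion, differs):
    """Return an average local motion representation for a border uncovered
    group, or None to keep the assigned global motion.
    Criterion i): more than half of the non-uncovered neighbors differ from
    the global motion. Criterion ii): the diverging motions do not differ
    significantly from each other."""
    diverging = [np.asarray(m, dtype=float) for m in neighbor_motions
                 if differs(m, global_motion)]
    if 2 * len(diverging) <= len(neighbor_motions):
        return None                                   # criterion i) not fulfilled
    for i, m1 in enumerate(diverging):                # criterion ii): pairwise agreement
        for m2 in diverging[i + 1:]:
            if differs(m1, m2):
                return None
    return np.mean(diverging, axis=0)                 # average local motion representation
```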
  • a next optional step S23 investigates whether there are any more uncovered groups on the same row or column as the current uncovered group.
  • a same row is investigated if the border between the uncovered and remaining frame portions is vertical and a column is investigated for a horizontal border.
  • FIG. 6 illustrates one additional group 14 present on the same row as the uncovered group 12 in the case of a vertical border.
  • If so, the local motion representation newly assigned to the uncovered group in step S22 is also assigned to this (these) adjacent group(s) in step S24.
  • These steps S23 and S24 are preferably performed for each uncovered group being assigned a local motion representation instead of the global motion representation in step S22.
  • Preferably all identified uncovered groups of the same row or column are assigned the same local motion representation as the uncovered group present next to the border between the uncovered and remaining frame portions.
  • a trend in the local motion along the same row or column but over the uncovered-remaining border is utilized for determining local motion representations to uncovered groups.
  • a linear extrapolation of the motion representations is calculated for the uncovered groups to thereby more accurately reflect local changes in the motion along a row or column.
  • Information on the motion representations of a set comprising the N groups present on the same row or column and being closest to the border of the uncovered frame portion but still present in the remaining portion can be used in this extrapolation, where N is an integer larger than one, such as two or three.
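  • A sketch of this extrapolation for N = 2, i.e. using the two remaining-region groups on the same row or column closest to the border; the naming and step convention are assumptions for the example.

```python
import numpy as np

def extrapolate_row_motion(m_nearest, m_second, steps_from_border):
    """Linear extrapolation of the local motion along a row or column.
    m_nearest and m_second are the motion representations of the two
    remaining-region groups closest to the uncovered border (nearest first);
    steps_from_border is 1 for the uncovered group adjacent to the border,
    2 for the next one, and so on."""
    m_nearest = np.asarray(m_nearest, dtype=float)
    m_second = np.asarray(m_second, dtype=float)
    slope = m_nearest - m_second              # per-group change in motion across the row
    return m_nearest + steps_from_border * slope
```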
  • As a result, all groups in a frame will be associated with a motion representation; the ones in the remaining frame region get their motion representations from the video codec or a dedicated motion search, and the uncovered groups are assigned the global motion representation or re-assigned a local motion representation according to the present invention.
  • FIG. 7 is a flow diagram illustrating a method of estimating property values of a group of at least one image element in a frame to be constructed at a particular time instance in a video sequence relative an existing reference frame having a following or a previous time instance in the sequence. The method continues from step S3 of FIG. 2 or step S24 of FIG. 5. This means that the motion estimation method of the invention is first applied to the reference frame to assign motion representations to at least one uncovered group present in the uncovered region of the reference frame.
  • a next step S30 selects a reference group among the uncovered groups in the reference frame.
  • This selected reference group has an assigned (global or local) motion representation intersecting the group to be determined in the constructed frame.
  • This situation is illustrated in FIG. 3 .
  • the uncovered group 12 in the reference frame 10 is assigned the global motion representation 50 according to the present invention.
  • Half the global motion, in the case the constructed frame 30 is positioned at a time instance t_i in the middle between the time t_i−1 of the previous frame and the time t_i+1 of the reference frame, moves the reference group 12, when applied to the constructed frame 30, to the position of the current group 32 of image elements 31 to be determined.
  • the motion representations assigned to the uncovered groups 12 in the reference frame 10 are used when selecting the particular reference group to utilize as a basis for determining the pixel values of the image elements 31 in the group 32 to be determined.
  • the reference group to use is, thus, the uncovered group 12 having a motion representation that passes through the group 32 in the constructed frame 30.
  • the property values of the image elements in the group are then estimated in step S31 based on the property values of the reference group.
  • this estimating step simply involves assigning the property values of the reference group to the respective image elements of the group.
  • the property values may also be low-pass filtered, as these groups may otherwise become too sharp compared to other groups, which often become somewhat blurred as a consequence of the bidirectional interpolation. The method then ends.
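  • A sketch of this construction step, assuming the constructed frame lies midway so that half the assigned motion representation is applied; the sign convention of the motion and the 3×3 box filter used for the low-pass filtering are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def construct_group(ref_frame, ref_x, ref_y, motion, block=16, lowpass=True):
    """Copy an uncovered reference group into the constructed in-between frame.
    Assumed convention: applying half the motion representation to the reference
    group position gives the intersected position in the constructed frame."""
    dst_x = int(round(ref_x + motion[0] / 2.0))
    dst_y = int(round(ref_y + motion[1] / 2.0))
    values = ref_frame[ref_y:ref_y + block, ref_x:ref_x + block].astype(float)
    if lowpass:
        values = uniform_filter(values, size=3)  # soften, since interpolated groups are slightly blurred
    return dst_x, dst_y, values
```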
  • FIGS. 8 and 9 illustrate another embodiment utilizing two reference groups in different reference frames in the estimation.
  • the method continues from step S30 of FIG. 7.
  • a next step S40 identifies a second reference group 42 of at least one image element in a second reference frame 40 associated with a second, different time instance in the video sequence 1.
  • the two reference frames 10, 40 are positioned, on a time basis, on the same side in the video sequence 1 relative the constructed frame 30.
  • Thus, the two reference frames 10, 40 could be two preceding frames, such as of time instances t_i−1 and t_i−3, or two following frames, such as of time instances t_i+1 and t_i+3, relative the time instance t_i of the constructed frame 30.
  • the second reference group 42 is identified in step S40 based on a motion representation assigned to the reference group 42.
  • the second reference group 42 is identified as a group in the second reference frame 40 having an assigned motion representation pointing towards the first reference group 12 in the first reference frame 10 .
  • the next step S41 extrapolates the property values of the image elements in the group 32 based on the property values of the first 12 and second 42 reference groups.
  • Such extrapolation procedures are well known in the art and may, for instance, involve applying different weights to the property values of the first reference group 12 as compared to the second reference group 42, to thereby weight up the values of the first reference group 12, which is closer in time to the constructed frame 30 than the second reference group 42.
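  • A sketch of such a weighted extrapolation; the weights 1.5 and −0.5 correspond to linear extrapolation for the frame spacing t_i+1/t_i+3 of FIG. 9 and are an illustrative choice, not values prescribed by the patent.

```python
import numpy as np

def extrapolate_values(ref1, ref2, w1=1.5, w2=-0.5):
    """Weighted extrapolation of property values from two same-side reference
    groups; ref1 is closer in time to the constructed frame and is weighted up."""
    out = w1 * np.asarray(ref1, dtype=float) + w2 * np.asarray(ref2, dtype=float)
    return np.clip(out, 0.0, 255.0)   # keep the result in a valid 8-bit pixel range
```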
  • FIG. 10 illustrates the advantages of utilizing the present invention during frame rate up-conversion as compared to prior art techniques lacking any true motion estimation for uncovered groups.
  • the picture to the left has been constructed from a video sequence using prior art interpolation and extrapolation techniques.
  • the corresponding picture to the right has been constructed according to the present invention.
  • the present invention provides a more correct construction of the linesman to the right in the pictures.
  • the present invention is not only advantageously used in connection with frame rate up-conversion.
  • the present invention can also be used for refining a motion vector field from the coded bit stream. This means that also uncovered groups previously having no assigned motion vectors will be assigned motion representations according to the invention.
  • Another application of the invention is for error concealment.
  • a distorted frame or part of a frame can be replaced by unidirectional or bidirectional prediction using the refined vector field produced by the invention.
  • the invention can also be used to obtain a predicted motion vector field from a reconstructed motion vector field as a means to obtain better coding efficiency of a next frame to be decoded.
  • FIG. 11 is a schematic block diagram of a device for motion estimation for a group of at least one image element in a frame of a video sequence.
  • a global motion determiner 110 is arranged in the device 100 for determining a representation of a global motion of image element property values from at least a portion of a reference frame to at least a portion of a current frame in the sequence.
  • the determiner 110 is preferably connected to a set provider 140 , which is arranged for providing a vector set comprising, for each image element group in the portion of the frame, a displacement vector referring to a reference group in the reference frame.
  • the set provider 140 can fetch this set from an internal or external video codec, or include functionality for estimating the displacement vectors in a motion estimation search.
  • the determiner 110 preferably generates the global motion representation as one of the previously described position-dependent global motion vectors, by determining matrix A and vector b of the global motion representation.
  • the device 100 also comprises a group identifier 120 for identifying uncovered groups of at least one image element each in an uncovered region of the frame based on the global motion representation from the determiner 110 .
  • This identifier 120 preferably identifies the uncovered groups as groups in the frame that do not have any associated group in the reference frame when applying the global motion from the groups in the frame to the reference frame. In a typical implementation, one then ends up outside the boundaries of the reference frame.
  • a motion assigner 130 assigns the global motion representation as motion representation or vector for those uncovered groups identified by the group identifier 120 .
  • the device 100 optionally but preferably comprises a motion comparator 150 arranged for comparing motion representations of a set of groups. These groups are not present in the uncovered region of the frame but are neighbors to an uncovered group.
  • the comparator 150 compares the motion representation of each of these neighboring groups to the global motion representation from the determiner 110 and investigates whether at least a minimum number of the motion representations differ significantly, i.e. with at least a minimum difference, from the global motion representation. This comparison is preferably performed as previously described herein.
  • If so, the motion assigner 130 assigns a new motion representation to the uncovered group as a replacement of the global motion representation.
  • This new motion representation is the motion representation of one of the neighboring groups having a significantly differing motion relative the global motion or is calculated based on at least a portion of the neighboring motions differing significantly from the global motion.
  • the motion comparator 150 also compares those neighboring motion representations that significantly differed from the global motion representation with each other. The comparator 150 then only signals the assigner 130 to re-assign the motion representation for the uncovered group if these neighboring motion representations do not differ significantly, i.e. by not more than a maximum difference, from each other.
  • the previously described comparison embodiments can be utilized by the comparator 150 for investigating this criterion. This means that the assigner 130 only assigns a new motion representation to the uncovered group if these two criteria are fulfilled as determined by the comparator 150 .
  • the group identifier 120 preferably identifies other uncovered groups present on a same group row or column as the uncovered group but further away from the neighboring groups present in the remaining frame portion. In such a case, the motion assigner 130 re-assigns motion representations also for this (these) uncovered group(s).
  • the re-assigned motion representation is the same as was previously assigned to the uncovered group adjacent the border between the uncovered region and the remaining frame region or a motion representation calculated at least partly therefrom, such as through extrapolation.
  • the units 110 to 150 of the motion estimating device 100 can be provided in hardware, software and/or a combination of hardware and software.
  • the units 110 to 150 can be implemented in a video or frame processing terminal or server, such as implemented in or connected to a node of a wired or wireless communications system.
  • Alternatively, the units 110 to 150 of the motion estimating device 100 can be arranged in a user terminal, such as a TV decoder, computer, mobile telephone, or other user appliance having or being connected to a decoder and/or an image rendering device.
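  • As an implementation illustration only (the unit numbering follows FIG. 11, but the callable-based wiring is an assumption, not the patent's design), the units could be composed as follows:

```python
class MotionEstimatingDevice:
    """Sketch of device 100: set provider 140 feeds the global motion
    determiner 110; group identifier 120 finds the uncovered groups; motion
    assigner 130 assigns the global or, via motion comparator 150, a local
    motion representation. The injected callables stand in for the units."""

    def __init__(self, provide_set, determine_global, identify_uncovered, compare_neighbors):
        self.provide_set = provide_set                  # unit 140
        self.determine_global = determine_global        # unit 110
        self.identify_uncovered = identify_uncovered    # unit 120
        self.compare_neighbors = compare_neighbors      # unit 150

    def estimate(self, frame, reference):
        vectors = self.provide_set(frame, reference)    # per-group displacement vectors
        global_motion = self.determine_global(vectors)  # e.g. the pair (A, b)
        motions = dict(vectors)                         # remaining groups keep codec motions
        for group in self.identify_uncovered(frame, reference, global_motion):
            local = self.compare_neighbors(group, motions, global_motion)
            motions[group] = local if local is not None else global_motion  # unit 130
        return motions
```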
  • FIG. 12 is a schematic block diagram of a device 200 for estimating property values of a group of at least one image element in a constructed frame of a video sequence.
  • the device 200 comprises a motion estimating device 100 according to the present invention, illustrated in FIG. 11 and disclosed above.
  • the motion estimating device is arranged for performing a motion estimation on a reference frame associated with a previous or following time instance in the video sequence relative the constructed frame.
  • the motion estimation performed by the device 100 assigns global or local motion representations to at least one, preferably all, uncovered groups in the reference frame.
  • a group selector 210 is provided in the device 200 for selecting a reference group among the uncovered groups in the reference frame.
  • the selector 210 preferably selects the reference group as an uncovered group having an assigned motion representation that intersects the group in the constructed frame. In other words, one passes straight through the group when traveling along the motion representation of the reference group from the reference frame towards another previous or following frame in the sequence.
  • the device 200 also comprises a value estimator 220 arranged for estimating the property values of the group based on the property values of the reference group selected by the group selector 210 .
  • the estimator 220 preferably assigns the property values of the reference group to the corresponding image elements of the group in the constructed frame.
  • the group selector 210 is also arranged for selecting a second reference group in a second reference frame in the video sequence.
  • This second reference frame is preferably positioned further from the constructed frame regarding frame times as compared to the first reference frame.
  • the second group is identified by the group selector 210 based on the motion representation assigned to the second reference group.
  • the selector 210 typically selects the second reference group as a group in the second frame having an assigned motion representation pointing towards the first reference group in the first reference frame.
  • the estimator 220 then estimates the property values of the group based on the property values of both the first and second reference group. This value estimation is performed as a value extrapolation, preferably utilizing different weights for the values of the first and second reference group to thereby upweight those reference property values originating from the reference group that is positioned in a reference frame closer in time to the constructed group relative the other reference frame.
  • the units 100, 210 and 220 of the group estimating device 200 can be provided in hardware, software and/or a combination of hardware and software.
  • the units 100, 210 and 220 can be implemented in a video or frame processing terminal or server, such as implemented in or connected to a node of a wired or wireless communications system.
  • Alternatively, the units 100, 210 and 220 of the group estimating device 200 can be arranged in a user terminal, such as a TV decoder, computer, mobile telephone, or other user appliance having or being connected to a decoder and/or an image rendering device.

Abstract

In a motion estimation for a group of at least one image element in a frame of a video sequence, a global motion is determined between the frame and a reference frame. Uncovered groups present in an uncovered region of the frame are identified based on the determined global motion. The global motion is assigned as motion representation for these identified uncovered groups. The assigned motion representation is useful for constructing new frames in the sequence in a frame rate up-conversion.

Description

    TECHNICAL FIELD
  • The present invention generally relates to image processing, and in particular to motion estimation for uncovered regions in images.
  • BACKGROUND
  • The motivation of frame rate up-conversion is that a video sequence with higher frame rate is generally considered to give higher quality experience than a video sequence with lower frame rate. The frame rate of a video sequence can be increased by inserting predicted frames in between existing frames. A good approach is to predict the in-between frame using bi-directional block based motion estimation, searching for linear motions between the previous frame and the next frame in the input video sequence. It is possible to use non-linear approaches that can represent acceleration, but the linear approach is used because of its simplicity and low complexity. The in-between frame is divided into blocks and to each of these a motion vector must be assigned in some way.
  • A problem or short-coming with many frame rate up-conversion algorithms is the handling of panning, rotating or zooming images. In FIG. 1, the camera pans to the left when going from image 20 to image 10, thus a new area becomes revealed along the left border of the image 10. There are parts of the audience and almost a whole commercial sign in the bottom image 10 that are not part of the top image 20. These “new” areas generally do not have any accurate references to the previous image 20. In clear contrast, pixel blocks in the new areas are typically encoded according to intra mode, or if being inter encoded, having motion vectors pointing to areas in the previous image 20 that look similar but will not represent the actual motion (the camera pan). The lack of accurate motion vectors for these pixel blocks makes rate up-conversion harder, possibly leading to visual artefacts in interpolated images.
  • Document [1] discusses the identification of a block Bi as an uncovered region, when it can be seen in a frame Ft to be determined and in a following frame Ft+1 but not in a previous frame Ft−1. Such a block is encoded as an intra block and has not been motion compensated by other blocks.
  • SUMMARY
  • Document [1] handles uncovered blocks but assumes an intra coding for the uncovered pixels. This means that the uncovered blocks do not have any motion vectors that can be used during frame rate up-conversion.
  • The present invention overcomes these and other drawbacks of the prior art arrangements.
  • It is a general object of the present invention to provide an identification of image elements in an uncovered region of a video frame.
  • It is another object of the invention to provide a motion estimation of identified uncovered groups of image elements.
  • These and other objects are met by the invention as defined by the accompanying patent claims.
  • Briefly, the present invention involves identification and motion estimation for groups of image elements in an uncovered region of a frame in a video sequence. This uncovered region comprises image elements or pixels that are not present in a previous frame of the video sequence, such as due to camera panning, zooming or rotation.
  • A representation of a global motion of image element property values from at least a portion of a reference frame, typically a previous frame, in the video sequence to at least a portion of a current frame is determined. The determined global motion representation is used for identifying uncovered groups in the current frame, i.e. those groups comprising at least one image element present in the uncovered region of the frame. Preferably, an uncovered group is identified as a group in the current frame that does not have any associated group in the reference frame when applying the global motion from the group in the frame towards the reference frame. Typically, the global motion instead points outside of the border of the reference image.
  • The motion estimation of the present invention then assigns the determined global motion as motion representation for the identified uncovered groups. This means that also these groups that traditionally are not assigned any “true” motion vectors will have motion representations that can be used, for instance, during frame rate up-conversion.
  • In a preferred embodiment, a border uncovered group present on the border between the uncovered region of the frame and the remaining frame regions is investigated for the purpose of re-assigning a local motion instead of the global motion. In such a case, the motion representations of neighboring groups present in the remaining frame portion are compared to the global motion and preferably each other. If certain criteria are fulfilled, i.e. at least a minimum number of the neighboring motion representations differ significantly from the global motion and this at least minimum number of neighboring motion representations do not significantly differ from each other, the uncovered group is re-assigned a local motion representation determined based on the neighboring motion representation(s).
  • The present invention therefore allows assigning motion representations to also uncovered groups in a frame. These motion representations are useful during frame rate up-conversion for the purpose of identifying reference frames that are used when determining property values of the image elements of a group in a frame to be constructed.
  • SHORT DESCRIPTION OF THE DRAWINGS
  • The invention together with further objects and advantages thereof, may best be understood by making reference to the following description taken together with the accompanying drawings, in which:
  • FIG. 1 is a drawing illustrating two image frames in a video sequence;
  • FIG. 2 is a flow diagram of a motion estimation method according to an embodiment of the present invention;
  • FIG. 3 is a diagram illustrating adjacent frames in a video sequence;
  • FIG. 4 is a flow diagram illustrating additional steps to the estimating method of FIG. 2;
  • FIG. 5 is a flow diagram illustrating additional steps to the estimating method of FIG. 2;
  • FIG. 6 schematically illustrating assignment of local motion representations according to an embodiment of the present invention;
  • FIG. 7 is a flow diagram illustrating a method of estimating property values according to an embodiment of the present invention;
  • FIG. 8 is a flow diagram illustrating an embodiment of the estimating step in the estimating method of FIG. 7;
  • FIG. 9 is a diagram illustrating adjacent frames in a video sequence;
  • FIG. 10 illustrating the gains of employing the present invention in frame rate-up conversion and the problems of prior art techniques;
  • FIG. 11 is a schematic block diagram of a motion estimating device according to the present invention; and
  • FIG. 12 is a schematic block diagram of a group estimating device according to the present invention.
  • DETAILED DESCRIPTION
  • Throughout the drawings, the same reference characters will be used for corresponding or similar elements.
  • The present invention generally relates to image processing and in particular to methods and devices for handling groups of image elements in uncovered regions of images and frames in a video sequence.
  • In the present invention, a video or frame sequence comprises multiple, i.e. at least two, frames or images. Such a frame can in turn be regarded as composed of a series of one or more slices, where such a slice consists of one or more macroblocks of image elements or pixels. In the present invention, the expression “image element” is used to denote a smallest element of a frame or image in a sequence. Such an image element has associated image element properties, such as color (in the red, green, blue, RGB, space) or luminance (Y) and chrominance (Cr, Cb or sometimes denoted U, V). A typical example of an image element is a pixel of a frame or picture. The present invention is particularly adapted to a video sequence comprising multiple consecutive frames at a given frame rate.
  • The image elements are organized into groups of image elements. The expression “group of image element” denotes any of the prior art known partitions of frames and slices into collections of image elements that are handled together during decoding and encoding. Generally, such a group is a rectangular (M×N) or square (M×M) group of image elements. An example of such a grouping is a macroblock in the video compression standard. Such a macroblock generally has a size of 16×16 image elements. A macroblock can consists of multiple so-called sub-macroblock partitions, such as 16×8, 8×16, 8×8, 8×4, 4×8 and 4×4 image elements. The 8×8 sub-macroblock partition is often denoted as a sub-macroblock or sub-block, whereas a 4×4 partition is often denoted block.
  • So-called uncovered regions in an image or a frame correspond to image elements that have no correspondences in a previous frame of the video sequence. Uncovered regions typically occur during panning, zooming and/or rotation in video recording causing new objects to be captured in the video sequence. In order to reduce the bit size of a video sequence, the frames are encoded according to well known techniques, such as intra or inter coding [2]. Inter coding generally leads to a more efficiently coded block in terms of the number of bits spent on the inter-encoded block as compared to intra coding. However, inter coding presumes that there is a correspondence or at least a (closely) matching reference in another frame of the video sequence that can be used as starting reference for a current block. If no such match can be found, the block is generally intra coded, thereby requiring a comparatively larger amount of bits.
  • Image elements in uncovered regions do not have any correspondences in previous frames. As a consequence, these may be intra coded, or a best effort inter coding can be conducted even though no correct matches are available. This generally works well and gives a visually acceptable result during subsequent decoding and rendering. However, if the video sequence is subsequently subjected to frame rate-up conversion, serious problems in terms of unacceptable visual appearance of the constructed intermediate frames can occur in the case of inter coding utilizing “untrue” motion representations for uncovered image elements.
  • FIG. 10 illustrates a determined frame 30 that is constructed to be intermediate of two frames in a video sequence. The right portion of the frame 30 shows a linesman that is further illustrated in the magnified portion. As is seen in the left figure, portions of this linesman have been assigned colors that are clearly incorrect to the viewer. Such problems may occur when inter coding according to prior art techniques is combined with frame rate-up conversion.
  • The present invention reduces the risk of such visual errors by providing a method for performing motion estimation for a group of at least one image element in a frame of a video sequence. The operation steps of the method are illustrated in the flow diagram of FIG. 2. The method starts in step S1, which determines a representation of a global motion of image element property values from at least a reference portion of a reference frame to at least a portion of a current frame in the frame/video sequence. This global motion representation is indicative of the global or overall movement of pixels when going from the reference frame to the current frame in the sequence.
  • A next step S2 uses the determined global motion representation for identifying at least one so-called uncovered group of at least one image element each. These uncovered groups are present in an uncovered region of the frame. Thus, the at least one group comprises image elements that are not present in a previous frame of the video sequence. A preferred implementation of this step S2 identifies an uncovered group as a group in the current frame that does not have an associated group in a previous frame when applying the global motion from the group in the current frame to the previous frame. In other words, if the global motion is utilized as a motion or displacement vector for the uncovered group, it will point towards an (imaginary) group present outside of the boundaries of the previous frame. This situation is illustrated in FIG. 3. The leftmost frame 20 corresponds to a previous frame of a previous time instance in the video sequence 1 relative a current frame 10. The picture has been panned when going from the previous 20 to the current 10 frame, thereby uncovering a region 13 of image elements 11 not present in the previous frame 20. In this case, the global motion 50 is a horizontal motion from the right to the left.
  • In order to determine whether a current group 12 of image elements 11 is an uncovered group and belongs to the uncovered region 13 of the frame 10, it is checked whether the determined global motion representation 50, as applied to the current group 12, points outside the border of the previous frame 20. In such a case, the current group 12 belongs to the uncovered region 13; otherwise it corresponds to a remaining portion 15 of the current frame 10. Alternatively, the group 22 of image elements 21 occupying the same position in the previous frame 20 as the current group 12 occupies in the current frame 10 is identified. The group 22 is then moved according to the determined global motion representation 50 to reach a final group position 24. If this position 24 falls outside of the boundary of the previous frame 20, the current group 12 belongs to the uncovered region 13; otherwise it corresponds to the remaining portion 15 of the current frame 10.
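  • The border test above can be expressed compactly in code. The following is a minimal sketch, assuming a position-dependent global motion given as a callable and a rectangular previous frame; the function name, coordinate convention and data layout are illustrative assumptions, not taken from the patent text.

```python
def is_uncovered(group_pos, global_motion, prev_frame_size):
    """Return True if the group at group_pos in the current frame has no
    counterpart in the previous frame under the global motion.

    group_pos: (x, y) position of the group in the current frame.
    global_motion: callable (x, y) -> (dx, dy), the global motion at
        that position (e.g. v = A @ [x, y] + b as described below).
    prev_frame_size: (width, height) of the previous frame.
    """
    x, y = group_pos
    dx, dy = global_motion(x, y)
    # Apply the global motion as a displacement into the previous frame;
    # a position outside the frame border means the group is uncovered.
    px, py = x + dx, y + dy
    w, h = prev_frame_size
    return not (0 <= px < w and 0 <= py < h)
```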
  • Once an uncovered group has been identified in step S2, the method continues to step S3, where the determined global motion representation is assigned as motion or displacement estimation for the uncovered group. Thus, the uncovered group thereby becomes assigned a displacement representation that can subsequently be used for different purposes, such as when constructing a new frame during frame rate-up conversion, which is further described herein.
  • In a preferred embodiment of the present invention, all or at least a portion of the image element groups in the frame can be tested by utilizing the global motion representation in order to determine whether the groups belong to an uncovered frame region or a remaining region. In such a case, all groups identified as uncovered groups in step S2 are preferably assigned the global motion representation as their motion estimation. This means that steps S2 and S3 are preferably performed multiple times, either in series or in parallel, for different groups in the frame.
  • The global motion representation of the present invention can take any vector value v = [x, y]^T, ranging from the zero vector up to non-zero values for the vector components x and y, depending on how the pixel parameter values are moved when going from the reference frame to the current frame in the sequence.
  • FIG. 4 is a flow diagram illustrating a preferred embodiment of determining the global motion representation of the present invention. The method starts in step S10, where a vector set is provided. This vector set comprises, for each image element group in at least a portion of the current frame, a respective associated displacement or motion vector referring to a reference group of at least one image element in the reference frame. Thus, each group in at least a portion of the current frame, preferably each group in the frame, has an assigned displacement vector that is pointing to or associated with a reference group in the reference frame.
  • The displacement vectors can be provided from a coded motion vector field of a video codec, such as H.264. Such motion vectors are traditionally used in inter coding of frames and can be re-used for another purpose according to the invention. If no such motion vectors are available from the video codec, they can be determined from a motion estimation search. In such a case, a dedicated motion estimation search is conducted, preferably according to prior art algorithms, but for the purpose of generating a motion vector set that can be used for determining the global motion representation of the invention.
  • Generally, each image element group in the remaining portion of the current frame can have an associated motion vector generated by the video codec or from the motion estimation. However, some of the groups, such as those belonging to the uncovered region, might not have an assigned motion vector as these could be coded as intra blocks by the video codec. In such a case, such groups can be omitted from the processing of the motion vectors of the invention. This means that only a portion (though a major portion) of the groups in the current frame and their assigned motion/displacement vectors are utilized in the following step S11 for calculating the global motion representation.
  • The next step S11 uses the displacement vectors from the provided (fetched or calculated) vector set from step S10 to determine a global motion vector. In a simple implementation, the global motion representation is determined as an average vector of the displacement vectors in the vector set. This is a computationally simple embodiment, though far from optimal for the purpose of obtaining an accurate global motion representation. Therefore, in a preferred embodiment of step S11, a position-dependent global motion vector or representation having vector component values that can vary for different image element positions in the current frame, i.e. v=v(x,y), is determined in step S11.
  • A preferred implementation of step S11 utilizes the following representation of the global motion:

  • v = Ax + b

  • where x = [x, y]^T is the position of a current group in the current frame, v = [vx, vy]^T is the global motion representation of the current group, and A = [[a11, a12], [a21, a22]] and b = [b1, b2]^T are a matrix and a vector that are to be estimated based on the displacement vectors provided in step S10. In order to calculate the values of the matrix A and the vector b, a least squares method is preferably applied to the provided displacement vectors. Thus, the matrix and vector that give the best result, in terms of minimizing the squared difference between the displacement vectors and the global motion representation, are estimated in step S11. The final global motion representation v = Ax + b captures most common background motions, such as camera panning, zooming and rotation.
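  • As a concrete illustration of this least-squares step, the sketch below fits the matrix A and the vector b to a block motion vector field with NumPy. The function name and the array layout are assumptions made for the example, not part of the patent.

```python
import numpy as np

def fit_global_motion(positions, displacements):
    """positions: (N, 2) array of group positions (x, y) in the frame.
    displacements: (N, 2) array of the groups' motion vectors (dx, dy).
    Returns (A, b) such that v = A @ x + b best fits the displacements
    in the least-squares sense."""
    n = positions.shape[0]
    # Each displacement component is linear in the position plus an
    # offset, so build design-matrix rows of the form [x, y, 1].
    design = np.hstack([positions, np.ones((n, 1))])       # (N, 3)
    # Solve design @ theta = displacements; theta stacks A^T over b.
    theta, *_ = np.linalg.lstsq(design, displacements, rcond=None)
    A = theta[:2].T                                        # (2, 2) matrix
    b = theta[2]                                           # (2,) vector
    return A, b

# Usage: the global motion at position (x, y) is then A @ [x, y] + b.
```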
  • The above concept can of course be applied to other parameterizations of a global motion representation, such as

  • v = A[x^2, y^2]^T + B[x, y]^T + c, with A = [[a11, a12], [a21, a22]], B = [[b11, b12], [b21, b22]] and c = [c1, c2]^T,

  • or parameterizations with even higher order components. The method then continues to step S2 of FIG. 2, where the determined representation of the global motion is applied, using the group coordinates x and y of the current group, to calculate the global motion at that point and determine whether the group is an uncovered group or not.
  • The usage of displacement vectors from the video codec or from a dedicated motion estimation search is a particular embodiment of obtaining the displacement vector set that is used for determining a global motion representation of the present invention. Other embodiments can instead be used and are contemplated by the invention. For instance, motion estimation based on phase correlation can be used to obtain a representation of the global motion. Another example of motion estimation for the global motion is pel-recursive, i.e. pixel-based, motion estimation.
  • FIG. 5 is a flow diagram illustrating additional steps of a preferred embodiment of the motion estimating method of FIG. 2. The method continues from step S3 of FIG. 2. A next step S20 is performed for a group identified as belonging to the uncovered region of the frame. Furthermore, the uncovered group has a set of neighboring groups in the frame identified as not belonging to the uncovered region. Thus, the uncovered group is present at the border between the uncovered frame region and the remaining region, where the neighboring groups of the set are found in the remaining frame region.
  • As was discussed above, the groups present in the remaining portion of the frame are preferably each associated with a displacement or motion representation. These motion representations can be available from a coded motion vector field of the video codec or from a dedicated motion estimation search. Step S20 compares the motion representations associated with the neighboring groups with the determined global motion representation. A next step S21 determines whether there is a local motion diverging from the global motion in the vicinity of the uncovered group. Thus, the step S21 determines whether at least a minimum number of the motion representations differ, with at least a minimum difference, from the global motion representation as applied at these group positions. FIG. 6 is an illustration of a portion of a frame showing the border between an uncovered region 13 and a remaining frame region 15. A current uncovered group 12 is under investigation. This group has three neighboring groups 16, 18 not present in the uncovered region 13 (and five neighboring groups 14 present in the uncovered region 13). Each of these non-uncovered neighboring groups 16, 18 has an associated motion representation 50, 52. In this example, one of the neighboring groups 18 has a motion representation 50 that is identical or near identical to the global motion representation for the frame when applied at the position of the group 18. However, the remaining two neighboring groups 16 have motion representations 52 that significantly differ from the global motion representation 50. This means that two thirds of the neighboring groups present a local motion divergence.
  • In a typical implementation, the minimum number of neighboring groups that must have motion representations differing from the global motion in order to have a local motion divergence in step S21 is preferably more than half of the neighboring groups. In most typical cases with a horizontal or vertical border between the uncovered 13 and remaining 15 frame portions, an uncovered group 12 has three or two neighboring groups 16, 18 in the remaining portion. In the former case, at least two of them must have motion representations differing from the global motion in order to have a local motion divergence. In the latter case, all of the neighboring groups should present this divergence.
  • A difference between the motion representations of the neighboring groups and the global motion can be determined according to different embodiments. In a first case, only the relative directions of the vectors are investigated. In such a case, an angle θ between the global motion v and a motion representation d can be determined as:
  • θ = arccos((v · d) / (|v| |d|))
  • This angle θ can then be compared to a reference angle θref, and if the difference between the two angles exceeds a minimum threshold, or if the quotient of the angles exceeds (or is below) a threshold, the motion representation d is regarded as differing from the global motion representation v with at least a minimum difference.
  • Another implementation would be to calculate a difference vector q between the motion representation and the global motion:

  • q=v−d
  • If the length of this difference vector (|q|) exceeds a minimum threshold, the motion representation d is regarded as differing from the global motion representation v with at least a minimum difference. This can be implemented by comparing the X and Y components separately with those of the global motion. If both the X component and the Y component of d differ by less than the minimum threshold from the corresponding components of v, the motion representation d is regarded as describing the same motion as the global motion representation v; otherwise the motion representation differs significantly from the global motion representation.
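  • Both difference tests can be sketched as follows; the numeric thresholds are illustrative placeholders, and the angle test is shown in a simplified form that compares the angle directly against a single minimum threshold rather than against a reference angle.

```python
import numpy as np

def differs_by_angle(v, d, min_angle_deg=20.0):
    """True if the angle between global motion v and motion d is at
    least min_angle_deg (simplified variant of the angle test)."""
    denom = np.linalg.norm(v) * np.linalg.norm(d)
    if denom == 0.0:
        return False  # a zero vector has no direction; treat as agreeing
    cos_theta = np.clip(np.dot(v, d) / denom, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta)) >= min_angle_deg

def differs_by_length(v, d, min_length=2.0):
    """True if the difference vector q = v - d is longer than
    min_length (the vector-length test described above)."""
    return np.linalg.norm(np.asarray(v) - np.asarray(d)) > min_length
```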
  • In a preferred implementation, it is not sufficient that a minimum number of the neighboring groups have motion representations differing significantly from the global motion in order to have a local motion divergence in step S21. A further preferred condition is that the motion representations that significantly differ from the global motion should not significantly differ from each other. This is also illustrated in FIG. 6. The two neighboring groups 16 with motion representations 52 differing from the global motion 50 indeed have parallel and equal respective motion representations 52. The difference therebetween is therefore zero degrees or the zero difference vector, depending on the particular difference-investigating embodiment used.
  • The same tests that were described above can also be utilized for determining differences between the relevant motion representations 52. In such a case, the angles between pairwise tested motion representations should not exceed a maximum angle, or the difference vector between pairwise tested motion representations should not have a vector length exceeding a maximum length. If at least a minimum number of the representations fulfill these conditions, there is a local motion divergence in step S21; otherwise not. The minimum number could be more than half of the motion representations that differ significantly from the global motion, i.e. the two motion representations 52 in the example of FIG. 6 should not differ significantly from each other. Alternatively, none of the motion representations may significantly differ from any other tested motion representation.
  • If there is a local motion divergence in step S21 that fulfills one, or preferably both, of the criteria listed above, namely i) that at least a first minimum number of motion vectors differ significantly from the global motion, and ii) that of these motion vectors at least a second minimum number do not significantly differ from each other, the method continues to step S22; otherwise it ends.
  • Step S22 assigns a motion representation provided from the motion vector associated with a neighboring group as motion estimation for the tested uncovered group. This means that the global motion representation previously assigned (in step S3 of FIG. 2) to the current uncovered group is replaced by the motion representation associated with a neighboring group, or by a motion representation calculated therefrom. The motion representation could be any of the representations of the relevant neighboring groups fulfilling the single criterion i) or, in the case of requiring the two criteria i) and ii), fulfilling both criteria. More elaborate embodiments can also be utilized, such as determining an average motion representation based on the neighboring motion representations fulfilling criterion i) or criteria i) and ii). This average local motion representation is then assigned to the uncovered group in step S22.
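  • Putting these pieces together, steps S20 to S22 might look like the sketch below, reusing the differs_by_length test from the sketch above. The more-than-half rule, the pairwise agreement check and the averaging embodiment follow the description; the function name and data layout remain illustrative.

```python
import numpy as np

def local_or_global_motion(neighbor_motions, v_global):
    """neighbor_motions: motion vectors of the neighboring groups not in
    the uncovered region; v_global: global motion at the uncovered group.
    Returns the motion representation to assign to the uncovered group."""
    diverging = [np.asarray(d) for d in neighbor_motions
                 if differs_by_length(v_global, d)]
    # Criterion i): more than half of the neighbors must diverge from
    # the global motion, otherwise keep the global motion assignment.
    if 2 * len(diverging) <= len(neighbor_motions):
        return np.asarray(v_global)
    # Criterion ii): the diverging motions must agree with each other.
    for i in range(len(diverging)):
        for j in range(i + 1, len(diverging)):
            if differs_by_length(diverging[i], diverging[j]):
                return np.asarray(v_global)
    # One embodiment from the text: assign the average local motion.
    return np.mean(diverging, axis=0)
```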
  • A next optional step S23 investigates whether there are any more uncovered groups on the same row or column as the current uncovered group. In this embodiment, a same row is investigated if the border between the uncovered and remaining frame portions is vertical, and a column is investigated for a horizontal border. FIG. 6 illustrates one additional group 14 present on the same row as the uncovered group 12 in the case of a vertical border. In such a case, the local motion representation newly assigned to the uncovered group in step S22 is also assigned to this (these) adjacent group(s) in step S24. These steps S23 and S24 are preferably performed for each uncovered group being assigned a local motion representation instead of the global motion representation in step S22.
  • Preferably, all identified uncovered groups of the same row or column are assigned the same local motion representation as the uncovered group present next to the border between the uncovered and remaining frame portions. In a more elaborate embodiment, a trend in the local motion along the same row or column, but across the uncovered-remaining border, is utilized for determining local motion representations for uncovered groups; see the sketch after this paragraph. In such a case, a linear extrapolation of the motion representations is calculated for the uncovered groups to thereby more accurately reflect local changes in the motion along a row or column. Information from the motion representations of a set comprising the N groups present on the same row or column and being closest to the border of the uncovered frame portion, but still present in the remaining portion, can be used in this extrapolation, where N is a small positive integer, such as two or three.
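  • The linear-extrapolation embodiment can be sketched with a degree-one polynomial fit per motion component, where N corresponds to the two or three border-adjacent groups mentioned above; the argument layout is an assumption for the example.

```python
import numpy as np

def extrapolate_row_motion(border_coords, border_motions, uncovered_coords):
    """border_coords: (N,) row coordinates of the N non-uncovered groups
    nearest the border; border_motions: (N, 2) their motion vectors;
    uncovered_coords: (M,) row coordinates of the uncovered groups.
    Returns (M, 2) extrapolated motion vectors for the uncovered groups."""
    # Fit each motion component as a linear function of the coordinate.
    cx = np.polyfit(border_coords, border_motions[:, 0], 1)
    cy = np.polyfit(border_coords, border_motions[:, 1], 1)
    return np.stack([np.polyval(cx, uncovered_coords),
                     np.polyval(cy, uncovered_coords)], axis=1)
```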
  • By employing the teachings of the present invention, all groups in a frame will be associated with a motion representation; the ones in the remaining frame region get their motion representations from the video codec or a dedicated motion search, and the uncovered groups are assigned the global motion representation or re-assigned a local motion representation according to the present invention.
  • The assignment of motion representations to all or at least a vast majority of the groups in the frame leads to significant advantages during frame rate up-conversion when constructing new frames in the video sequence.
  • FIG. 7 is a flow diagram illustrating a method of estimating property values of a group of at least one image element in a frame to be constructed at a particular time instance in a video sequence relative an existing reference frame having a following or a previous time instance in the sequence. The method continues from step S3 of FIG. 2 or step S24 of FIG. 5. This means that the motion estimation method of the invention is first applied to the reference frame to assign motion representations to at least one uncovered group present in the uncovered region of the reference frame.
  • A next step S30 selects a reference group among the uncovered groups in the reference frame. This selected reference group has an assigned (global or local) motion representation intersecting the group to be determined in the constructed frame. This situation is illustrated in FIG. 3. Assume that the uncovered group 12 in the reference frame 10 is assigned the global motion representation 50 according to the present invention. In such a case, half the global motion, in the case where the constructed frame 30 is positioned at a time instance ti midway between the time ti−1 of the previous frame and the time ti+1 of the reference frame, moves the reference group 12, when applied towards the constructed frame 30, to the position of the current group 32 of image elements 31 to be determined. This means that the motion representations assigned to the uncovered groups 12 in the reference frame 10 are used when selecting the particular reference group to utilize as a basis for determining the pixel values of the image elements 31 in the group 32 to be determined. The reference group to use is, thus, the uncovered group 12 having a motion representation that passes through the group 32 in the constructed frame 30.
  • The property values of the image elements in the group are then estimated in step S31 based on the property values of the reference group. In an embodiment, this estimating step simply involves assigning the property values of the reference group to the respective image elements of the group. The property values may also be low pass filtered, as these groups may otherwise become too sharp compared to other groups, which often become somewhat blurred as a consequence of the bidirectional interpolation. The method then ends.
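  • Steps S30 and S31 can be sketched as below for a constructed frame midway between the two existing frames. The temporal factor alpha, the tolerance of the intersection test and the data layout are illustrative assumptions, not details mandated by the text.

```python
import numpy as np

def estimate_group_values(target_pos, uncovered_groups, alpha=0.5):
    """target_pos: (x, y) of the group to construct in the new frame.
    uncovered_groups: iterable of (position, motion, values) tuples for
        the uncovered groups of the reference frame, where values are
        the groups' property (pixel) values.
    alpha: temporal position of the constructed frame (0.5 = midway)."""
    target = np.asarray(target_pos, dtype=float)
    for pos, motion, values in uncovered_groups:
        # Scale the assigned motion by the temporal distance; if the
        # moved group lands on the target, use it as reference group.
        hit = np.asarray(pos, dtype=float) + alpha * np.asarray(motion)
        if np.allclose(hit, target, atol=0.5):
            return values  # simplest embodiment: copy the property values
    return None  # non-uncovered groups are interpolated bidirectionally
```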
  • It is anticipated by the present invention that for other groups of image elements in the frame to be constructed, a traditional bidirectional interpolation of property values based on a reference group in a preceding frame and another reference group in the following frame is performed according to prior art techniques. However, uncovered groups are handled according to the embodiment described above as they do not have any associated group in the previous (or following) frame.
  • In the above described embodiment a single reference group is utilized in the estimation of the property values for a current group. FIGS. 8 and 9 illustrate another embodiment utilizing two reference groups in different reference frames in the estimation. The method continues from step S30 of FIG. 7. A next step S40 identifies a second reference group 42 of at least one image element in a second reference frame 40 associated with a second different time instance in the video sequence 1. The two reference frames 10, 40 are positioned on a time basis on a same side in the video sequence 1 relative the constructed frame 30. Thus, the two reference frames 10, 40 could be two preceding frames, such as of time instances ti−1 and ti−3, or two following frames, such as of time instances ti+1 and ti+3, relative the time instance ti of the constructed frame 30.
  • The second reference group 42 is identified in step S40 based on a motion representation assigned to the reference group 42. In a preferred embodiment, the second reference group 42 is identified as a group in the second reference frame 40 having an assigned motion representation pointing towards the first reference group 12 in the first reference frame 10.
  • The next step S41 extrapolates the property values of the image elements in the group 32 based on the property values of the first 12 and second 42 reference groups. Such extrapolation procedures are well known in the art and may, for instance, involve assigning different weights to the property values of the first reference group 12 and the second reference group 42, to thereby weight up the values of the first reference group 12, which is closer in time to the constructed frame 30 than the second reference group 42.
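  • A weighted extrapolation from the two reference groups might look like the following. The weights shown correspond to linear extrapolation with the frame spacing mentioned above (reference frames at ti+1 and ti+3, target at ti) and are an assumption; the text only requires that the closer first reference group is weighted up.

```python
import numpy as np

def extrapolate_property_values(first_ref, second_ref,
                                w_first=1.5, w_second=-0.5):
    """first_ref: property values of the temporally closer reference
    group; second_ref: property values of the farther reference group.
    With the assumed spacing, v = 1.5 * first - 0.5 * second linearly
    extends the trend of the two references to the constructed frame."""
    out = (w_first * np.asarray(first_ref, dtype=float)
           + w_second * np.asarray(second_ref, dtype=float))
    return np.clip(out, 0, 255)  # clamp to an assumed 8-bit value range
```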
  • FIG. 10 illustrates the advantages of utilizing the present invention during frame rate up-conversion as compared to prior art techniques lacking any true motion estimation for uncovered groups. The picture to the left has been constructed from a video sequence using prior art interpolation and extrapolation techniques. The corresponding picture to the right has been constructed according to the present invention. As is better seen in the two magnified picture portions, the present invention provides a more correct construction of the linesman to the right in the pictures.
  • The present invention is not only advantageously used in connection with frame rate up-conversion. The present invention can also be used for refining a motion vector field from the coded bit stream. This means that also uncovered groups previously having no assigned motion vectors will be assigned motion representations according to the invention.
  • Another application of the invention is error concealment. A distorted frame, or part of a frame, can be replaced by unidirectional or bidirectional prediction using the refined vector field produced by the invention. The invention can also be used to obtain a predicted motion vector field from a reconstructed motion vector field as a means to obtain better coding efficiency for a next frame to be decoded.
  • FIG. 11 is a schematic block diagram of a device for motion estimation for a group of at least one image element in a frame of a video sequence. A global motion determiner 110 is arranged in the device 100 for determining a representation of a global motion of image element property values from at least a portion of a reference frame to at least a portion of a current frame in the sequence. The determiner 110 is preferably connected to a set provider 140, which is arranged for providing a vector set comprising, for each image element group in the portion of the frame, a displacement vector referring to a reference group in the reference frame. The set provider 140 can fetch this set from an internal or external video codec, or include functionality for estimating the displacement vectors in a motion estimation search. The determiner 110 preferably generates the global motion representation as one of the previously described position-dependent global motion vectors, by determining matrix A and vector b of the global motion representation.
  • The device 100 also comprises a group identifier 120 for identifying uncovered groups of at least one image element each in an uncovered region of the frame based on the global motion representation from the determiner 110. This identifier 120 preferably identifies the uncovered groups as groups in the frame that do not have any associated group in the reference frame when applying the global motion from the groups in the frame to the reference frame. In a typical implementation, one then ends up outside the boundaries of the reference frame.
  • A motion assigner 130 assigns the global motion representation as motion representation or vector for those uncovered groups identified by the group identifier 120.
  • The device 100 optionally but preferably comprises a motion comparator 150 arranged for comparing motion representations of a set of groups. These groups are not present in the uncovered region of the frame but are neighbors to an uncovered group. The comparator 150 compares the motion representation of each of these neighboring groups to the global motion representation from the determiner 110 and investigates whether at least a minimum number of the motion representations differ significantly, i.e. with at least a minimum difference, from the global motion representation. This comparison is preferably performed as previously described herein.
  • If there is a local motion divergence in connection with the uncovered group, i.e. at least a minimum number of the tested neighboring motion representations differ significantly from the global motion representation, the motion assigner 130 assigns a new motion representation to the uncovered group as a replacement of the global motion representation. This new motion representation is the motion representation of one of the neighboring groups having a significantly differing motion relative the global motion, or is calculated based on at least a portion of the neighboring motions differing significantly from the global motion.
  • In an alternative embodiment, the motion comparator 150 also compares those neighboring motion representations that significantly differed from the global motion representation with each other. The comparator 150 then only signals the assigner 130 to re-assign the motion representation for the uncovered group if these neighboring motion representations do not differ significantly, i.e. with not more than a maximum difference, from each other. The previously described comparison embodiments can be utilized by the comparator 150 for investigating this criterion. This means that the assigner 130 only assigns a new motion representation to the uncovered group if these two criteria are fulfilled as determined by the comparator 150.
  • If an uncovered group gets a re-assigned local motion representation by the assigner 130, the group identifier 120 preferably identifies other uncovered groups present on a same group row or column as the uncovered group but further away from the neighboring groups present in the remaining frame portion. In such a case, the motion assigner 130 re-assigns motion representations also for this (these) uncovered group(s). The re-assigned motion representation is the same as was previously assigned to the uncovered group adjacent the border between the uncovered region and the remaining frame region or a motion representation calculated at least partly therefrom, such as through extrapolation.
  • The units 110 to 150 of the motion estimating device 100 can be provided in hardware, software and/or a combination of hardware and software. The units 110 to 150 can be implemented in a video or frame processing terminal or server, such as implemented in or connected to a node of a wired or wireless communications system. Alternatively, the units 110 to 150 of the motion estimating device 100 can be arranged in a user terminal, such as a TV decoder, computer, mobile telephone, or other user appliance having or being connected to a decoder and/or an image rendering device.
  • FIG. 12 is a schematic block diagram of a device 200 for estimating property values of a group of at least one image element in a constructed frame of a video sequence. The device 200 comprises a motion estimating device 100 according to the present invention, illustrated in FIG. 11 and disclosed above. The motion estimating device 100 is arranged for performing motion estimation on a reference frame associated with a previous or following time instance in the video sequence relative the constructed frame. The motion estimation performed by the device 100 assigns global or local motion representations to at least one, preferably all, uncovered groups in the reference frame.
  • A group selector 210 is provided in the device 200 for selecting a reference group among the uncovered groups in the reference frame. The selector 210 preferably selects the reference group as an uncovered group having an assigned motion representation that intersects the group in the constructed frame. In other words, one passes straight through the group when traveling along the motion representation of the reference group from the reference frame towards another previous or following frame in the sequence.
  • The device 200 also comprises a value estimator 220 arranged for estimating the property values of the group based on the property values of the reference group selected by the group selector 210. The estimator 220 preferably assigns the property values of the reference group to the corresponding image elements of the group in the constructed frame.
  • In a preferred embodiment, the group selector 210 is also arranged for selecting a second reference group in a second reference frame in the video sequence. This second reference frame is preferably positioned further from the constructed frame regarding frame times as compared to the first reference frame. The second group is identified by the group selector 210 based on the motion representation assigned to the second reference group. The selector 210 typically selects the second reference group as a group in the second frame having an assigned motion representation pointing towards the first reference group in the first reference frame.
  • The estimator 220 then estimates the property values of the group based on the property values of both the first and second reference groups. This value estimation is performed as a value extrapolation, preferably utilizing different weights for the values of the first and second reference groups to thereby upweight those reference property values originating from the reference group that is positioned in a reference frame closer in time to the constructed frame relative the other reference frame.
  • The units 100, 210 and 220 of the group estimating device 200 can be provided in hardware, software and/or a combination of hardware and software. The units 100, 210 and 220 can be implemented in a video or frame processing terminal or server, such as implemented in or connected to a node of a wired or wireless communications system. Alternatively, the units 100, 210 and 220 of the group estimating device 200 can be arranged in a user terminal, such as a TV decoder, computer, mobile telephone, or other user appliance having or being connected to a decoder and/or an image rendering device.
  • It will be understood by a person skilled in the art that various modifications and changes may be made to the present invention without departure from the scope thereof, which is defined by the appended claims.
  • REFERENCES
    • [1] Chen, Y.-K., Vetro, A., Sun, H., & Kung, S.-Y., December 1998, Frame Rate Up-Conversion Using Transmitted True Motion, In IEEE Workshop on Multimedia Signal Processing.
    • [2] Wiegand, T., Sullivan, G. J., Bjontegaard, G., & Luthra, A., July 2003, Overview of the H.264/AVC Video Coding Standard, IEEE Transactions on Circuits and Systems for Video Technology, pages 1-19.

Claims (22)

1. A method of motion estimation for a group of at least one image element in a frame associated with a time instance in a video sequence, said method comprising the steps of:
determining a representation of a global motion of image element property values from at least a portion of a previous frame associated with a previous time instance in said video sequence to at least a portion of said frame;
identifying an uncovered group of at least one image element in an uncovered region of said frame based on said global motion representation; and
assigning said global motion representation to said uncovered group as motion estimation for said uncovered group.
2. The method according to claim 1, wherein said identifying step comprises identifying said uncovered group as a group in said frame that does not have an associated group in said previous frame when applying said global motion from said group in said frame to said previous frame.
3. The method according to claim 1, further comprising providing a motion set comprising, for each group in said at least a portion of said frame, a respective motion representation referring to an associated group in said previous frame, wherein said determining step comprises determining, based on at least a portion of the motion representations of said motion set, said global motion representation.
4. The method according to claim 3, wherein said determining step comprises determining, based on said at least a portion of the motion representations of said motion set, a position-dependent global motion representation.
5. The method according to claim 3, wherein said determining step comprises estimating, based on said at least a portion of the motion representations of said motion set, elements of a matrix A = [[a11, a12], [a21, a22]] and a vector b = [b1, b2]^T by a least squares method to determine said position-dependent global motion representation as having the formula:

v = Ax + b

where v = [vx, vy]^T is said global motion representation, vx is a first vector component of said global motion representation in a first direction, vy is a second vector component of said global motion representation in a second perpendicular direction, and x = [x, y]^T is an image element position in said frame.
6. The method according to claim 1, further comprising the steps of:
comparing, for said uncovered group having a neighboring set of neighboring groups not identified as belonging to said uncovered region, motion representations associated with said neighboring groups of said neighboring set with said global motion representation; and
assigning, to said uncovered group, a motion representation provided based on a motion representation associated with a neighboring group of said neighboring set if at least a minimum number of said motion representations differ from said global motion representation with at least a minimum difference.
7. The method according to claim 6, further comprising comparing said motion representations differing from said global motion representation with at least said minimum difference, wherein said assigning step comprises assigning, to said uncovered group, said motion representation provided based on said motion representation associated with said neighboring group of said neighboring set if said motion representations differing from said global motion representation with at least said minimum difference differ from each other with no more than a maximum difference.
8. The method according to claim 6, further comprising the steps of:
identifying at least one uncovered group present on a same group row or group column in said frame as said uncovered group but not having any neighboring groups not identified as corresponding to said uncovered region; and
assigning, to said at least one uncovered group, a motion representation assigned to said uncovered group of said same group row or group column if said motion representation assigned to said uncovered group of said same group row or group column is different from said global motion representation.
9. A method of estimating property values of a group of at least one image element in a frame associated with a time instance in a video sequence relative a first reference frame associated with a first different time instance in said video sequence, said method comprising the steps of:
performing motion estimation for at least one uncovered group in said first reference frame;
selecting a first reference group from said at least one uncovered group based on a respective motion representation assigned to said at least one uncovered group; and
estimating said property values of said group based on the property values of said first reference group.
10. The method according to claim 9, wherein said selecting step comprises selecting said first reference group as an uncovered group having an assigned motion representation intersecting said group in said frame.
11. The method according to claim 9, further comprising identifying a second reference group in a second reference frame associated with a second different time instance in said video sequence based on a motion representation assigned to said second reference group, wherein said estimating step comprises estimating said property values of said group based on the property values of said first reference group and the property values of said second reference group.
12. The method according to claim 11, wherein said identifying step comprises identifying said second reference group as a group in said second reference frame having an assigned motion representation pointing towards said first reference group in said first reference frame.
13. A device for motion estimation for a group of at least one image element in a frame associated with a time instance in a video sequence, said device comprising:
a motion determiner for determining a representation of a global motion of image element property values from at least a portion of a previous frame associated with a previous time instance in said video sequence to at least a portion of said frame;
a group identifier for identifying an uncovered group of at least one image element in an uncovered region of said frame based on said global motion representation; and
a motion assigner for assigning said global motion representation to said uncovered group.
14. The device according to claim 13, wherein said group identifier identifies said uncovered group as a group in said frame that does not have an associated group in said previous frame when applying said global motion from said group in said frame to said previous frame.
15. The device according to claim 13, further comprising a set provider for providing a motion set comprising, for each group in said at least a portion of said frame, a respective motion representation referring to an associated group in said previous frame, wherein said motion determiner determines, based on at least a portion of the motion representations of said motion set, said global motion representation.
16. The device according to claim 13, further comprising a motion comparator for comparing, for said uncovered group having a neighboring set of neighboring groups not identified as belonging to said uncovered region, motion representations associated with said neighboring groups of said neighboring set with said global motion representation, wherein said motion assigner assigns, to said uncovered group, a motion representation provided based on a motion representation associated with a neighboring group of said neighboring set if at least a minimum number of said motion representations differ from said global motion representation with at least a minimum difference.
17. The device according to claim 16, wherein
said motion comparator compares said motion representations differing from said global motion representation with at least said minimum difference, and
said motion assigner assigns, to said uncovered group, said motion representation provided based on said motion representation associated with said neighboring group of said neighboring set if said motion representations differing from said global motion representation with at least said minimum difference differ from each other with no more than a maximum difference.
18. The device according to claim 16, wherein said group identifier identifies at least one uncovered group present on a same group row or group column in said frame as said uncovered group but not having any neighboring groups not identified as corresponding to said uncovered region, and
said motion assigner assigns, to said at least one uncovered group, a motion representation assigned to said uncovered group of said same group row or group column if said motion representation assigned to said uncovered group of said same group row or group column is different from said global motion representation.
19. A device for estimating property values of a group of at least one image element in a frame associated with a time instance in a video sequence relative a first reference frame associated with a first different time instance in said video sequence, said device comprising:
a motion estimation device for performing motion estimation for at least one uncovered group in said first reference frame;
a group selector for selecting a first reference group from said at least one uncovered group based on a motion representation assigned to said first reference group; and
a value estimator for determining said property values of said group based on the property values of said first reference group.
20. The device according to claim 19, wherein said group selector selects said first reference group as an uncovered group having an assigned motion representation intersecting said group in said frame.
21. The device according to claim 19, wherein
said group selector selects a second reference group in a second reference frame associated with a second different time instance in said video sequence based on a motion representation assigned to said second reference group, and
said value estimator is arranged for estimating said property values of said group based on the property values of said first reference group and the property values of said second reference group.
22. The device according to claim 21, wherein said group selector selects said second reference group as a group in said second reference frame having an assigned motion representation pointing towards said first reference group in said first reference frame.
US12/524,281 2007-01-26 2008-01-14 Motion estimation for uncovered frame regions Abandoned US20100027667A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/524,281 US20100027667A1 (en) 2007-01-26 2008-01-14 Motion estimation for uncovered frame regions

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US89751107P 2007-01-26 2007-01-26
PCT/SE2008/050034 WO2008091206A1 (en) 2007-01-26 2008-01-14 Motion estimation for uncovered frame regions
US12/524,281 US20100027667A1 (en) 2007-01-26 2008-01-14 Motion estimation for uncovered frame regions

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE2008/050034 A-371-Of-International WO2008091206A1 (en) 2007-01-26 2008-01-14 Motion estimation for uncovered frame regions

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/977,839 Continuation US9860554B2 (en) 2007-01-26 2015-12-22 Motion estimation for uncovered frame regions

Publications (1)

Publication Number Publication Date
US20100027667A1 true US20100027667A1 (en) 2010-02-04

Family

ID=39644716

Family Applications (3)

Application Number Title Priority Date Filing Date
US12/524,281 Abandoned US20100027667A1 (en) 2007-01-26 2008-01-14 Motion estimation for uncovered frame regions
US12/524,309 Active 2030-08-04 US8498495B2 (en) 2007-01-26 2008-01-14 Border region processing in images
US14/977,839 Active US9860554B2 (en) 2007-01-26 2015-12-22 Motion estimation for uncovered frame regions

Family Applications After (2)

Application Number Title Priority Date Filing Date
US12/524,309 Active 2030-08-04 US8498495B2 (en) 2007-01-26 2008-01-14 Border region processing in images
US14/977,839 Active US9860554B2 (en) 2007-01-26 2015-12-22 Motion estimation for uncovered frame regions

Country Status (4)

Country Link
US (3) US20100027667A1 (en)
EP (2) EP2108177B1 (en)
JP (2) JP5190469B2 (en)
WO (2) WO2008091207A1 (en)

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080115175A1 (en) * 2006-11-13 2008-05-15 Rodriguez Arturo A System and method for signaling characteristics of pictures' interdependencies
US20080115176A1 (en) * 2006-11-13 2008-05-15 Scientific-Atlanta, Inc. Indicating picture usefulness for playback optimization
US20080260045A1 (en) * 2006-11-13 2008-10-23 Rodriguez Arturo A Signalling and Extraction in Compressed Video of Pictures Belonging to Interdependency Tiers
US20090034627A1 (en) * 2007-07-31 2009-02-05 Cisco Technology, Inc. Non-enhancing media redundancy coding for mitigating transmission impairments
US20090034633A1 (en) * 2007-07-31 2009-02-05 Cisco Technology, Inc. Simultaneous processing of media and redundancy streams for mitigating impairments
US20090100482A1 (en) * 2007-10-16 2009-04-16 Rodriguez Arturo A Conveyance of Concatenation Properties and Picture Orderness in a Video Stream
US20090148132A1 (en) * 2007-12-11 2009-06-11 Cisco Technology, Inc. Inferential processing to ascertain plural levels of picture interdependencies
US20090180546A1 (en) * 2008-01-09 2009-07-16 Rodriguez Arturo A Assistance for processing pictures in concatenated video streams
US20090220012A1 (en) * 2008-02-29 2009-09-03 Rodriguez Arturo A Signalling picture encoding schemes and associated picture properties
US20090313668A1 (en) * 2008-06-17 2009-12-17 Cisco Technology, Inc. Time-shifted transport of multi-latticed video for resiliency from burst-error effects
US20090310934A1 (en) * 2008-06-12 2009-12-17 Rodriguez Arturo A Picture interdependencies signals in context of mmco to assist stream manipulation
US20090313662A1 (en) * 2008-06-17 2009-12-17 Cisco Technology Inc. Methods and systems for processing multi-latticed video streams
US20100003015A1 (en) * 2008-06-17 2010-01-07 Cisco Technology Inc. Processing of impaired and incomplete multi-latticed video streams
US20100118978A1 (en) * 2008-11-12 2010-05-13 Rodriguez Arturo A Facilitating fast channel changes through promotion of pictures
US20100215338A1 (en) * 2009-02-20 2010-08-26 Cisco Technology, Inc. Signalling of decodable sub-sequences
US20100302450A1 (en) * 2008-10-29 2010-12-02 Eyal Farkash Video signature
US20110222837A1 (en) * 2010-03-11 2011-09-15 Cisco Technology, Inc. Management of picture referencing in video streams for plural playback modes
US20110320152A1 (en) * 2008-12-17 2011-12-29 Vourc H Sebastien Integrated closed-loop hybridization device built in by construction
US8782261B1 (en) 2009-04-03 2014-07-15 Cisco Technology, Inc. System and method for authorization of segment boundary notifications
US8949883B2 (en) 2009-05-12 2015-02-03 Cisco Technology, Inc. Signalling buffer characteristics for splicing operations of video streams
CN104365095A (en) * 2012-03-30 2015-02-18 阿尔卡特朗讯公司 Method and apparatus for encoding a selected spatial portion of a video stream
US9467696B2 (en) 2009-06-18 2016-10-11 Tech 5 Dynamic streaming plural lattice video coding representations of video
US10218992B2 (en) * 2017-07-24 2019-02-26 Cisco Technology, Inc. Encoding, transmission and decoding of combined high motion and high fidelity content

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU758562B2 (en) * 1999-01-29 2003-03-27 Roche Diagnostics Gmbh Method of identifying N-terminal proBNP
US8840602B2 (en) 2008-07-31 2014-09-23 Acclarent, Inc. Systems and methods for anesthetizing ear tissue
RU2602792C2 (en) * 2011-01-28 2016-11-20 Конинклейке Филипс Электроникс Н.В. Motion vector based comparison of moving objects
CN103136741A (en) * 2011-12-05 2013-06-05 联咏科技股份有限公司 Edge detection method for fixed pattern and circuit
US9071842B2 (en) * 2012-04-19 2015-06-30 Vixs Systems Inc. Detection of video feature based on variance metric
US20130321497A1 (en) * 2012-06-05 2013-12-05 Shenzhen China Star Optoelectronics Technology Co., Ltd. Method of Signal Compensation, Transformation Circuit in Liquid Crystal Panel, and Liquid Crystal Display Device
WO2016142965A1 (en) * 2015-03-10 2016-09-15 日本電気株式会社 Video processing device, video processing method, and storage medium storing video processing program
US10733231B2 (en) 2016-03-22 2020-08-04 Sensormatic Electronics, LLC Method and system for modeling image of interest to users
US10665071B2 (en) 2016-03-22 2020-05-26 Sensormatic Electronics, LLC System and method for deadzone detection in surveillance camera network
US10192414B2 (en) * 2016-03-22 2019-01-29 Sensormatic Electronics, LLC System and method for overlap detection in surveillance camera network
US10475315B2 (en) 2016-03-22 2019-11-12 Sensormatic Electronics, LLC System and method for configuring surveillance cameras using mobile computing devices
US11216847B2 (en) 2016-03-22 2022-01-04 Sensormatic Electronics, LLC System and method for retail customer tracking in surveillance camera network
US10318836B2 (en) * 2016-03-22 2019-06-11 Sensormatic Electronics, LLC System and method for designating surveillance camera regions of interest
US9965680B2 (en) 2016-03-22 2018-05-08 Sensormatic Electronics, LLC Method and system for conveying data from monitored scene via surveillance cameras
US10347102B2 (en) 2016-03-22 2019-07-09 Sensormatic Electronics, LLC Method and system for surveillance camera arbitration of uplink consumption
US10764539B2 (en) 2016-03-22 2020-09-01 Sensormatic Electronics, LLC System and method for using mobile device of zone and correlated motion detection
US11601583B2 (en) 2016-03-22 2023-03-07 Johnson Controls Tyco IP Holdings LLP System and method for controlling surveillance cameras
WO2018174618A1 (en) * 2017-03-22 2018-09-27 한국전자통신연구원 Prediction method and device using reference block
EP3451665A1 (en) 2017-09-01 2019-03-06 Thomson Licensing Refinement of internal sub-blocks of a coding unit
TWI747000B (en) 2018-06-29 2021-11-21 大陸商北京字節跳動網絡技術有限公司 Virtual merge candidates
CN110896492B (en) * 2018-09-13 2022-01-28 阿里巴巴(中国)有限公司 Image processing method, device and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4772947A (en) * 1985-12-18 1988-09-20 Sony Corporation Method and apparatus for transmitting compression video data and decoding the same for reconstructing an image from the received data
US6192080B1 (en) * 1998-12-04 2001-02-20 Mitsubishi Electric Research Laboratories, Inc. Motion compensated digital video signal processing
US6249613B1 (en) * 1997-03-31 2001-06-19 Sharp Laboratories Of America, Inc. Mosaic generation and sprite-based coding with automatic foreground and background separation
US20030103568A1 (en) * 2001-11-30 2003-06-05 Samsung Electronics Co., Ltd. Pixel data selection device for motion compensated interpolation and method thereof
US6618439B1 (en) * 1999-07-06 2003-09-09 Industrial Technology Research Institute Fast motion-compensated video frame interpolator
US20040105493A1 (en) * 2001-06-27 2004-06-03 Tetsujiro Kondo Image processing apparatus and method, and image pickup apparatus
US6940910B2 (en) * 2000-03-07 2005-09-06 Lg Electronics Inc. Method of detecting dissolve/fade in MPEG-compressed video environment

Family Cites Families (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2231749B (en) * 1989-04-27 1993-09-29 Sony Corp Motion dependent video signal processing
SE9201183L (en) * 1992-04-13 1993-06-28 Dv Sweden Ab MAKE ADAPTIVE ESTIMATES UNUSUAL GLOBAL IMAGE INSTABILITIES IN IMAGE SEQUENCES IN DIGITAL VIDEO SIGNALS
JP3490142B2 (en) * 1994-06-24 2004-01-26 株式会社東芝 Moving image encoding method and apparatus
US5929940A (en) * 1995-10-25 1999-07-27 U.S. Philips Corporation Method and device for estimating motion between images, system for encoding segmented images
EP0837602A3 (en) 1996-10-17 1999-10-06 Kabushiki Kaisha Toshiba Letterbox image detection apparatus
JP2935357B2 (en) 1997-06-02 1999-08-16 日本ビクター株式会社 Video signal high-efficiency coding device
US6061400A (en) 1997-11-20 2000-05-09 Hitachi America Ltd. Methods and apparatus for detecting scene conditions likely to cause prediction errors in reduced resolution video decoders and for using the detected information
US7551673B1 (en) * 1999-05-13 2009-06-23 Stmicroelectronics Asia Pacific Pte Ltd. Adaptive motion estimator
US7072398B2 (en) * 2000-12-06 2006-07-04 Kai-Kuang Ma System and method for motion vector generation and analysis of digital video clips
US6337925B1 (en) * 2000-05-08 2002-01-08 Adobe Systems Incorporated Method for determining a border in a complex scene with applications to image masking
FR2811791B1 (en) * 2000-07-13 2002-11-22 France Telecom MOTION ESTIMATOR FOR CODING AND DECODING IMAGE SEQUENCES
US7068852B2 (en) * 2001-01-23 2006-06-27 Zoran Corporation Edge detection and sharpening process for an image
WO2002085026A1 (en) * 2001-04-10 2002-10-24 Koninklijke Philips Electronics N.V. Method of encoding a sequence of frames
KR100441509B1 (en) * 2002-02-25 2004-07-23 삼성전자주식회사 Apparatus and method for transformation of scanning format
US7254268B2 (en) * 2002-04-11 2007-08-07 Arcsoft, Inc. Object extraction
EP1376471A1 (en) * 2002-06-19 2004-01-02 STMicroelectronics S.r.l. Motion estimation for stabilization of an image sequence
WO2005027491A2 (en) * 2003-09-05 2005-03-24 The Regents Of The University Of California Global motion estimation image coding and processing
US7574070B2 (en) * 2003-09-30 2009-08-11 Canon Kabushiki Kaisha Correction of subject area detection information, and image combining apparatus and method using the correction
EP1583364A1 (en) 2004-03-30 2005-10-05 Matsushita Electric Industrial Co., Ltd. Motion compensated interpolation of images at image borders for frame rate conversion
WO2005109899A1 (en) * 2004-05-04 2005-11-17 Qualcomm Incorporated Method and apparatus for motion compensated frame rate up conversion
US8553776B2 (en) * 2004-07-21 2013-10-08 Qualcomm Incorporated Method and apparatus for motion vector assignment
US7447337B2 (en) * 2004-10-25 2008-11-04 Hewlett-Packard Development Company, L.P. Video content understanding through real time video motion analysis
WO2006054257A1 (en) * 2004-11-22 2006-05-26 Koninklijke Philips Electronics N.V. Motion vector field projection dealing with covering and uncovering
US7593603B1 (en) * 2004-11-30 2009-09-22 Adobe Systems Incorporated Multi-behavior image correction tool
US7755667B2 (en) * 2005-05-17 2010-07-13 Eastman Kodak Company Image sequence stabilization method and camera having dual path image sequence stabilization
KR100699261B1 (en) * 2005-06-24 2007-03-27 Samsung Electronics Co., Ltd. Motion error detector, motion error compensator comprising the same, method for detecting motion error and method for compensating motion error
JP5134001B2 (en) * 2006-10-18 2013-01-30 Apple Inc. Scalable video coding with lower layer filtering

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4772947A (en) * 1985-12-18 1988-09-20 Sony Corporation Method and apparatus for transmitting compression video data and decoding the same for reconstructing an image from the received data
US4772947B1 (en) * 1985-12-18 1989-05-30
US6249613B1 (en) * 1997-03-31 2001-06-19 Sharp Laboratories Of America, Inc. Mosaic generation and sprite-based coding with automatic foreground and background separation
US6751350B2 (en) * 1997-03-31 2004-06-15 Sharp Laboratories Of America, Inc. Mosaic generation and sprite-based coding with automatic foreground and background separation
US6192080B1 (en) * 1998-12-04 2001-02-20 Mitsubishi Electric Research Laboratories, Inc. Motion compensated digital video signal processing
US6618439B1 (en) * 1999-07-06 2003-09-09 Industrial Technology Research Institute Fast motion-compensated video frame interpolator
US6940910B2 (en) * 2000-03-07 2005-09-06 Lg Electronics Inc. Method of detecting dissolve/fade in MPEG-compressed video environment
US20040105493A1 (en) * 2001-06-27 2004-06-03 Tetsujiro Kondo Image processing apparatus and method, and image pickup apparatus
US20030103568A1 (en) * 2001-11-30 2003-06-05 Samsung Electronics Co., Ltd. Pixel data selection device for motion compensated interpolation and method thereof
US7720150B2 (en) * 2001-11-30 2010-05-18 Samsung Electronics Co., Ltd. Pixel data selection device for motion compensated interpolation and method thereof

Cited By (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8875199B2 (en) 2006-11-13 2014-10-28 Cisco Technology, Inc. Indicating picture usefulness for playback optimization
US9521420B2 (en) 2006-11-13 2016-12-13 Tech 5 Managing splice points for non-seamless concatenated bitstreams
US20080260045A1 (en) * 2006-11-13 2008-10-23 Rodriguez Arturo A Signalling and Extraction in Compressed Video of Pictures Belonging to Interdependency Tiers
US20080115176A1 (en) * 2006-11-13 2008-05-15 Scientific-Atlanta, Inc. Indicating picture usefulness for playback optimization
US9716883B2 (en) 2006-11-13 2017-07-25 Cisco Technology, Inc. Tracking and determining pictures in successive interdependency levels
US8416859B2 (en) 2006-11-13 2013-04-09 Cisco Technology, Inc. Signalling and extraction in compressed video of pictures belonging to interdependency tiers
US20080115175A1 (en) * 2006-11-13 2008-05-15 Rodriguez Arturo A System and method for signaling characteristics of pictures' interdependencies
US20090034627A1 (en) * 2007-07-31 2009-02-05 Cisco Technology, Inc. Non-enhancing media redundancy coding for mitigating transmission impairments
US20090034633A1 (en) * 2007-07-31 2009-02-05 Cisco Technology, Inc. Simultaneous processing of media and redundancy streams for mitigating impairments
US8804845B2 (en) 2007-07-31 2014-08-12 Cisco Technology, Inc. Non-enhancing media redundancy coding for mitigating transmission impairments
US8958486B2 (en) 2007-07-31 2015-02-17 Cisco Technology, Inc. Simultaneous processing of media and redundancy streams for mitigating impairments
US20090100482A1 (en) * 2007-10-16 2009-04-16 Rodriguez Arturo A Conveyance of Concatenation Properties and Picture Orderness in a Video Stream
US20090148056A1 (en) * 2007-12-11 2009-06-11 Cisco Technology, Inc. Video Processing With Tiered Interdependencies of Pictures
US8873932B2 (en) 2007-12-11 2014-10-28 Cisco Technology, Inc. Inferential processing to ascertain plural levels of picture interdependencies
US20090148132A1 (en) * 2007-12-11 2009-06-11 Cisco Technology, Inc. Inferential processing to ascertain plural levels of picture interdependencies
US8718388B2 (en) 2007-12-11 2014-05-06 Cisco Technology, Inc. Video processing with tiered interdependencies of pictures
US8804843B2 (en) 2008-01-09 2014-08-12 Cisco Technology, Inc. Processing and managing splice points for the concatenation of two video streams
US20090180546A1 (en) * 2008-01-09 2009-07-16 Rodriguez Arturo A Assistance for processing pictures in concatenated video streams
US8416858B2 (en) 2008-02-29 2013-04-09 Cisco Technology, Inc. Signalling picture encoding schemes and associated picture properties
US20090220012A1 (en) * 2008-02-29 2009-09-03 Rodriguez Arturo A Signalling picture encoding schemes and associated picture properties
US8886022B2 (en) 2008-06-12 2014-11-11 Cisco Technology, Inc. Picture interdependencies signals in context of MMCO to assist stream manipulation
US20090310934A1 (en) * 2008-06-12 2009-12-17 Rodriguez Arturo A Picture interdependencies signals in context of mmco to assist stream manipulation
US9819899B2 (en) 2008-06-12 2017-11-14 Cisco Technology, Inc. Signaling tier information to assist MMCO stream manipulation
US9407935B2 (en) 2008-06-17 2016-08-02 Cisco Technology, Inc. Reconstructing a multi-latticed video signal
US20100003015A1 (en) * 2008-06-17 2010-01-07 Cisco Technology Inc. Processing of impaired and incomplete multi-latticed video streams
US9723333B2 (en) 2008-06-17 2017-08-01 Cisco Technology, Inc. Output of a video signal from decoded and derived picture information
US20090313668A1 (en) * 2008-06-17 2009-12-17 Cisco Technology, Inc. Time-shifted transport of multi-latticed video for resiliency from burst-error effects
US9350999B2 (en) 2008-06-17 2016-05-24 Tech 5 Methods and systems for processing latticed time-skewed video streams
US8971402B2 (en) 2008-06-17 2015-03-03 Cisco Technology, Inc. Processing of impaired and incomplete multi-latticed video streams
US8699578B2 (en) 2008-06-17 2014-04-15 Cisco Technology, Inc. Methods and systems for processing multi-latticed video streams
US8705631B2 (en) 2008-06-17 2014-04-22 Cisco Technology, Inc. Time-shifted transport of multi-latticed video for resiliency from burst-error effects
US20090313662A1 (en) * 2008-06-17 2009-12-17 Cisco Technology Inc. Methods and systems for processing multi-latticed video streams
US8411752B2 (en) * 2008-10-29 2013-04-02 Nds Limited Video signature
US20100302450A1 (en) * 2008-10-29 2010-12-02 Eyal Farkash Video signature
US8259817B2 (en) * 2008-11-12 2012-09-04 Cisco Technology, Inc. Facilitating fast channel changes through promotion of pictures
US20100118974A1 (en) * 2008-11-12 2010-05-13 Rodriguez Arturo A Processing of a video program having plural processed representations of a single video signal for reconstruction and output
US8259814B2 (en) 2008-11-12 2012-09-04 Cisco Technology, Inc. Processing of a video program having plural processed representations of a single video signal for reconstruction and output
US20100118979A1 (en) * 2008-11-12 2010-05-13 Rodriguez Arturo A Targeted bit appropriations based on picture importance
US20100118978A1 (en) * 2008-11-12 2010-05-13 Rodriguez Arturo A Facilitating fast channel changes through promotion of pictures
US8761266B2 (en) 2008-11-12 2014-06-24 Cisco Technology, Inc. Processing latticed and non-latticed pictures of a video program
US8320465B2 (en) 2008-11-12 2012-11-27 Cisco Technology, Inc. Error concealment of plural processed representations of a single video signal received in a video program
US8681876B2 (en) * 2008-11-12 2014-03-25 Cisco Technology, Inc. Targeted bit appropriations based on picture importance
US8781774B2 (en) * 2008-12-17 2014-07-15 Sagem Defense Securite Integrated closed-loop hybridization device built in by construction
US20110320152A1 (en) * 2008-12-17 2011-12-29 Vourc'h Sebastien Integrated closed-loop hybridization device built in by construction
US8326131B2 (en) 2009-02-20 2012-12-04 Cisco Technology, Inc. Signalling of decodable sub-sequences
US20100215338A1 (en) * 2009-02-20 2010-08-26 Cisco Technology, Inc. Signalling of decodable sub-sequences
US8782261B1 (en) 2009-04-03 2014-07-15 Cisco Technology, Inc. System and method for authorization of segment boundary notifications
US9609039B2 (en) 2009-05-12 2017-03-28 Cisco Technology, Inc. Splice signalling buffer characteristics
US8949883B2 (en) 2009-05-12 2015-02-03 Cisco Technology, Inc. Signalling buffer characteristics for splicing operations of video streams
US9467696B2 (en) 2009-06-18 2016-10-11 Tech 5 Dynamic streaming plural lattice video coding representations of video
US20110222837A1 (en) * 2010-03-11 2011-09-15 Cisco Technology, Inc. Management of picture referencing in video streams for plural playback modes
CN104365095A (en) * 2012-03-30 2015-02-18 Alcatel-Lucent Method and apparatus for encoding a selected spatial portion of a video stream
US10218992B2 (en) * 2017-07-24 2019-02-26 Cisco Technology, Inc. Encoding, transmission and decoding of combined high motion and high fidelity content

Also Published As

Publication number Publication date
EP2122573B1 (en) 2018-05-30
JP5190469B2 (en) 2013-04-24
US9860554B2 (en) 2018-01-02
EP2108177A4 (en) 2017-08-23
EP2122573A1 (en) 2009-11-25
WO2008091206A1 (en) 2008-07-31
EP2108177A1 (en) 2009-10-14
EP2122573A4 (en) 2017-07-19
JP5254997B2 (en) 2013-08-07
US8498495B2 (en) 2013-07-30
EP2108177B1 (en) 2019-04-10
US20090316997A1 (en) 2009-12-24
WO2008091207A8 (en) 2008-09-12
US20160112717A1 (en) 2016-04-21
JP2010517417A (en) 2010-05-20
WO2008091207A1 (en) 2008-07-31
JP2010517416A (en) 2010-05-20

Similar Documents

Publication Publication Date Title
US9860554B2 (en) Motion estimation for uncovered frame regions
US20230412822A1 (en) Effective prediction using partition coding
US8837591B2 (en) Image block classification
US9641839B2 (en) Computing predicted values for motion vectors
US6618439B1 (en) Fast motion-compensated video frame interpolator
US6711211B1 (en) Method for encoding and decoding video information, a motion compensated video encoder and a corresponding decoder
EP2039171B1 (en) Weighted prediction for video coding
US8514939B2 (en) Method and system for motion compensated picture rate up-conversion of digital video using picture boundary processing
US10887587B2 (en) Distance weighted bi-directional intra prediction
US20030189980A1 (en) Method and apparatus for motion estimation between video frames
US7212573B2 (en) Method and/or apparatus for determining minimum positive reference indices for a direct prediction mode
MXPA05009250A (en) Fast mode decision algorithm for intra prediction for advanced video coding
US11792393B2 (en) Inter prediction encoding and decoding method using combination of prediction blocks, and computer-readable storage medium bitstream to be decoded thereby
JP2007515115A (en) Improved method for calculating interpolated pixel values
EP2532163B1 (en) Improved method and apparatus for sub-pixel interpolation
KR20140005232A (en) Methods and devices for forming a prediction value
CN109089116A (en) Method, apparatus, device and medium for determining the Skip type of a macroblock
WO2022174782A1 (en) On boundary padding samples generation in image/video coding
Kamath et al. Sample-based DC prediction strategy for HEVC lossless intra prediction mode
Na et al. A multi-layer motion estimation scheme for spatial scalability in H.264/AVC scalable extension

Legal Events

Date Code Title Description
AS Assignment

Owner name: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL), SWEDEN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SAMUELSSON, JONATAN;ANDERSSON, KENNETH;PRIDDLE, CLINTON;SIGNING DATES FROM 20090710 TO 20090727;REEL/FRAME:024237/0583

STCB Information on status: application discontinuation

Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION