US20100177821A1 - Moving picture coding apparatus

Info

Publication number
US20100177821A1
US20100177821A1
Authority
US
United States
Prior art keywords
pixels
prediction
block
pixel
available
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US12/591,438
Other versions
US8953678B2
Inventor
Tadakazu Kadoto
Masatoshi Kondo
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Kokusai Electric Inc
Original Assignee
Hitachi Kokusai Electric Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Kokusai Electric Inc filed Critical Hitachi Kokusai Electric Inc
Assigned to HITACHI KOKUSAI ELECTRIC INC. reassignment HITACHI KOKUSAI ELECTRIC INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KADOTO, TADAKAZU, KONDO, MASATOSHI
Publication of US20100177821A1
Application granted
Publication of US8953678B2
Expired - Fee Related

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques

Definitions

  • The present invention provides a moving picture coding apparatus capable of compressing a moving picture without increasing the amount of generated codes and without deteriorating the accuracy of the prediction picture when intra- or inter-prediction is performed in units of pixel blocks.
  • The pixel values of the reference pixels that are not available are calculated based on the pixels in the reference pixel block, and a prediction picture of the block to be predicted is generated by using the calculated pixel values instead of the unavailable reference pixels.
  • an average value of some pixels in the reference pixel block and difference values thereof are obtained.
  • the pixel values of the corresponding reference pixels are obtained based on the obtained average value and difference values.
  • Thus, there is provided a moving picture coding apparatus capable of compressing a moving picture without increasing the amount of generated codes and without deteriorating the accuracy of a prediction picture when intra- or inter-prediction is performed in pixel blocks.
  • FIG. 1 is a configuration diagram of an encoder of H.264
  • FIG. 2 illustrates positions of reference pixels in generating an intra-prediction picture
  • FIGS. 3A and 3B illustrate the order of intra-prediction in a block in H.264
  • FIG. 4 illustrates a relationship between a reference pixel block and a block to be predicted
  • FIG. 5 illustrates a relationship between a pixel line used for padding and pixels (in a horizontal direction) to be padded in accordance with an embodiment of the present invention
  • FIG. 6 illustrates a relationship between a pixel line used for padding and pixels (in a vertical direction) to be padded in accordance with the embodiment of the present invention
  • FIG. 7 illustrates the outline of a padding algorithm in image prediction in accordance with the present invention
  • FIG. 8 is a flowchart illustrating the flow of padding when upper reference pixels are not available
  • FIG. 9 illustrates a reference pixel block and a block to be predicted when the upper reference pixels are not available
  • FIG. 10 illustrates that the reference pixels are padded in the horizontal direction (step 1 ) in the relationship between the reference pixel block and the block to be predicted of FIG. 4 ;
  • FIG. 11 illustrates that the reference pixels are padded in the horizontal direction (step 2 ) in the relationship between the reference pixel block and the block to be predicted of FIG. 4 ;
  • FIG. 12 illustrates that the reference pixels are padded in the horizontal direction (step 3 ) in the relationship between the reference pixel block and the block to be predicted of FIG. 4 ;
  • FIG. 13 illustrates that the reference pixels are padded in the horizontal direction (step 4 ) in the relationship between the reference pixel block and the block to be predicted of FIG. 4 ;
  • FIG. 14 is a flowchart illustrating the flow of padding when left reference pixels are not available
  • FIG. 15 illustrates a reference pixel block and a block to be predicted when the left reference pixels are not available
  • FIG. 16 illustrates that the reference pixels are padded in the vertical direction (step 1 ) in the relationship between the reference pixel block and the block to be predicted of FIG. 4 ;
  • FIG. 17 illustrates that the reference pixels are padded in the vertical direction (step 2 ) in the relationship between the reference pixel block and the block to be predicted of FIG. 4 ;
  • FIG. 18 illustrates that the reference pixels are padded in the vertical direction (step 3 ) in the relationship between the reference pixel block and the block to be predicted of FIG. 4 ;
  • FIG. 19 illustrates a data hierarchy in H.264
  • FIG. 20 illustrates an access unit in H.264
  • FIG. 21 illustrates an example of an access unit, in which padding is set in the image prediction in accordance with the present invention
  • FIG. 22 illustrates a pixel block, in which limitations on modes are generated by a slice boundary/picture edge in conventional H.264.
  • FIG. 23 illustrates a pixel block, in which limitations on modes are generated by using adjacent pixels in inter-prediction of the conventional H.264.
  • FIGS. 4 to 23 which form a part hereof.
  • In accordance with the present invention, in generating a prediction image, when the upper or left reference pixels are available but the pixels on the other side are not, e.g., at a picture edge, at a slice boundary, or when the adjacent pixels are coded by inter-prediction, proper reference pixels are generated by performing padding based on a pixel average and pixel differences of the available reference pixel block, so that the prediction is not restricted by the limitations of the prediction image generation modes. Therefore, when the upper or left reference pixels are available, all of the modes are available even for the pixels at the picture edge and on the slice boundary, and a highly accurate prediction image can be generated. In this way, in accordance with the embodiment of the present invention, the difference between the prediction image and an input image is reduced, thereby improving coding efficiency.
  • one reference pixel block is padded from the other reference pixel block.
  • padding is performed by using the available reference pixel lines 501 and 601 closest to pixels 502 and 602 to be padded.
  • pixels to be padded in a horizontal direction and a pixel line required for performing padding are illustrated in FIG. 5
  • pixels to be padded in a vertical direction and a pixel line required for performing padding are illustrated in FIG. 6 .
  • the basic padding in the image prediction in accordance with the embodiment of the present invention is to generate pixels 705 to be padded from a padding reference pixel line 704 illustrated in FIG. 7 .
  • a pixel average value 701 of the padding pixel line is obtained.
  • differences 702 between the respective pixel values and the pixel average value 701 are obtained.
  • a padding reference pixel 703 of the pixel to be padded is determined. Based on the padding reference pixel 703 , at the respective pixels to be padded, the padding reference pixel 703 and the differences 702 are added to obtain final values of the pixels to be padded.
  • the padding in the horizontal direction is illustrated. When the padding in the vertical direction is performed, the reference pixel line and the pixels to be padded are arranged in the vertical direction.
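The average-and-difference padding outlined above can be sketched as follows. This is an illustrative reconstruction assuming 8-bit samples; the function and variable names are ours, not the patent's.

```python
def pad_line(reference_line, padding_reference_pixel):
    """Pad a line of pixels from an available reference pixel line.

    reference_line: the closest available line of already-coded pixels
                    (the padding reference pixel line 704 in FIG. 7).
    padding_reference_pixel: the single available pixel adjacent to the
                    pixels to be padded (703 in FIG. 7).
    Returns the padded pixel values (705 in FIG. 7).
    """
    # Step 1: the average value of the padding reference pixel line (701).
    average = sum(reference_line) / len(reference_line)
    # Step 2: the difference of each pixel from that average (702).
    differences = [p - average for p in reference_line]
    # Step 3: add each difference to the padding reference pixel to obtain
    # the final padded values; clip to the valid 8-bit sample range.
    return [max(0, min(255, round(padding_reference_pixel + d)))
            for d in differences]
```

For vertical padding the same routine applies, with the reference line and the pixels to be padded arranged vertically.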
  • intra-prediction is performed in the order of the numbers illustrated in FIGS. 3A and 3B .
  • 4×4 intra-prediction will be taken as an example.
  • the padding is performed by using a macroblock including available reference pixels, a pixel average, and a pixel difference.
  • The padding can also be performed in an 8×8 block and in a 16×16 block by using the same method as in the 4×4 block described in this embodiment.
  • First, the uppermost pixel I of the 4 left reference pixels is copied to the position of the reference pixel M (step 1 of FIG. 8).
  • Next, an average value Ave(i_1 to i_4) of the pixel values in the uppermost horizontal line (i_1 to i_4 of FIG. 11) of the left reference pixel block is calculated by the following Eq. 1, and the differences between the respective pixel values and this average are obtained by Eq. 2 (step 2): Ave(i_1 to i_4) = (i_1 + i_2 + i_3 + i_4)/4 (Eq. 1); d_n = i_n - Ave(i_1 to i_4), for n = 1 to 4 (Eq. 2).
  • The differences of Eq. 2 are added to the pixel value of the copied reference pixel M, and the resultant values are padded to the respective corresponding positions as the values of the upper reference pixels (step 3).
  • FIG. 12 an example of padding a reference pixel A is illustrated.
  • However, the upper right reference pixels are not available at the positions of 1, 3, 4, 5, 7, 11, 13, and 15 illustrated in FIG. 3A. Accordingly, the pixel values of E, F, G and H cannot be predicted in view of the standard. Therefore, as illustrated in FIG. 13, the pixel value of the rightmost upper reference pixel D is copied to E, F, G and H (step 4).
  • The upper reference pixels are padded by the processes of steps 1 to 4. Since the reference pixels become available, a prediction image can be generated in all of the modes, with the upper reference pixels treated as "available for Intra_4×4 prediction".
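The four steps above (copy I to M, average the line i_1 to i_4, add the per-pixel differences to the copied M, copy D to E through H) can be sketched as below, assuming 8-bit samples; the function and variable names are illustrative only.

```python
def pad_upper_reference(left_pixels, left_block_top_row):
    """Pad unavailable upper reference pixels for 4x4 intra-prediction
    from the available left reference pixels.

    left_pixels: the left reference pixels I, J, K, L (top to bottom).
    left_block_top_row: the pixels i_1..i_4, the uppermost horizontal
                        line of the left reference pixel block.
    Returns (M, [A, B, C, D], [E, F, G, H]).
    """
    # Step 1: copy the uppermost left pixel I to the position of M.
    m = left_pixels[0]
    # Step 2: average of the uppermost line of the left block (Eq. 1).
    ave = sum(left_block_top_row) / len(left_block_top_row)
    # Step 3: add the differences from the average (Eq. 2) to the
    # copied pixel M to pad A, B, C, D; clip to the 8-bit range.
    abcd = [max(0, min(255, round(m + (p - ave)))) for p in left_block_top_row]
    # Step 4: the upper right pixels cannot be predicted, so the
    # rightmost padded pixel D is copied to E, F, G and H.
    efgh = [abcd[-1]] * 4
    return m, abcd, efgh
```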
  • First, the leftmost pixel A of the 4 upper reference pixels is copied to the position of the reference pixel M (step 11 of FIG. 14).
  • Next, an average value Ave(a_1 to a_4) of the pixel values (a_1 to a_4 of FIG. 17) in the leftmost vertical line of the upper reference pixel block is calculated by the following Eq. 3 (step 12 of FIG. 14): Ave(a_1 to a_4) = (a_1 + a_2 + a_3 + a_4)/4 (Eq. 3).
  • FIG. 18 an example of padding a reference pixel I is illustrated.
  • The left reference pixels are padded by the processes of the above steps 11 to 13. Since the left reference pixels become available, in the same way as for the upper reference pixels, a prediction image can be generated in all of the modes, with the reference pixels treated as "available for Intra_4×4 prediction".
  • a prediction image is generated by replacing all the pixel values of the block to be predicted by a median that is, e.g., 512 when an input format is 10 bits.
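This fallback can be expressed directly, since the mid-value of an n-bit sample range is 2^(n-1) (512 for a 10-bit input format, 128 for an 8-bit one); the function name below is ours.

```python
def median_prediction_block(block_size, bit_depth):
    """Fill a prediction block with the mid-range sample value, used
    when neither the upper nor the left reference pixels are available:
    512 for 10-bit input, 128 for 8-bit input."""
    median = 1 << (bit_depth - 1)  # half of the full sample range
    return [[median] * block_size for _ in range(block_size)]
```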
  • In H.264, a network abstraction layer (NAL) including NAL units 1703 and 1704 is defined between a moving picture coding layer, which includes coding data 1701 and a parameter set 1702 and performs moving picture coding, and a lower system, such as the MPEG-2 system 1705, for transmitting and accumulating the coded information.
  • NAL network abstraction layer
  • The bit stream is delivered to the lower system 1705 on a per-NAL-unit basis.
  • FIG. 19 the position of the NAL unit in H.264 is illustrated.
  • An AU delimiter 1801 is a start code that represents the head of the access unit.
  • a sequence parameter set (SPS) 1802 is a header including information on coding of an entire sequence such as the profile and level of a primary coded picture (PCP) image.
  • a picture parameter set (PPS) 1803 is a header that represents the coding mode of an entire picture.
  • Supplemental enhancement information (SEI) 1804 is a header including certain additional information such as timing information of each picture and random access information.
  • a primary coded picture (PCP) 1805 is an NAL unit consisting of at least one slice data.
  • a redundant coded picture (RCP) 1806 which is an NAL unit including macroblock data such as PCP, is redundancy data that can be used when PCP is lost by errors.
  • An end of sequence (EOS) 1807 is a part that represents the end of a sequence.
  • An end of stream (EOS) 1808 is a part that represents the end of a stream. In H.264, it is defined that the access unit includes the AU delimiter 1801 to the EOS 1808 arranged in order.
  • When the padding of the image prediction described in this embodiment is performed based on H.264, a flag for determining intra-padding is added to the SPS 1802 illustrated in FIG. 20. At a decoder, whether the intra-padding is to be performed or not is determined based on this flag.
  • the SPS 1802 is the header including the information on the coding of the entire sequence such as the profile and level of the PCP image.
  • the final parameter of the SPS is vui_parameters_present_flag that represents whether the syntax structure of video usability information (VUI) that is a data structure related to video display information exists or not. After this vui_parameters_present_flag, a flag of 1 bit that represents whether the intra-padding in accordance with the embodiment of the present invention is to be performed or not is added.
  • VUI video usability information
  • a padding determination flag 1900 related to the padding of the image prediction described in this embodiment is added.
  • When the decoder performs decoding, as in the conventional method, the padding flag information 1900 is decoded after the PPS 1803 is decoded, to determine whether the padding is to be performed or not.
  • the decoder can perform decoding by using a prediction image generation block, in the same way as the encoder.
  • the modes that can be used for generating the prediction image on the slice boundary and at the picture edge are limited when the intra-prediction is used.
  • For example, when 1 slice is set as 1 macroblock line (16 lines) for a screen size of 1920×1080, in units of 4×4 pixels, limitations on the available modes arise in about 25% of the region, i.e., in the uppermost 4×4 blocks and at the picture edge, as illustrated in FIG. 22.
  • Similarly, mode limitations arise in about 50% of the region in units of 8×8 pixels, and in all of the macroblocks in units of 16×16 pixels.
  • In the conventional method, such blocks are merely processed by using the median value. In accordance with the embodiment, however, since there are no limitations on the available modes in generating the prediction image, it is possible to generate a highly accurate prediction image.
  • the prediction image is generated by using the intra-prediction and the inter-prediction

Abstract

A moving picture coding apparatus divides a picture into basic blocks and generates a prediction image of a block to be predicted in a basic block by using adjacent pixels in reference pixel blocks adjacent to the block to be predicted as reference pixels to perform predictive coding of a moving picture. When some of the reference pixels are not available, pixel values of the reference pixels that are not available are calculated based on pixels in the reference pixel blocks. The prediction image of the block to be predicted is generated by using the calculated pixel values instead of the reference pixels that are not available.

Description

    FIELD OF THE INVENTION
  • The present invention relates to a moving picture coding apparatus; and more particularly, to a moving picture coding apparatus capable of increasing prediction accuracy when intra- or inter-prediction is performed in pixel blocks based on the standards such as MPEG-2 and H.264.
  • BACKGROUND OF THE INVENTION
  • Nowadays, the amount of data transmitted in the form of a moving picture is increasing day by day. For example, let us consider the amount of data of an analog television. Currently, in the case of digitizing Japanese standard television broadcasting, the number of pixels is 720 in a horizontal direction and 480 in a vertical direction. Each pixel has a luminance component of 8 bits and two chrominance components of 8 bits each. A moving picture has 30 frames per second. Currently, since the data ratio of the chrominance components to the luminance component is 1/2, each pixel effectively carries 8 + 8×1/2 = 12 bits, and the amount of data for one second is 720×480×12×30 = 124,416,000 bits, so a transmission rate of about 124 Mbps is required.
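The data-rate figure above follows from the pixel count, the effective bits per pixel, and the frame rate, interpreting the stated 1/2 ratio as applying to the two chrominance components combined; the short computation below re-derives it, with our own variable names.

```python
# 720x480 pixels at 30 frames per second; the two 8-bit chrominance
# components together carry half the data of the 8-bit luminance
# component, i.e. 4 extra bits per pixel on average.
width, height = 720, 480
bits_per_pixel = 8 + 8 // 2
frames_per_second = 30
bits_per_second = width * height * bits_per_pixel * frames_per_second
print(bits_per_second)  # 124416000, i.e. about 124 Mbps
```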
  • Further, an optical fiber line currently supplied as home broadband has a transmission rate of about 100 Mbps, and thus such an image cannot be transmitted without compression. The amount of data of terrestrial digital television broadcasting, which is to replace analog broadcasting in 2011, is known to be 1.5 Gbps. Accordingly, a highly efficient compression technology may be regarded as one of the technologies required in the future. Currently, H.264/AVC (hereinafter, referred to as H.264) is suggested as the standard of the highly efficient compression technology. H.264 is the up-to-date international standard of moving picture coding developed by the joint video team (JVT) jointly established in December 2001 by the video coding experts group (VCEG) of the international telecommunication union telecommunication standardization sector (ITU-T) and the moving picture experts group (MPEG) of the international organization for standardization (ISO)/international electrotechnical commission (IEC).
  • The ITU-T recommendation was approved in May 2003. In addition, it was standardized by the ISO/IEC joint technical committee (JTC) 1 as MPEG-4 Part 10 Advanced Video Coding (AVC) in 2003.
  • H.264 is characterized in that the same picture quality can be realized by coding efficiency which is about twice as high as that of the conventional MPEG-2 and MPEG-4, that inter frame prediction, quantization, and entropy coding are adopted as a compression algorithm, and that H.264 can be widely used not only at a low bit rate of a mobile telephone or the like but also at a high bit rate of a high vision TV or the like.
  • In addition, the ITU-T recommendations can be downloaded from the URL stated in the following Non-Patent Document 1.
  • [Non-Patent Document 1] “ITU-T Recommendation H.264 Advanced video coding for generic audiovisual services”, [online], November 2007, TELECOMMUNICATION STANDARDIZATION SECTOR OF ITU [searched on Dec. 12, 2008], the Internet <URL: http://www.itu.int/rec/T-REC-H.264-200711-I/en>
  • In order to describe problems to be solved by the present invention, a prediction method of H.264 will be simply described with reference to FIGS. 1 to 3B.
  • In H.264, intra-prediction 104 for generating an intra-prediction image predicted by using correlations within a picture and inter-prediction 105 for generating an inter-prediction image predicted by using correlations between pictures are performed. A difference between the generated prediction image and an input picture 101 is obtained, and orthogonal transform, e.g., discrete cosine transform (DCT), 102 and quantization (Q) 103 are performed on the differential data. Then, coding 110 is performed on the quantized data. In H.264, only the differential data is coded and transmitted, thereby realizing high coding efficiency.
  • Here, the reference numeral 107 indicates a deblocking filter standardized in H.264, and the reference numeral 108 is inverse orthogonal transform, e.g., inverse discrete cosine transform (IDCT), for performing an inverse processing to the processing of the orthogonal transform 102. Further, the reference numeral 109 indicates inverse quantization (IQ) for performing an inverse processing to the processing of the quantization 103. The filter 107, the inverse orthogonal transform 108 and the inverse quantization 109 perform the processing to obtain reconstructed pictures in an encoder. The reconstructed pictures for a plurality of previous frames are stored in a frame memory 106 and are retrieved to the inter-prediction 105.
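The data flow of FIG. 1 can be summarized as a schematic loop. Every callable below is a stand-in for the numbered block, not an H.264 API, and pictures are represented abstractly (plain numbers suffice to show the flow).

```python
def encode_frame(input_picture, frame_memory, predict, dct, quantize,
                 entropy_code, inverse_quantize, idct, deblock):
    """Schematic of the encoder loop of FIG. 1: only the difference
    between the prediction image and the input picture is transformed,
    quantized and entropy coded, and the inverse path reconstructs the
    picture that a decoder would see."""
    prediction = predict(input_picture, frame_memory)       # 104 / 105
    residual = input_picture - prediction                   # difference
    coefficients = quantize(dct(residual))                  # 102, 103
    bitstream = entropy_code(coefficients)                  # 110
    # Inverse quantization (109), inverse transform (108) and the
    # deblocking filter (107) yield the reconstructed picture stored
    # in the frame memory (106) for inter-prediction of later frames.
    reconstructed = deblock(prediction + idct(inverse_quantize(coefficients)))
    frame_memory.append(reconstructed)
    return bitstream
```

With identity stand-ins for the transforms, the loop reduces to coding the residual and storing the reconstructed input, which is exactly the property the figure illustrates.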
  • The intra-prediction generates the prediction picture based on a correlation between adjacent pixels. In the intra-prediction, the prediction picture is generated by using correlations between a pixel to be predicted and its adjacent pixels, wherein pixels in a left column and an upper row of a block to be predicted are used. In FIG. 2, for example, reference pixels used for generating a prediction picture of 4×4 intra-prediction are illustrated.
  • In H.264/AVC, it is possible to generate prediction pictures on a basis of block of 4×4 pixels (hereinafter, referred to as 4×4 block), 8×8 pixels (hereinafter, referred to as 8×8 block), or 16×16 pixels (hereinafter, referred to as 16×16 block). As available modes, total 22 modes (9 modes in 4×4 blocks, 9 modes in 8×8 blocks and 4 modes in 16×16 blocks) can be used.
  • The intra-prediction modes of H.264/AVC in the respective blocks are illustrated in the following Table 1.
    TABLE 1
    Intra-prediction Modes
    Intra 4 × 4/Intra 8 × 8    Intra 16 × 16
    0 Vertical                 0 Vertical
    1 Horizontal               1 Horizontal
    2 DC                       2 DC
    3 Diagonal Down Left       3 Plane
    4 Diagonal Down Right
    5 Vertical Right
    6 Horizontal Down
    7 Vertical Left
    8 Horizontal Up
  • In the modes 0 and 1, prediction is performed by using the adjacent pixels, and high prediction efficiency can be obtained for blocks including vertical and horizontal edges. In the mode 2, an average value of the adjacent pixels is used. In the modes 3 to 8, a weighted average of every 2 to 3 adjacent pixels is used as the prediction value. A high prediction effect can be obtained for images including edges of 45 degrees to the left, 45 degrees to the right, 22.5 degrees to the right, 67.5 degrees to the right, 22.5 degrees to the left, and 112.5 degrees to the right, letting the vertically downward direction be 0 degrees. In H.264, highly efficient coding can be realized by selecting a proper mode from among the intra-prediction modes according to the image. In general, rough intra-prediction is performed to select an optimal intra-prediction mode.
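The three simplest modes can be written out concretely for a 4×4 block; modes 3 to 8 additionally take weighted averages of 2 to 3 neighboring pixels, which is omitted here. The function name is illustrative.

```python
def intra_4x4_predict(mode, upper, left):
    """Sketch of modes 0-2 of 4x4 intra-prediction.

    upper: the 4 reference pixels above the block (A to D in FIG. 2).
    left:  the 4 reference pixels to its left (I to L in FIG. 2).
    Returns the 4x4 prediction block as a list of rows."""
    if mode == 0:   # Vertical: each column repeats the pixel above it.
        return [list(upper) for _ in range(4)]
    if mode == 1:   # Horizontal: each row repeats the pixel to its left.
        return [[left[row]] * 4 for row in range(4)]
    if mode == 2:   # DC: the rounded average of the adjacent pixels.
        dc = (sum(upper) + sum(left) + 4) // 8
        return [[dc] * 4 for _ in range(4)]
    raise NotImplementedError("modes 3-8 use weighted averages of neighbors")
```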
  • In addition, although not described in detail herein, in the inter-prediction that is defined in H.264/AVC, a motion vector of a pixel to be predicted is calculated from previous and future pictures to thereby generate a prediction picture.
  • The adjacent pixels referred to in the intra-prediction are A to M illustrated in FIG. 2. However, at a picture edge or a slice boundary, or when the adjacent blocks are coded by the inter-prediction, the reference pixels do not exist. Further, since reference beyond the slice boundary is prohibited, the available modes are limited. In addition, in H.264, the intra-prediction is performed in the order of the numbers illustrated in FIGS. 3A and 3B.
  • The reference pixels used in the respective prediction modes are illustrated in the following Table 2.
  • TABLE 2
    Prediction Modes and Available Reference Pixels

    Intra 4×4/Intra 8×8     Available Reference Pixels    Intra 16×16    Available Reference Pixels
    0 Vertical              Upper                         0 Vertical     Upper
    1 Horizontal            Left                          1 Horizontal   Left
    2 DC                    Upper/Left                    2 DC           Upper/Left
    3 Diagonal Down Left    Upper/Upper Right             3 Plane        Upper/Left/Upper Left
    4 Diagonal Down Right   Upper/Left/Upper Left
    5 Vertical Right        Upper/Left/Upper Left
    6 Horizontal Down       Upper/Left/Upper Left
    7 Vertical Left         Upper/Upper Right
    8 Horizontal Up         Left
  • As can be seen from the reference pixels used in Table 2, in the case of the 4×4 intra-prediction, since the pixels on the left/upper left do not exist at the picture edge, the modes 1, 4, 5, 6 and 8 cannot be used there. Further, when the upper end of the block to be predicted is a slice boundary, the modes 0, 3, 4, 5, 6 and 7 cannot be used since the reference pixels on the upper/upper right are outside the slice boundary. In the case of the 8×8 intra-prediction, 9 intra-prediction modes are defined in the same way as in the 4×4 intra-prediction, and the mode limitations due to the pixels that cannot be referred to are the same as those in the 4×4 intra-prediction. In the case of the 16×16 intra-prediction, 4 intra-prediction modes are defined; the reference pixels likewise do not exist at a picture edge and a slice boundary, and reference beyond the slice boundary is also prohibited.
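  • The availability rules of Table 2 can be captured in a small lookup; the region names below (`upper`, `left`, `upper_left`, `upper_right`) and the dictionary layout are illustrative assumptions, with the per-mode requirements taken from Table 2:

```python
# Reference pixel regions each Intra 4x4 / 8x8 mode requires (Table 2).
MODE_NEEDS = {
    0: {"upper"},                        # Vertical
    1: {"left"},                         # Horizontal
    2: set(),                            # DC (degrades gracefully)
    3: {"upper", "upper_right"},         # Diagonal Down Left
    4: {"upper", "left", "upper_left"},  # Diagonal Down Right
    5: {"upper", "left", "upper_left"},  # Vertical Right
    6: {"upper", "left", "upper_left"},  # Horizontal Down
    7: {"upper", "upper_right"},         # Vertical Left
    8: {"left"},                         # Horizontal Up
}

def available_modes(available):
    """Return the modes whose required regions are all available."""
    return sorted(m for m, need in MODE_NEEDS.items() if need <= available)
```

For a block at the left picture edge, `available_modes({'upper', 'upper_right'})` leaves only the modes 0, 2, 3 and 7, matching the text; on a slice-top boundary, `available_modes({'left'})` leaves the modes 1, 2 and 8.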
  • Further, in other cases than the above, when the adjacent pixel blocks containing the reference pixels required for generating the prediction picture of the pixel block to be predicted are coded by the inter-prediction (when constrained_intra_pred_flag is ‘1’ in H.264), it is defined that an intra-prediction picture cannot be generated with reference to such adjacent blocks.
  • As described above, when coding is performed based on the conventional method, limitations on the available modes arise, thereby deteriorating the accuracy of the generated prediction picture. Further, the difference value between the prediction picture and the input picture increases due to the deteriorated accuracy of the prediction picture. As a result, in the coding 110 of FIG. 1, the amount of codes required for coding the blocks to be predicted in which the mode limitations arise increases.
  • Where the transmission band is limited, particularly in low bit rate transmission, such an increase in the amount of generated codes affects the entire coding.
  • SUMMARY OF THE INVENTION
  • In view of the above, the present invention provides a moving picture coding apparatus capable of compressing a moving picture without increasing the amount of generated codes and without deteriorating the accuracy of the prediction picture when intra- or inter-prediction is performed in units of pixel blocks.
  • In the prediction performed by the moving picture coding apparatus in accordance with the present invention, when some of the reference pixels of a block to be predicted are not available, the pixel values of the unavailable reference pixels are calculated from the pixels in an available reference pixel block, and a prediction picture of the block to be predicted is generated by using the calculated pixel values in place of the unavailable reference pixels.
  • Specifically, an average value of certain pixels in the reference pixel block and the differences of those pixels from the average are obtained, and the pixel values of the corresponding reference pixels are derived from the obtained average and difference values.
  • In accordance with the embodiment of the present invention, it is possible to provide a moving picture coding apparatus capable of compressing a moving picture without increasing the amount of generated codes and without deteriorating the accuracy of the prediction picture when intra- or inter-prediction is performed in units of pixel blocks.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The objects and features of the present invention will become apparent from the following description of preferred embodiments, given in conjunction with the accompanying drawings, in which:
  • FIG. 1 is a configuration diagram of an encoder of H.264;
  • FIG. 2 illustrates positions of reference pixels in generating an intra-prediction picture;
  • FIGS. 3A and 3B illustrate the order of intra-prediction in a block in H.264;
  • FIG. 4 illustrates a relationship between a reference pixel block and a block to be predicted;
  • FIG. 5 illustrates a relationship between a pixel line used for padding and pixels (in a horizontal direction) to be padded in accordance with an embodiment of the present invention;
  • FIG. 6 illustrates a relationship between a pixel line used for padding and pixels (in a vertical direction) to be padded in accordance with the embodiment of the present invention;
  • FIG. 7 illustrates the outline of a padding algorithm in image prediction in accordance with the present invention;
  • FIG. 8 is a flowchart illustrating the flow of padding when upper reference pixels are not available;
  • FIG. 9 illustrates a reference pixel block and a block to be predicted when the upper reference pixels are not available;
  • FIG. 10 illustrates that the reference pixels are padded in the horizontal direction (step 1) in the relationship between the reference pixel block and the block to be predicted of FIG. 4;
  • FIG. 11 illustrates that the reference pixels are padded in the horizontal direction (step 2) in the relationship between the reference pixel block and the block to be predicted of FIG. 4;
  • FIG. 12 illustrates that the reference pixels are padded in the horizontal direction (step 3) in the relationship between the reference pixel block and the block to be predicted of FIG. 4;
  • FIG. 13 illustrates that the reference pixels are padded in the horizontal direction (step 4) in the relationship between the reference pixel block and the block to be predicted of FIG. 4;
  • FIG. 14 is a flowchart illustrating the flow of padding when left reference pixels are not available;
  • FIG. 15 illustrates a reference pixel block and a block to be predicted when the left reference pixels are not available;
  • FIG. 16 illustrates that the reference pixels are padded in the vertical direction (step 1) in the relationship between the reference pixel block and the block to be predicted of FIG. 4;
  • FIG. 17 illustrates that the reference pixels are padded in the vertical direction (step 2) in the relationship between the reference pixel block and the block to be predicted of FIG. 4;
  • FIG. 18 illustrates that the reference pixels are padded in the vertical direction (step 3) in the relationship between the reference pixel block and the block to be predicted of FIG. 4;
  • FIG. 19 illustrates a data hierarchy in H.264;
  • FIG. 20 illustrates an access unit in H.264;
  • FIG. 21 illustrates an example of an access unit, in which padding is set in the image prediction in accordance with the present invention;
  • FIG. 22 illustrates a pixel block, in which limitations on modes are generated by a slice boundary/picture edge in conventional H.264; and
  • FIG. 23 illustrates a pixel block, in which limitations on modes are generated by using adjacent pixels in inter-prediction of the conventional H.264.
  • DETAILED DESCRIPTION OF THE EMBODIMENTS
  • Hereinafter, an embodiment of the present invention will be described with reference to FIGS. 4 to 23 which form a part hereof.
  • In accordance with the embodiment of the present invention, in the data compressing process performed by a moving picture coding apparatus, when image prediction is performed, the data of the pixels that cannot be referred to due to the positional conditions of a block to be predicted is padded so as to be usable as the reference pixels of the block to be predicted.
  • More specifically, in accordance with the present invention, when either the upper or the left reference pixels are available and the pixels on the other side are not, e.g., at a picture edge or a slice end, or when the adjacent pixels are coded by inter-prediction, proper reference pixels are generated by padding based on a pixel average and pixel differences computed from the available reference pixel block, regardless of the limitations that the prediction image generation modes would otherwise impose. Therefore, as long as the upper or left reference pixels are available, all of the modes become available even for blocks at the picture edge and on the slice boundary, so that a highly accurate prediction image can be generated. In this way, in accordance with the embodiment of the present invention, the difference between the prediction image and the input image is reduced, thereby improving coding efficiency.
  • Hereinafter, the outline of padding in the prediction of the moving picture coding apparatus in accordance with the present invention will be described with reference to FIGS. 4 to 7.
  • In accordance with the embodiment of the present invention, when an upper or left reference pixel block of the block to be predicted illustrated in FIG. 4 cannot be referred to, one reference pixel block is padded from the other reference pixel block.
  • More specifically, as illustrated in FIGS. 5 and 6, padding is performed by using the available reference pixel lines 501 and 601 closest to pixels 502 and 602 to be padded. Here, pixels to be padded in a horizontal direction and a pixel line required for performing padding are illustrated in FIG. 5, and pixels to be padded in a vertical direction and a pixel line required for performing padding are illustrated in FIG. 6.
  • The basic padding in the image prediction in accordance with the embodiment of the present invention is to generate pixels 705 to be padded from a padding reference pixel line 704 illustrated in FIG. 7. First, a pixel average value 701 of the padding pixel line is obtained. Next, differences 702 between the respective pixel values and the pixel average value 701 are obtained. Then, a padding reference pixel 703 of the pixel to be padded is determined. Based on the padding reference pixel 703, at the respective pixels to be padded, the padding reference pixel 703 and the differences 702 are added to obtain final values of the pixels to be padded. In FIG. 7, the padding in the horizontal direction is illustrated. When the padding in the vertical direction is performed, the reference pixel line and the pixels to be padded are arranged in the vertical direction.
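  • The average/difference procedure of FIG. 7 can be sketched in a few lines; `pad_line` and its parameter names are hypothetical labels introduced for this illustration:

```python
def pad_line(ref_line, anchor):
    """Pad unavailable reference pixels from an available line (FIG. 7).

    ref_line -- the available reference pixel line closest to the pixels
                to be padded (704)
    anchor   -- the padding reference pixel (703)
    Each padded pixel is the anchor plus that position's deviation from
    the line average, so the padded line reproduces the local texture."""
    avg = sum(ref_line) / len(ref_line)      # pixel average value (701)
    diffs = [p - avg for p in ref_line]      # differences (702)
    return [anchor + d for d in diffs]       # pixels to be padded (705)
```

For instance, `pad_line([10, 20, 30, 40], 100)` yields `[85.0, 95.0, 105.0, 115.0]`: the padded pixels inherit the anchor's level and the reference line's gradient.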
  • Hereinafter, the padding in the image prediction of the moving picture coding apparatus in accordance with the embodiment of the present invention will be described in detail with reference to FIGS. 8 to 18.
  • Also in this embodiment, in the same way as in H.264/AVC, the intra-prediction is performed in the order of the numbers illustrated in FIGS. 3A and 3B. Further, in this embodiment, the 4×4 intra-prediction will be taken as an example. In the padding method of the image prediction in accordance with the embodiment of the present invention, the padding is performed by using a macroblock including available reference pixels, a pixel average, and a pixel difference. The padding can also be performed on an 8×8 block and a 16×16 block by using the same method as for the 4×4 block described in this embodiment.
  • First of all, padding in a case where upper pixels of FIG. 9 cannot be referred to will be described.
  • First, as illustrated in FIG. 10, the uppermost reference pixel I of the left 4 pixels is copied to the position of the reference pixel M (step 1 of FIG. 8).
  • Next, an average value Ave(i_1 to i_4) of the pixel values in the uppermost horizontal line (i_1 to i_4 of FIG. 11) of a left reference pixel block is calculated by the following Eq. 1 (step 2):
  • Ave(i_1 to i_N) = (1/N) · Σ_{i=1}^{N} X_i  (X_i: pixel value of the i-th pixel),  Eq. 1
  • where N=4 in this example.
  • Then, differences ΔAve(i_1 to i_4, i_x) between the respective pixels in the uppermost horizontal line of the reference pixel block and the average value obtained by Eq. 1 are calculated by the following Eq. 2:
  • ΔAve(i_1 to i_N, i_x) = i_x − (1/N) · Σ_{i=1}^{N} X_i,  Eq. 2
  • where N=4 in this example.
  • Subsequently, the differences of Eq. 2 are added to the pixel value of the copied reference pixel M, and the resultant values are padded to the respective corresponding positions as the values of the upper reference pixels (step 3). In FIG. 12, an example of padding the reference pixel A is illustrated.
  • In a block to be predicted, the upper right reference pixels are not available at the positions 1, 3, 4, 5, 7, 11, 13 and 15 illustrated in FIG. 3A. Accordingly, the pixel values of E, F, G and H cannot be obtained under the standard. Therefore, as illustrated in FIG. 13, the pixel value of the rightmost pixel D of the upper reference pixels is copied to the positions of E, F, G and H (step 4).
  • The upper reference pixels are padded by the processes of steps 1 to 4. Since the reference pixels become available, the upper reference pixels are treated as “available for Intra 4×4 prediction” and a prediction image is generated by using all of the modes.
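  • Steps 1 to 4 above can be combined into one sketch; the function and argument names are illustrative, with the pixel I and the pixels i_1 to i_4 of the left reference block passed in explicitly:

```python
def pad_upper_reference(pixel_i, left_block_top_row):
    """Synthesize the unavailable upper reference pixels (FIG. 8).

    pixel_i            -- the uppermost left reference pixel I
    left_block_top_row -- pixels i_1..i_4, the uppermost horizontal line
                          of the left reference pixel block
    Returns [M, A, B, C, D, E, F, G, H]."""
    m = pixel_i                                     # step 1: copy I to M
    n = len(left_block_top_row)
    avg = sum(left_block_top_row) / n               # step 2: Eq. 1
    diffs = [p - avg for p in left_block_top_row]   # Eq. 2
    a_to_d = [round(m + d) for d in diffs]          # step 3: pad A..D
    e_to_h = [a_to_d[-1]] * 4                       # step 4: copy D to E..H
    return [m] + a_to_d + e_to_h
```

E.g. `pad_upper_reference(50, [40, 44, 48, 52])` returns `[50, 44, 48, 52, 56, 56, 56, 56, 56]`: the padded upper line reproduces the gradient of the left reference block at the level of the pixel I.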
  • Next, padding in a case where left pixels of FIG. 15 cannot be referred to will be described.
  • First, as illustrated in FIG. 16, the leftmost reference pixel A of the upper 4 pixels is copied to the position of the reference pixel M (step 11 of FIG. 14).
  • Then, an average value Ave(a_1 to a_4) of the pixel values (a_1 to a_4 of FIG. 17) in the leftmost vertical line of an upper reference pixel block is calculated by the following Eq. 3 (step 12 of FIG. 14):
  • Ave(a_1 to a_N) = (1/N) · Σ_{i=1}^{N} X_i  (X_i: pixel value of the i-th pixel),  Eq. 3
  • where N=4.
  • Next, differences ΔAve(a_1 to a_4, a_x) between the pixel values in the leftmost vertical line of the reference pixel block and the average value obtained by Eq. 3 are calculated by the following Eq. 4:
  • ΔAve(a_1 to a_N, a_x) = a_x − (1/N) · Σ_{i=1}^{N} X_i,  Eq. 4
  • where N=4 in this example. Then, the differences are added to the pixel value of M, and the resultant values are padded to the respective corresponding positions of the left reference pixels (step 13).
  • In FIG. 18, an example of padding a reference pixel I is illustrated.
  • The left reference pixels are padded by the processes of the above steps 11 to 13. Since the left reference pixels become available, in the same way as the padding of the upper reference pixels, they are treated as “available for Intra 4×4 prediction” and a prediction image is generated by using all of the modes.
  • Finally, a case where the upper and left reference pixels of a block to be predicted do not exist, e.g., the case of the first macroblock of a slice, will be described. In this case, in the same way as in the conventional H.264 standard, a prediction image is generated by replacing all the pixel values of the block to be predicted with a median value, e.g., 512 when the input format is 10 bits.
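  • The median replacement above generalizes to any bit depth as 2^(bit depth − 1); a one-line sketch with a hypothetical helper name:

```python
def median_fallback(bit_depth):
    """Mid-range pixel value used when no reference pixels exist,
    e.g. for the first macroblock of a slice."""
    return 1 << (bit_depth - 1)   # 10-bit input -> 512, 8-bit -> 128
```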
  • As described above, by performing the padding in accordance with the embodiment of the present invention, even when the upper or left pixels of a block to be predicted do not exist, it is possible to generate a prediction image by using all of the modes defined by H.264. Further, since the average value and the pixel differences of the line closest to the pixels to be padded in the available reference pixel block are used, pixels suitable for prediction are reconstructed at the padded positions. As a result, it is possible to generate a highly accurate prediction image.
  • Next, a case where the padding of the image prediction described in this embodiment is performed based on H.264 will be described with reference to FIGS. 19 to 21.
  • In the bit stream structure of H.264, as illustrated in FIG. 19, a network abstraction layer (NAL) including NAL units 1703 and 1704 is defined between the moving picture coding layer, which includes the coding data 1701 and the parameter set 1702 used to perform moving picture coding, and a lower system such as the MPEG-2 system 1705 for transmitting and accumulating the coded information. Thus, the bit stream is delivered to the lower system 1705 on a NAL unit basis. In FIG. 19, the position of the NAL unit in H.264 is illustrated.
  • In order to access information in the bit stream in units of pictures, several NAL units are arranged in an access unit. The structure of the access unit is illustrated in FIG. 20. An AU delimiter 1801 is a start code that represents the head of the access unit. A sequence parameter set (SPS) 1802 is a header including information on the coding of an entire sequence, such as the profile and level of a primary coded picture (PCP) image. A picture parameter set (PPS) 1803 is a header that represents the coding mode of an entire picture. Supplemental enhancement information (SEI) 1804 is a header including additional information such as the timing information of each picture and random access information. A primary coded picture (PCP) 1805 is an NAL unit consisting of at least one slice of data. A redundant coded picture (RCP) 1806, which, like the PCP, is an NAL unit including macroblock data, is redundancy data that can be used when the PCP is lost due to errors. An end of sequence (EOS) 1807 represents the end of a sequence, and an end of stream (EOS) 1808 represents the end of a stream. In H.264, it is defined that the access unit includes the AU delimiter 1801 to the EOS 1808 arranged in this order.
  • When the padding of the image prediction described in this embodiment is performed based on H.264, a flag for determining the intra-padding is added to the SPS 1802 illustrated in FIG. 20. At the decoder, whether the intra-padding is to be performed or not is determined based on this flag. The SPS 1802 is the header including the information on the coding of the entire sequence, such as the profile and level of the PCP image. In H.264, the final parameter of the SPS is vui_parameters_present_flag, which represents whether the syntax structure of the video usability information (VUI), a data structure related to video display information, exists or not. After this vui_parameters_present_flag, a 1-bit flag that represents whether the intra-padding in accordance with the embodiment of the present invention is to be performed or not is added.
  • As shown in FIG. 21, at the last part of the SPS 1802 of the conventional H.264, a padding determination flag 1900 related to the padding of the image prediction described in this embodiment is added. When the decoder performs decoding, as in the conventional method, after the PPS 1803 is decoded, the padding flag information 1900 is decoded to determine whether the padding is to be performed or not. The decoder can then perform decoding by using a prediction image generation block in the same way as the encoder.
  • Finally, the advantages of the padding of the image prediction in accordance with this embodiment will be described in comparison with the method of the conventional H.264 with reference to FIGS. 22 and 23.
  • In the conventional H.264, the modes that can be used for generating the prediction image on the slice boundary and at the picture edge are limited when the intra-prediction is used. For example, when 1 slice is set as 1 macroblock line (16 lines) in the screen size of 1920×1080, limitations on the available modes arise, in 4×4 pixel units, in about 25% of the region, namely in the uppermost 4×4 block row of each slice and at the picture edge, as illustrated in FIG. 22. Similarly, mode limitations arise in about 50% of the blocks in 8×8 pixel units and in all of the macroblocks in 16×16 pixel units. In accordance with the embodiment of the present invention, however, the padding cannot be performed only for the first macroblock of the slice, which is therefore processed with the median value. In the other macroblocks, since there are no limitations on the available modes in generating the prediction image, it is possible to generate a highly accurate prediction image.
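  • The quoted proportions follow directly from the slice geometry: with one macroblock line (16 pixel lines) per slice, the top row of N×N blocks in every slice has its upper reference pixels outside the slice. A small sketch (illustrative helper name; the extra limitations at the left picture edge are ignored here) reproduces the figures:

```python
def limited_row_fraction(block_size, slice_lines=16):
    """Fraction of block rows whose upper neighbors lie outside the slice
    when each slice is `slice_lines` pixel lines tall."""
    return block_size / slice_lines

# 4x4 blocks:   4/16 -> 25% of the block rows are mode-limited
# 8x8 blocks:   8/16 -> 50%
# 16x16 blocks: 16/16 -> every macroblock
```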
  • Further, when the prediction image is generated by using both the intra-prediction and the inter-prediction, if a pixel block positioned in the reference pixel block is coded by the inter-prediction (constrained_intra_pred_flag=‘1’) as illustrated in FIG. 23, the number of pixel blocks in which mode limitations are generated is expected to increase further, in addition to the above situation. The present invention is therefore even more effective.
  • While the invention has been shown and described with respect to the particular embodiments, it will be understood by those skilled in the art that various changes and modifications may be made.

Claims (2)

1. A moving picture coding apparatus for dividing a picture into basic blocks and generating a prediction image of a block to be predicted in a basic block by using adjacent pixels in reference pixel blocks adjacent to the block to be predicted as reference pixels to perform predictive coding of a moving picture,
wherein when some of the reference pixels are not available, pixel values of the reference pixels that are not available are calculated based on pixels in the reference pixel blocks, and
wherein the prediction image of the block to be predicted is generated by using the calculated pixel values instead of the reference pixels that are not available.
2. The moving picture coding apparatus of claim 1, wherein each of the pixel values of the reference pixels that are not available is calculated based on a correlation of the pixels in the reference pixel block.
US12/591,438 2009-01-13 2009-11-19 Moving picture coding apparatus Expired - Fee Related US8953678B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2009-004588 2009-01-13
JP2009004588A JP5238523B2 (en) 2009-01-13 2009-01-13 Moving picture encoding apparatus, moving picture decoding apparatus, and moving picture decoding method

Publications (2)

Publication Number Publication Date
US20100177821A1 true US20100177821A1 (en) 2010-07-15
US8953678B2 US8953678B2 (en) 2015-02-10

Family

ID=42319082

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/591,438 Expired - Fee Related US8953678B2 (en) 2009-01-13 2009-11-19 Moving picture coding apparatus

Country Status (2)

Country Link
US (1) US8953678B2 (en)
JP (1) JP5238523B2 (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101977317A (en) * 2010-10-27 2011-02-16 无锡中星微电子有限公司 Intra-frame prediction method and device
US20120163457A1 (en) * 2010-12-28 2012-06-28 Viktor Wahadaniah Moving picture decoding method, moving picture coding method, moving picture decoding apparatus, moving picture coding apparatus, and moving picture coding and decoding apparatus
CN102595118A (en) * 2011-01-14 2012-07-18 华为技术有限公司 Prediction method and predictor in encoding and decoding
WO2012096622A1 (en) * 2011-01-14 2012-07-19 Telefonaktiebolaget Lm Ericsson (Publ) Methods and devices for intra coding of video
US20130136373A1 (en) * 2011-03-07 2013-05-30 Panasonic Corporation Image decoding method, image coding method, image decoding apparatus, and image coding apparatus
CN103621095A (en) * 2011-06-13 2014-03-05 高通股份有限公司 Border pixel padding for intra prediction in video coding
CN104113753A (en) * 2011-01-14 2014-10-22 华为技术有限公司 Image encoding and decoding method, image data processing method and devices thereof
CN104125457A (en) * 2011-01-14 2014-10-29 华为技术有限公司 Image coding and decoding method and device and image data processing method and device
US20150092844A1 (en) * 2012-03-16 2015-04-02 Electronics And Telecommunications Research Institute Intra-prediction method for multi-layer images and apparatus using same
CN104735458A (en) * 2011-01-14 2015-06-24 华为技术有限公司 Predication method for coding and decoding and predictor
AU2012206839B2 (en) * 2011-01-14 2015-10-01 Huawei Technologies Co., Ltd. Image coding and decoding method, image data processing method and devices thereof
AU2012259700B2 (en) * 2011-05-20 2015-10-01 Kt Corporation Method and apparatus for intra prediction within display screen
US20160021392A1 (en) * 2010-07-14 2016-01-21 Ntt Docomo, Inc. Low-complexity intra prediction for video coding
US20160044318A1 (en) * 2002-05-28 2016-02-11 Dolby International Ab Methods And Systems For Image Intra-Prediction Mode Management
AU2015224447B2 (en) * 2011-01-14 2016-02-18 Huawei Technologies Co., Ltd. Prediction method and predictor for encoding and decoding
US9369705B2 (en) 2011-01-19 2016-06-14 Sun Patent Trust Moving picture coding method and moving picture decoding method
GB2561264A (en) * 2011-05-20 2018-10-10 Kt Corp Method and apparatus for intra prediction within display screen
WO2019074265A1 (en) * 2017-10-09 2019-04-18 Samsung Electronics Co., Ltd. Producing 360 degree image content on rectangular projection in electronic device using padding information
US20190246137A1 (en) * 2011-11-10 2019-08-08 Sony Corporation Image processing apparatus and method
US10404989B2 (en) * 2016-04-26 2019-09-03 Google Llc Hybrid prediction modes for video coding
US11917146B2 (en) * 2017-03-27 2024-02-27 Interdigital Vc Holdings, Inc. Methods and apparatus for picture encoding and decoding

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11284072B2 (en) 2010-08-17 2022-03-22 M&K Holdings Inc. Apparatus for decoding an image
ES2685668T3 (en) 2010-08-17 2018-10-10 M&K Holdings Inc. Apparatus for encoding an intraprediction mode
WO2012086166A1 (en) * 2010-12-20 2012-06-28 パナソニック株式会社 Image encoding method and image decoding method
JP2012142845A (en) * 2011-01-05 2012-07-26 Canon Inc Image encoder, image encoding method and program, image decoder, and image decoding method and program
JP2012191295A (en) * 2011-03-09 2012-10-04 Canon Inc Image coding apparatus, image coding method, program, image decoding apparatus, image decoding method, and program
KR102032940B1 (en) 2011-03-11 2019-10-16 소니 주식회사 Image processing device, image processing method and computer readable recording medium
JP2012244354A (en) * 2011-05-18 2012-12-10 Sony Corp Image processing system and method
JP2013012840A (en) * 2011-06-28 2013-01-17 Sony Corp Image processing device and method
CN103782595A (en) * 2011-07-01 2014-05-07 三星电子株式会社 Video encoding method with intra prediction using checking process for unified reference possibility, video decoding method and device thereof
JP2013110466A (en) * 2011-11-17 2013-06-06 Hitachi Kokusai Electric Inc Moving image encoding device
WO2017073360A1 (en) * 2015-10-30 2017-05-04 ソニー株式会社 Image processing device and method

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6134352A (en) * 1996-05-21 2000-10-17 Lucent Technologies Inc. Spatial error concealment for image processing
US20010017942A1 (en) * 2000-01-21 2001-08-30 Nokia Mobile Phones Ltd. Method for encoding images, and an image coder
US20050243920A1 (en) * 2004-04-28 2005-11-03 Tomokazu Murakami Image encoding/decoding device, image encoding/decoding program and image encoding/decoding method
US20060072676A1 (en) * 2003-01-10 2006-04-06 Cristina Gomila Defining interpolation filters for error concealment in a coded image
US20070053443A1 (en) * 2005-09-06 2007-03-08 Samsung Electronics Co., Ltd. Method and apparatus for video intraprediction encoding and decoding
US20090034617A1 (en) * 2007-05-08 2009-02-05 Canon Kabushiki Kaisha Image encoding apparatus and image encoding method
US20090141798A1 (en) * 2005-04-01 2009-06-04 Panasonic Corporation Image Decoding Apparatus and Image Decoding Method
US20100118943A1 (en) * 2007-01-09 2010-05-13 Kabushiki Kaisha Toshiba Method and apparatus for encoding and decoding image
US8208563B2 (en) * 2008-04-23 2012-06-26 Qualcomm Incorporated Boundary artifact correction within video units
US8472522B2 (en) * 2007-02-23 2013-06-25 Nippon Telegraph And Telephone Corporation Video encoding method and decoding method, apparatuses therefor, programs therefor, and storage media which store the programs

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH02219390A (en) * 1989-02-21 1990-08-31 Oki Electric Ind Co Ltd System for compressing and extending video signal
JP2003125417A (en) * 2002-09-20 2003-04-25 Canon Inc Image coder and its method
JP4114885B2 (en) * 2005-10-31 2008-07-09 松下電器産業株式会社 Image encoding apparatus, method, and program
JP2008166916A (en) * 2006-12-27 2008-07-17 Victor Co Of Japan Ltd Intra prediction encoder and intra prediction coding method
US8178982B2 (en) 2006-12-30 2012-05-15 Stats Chippac Ltd. Dual molded multi-chip package system


Cited By (63)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9979963B2 (en) * 2002-05-28 2018-05-22 Dolby Laboratories Licensing Corporation Methods and systems for image intra-prediction mode management
US20160044318A1 (en) * 2002-05-28 2016-02-11 Dolby International Ab Methods And Systems For Image Intra-Prediction Mode Management
US20160150246A1 (en) * 2002-05-28 2016-05-26 Dolby Laboratories Licensing Corporation Methods And Systems For Image Intra-Prediction Mode Management
US9973762B2 (en) * 2002-05-28 2018-05-15 Dolby Laboratories Licensing Corporation Methods and systems for image intra-prediction mode management
US10841613B2 (en) 2010-07-14 2020-11-17 Ntt Docomo, Inc. Low-complexity intra prediction for video coding
US9942565B2 (en) * 2010-07-14 2018-04-10 Ntt Docomo, Inc. Low-complexity intra prediction for video coding
US10116960B2 (en) * 2010-07-14 2018-10-30 Ntt Docomo, Inc. Low-complexity intra prediction for video coding
US10397608B2 (en) * 2010-07-14 2019-08-27 Ntt Docomo, Inc. Low-complexity intra prediction for video coding
US20190335202A1 (en) * 2010-07-14 2019-10-31 Ntt Docomo, Inc. Low-complexity intra prediction for video coding
US20160021392A1 (en) * 2010-07-14 2016-01-21 Ntt Docomo, Inc. Low-complexity intra prediction for video coding
US10841614B2 (en) * 2010-07-14 2020-11-17 Ntt Docomo, Inc. Low-complexity intra prediction for video coding
US20160057448A1 (en) * 2010-07-14 2016-02-25 Ntt Docomo, Inc. Low-complexity intra prediction for video coding
CN101977317A (en) * 2010-10-27 2011-02-16 无锡中星微电子有限公司 Intra-frame prediction method and device
EP2661086A1 (en) * 2010-12-28 2013-11-06 Panasonic Corporation Motion-video decoding method, motion-video encoding method, motion-video decoding apparatus, motion-video encoding apparatus, and motion-video encoding/decoding apparatus
EP2661086A4 (en) * 2010-12-28 2015-02-11 Panasonic Ip Corp America Motion-video decoding method, motion-video encoding method, motion-video decoding apparatus, motion-video encoding apparatus, and motion-video encoding/decoding apparatus
AU2011353415B2 (en) * 2010-12-28 2016-08-04 Sun Patent Trust Moving picture decoding method, moving picture coding method, moving picture decoding apparatus, moving picture coding apparatus, and moving picture coding and decoding apparatus
US20120163457A1 (en) * 2010-12-28 2012-06-28 Viktor Wahadaniah Moving picture decoding method, moving picture coding method, moving picture decoding apparatus, moving picture coding apparatus, and moving picture coding and decoding apparatus
CN107105233A (en) * 2010-12-28 2017-08-29 太阳专利托管公司 Picture decoding method and picture decoding apparatus
CN104113753A (en) * 2011-01-14 2014-10-22 华为技术有限公司 Image encoding and decoding method, image data processing method and devices thereof
EP2665266A1 (en) * 2011-01-14 2013-11-20 Huawei Technologies Co., Ltd. Prediction method and predictor for encoding and decoding
CN102595118A (en) * 2011-01-14 2012-07-18 华为技术有限公司 Prediction method and predictor in encoding and decoding
CN104735458A (en) * 2011-01-14 2015-06-24 华为技术有限公司 Predication method for coding and decoding and predictor
AU2012206838B2 (en) * 2011-01-14 2015-06-11 Huawei Technologies Co., Ltd. Prediction method in encoding or decoding and the predictor
RU2553063C2 (en) * 2011-01-14 2015-06-10 Хуавей Текнолоджиз Ко., Лтд. Method for prediction during encoding or decoding and predictor
AU2015224447B2 (en) * 2011-01-14 2016-02-18 Huawei Technologies Co., Ltd. Prediction method and predictor for encoding and decoding
WO2012096622A1 (en) * 2011-01-14 2012-07-19 Telefonaktiebolaget Lm Ericsson (Publ) Methods and devices for intra coding of video
US10264254B2 (en) * 2011-01-14 2019-04-16 Huawei Technologies Co., Ltd. Image coding and decoding method, image data processing method, and devices thereof
US9979965B2 (en) 2011-01-14 2018-05-22 Huawei Technologies Co., Ltd. Image coding and decoding method, image data processing method, and devices thereof
US9485504B2 (en) 2011-01-14 2016-11-01 Huawei Technologies Co., Ltd. Image coding and decoding method, image data processing method, and devices thereof
CN104125457A (en) * 2011-01-14 2014-10-29 华为技术有限公司 Image coding and decoding method and device and image data processing method and device
EP2665266A4 (en) * 2011-01-14 2013-12-25 Huawei Tech Co Ltd Prediction method and predictor for encoding and decoding
AU2012206839B2 (en) * 2011-01-14 2015-10-01 Huawei Technologies Co., Ltd. Image coding and decoding method, image data processing method and devices thereof
US9369705B2 (en) 2011-01-19 2016-06-14 Sun Patent Trust Moving picture coding method and moving picture decoding method
US20130136373A1 (en) * 2011-03-07 2013-05-30 Panasonic Corporation Image decoding method, image coding method, image decoding apparatus, and image coding apparatus
US8923633B2 (en) * 2011-03-07 2014-12-30 Panasonic Intellectual Property Corporation Of America Image decoding method, image coding method, image decoding apparatus, and image coding apparatus
US20150030075A1 (en) * 2011-03-07 2015-01-29 Panasonic Intellectual Property Corporation Of America Image decoding method, image coding method, image decoding device and image coding device
US9124871B2 (en) * 2011-03-07 2015-09-01 Panasonic Intellectual Property Corporation Of America Image decoding method, image coding method, image decoding device and image coding device
US9432669B2 (en) 2011-05-20 2016-08-30 Kt Corporation Method and apparatus for intra prediction within display screen
GB2560394B (en) * 2011-05-20 2018-12-05 Kt Corp Method and apparatus for intra prediction within display screen
AU2012259700B2 (en) * 2011-05-20 2015-10-01 Kt Corporation Method and apparatus for intra prediction within display screen
US9843808B2 (en) 2011-05-20 2017-12-12 Kt Corporation Method and apparatus for intra prediction within display screen
US9749640B2 (en) 2011-05-20 2017-08-29 Kt Corporation Method and apparatus for intra prediction within display screen
US9749639B2 (en) 2011-05-20 2017-08-29 Kt Corporation Method and apparatus for intra prediction within display screen
US9584815B2 (en) 2011-05-20 2017-02-28 Kt Corporation Method and apparatus for intra prediction within display screen
US9445123B2 (en) 2011-05-20 2016-09-13 Kt Corporation Method and apparatus for intra prediction within display screen
GB2556649A (en) * 2011-05-20 2018-06-06 Kt Corp Method and apparatus for intra prediction within display screen
GB2560394A (en) * 2011-05-20 2018-09-12 Kt Corp Method and apparatus for intra prediction within display screen
GB2561264A (en) * 2011-05-20 2018-10-10 Kt Corp Method and apparatus for intra prediction within display screen
US9432695B2 (en) 2011-05-20 2016-08-30 Kt Corporation Method and apparatus for intra prediction within display screen
GB2556649B (en) * 2011-05-20 2018-10-31 Kt Corp Method and apparatus for intra prediction within display screen
US9756341B2 (en) 2011-05-20 2017-09-05 Kt Corporation Method and apparatus for intra prediction within display screen
US10158862B2 (en) 2011-05-20 2018-12-18 Kt Corporation Method and apparatus for intra prediction within display screen
GB2561264B (en) * 2011-05-20 2019-01-02 Kt Corp Method and apparatus for intra prediction within display screen
US9288503B2 (en) 2011-05-20 2016-03-15 Kt Corporation Method and apparatus for intra prediction within display screen
US9154803B2 (en) 2011-05-20 2015-10-06 Kt Corporation Method and apparatus for intra prediction within display screen
US9807399B2 (en) 2011-06-13 2017-10-31 Qualcomm Incorporated Border pixel padding for intra prediction in video coding
CN103621095A (en) * 2011-06-13 2014-03-05 高通股份有限公司 Border pixel padding for intra prediction in video coding
US20190246137A1 (en) * 2011-11-10 2019-08-08 Sony Corporation Image processing apparatus and method
US20230247217A1 (en) * 2011-11-10 2023-08-03 Sony Corporation Image processing apparatus and method
US20150092844A1 (en) * 2012-03-16 2015-04-02 Electronics And Telecommunications Research Institute Intra-prediction method for multi-layer images and apparatus using same
US10404989B2 (en) * 2016-04-26 2019-09-03 Google Llc Hybrid prediction modes for video coding
US11917146B2 (en) * 2017-03-27 2024-02-27 Interdigital Vc Holdings, Inc. Methods and apparatus for picture encoding and decoding
WO2019074265A1 (en) * 2017-10-09 2019-04-18 Samsung Electronics Co., Ltd. Producing 360 degree image content on rectangular projection in electronic device using padding information

Also Published As

Publication number Publication date
JP5238523B2 (en) 2013-07-17
JP2010166133A (en) 2010-07-29
US8953678B2 (en) 2015-02-10

Similar Documents

Publication Publication Date Title
US8953678B2 (en) Moving picture coding apparatus
US11647231B2 (en) Image processing device and image processing method
US10171828B2 (en) Modification of unification of intra block copy and inter signaling related syntax and semantics
CA2467496C (en) Global motion compensation for video pictures
US20180184085A1 (en) Method of decoding video data, video decoder performing the same, method of encoding video data, and video encoder performing the same
US9247253B2 (en) In-loop adaptive wiener filter for video coding and decoding
CN105379284B (en) Moving picture encoding device and method of operating the same
US9313491B2 (en) Chroma motion vector processing apparatus, system, and method
JP4820559B2 (en) Video data encoding and decoding method and apparatus
JP2006521771A (en) Digital stream transcoder with hybrid rate controller
US8189676B2 (en) Advance macro-block entropy coding for advanced video standards
US20150030068A1 (en) Image processing device and method
US20140286436A1 (en) Image processing apparatus and image processing method
US6040875A (en) Method to compensate for a fade in a digital video input sequence
US6754270B1 (en) Encoding high-definition video using overlapping panels
KR20080061379A (en) Coding/decoding method and apparatus for improving video error concealment
Akramullah et al. Video Coding Standards
JP5421739B2 (en) Moving picture coding apparatus, moving picture decoding apparatus, and moving picture coding method
AU2007219272B2 (en) Global motion compensation for video pictures

Legal Events

Date Code Title Description
AS Assignment

Owner name: HITACHI KOKUSAI ELECTRIC INC., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KADOTO, TADAKAZU;KONDO, MASATOSHI;REEL/FRAME:023590/0089

Effective date: 20091020

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551)

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20230210