WO2001080567A1 - Decoder and decoding method, recorded medium, and program - Google Patents
- Publication number
- WO2001080567A1 PCT/JP2001/003204 JP0103204W WO0180567A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- decoding
- slice
- stream
- decoder
- buffer
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/436—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/577—Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
Definitions
- the present invention relates to a decoding device, a decoding method, and a recording medium, and more particularly to a decoding device, a decoding method, and a recording medium capable of realizing a 4:2:2P@HL-compatible video decoder that operates in real time at a feasible circuit scale.
- MPEG2 (Moving Picture Experts Group 2)
- ISO/IEC (International Organization for Standardization / International Electrotechnical Commission)
- ITU-T (International Telecommunication Union - Telecommunication Standardization Sector)
- the MPEG2 encoded stream is classified into classes according to a profile, determined by the encoding method, and a level, determined by the number of pixels handled, so that it can support a wide range of applications.
- MP@ML (Main Profile @ Main Level)
- DVB (Digital Video Broadcasting)
- DVD (Digital Versatile Disc)
- the profile and level are described in the sequence_extension, explained later.
- in 4:2:2P (4:2:2 Profile), the chrominance signals of the video are handled in the same 4:2:2 format as conventional baseband video, and the upper limit of the bit rate is raised.
- HL (High Level)
- Figure 1 shows the typical classes of MPEG2 and the upper limits of various parameters in each class.
- for each class, the upper limits of the bit rate, the number of samples per line, the number of lines per frame, the frame frequency, and the sample processing rate are shown.
- the upper limit of the bit rate of 4:2:2P@HL is 300 Mbit/sec, and the upper limit of the number of pixels to be processed is 62,668,800 samples/sec.
- the upper limit of the bit rate of MP@ML is 15 Mbit/sec, and the upper limit of the number of pixels to be processed is 10,368,000 samples/sec.
- a video decoder for 4:2:2P@HL therefore needs about 20 times the bit-rate handling capability and about 6 times the pixel-processing capability of an MP@ML video decoder.
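As a quick arithmetic check of the two ratios above (a sketch in Python; the figures are the upper limits quoted from Fig. 1):

```python
# Upper limits quoted from Fig. 1 for the two classes.
MP_ML_BITRATE = 15_000_000        # bit/sec
P422_HL_BITRATE = 300_000_000     # bit/sec
MP_ML_SAMPLES = 10_368_000        # samples/sec
P422_HL_SAMPLES = 62_668_800      # samples/sec

bitrate_ratio = P422_HL_BITRATE / MP_ML_BITRATE   # exactly 20x
sample_ratio = P422_HL_SAMPLES / MP_ML_SAMPLES    # about 6x

print(bitrate_ratio)              # 20.0
print(round(sample_ratio, 2))     # 6.04
```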
- FIG. 2 shows the level structure of the MPEG2 video bitstream.
- sequence_header defines the header data of the sequence layer of the MPEG bitstream. If the first sequence_header of the sequence is not followed by a sequence_extension, the bitstream conforms to ISO/IEC 11172-2. When a sequence_extension follows the first sequence_header of the sequence, every subsequent sequence_header is also immediately followed by a sequence_extension, as in the case of Fig. 2.
- sequence_extension defines extension data of the sequence layer of the MPEG bitstream.
- a sequence_extension occurs only immediately after a sequence_header, and, so that no frame is lost after decoding and frame reordering, a sequence_extension must not come immediately before the end of the bitstream.
- a picture_header is immediately followed by a picture_coding_extension.
- GOP (group_of_pictures)
- GOP_header defines the header data of the GOP layer of the MPEG bitstream; the data elements it defines are described later.
- One picture is encoded as picture_data following a picture_header and picture_coding_extension.
- the first encoded frame following a GOP_header is an I-picture (the first picture of a GOP is an I-picture). Further data elements are defined in ITU-T Rec. H.262, but their illustration and description are omitted here.
- picture_header defines the header data of the picture layer of the MPEG bitstream.
- picture_coding_extension defines the extension data of the picture layer of the MPEG bitstream.
- picture_data describes data elements relating to the slice layer and macroblock layer of the MPEG bitstream.
- picture_data is divided into multiple slices, as shown in Fig. 2, and each slice is divided into multiple macroblocks.
- a macroblock is composed of 16 × 16 pixels of data.
- the first and last macroblocks of a slice are not skipped macroblocks (macroblocks without information).
- Each block is composed of 8 ⁇ 8 pixel data.
- for frame pictures, which can use either frame DCT (Discrete Cosine Transform) coding or field DCT coding, the internal structure of the macroblock differs between frame coding and field coding.
- a macroblock contains a section of the luminance component and the spatially corresponding chrominance components.
- the term macroblock refers either to source or decoded data or to the corresponding coded data elements.
- Macroblocks have three chrominance formats: 4:2:0, 4:2:2, and 4:4:4. The order of the blocks within a macroblock depends on the chrominance format.
- FIG. 3A shows a macroblock in the 4:2:0 format: it consists of four luminance (Y) blocks and one each of the chrominance (Cb, Cr) blocks.
- FIG. 3B shows a macroblock in the 4:2:2 format: it consists of four luminance (Y) blocks and two each of the chrominance (Cb, Cr) blocks.
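The block counts above can be summarized in a small sketch (Python; the function name is illustrative):

```python
# Blocks per 16x16 macroblock: always four 8x8 luminance (Y) blocks,
# plus chrominance (Cb, Cr) blocks depending on the chrominance format.
def blocks_per_macroblock(chroma_format: str) -> int:
    chroma_blocks = {
        "4:2:0": 1 + 1,  # one Cb block and one Cr block (Fig. 3A)
        "4:2:2": 2 + 2,  # two Cb blocks and two Cr blocks (Fig. 3B)
        "4:4:4": 4 + 4,  # four of each
    }
    return 4 + chroma_blocks[chroma_format]

print(blocks_per_macroblock("4:2:0"))  # 6
print(blocks_per_macroblock("4:2:2"))  # 8
```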
- Each macroblock can be subjected to predictive encoding by several methods.
- the prediction modes are roughly classified into two types: field prediction and frame prediction.
- in field prediction, one or more previously decoded fields are used, and each field is predicted independently.
- frame prediction uses one or more previously decoded frames. Within a field picture, all predictions are field predictions.
- a frame picture can be predicted by either field prediction or frame prediction, and the prediction method is selected for each macroblock.
- in addition, two special prediction modes can be used: 16 × 8 motion compensation and dual prime.
- the motion vector information and other peripheral information are encoded together with the prediction error signal of each macroblock.
- the last motion vector coded with the variable-length code is used as the prediction vector, and the difference vector from the prediction vector is coded.
- the maximum representable vector length can be programmed for each picture.
- the encoder calculates the appropriate motion vector.
- a sequence_header and sequence_extension are also arranged partway through the video stream.
- the data elements described by this sequence_header and sequence_extension are exactly the same as the data elements described by the sequence_header and sequence_extension at the beginning of the sequence of the video stream. The same data is repeated in the stream so that, when the receiving apparatus starts reception partway through the data stream (for example, from a portion of the bitstream corresponding to the picture layer), it does not fail to receive the sequence-layer data and become unable to decode the stream.
- at the end of the sequence, a 32-bit sequence_end_code indicating the end of the sequence is described.
- sequence_header consists of data elements such as sequence_header_code, horizontal_size_value, vertical_size_value, aspect_ratio_information, frame_rate_code, bit_rate_value, marker_bit, vbv_buffer_size_value, constrained_parameter_flag, load_intra_quantizer_matrix, intra_quantizer_matrix, load_non_intra_quantizer_matrix, and non_intra_quantizer_matrix.
- sequence_header_code is data representing the start synchronization code of the sequence layer.
- horizontal_size_value is data consisting of the lower 12 bits of the number of pixels in the horizontal direction of the image.
- vertical_size_value is data consisting of the lower 12 bits of the number of vertical lines of the image.
- frame_rate_code is data representing the display period of the image.
- bit_rate_value is the lower 18 bits of the bit rate for limiting the amount of generated bits.
- marker_bit is bit data inserted to prevent start code emulation.
- vbv_buffer_size_value is the lower 10 bits of the value that determines the size of the virtual buffer VBV (Video Buffering Verifier) for controlling the amount of generated code.
- constrained_parameter_flag is data indicating that each parameter is within its limit.
- load_intra_quantizer_matrix is data indicating the existence of quantization matrix data for intra macroblocks.
- intra_quantizer_matrix is data indicating the values of the quantization matrix for intra macroblocks.
- load_non_intra_quantizer_matrix is data indicating the existence of quantization matrix data for non-intra macroblocks.
- non_intra_quantizer_matrix is data representing the values of the quantization matrix for non-intra macroblocks.
- Fig. 5 shows the data structure of sequence_extension.
- sequence_extension consists of data elements such as extension_start_code, extension_start_code_identifier, profile_and_level_indication, progressive_sequence, chroma_format, horizontal_size_extension, vertical_size_extension, bit_rate_extension, marker_bit, vbv_buffer_size_extension, low_delay, frame_rate_extension_n, and frame_rate_extension_d.
- extension_start_code is a code representing the start synchronization code of the extension data.
- extension_start_code_identifier is data indicating which extension data is sent.
- profile_and_level_indication is data specifying the profile and level of the video data.
- progressive_sequence is data indicating that the video data is progressively scanned (a progressive image).
- chroma_format is data specifying the chrominance format of the video data.
- vertical_size_extension is the upper 2 bits of data added to the vertical_size_value of the sequence header.
- bit_rate_extension is the upper 12 bits of data added to the bit_rate_value of the sequence header.
- marker_bit is bit data inserted to prevent start code emulation.
- vbv_buffer_size_extension is the upper 8 bits of data added to the vbv_buffer_size_value of the sequence header.
- low_delay is data indicating that the sequence does not include B-pictures.
- frame_rate_extension_n is data combined with the frame_rate_code of the sequence header to obtain the frame rate.
- frame_rate_extension_d is data combined with the frame_rate_code of the sequence header to obtain the frame rate.
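How frame_rate_code combines with the two extension fields can be sketched as follows (the nominal-rate table and the (n + 1)/(d + 1) rule are taken from ISO/IEC 13818-2, not from the text above):

```python
from fractions import Fraction

# Nominal frame rates indexed by frame_rate_code (ISO/IEC 13818-2 Table 6-4).
NOMINAL_FRAME_RATE = {
    1: Fraction(24000, 1001), 2: Fraction(24), 3: Fraction(25),
    4: Fraction(30000, 1001), 5: Fraction(30), 6: Fraction(50),
    7: Fraction(60000, 1001), 8: Fraction(60),
}

def frame_rate(code, extension_n=0, extension_d=0):
    """Nominal rate for frame_rate_code, scaled by the two extension fields."""
    return NOMINAL_FRAME_RATE[code] * (extension_n + 1) / (extension_d + 1)

print(frame_rate(3))          # 25 (PAL)
print(float(frame_rate(4)))   # ~29.97 (NTSC)
```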
- Fig. 6 shows the data structure of GOP_header. GOP_header consists of data elements such as group_start_code, time_code, closed_gop, and broken_link.
- group_start_code is data indicating the start synchronization code of the GOP layer.
- time_code is a time code indicating the time of the first picture of the GOP.
- closed_gop is flag data indicating that the images in the GOP can be played back independently of other GOPs.
- broken_link is flag data indicating that the first B-picture in the GOP cannot be played back correctly due to editing or the like.
- Fig. 7 shows the data structure of picture_header.
- picture_header consists of data elements such as picture_start_code, temporal_reference, picture_coding_type, vbv_delay, full_pel_forward_vector, forward_f_code, full_pel_backward_vector, and backward_f_code.
- picture_start_code is data representing the start synchronization code of the picture layer.
- temporal_reference is a number indicating the display order of the picture, and is reset at the beginning of a GOP.
- picture_coding_type is data indicating the picture type.
- vbv_delay is data indicating the initial state of the virtual buffer at the time of random access.
- full_pel_forward_vector, forward_f_code, full_pel_backward_vector, and backward_f_code are fixed data not used in MPEG2.
- Fig. 8 shows the data structure of picture_coding_extension.
- picture_coding_extension consists of data elements such as extension_start_code, extension_start_code_identifier, f_code[0][0], f_code[0][1], f_code[1][0], f_code[1][1], intra_dc_precision, picture_structure, top_field_first, frame_pred_frame_dct, concealment_motion_vectors, q_scale_type, intra_vlc_format, alternate_scan, repeat_first_field, chroma_420_type, progressive_frame, composite_display_flag, v_axis, field_sequence, sub_carrier, burst_amplitude, and sub_carrier_phase.
- extension_start_code is a start code indicating the start of the extension data of the picture layer.
- extension_start_code_identifier is a code indicating which extension data is sent.
- f_code[0][0] is data representing the horizontal motion vector search range in the forward direction.
- f_code[0][1] is data representing the vertical motion vector search range in the forward direction.
- f_code[1][0] is data representing the horizontal motion vector search range in the backward direction.
- f_code[1][1] is data representing the vertical motion vector search range in the backward direction.
- intra_dc_precision is data representing the precision of the DC coefficients.
- Applying the DCT to the matrix f representing the luminance and chrominance signals of each pixel in a block yields an 8 × 8 matrix F of DCT coefficients.
- the coefficient at the upper-left corner of matrix F is called the DC coefficient.
- the DC coefficient is a signal representing the average luminance and average color difference within the block.
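This relationship can be checked numerically; the sketch below uses the standard 8 × 8 two-dimensional DCT normalization (C(0) = 1/√2, assumed here), under which the DC coefficient of a uniform block is 8 times the average sample value:

```python
import math

N = 8

def dct_coefficient(block, u, v):
    """One coefficient F(u, v) of the 2-D DCT of an 8x8 block."""
    c = lambda k: 1 / math.sqrt(2) if k == 0 else 1.0
    s = sum(block[y][x]
            * math.cos((2 * x + 1) * u * math.pi / (2 * N))
            * math.cos((2 * y + 1) * v * math.pi / (2 * N))
            for y in range(N) for x in range(N))
    return (2.0 / N) * c(u) * c(v) * s

# A flat block of average value 128: DC = 8 * 128, and all AC terms vanish.
flat = [[128] * N for _ in range(N)]
print(round(dct_coefficient(flat, 0, 0)))            # 1024
print(round(abs(dct_coefficient(flat, 1, 0)), 6))    # 0.0
```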
- picture_structure is data indicating whether the picture has a frame structure or a field structure; in the case of a field structure, it also indicates whether the field is the upper or the lower field.
- top_field_first is data indicating, in the case of a frame structure, whether the first field is the top or the bottom field.
- frame_pred_frame_dct is data indicating that, in the case of a frame structure, the frame-mode DCT prediction uses only the frame mode.
- concealment_motion_vectors is data indicating that intra macroblocks carry motion vectors for concealing transmission errors.
- q_scale_type is data indicating whether a linear or a non-linear quantization scale is used.
- intra_vlc_format is data indicating whether or not another two-dimensional variable-length code (VLC) is used for intra macroblocks.
- alternate_scan is data indicating the choice between zigzag scan and alternate scan.
- repeat_first_field is data used for 2:3 pull-down.
- chroma_420_type has the same value as the following progressive_frame if the chrominance format is 4:2:0, and is zero otherwise.
- progressive_frame is data indicating whether the picture is progressively scanned or interlaced.
- composite_display_flag is data indicating whether the source signal was a composite signal.
- FIG. 9 shows the data structure of picture_data.
- the data elements defined by the picture_data() function are the data elements defined by the slice() function. At least one set of data elements defined by the slice() function is described in the bitstream.
- slice_start_code is a start code indicating the start of the data elements defined by the slice() function.
- quantiser_scale_code is data indicating the quantization step size set for the macroblocks in this slice layer; however, when a quantiser_scale_code is set for an individual macroblock, the quantiser_scale_code data set for that macroblock is used preferentially.
- intra_slice_flag is a flag indicating whether or not intra_slice and reserved_bits are present in the bitstream.
- intra_slice is data indicating whether or not a non-intra macroblock exists in the slice layer: if any of the macroblocks in the slice layer is a non-intra macroblock, intra_slice is "0", and if all of the macroblocks in the slice layer are intra macroblocks, intra_slice is "1". reserved_bits is 7-bit data that takes the value "0".
- extra_bit_slice is a flag indicating that additional information is present; it is set to "1" if extra_information_slice is present, and to "0" if no additional information is present.
- the macroblock() function is a function for describing data elements such as macroblock_escape, macroblock_address_increment, quantiser_scale_code, and marker_bit, as well as the data elements defined by the macroblock_modes() function, the motion_vectors(s) function, and the coded_block_pattern() function.
- macroblock_escape is a fixed bit string indicating whether or not the horizontal difference between the reference macroblock and the previous macroblock is 34 or more. If the horizontal difference between the reference macroblock and the previous macroblock is 34 or more, 33 is added to the value of macroblock_address_increment.
- macroblock_address_increment is a value indicating the horizontal difference between the reference macroblock and the previous macroblock. If one macroblock_escape precedes macroblock_address_increment, the actual horizontal difference between the reference macroblock and the previous macroblock is the value of macroblock_address_increment plus 33.
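The address rule just described can be sketched as a small helper (illustrative; this is not the patent's decoder 15):

```python
def macroblock_address_delta(address_increment: int, escape_count: int = 0) -> int:
    """Actual horizontal difference between the reference macroblock and
    the previous macroblock: macroblock_address_increment codes 1..33,
    and each preceding macroblock_escape adds 33."""
    if not 1 <= address_increment <= 33:
        raise ValueError("macroblock_address_increment must be 1..33")
    return address_increment + 33 * escape_count

print(macroblock_address_delta(1))      # 1: the adjacent macroblock
print(macroblock_address_delta(2, 1))   # 35: one escape, then increment 2
```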
- quantiser_scale_code is data indicating the quantization step size set for each macroblock, and is present only when macroblock_quant is "1".
- a quantiser_scale_code indicating the quantization step size of the slice layer is also set; however, when a quantiser_scale_code is set for the reference macroblock, that quantization step size is selected.
- following macroblock_address_increment, the data elements defined by the macroblock_modes() function are described.
- the macroblock_modes() function is a function for describing data elements such as macroblock_type, frame_motion_type, field_motion_type, and dct_type, as shown in Figure 12.
- macroblock_type is data indicating the coding type of the macroblock.
- when frame_pred_frame_dct is "0", frame_motion_type is described next to macroblock_type.
- frame_pred_frame_dct is a flag indicating whether or not frame_motion_type exists in the bitstream.
- frame_motion_type is a 2-bit code indicating the prediction type of the macroblock of a frame picture. frame_motion_type is "00" for a field-based prediction type with two prediction vectors, "01" for a field-based prediction type with one prediction vector, "10" for a frame-based prediction type with one prediction vector, and "11" for a dual-prime prediction type with one prediction vector.
- field_motion_type is a 2-bit code indicating the motion prediction of the macroblock of a field. field_motion_type is "01" for a field-based prediction type with one prediction vector, "10" for a 16 × 8 macroblock-based prediction type with two prediction vectors, and "11" for a dual-prime prediction type with one prediction vector.
- when the picture structure is a frame, frame_pred_frame_dct indicates that frame_motion_type is present in the bitstream, and dct_type is present in the bitstream, the data element representing dct_type is described next to the data element representing macroblock_type.
- dct_type is data indicating whether the DCT is in frame DCT mode or field DCT mode.
- each of the data elements described above starts with a special bit pattern called a start code.
- start codes are specific bit patterns that do not otherwise appear in the video stream.
- Each start code consists of a start code prefix followed by a start code value.
- the start code prefix is the bit string "0000 0000 0000 0000 0000 0001".
- the start code value is an 8-bit integer that identifies the type of start code.
- FIG. 13 shows the value of each start code of MPEG2.
- Most start codes are indicated by a single start code value.
- slice_start_code, however, is represented by a range of start code values from 01 through AF, where the start code value represents the vertical position of the slice. All of these start codes are byte-aligned; multiple "0" bits are inserted before the start code prefix so that the first bit of the start code prefix is the first bit of a byte, aligning the start code to a byte boundary.
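Because every start code is byte-aligned and the 24-bit prefix (0x000001 as three bytes) does not otherwise appear in the stream, start codes can be located with a plain byte scan. A minimal sketch (illustrative; not the patent's start code detection circuit 14):

```python
def find_start_codes(data: bytes):
    """Yield (byte_offset, start_code_value) for each byte-aligned
    00 00 01 prefix followed by its 8-bit start code value."""
    i = data.find(b"\x00\x00\x01")
    while i != -1 and i + 3 < len(data):
        yield i, data[i + 3]
        i = data.find(b"\x00\x00\x01", i + 3)

# 0xB3 is sequence_header_code and 0x00 is picture_start_code (Fig. 13).
stream = b"\x00\x00\x01\xB3" + b"\x12\x34" + b"\x00\x00\x01\x00"
print(list(find_start_codes(stream)))   # [(0, 179), (6, 0)]
```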
- Fig. 14 is a block diagram showing the circuit configuration of a conventional MPEG video decoder supporting MP@ML.
- the MPEG video decoder consists of an IC (integrated circuit) 1, which comprises a stream input circuit 11, a buffer control circuit 12, a clock generation circuit 13, a start code detection circuit 14, a decoder 15, a motion compensation circuit 16, and a display output circuit 17, and a buffer 2, composed of, for example, DRAM, which comprises a stream buffer 21 and a video buffer 22.
- DRAM (Dynamic Random Access Memory)
- the stream input circuit 11 of the IC 1 receives the input of the highly efficiently coded stream and supplies it to the buffer control circuit 12.
- the buffer control circuit 12 writes the input coded stream into the stream buffer 21 of the buffer 2 in accordance with the basic clock supplied from the clock generation circuit 13.
- the stream buffer 21 has a capacity of 1,835,008 bits, which is the VBV buffer size required for decoding MP@ML.
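The 1,835,008-bit figure is consistent with the MPEG2 coding of vbv_buffer_size in 16384-bit units (MP@ML allows up to 112 such units; the unit size and limit are taken from ISO/IEC 13818-2, not from the text above):

```python
VBV_UNIT_BITS = 16 * 1024      # vbv_buffer_size is coded in units of 16384 bits
MP_ML_MAX_VBV_UNITS = 112      # upper limit of vbv_buffer_size at MP@ML

capacity_bits = MP_ML_MAX_VBV_UNITS * VBV_UNIT_BITS
print(capacity_bits)           # 1835008, the capacity of stream buffer 21
```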
- the coded stream stored in the stream buffer 21 is read out sequentially, beginning with the data written first, under the control of the buffer control circuit 12, and supplied to the start code detection circuit 14.
- the start code detection circuit 14 detects the start codes described with reference to FIG. 13 in the input stream, and outputs the detected start codes together with the input stream to the decoder 15.
- the decoder 15 decodes the input stream based on the MPEG syntax.
- based on the input start codes, the decoder 15 first decodes the header parameters of each layer, uses them to separate the slice layer into macroblocks, decodes the macroblocks, and outputs the resulting prediction vectors and pixel data to the motion compensation circuit 16.
- coding efficiency is improved by using the temporal redundancy of images to encode motion-compensated differences between adjacent images.
- the MPEG video decoder decodes the image data of a pixel that uses motion compensation by adding the pixel value of the reference image indicated by the motion vector to the pixel currently being decoded.
- when motion compensation is not used, the motion compensation circuit 16 writes the pixel data output from the decoder 15 to the video buffer 22 of the buffer 2 via the buffer control circuit 12, both to prepare for display output and in case the pixel data is used as reference data for another image.
- when motion compensation is used, the motion compensation circuit 16 reads the reference pixel data from the video buffer 22 of the buffer 2 via the buffer control circuit 12 in accordance with the prediction vector output from the decoder 15, and adds the read reference pixel data to the pixel data supplied from the decoder 15 to perform motion compensation.
- the motion compensation circuit 16 writes the motion-compensated pixel data to the video buffer 22 of the buffer 2 via the buffer control circuit 12, both to prepare for display output and in case the pixel data is used as reference data for other images.
- the display output circuit 17 generates the synchronization timing signal for outputting the decoded image data, reads the pixel data from the video buffer 22 via the buffer control circuit 12 based on this timing, and outputs it as the decoded video signal.
- the MPEG2 stream has a hierarchical structure.
- the amount of data below the slice layer depends on the number of pixels to be coded.
- the number of macroblocks that must be processed in one picture at HL is about six times that at ML. Further, as can be seen from FIG. 3B, at 4:2:2P the number of blocks processed per macroblock is 4/3 times that at MP.
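A rough sketch of the two factors (the maximum frame sizes 1920 × 1152 for HL and 720 × 576 for ML are assumed here; they are not quoted in the text above, and the higher HL frame rate also contributes to the "about six times"):

```python
# Macroblocks per picture at the assumed maximum frame sizes.
def macroblocks_per_picture(width: int, height: int) -> int:
    return (width // 16) * (height // 16)

ml_mbs = macroblocks_per_picture(720, 576)     # 1620
hl_mbs = macroblocks_per_picture(1920, 1152)   # 8640

print(hl_mbs / ml_mbs)    # ~5.3x more macroblocks per picture at HL
print((4 + 4) / (4 + 2))  # 8 blocks/macroblock at 4:2:2 vs 6 at 4:2:0 -> 4/3
```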
- in 4:2:2P@HL, the VBV buffer size and the number of pixels increase, so the buffer size of the stream buffer 21 becomes insufficient.
- as the bit rate increases, accesses of the input stream to the stream buffer 21 increase, and as the number of pixels increases, accesses of the motion compensation circuit 16 to the video buffer 22 increase; because the number of accesses it must arbitrate increases, the control of the buffer control circuit 12 can no longer keep up. Further, with the increases in the bit rate and in the numbers of macroblocks and blocks, the processing of the decoder 15 can no longer keep up.
- a first decoding device includes: a plurality of decoding means for decoding an encoded stream; and decoding control means for controlling the plurality of decoding means so that they operate in parallel.
- the plurality of decoding means may output a signal indicating the end of decoding processing to the decoding control means.
- the decoding control means can control a decoding means that has output the signal indicating the end of decoding processing to decode the encoded stream.
- the device may further include: first buffer means for buffering the encoded stream; reading means for reading, from the encoded stream, a start code indicating the start of a unit of predetermined information contained in the encoded stream, and for reading position information on the position at which the start code is held in the first buffer means; second buffer means for buffering the start code and the position information read by the reading means; and buffering control means for controlling the buffering of the encoded stream by the first buffer means and the buffering of the start code and position information by the second buffer means.
- the encoded stream may be an MPEG2 encoded stream as specified in ISO/IEC 13818-2 and ITU-T Recommendation H.262.
- the device may further include: selecting means for selecting a predetermined one of the plurality of image data decoded and output by the plurality of decoding means; and motion compensation means for receiving the image data selected by the selecting means and performing motion compensation as necessary.
- the decoding means may output to the selecting means an end signal indicating that decoding processing has ended, and the selecting means may include storage means for storing values corresponding to the respective processing states of the plurality of decoding means. When the values in the storage means have all become the first value, the value stored in the storage means corresponding to a decoding means that has output the end signal indicating that decoding processing has ended is changed from the first value to the second value; one of the image data decoded by the decoding means whose corresponding value stored in the storage means is the second value is selected; and the value stored in the storage means corresponding to the decoding means that decoded the selected image data is changed to the first value.
- the device may further include: holding means for holding the image data selected by the selecting means or the image data on which motion compensation has been performed by the motion compensation means; and holding control means for controlling the holding, by the holding means, of the image data selected by the selecting means and the image data motion-compensated by the motion compensation means.
- the holding means may hold the luminance component and the color difference component of the image data separately.
- the device may further include changing means for changing the order of the frames of the encoded stream supplied to the decoding means; the holding means can hold at least two more frames than the total number of intra-coded frames and forward-predictive-coded frames in the image sequence; and the changing means can change the order of the frames in the encoded stream to a predetermined order for reverse playback of the encoded stream.
- Output means for reading and outputting the image data held by the holding means may be further provided.
- the predetermined order may be an order in which the intra-coded frame, the forward-predictive-coded frames, and the bidirectionally-predictive-coded frames are arranged with the order within the bidirectionally-predictive-coded frames reversed from the coding order; the output means sequentially reads and outputs the bidirectionally-predictive-coded frames that have been decoded by the decoding means and are held by the holding means, and, at predetermined timing, reads an intra-coded frame or forward-predictive-coded frame held by the holding means, inserts it at a predetermined position between the bidirectionally-predictive-coded frames, and outputs it.
- The predetermined order may be an order such that, at the timing at which the output means outputs an intra-coded frame or a forward predictive coded frame, the intra-coded frame or forward predictive coded frame of the immediately preceding image sequence decoded by the decoding means is held by the holding means.
- Recording means for recording information necessary for decoding the encoded stream, and control means for controlling the recording of the information by the recording means and the supply of the information to the decoding means, may be further provided.
- The encoded stream may include the information, and the control means can select the information necessary for the decoding processing of the decoding means and supply it to the decoding means.
- The information supplied by the control means to the decoding means may be upper-layer coding parameters corresponding to the frame being decoded by the decoding means.
- Output means for reading and outputting the image data held by the holding means may be further provided; the decoding means can decode at N times the processing speed required for normal reproduction of the encoded stream, and the output means can output the image data of every N-th frame among the image data held by the holding means.
- First holding means for holding the encoded stream; reading means for reading, from the encoded stream, a start code indicating the start of a unit of predetermined information contained in the encoded stream, and for reading position information on the position where the start code is held in the first holding means; second holding means for holding the start code and the position information read by the reading means; first holding control means for controlling the holding of the encoded stream by the first holding means and the holding of the start code and the position information by the second holding means; selection means for selecting a predetermined one of the plurality of image data decoded and output by the plurality of decoding means; motion compensation means for receiving the input of the image data selected by the selection means and performing motion compensation as necessary; third holding means for holding the image data selected by the selection means or the image data on which motion compensation has been performed by the motion compensation means; and second holding control means for controlling, independently of the first holding control means, the holding by the third holding means of the image data selected by the selection means and the image data on which motion compensation has been performed by the motion compensation means, may be further provided.
- A first decoding method of the present invention includes a plurality of decoding steps of decoding an encoded stream, and a decoding control step of controlling the processing of the plurality of decoding steps so that they operate in parallel.
- The program recorded on the first recording medium of the present invention is characterized by including a plurality of decoding steps of decoding an encoded stream, and a decoding control step of controlling the processing of the plurality of decoding steps so that they operate in parallel.
- A first program of the present invention is characterized by including a plurality of decoding steps of decoding an encoded stream, and a decoding control step of controlling the processing of the plurality of decoding steps so that they operate in parallel.
- A second decoding apparatus of the present invention includes a plurality of slice decoders for decoding an encoded stream, and slice decoder control means for controlling the plurality of slice decoders so that they operate in parallel.
- A second decoding method of the present invention is characterized by including a decoding control step of controlling decoding by a plurality of slice decoders that decode an encoded stream, and a slice decoder control step of controlling the decoding control steps so that they are processed in parallel.
- A program recorded on the second recording medium of the present invention is characterized by including a decoding control step of controlling decoding by a plurality of slice decoders that decode an encoded stream, and a slice decoder control step of controlling the decoding control steps so that they are processed in parallel.
- A second program of the present invention is characterized by including a decoding control step of controlling decoding by a plurality of slice decoders that decode an encoded stream, and a slice decoder control step of controlling the decoding control steps so that they are processed in parallel.
- A third decoding apparatus of the present invention includes a plurality of slice decoders for decoding a source encoded stream for each slice constituting a picture of the source encoded stream, and control means for monitoring the decoding status of the plurality of slice decoders and controlling the plurality of slice decoders, wherein the control means assigns slices to the plurality of slice decoders, irrespective of the order of the slices included in the picture, so that decoding of the picture by the slice decoders is fastest.
- A third decoding method of the present invention is characterized by including a decoding processing control step of controlling decoding of a source encoded stream by a plurality of slice decoders for each slice constituting a picture of the source encoded stream, and a control step of monitoring the decoding status of the plurality of slice decoders and controlling the plurality of slice decoders; in the control step, irrespective of the order of the slices included in the picture, slices are assigned to the plurality of slice decoders so that the decoding processing executed by the slice decoders is fastest.
- A third program of the present invention is characterized by including a decoding processing control step of controlling decoding of a source encoded stream by a plurality of slice decoders for each slice constituting a picture of the source encoded stream, and a control step of monitoring the decoding status of the plurality of slice decoders and controlling the plurality of slice decoders; in the control step, irrespective of the order of the slices included in the picture, slices are assigned to the plurality of slice decoders so that the decoding processing executed by the slice decoders is fastest.
- A fourth decoding apparatus of the present invention includes a plurality of slice decoders for decoding a source encoded stream for each slice constituting a picture of the source encoded stream, and control means for monitoring the decoding status of the plurality of slice decoders and controlling the plurality of slice decoders, wherein the control means, irrespective of the order of the slices included in the picture, assigns a slice to be decoded to whichever of the plurality of slice decoders has completed its decoding.
- A fourth decoding method of the present invention includes a decoding processing control step of controlling decoding of a source encoded stream by a plurality of slice decoders for each slice constituting a picture of the source encoded stream, wherein a slice to be decoded is assigned to whichever of the plurality of slice decoders has completed its decoding processing in the processing of the decoding processing control step.
- A fourth program of the present invention is characterized by including a decoding processing control step of controlling decoding of a source encoded stream by a plurality of slice decoders for each slice constituting a picture of the source encoded stream, and a control step of monitoring the decoding status of the plurality of slice decoders and controlling the plurality of slice decoders; irrespective of the order of the slices included in the picture, a slice to be decoded is assigned to whichever of the plurality of slice decoders has completed its decoding processing in the processing of the decoding processing control step.
- In the first decoding apparatus, decoding method, and program, the encoded stream is decoded, and the decoding processing is controlled so that the decoding processes operate in parallel.
- an encoded stream is decoded by a plurality of slice decoders, and decoding processing by the plurality of slice decoders is performed in parallel.
- In the third decoding apparatus, decoding method, and program, the source encoded stream is decoded for each slice constituting a picture of the source encoded stream, the decoding status of the plurality of slice decoders is monitored, and the plurality of slice decoders are controlled; irrespective of the order of the slices included in the picture, slices are assigned so that the decoding processing in the slice decoders is fastest.
- In the fourth decoding apparatus, decoding method, and program, the source encoded stream is decoded for each slice constituting a picture of the source encoded stream, the decoding status of the plurality of slice decoders is monitored, and the plurality of slice decoders are controlled; irrespective of the order of the slices included in the picture, a slice to be decoded is assigned to whichever of the plurality of slice decoders has finished decoding.
- FIG. 1 is a diagram for explaining an upper limit value of each parameter according to a profile and a level of MPEG2.
- FIG. 2 is a diagram for explaining a hierarchical structure of an MPEG2 bit stream.
- FIGS. 3A and 3B are diagrams for explaining the macroblock layer.
- FIG. 4 is a diagram for explaining the data structure of sequence_header.
- FIG. 5 is a diagram for explaining the data structure of sequence_extension.
- FIG. 6 is a diagram for explaining the data structure of GOP_header.
- FIG. 7 is a diagram for explaining the data structure of picture_header.
- FIG. 8 is a diagram for explaining the data structure of picture_coding_extension.
- FIG. 9 is a diagram for explaining the data structure of picture_data.
- FIG. 10 is a diagram for explaining the data structure of slice.
- FIG. 11 is a diagram for explaining the data structure of a macroblock.
- FIG. 12 is a diagram for explaining the data structure of macroblock_modes.
- FIG. 13 is a diagram for explaining a start code.
- FIG. 14 is a block diagram showing a configuration of a video decoder that decodes a conventional MPEG encoded stream.
- FIG. 15 is a block diagram showing a configuration of a video decoder to which the present invention is applied.
- FIG. 16 is a flowchart for explaining the processing of the slice decoder control circuit.
- FIG. 17 is a diagram for explaining a specific example of the processing of the slice decoder control circuit.
- FIG. 18 is a flowchart for explaining the arbitration processing of the slice decoder by the motion compensation circuit.
- FIG. 19 is a diagram for explaining a specific example of the arbitration processing of the slice decoder by the motion compensation circuit.
- FIG. 20 is a block diagram showing a configuration of a playback device including the MPEG video decoder of FIG. 15.
- FIG. 21 is a diagram showing the structure of the MPEG video signal input to the encoder and encoded.
- FIG. 22 is a diagram illustrating an example of MPEG image encoding using inter-frame prediction.
- FIG. 23 is a diagram for explaining the decoding process when the MPEG encoded stream is reproduced in the forward direction.
- FIG. 24 is a diagram for explaining the decoding process when the MPEG encoded stream is reproduced in reverse.
- BEST MODE FOR CARRYING OUT THE INVENTION Hereinafter, an embodiment of the present invention will be described with reference to the drawings.
- FIG. 15 is a block diagram showing a circuit configuration of an MPEG video decoder to which the present invention is applied.
- The MPEG video decoder in FIG. 15 includes an IC 31 composed of a stream input circuit 41, a start code detection circuit 42, a stream buffer control circuit 43, a clock generation circuit 44, a picture decoder 45, a slice decoder control circuit 46, slice decoders 47 to 49, a motion compensation circuit 50, a luminance buffer control circuit 51, a color difference buffer control circuit 52, and a display output circuit 53; a buffer 32 composed of, for example, DRAM, including a stream buffer 61 and a start code buffer 62; a video buffer 33 composed of, for example, DRAM, including a luminance buffer 71 and a color difference buffer 72; a controller 34; and a drive 35.
- The stream input circuit 41 receives the input of the highly efficiently encoded stream and supplies it to the start code detection circuit 42.
- The start code detection circuit 42 supplies the input encoded stream to the stream buffer control circuit 43, detects the start codes described with reference to FIG. 13, and, for each detected start code, generates start code information including the type of the start code and a write pointer indicating the position where the start code is written in the stream buffer 61, which it supplies to the stream buffer control circuit 43.
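The detection just described can be sketched in software form. The following Python sketch is illustrative only (the circuit is hardware, and the function and table names are invented): it scans a byte stream for the byte-aligned 0x00 0x00 0x01 prefix of an MPEG-2 start code and records, for each one found, its type and a write pointer (its byte offset).

```python
# Hypothetical sketch of start code detection. MPEG-2 start codes are the
# byte-aligned prefix 0x00 0x00 0x01 followed by a one-byte code value;
# values 0x01-0xAF are slice_start_codes. Names are invented for this sketch.
START_CODE_NAMES = {
    0x00: "picture_start_code",
    0xB3: "sequence_header_code",
    0xB5: "extension_start_code",
    0xB8: "group_start_code",
}

def scan_start_codes(stream: bytes):
    """Return (name, write_pointer) pairs, where write_pointer is the byte
    offset at which the start code is stored in the stream buffer."""
    found = []
    i = 0
    while i + 3 < len(stream):
        if stream[i] == 0 and stream[i + 1] == 0 and stream[i + 2] == 1:
            value = stream[i + 3]
            if 0x01 <= value <= 0xAF:
                name = "slice_start_code"
            else:
                name = START_CODE_NAMES.get(value, "other")
            found.append((name, i))
            i += 4
        else:
            i += 1
    return found
```

For instance, scanning a buffer containing a sequence_header_code followed by a slice_start_code yields both entries together with their offsets, which is exactly the pairing the start code buffer 62 holds.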
- The clock generation circuit 44 generates a basic clock at twice the frequency of that of the clock generation circuit 13 described with reference to FIG. 14, and supplies it to the stream buffer control circuit 43.
- The stream buffer control circuit 43 writes the input encoded stream to the stream buffer 61 of the buffer 32 in accordance with the basic clock supplied from the clock generation circuit 44, and writes the input start code information to the start code buffer 62 of the buffer 32.
- The stream buffer 61 has a capacity of at least 47,185,920 bits, which is the VBV buffer size required for decoding 4:2:2P@HL. In addition, the stream buffer 61 has a capacity capable of recording at least 2 GOPs of data.
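As a quick arithmetic check on the figure quoted above (this note is not part of the patent), the 4:2:2P@HL VBV buffer size is exactly 45 binary megabits:

```python
# The stream buffer must hold at least the 4:2:2P@HL VBV buffer size
# quoted above; the figure works out to exactly 45 * 2^20 bits.
vbv_bits = 47_185_920
vbv_bytes = vbv_bits // 8

assert vbv_bits == 45 * 1024 * 1024   # 45 "binary" megabits
assert vbv_bytes == 5_898_240         # about 5.9 MB of buffer DRAM
```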
- The picture decoder 45 reads the start code information from the start code buffer 62 via the stream buffer control circuit 43. For example, at the start of decoding, decoding starts from the sequence_header described with reference to FIG. 2, so the picture decoder 45 reads the write pointer corresponding to the sequence_header_code described with reference to FIG. 13 from the start code buffer 62, and, based on the write pointer, reads and decodes the sequence_header from the stream buffer 61. Subsequently, the picture decoder 45 reads and decodes sequence_extension, GOP_header, picture_coding_extension, and so on from the stream buffer 61 in the same manner as the sequence_header.
- When the picture decoder 45 reads the first slice_start_code from the start code buffer 62, all the parameters necessary for decoding the picture are complete.
- The picture decoder 45 outputs the decoded picture layer parameters to the slice decoder control circuit 46.
- the slice decoder control circuit 46 receives the input of the picture layer parameter, and reads out the start code information of the corresponding slice from the start code buffer 62 via the stream buffer control circuit 43.
- The slice decoder control circuit 46 has a register indicating the number N of the slice included in the encoded stream that is next to be decoded by one of the slice decoders 47 to 49. Referring to this register, it supplies the picture layer parameters and the write pointer of the slice included in the start code information to one of the slice decoders 47 to 49.
- The process by which the slice decoder control circuit 46 selects which of the slice decoders 47 to 49 is to execute the decoding will be described later with reference to FIGS. 16 and 17.
- the slice decoder 47 includes a macroblock detection circuit 81, a vector decoding circuit 82, an inverse quantization circuit 83, and an inverse DCT circuit 84.
- The slice decoder 47 reads the corresponding slice from the stream buffer 61 via the stream buffer control circuit 43 based on the write pointer, decodes the read slice in accordance with the picture layer parameters input from the slice decoder control circuit 46, and outputs the result to the motion compensation circuit 50.
- The macroblock detection circuit 81 separates the macroblocks of the slice layer, decodes the parameters of each macroblock, supplies the variable-length coded prediction mode and prediction vector of each macroblock to the vector decoding circuit 82, and supplies the variable-length coded coefficient data to the inverse quantization circuit 83.
- The vector decoding circuit 82 decodes the variable-length coded prediction mode and prediction vector of each macroblock, restoring the prediction vector.
- The inverse quantization circuit 83 decodes and inversely quantizes the variable-length coded coefficient data and supplies it to the inverse DCT circuit 84.
- the inverse DCT circuit 84 performs inverse DCT on the decoded coefficient data, and restores the pixel data before encoding.
- The slice decoder 47 requests the motion compensation circuit 50 to perform motion compensation on the decoded macroblock (that is, sets the signal indicated as REQ in the figure to 1), receives from the motion compensation circuit 50 the signal indicating acceptance of the execution request (the signal indicated as ACK in the figure), and supplies the decoded prediction vector and the decoded pixels to the motion compensation circuit 50.
- After receiving the ACK signal and supplying the decoded prediction vector and the decoded pixels to the motion compensation circuit 50, the slice decoder 47 changes the REQ signal from 1 to 0. Then, when decoding of the next input macroblock is completed, it changes the REQ signal from 0 to 1 again.
- Since the slice decoders 48 and 49 perform the same processing as the macroblock detection circuit 81 to the inverse DCT circuit 84 of the slice decoder 47, their description is omitted.
- The motion compensation circuit 50 has three registers, Reg_REQ_A, Reg_REQ_B, and Reg_REQ_C, which indicate whether or not motion compensation of the data input from the slice decoders 47 to 49 has been completed.
- The motion compensation circuit 50 appropriately selects one of the slice decoders 47 to 49, accepts its motion compensation execution request (that is, outputs an ACK signal in response to the REQ signal and receives the input of the prediction vector and pixels), and executes the motion compensation processing. At this time, the motion compensation circuit 50 notes, at a predetermined timing, which of the slice decoders 47 to 49 have their REQ signal at 1, and accepts the next motion compensation requests only after completing motion compensation for each of them.
- For example, the second motion compensation request from the slice decoder 47 is not accepted until the motion compensation for the slice decoder 48 and the slice decoder 49 has been completed.
- The process by which the motion compensation circuit 50 selects which of the slice decoders 47 to 49 to perform motion compensation for will be described later with reference to FIGS. 18 and 19.
- If the pixel data is luminance data, the motion compensation circuit 50 writes it to the luminance buffer 71 of the video buffer 33 via the luminance buffer control circuit 51; if the pixel data is color difference data, it writes it to the color difference buffer 72 of the video buffer 33 via the color difference buffer control circuit 52, both to prepare for display output and in case this pixel data is used as reference data for another image.
- When the macroblock uses motion compensation, according to the prediction vector input from the corresponding one of the slice decoders 47 to 49, the motion compensation circuit 50 reads the reference pixel data from the luminance buffer 71 via the luminance buffer control circuit 51 if the pixel data is luminance data, and from the color difference buffer 72 via the color difference buffer control circuit 52 if the pixel data is color difference data. Then, the motion compensation circuit 50 adds the read reference pixel data to the pixel data supplied from the slice decoder, thereby performing motion compensation.
- The motion compensation circuit 50 writes the motion-compensated pixel data to the luminance buffer 71 via the luminance buffer control circuit 51 if it is luminance data, and to the color difference buffer 72 via the color difference buffer control circuit 52 if it is color difference data, to prepare for display output and in case this pixel data is used as reference data for other pixels.
- The display output circuit 53 generates a synchronization timing signal for outputting the decoded image data and, in accordance with this timing, reads the luminance data from the luminance buffer 71 via the luminance buffer control circuit 51 and the color difference data from the color difference buffer 72 via the color difference buffer control circuit 52, and outputs them as a decoded video signal.
- The drive 35 is connected to the controller 34, and sends and receives data to and from a magnetic disk 101, an optical disk 102, a magneto-optical disk 103, and a semiconductor memory 104 mounted as necessary.
- the controller 34 controls the operation of the IC 31 and the drive 35 described above.
- For example, the controller 34 can operate according to programs recorded on the magnetic disk 101, optical disk 102, magneto-optical disk 103, or semiconductor memory 104 mounted on the drive 35, and can cause the IC 31 to execute the processing.
- In step S2, the slice decoder control circuit 46 determines whether the slice decoder 47 is processing.
- If it is determined in step S2 that the slice decoder 47 is not processing, in step S3 the slice decoder control circuit 46 supplies the picture layer parameters and the write pointer of the slice N included in the start code information to the slice decoder 47; the slice decoder 47 decodes the slice N, and the process proceeds to step S8. If it is determined in step S2 that the slice decoder 47 is processing, the slice decoder control circuit 46 determines in step S4 whether the slice decoder 48 is processing. If it is determined in step S4 that the slice decoder 48 is not processing, in step S5 the slice decoder control circuit 46 supplies the picture layer parameters and the write pointer of the slice N included in the start code information to the slice decoder 48; the slice decoder 48 decodes the slice N, and the process proceeds to step S8.
- If it is determined in step S4 that the slice decoder 48 is processing, the slice decoder control circuit 46 determines in step S6 whether the slice decoder 49 is processing. If it is determined in step S6 that the slice decoder 49 is processing, the process returns to step S2, and the subsequent processing is repeated.
- If it is determined in step S6 that the slice decoder 49 is not processing, in step S7 the slice decoder control circuit 46 supplies the picture layer parameters and the write pointer of the slice N included in the start code information to the slice decoder 49; the slice decoder 49 decodes the slice N, and the process proceeds to step S8.
- In step S9, the slice decoder control circuit 46 determines whether decoding of all slices has been completed. If it is determined in step S9 that decoding of all slices has not been completed, the process returns to step S2, and the subsequent processing is repeated. If it is determined in step S9 that decoding of all slices has been completed, the processing ends.
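The allocation loop of steps S2 to S9 can be sketched as follows. This is an illustrative software model only, not the circuit itself: the decoder API (`busy`, `decode`, `finish`) and the class name are invented, and waiting for a completion signal is simplified to freeing all busy decoders at once.

```python
# Illustrative sketch of the allocation loop of FIG. 16 (steps S2-S9).
# Slice N is handed to the first idle slice decoder, checked in the fixed
# order 47 -> 48 -> 49; if all three are busy, the check is repeated.
class FakeDecoder:
    """Stand-in for slice decoders 47 to 49 (invented API)."""
    def __init__(self):
        self.busy = False
        self.done = []

    def decode(self, n):
        self.busy = True
        self.done.append(n)

    def finish(self):
        self.busy = False

def allocate_slices(num_slices, decoders):
    n = 1
    while n <= num_slices:              # step S9: all slices decoded?
        for dec in decoders:            # steps S2, S4, S6: busy checks in order
            if not dec.busy:
                dec.decode(n)           # steps S3/S5/S7: supply write pointer
                n += 1                  # step S8: N = N + 1
                break                   # back to step S2 for the next slice
        else:
            for dec in decoders:        # all busy: wait for completion signals,
                dec.finish()            # then re-check from step S2
```

With three decoders and five slices, slices 1 to 3 go to the decoders in turn, and slices 4 and 5 are assigned as soon as decoders become free again.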
- FIG. 17 is a diagram illustrating a specific example of the processing of the slice decoder control circuit 46 described with reference to FIG.
- the picture decoder 45 decodes the picture layer data, and the parameters are supplied to the slice decoder control circuit 46.
- In step S9, since it is determined that decoding of all slices has not been completed, the process returns to step S2.
- In step S2, it is determined that the slice decoder 47 is processing. Then, in step S4, it is determined that the slice decoder 48 is not processing.
- In step S2, the slice decoder 47 is determined to be processing, and in step S4, the slice decoder 48 is determined to be processing.
- In step S8, N = N + 1 is set.
- In step S9, since it is determined that decoding of all slices has not been completed, the process returns to step S2.
- After performing the decoding process on an input slice, each of the slice decoders 47 to 49 outputs a signal indicating completion of the decoding process to the slice decoder control circuit 46. That is, since the slice decoders 47 to 49 are all processing until a signal indicating completion of decoding of the slice 2 is input from one of them, steps S2, S4, and S6 are repeated. At the timing indicated by A in FIG. 17, the slice decoder 48 outputs a signal indicating completion of the decoding process to the slice decoder control circuit 46.
- In the loop of steps S2, S4, and S6, when the slice decoder control circuit 46 receives the signal indicating the end of decoding of the slice 3 from the slice decoder 49 at the timing indicated by B in the figure, it is determined in step S6 that the slice decoder 49 is not processing.
- In step S9, it is determined that decoding of all slices has not been completed, so the process returns to step S2.
- the same processing is repeated until the decoding of the last slice is completed.
- the slice decoder control circuit 46 allocates the slice decoding process while referring to the processing status of the slice decoders 47 to 49, a plurality of decoders can be used efficiently.
- In step S22, the motion compensation circuit 50 determines whether or not all the register values are 0. If it is determined in step S22 that the register values are not all 0 (that is, at least one is 1), the process proceeds to step S24.
- The slice decoder 47 outputs, to the motion compensation circuit 50, the prediction vector decoded by the vector decoding circuit 82 and the pixels subjected to the inverse DCT by the inverse DCT circuit 84. Then, the process proceeds to step S30.
- the slice decoder 48 outputs, to the motion compensation circuit 50, the prediction vector decoded by the vector decoding circuit 86 and the pixel subjected to the inverse DCT by the inverse DCT circuit 88. Then, the process proceeds to step S30.
- The slice decoder 49 outputs, to the motion compensation circuit 50, the prediction vector decoded by the vector decoding circuit 90 and the pixels subjected to the inverse DCT by the inverse DCT circuit 92. Then, the process proceeds to step S30.
- In step S30, the motion compensation circuit 50 determines whether the macroblock input from one of the slice decoders 47 to 49 uses motion compensation.
- If so, in step S31, the motion compensation circuit 50 performs motion compensation on the input macroblock. That is, according to the prediction vector output from the corresponding one of the slice decoders 47 to 49, the motion compensation circuit 50 reads the reference pixel data from the luminance buffer 71 via the luminance buffer control circuit 51 if the pixel data is luminance data, and from the color difference buffer 72 via the color difference buffer control circuit 52 if the pixel data is color difference data. Then, the motion compensation circuit 50 adds the read reference pixel data to the pixel data supplied from the slice decoder, thereby performing motion compensation.
- The motion compensation circuit 50 writes the motion-compensated pixel data to the luminance buffer 71 via the luminance buffer control circuit 51 if it is luminance data, and to the color difference buffer 72 via the color difference buffer control circuit 52 if it is color difference data, to prepare for display output and in case this pixel data is used as reference data for another pixel. Then, the process returns to step S22, and the subsequent processing is repeated.
- If it is determined in step S30 that the macroblock does not use motion compensation, in step S32 the motion compensation circuit 50 writes the pixel data to the luminance buffer 71 via the luminance buffer control circuit 51 if it is luminance data, and to the color difference buffer 72 via the color difference buffer control circuit 52 if it is color difference data, to prepare for display output and in case the pixel data is used as reference data for another image. Then, the process returns to step S22, and the subsequent processing is repeated.
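The motion compensation of step S31 amounts to adding a reference block, located by the prediction vector, to the decoded residual pixels. A minimal numeric sketch follows, with plain nested lists standing in for the luminance/color difference buffers (the function name, block layout, and half-pel-free addressing are simplifications invented for illustration):

```python
# Minimal sketch of the motion compensation step: the reference block,
# addressed by the prediction vector (mv_x, mv_y), is added to the decoded
# (residual) pixel data, giving the reconstructed block that is written
# back to the frame buffer.
def motion_compensate(residual, reference, mv_x, mv_y, x, y, size=8):
    """Return the reconstructed size x size block at position (x, y)."""
    out = []
    for j in range(size):
        row = []
        for i in range(size):
            ref = reference[y + mv_y + j][x + mv_x + i]   # read reference pixel
            row.append(residual[j][i] + ref)              # add to residual
        out.append(row)
    return out
```

For example, a 2x2 residual of all 1s added to a reference region of all 10s, displaced by a (1, 1) vector, reconstructs a block of all 11s.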
- FIG. 19 is a diagram showing a specific example of the arbitration processing of the decoder by the motion compensation circuit 50 described with reference to FIG.
- Then, the process returns to step S22, and step S22 is repeated.
- the REQ signal is output from the slice decoder 47.
- In this way, the motion compensation circuit 50 performs motion compensation while arbitrating among the slice decoders 47 to 49.
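The arbitration of FIG. 18 can be summarized in a short sketch. This is an illustrative model, not the circuit: the function name and callback API are invented, and the three registers are modeled as a dict keyed "A"/"B"/"C" after Reg_REQ_A to Reg_REQ_C.

```python
# Sketch of the arbitration of FIG. 18: when all three request registers
# are 0, they are reloaded from the current REQ signals (step S22), and the
# latched requests are then serviced one by one in a fixed order, so a
# second request from one decoder cannot overtake the first requests of
# the others.
def arbitrate(req_signals, service):
    """req_signals: dict name -> REQ level (0/1); service: callback that
    performs the ACK handshake and motion compensation for one decoder."""
    regs = {name: 0 for name in req_signals}
    if all(v == 0 for v in regs.values()):      # step S22: all registers 0?
        regs.update(req_signals)                # latch REQ into Reg_REQ_A..C
    for name in regs:                           # fixed service order A -> B -> C
        if regs[name] == 1:
            service(name)                       # accept request: ACK, run MC
            regs[name] = 0                      # clear after completion
    return regs
```

With requests pending from decoders A and C, A is serviced first, then C, and all registers return to 0 before any new request is latched.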
- As described above, by providing the start code buffer 62, the picture decoder 45 to the slice decoders 47 to 49 can access the stream buffer 61 without waiting for the end of one another's operations.
- The slice decoders 47 to 49 can operate simultaneously under the processing of the slice decoder control circuit 46. Further, the motion compensation circuit 50 can appropriately select one slice decoder, access the separated luminance buffer 71 and color difference buffer 72, and perform motion compensation. Therefore, in the MPEG video decoder of FIG. 15, the decoding processing performance and the buffer access performance are improved, and decoding of 4:2:2P@HL can be performed.
- FIG. 20 is a block diagram showing a configuration of a playback device including the MPEG video decoder of FIG. 15. Parts corresponding to those in FIG. 15 are denoted by the same reference numerals, and their description will be omitted as appropriate.
- The servo circuit 111 drives the hard disk 112 based on the control of the controller 34, and the MPEG stream read by a data reading unit (not shown) is input to the reproduction circuit 121 of the IC 31.
- The reproduction circuit 121 is a circuit including the stream input circuit 41 to the clock generation circuit 44 described with reference to FIG. 15. During normal playback, the input MPEG stream is output to the MPEG video decoder 122 as a playback stream in the order in which it was input.
- During reverse playback, the input MPEG encoded stream is rearranged using the stream buffer 61 into an order suitable for reverse playback, and is then output to the MPEG video decoder 122 as a playback stream.
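The rearrangement for reverse playback can be sketched roughly as follows. This is a simplified illustration assuming, as the summary of the invention above suggests, that the reference frames (I and P pictures) of a GOP are emitted first and the B pictures follow in reverse order; the tuple representation of frames is invented for the sketch.

```python
# Hypothetical sketch of reordering one GOP for reverse playback:
# reference frames (I and P pictures) come first, so they can be decoded
# and held as reference images, followed by the B pictures in reverse
# order. Frames are (type, number) tuples.
def reorder_for_reverse(gop):
    refs = [f for f in gop if f[0] in ("I", "P")]   # I/P in coding order
    bs = [f for f in gop if f[0] == "B"]            # B pictures
    return refs + bs[::-1]                          # B pictures reversed
```

For a GOP in coding order I2, B0, B1, P5, B3, B4, the sketch emits I2, P5 first and then B4, B3, B1, B0.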
- The MPEG video decoder 122 is a circuit including the picture decoder 45 to the display output circuit 53 described with reference to FIG. 15; it reads the decoded frames stored in the video buffer 33 as reference images, performs motion compensation, decodes each picture (frame) of the input playback stream by the method described above, and stores it in the video buffer 33.
- the frames stored in the video buffer 33 are sequentially read out by the processing of the display output circuit 53, output to a display unit or a display device (not shown), and displayed.
- Even if the recording/reproducing apparatus has a configuration different from that shown in FIG. 20 (for example, the function of holding the encoded stream, as with the stream buffer 61, and the frame reordering function, as with the reproduction circuit 121, may be provided in the MPEG video decoder 122), the input MPEG encoded stream is decoded and output by basically the same processing.
- various storage media such as an optical disk, a magnetic disk, a magneto-optical disk, a semiconductor memory, and a magnetic tape can be used as the storage medium for storing the encoded stream.
- FIG. 21 is a diagram showing the structure of a picture of an MPEG video signal that is input to an encoder (not shown) and encoded.
- Frame I12 is an intra-coded frame (I picture), which is encoded without referring to other images. Such a frame provides an access point into the encoded sequence, serving as a starting point for decoding, but its compression ratio is not very high.
- Frames P15, P18, P1b, and P1e are forward predictive coded frames (P pictures), encoded more efficiently than I pictures by motion-compensated prediction from a past I picture or P picture. A P picture is itself also used as a reference for prediction.
- Frames B13, B14, and so on through B1d are bidirectionally predictive coded frames (B pictures), compressed more efficiently than I and P pictures, but requiring both past and future reference images. B pictures are not used as prediction references.
- Fig. 22 shows an example of an MPEG encoded stream generated by inter-frame prediction (MPEG encoding), executed by an encoder (not shown), from the MPEG video signal described with reference to Fig. 21.
- the encoded stream is divided into GOPs (Groups of Pictures).
- the frames B10 and B11 temporarily stored in the buffer are encoded using the frame I12 as a reference image.
- B pictures are normally coded with reference to both past and future reference images, but when no image is available for forward reference, as for frame B10 and frame B11, the closed GOP flag is set and encoding is performed using only backward prediction, without forward prediction.
- the frames B13 and B14, input while frames B10 and B11 are being encoded, are stored in the video buffer, and the next input frame P15 is encoded with reference to frame I12 as the forward prediction image.
- frames B13 and B14 are then read from the video buffer and encoded, each referring to frame I12 as a forward prediction image and frame P15 as a backward prediction image.
- the frames B16 and B17 are stored in the video buffer.
- the P-picture is encoded by referring to the previously encoded I-picture or P-picture as the forward prediction image.
- the B picture is temporarily stored in the video buffer and then coded by referring to the previously coded I or P picture as a forward prediction image or a backward prediction image.
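The reordering described above (buffering each B picture until the reference picture that follows it in display order has been coded) can be sketched as below. This is an illustrative sketch of the display-order to coding-order rearrangement, not the patent's circuit; the frame names follow the hex numbering of Fig. 22.

```python
def coding_order(display_order):
    """Reorder frames from display order to MPEG coding order.

    Each reference frame (I or P picture) must be coded before the
    B pictures that precede it in display order, because those B
    pictures use it as a backward reference.
    """
    out, pending_b = [], []
    for frame in display_order:
        if frame[0] in ("I", "P"):   # reference (anchor) picture
            out.append(frame)        # emit the anchor first...
            out.extend(pending_b)    # ...then the buffered B pictures
            pending_b = []
        else:                        # B picture: hold in the buffer
            pending_b.append(frame)
    return out + pending_b

# Display order of the start of GOP 1 in Fig. 22 (frame numbers in hex)
display = ["B10", "B11", "I12", "B13", "B14", "P15", "B16", "B17", "P18"]
print(coding_order(display))
# ['I12', 'B10', 'B11', 'P15', 'B13', 'B14', 'P18', 'B16', 'B17']
```

The output matches the coding order walked through above: I12 first, then B10 and B11, then P15 followed by B13 and B14, and so on.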
- in this manner, an image stream is encoded over a plurality of GOPs, and an encoded stream is generated.
- on the hard disk 112, an MPEG encoded stream encoded by the above-described method is recorded.
- the DCT coefficient matrix obtained by applying the DCT transform to a normal image has the characteristic that its low-frequency components are large and its high-frequency components are small.
- information is compressed using quantization (dividing each DCT coefficient by a certain quantization step and rounding off the fractional part).
- the quantization steps are defined as an 8 x 8 quantization table (quantization matrix), with small values set for the low-frequency components and large values for the high-frequency components.
- the quantization ID corresponding to the quantization matrix is added to the compressed data and passed to the decoding side, that is, to the MPEG video decoder 122 in FIG. 20.
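The quantize/dequantize step described above can be sketched as follows. The matrix values and block size are illustrative only, not MPEG-2's default matrices, and Python's `round()` (banker's rounding) stands in for the codec's rounding rule.

```python
def quantize(dct_block, quant_matrix):
    """Divide each DCT coefficient by its quantization step and round.

    Low-frequency entries use small divisors (fine steps); high-frequency
    entries use large divisors (coarse steps), discarding detail the eye
    notices least.
    """
    return [[round(c / q) for c, q in zip(crow, qrow)]
            for crow, qrow in zip(dct_block, quant_matrix)]

def dequantize(quant_block, quant_matrix):
    # The decoder only recovers multiples of the step size; the rounding
    # error lost here is the lossy part of the compression.
    return [[v * q for v, q in zip(vrow, qrow)]
            for vrow, qrow in zip(quant_block, quant_matrix)]

# A tiny 2x2 example with illustrative values:
dct = [[600, 40], [35, 8]]      # large low-frequency, small high-frequency
qm  = [[8, 16], [16, 32]]       # fine steps top-left, coarse bottom-right
print(quantize(dct, qm))                   # [[75, 2], [2, 0]]
print(dequantize(quantize(dct, qm), qm))   # [[600, 32], [32, 0]]
```

Note how the high-frequency coefficient 8 quantizes to 0 and is lost entirely, while the low-frequency 600 is recovered exactly.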
- FIG. 23 shows an example of MPEG decoding using inter-frame prediction.
- for forward playback, the MPEG video stream input from the hard disk 112 to the playback circuit 121 is output by the playback circuit 121 to the MPEG video decoder 122 as a playback stream in the same order as the input stream.
- the reproduction stream is decoded according to the procedure described with reference to FIGS. 15 to 19, and is stored in the video buffer 33.
- the buffer area in the video buffer 33 in which the frame I12 decoded by the MPEG video decoder 122 is stored is referred to as buffer 1.
- frames B10 and B11, input next to the MPEG video decoder 122, are B pictures; since the closed GOP flag is set, each is decoded referring only to frame I12 in buffer 1 of the video buffer 33 as the backward reference image, and is stored in the video buffer 33.
- the buffer area where the decoded frame B10 is stored is referred to as buffer 3.
- the frame B10 is read from buffer 3 of the video buffer 33 by the processing of the display output circuit 53, output to a display unit (not shown), and displayed. Then, after the decoded frame B11 is stored in buffer 3 of the video buffer 33 (that is, overwritten on buffer 3), it too is read out, output to the display unit, and displayed.
- frame I12 is then read from buffer 1, output to the display unit, and displayed; at that timing, the next frame P15 is decoded using frame I12 stored in buffer 1 of the video buffer 33 as a reference image, and is stored in buffer 2 of the video buffer 33.
- if the closed GOP flag is not set for frames B10 and B11, they are not decoded, because there is no image that can be referenced in the forward direction. In that case, frame I12 is the first frame output from the display output circuit 53 and displayed.
- the next input frame B13 is decoded using frame I12 stored in buffer 1 of the video buffer 33 as a forward reference image and frame P15 stored in buffer 2 as a backward reference image, and is stored in buffer 3. Then, by the processing of the display output circuit 53, frame B13 is read from buffer 3 of the video buffer 33 and, during its output and display, the next input frame B14 is decoded, likewise referring to frame I12 of buffer 1 as a forward reference image and frame P15 of buffer 2 as a backward reference image, and is stored in buffer 3. Then, by the processing of the display output circuit 53, frame B14 is read from buffer 3 of the video buffer 33, output, and displayed.
- the next input frame P18 is decoded using frame P15 stored in buffer 2 as a forward reference image.
- since frame I12 stored in buffer 1 is not referred to thereafter, the decoded frame P18 is stored in buffer 1 of the video buffer 33. Then, at the timing when frame P18 is accumulated in buffer 1, frame P15 is read from buffer 2, output, and displayed.
- the frames of GOP 1 are sequentially decoded, accumulated in the buffers 1 to 3, and sequentially read out and displayed.
- frame I22, which is an I picture, does not require a reference image at the time of decoding, and is therefore decoded as it is and stored in buffer 2.
- at that timing, frame P1e of GOP 1 is read out, output, and displayed.
- the subsequently input frames B20 and B21 of GOP 2 are decoded using frame P1e of buffer 1 as a forward reference image and frame I22 of buffer 2 as a backward reference image, stored sequentially in buffer 3, read out, and displayed.
- the B pictures at the head of a GOP are thus decoded using a P picture of the previous GOP as a forward reference image.
- each frame of GOP 3 and beyond is likewise sequentially decoded, stored in buffers 1 to 3, and sequentially read out and displayed.
- the MPEG video decoder 122 executes decoding processing with reference to the quantization ID.
- the reproduction circuit 121 shown in FIG. 20 can change the order of the GOP frames input to the stream buffer 61, based on the start codes recorded in the start code buffer 62, to generate a playback stream, and the MPEG video decoder 122 can decode all 15 frames.
- for reverse playback, however, it is not enough for the playback circuit 121 simply to reverse the order of the GOP frames input to the stream buffer 61, using the start codes recorded in the start code buffer 62, to generate a playback stream.
- the first frame to be output and displayed in reverse playback must be frame P2e.
- in order to decode frame P2e, it is necessary to refer to frame P2b as a forward reference image; to decode frame P2b, frame P28 is required as a forward reference image; and a forward reference image is likewise required to decode frame P28. In the end, to decode frame P2e and output and display it, all the I and P pictures of the GOP must be decoded.
- if reverse playback were performed by decoding and holding every frame in this way, the video buffer 33 would require a buffer area of 1 GOP (15 frames); moreover, even with a buffer area for 15 frames, it would not be possible to reverse-play all the frames of 1 GOP continuously.
- here, the GOP contains a total of 5 frames of I pictures or P pictures.
- by storing at least 2 GOPs of frames in the stream buffer 61, determining the order of the frames in the playback stream generated by the playback circuit 121 based on the decoding order for reverse playback by the MPEG video decoder 122, and providing in the video buffer 33 at least "the total number of I pictures and P pictures included in 1 GOP + 2" frame areas, the portion that crosses a GOP boundary can also be stored, and all frames can be played back in reverse continuously.
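The buffer-count rule quoted above ("total number of I and P pictures in 1 GOP + 2") can be sketched as below. This is an illustrative reading of the rule: one extra area for the B picture currently decoded for display, and one for an anchor of the adjacent GOP, matching the seven buffers used in the walkthrough of Fig. 24.

```python
def reverse_playback_buffers(gop):
    """Minimum decoded-frame buffer areas for continuous reverse playback.

    gop -- the picture-type sequence of one GOP, e.g. "IBBPBBPBBPBBPBB".
    The video buffer must hold every I and P picture of the GOP being
    reversed, plus 2 extra areas (B-picture work area and an anchor of
    the adjacent GOP).
    """
    anchors = sum(1 for p in gop if p in "IP")
    return anchors + 2

# The 15-frame GOP of the example: 1 I picture + 4 P pictures
print(reverse_playback_buffers("IBBPBBPBBPBBPBB"))  # 7
```

Seven areas, versus fifteen for naively holding the whole GOP, which is why the scheme is attractive for 4:2:2P@HL frame sizes.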
- FIG. 24 shows an operation example of the MPEG reverse reproduction decoder.
- the controller 34 controls the servo circuit 111 so that the MPEG encoded stream of GOP 3 and then GOP 2 is output from the hard disk 112 to the reproduction circuit 121.
- the reproduction circuit 121 stores the MPEG encoded stream of GOP 3 and then GOP 2 in the stream buffer 61.
- the reproduction circuit 121 reads the first frame I32 of GOP 3 from the stream buffer 61 and outputs it to the MPEG video decoder 122 as the first frame of the reproduction stream. Since frame I32 is an I picture, it does not require a reference image for decoding; it is therefore decoded by the MPEG video decoder 122 and stored in the video buffer 33. The area of the video buffer 33 where the decoded frame I32 is stored is referred to as buffer 1.
- the data of each frame is decoded based on the header described with reference to FIG. 2 and the parameters described in the extension data.
- in the MPEG video decoder 122, these parameters are decoded by the picture decoder 45, supplied to the slice decoder control circuit 46, and used for decoding.
- when GOP 1 is decoded, decoding is performed using the upper layer parameters described in its sequence_header, sequence_extension, and GOP_header (for example, the quantization matrix described above).
- likewise, when GOP 2 is decoded, decoding is performed using the upper layer parameters described in the sequence_header, sequence_extension, and GOP_header of GOP 2, and when GOP 3 is decoded, decoding is performed using the upper layer parameters described in the sequence_header, sequence_extension, and GOP_header of GOP 3.
- the MPEG video decoder 122 supplies the upper layer parameters to the controller 34 when the I picture of each GOP is first decoded.
- the controller 34 holds the supplied upper layer parameters in a memory (not shown) provided therein.
- the controller 34 monitors the decoding process executed in the MPEG video decoder 122, reads out the upper layer parameters corresponding to the frame being decoded from its internal memory, and supplies them to the MPEG video decoder 122 so that appropriate decoding processing can be performed.
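The bookkeeping the controller 34 performs above can be sketched as a small per-GOP parameter cache. The class and field names here are hypothetical illustrations of the described behaviour, not the patent's implementation: parameters are stored when each GOP's I picture is first decoded, then looked up per frame, so that interleaved decoding of two GOPs during reverse playback always uses the right GOP's parameters.

```python
class UpperLayerParameterCache:
    """Hypothetical cache of upper layer coding parameters, keyed by GOP."""

    def __init__(self):
        self._params = {}                 # gop_id -> parameter dict

    def store(self, gop_id, params):
        # Called when the I picture of gop_id is first decoded.
        self._params[gop_id] = params

    def params_for(self, gop_id):
        # Called per frame, so the decoder is configured with the
        # parameters of the frame's own GOP.
        return self._params[gop_id]

cache = UpperLayerParameterCache()
cache.store(3, {"quant_matrix_id": 0})    # illustrative parameter values
cache.store(2, {"quant_matrix_id": 1})

# During reverse playback, frames of GOP 3 and GOP 2 are interleaved,
# yet each decode still fetches its own GOP's parameters:
assert cache.params_for(3) == {"quant_matrix_id": 0}
assert cache.params_for(2) == {"quant_matrix_id": 1}
```

The same structure also covers the variations mentioned below (external memory, or a memory inside the decoder) since only the cache's location changes.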
- the number above each frame number of the playback stream is the quantization ID, and each frame of the playback stream is decoded based on its quantization ID, as in the forward decoding described with reference to Fig. 23.
- in the above description, the controller 34 has an internal memory and holds the upper layer coding parameters there.
- alternatively, a memory may be connected externally to the controller 34; the controller 34 may hold the upper layer coding parameters in the external memory rather than in an internal memory, reading them out as necessary and supplying them to the MPEG video decoder 122.
- the MPEG video decoder 122 may itself be provided with a memory for holding the upper layer coding parameters of each GOP.
- if the encoding conditions such as the upper layer encoding parameters are known in advance and do not change from GOP to GOP, the encoding conditions may be set in the MPEG video decoder 122 beforehand.
- instead of the controller 34 reading out the upper layer coding parameters for each GOP and setting them in the MPEG video decoder 122 for each frame, the encoding parameters may be set in the MPEG video decoder 122 only once at the start of operation.
- the reproduction circuit 121 reads out frame P35 from the stream buffer 61 and outputs it to the MPEG video decoder 122 as the next frame of the reproduction stream.
- frame P35 is decoded using frame I32 recorded in buffer 1 as a forward reference image, and is stored in the video buffer 33.
- the area where the decoded frame P35 is accumulated is referred to as buffer 2.
- the reproduction circuit 121 then sequentially reads out frame P38, frame P3b, and frame P3e from the stream buffer 61 and outputs them as the reproduction stream. These P pictures are each decoded by the MPEG video decoder 122 using the previously decoded P picture as a forward reference image and stored in the video buffer 33. The areas of the video buffer 33 where these decoded P picture frames are accumulated are referred to as buffers 3 to 5.
- the reproduction circuit 121 reads out frame I22 of GOP 2 from the stream buffer 61 and outputs it as the reproduction stream.
- frame I22, which is an I picture, is decoded without requiring a reference image, and is stored in the video buffer 33.
- the area where the decoded frame I22 is stored is referred to as buffer 6.
- at this timing, frame P3e of GOP 3 is read out from buffer 5, output, and displayed as the first image of the reverse playback.
- the reproduction circuit 121 reads frame B3d of GOP 3, that is, the first of the B pictures of GOP 3 to be reverse-played, from the stream buffer 61 and outputs it as the reproduction stream.
- frame B3d is decoded using frame P3b of buffer 4 as a forward reference image and frame P3e of buffer 5 as a backward reference image, and is stored in the video buffer 33.
- the area where the decoded frame B3d is stored is referred to as buffer 7.
- the frame B3d stored in buffer 7 is output and displayed after frame/field conversion and timing adjustment to the output video synchronization timing are performed.
- the reproduction circuit 121 reads out frame B3c of GOP 3 from the stream buffer 61 and outputs it to the MPEG video decoder 122.
- frame B3c is, like frame B3d, decoded using frame P3b of buffer 4 as the forward reference image and frame P3e of buffer 5 as the backward reference image.
- the decoded frame B3c is stored in buffer 7 in place of frame B3d (that is, overwrites buffer 7), and is output and displayed after frame/field conversion and timing adjustment to the output video synchronization timing.
- the reproduction circuit 121 reads out frame P25 of GOP 2 from the stream buffer 61 and outputs it to the MPEG video decoder 122.
- frame P25 of GOP 2 is decoded using frame I22 of buffer 6 as a forward reference image. Since frame P3e stored in buffer 5 is no longer used as a reference image, the decoded frame P25 is stored in buffer 5 in place of frame P3e. Then, at the same timing as frame P25 is accumulated in buffer 5, frame P3b of buffer 4 is read out and displayed.
- the reproduction circuit 121 reads out frame B3a of GOP 3 from the stream buffer 61 and outputs it as the reproduction stream.
- frame B3a is decoded using frame P38 of buffer 3 as a forward reference image and frame P3b of buffer 4 as a backward reference image, and is stored in buffer 7 of the video buffer 33.
- the frame B3a stored in buffer 7 is output and displayed after frame/field conversion and timing adjustment to the output video synchronization timing.
- the reproduction circuit 121 reads frame B39 of GOP 3 from the stream buffer 61 and outputs it to the MPEG video decoder 122.
- frame B39, like frame B3a, is decoded using frame P38 of buffer 3 as the forward reference image and frame P3b of buffer 4 as the backward reference image, stored in buffer 7 in place of frame B3a, and output and displayed after frame/field conversion and timing adjustment to the output video synchronization timing.
- the reproduction circuit 121 reads out frame P28 of GOP 2 from the stream buffer 61 and outputs it to the MPEG video decoder 122.
- frame P28 of GOP 2 is decoded using frame P25 of buffer 5 as a forward reference image. Since frame P3b stored in buffer 4 is no longer used as a reference image, the decoded frame P28 is stored in buffer 4 in place of frame P3b. Then, at the same timing as frame P28 is accumulated in buffer 4, frame P38 of buffer 3 is read out and displayed.
- in this way, each time an I picture or P picture of GOP 2 is decoded and stored in the video buffer 33, an I picture or P picture of GOP 3 is read out from the video buffer 33 and displayed at the same timing.
- the remaining B pictures of GOP 3 and the remaining P pictures of GOP 2 are decoded in the order B37, B36, P2b, B34, B33, P2e.
- the decoded B pictures are stored in buffer 7, sequentially read out, and displayed.
- the decoded P pictures of GOP 2 are sequentially stored in whichever of buffers 1 to 6 holds a frame whose referencing has been completed, replacing the P pictures of GOP 3 already stored there, and each P picture of GOP 3 is read out and output between the B pictures at the timing that matches the reverse playback order.
- the reproduction circuit 121 reads out frame B31 of GOP 3, then frame B30, from the stream buffer 61 and outputs them to the MPEG video decoder 122.
- since frame P2e, the forward reference image necessary for decoding frames B31 and B30, is stored in buffer 2, and the backward reference image, frame I32, is stored in buffer 1, the first two frames of GOP 3, that is, the last frames to be displayed during reverse playback, can also be decoded.
- the decoded frames B31 and B30 are sequentially stored in buffer 7, and are output and displayed after frame/field conversion and timing adjustment to the output video synchronization timing.
- the controller 34 controls the servo circuit 111 so that GOP 1 is read out from the hard disk 112 and supplied to the reproduction circuit 121.
- the playback circuit 121 executes the predetermined processing, extracts the start codes of GOP 1 and records them in the start code buffer 62, and also supplies the encoded stream of GOP 1 to the stream buffer 61 for storage.
- the reproduction circuit 121 reads out frame I12 of GOP 1 from the stream buffer 61 and outputs it to the MPEG video decoder 122 as the reproduction stream. Since frame I12 is an I picture, it is decoded by the MPEG video decoder 122 without referring to another image, and is stored in buffer 1 in place of frame I32, which is not referred to in the subsequent processing. At this timing, frame P2e is read from buffer 2 and output, and the reverse playback display of GOP 2 is started.
- the reproduction circuit 121 reads frame B2d of GOP 2, that is, the first B picture of GOP 2 to be reverse-played, from the stream buffer 61 and outputs it as the reproduction stream.
- frame B2d is decoded using frame P2b of buffer 3 as a forward reference image and frame P2e of buffer 2 as a backward reference image, and is stored in the video buffer 33.
- the decoded frame B2d is stored in buffer 7, and is output and displayed after frame/field conversion and timing adjustment to the output video synchronization timing.
- similarly, the remaining B pictures of GOP 2 and the remaining P pictures of GOP 1 are decoded in the order B2c, P15, B2a, B29, P18, B27, B26, P1b, B24, B23, P1e, B21, B20, stored in whichever of buffers 1 to 7 holds a frame whose referencing has been completed, and read out and output in the reverse playback order. Finally, although not shown, the remaining B pictures of GOP 1 are decoded, sequentially stored in buffer 7, read out in the reverse playback order, and output.
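The per-GOP decode schedule that the walkthrough above follows can be sketched as below: all I and P pictures of a GOP are decoded first in forward order (each P needs the previous anchor), then the B pictures in reverse display order. This is an illustrative simplification; as described above, the actual device interleaves the anchor decodes of the next-older GOP between these B-picture decodes. Frame names follow Fig. 24's hex numbering.

```python
def reverse_decode_order(gop_display):
    """Decode schedule for reverse playback of one GOP (a sketch).

    Anchors (I/P) must be decoded in forward order because each P picture
    references the previous anchor.  B pictures are then decoded in
    reverse display order, each using two already-buffered anchors.
    """
    anchors = [f for f in gop_display if f[0] in "IP"]
    bs = [f for f in gop_display if f[0] == "B"]
    return anchors + bs[::-1]

# Display order of GOP 3 in the example of Fig. 24
gop3 = ["B30", "B31", "I32", "B33", "B34", "P35", "B36", "B37",
        "P38", "B39", "B3a", "P3b", "B3c", "B3d", "P3e"]
print(reverse_decode_order(gop3))
# ['I32', 'P35', 'P38', 'P3b', 'P3e',
#  'B3d', 'B3c', 'B3a', 'B39', 'B37', 'B36', 'B34', 'B33', 'B31', 'B30']
```

This reproduces the GOP 3 decode order of the walkthrough (I32, P35, P38, P3b, P3e, then B3d, B3c, ...), with the interleaved GOP 2 anchor decodes omitted.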
- in the above description, reverse playback was performed at the same speed as normal playback; however, if the playback circuit 121 outputs the playback stream to the MPEG video decoder 122 at 1/3 the speed of normal playback, the MPEG video decoder 122 decodes only one frame in the processing time normally used for three frames, and by having the display unit or display device (not shown) display the same frame for three display times, 1/3-speed forward playback and reverse playback can be performed by the same processing.
- if the display output circuit 53 repeatedly outputs the same frame, so-called still playback is also possible.
- for an arbitrary n, 1/n-speed forward playback and reverse playback can likewise be performed by similar processing.
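The 1/n-speed and n-times-speed behaviour described in this section reduces to two simple display-scheduling rules, sketched here with hypothetical frame labels:

```python
def slow_playback_schedule(frames, n):
    """1/n-speed playback: decode one frame per n display periods and
    repeat it on screen n times (n = 1 gives normal speed; repeating a
    single frame indefinitely gives still playback)."""
    schedule = []
    for frame in frames:
        schedule.extend([frame] * n)
    return schedule

def fast_playback_schedule(frames, n):
    """n-times-speed playback: the decoder runs n times faster than the
    display, so only every n-th decoded frame is shown."""
    return frames[::n]

print(slow_playback_schedule(["F0", "F1"], 3))
# ['F0', 'F0', 'F0', 'F1', 'F1', 'F1']   -- 1/3-speed playback
print(fast_playback_schedule([f"F{i}" for i in range(12)], 6))
# ['F0', 'F6']                           -- 6x playback
```

Because both rules operate on a playback stream whose order the playback circuit has already fixed, the same scheduling works identically for forward and reverse playback, which is the point made in the surrounding text.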
- thus, at any speed from 1x reverse playback through 1/n-speed reverse playback, still playback, and 1/n-speed forward playback to 1x forward playback, smooth trick playback is possible.
- since the MPEG video decoder 122 is a decoder compatible with MPEG-2 4:2:2P@HL, it has the ability to decode an MPEG-2 MP@ML encoded stream at 6x speed. Therefore, if a playback stream generated from an MPEG-2 MP@ML encoded stream is output to the MPEG video decoder 122 at six times the normal playback speed, and one frame out of every six is displayed on the display unit or display device (not shown), 6x forward playback and reverse playback can be performed by the same processing.
- for an MPEG-2 MP@ML encoded stream, smooth trick playback is therefore possible at any speed from 6x reverse playback, 1x reverse playback, 1/n-speed reverse playback, still playback, and 1/n-speed forward playback to 1x forward playback and 6x forward playback.
- similarly, the playback device of the present invention can perform Nx forward playback and reverse playback by the same processing; from Nx reverse playback, 1x reverse playback, 1/n-speed reverse playback, still playback, 1/n-speed forward playback, and 1x forward playback to Nx forward playback, smooth trick playback becomes possible at any speed.
- the series of processes described above can also be executed by software.
- when the series of processes is executed by software, the programs constituting the software are installed from a recording medium into a computer built into dedicated hardware, or into, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
- this recording medium is a package medium, distributed separately from the apparatus body in order to provide the program to the user, on which the program is recorded: a magnetic disk 101 (including a floppy disk),
- an optical disc 102 (including CD-ROM (Compact Disc Read Only Memory) and DVD (Digital Versatile Disc)),
- a magneto-optical disc 103 (including MD (Mini Disc)),
- or a semiconductor memory 104.
- the steps describing the program recorded on the recording medium include not only processes performed in chronological order in the order described, but also processes executed in parallel or individually rather than necessarily in chronological order.
- as described above, an encoded stream is decoded with the decoding processing performed in parallel, so that a video decoder compatible with 4:2:2P@HL that can operate in real time with a feasible circuit scale can be realized.
- likewise, an encoded stream is decoded by a plurality of slice decoders, with the decoding processing performed by the plurality of slice decoders in parallel, so that a video decoder compatible with 4:2:2P@HL that can operate in real time with an achievable circuit scale can be realized.
- furthermore, a source encoded stream is decoded for each slice constituting a picture by a plurality of slice decoders, and each slice is assigned to whichever slice decoder makes the decoding processing fastest, regardless of the order of the slices included in the picture; it is therefore possible to realize a video decoder supporting 4:2:2P@HL that can operate in real time with a feasible circuit scale.
- in addition, a source encoded stream is decoded for each slice constituting a picture by a plurality of slice decoders, the decoding status of the plurality of slice decoders is monitored, and an appropriate slice is assigned to each of the plurality of slice decoders regardless of the order of the slices included in the picture; a video decoder for 4:2:2P@HL that can operate in real time with a practicable circuit scale can therefore be realized.
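The slice-dispatch idea summarized above can be sketched as a greedy scheduler that hands each slice to whichever slice decoder becomes free first, regardless of slice order within the picture. The per-slice costs and the heap-based bookkeeping are illustrative assumptions for the sketch, not the patent's slice decoder control circuit.

```python
import heapq

def assign_slices(slice_costs, n_decoders):
    """Dispatch slices to the earliest-free slice decoder (a sketch).

    slice_costs -- hypothetical decode time per slice (slices vary in
    cost because their coded size varies).
    Returns the (slice, decoder) assignment and the total finish time.
    """
    # Min-heap of (time_when_free, decoder_id)
    free_at = [(0, d) for d in range(n_decoders)]
    heapq.heapify(free_at)
    assignment = []
    for i, cost in enumerate(slice_costs):
        t, d = heapq.heappop(free_at)     # decoder that frees up first
        assignment.append((i, d))
        heapq.heappush(free_at, (t + cost, d))
    finish = max(t for t, _ in free_at)
    return assignment, finish

# Three slice decoders, uneven slice costs: slices are not tied to a
# fixed decoder, so a long slice does not stall the other decoders.
assignment, total = assign_slices([5, 1, 1, 1, 5, 1], 3)
print(total)  # 6
```

With fixed round-robin assignment the long slices could pile onto one decoder; the earliest-free rule keeps all decoders busy, which is the load-balancing benefit the passage claims.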
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computing Systems (AREA)
- Theoretical Computer Science (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Television Signal Processing For Recording (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
- Color Television Systems (AREA)
Description
Claims
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE2001630180 DE60130180T2 (de) | 2000-04-14 | 2001-04-13 | Verfahren zur kodierung und dekodierung, aufzeichnungsmedium und programm |
CA2376871A CA2376871C (en) | 2000-04-14 | 2001-04-13 | Decoder and decoding method, recorded medium, and program |
EP01921837A EP1187489B1 (en) | 2000-04-14 | 2001-04-13 | Decoder and decoding method, recorded medium, and program |
US12/197,574 US20090010334A1 (en) | 2000-04-14 | 2008-08-25 | Decoding device, decoding method, recording medium, and program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2000112951 | 2000-04-14 | ||
JP2000-112951 | 2000-04-14 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/197,574 Continuation US20090010334A1 (en) | 2000-04-14 | 2008-08-25 | Decoding device, decoding method, recording medium, and program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2001080567A1 true WO2001080567A1 (en) | 2001-10-25 |
Family
ID=18625011
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2001/003204 WO2001080567A1 (en) | 2000-04-14 | 2001-04-13 | Decoder and decoding method, recorded medium, and program |
Country Status (8)
Country | Link |
---|---|
US (2) | US20020114388A1 (ja) |
EP (1) | EP1187489B1 (ja) |
JP (2) | JP5041626B2 (ja) |
KR (1) | KR100796085B1 (ja) |
CN (1) | CN1223196C (ja) |
CA (1) | CA2376871C (ja) |
DE (1) | DE60130180T2 (ja) |
WO (1) | WO2001080567A1 (ja) |
Families Citing this family (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7292772B2 (en) * | 2000-05-29 | 2007-11-06 | Sony Corporation | Method and apparatus for decoding and recording medium for a coded video stream |
GB2377840A (en) * | 2001-07-18 | 2003-01-22 | Sony Uk Ltd | Audio/video recording and multiplexing apparatus |
WO2003053066A1 (en) | 2001-12-17 | 2003-06-26 | Microsoft Corporation | Skip macroblock coding |
WO2003063500A1 (en) * | 2002-01-22 | 2003-07-31 | Microsoft Corporation | Methods and systems for encoding and decoding video data to enable random access and splicing |
US7003035B2 (en) | 2002-01-25 | 2006-02-21 | Microsoft Corporation | Video coding methods and apparatuses |
US20040001546A1 (en) | 2002-06-03 | 2004-01-01 | Alexandros Tourapis | Spatiotemporal prediction for bidirectionally predictive (B) pictures and motion vector prediction for multi-picture reference motion compensation |
US7280700B2 (en) * | 2002-07-05 | 2007-10-09 | Microsoft Corporation | Optimization techniques for data compression |
US7154952B2 (en) | 2002-07-19 | 2006-12-26 | Microsoft Corporation | Timestamp-independent motion vector prediction for predictive (P) and bidirectionally predictive (B) pictures |
FR2842979B1 (fr) * | 2002-07-24 | 2004-10-08 | Thomson Licensing Sa | Procede et dispositif de traitement de donnees numeriques |
US7606308B2 (en) * | 2003-09-07 | 2009-10-20 | Microsoft Corporation | Signaling macroblock mode information for macroblocks of interlaced forward-predicted fields |
US8064520B2 (en) | 2003-09-07 | 2011-11-22 | Microsoft Corporation | Advanced bi-directional predictive coding of interlaced video |
US7092576B2 (en) * | 2003-09-07 | 2006-08-15 | Microsoft Corporation | Bitplane coding for macroblock field/frame coding type information |
US7724827B2 (en) * | 2003-09-07 | 2010-05-25 | Microsoft Corporation | Multi-layer run level encoding and decoding |
US8094711B2 (en) * | 2003-09-17 | 2012-01-10 | Thomson Licensing | Adaptive reference picture generation |
KR101082233B1 (ko) * | 2004-01-20 | 2011-11-09 | 파나소닉 주식회사 | 화상 부호화 방법, 화상 복호화 방법, 화상 부호화 장치,화상 복호화 장치 및 그 프로그램 |
EP1714484A4 (en) | 2004-01-30 | 2009-03-18 | Panasonic Corp | BILDCODE AND DECODE PROCESSING; DEVICE AND PROGRAM THEREFOR |
CN1910922B (zh) | 2004-01-30 | 2013-04-17 | 松下电器产业株式会社 | 运动图片编码方法和运动图片解码方法 |
CN100571356C (zh) * | 2004-04-28 | 2009-12-16 | 松下电器产业株式会社 | 流产生装置,流产生方法,流再生装置,流再生方法 |
EP1751955B1 (en) | 2004-05-13 | 2009-03-25 | Qualcomm, Incorporated | Header compression of multimedia data transmitted over a wireless communication system |
CN1306822C (zh) * | 2004-07-30 | 2007-03-21 | 联合信源数字音视频技术(北京)有限公司 | 一种基于软硬件协同控制的视频解码器 |
US8155186B2 (en) * | 2004-08-11 | 2012-04-10 | Hitachi, Ltd. | Bit stream recording medium, video encoder, and video decoder |
JP4438059B2 (ja) * | 2004-08-24 | 2010-03-24 | キヤノン株式会社 | 画像再生装置及びその制御方法 |
JP4453518B2 (ja) * | 2004-10-29 | 2010-04-21 | ソニー株式会社 | 符号化及び復号装置並びに符号化及び復号方法 |
EP1843351B1 (en) * | 2005-01-28 | 2012-08-22 | Panasonic Corporation | Recording medium, program, and reproduction method |
US9077960B2 (en) | 2005-08-12 | 2015-07-07 | Microsoft Corporation | Non-zero coefficient block pattern coding |
JP4182442B2 (ja) * | 2006-04-27 | 2008-11-19 | ソニー株式会社 | 画像データの処理装置、画像データの処理方法、画像データの処理方法のプログラム及び画像データの処理方法のプログラムを記録した記録媒体 |
US20080253449A1 (en) * | 2007-04-13 | 2008-10-16 | Yoji Shimizu | Information apparatus and method |
US8254455B2 (en) | 2007-06-30 | 2012-08-28 | Microsoft Corporation | Computing collocated macroblock information for direct mode macroblocks |
US8731065B2 (en) | 2008-01-24 | 2014-05-20 | Nec Corporation | Dynamic image stream processing method and device, and dynamic image reproduction device and dynamic image distribution device using the same |
US7925774B2 (en) | 2008-05-30 | 2011-04-12 | Microsoft Corporation | Media streaming using an index file |
US8189666B2 (en) | 2009-02-02 | 2012-05-29 | Microsoft Corporation | Local picture identifier and computation of co-located information |
WO2010103855A1 (ja) * | 2009-03-13 | 2010-09-16 | パナソニック株式会社 | 音声復号装置及び音声復号方法 |
KR20110017303A (ko) * | 2009-08-13 | 2011-02-21 | 삼성전자주식회사 | 회전변환을 이용한 영상 부호화, 복호화 방법 및 장치 |
US9561730B2 (en) | 2010-04-08 | 2017-02-07 | Qualcomm Incorporated | Wireless power transmission in electric vehicles |
US10343535B2 (en) | 2010-04-08 | 2019-07-09 | Witricity Corporation | Wireless power antenna alignment adjustment system for vehicles |
US10244239B2 (en) * | 2010-12-28 | 2019-03-26 | Dolby Laboratories Licensing Corporation | Parameter set for picture segmentation |
WO2013108634A1 (ja) * | 2012-01-18 | 2013-07-25 | 株式会社Jvcケンウッド | 画像符号化装置、画像符号化方法及び画像符号化プログラム、並びに画像復号装置、画像復号方法及び画像復号プログラム |
JP2013168932A (ja) * | 2012-01-18 | 2013-08-29 | Jvc Kenwood Corp | 画像復号装置、画像復号方法及び画像復号プログラム |
CN105847827B (zh) * | 2012-01-20 | 2019-03-08 | 索尼公司 | 有效度图编码的复杂度降低 |
US10271069B2 (en) | 2016-08-31 | 2019-04-23 | Microsoft Technology Licensing, Llc | Selective use of start code emulation prevention |
JPWO2018142596A1 (ja) * | 2017-02-03 | 2019-02-07 | Mitsubishi Electric Corporation | Encoding device, encoding method, and encoding program |
WO2019065444A1 (ja) * | 2017-09-26 | 2019-04-04 | Panasonic Intellectual Property Corporation of America | Encoding device, decoding device, encoding method, and decoding method |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0723397A (ja) * | 1993-03-05 | 1995-01-24 | Sony Corp | Image signal decoding apparatus and image signal decoding method |
JPH0870457A (ja) * | 1994-08-29 | 1996-03-12 | Graphics Commun Lab:Kk | Image decoding apparatus using parallel processing |
JPH08130745A (ja) * | 1994-10-28 | 1996-05-21 | Matsushita Electric Ind Co Ltd | Decoding system, decoding apparatus, and decoding circuit |
JPH08205142A (ja) * | 1994-12-28 | 1996-08-09 | Daewoo Electron Co Ltd | Encoding/decoding apparatus for digital video signals |
JPH10145237A (ja) * | 1996-11-11 | 1998-05-29 | Toshiba Corp | Compressed data decoding apparatus |
JPH10257436A (ja) * | 1997-03-10 | 1998-09-25 | Atsushi Matsushita | Automatic hierarchical structuring method for moving images and browsing method using the same |
JP2000030047A (ja) * | 1998-07-15 | 2000-01-28 | Sony Corp | Encoding apparatus and method, and decoding apparatus and method |
Family Cites Families (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5379070A (en) * | 1992-10-02 | 1995-01-03 | Zoran Corporation | Parallel encoding/decoding of DCT compression/decompression algorithms |
US5381145A (en) * | 1993-02-10 | 1995-01-10 | Ricoh Corporation | Method and apparatus for parallel decoding and encoding of data |
AU5632394A (en) * | 1993-03-05 | 1994-09-08 | Sony Corporation | Apparatus and method for reproducing a prediction-encoded video signal |
CA2145361C (en) * | 1994-03-24 | 1999-09-07 | Martin William Sotheran | Buffer manager |
US5510842A (en) * | 1994-05-04 | 1996-04-23 | Matsushita Electric Corporation Of America | Parallel architecture for a high definition television video decoder having multiple independent frame memories |
JP3250588B2 (ja) * | 1994-07-12 | 2002-01-28 | Sony Corporation | Data reproducing apparatus |
US5532744A (en) * | 1994-08-22 | 1996-07-02 | Philips Electronics North America Corporation | Method and apparatus for decoding digital video using parallel processing |
JP3034173B2 (ja) * | 1994-10-31 | 2000-04-17 | Graphics Communication Laboratories | Image signal processing apparatus |
EP0720372A1 (en) * | 1994-12-30 | 1996-07-03 | Daewoo Electronics Co., Ltd | Apparatus for parallel encoding/decoding of digital video signals |
US5959690A (en) * | 1996-02-20 | 1999-09-28 | Sas Institute, Inc. | Method and apparatus for transitions and other special effects in digital motion video |
JPH1056641A (ja) * | 1996-08-09 | 1998-02-24 | Sharp Corp | MPEG decoder |
JPH10150636A (ja) * | 1996-11-19 | 1998-06-02 | Sony Corp | Video signal reproducing apparatus and video signal reproducing method |
JPH10178644A (ja) * | 1996-12-18 | 1998-06-30 | Sharp Corp | Moving picture decoding apparatus |
US6201927B1 (en) * | 1997-02-18 | 2001-03-13 | Mary Lafuze Comer | Trick play reproduction of MPEG encoded signals |
JPH10262215A (ja) * | 1997-03-19 | 1998-09-29 | Fujitsu Ltd | Moving picture decoding apparatus |
JP3662129B2 (ja) * | 1997-11-11 | 2005-06-22 | Matsushita Electric Industrial Co., Ltd. | Multimedia information editing apparatus |
JP3961654B2 (ja) * | 1997-12-22 | 2007-08-22 | Toshiba Corporation | Image data decoding apparatus and image data decoding method |
JP3093724B2 (ja) * | 1998-04-27 | 2000-10-03 | NEC IC Microcomputer Systems Co., Ltd. | Moving picture data reproducing apparatus and reverse reproduction method for moving picture data |
JPH11341489A (ja) * | 1998-05-25 | 1999-12-10 | Sony Corporation | Image decoding apparatus and method |
DE69922628T2 (de) * | 1998-06-05 | 2005-11-10 | Koninklijke Philips Electronics N.V. | Recording and reproduction of an information signal in/from a track on a record carrier |
-
2001
- 2001-04-13 US US10/018,588 patent/US20020114388A1/en not_active Abandoned
- 2001-04-13 DE DE2001630180 patent/DE60130180T2/de not_active Expired - Lifetime
- 2001-04-13 CA CA2376871A patent/CA2376871C/en not_active Expired - Fee Related
- 2001-04-13 JP JP2001114698A patent/JP5041626B2/ja not_active Expired - Fee Related
- 2001-04-13 WO PCT/JP2001/003204 patent/WO2001080567A1/ja active IP Right Grant
- 2001-04-13 CN CNB018009182A patent/CN1223196C/zh not_active Expired - Fee Related
- 2001-04-13 KR KR1020017016037A patent/KR100796085B1/ko not_active IP Right Cessation
- 2001-04-13 EP EP01921837A patent/EP1187489B1/en not_active Expired - Lifetime
-
2008
- 2008-08-25 US US12/197,574 patent/US20090010334A1/en not_active Abandoned
-
2011
- 2011-03-18 JP JP2011060652A patent/JP2011172243A/ja active Pending
Non-Patent Citations (1)
Title |
---|
See also references of EP1187489A4 * |
Also Published As
Publication number | Publication date |
---|---|
US20020114388A1 (en) | 2002-08-22 |
CA2376871A1 (en) | 2001-10-25 |
DE60130180T2 (de) | 2008-05-15 |
EP1187489B1 (en) | 2007-08-29 |
DE60130180D1 (de) | 2007-10-11 |
CA2376871C (en) | 2012-02-07 |
JP5041626B2 (ja) | 2012-10-03 |
CN1366776A (zh) | 2002-08-28 |
EP1187489A1 (en) | 2002-03-13 |
JP2011172243A (ja) | 2011-09-01 |
CN1223196C (zh) | 2005-10-12 |
KR20020026184A (ko) | 2002-04-06 |
JP2001359107A (ja) | 2001-12-26 |
EP1187489A4 (en) | 2005-12-14 |
US20090010334A1 (en) | 2009-01-08 |
KR100796085B1 (ko) | 2008-01-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP1187489B1 (en) | Decoder and decoding method, recorded medium, and program | |
US7292772B2 (en) | Method and apparatus for decoding and recording medium for a coded video stream | |
CN100508585C (zh) | Apparatus and method for controlling reverse playback of a digital video bitstream | |
US8260122B2 (en) | MPEG picture data recording apparatus, MPEG picture data recording method, MPEG picture data recording medium, MPEG picture data generating apparatus, MPEG picture data reproducing apparatus, and MPEG picture data reproducing method | |
WO1995002946A1 (en) | Decoding method and apparatus | |
JP3147792B2 (ja) | Video data decoding method and apparatus for high-speed playback | |
US20050141620A1 (en) | Decoding apparatus and decoding method | |
JP4906197B2 (ja) | Decoding device and method, and recording medium | |
JP3748234B2 (ja) | MPEG data recording method | |
JPH10336586A (ja) | Image processing apparatus and image processing method | |
JPH0898142A (ja) | Image reproducing apparatus | |
JP3748243B2 (ja) | MPEG data recording apparatus | |
JP3748245B2 (ja) | MPEG data recording apparatus | |
JP3748241B2 (ja) | MPEG data recording method | |
JP3748240B2 (ja) | MPEG data recording method | |
JP3748242B2 (ja) | MPEG data recording method | |
JP3748244B2 (ja) | MPEG data recording apparatus | |
JP2000123485A (ja) | Recording apparatus and method | |
TWI272849B (en) | Decoder and decoding method, recording medium, and program | |
JP2008005520A (ja) | MPEG data recording/reproducing apparatus | |
JP2007325304A (ja) | MPEG data recording/reproducing method | |
JP2007336574A (ja) | MPEG data recording/reproducing apparatus | |
JP2008005519A (ja) | MPEG data recording/reproducing apparatus | |
JP2008005521A (ja) | MPEG data recording/reproducing method | |
JP2008005522A (ja) | MPEG data recording/reproducing method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 01800918.2 Country of ref document: CN |
|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): CA CN KR US |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2001921837 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2376871 Country of ref document: CA Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020017016037 Country of ref document: KR |
|
121 | EP: the EPO has been informed by WIPO that EP was designated in this application | ||
WWP | Wipo information: published in national office |
Ref document number: 2001921837 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 10018588 Country of ref document: US |
|
WWP | Wipo information: published in national office |
Ref document number: 1020017016037 Country of ref document: KR |
|
WWG | Wipo information: grant in national office |
Ref document number: 2001921837 Country of ref document: EP |