WO1996007274A1 - Measuring and regulating synchronization of merged video and audio data - Google Patents

Measuring and regulating synchronization of merged video and audio data

Info

Publication number
WO1996007274A1
Authority
WO
WIPO (PCT)
Prior art keywords
data stream
video
system data
audio
bitstream
Application number
PCT/US1994/009565
Other languages
French (fr)
Inventor
Stephen G. Haigh
Original Assignee
Futuretel, Inc.
Application filed by Futuretel, Inc. filed Critical Futuretel, Inc.
Priority to PCT/US1994/009565 priority Critical patent/WO1996007274A1/en
Priority to AU80098/94A priority patent/AU8009894A/en
Priority to EP94931269A priority patent/EP0783823A4/en
Priority to JP8508678A priority patent/JPH10507042A/en
Priority to US08/325,430 priority patent/US5874997A/en
Publication of WO1996007274A1 publication Critical patent/WO1996007274A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/434 Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams, extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
    • H04N 21/4341 Demultiplexing of audio and video streams
    • H04N 21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N 21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N 21/43072 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen, of multiple content streams on the same device
    • H04N 21/4305 Synchronising client clock from received content stream, e.g. locking decoder clock with encoder clock, extraction of the PCR packets

Definitions

  • A low pass filter is applied to EFEV to further inhibit erratic omission or addition of B frames.
  • Applying a low pass filter to EFEV ensures that B frames are omitted from or added to the system data stream 22 only in response to a long-term trend in the difference between the EEAVR and the AEAVR, and not in response to momentary fluctuations in the values of NOS and NOF, such as those caused by reading one of the two counts during one GOP and reading the other count during the immediately preceding or immediately succeeding GOP.
  • The preferred low pass filter applied to EFEV has an asymmetric response. That is, the characteristics of the low pass filter cause the filter's output value to return to zero (0) more quickly in response to a zero (0) value of EFEV than the filter's output value departs from zero in response to a non-zero value of EFEV. (A sketch of such a filter appears after this list.)
  • The actual response times employed in the low pass filter are determined empirically.
  • If the system data stream multiplexer 44 omits from or adds to the system data stream 22 a frame of the compressed video bitstream 18, then the low pass filter's output value is arbitrarily set to zero (0). Setting the low pass filter's output value to zero (0) tends to inhibit the omission of an entire frame of the compressed video bitstream 18, or the addition of a second copy of an entire frame of the compressed video bitstream 18, during processing of the immediately succeeding MPEG GOPs.
  • The combination of the preferred audio encoder 32, the preferred video encoder 36, and the system data stream multiplexer 44 in accordance with the present invention permits assembly of virtually any desired system data stream 22 directly and without any intervening processing operations.
  • Phillips Consumer Electronics B.V., Coordination Office Optical and Magnetic Media Systems, Building SA-1, P.O. Box 80002, 5600 JB Eindhoven, The Netherlands, has established a specification for Video CD that is colloquially referred to as the "White Book" standard.
  • Phillips' White Book standard specifies a maximum bitrate for the compressed video bitstream 18 of 1,151,929.1 bits per second, an audio sampling frequency of 44.1 kHz, and an audio bitrate of 224 kbits per second.
  • Phillips' White Book standard also specifies that an audio packet is to be 2279 bytes long while a video packet has a length of 2296 bytes, and that the system data stream 22 has a pack rate of 75 packs per second.
  • The system data stream multiplexer 44 in accordance with the present invention, operating in conjunction with the preferred audio encoder 32 and the preferred video encoder 36, can directly assemble a system data stream 22 in accordance with Phillips' White Book standard from a suitably specified compressed audio bitstream 16 and compressed video bitstream 18 without any intervening operations.
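A minimal sketch of an asymmetric low pass filter of the kind described in the bullets above follows. The two coefficients are placeholders, since the patent states only that the actual response times were chosen empirically, with the output returning toward zero faster than it departs from it; all identifiers are ours.

    #include <math.h>

    static double filter_state = 0.0;   /* the filter's output value */

    /* Faster coefficient when the input pulls the output back toward zero,
     * slower coefficient when it pushes the output away from zero. */
    double lowpass_efev(double efev)
    {
        const double k_away   = 0.05;   /* assumed: slow departure from zero  */
        const double k_toward = 0.25;   /* assumed: faster return toward zero */

        double k = (fabs(efev) < fabs(filter_state)) ? k_toward : k_away;
        filter_state += k * (efev - filter_state);
        return filter_state;
    }

    /* Called after a B frame has been omitted or duplicated (see the bullet
     * before the White Book discussion): the output is forced to zero. */
    void lowpass_reset(void)
    {
        filter_state = 0.0;
    }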

Abstract

This invention concerns real-time assembly of a compressed audio-visual system data stream (22) so the audio and video data may be subsequently presented in synchronism. Assembly of a system data stream (22) in accordance with the present invention interleaves packets of data selected from a compressed audio bitstream (16) with packets of data selected from a compressed video bitstream (18). If frames of video data being assembled into the system data stream (22) advance too far ahead of the audio data being assembled into the system data stream (22), then all the data for a single frame of the video signal is omitted from the system data stream (22). Conversely, if frames of video data being assembled into the system data stream (22) lag too far behind the audio data being assembled into the system data stream (22), then a second copy of all the data for a single frame of the video signal is assembled into the system data stream (22).

Description

MEASURING AND REGULATING SYNCHRONIZATION OF MERGED VIDEO AND AUDIO DATA
Technical Field
The present invention relates generally to the technical field of recorded and/or transmitted compressed digital data, and, more particularly, to enabling a subsequent synchronized presentation of video and audio data as they are combined into a single compressed digital data stream.
Background Art
Proper reproduction of a recorded and/or transmitted multimedia program, consisting of compressed digitized video data accompanied by associated compressed digitized audio data, requires combining two independent digital data bitstreams into a single, synchronized, serial system data stream that includes both video and audio data. Lack of, or improper, synchronization of the video and audio data in assembling the data into the system data stream, or in decoding and presenting an assembled system data stream, frequently causes a visible image to appear out of synchronization with accompanying sound. For example, a presentation of images showing lip movements of an individual speaking words may not be synchronized with the audible sound of those words. To address the preceding issue, Part 1 of the Moving Pictures Experts Group ("MPEG") standard, International Organization for Standardization ("ISO") and International Electrotechnical Commission ("IEC") standard ISO/IEC 11172, defines a framework which permits combining bitstreams of digitized video and audio data into a single, synchronized, serial system data stream. Once combined into a single digital data stream, the data is in a form well suited for digital storage, such as on a hard disk or CD-ROM included in a digital computer, or for transmission, such as over a cable antenna television ("CATV") system or a high bit rate digital telephone system, e.g. a T1, ISDN Primary Rate, or ATM digital telecommunications access. A system data stream assembled in accordance with the ISO/IEC 11172 standard may be decoded by an MPEG decoder to obtain decoded pictures and/or decoded audio samples.
The ISO/IEC 11172 standard defining MPEG compression specifies that packets of data, extracted from the compressed video bitstream and from the compressed audio bitstream, are to be interleaved in assembling the system data stream. Furthermore, in accordance with the ISO/IEC 11172 standard a system data stream may include private, reserved and padding streams in addition to compressed video and compressed audio bitstreams. While properties of the system data stream as defined by the MPEG standard impose functional and performance requirements on MPEG encoders and decoders, the system data stream specified in the MPEG standard does not define an architecture for or an implementation of MPEG encoders or decoders. In fact, considerable degrees of freedom exist for possible designs and implementations of encoders and decoders that operate in accordance with the ISO/IEC 11172 standard.
A system data stream in accordance with Part 1 of the ISO/IEC 11172 standard includes two layers of data: a system layer which envelops the digital data of a compression layer. The ISO/IEC 11172 system layer is itself divided into two sub-layers, one for multiplex-wide operations identified as the "pack layer," and one for stream-specific operations identified as the "packet layer." Packs, belonging to the pack layer of a system data stream in accordance with the ISO/IEC 11172 standard, include headers which specify a system clock reference ("SCR"). The SCR fixes intended times for commencing decompression of the digitized video and audio data included in the compression layer, expressed in units of a 90 kilohertz ("kHz") clock. To effect synchronized presentation of digitized video and audio data, the ISO/IEC 11172 standard defining the packet layer provides for "presentation time-stamps" ("PTS") and also optional "decoding time-stamps" ("DTS"). The PTS and DTS specify synchronization for the video and audio data with respect to the SCR specified in the pack layer. The packet layer, which optionally contains both the PTS and DTS, is independent of the data contained in the compression layer defined by the ISO/IEC 11172 standard. For example, a video packet may start at any byte in the video stream. However, the PTS and optional DTS, if encoded into a packet's header, apply to the first "access unit" ("AU") that begins within that packet.
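For concreteness, a time expressed in seconds maps onto the 90 kHz units shared by the SCR, PTS and DTS as in the following minimal sketch; the function name is ours, not part of the standard.

    /* Sketch only: convert a presentation time in seconds to 90 kHz clock ticks. */
    #include <stdint.h>

    static uint64_t to_90khz_ticks(double seconds)
    {
        return (uint64_t)(seconds * 90000.0 + 0.5);   /* round to the nearest tick */
    }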
The MPEG standard ISO/IEC 11172 defines an AU to be the coded representation of a "presentation unit" ("PU"). The ISO/IEC 11172 standard further defines a PU as a decoded audio AU or a decoded picture. The standard also defines three (3) different methods, called "Layers" in the standard, for compressing and decompressing an audio signal. For two of these methods, the standard defines an audio AU as the smallest part of the encoded audio bitstream which can be decoded by itself. For the third method, the standard defines an audio AU as the smallest part of the encoded audio bitstream that is decodable with the use of previously acquired side and main information. Part 1 of the ISO/IEC 11172 standard suggests that during synchronized presentation of compressed video and audio data, the reproduction of the video images and audio sounds may be synchronized by adjusting the playback of both compressed digital data streams to a master time base called the system time-clock ("STC"), rather than by adjusting the playback of one stream, e.g. the video data stream, to match the playback of another stream, e.g. the audio data stream. The ISO/IEC 11172 standard suggests that an MPEG decoder's STC may be one of the decoder's clocks (e.g. the SCR, the video PTS, or the audio PTS), the digital storage media ("DSM") or channel clock, or some external clock. End-to-end synchronization of a multimedia program encoded into an MPEG system data stream occurs: (a) if an encoder embeds time-stamps during assembly of the system data stream; (b) if the video and audio decoders receive the embedded time-stamps together with the compressed data; and (c) if the decoders use the time-stamps in scheduling presentation of the multimedia program.
To inform an MPEG decoder that an encoded bitstream has an exact relationship to the SCR, a "system header" ("SH"), which occurs at the beginning of a system data stream and which may be repeated within the stream, includes a "system_audio_lock_flag" and a "system_video_lock_flag." Setting the system_audio_lock_flag to one (1) indicates that a specified, constant relationship exists between the audio sampling rate and the SCR. Setting the system_video_lock_flag to one (1) indicates that a specified, constant relationship exists between the video picture rate and the SCR. Setting either of these flags to zero (0) indicates that the corresponding relationship does not exist.
As set forth above, the ISO/IEC 11172 standard specifically provides that the system data stream may include a padding stream. Packets assembled into the system data stream from the padding stream may be used to maintain a constant total data rate, to achieve sector alignment, or to prevent decoder buffer underflow. Since the padding stream is not associated with decoding and presentation, padding stream packets lack both PTS and DTS values. In addition to the padding stream, "stuffing" of up to 16 bytes is allowed within each packet. Stuffing is used for purposes similar to those of the padding stream, and is particularly useful for providing word (16-bit) or long word (32-bit) alignment in applications where byte (8-bit) alignment is insufficient. Stuffing is the only method of filling a packet when the number of bytes required is less than the minimum size of a padding stream packet.
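As an illustration of the alignment use of stuffing just described, the following sketch computes how many stuffing bytes bring a payload up to a word or long-word boundary; the function name is hypothetical, not something defined by the standard.

    /* Illustrative sketch only: number of stuffing bytes needed to pad a
     * payload of n bytes out to a 16-bit (align = 2) or 32-bit (align = 4)
     * boundary. */
    static unsigned stuffing_for_alignment(unsigned n, unsigned align)
    {
        return (align - (n % align)) % align;
    }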
A bitstream of video data compressed in accordance with Part 2 of the ISO/IEC 11172 standard consists of a succession of frames of compressed video data. A succession of frames in an MPEG compressed video data bitstream includes intra ("I") frames, predicted ("P") frames, and bidirectional ("B") frames. Decoding the data of an MPEG I frame without reference to any other data reproduces an entire uncompressed frame of video data. An MPEG P frame may be decoded to obtain an entire uncompressed frame of video data only by reference to a prior decoded frame of video data, i.e. by reference to a prior decoded I frame or to a prior decoded P frame. An MPEG B frame may be decoded to obtain an entire uncompressed frame of video data only by reference both to a prior and to a successive reference frame, i.e. by reference to decoded I or P frames. The ISO/IEC 11172 specification defines a group of pictures ("GOP") as one or more I frames together with all of the P frames and B frames for which the I frame(s) is(are) a reference.
In assembling a system data stream, a real-time MPEG encoder must include a system header at the beginning of each system data stream, and that system header must set the system_audio_lock_flag and the system_video_lock_flag to either zero (0) or one (1). If a real-time MPEG encoder specifies that either or both of these flags are to be set, then it must ensure that throughout the entire system data stream the specified, constant relationship exists respectively between the audio sampling rate and the SCR, and between the video picture rate and the SCR. If a compressed audio bitstream encoder operates independently of the rate at which frames of video occur, there can be no assurance that these constant relationships will exist in the encoded data that is to be interleaved into the system data stream.
Disclosure of Invention
An object of the present invention is to provide a method for assembling a system data stream which permits synchronized presentation of visible images and accompanying sound.
Another object of the present invention is to provide a system data stream which maintains a constant relationship between the audio sampling rate and the SCR. Another object of the present invention is to provide a system data stream which maintains a constant relationship between the video picture rate and the SCR.
Briefly, the present invention is a method for real-time assembly of an encoded system data stream that may be decoded by a decoder into decoded video pictures and into a decoded audio signal. In particular, a system data stream assembled in accordance with the present invention permits a decoder to present the decoded audio signal substantially in synchronism with the decoded video pictures. This system data stream is assembled by interleaving packets of data selected from a compressed audio bitstream with packets of data selected from a compressed video bitstream. The compressed audio bitstream interleaved into the system data stream is generated by compressing an audio signal that is sampled at a pre-specified audio sampling rate. The compressed video bitstream interleaved into the system data stream is generated by compressing a sequence of frames of a video signal having a pre-specified video frame rate. Before commencing assembly of the system data stream, an expected encoded audio-video ratio is computed which equals the pre-specified audio sampling rate divided by the pre-specified video frame rate. A system header ("SH") is then embedded into the system data stream which includes both a system_audio_lock_flag and a system_video_lock_flag that are set to indicate respectively that a specified, constant relationship exists between an audio sampling rate and a system clock reference ("SCR"), and that a specified, constant relationship exists between a video picture rate and the SCR. Packets of data are then respectively selected from either the compressed audio bitstream or from the compressed video bitstream for assembly into the system data stream. To effect synchronization, a presentation time-stamp ("PTS") and also an optional decoding time-stamp ("DTS") are embedded into the system data stream together with each packet.
Furthermore, an actual encoded audio-video ratio is computed which equals a number that represents a count of all the samples of the audio signal that have been received for compression divided by the total number of frames of the video signal that have been received for compression. Using the actual encoded audio-video ratio, an encoded frame error value is then computed by first subtracting the expected encoded audio-video ratio from the actual encoded audio-video ratio to obtain a difference of the ratios. This difference of the ratios is then multiplied by the total number of frames of the video signal that have been received for compression.
If the encoded frame error value thus computed is less than a pre-specified negative error value, all data in the compressed video bitstream for an entire frame of the video signal is then omitted from the system data stream. Conversely, if the encoded frame error value is greater than a pre-specified positive error value, then all the data for a second copy of an entire frame of the video signal is assembled from the compressed video bitstream into the system data stream. In the preferred embodiment of the present invention, both the pre-specified positive error value and the pre-specified negative error value represent an interval of time which is approximately equal to the time interval required for presenting one and one-half frames of the decoded video pictures.
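As a purely illustrative calculation (the sampling rate and frame rate below are example values, not ones fixed by the invention): for audio sampled at 44.1 kHz and video at 29.97 frames per second, the expected encoded audio-video ratio is 44100 / 29.97, approximately 1471.5 audio samples per video frame, so an error bound of one and one-half frame times corresponds to roughly 2207 audio samples, or about 50 milliseconds, in either direction. The same arithmetic in C:

    /* Illustrative only: example rates, not values mandated by the invention. */
    #include <stdio.h>

    int main(void)
    {
        const double psasr = 44100.0;            /* pre-specified audio sampling rate, Hz    */
        const double psvfr = 30000.0 / 1001.0;   /* pre-specified video frame rate, frames/s */

        double eeavr = psasr / psvfr;            /* expected audio samples per video frame   */
        double bound = 1.5 * eeavr;              /* error bound of 1.5 frame times,          */
                                                 /* expressed in audio samples               */
        printf("EEAVR = %.1f samples/frame, bound = +/- %.0f samples (%.0f ms)\n",
               eeavr, bound, 1000.0 * bound / psasr);
        return 0;
    }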
An advantage of the present invention is that it produces a system data stream which may be decoded more easily.
Another advantage of the present invention is that it produces a system data stream which may be decoded by a variety of different decoders.
Another advantage of the present invention is that it produces a system data stream which may be decoded by comparatively simple decoders. These and other features, objects and advantages will be understood or apparent to those of ordinary skill in the art from the following detailed description of the preferred embodiment as illustrated in the various drawing figures.
Brief Description of Drawings
FIG. 1 is a diagram graphically depicting interleaving packets selected from a compressed audio bitstream with packets selected from a compressed video bitstream to assemble a system data stream; FIG. 2 is a block diagram illustrating a video encoder for compressing a sequence of frames of a video signal into a compressed video bitstream, an audio encoder for compressing an audio signal into a compressed audio bitstream, and a multiplexer for interleaving packets selected from the compressed video bitstream with packets selected from the compressed audio bitstream to assemble a system data stream;
FIG. 3 is a diagram illustrating a system data stream assembled by interleaving packets selected from a compressed audio bitstream with packets selected from a compressed video bitstream; and
FIG. 4 is a computer program written in the C programming language which implements the process for determining if all data for an entire frame of the video signal is to be omitted from the system data stream, or if all the data for a second copy of an entire frame of the video signal is to be assembled into the system data stream.
Best Mode for Carrying Out the Invention
Arrows 12a and 12b in FIG. 1 depict interleaving packets selected from a compressed audio bitstream 16 with packets selected from a compressed video bitstream 18 to assemble a serial system data stream 22 consisting of concatenated packs 24. An audio encoder 32, illustrated in the block diagram of FIG. 2, generates the compressed audio bitstream 16 by processing an audio signal, illustrated in FIG. 2 by an arrow 34. The audio encoder 32 generates the compressed audio bitstream 16 by first digitizing the audio signal 34 at a pre-specified audio sampling rate ("PSASR"), and then compressing the digitized representation of the audio signal. A video encoder 36 generates the compressed video bitstream 18 by compressing into MPEG GOPs a sequence of frames of a video signal, illustrated in FIG. 2 by an arrow 38, which has a pre-specified video frame rate ("PSVFR"). The audio encoder 32 is preferably an audio compression engine, model no. 96-0003-0002, marketed by FutureTel, Inc. of 1092 E. Arques Avenue, Sunnyvale, California 94086. The video encoder 36 is preferably a video compression engine, model no. 96-0002-002, also marketed by FutureTel, Inc. The preferred audio encoder 32 and video encoder 36 are capable of real-time compression respectively of the audio signal 34 into the compressed audio bitstream 16, and of the video signal 38 into the compressed video bitstream 18.
In real time, a system data stream multiplexer 44 repetitively selects a packet of compressed audio data or compressed video data respectively from the compressed audio bitstream 16 or from the compressed video bitstream 18 for interleaved assembly into packs 24 of the system data stream 22 illustrated in FIG. 1. The system data stream multiplexer 44 is preferably a computer program executed by a host microprocessor included in a personal computer (not illustrated in any of the FIGs.) in which the audio encoder 32 and the video encoder 36 are located. In preparing the audio encoder 32 and the video encoder 36 respectively for compressing the audio signal 34 and the video signal 38, the computer program executed by the host microprocessor transfers commands and data to the audio encoder 32 and to the video encoder 36 to produce, at pre-specified bitrates, the compressed audio bitstream 16 and the compressed video bitstream 18. To accommodate the overhead required for control data embedded in the system data stream, the sum of the bitrates specified by the computer program for the compressed audio bitstream 16 and the compressed video bitstream 18 is slightly less than the bitrate specified for the system data stream 22. In addition to directing the audio encoder 32 to produce the compressed audio bitstream 16 at a pre-specified bitrate, the host microprocessor transfers additional control data to the audio encoder 32 which directs the audio encoder 32 to digitize the audio signal 34 at the PSASR.
In addition to transferring control data to the audio encoder 32 and to the video encoder 36 to prepare them for respectively producing the compressed audio bitstream 16 and the compressed video bitstream 18, the computer program executed by the host microprocessor also prepares certain data used in assembling the system data stream 22. In particular with respect to the present invention, the computer program executed by the host microprocessor computes an expected encoded audio-video ratio ("EEAVR") for the system data stream 22 by dividing the PSASR by the PSVFR.
After the computer program executed by the host microprocessor has completed preparations for assembling the system data stream 22, the system data stream multiplexer 44 repetitively selects a packet of data respectively from the compressed audio bitstream 16 or from the compressed video bitstream 18 for assembly into the packs 24 of the system data stream 22. As illustrated in FIG. 3, every pack 24 of the assembled system data stream 22 in accordance with the ISO/IEC 11172 specification has a pre-specified length L. Each pack 24 may have a length L as long as 65,536 bytes. Each pack 24 begins with a pack header 52, designated PH in FIG. 3, which includes the system clock reference ("SCR") value for that particular pack 24. In the first pack 24 of the system data stream 22, a system header 54, designated SH in FIG. 3, follows immediately after the pack header 52. In accordance with the ISO/IEC 11172 specification, the system header 54 may also be repeated in each pack 24 in the system data stream 22. The system header 54 includes both a system_audio_lock_flag and a system_video_lock_flag. The computer program executed by the host microprocessor sets the system_audio_lock_flag and the system_video_lock_flag to one (1) to indicate respectively that a specified, constant relationship exists between an audio sampling rate and the SCR, and that a specified, constant relationship exists between a video picture rate and the SCR.
Following the pack header 52, and the system header 54 if one is included in the pack 24, the remainder of each pack 24 illustrated in FIG. 3 contains a packet 56 of data selected by the system data stream multiplexer 44 either from the compressed audio bitstream 16 or from the compressed video bitstream 18. Each packet 56 includes a packet header, not illustrated in any of the FIGs., which may contain a presentation time stamp ("PTS") and may also include the optional decoding time stamp ("DTS") in accordance with the ISO/IEC 11172 specification.
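The per-pack and per-packet bookkeeping just described can be pictured with structures such as the following minimal sketch; the type and field names are ours and are not taken from the patent or from ISO/IEC 11172.

    /* Minimal sketch; names are illustrative. */
    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        uint64_t scr;                 /* system clock reference for this pack, 90 kHz units */
        bool     has_system_header;   /* SH is mandatory in the first pack, optional later  */
        bool     system_audio_lock;   /* system_audio_lock_flag, set to 1 by the host       */
        bool     system_video_lock;   /* system_video_lock_flag, set to 1 by the host       */
    } pack_info;

    typedef struct {
        bool     from_audio;          /* packet taken from the audio or the video bitstream */
        bool     has_pts, has_dts;    /* optional time-stamps in the packet header          */
        uint64_t pts, dts;            /* 90 kHz units, valid only when the flags are set    */
        uint32_t payload_bytes;       /* compressed data carried by the packet              */
    } packet_info;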
Though not depicted in any of the FIGs., the system data stream 22 in accordance with the present invention may also include packs of a padding stream. As permitted under the ISO/IEC 11172 specification, the system data stream multiplexer 44 will assemble packs from the padding stream into the system data stream 22 to maintain a constant total data rate, to achieve sector alignment, or to prevent decoder buffer underflow.
Because the preferred audio encoder 32 generates the compressed audio bitstream 16 by digitizing the audio signal 34 at a pre-specified sampling rate, and then compressing the digitized audio signal to produce the compressed audio bitstream 16 at a pre-specified bitrate, the compressed audio bitstream 16 produced by the preferred audio encoder 32 inherently provides a stable timing reference for assigning the SCR, the PTS and the DTS to the packs 24 of the system data stream 22. By comparison, due to random fluctuations in the frame rate of the video signal 38, which occur if the video signal 38 is being produced by replaying a video cassette on a video cassette recorder ("VCR") or by playing a laser disk on a laser disk player, the frame rate of the video signal 38 does not provide a stable timing reference for assigning the SCR, the PTS and the DTS. During assembly of the system data stream 22, the computer program executed by the host microprocessor fetches data from the audio encoder 32 and the video encoder 36 in addition to the packets 56 selected from the compressed audio bitstream 16 or from the compressed video bitstream 18. In particular, the system data stream multiplexer 44 fetches from a location 62 in the audio encoder 32 a number that represents a running count of all the samples ("NOS") of the audio signal 34 that the audio encoder 32 has received for compression. Analogously, the system data stream multiplexer 44 also fetches from a location 64 in the video encoder 36 a running count of the total number of frames ("NOF") of the video signal 38 that the video encoder 36 has received for compression. The computer program executed by the host microprocessor fetches these two values as close together in time as possible. The system data stream multiplexer 44 then divides NOS by NOF to obtain an actual encoded audio-video ratio ("AEAVR").
The system data stream multiplexer 44 then first subtracts the previously computed EEAVR from the AEAVR to obtain a difference of ratios ("DOR"). Then the DOR is multiplied by NOF to obtain an encoded frame error value ("EFEV"). EFEV represents a difference in time, based upon the pre-specified audio sampling rate, between the actual time for the NOF that have been assembled into the system data stream 22, and the expected time for the NOF that have been assembled into the system data stream 22.
If the EFEV thus computed is less than a pre-specified negative error value ("PSNEV"), because the actual time for NOF assembled into the compressed video bitstream 18 exceeds the expected time for NOF assembled into the compressed video bitstream 18 by more than PSNEV, then the system data stream multiplexer 44 omits from the system data stream 22 all data for an entire B frame in the compressed video bitstream 18. If the EFEV is greater than a pre-specified positive error value ("PSPEV"), because the actual time for NOF assembled into the compressed video bitstream 18 is less than the expected time for NOF assembled into the compressed video bitstream 18 by more than PSPEV, then the system data stream multiplexer 44 assembles into the system data stream 22 a second copy of all the data for an entire B frame in the compressed video bitstream 18.
The preferred values for the PSNEV and for the PSPEV represent an interval in time required for the presentation of one and one-half frames of the decoded video pictures. Thus, only if the magnitude of the EFEV represents an interval of time which exceeds the time interval required for the presentation of one and one-half frames of the decoded video pictures will an entire B frame in the compressed video bitstream 18 be omitted from the system data stream 22, or will a second copy of an entire B frame in the compressed video bitstream 18 be assembled into the system data stream 22.
Because each frame in the system data stream 22 in accordance with Part 2 of the ISO/IEC 11172 is numbered, if the system data stream multiplexer 44 omits from the system data stream 22 all data for an entire B frame in the compressed video bitstream 18, then the system data stream multiplexer 44 must renumber all subsequent frames in the present GOP accordingly before assembling them into the system data stream 22. Correspondingly, if the system data stream multiplexer 44 assembles into the system data stream 22 a second copy of all the data for an entire B frame in the compressed video bitstream 18, then the system data stream multiplexer 44 must number that frame and renumber all subsequent frames from the present GOP accordingly.
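A minimal sketch of the renumbering, assuming for the sake of illustration that the frames of the present GOP are held in an in-memory array before being assembled into the system data stream 22; the gop_frame structure and its temporal_reference field are assumptions made for the illustration, not the actual data structures of the multiplexer 44.

    /* Sketch only: renumber the remaining frames of the GOP after the frame
     * that occupied index "dropped" has been removed from the array.          */
    struct gop_frame {
        int temporal_reference;      /* frame number within the GOP            */
        /* ... a pointer to the compressed picture data would live here ...    */
    };

    void renumber_after_drop(struct gop_frame *frames, int count, int dropped)
    {
        for (int i = dropped; i < count; i++)
            frames[i].temporal_reference -= 1;  /* close the gap left by the omitted frame */
    }

The duplication case is the mirror image: the repeated frame is given the next number and every later frame in the GOP is incremented by one.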
FIG. 5 is a computer program, written in the C programming language, which implements the process for determining whether all the data for an entire frame of the video signal 38 is to be omitted from the system data stream 22, or whether all the data for a second copy of an entire frame of the video signal 38 is to be assembled into the system data stream 22. Line numbers 1-8 in FIG. 5 fetch the counts from the location 62 in the audio encoder 32 and from the location 64 in the video encoder 36 to establish values for NOS and NOF. Line numbers 13-16 in FIG. 5 implement the computation of the EFEV. Line numbers 21-22 in FIG. 5 apply the low pass filter to the EFEV. Line numbers 26-36 in FIG. 5 determine whether all the data for an entire frame of the video signal 38 is to be omitted from the system data stream 22, or whether all the data for a second copy of an entire frame of the video signal 38 is to be assembled into the system data stream 22.
Industrial Applicability
In establishing the bitrate for the compressed video bitstream 18, the computer program executed by the host microprocessor sets that bitrate approximately one percent (1%) below the desired nominal bitrate for the system data stream 22 minus the pre-specified bitrate for the compressed audio bitstream 16. Setting the bitrate for the compressed video bitstream 18 one percent (1%) below that difference provides a sufficient safety margin that the sum of the bitrates for the compressed audio bitstream 16 and the compressed video bitstream 18, plus the overhead of the system data stream 22, should never exceed the maximum bitrate for the system data stream 22, even though a second copy of all the data for an entire B frame in the compressed video bitstream 18 is occasionally assembled into the system data stream 22.
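As a worked illustration of this headroom calculation, read here as one percent below the difference between the nominal system bitrate and the audio bitrate (the numeric inputs are examples only, not values required by the invention):

    /* Worked illustration of the one percent (1%) safety margin. */
    double video_bitrate_for(double nominal_system_bitrate, double audio_bitrate)
    {
        return 0.99 * (nominal_system_bitrate - audio_bitrate);
    }
    /* e.g. video_bitrate_for(1400000.0, 224000.0) is about 1164240 bits per second */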
The system data stream multiplexer 44 only begins omitting B frames from, or adding B frames to, the system data stream 22 after it has been assembling the system data stream 22 for several minutes. The system data stream multiplexer 44 inhibits omission or addition of B frames for this initial interval to avoid erratic operation. Erratic omission or addition of B frames during the first few minutes of the system data stream 22 would be a consequence of dividing one comparatively small number for NOS by another comparatively small number for NOF. NOS and NOF are small during the first few minutes of operation because the commands sent from the computer program executed by the host microprocessor that trigger operation of the audio encoder 32 and the video encoder 36 cause the microcode executed in the audio encoder 32 and in the video encoder 36 to reset to zero (0) the counts present in the location 62 and in the location 64, respectively. After an interval of several minutes, the counts NOS and NOF become sufficiently large that successive DORs do not change markedly from one GOP to the next.
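A minimal sketch of that start-up gate, with the several-minute interval expressed as a hypothetical frame count rather than whatever criterion the multiplexer 44 actually applies:

    /* Sketch only: no B frame is omitted or added until enough frames have
     * been counted. MIN_FRAMES_BEFORE_ADJUST is a hypothetical threshold
     * standing in for "several minutes" of video.                             */
    #define MIN_FRAMES_BEFORE_ADJUST (3 * 60 * 30)   /* about 3 minutes at ~30 frames/s */

    int adjustment_enabled(long nof)
    {
        return nof >= MIN_FRAMES_BEFORE_ADJUST;
    }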
In addition to completely inhibiting omission or addition of B frames during the first few minutes of the system data stream 22, a low pass filter is applied to the EFEV before the EFEV is tested to determine whether a B frame should be omitted from or added to the system data stream 22. Applying the low pass filter further inhibits erratic omission or addition of B frames: it ensures that B frames are omitted from or added to the system data stream 22 only in response to a long term trend in the difference between the EEAVR and the AEAVR, and not due to momentary fluctuations in the values of NOS and NOF, such as those caused by reading one of the two values during one GOP and reading the other value during the immediately preceding or immediately succeeding GOP.
The preferred low pass filter applied to the EFEV has an asymmetric response: the filter's output value returns to zero (0) in response to a zero (0) value for the EFEV more quickly than the output value departs from zero in response to a non-zero value for the EFEV. The actual response times employed in the low pass filter are determined empirically. Furthermore, if the system data stream multiplexer 44 omits from, or adds to, the system data stream 22 a frame of the compressed video bitstream 18, then the low pass filter's output value is arbitrarily set to zero (0). Setting the low pass filter's output value to zero (0) tends to inhibit omitting an entire frame of the compressed video bitstream 18, or adding a second copy of an entire frame of the compressed video bitstream 18, during processing of the immediately succeeding MPEG GOPs.
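A minimal sketch of such an asymmetric first-order low pass filter; the two coefficients are hypothetical values that would, as noted above, be determined empirically.

    #include <math.h>

    /* Sketch only: the output tracks a non-zero EFEV slowly (ATTACK_COEFF) but
     * relaxes toward zero more quickly (DECAY_COEFF). Both coefficients are
     * hypothetical, empirically tuned values.                                  */
    #define ATTACK_COEFF 0.05
    #define DECAY_COEFF  0.25

    double filter_efev(double filtered, double efev)
    {
        /* Moving toward zero uses the faster coefficient; moving away uses the slower one. */
        double coeff = (fabs(efev) < fabs(filtered)) ? DECAY_COEFF : ATTACK_COEFF;
        return filtered + coeff * (efev - filtered);
    }

After a B frame has actually been omitted or repeated, the caller would simply reset the filtered value to zero (0), as described above.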
The combination of the preferred audio encoder 32, the preferred video encoder 36, and the system data stream multiplexer 44 in accordance with the present invention permits assembly of virtually any desired system data stream 22 directly and without any intervening processing operations. For example, Philips Consumer Electronics B.V., Coordination Office Optical and Magnetic Media Systems, Building SA-1, P.O. Box 80002, 5600 JB Eindhoven, The Netherlands has established a specification for Video CD that is colloquially referred to as the "White Book" standard. The Philips White Book standard specifies a maximum bitrate for the compressed video bitstream 18 of 1,151,929.1 bits per second, an audio sampling frequency of 44.1 kHz, and an audio bitrate of 224 kBits per second. The White Book standard also specifies that an audio packet is to be 2279 bytes long while a video packet has a length of 2296 bytes, and that the system data stream 22 has a pack rate of 75 packs per second. The system data stream multiplexer 44 in accordance with the present invention, operating in conjunction with the preferred audio encoder 32 and the preferred video encoder 36, can directly assemble a system data stream 22 in accordance with the White Book standard from a suitably specified compressed audio bitstream 16 and compressed video bitstream 18 without any intervening operations.
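The White Book figures quoted above can be gathered into a single parameter block of the kind that might be handed to the system data stream multiplexer 44; the structure and field names below are illustrative only and are not part of the White Book standard or of the preferred embodiment.

    /* Illustrative parameter block for a White Book system data stream. */
    struct system_stream_params {
        double max_video_bitrate;     /* bits per second                     */
        double audio_sampling_rate;   /* Hz                                  */
        long   audio_bitrate;         /* bits per second                     */
        int    audio_packet_bytes;    /* bytes per audio packet              */
        int    video_packet_bytes;    /* bytes per video packet              */
        int    packs_per_second;      /* pack rate of the system data stream */
    };

    static const struct system_stream_params white_book = {
        1151929.1,   /* maximum video bitrate    */
        44100.0,     /* audio sampling frequency */
        224000,      /* audio bitrate            */
        2279,        /* audio packet length      */
        2296,        /* video packet length      */
        75           /* packs per second         */
    };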
Although the present invention has been described in terms of the presently preferred embodiment, it is to be understood that such disclosure is purely illustrative and is not to be interpreted as limiting. Consequently, without departing from the spirit and scope of the invention, various alterations, modifications, and/or alternative applications of the invention will, no doubt, be suggested to those skilled in the art after having read the preceding disclosure. Accordingly, it is intended that the following claims be interpreted as encompassing all alterations, modifications, or alternative applications as fall within the true spirit and scope of the invention.

Claims

What is claimed is:
1. A method for real-time assembly of an encoded system data stream that may be decoded by a decoder into decoded video pictures and into a decoded audio signal, the system data stream being assembled so the decoder may present the decoded audio signal substantially in synchronism with the decoded video pictures, the system data stream being assembled by interleaving packets of data selected from a compressed audio bitstream with packets of data selected from a compressed video bitstream, the compressed audio bitstream being generated by compressing an audio signal that is sampled at a pre-specified audio sampling rate ("PSASR"), the compressed video bitstream being generated by compressing a sequence of frames of a video signal having a pre-specified video frame rate ("PSVFR"), the method comprising the steps of:
a. before commencing assembly of the system data stream, computing an expected encoded audio-video ratio ("EEAVR") which equals the PSASR divided by the PSVFR;
b. embedding in the system data stream a system header ("SH") in which both a system_audio_lock_flag and a system_video_lock_flag are set to indicate respectively that a specified, constant relationship exists between an audio sampling rate and a system clock reference ("SCR"), and that a specified, constant relationship exists between a video picture rate and the SCR;
c. repetitively selecting a packet of data respectively from the compressed audio bitstream or from the compressed video bitstream for assembly into the system data stream;
d. repetitively embedding into the system data stream, together with each packet selected respectively from the compressed audio bitstream or from the compressed video bitstream, a presentation time-stamp ("PTS");
e. computing an actual encoded audio-video ratio ("AEAVR") which equals a number that represents a count of all the samples ("NOS") of the audio signal that have been received for compression divided by a total number of frames ("NOF") of the video signal that have been received for compression;
f. computing an encoded frame error value ("EFEV") by first subtracting the EEAVR from the AEAVR to obtain a difference of ratios ("DOR"), and then multiplying the DOR thus computed by NOF; and
g. if the EFEV is less than a pre-specified negative error value ("PSNEV"), omitting from the system data stream all data for an entire frame of the video signal.
2. The method of claim 1 comprising the further step of: h. if the EFEV is greater than a pre-specified positive error value ("PSPEV"), assembling into the system data stream a second copy of all the data for an entire frame of the video signal.
3. The method of claim 2 wherein the PSPEV represents an interval of time which is greater than a time interval required for presentation of one-half of a single frame of the decoded video pictures.
4. The method of claim 2 wherein during several minutes immediately following commencing assembly of the system data stream omission of a frame of the video signal from the system data stream, and addition of a second copy of all the data for an entire frame of the video signal to the system data stream, are inhibited.
5. The method of claim 2 comprising the further step of: i. applying a low pass filter to EFEV before determining whether to omit a frame of the video signal from the system data stream, and before determining whether to add a frame of the video signal to the system data stream.
6. The method of claim 1 wherein the PSNEV represents an interval of time which is greater than a time interval required for presentation of one-half of a single frame of the decoded video pictures.
7. The method of claim 1 wherein during several minutes immediately following commencing assembly of the system data stream omission of a frame of the video signal from the system data stream is inhibited.
8. The method of claim 1 comprising the further step of: h. applying a low pass filter to EFEV before determining whether to omit a frame of the video signal from the system data stream.
9. A method for real-time assembly of an encoded system data stream that may be decoded by a decoder into decoded video pictures and into a decoded audio signal, the system data stream being assembled so the decoder may present the decoded audio signal substantially in synchronism with the decoded video pictures, the system data stream being assembled by interleaving packets of data selected from a compressed audio bitstream with packets of data selected from a compressed video bitstream, the compressed audio bitstream being generated by compressing an audio signal that is sampled at a pre-specified audio sampling rate ("PSASR"), the compressed video bitstream being generated by compressing a sequence of frames of a video signal having a pre-specified video frame rate ("PSVFR"), the method comprising the steps of:
a. before commencing assembly of the system data stream, computing an expected encoded audio-video ratio ("EEAVR") which equals the PSASR divided by the PSVFR;
b. embedding in the system data stream a system header ("SH") in which both a system_audio_lock_flag and a system_video_lock_flag are set to indicate respectively that a specified, constant relationship exists between an audio sampling rate and a system clock reference ("SCR"), and that a specified, constant relationship exists between a video picture rate and the SCR;
c. repetitively selecting a packet of data respectively from the compressed audio bitstream or from the compressed video bitstream for assembly into the system data stream;
d. repetitively embedding into the system data stream, together with each packet selected respectively from the compressed audio bitstream or from the compressed video bitstream, a presentation time-stamp ("PTS");
e. computing an actual encoded audio-video ratio ("AEAVR") which equals a number that represents a count of all the samples ("NOS") of the audio signal that have been received for compression divided by a total number of frames ("NOF") of the video signal that have been received for compression;
f. computing an encoded frame error value ("EFEV") by first subtracting the EEAVR from the AEAVR to obtain a difference of ratios ("DOR"), and then multiplying the DOR thus computed by the total number of frames of the video signal that have been received for compression; and
g. if the EFEV is greater than a pre-specified positive error value ("PSPEV"), assembling into the system data stream a second copy of all the data for an entire frame of the video signal.
10. The method of claim 9 wherein the PSPEV represents an interval of time which is greater than a time interval required for presentation of one-half of a single frame of the decoded video pictures.
11. The method of claim 9 wherein during several minutes immediately following commencing assembly of the system data stream addition of a second copy of all the data for an entire frame of the video signal to the system data stream is inhibited.
12. The method of claim 9 comprising the further step of: h. applying a low pass filter to EFEV before determining whether to add a frame of the video signal to the system data stream.
PCT/US1994/009565 1994-08-29 1994-08-29 Measuring and regulating synchronization of merged video and audio data WO1996007274A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
PCT/US1994/009565 WO1996007274A1 (en) 1994-08-29 1994-08-29 Measuring and regulating synchronization of merged video and audio data
AU80098/94A AU8009894A (en) 1994-08-29 1994-08-29 Measuring and regulating synchronization of merged video and audio data
EP94931269A EP0783823A4 (en) 1994-08-29 1994-08-29 Measuring and regulating synchronization of merged video and audio data
JP8508678A JPH10507042A (en) 1994-08-29 1994-08-29 Measurement and regulation of synchronization of merged video and audio data
US08/325,430 US5874997A (en) 1994-08-29 1994-08-29 Measuring and regulating synchronization of merged video and audio data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US1994/009565 WO1996007274A1 (en) 1994-08-29 1994-08-29 Measuring and regulating synchronization of merged video and audio data

Publications (1)

Publication Number Publication Date
WO1996007274A1 true WO1996007274A1 (en) 1996-03-07

Family

ID=22242901

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1994/009565 WO1996007274A1 (en) 1994-08-29 1994-08-29 Measuring and regulating synchronization of merged video and audio data

Country Status (4)

Country Link
EP (1) EP0783823A4 (en)
JP (1) JPH10507042A (en)
AU (1) AU8009894A (en)
WO (1) WO1996007274A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0797197A2 (en) * 1996-03-21 1997-09-24 Kabushiki Kaisha Toshiba Packing method, recording medium and transmitting and receiving apparatus for variable length data
EP0801392A2 (en) * 1996-04-08 1997-10-15 Pioneer Electronic Corporation Information record medium, apparatus for recording the same and apparatus for reproducing the same
EP0806874A2 (en) * 1996-05-10 1997-11-12 General Instrument Corporation Of Delaware Error detection and recovery for high rate isochronous data in MPEG-2 data streams
TWI472225B (en) * 2010-01-06 2015-02-01 Sony Corp Reception apparatus and method, program and reception system

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6249319B1 (en) * 1998-03-30 2001-06-19 International Business Machines Corporation Method and apparatus for finding a correct synchronization point within a data stream

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE3942957C2 (en) * 1989-12-23 1994-06-01 Ziegler Hans Peter Device for introducing a metered gas volume into a mold cavity of an injection mold filled with a plastic melt

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4734764A (en) * 1985-04-29 1988-03-29 Cableshare, Inc. Cable television system selectively distributing pre-recorded video and audio messages
US4847690A (en) * 1987-02-19 1989-07-11 Isix, Inc. Interleaved video system, method and apparatus
US4849817A (en) * 1987-02-19 1989-07-18 Isix, Inc. Video system, method and apparatus for incorporating audio or data in video scan intervals
US5053860A (en) * 1988-10-03 1991-10-01 North American Philips Corp. Method and apparatus for the transmission and reception multicarrier high definition television signal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP0783823A4 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0797197A2 (en) * 1996-03-21 1997-09-24 Kabushiki Kaisha Toshiba Packing method, recording medium and transmitting and receiving apparatus for variable length data
EP0797197A3 (en) * 1996-03-21 1999-07-14 Kabushiki Kaisha Toshiba Packing method, recording medium and transmitting and receiving apparatus for variable length data
EP0801392A2 (en) * 1996-04-08 1997-10-15 Pioneer Electronic Corporation Information record medium, apparatus for recording the same and apparatus for reproducing the same
EP0801392A3 (en) * 1996-04-08 1999-05-19 Pioneer Electronic Corporation Information record medium, apparatus for recording the same and apparatus for reproducing the same
EP1310957A2 (en) * 1996-04-08 2003-05-14 Pioneer Electronic Corporation Information record medium, apparatus for recording the same and apparatus for reproducing the same
EP1310957A3 (en) * 1996-04-08 2004-09-01 Pioneer Electronic Corporation Information record medium, apparatus for recording the same and apparatus for reproducing the same
EP0806874A2 (en) * 1996-05-10 1997-11-12 General Instrument Corporation Of Delaware Error detection and recovery for high rate isochronous data in MPEG-2 data streams
EP0806874A3 (en) * 1996-05-10 2000-09-20 General Instrument Corporation Error detection and recovery for high rate isochronous data in MPEG-2 data streams
TWI472225B (en) * 2010-01-06 2015-02-01 Sony Corp Reception apparatus and method, program and reception system

Also Published As

Publication number Publication date
JPH10507042A (en) 1998-07-07
AU8009894A (en) 1996-03-22
EP0783823A1 (en) 1997-07-16
EP0783823A4 (en) 1998-12-02

Similar Documents

Publication Publication Date Title
US5874997A (en) Measuring and regulating synchronization of merged video and audio data
CA2278376C (en) Method and apparatus for adaptive synchronization of digital video and audio playback in a multimedia playback system
US6873629B2 (en) Method and apparatus for converting data streams
US6339760B1 (en) Method and system for synchronization of decoded audio and video by adding dummy data to compressed audio data
JP2003519985A (en) Data stream conversion method and device
KR100694164B1 (en) A reproducing method and recording medium thereof
JPH08168042A (en) Data decoding device and method therefor
US8045836B2 (en) System and method for recording high frame rate video, replaying slow-motion and replaying normal speed with audio-video synchronization
US7359621B2 (en) Recording apparatus
JPH08237650A (en) Synchronizing system for data buffer
JP3429652B2 (en) Digital coding and multiplexing equipment
JP2008123693A (en) Reproducing apparatus, reproducing method, and its recording medium
US6754273B1 (en) Method for compressing an audio-visual signal
EP0783823A1 (en) Measuring and regulating synchronization of merged video and audio data
JPH1118051A (en) I-frame extract method
JP4534168B2 (en) Information processing apparatus and method, recording medium, and program
US20130287361A1 (en) Methods for storage and access of video data while recording
JP2004040579A (en) Digital broadcast reception device and synchronous reproduction method for digital broadcast
CA2197559A1 (en) Measuring and regulating synchronization of merged video and audio data
JP2008176918A (en) Reproducing apparatus and method, and recording medium
Lu et al. Mechanisms of MPEG stream synchronization
CN113490047A (en) Android audio and video playing method
JP2004248104A (en) Information processor and information processing method
US20090110364A1 (en) Reproduction apparatus and reproduction method
JPH09167153A (en) Method and apparatus for synchronization of visual information with audio information

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU BB BR CA FI JP KP KR NO PL RU US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 2197559

Country of ref document: CA

WWE Wipo information: entry into national phase

Ref document number: 1994931269

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1994931269

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 08325430

Country of ref document: US

WWR Wipo information: refused in national office

Ref document number: 1994931269

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 1994931269

Country of ref document: EP