
Three-dimensional subtitle display method and three-dimensional display device for implementing the same

Info

Publication number
EP2389767A2
Authority
EP
European Patent Office
Prior art keywords
subtitle
depth
value
information
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP10733627A
Other languages
German (de)
French (fr)
Other versions
EP2389767A4 (en)
Inventor
Jong-Yeul Suh
Jin-Pil Kim
Jae-Hyung Song
Ho-Taek Hong
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LG Electronics Inc
Original Assignee
LG Electronics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LG Electronics Inc filed Critical LG Electronics Inc
Publication of EP2389767A2
Publication of EP2389767A4

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106: Processing image signals
    • H04N13/172: Processing image signals comprising non-image signal components, e.g. headers or format information
    • H04N13/183: On-screen display [OSD] information, e.g. subtitles or menus

Definitions

  • This disclosure relates to a three-dimensional subtitle display method and a three-dimensional display device for implementing the same.
  • For displaying text information (e.g., subtitles, closed captions, etc.) related to a broadcast program on a screen, the broadcast program may be produced by including the text information (i.e., subtitles) in the broadcast signal itself and transmitted together therewith, or the text information (subtitles) may be transmitted separately from the broadcast signal to allow a broadcast receiver to selectively display such subtitles.
  • Closed caption broadcasting can display speech-to-text outputs, lyrics of songs, film script translations, online TV guide information, emergency broadcast data, and other text-type services to viewers.
  • As closed caption broadcasting is increasingly made compulsory in terms of media access rights and is used to provide comprehensive services, its utilization is expected to drastically increase.
  • Auxiliary images that are additionally provided to the receiver may include graphic elements, beyond simple text, thereby increasing the utility of supplementary images (see the ‘ETSI EN 300 468 V1.9.1’ standard regarding service information of DVB systems and the ‘ETSI EN 300 743 V1.2.1’ and ‘ETSI EN 300 743 V1.3.1’ standards regarding a DVB subtitling system, etc.).
  • Hereinafter, supplementary images including text and/or graphic elements are referred to as ‘subtitles’, and lately, the term ‘subtitle(s)’ is more commonly used in relation to an image display device as well as to DVB technology.
  • The term ‘subtitling’ is used to denote the overall processing used for displaying subtitles (and/or other textual information).
  • In a stereoscopic 3D display system, two images are captured by using two image sensors spaced apart by about 65 millimeters, which simulates the positioning of a pair of human eyes, and are then transmitted as broadcast signals to a receiver. The receiver then produces the two images to be viewed by the left and right eyes of the viewer, thus simulating a binocular disparity to allow for depth perception and stereoscopic viewing.
  • The implementation of stereoscopic 3D images together with subtitles may be achieved by, for example, simultaneously displaying the subtitles on left and right images being alternately displayed.
  • However, processing and displaying subtitles with 3D effects is technically difficult to achieve in practice.
  • A method in which the broadcast station transmits 2D subtitle image data and the receiver itself renders the desired 3D subtitle images based on the received 2D subtitle data can be considered. However, properly defining and rendering the various 3D attributes (e.g., the thickness and stereoscopic color of caption/subtitle text, the color and transparency of a caption/subtitle text display area, and the like) with respect to continuously inputted subtitles would significantly increase the calculation burden at the receiver.
  • Alternatively, a method in which the receiver determines in advance the 3D attributes to be indiscriminately applied to subtitles and performs 3D rendering on the subtitles according to the fixed 3D attributes may be considered. In this case, although the calculation burden could be somewhat reduced, the aesthetic quality of the displayed 3D subtitles may deteriorate, and thus the displayed 3D subtitles may not meet viewer satisfaction.
  • The present inventors recognized the above-identified needs and drawbacks and, based upon such problem recognition, conceived the various features described hereafter. As a result, a method for effectively displaying 3D images and information, which allows subtitles and other textual information to be effectively blended with a 3D image such that the subtitles properly correspond with the 3D image, has been developed as per the embodiments described hereafter.
  • Another aspect of the embodiments herein is to provide a 3D display device suitable for implementing such a display method.
  • According to an aspect, there is provided a method for displaying three-dimensional (3D) subtitles in a 3D display device in which a 3D image signal, subtitle data, depth-related information related to the subtitle data, and 3D region composition information defining a display region of the subtitle data are received. The subtitle data is then formed (i.e., generated, synthesized, produced, etc.) to be three-dimensional using the received depth-related information and the 3D region composition information. Thereafter, the 3D image signal is displayed together with the formed subtitle data.
  • the 3D image signal, the subtitle data, the depth-related information and the 3D region composition information may be received via broadcast signals.
  • The depth-related information may also be processed in terms of pixels.
  • the 3D subtitle display method may further include generating a depth value look-up table for storing the reciprocal relationship between pseudo-depth information and actual depth information.
  • the depth-related information may be expressed as pseudo-depth information with respect to each pixel, and in the displaying step, the pseudo-depth information may be converted into the actual depth information with reference to the depth value look-up table.
  • look-up table definition information for generating or updating the depth value look-up table may be included in the broadcast signal.
  • the display device may pre-store the depth value look-up table for later use.
  • the actual depth information regarding pixels may be a depth value in a forward/backward direction with respect to the pixels.
  • the ‘forward/backward direction’ can refer to a direction that is relatively perpendicular to a display screen of a display device at the receiver.
  • the look-up table definition information may indicate a reciprocal relationship between the magnifications with respect to the pseudo-depth information and the display screen of a receiver, based on which the depth value look-up table may be generated to indicate the reciprocal relationship between the pseudo-depth information and the actual depth information.
  • the actual depth information may be a horizontal disparity value with respect to the pixels.
  • the subtitle data may be received in units of subtitle objects, which may include characters, a character string or a graphic element.
  • a display region of the subtitle data may be set in units of subtitle objects.
  • the display region may be a 3D object space obtained by extending an object region in a forward/backward direction under, for example, the DVB standard.
  • the display region of the subtitle data may be set to include a plurality of subtitle objects. This display region may be a 3D page space obtained by extending a page in a forward/backward direction under, for example, the DVB standard.
  • According to another aspect, there is provided a 3D display device including a broadcast signal receiving unit and a composing and outputting unit.
  • The broadcast signal receiving unit may receive a broadcast signal including a 3D image signal, subtitle data, depth-related information related to the subtitle, and 3D region composition information defining a display region of the subtitle data, and may demodulate and decode the broadcast signal.
  • The composing and outputting unit may form (i.e., compose, synthesize, generate, etc.) the subtitle data to be three-dimensional using the depth-related information and the 3D region composition information, and may display the 3D images together with the subtitle data that was formed to be three-dimensional.
  • the 3D display device may further include a memory for storing a depth value look-up table indicating a reciprocal relationship between pseudo-depth information and actual depth information.
  • the depth-related information included in the broadcast signal may be expressed as pseudo-depth information regarding each pixel.
  • the composing and outputting unit may convert the pseudo-depth information into actual depth information with reference to the depth value look-up table and configure the subtitle data based on the actual depth information.
  • subtitles (as well as other types of textual information) can be displayed to have a particular visual effect, such as a cubic effect or a three-dimensional effect, such that the subtitles correspond with a 3D image without drastically increasing the calculation burden required for performing 3D rendering at a television receiver.
  • the utility and visual attractiveness of the subtitles can be greatly increased.
  • additional parameters can be supplemented and provided to describe the 3D subtitle display region, the depth information, and the like, based upon the technical standards that apply to existing subtitle signal transmission and reception techniques, to thus accomplish backward compatibility with particular existing technical standards.
  • FIG. 1 illustrates a schematic block diagram of a broadcasting system according to an exemplary embodiment;
  • FIG. 2 illustrates an exemplary syntax of a subtitling descriptor;
  • FIG. 3 illustrates an example of allocating certain field values to subtitling type fields in the subtitling descriptor of FIG. 2;
  • FIG. 4 illustrates the syntax of general subtitle packet data;
  • FIG. 5 illustrates some exemplary types of subtitle segments used according to an exemplary embodiment;
  • FIG. 6 illustrates an exemplary structure of a common syntax of subtitling segments;
  • FIG. 7 illustrates an exemplary structure of the syntax of a 3D display definition segment (3D_DDS);
  • FIG. 8 illustrates an exemplary structure of the syntax of a 3D page composition segment (3D_PCS);
  • FIG. 9 illustrates an exemplary structure of the syntax of a 3D region composition segment (3D_RCS);
  • FIG. 10 illustrates exemplary dimensions and reference point coordinates of an object region space defined in implementing 3D subtitling according to an exemplary embodiment;
  • FIGs. 11 and 12 illustrate exemplary structures of the syntax of a 3D object data segment (3D_ODS);
  • FIG. 13 illustrates an exemplary structure of the syntax of a depth value look-up table definition segment (DVLUTDS) for defining a DVLUT;
  • FIG. 14 illustrates an example of the structure of a DVLUT
  • FIG. 15 illustrates another example of the structure of a DVLUT
  • FIG. 16 is a schematic block diagram of a television receiver according to an exemplary embodiment.
  • FIG. 17 is a flow chart illustrating an exemplary process of displaying 3D subtitles in the television receiver illustrated in FIG. 16.
  • Regarding 3D (three-dimensional) video standards technology, there are basically five main techniques by which 3D/stereoscopic imagery can be encoded onto a standard video signal. These can be characterized as field-sequential, side-fields (side-by-side), sub-fields (over-under), separate channels, and anaglyph. The field-sequential and side-field methods are the most commonly used today.
  • For a video signal to be converted to another standard, three aspects of the video signal may need to be changed: field rate, lines/frame, and color encoding. To do so, field/line omission and/or duplication techniques, field/line interpolation techniques, and motion estimation techniques may need to be performed.
  • FIG. 1 illustrates a schematic block diagram of a broadcasting system according to an exemplary embodiment.
  • the illustrated system can support at least one type of existing (or developing) DVB standard, and includes a 3D image/video capture means (such as a binocular camera 100), a processing means (such as a preprocessing unit 102), a coding means (such as a program coding unit 104), a control means (such as a controller 114), and a channel processing means (such as a channel adapter 120).
  • the exemplary labels or names for these and other elements are not meant to be limiting, as other equivalent and/or alternative elements may be implemented as well.
  • the (binocular) camera 100 includes two lenses and corresponding image pickup devices that are used to capture a pair of 2D images of a front scene.
  • the two lenses and the image pickup devices are disposed to have a distance of about 65 millimeters (mm) like that of human eyes, and accordingly, the camera 100 acquires two 2D images having a binocular disparity.
  • the image acquired by the left lens (and its image pickup device) will be referred to as a left image
  • the image acquired by the right lens (and its image pickup device) will be referred to as a right image.
  • The preprocessing unit 102 performs appropriate processing to cancel (or at least minimize) any noise or other type of signal interference that may be present in the original left and right images acquired by the camera 100, then performs image processing to make any corrections to such images, and resolves any imbalance in the luminance components.
  • the images before and/or after the preprocessing performed by the preprocessor 102 may be stored in a storage unit (or other memory device), and editing or other further image processing thereto may be performed. Accordingly, there may be some time delay between when the camera 100 captures the images and when the program coding unit 104 performs coding for the captured images.
  • a voice/audio coding unit 106 receives voice/audio signals from a plurality of microphones (or other audio pick-up device) installed at proper locations with respect to an image capturing area/region and codes the voice/audio signals according to an appropriate technical standard (such as an AC-3 standard) to generate an audio elementary stream (ES) output.
  • The image coding unit 108 codes the images acquired by the camera 100 according to a certain technical standard and compresses the coded images by removing temporal and spatial redundancy to generate a video elementary stream (ES) output.
  • In one embodiment, the image coding unit 108 codes the image signals according to the MPEG-2 standard of ISO/IEC 13818-2 and a digital video broadcasting (DVB) standard stipulated by ETSI.
  • the image coding unit 108 may code the images according to the H.264/AVC standard stipulated by the Joint Video Team (JVT) of ISO/IEC JTC1/SC29/WG11 and ITU-T SG16 Q.6, or other various coding schemes.
  • the subtitle coding unit 110 receives the subtitle data from the controller 114, compresses and codes the received subtitle data, and outputs a subtitle stream.
  • the coding process by the subtitle coding unit 110 and the coding process by the image coding unit 108 may be performed in a similar manner.
  • a packet generating unit (or other type of data packet processing means) packetizes the audio ES outputs, the video ES outputs, and the subtitle streams to generate a packetized elementary stream (PES) output.
  • a transport multiplexing unit 112 receives a voice PES, an image PES, and a subtitle PES, and also receives program specific information (PSI) and service information (SI) from the controller 114, and multiplexes the PES packets and the PSI/SI information to generate a transport stream (TS) output.
  • The controller 114, which includes a subtitle generating unit 116 and a PSI/SI generating unit 118, controls the general operation of the overall system and generates the subtitle data and the PSI/SI data.
  • the subtitle generating unit 116 generates time-coded subtitle information and provides the same to the subtitle coding unit 110.
  • the subtitle coding unit 110 may be integrated with the subtitle generating unit 116.
  • the subtitle generating unit 116 also provides information regarding a subtitle service to the PSI/SI generating unit 118.
  • the subtitle service information may include information indicating that the subtitles are provided in a three-dimensional manner.
  • the PSI/SI generating unit 118 operates to generate PSI/SI data.
  • A program map table (PMT) includes a subtitling descriptor (or other type of means for providing descriptive information or indicators) for signaling (or describing) subtitle service information.
  • the subtitling descriptor is generated based on the ETSI EN 300 468 V1.9.1 standard, which is a technical standard for service information (SI) of a DVB system. A detailed syntax structure of the subtitling descriptor will be described later in this disclosure.
  • A channel adapter 120 performs error correction coding on the transport stream (TS) such that any errors that may be caused by noise (or other interference) via a transport channel can be detected and appropriately corrected by the receiver. Then, appropriate modulation according to a particular modulation scheme (e.g., an OFDM modulation scheme) adopted by the system is performed, and the modulated signals are transmitted.
  • A channel coding and modulation process by the channel adapter 120 may be performed based on ETSI EN 300 744 V1.6.1, which is a technical standard for a channel coding and modulation scheme applicable to digital over-the-air (OTA) channel transmissions.
  • the subtitle stream carries or transfers one or more subtitles (i.e. subtitle data), and each subtitle service (or other content service) includes text and/or graphic information required to properly display the subtitles.
  • Each subtitle service includes one or more object pages (or other form of graphical representation) that are displayed to overlap on a broadcast image.
  • Each (subtitle) object page may include one or more object regions (or areas), and each object region may have a rectangular or box-like shape having particular attributes.
  • Graphic objects can be disposed within the object regions on the background image.
  • Each graphic object may be comprised of a character (letter), a word, a sentence, or may be a logo, an icon, any other type of graphical element, or any combination thereof.
  • At least one depth value (or other value that represents a specific 3D graphical/image characteristic) of each pixel may be provided or a horizontal disparity value (or other value that represents a graphical difference, discrepancy, inconsistency, inequality, or the like) between 2D images for implementing a stereoscopic 3D image may be provided such that each graphic object can be properly rendered and displayed in a three-dimensional manner in the receiver.
  • a page (or other graphical layout scheme) that provides an arrangement or combination of object regions for displaying each object is defined, based upon which subtitles are to be displayed.
  • a page identifier is assigned to each page, and when certain definition information regarding object regions or objects is transferred to the receiver, a page identifier indicating the specific page associated with such corresponding information is included.
  • The language of a subtitle and a page identifier are signaled (or otherwise conveyed) through a subtitling descriptor (or other form of parameter) within the PMT, and an accurate display point (with respect to a display location and/or display time) is designated through a presentation time stamp (PTS) or other type of time-related parameter provided within a PES packet header (or other portion of the packet).
  • Each data unit (i.e., a segment, to be described hereinbelow) may include data applied only to a single particular subtitle service or may include data shared by two or more subtitle services.
  • An example of data shared by two or more subtitle services may be a segment transmitting a logo (or other graphic element) commonly applied to subtitle services in various languages. Accordingly, a page identifier is assigned to each segment.
  • the page identifier may include a composition page identifier (or other type of indication) for signaling or identifying a segment applied only to a single subtitle service and an ancillary page identifier (or other type of indication) for signaling or identifying a data segment shared among a plurality of subtitle services.
  • a subtitling descriptor can send the page identifier values of segments required for decoding each subtitle service.
  • FIG. 2 shows an exemplary structure of the syntax of a subtitling descriptor. Although various types of fields or parameters having various bit lengths may be employed, some exemplary syntax characteristics will be explained as follows:
  • a “descriptor_tag” field is an 8-bit descriptor identifier. In case of a subtitling descriptor, it may have a value of ‘0x59’.
  • a “descriptor_length” is an 8-bit field indicating a total number of bytes of a descriptor part following this field value.
  • An “ISO_639_language_code” is a 24-bit field indicating a subtitle language as a three-character language code according to the ISO-639 technical standard.
  • a “subtitling_type” is an 8-bit field for transmitting contents of a subtitle and information regarding an intended screen ratio.
  • FIG. 3 illustrates an example of field values allocated to a “subtitling_type” field. Although various other values may be employed, some exemplary field values will be explained as follows:
  • a field value of ‘0x00’ is reserved for later use
  • a field value of ‘0x01’ indicates a European Broadcasting Union (EBU) tele-text subtitle service
  • a field value of ‘0x02’ indicates a service associated with an EBU tele-text service
  • a field value of ‘0x03’ indicates vertical blanking interval data
  • field values from ‘0x04’ to ‘0x0F’ are reserved for later use.
  • a field value of ‘0x10’ indicates a (general) DVB subtitle without restriction to a screen ratio
  • a field value of ‘0x11’ indicates a (general) DVB subtitle to be displayed on a monitor having a screen ratio of 4:3
  • a field value of ‘0x12’ indicates a (general) DVB subtitle to be displayed on a monitor having a screen ratio of 16:9
  • a field value of ‘0x13’ indicates a (general) DVB subtitle to be displayed on a monitor having a screen ratio of 2.21:1
  • a field value of ‘0x14’ indicates a (general) DVB subtitle to be displayed on an HD (High Definition) monitor.
  • Field values from ‘0x15’ to ‘0x1F’ are reserved for later use.
  • a field value of ‘0x20’ indicates a DVB subtitle (for the hearing impaired) without restriction to a screen ratio
  • a field value of ‘0x21’ indicates a DVB subtitle (for the hearing impaired) to be displayed on a monitor having a screen ratio of 4:3
  • a field value of ‘0x22’ indicates a DVB subtitle (for the hearing impaired) to be displayed on a monitor having a screen ratio of 16:9
  • a field value of ‘0x23’ indicates a DVB subtitle (for the hearing impaired) to be displayed on a monitor having a screen ratio of 2.21:1
  • a field value of ‘0x24’ indicates a DVB subtitle (for the hearing impaired) to be displayed on a high definition (HD) monitor.
  • Field values from ‘0x25’ to ‘0x2F’ are reserved for later use.
  • a field value of ‘0x30’ indicates an open language translation service for the hearing impaired, and a field value of ‘0x31’ indicates a closed language translation service for the hearing impaired.
  • Field values from ‘0x32’ to ‘0xAF’ are reserved for later use, field values from ‘0xB0’ to ‘0xFE’ are allowed for the user to define and use, and a field value of ‘0xFF’ is reserved for later use.
  • In an exemplary embodiment, some of the user-definable field values, for example, ‘0xB0’ to ‘0xB4’, are used to indicate that subtitle segments (to be described hereafter) include 3D subtitle information.
  • a field value of ‘0xB0’ indicates a 3D subtitle without restriction to a screen ratio
  • a field value of ‘0xB1’ indicates a 3D subtitle service to be displayed on the monitor having a screen ratio of 4:3
  • a field value of ‘0xB2’ indicates a 3D DVB subtitle to be displayed on the monitor having a screen ratio of 16:9
  • a field value of ‘0xB3’ indicates a 3D subtitle service to be displayed on the monitor having a screen ratio of 2.21:1
  • a field value of ‘0xB4’ indicates a 3D subtitle service to be displayed on the HD monitor.
  • other field values may also be used additionally and/or alternatively to those described above.
  • composition_page_id is a 16-bit field for discriminating a page, namely, a composition page, including data applied only to a single subtitle service. This field may be used for segments, namely, a 3D page composition segment (3D_PCS) and a 3D region composition segment (3D_RCS), which define a data structure of a subtitle screen.
  • ancillary_page_id is a 16-bit field used for discriminating a page, namely, an ancillary page, including data shared by two or more services.
  • This field is preferably not used for a composition segment, and selectively used only for a color look-up table definition segment (CLUTDS), a 3D object data segment (3D_ODS), a depth value look-up table definition segment (DVLUTDS), or the like.
  • the subtitling descriptor (or other type of means for providing descriptive information or indicators) of the exemplary embodiments is adapted (or configured) to provide indications (or signals) with respect to at least a subtitle language(s), a subtitle type(s), a composition_page_id value required for decoding a service, and an ancillary_page_id value with respect to each service included in a stream.
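  • By way of illustration only, a receiver-side parse of such a subtitling descriptor might look like the following Python sketch; the 8-byte per-service layout and all function and key names are assumptions drawn from the fields described above, not a defined API:

        import struct

        def parse_subtitling_descriptor(buf: bytes):
            """Illustrative sketch: parse a subtitling descriptor as described above.

            Assumed per-service layout (8 bytes): ISO_639_language_code (3 bytes),
            subtitling_type (1 byte), composition_page_id (2), ancillary_page_id (2).
            """
            descriptor_tag = buf[0]        # 0x59 for a subtitling descriptor
            descriptor_length = buf[1]     # number of descriptor bytes that follow
            assert descriptor_tag == 0x59
            services = []
            offset = 2
            while offset < 2 + descriptor_length:
                language = buf[offset:offset + 3].decode("ascii")   # ISO_639_language_code
                subtitling_type = buf[offset + 3]                   # see FIG. 3 values
                composition_page_id, ancillary_page_id = struct.unpack_from(">HH", buf, offset + 4)
                services.append({
                    "language": language,
                    "subtitling_type": subtitling_type,
                    # user-defined values 0xB0-0xB4 signal 3D subtitles in this embodiment
                    "is_3d": 0xB0 <= subtitling_type <= 0xB4,
                    "composition_page_id": composition_page_id,
                    "ancillary_page_id": ancillary_page_id,
                })
                offset += 8
            return services
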
  • a basic building block or unit of a subtitle stream is a subtitle segment.
  • Subtitle segments are included in PES packets, and the PES packets are included in transmission packets of a TS and transferred to the receiver.
  • A display time point of a subtitle (i.e., the time when the subtitle should be displayed) is designated through a presentation time stamp (PTS) within the PES packet header.
  • The PES packet includes a packet header and packet data, and the subtitle data is coded in the form of the syntax of PES_data_field() within the packet data.
  • a “data_identifier” field is coded into a value of ‘0x20’.
  • a “subtitle_stream_id” field which is identification information of a subtitle stream within the PES packet, has a value of ‘0x00’ in case of the DVB subtitle stream.
  • subtitle data is formatted to be arranged according to the syntax of subtitling_segment() starting from a bit stream of ‘0000 1111’.
  • An “end_of_PES_data_field_marker” field is a data end identifier.
  • A complete set of segments of subtitle services associated with the same PTS is called a ‘display set’, and the “end_of_PES_data_field_marker” field indicates that the final segment of the display set is finished.
  • the particular field and values therein may be changed accordingly.
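  • The packet walk just described can be pictured with the following illustrative Python generator; it assumes the common segment header layout of FIG. 6 (described later) in order to know how far to skip, and the 0xFF value for the end marker is likewise an assumption of this sketch:

        def iter_subtitling_segments(pes_data: bytes):
            """Illustrative sketch: walk a PES_data_field() and yield raw segments."""
            assert pes_data[0] == 0x20   # data_identifier for subtitle data
            assert pes_data[1] == 0x00   # subtitle_stream_id of a DVB subtitle stream
            offset = 2
            while offset < len(pes_data) and pes_data[offset] == 0x0F:  # sync_byte '0000 1111'
                # segment_length assumed at header bytes 4-5 (see FIG. 6)
                segment_length = int.from_bytes(pes_data[offset + 4:offset + 6], "big")
                end = offset + 6 + segment_length
                yield pes_data[offset:end]
                offset = end
            # the byte that ends the loop is the end_of_PES_data_field_marker
            # (assumed here to be 0xFF)
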
  • FIG. 5 illustrates some exemplary types of subtitle segments used according to an exemplary embodiment.
  • a 3D display definition segment (3D_DDS), a 3D page composition segment (3D_PCS), and a 3D region composition segment (3D_RCS) are segments for transferring a 3D region configuration information defining a display region of a subtitle.
  • a 3D object data segment (3D_ODS) is a segment defining subtitle data with respect to each object and its depth-related information.
  • The CLUTDS and the DVLUTDS are used for transmitting data to be referenced when the coded data of an object is interpreted, which serves to reduce the bandwidth required for transferring the subtitle data and the depth-related information.
  • An end of display set segment may be used to explicitly indicate that one display set (of information) has been finished.
  • the subtitle service may be fabricated with a size different from an overall screen size of the receiver, and accordingly, in transmitting a subtitle stream, a display size fabricated in consideration of the subtitle service can be explicitly designated.
  • the 3D display definition segment (3D_DDS) can be selectively used to define a maximum range of an image region for which a subtitle can be rendered in the receiver.
  • the subtitles can be provided in a three-dimensional manner, and thus the rendering available range can be defined by designating a maximum value and a minimum value in a three-dimensional manner, namely, in three axial directions of a 3D rectangular coordinates system.
  • subtitles can be fabricated in units of (object) pages as an arrangement or combination of object regions for displaying graphical objects, which are then transmitted and displayed at the receiver.
  • The 3D page composition segment (3D_PCS) defines a list of object regions constituting a page and a position of each object region in a 3D space having a particular reference point.
  • the respective object regions are disposed such that horizontal scan lines do not overlap.
  • The page composition segment (3D_PCS) includes state information of a page, namely, information regarding whether data transferred through a corresponding segment is to update a portion of the page (“normal case”), whether every element constituting a page is to be newly transmitted in order to correct an existing page (“acquisition point”), or whether an existing page is discarded and a completely new page is defined (“mode change”).
  • The “mode change” state is rarely used, for example, only at a start point of a program or only when there is a significant difference in the form of subtitles.
  • the page composition segment (3D_PCS) may further include time-out information regarding the particular page, namely, information regarding a valid term of the particular instance of the page.
  • The 3D region composition segment (3D_RCS) defines the size of each object region in the 3D space, attributes such as the CLUT (color look-up table) designation information used for expressing color, and a list of objects to be displayed within the object region.
  • each object region may have a solid form, not a planar form, and accordingly, it may have a virtual box shape within a 3D display space provided by the 3D television.
  • The 3D region composition segment according to the present exemplary embodiment includes attribute definition fields regarding a rear plane as well as a plane in the direction of the user.
  • the 3D object data is used to describe certain coding data for each object.
  • an object data segment includes solid configuration information for each object, which allows the receiver to properly render a 3D object based on the solid configuration information.
  • a color look-up table for defining pixel values of particular pixels (namely, values related to color and transparency) as a mapping relationship between pseudo color (CLUT_entry_id) value and actual colors (Y, Cr, Cb, and T) is associated with each of the object regions, such that the receiver can determine an actual display color of pseudo-color values included in a subtitle stream with reference to the CLUT.
  • A CLUT definition segment (CLUTDS) is used to transfer information for configuring the CLUT to the receiver.
  • a particular CLUT is applied to each object region, and a new definition of the CLUT may be transferred to update the mapping relationship between the pseudo-color value and the actual color.
  • the CLUTDS follows the ETSI EN 300 743 V1.3.1 technical standard with respect to a particular type of DVB subtitling system, and thus a description of the particular details thereof will be omitted merely for the sake of brevity, but would be understood by those skilled in the art.
  • Solid coordinates information of a 3D subtitle object can be expressed in terms of pseudo-depth values. Namely, a reciprocal relationship between pseudo-depth value(s) and physical depth coordinates information is stored in the depth value look-up table (DVLUT) in the receiver.
  • The depth of pixels is represented as one or more pseudo-depth values, and the receiver converts such pseudo-depth values into physical depth coordinates with reference to the DVLUT, thereby reducing the required transmission bandwidth.
  • the DVLUT definition segment (DVLUTDS) is used for transferring information for configuring the DVLUT to the receiver.
  • the DVLUT may be previously determined when the receiver is fabricated, and the DVLUTDS may or may not be transferred separately. Also, in this case, of course, the DVLUT may be updated through the DVLUTDS.
  • subtitling segments may include a common part in the structure of the syntax.
  • Such common syntax structure will now be described with reference to FIG. 6 prior to explaining each segment.
  • FIG. 6 illustrates the structure of a common syntax of certain subtitling segments.
  • the “sync_byte” is an 8-bit synchronization field coded with a value of ‘0000 1111’.
  • When a decoder parses a segment based on a “segment_length” field within a PES packet, it may determine whether or not a transmission packet has a missing part by verifying synchronization using the “sync_byte” field.
  • the “segment_type” field indicates a type of data within a segment_data_field().
  • If the “segment_type” field has a value of ‘0x10’, such would indicate that the segment carries page composition segment data (3D_PCS).
  • field values of ‘0x11’, ‘0x12’, ‘0x13’, and ‘0x14’ indicate that the fields are a region composition segment (3D_RCS), a CLUT definition segment (CLUTDS), an object data segment (3D_ODS), and a display definition segment (3D_DDS), respectively.
  • a field value of ‘0x80’ can indicate an end of a display set segment.
  • the DVLUT definition segment (DVLUTDS) may be indicated as, for example, a field value of ‘0x40’, which is one type of value that is reserved for later use in the ETSI EN 300 743 V1.3.1 technical standard.
  • a “page_id” value discriminates a subtitle service of data included in a subtitling segment through a comparison with a value included in a subtitling descriptor.
  • Segments having a page_id signaled as a composition page identifier in the subtitling descriptor are used to transfer subtitling data that is particularly applied to a single subtitle service.
  • segments having a page identifier (e.g., page_id) signaled as an ancillary page identifier in the subtitling descriptor can be used to transfer subtitling data shared by a plurality of subtitle services.
  • A “segment_length” field indicates the number of bytes included in the segment_data_field(), which is disposed (or placed) behind the “segment_length” field.
  • The segment_data_field() is the payload of the corresponding segment.
  • The syntax of the payload varies according to the segment type, and its details will be described in turn hereinafter.
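  • As an illustration of this common header, the following Python sketch splits one segment into its fields and maps the segment type values listed above; the 16-bit widths of page_id and segment_length are assumptions consistent with the 16-bit page identifiers described earlier, and all names are illustrative:

        SEGMENT_TYPES = {
            0x10: "3D_PCS",   # 3D page composition segment
            0x11: "3D_RCS",   # 3D region composition segment
            0x12: "CLUTDS",   # CLUT definition segment
            0x13: "3D_ODS",   # 3D object data segment
            0x14: "3D_DDS",   # 3D display definition segment
            0x40: "DVLUTDS",  # depth value look-up table definition segment
            0x80: "END_OF_DISPLAY_SET",
        }

        def parse_segment_header(segment: bytes):
            """Illustrative sketch: split one subtitling segment into the
            common fields of FIG. 6."""
            assert segment[0] == 0x0F, "bad sync_byte: packet may have a missing part"
            segment_type = segment[1]
            page_id = int.from_bytes(segment[2:4], "big")
            segment_length = int.from_bytes(segment[4:6], "big")
            payload = segment[6:6 + segment_length]   # segment_data_field()
            return SEGMENT_TYPES.get(segment_type, "reserved"), page_id, payload
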
  • FIG. 7 illustrates the structure of the syntax of a 3D display definition segment (3D_DDS).
  • a “dds_version_number” field indicates a version of a display definition segment. If any one of the content of the display definition segment is changed, the version number may be increased in a modulo-16 manner.
  • When a “display_window_flag” field is set to 1, such indicates that a subtitle display set associated with the DDS should be rendered within a maximum rendering-available range (referred to as a ‘window region’ hereinafter) set in the display.
  • The size and position of the window region are defined by the following parameters, namely, the “display_window_horizontal_position_minimum”, “display_window_horizontal_position_maximum”, “display_window_vertical_position_minimum”, “display_window_vertical_position_maximum”, “display_window_z_position_minimum”, and “display_window_z_position_maximum” fields. Meanwhile, when the “display_window_flag” field is set to 0, such indicates that a subtitle display set associated with the DDS must (or should) be rendered directly in the front and rear space of the display.
  • The “display_width” field indicates a maximum horizontal directional width of the display assumed by a subtitle stream associated with the segment, and the “display_height” field indicates a maximum vertical directional height of the display assumed by the subtitle stream associated with the segment.
  • the “display_window_horizontal_position_minimum” field can indicate the leftmost pixel of the subtitle window region based on the leftmost pixel of the display.
  • The “display_window_horizontal_position_maximum” field can indicate the rightmost pixel of the subtitle window region based on the leftmost pixel of the display.
  • the “display_window_vertical_position_minimum” field can indicate the uppermost line of the subtitle window region based on the uppermost scan line of the display.
  • the “display_window_vertical_position_maximum” field can indicate the lowermost line of the subtitle window region based on the uppermost scan line of the display.
  • The display definition segment (3D_DDS) can additionally include two particular fields, namely, the “display_window_z_position_minimum” and the “display_window_z_position_maximum” fields, in addition to the four two-dimensional fields described in the ETSI EN 300 743 V1.3.1 technical standard.
  • The “display_window_z_position_minimum” field indicates a minimum coordinate value on the z-axis of the window region. Namely, this field value indicates the position farthest from the viewer in the range of z-axis values with respect to a subtitle expressed in a 3D manner.
  • The unit of this field value may be the same as a single pixel size value in the two-dimensional fields.
  • The “display_window_z_position_maximum” field indicates a maximum coordinate value on the z-axis of the window region. Namely, this field value indicates the position nearest to the viewer in the range of z-axis values with respect to the subtitle expressed in a 3D manner.
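  • Purely as an illustration, the six window fields just described can be held in a small structure; the following Python sketch (names hypothetical) tests whether a subtitle point falls within the rendering-available window region:

        from dataclasses import dataclass

        @dataclass
        class Window3D:
            h_min: int  # display_window_horizontal_position_minimum
            h_max: int  # display_window_horizontal_position_maximum
            v_min: int  # display_window_vertical_position_minimum
            v_max: int  # display_window_vertical_position_maximum
            z_min: int  # display_window_z_position_minimum (farthest from the viewer)
            z_max: int  # display_window_z_position_maximum (nearest to the viewer)

            def contains(self, x: int, y: int, z: int) -> bool:
                """True if a subtitle pixel may be rendered at (x, y, z)."""
                return (self.h_min <= x <= self.h_max
                        and self.v_min <= y <= self.v_max
                        and self.z_min <= z <= self.z_max)
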
  • FIG. 8 illustrates the structure of the syntax of a 3D page composition segment (3D_PCS).
  • A “page_time_out” field indicates the time duration, in seconds, after which a page instance is erased from the screen because it is no longer valid.
  • a “page_version_number” field indicates the version of a page composition segment. If any one of the content of the page composition segment is changed, the version number increases in a modulo-16 manner.
  • a “page_state” field signals a state of a subtitling page instance described in a page composition segment.
  • The values of the “page_state” field are defined as shown in Table 1 (see the ETSI EN 300 743 V1.3.1 technical standard regarding the DVB type subtitling system): a value of ‘0x0’ indicates the “normal case” (a page update), ‘0x1’ indicates an “acquisition point” (a page refresh), ‘0x2’ indicates a “mode change” (a new page), and ‘0x3’ is reserved.
  • When the “page_state” field value indicates the “mode change” or the “acquisition point”, a display set must (or should) include a region composition segment (3D_RCS) with respect to each of the object regions constituting the page associated with the page composition segment (3D_PCS).
  • a “region_id” field is a unique identifier with respect to a single object region. Each object region is displayed within a page instance defined in the page composition segment.
  • a “region_horizontal_address” field indicates a horizontal address of the uppermost left pixel of an object region, and a “region_vertical_address” field indicates a vertical address of the uppermost line of the object region.
  • the object region location information described in the page composition segment (3D_PCS) additionally includes a “region_z_address” field.
  • the “region_z_address” field indicates a z-axis coordinates value with respect to the rear plane of the object region. In this case, if the object region does not have a planar form or a uniform face, the “region_z_address” field indicates a minimum value of the z coordinate.
  • FIG. 9 illustrates the structure of the syntax of a 3D region composition segment (3D_RCS).
  • a “region_id” field is an 8-bit unique identifier with respect to an object region including information in an RCS.
  • a “region_version_number” field is a version of the object region.
  • When a “region_fill_flag” field is set to 1, it indicates that the front face of the object region should be filled with a background color defined by the “region_8-bit_pixel-code” field, as an entry of the CLUT (color look-up table).
  • A “region_width” field indicates the horizontal directional length of the object region in number of pixels.
  • A “region_height” field indicates the vertical directional length of the object region in number of pixels.
  • a “region_z-length” field added as one of the 3D attributes of the object region indicates the length of the 3D object region on the z-axis. Accordingly, the size of the 3D object region space is determined by “region_width”, “region_height”, and “region_z-length”.
  • FIG. 10 illustrates the dimension of a 3D object region and reference point coordinates of an object region space defined by the page composition segment (3D_PCS) in implementing a 3D subtitling according to an exemplary embodiment.
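  • To make FIG. 10 concrete, the following illustrative Python sketch combines the reference point fields carried in the 3D_PCS with the dimension fields carried in the 3D_RCS to yield the virtual box occupied by one object region; the dictionary keys simply mirror the field names above and are not a defined API:

        def object_region_space(pcs_region: dict, rcs_region: dict) -> dict:
            """Illustrative sketch: the 3D box of FIG. 10 for one object region."""
            x0 = pcs_region["region_horizontal_address"]
            y0 = pcs_region["region_vertical_address"]
            z0 = pcs_region["region_z_address"]   # rear plane (minimum z) of the region
            return {
                "x_range": (x0, x0 + rcs_region["region_width"] - 1),
                "y_range": (y0, y0 + rcs_region["region_height"] - 1),
                "z_range": (z0, z0 + rcs_region["region_z-length"] - 1),
            }
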
  • a “region_level_of_compatibility” field indicates a minimum CLUT type required for the decoder to decode the object region. If this field has a value of ‘0x01’, it indicates that 2-bit/input CLUT is required; if this field has a value of ‘0x02’, it indicates that a 4-bit/input CLUT is required; and if this field has a value of ‘0x03’, it indicates that an 8-bit/input CLUT is required.
  • A “region_depth” field indicates a pixel color depth intended for the object region. If this field has a value of ‘0x01’, it indicates that a pixel color depth is 2 bits; if this field has a value of ‘0x02’, it indicates that a pixel color depth is 4 bits; and if this field has a value of ‘0x03’, it indicates that a pixel color depth is 8 bits.
  • a “CLUT_id” field discriminates the CLUT applied to the particular object region.
  • A “region_8-bit_pixel-code” field indicates an input value (or entry), namely, a pseudo-color value, in an 8-bit CLUT to be applied as a background color (or other graphical display characteristic) with respect to the object region when the “region_fill_flag” field is set.
  • a “region_4-bit_pixel-code” field indicates an input value (or entry) in a 4-bit CLUT to be applied as a background color (or other graphical display characteristic) with respect to the object region when the “region_fill_flag” field is set, in case where the color depth of the object region is 4 bits or in case where the color depth of the object region is 8 bits and the “region_level_of_compatibility” field indicates that the 4-bit/input CLUT meets the minimum requirements. In other cases, the value of this field is not defined.
  • a ‘region_2-bit_pixel-code” field indicates an input value (or entry) in a 2-bit CLUT to be applied as a background color (or other graphical display characteristic) with respect to the object region when the “region_fill_flag” field is set, in case where the color depth of the object region is 2 bits or in case where the color depth of the object region is 4 bits or 8 bits and the “region_level_of_compatibility” field indicates that the 2-bit/input CLUT meets the minimum requirements. In other cases, the value of this field is not defined.
  • a “DVLUT_id” field discriminates a DVLUT applied to the object region.
  • An “object_id” field is a unique identifier of an object displayed within the object region. An “object_type” field indicates the type of the object: when the “object_type” field has a value of ‘0x00’, it indicates a bit map object; if the “object_type” field has a value of ‘0x01’, it indicates a character object; and if the “object_type” field has a value of ‘0x02’, it indicates a character string object.
  • An “object_provider_flag” is a 2-bit flag indicating how the object is provided. If this field has a value of ‘0x00’, it indicates that the object is provided as a subtitling stream; and if this field has a value of ‘0x01’, it indicates that the object is provided in a state that it is stored in a ROM in the receiver decoder.
  • An “object_horizontal_position” field indicates the horizontal directional position of the upper leftmost pixel of an object, in units of pixels.
  • An “object_vertical_position” field indicates the vertical directional position of the upper leftmost pixel of the object, in units of pixels.
  • each object can have a 3D shape, and thus object position information described in the region composition segment (3D_RCS) additionally includes an “object_z_position” field.
  • This field indicates the coordinate on the z-axis of the rear surface of a subtitle object.
  • When an object has a non-uniform surface, it indicates a minimum value on the z-axis of the object space.
  • This field must have a value ranging from 0 to region_z-length - 1; if a value outside this range is received, it is erroneous information, so the receiver is requested to perform error processing by itself.
  • When the “object_type” field value is ‘0x01’, indicating that the corresponding object is a character object, or when the field value is ‘0x02’, indicating that the corresponding object is a character string object, foreground or background color information (or other graphical characteristics) regarding the corresponding object is provided.
  • a “foreground_pixel_code” field indicates an input value, namely, a pseudo-color value, in the 8-bit CLUT selected as a foreground color of a character or a character string
  • a “background_pixel_code” field indicates a pseudo-color value selected as a background color of the character or the character string, namely, as the background color of the object region.
  • a “top_surface_type” field indicates a type of the top surface of a 3D character and has a value corresponding to a ‘uniform plane’, ‘rounded’, or other graphical characteristic.
  • a “side_surface_type” field indicates a type of sides in contact with the top surface of the 3D character, having a value corresponding to ‘shaded’, ‘tilted’, ‘non-tilted’, or the like.
  • FIGs. 11 and 12 illustrate exemplary structures of the syntax of a 3D object data segment (3D_ODS).
  • An “object_id” field is an 8-bit unique identifier with respect to an object to which the segment data is related.
  • An “object_version_number” field is the version of the segment data. When any content within the segment changes, the version number can be increased in a modulo-16 manner.
  • An “object_coding_method” field indicates a method in which an object is coded. If this field has a value of ‘0x00’, it indicates that a 2D object has been coded by pixels, and if this field has a value of ‘0x01’, it indicates that a 2D object has been coded into a character string. In one embodiment, when this field has a value of ‘0x02’, it indicates that a 3D object has been coded by pixels, and when this field has a value of ‘0x03’, it indicates that a character string to be displayed as 3D has been coded.
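  • For illustration, the “object_coding_method” values described above can be summarized as a simple mapping; the labels below are descriptive, not terms from the standard:

        OBJECT_CODING_METHODS = {
            0x00: "2D object coded by pixels",
            0x01: "2D object coded as a character string",
            0x02: "3D object coded by pixels",              # per this embodiment
            0x03: "character string to be displayed as 3D", # per this embodiment
        }

        def describe_object_coding(method: int) -> str:
            return OBJECT_CODING_METHODS.get(method, "reserved/unknown")
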
  • When a “non_modifying_color_flag” field is set to 1, it indicates that input value 1 of the CLUT is a color that cannot be corrected (i.e., a correction-unavailable color).
  • When the correction-unavailable color is assigned to an object pixel, the color of the background or object where the corresponding pixels are positioned cannot be corrected. This scheme can be used to generate a ‘transparent hole’ in the object.
  • When the “object_coding_method” field has a value of ‘0x00’, indicating a 2D object coded by pixels, pixel coding data can be inserted.
  • When the “object_coding_method” field has a value of ‘0x01’, indicating a 2D object coded as a character string, character string coding data can be inserted.
  • When the “object_coding_method” field indicates a 3D object (e.g., a value of ‘0x02’ or ‘0x03’), 3D pixel coding data can be inserted.
  • a “top_field_data_block_length” field indicates the number of bytes included in pixel data sub-blocks with respect to an odd numbered scan line screen image (top field) among two interlace-scanned screen images.
  • a “bottom_field_data_block_length” field indicates the number of bytes included in pixel data sub-blocks with respect to even numbered scan line screen image (bottom field) among the two interlace-scanned screen images.
  • the bytes of the pixel data sub-blocks pixel-data_sub-block() with respect to bottom field corresponding to the “bottom_field_data_block_length” field value are sequentially inserted.
  • the word length can be adjusted by filling eight (or other appropriate number of) padding bits.
  • A “number_of_codes” field indicates the number of code bytes to be processed by the decoder. Subsequent to this field, character codes corresponding to the field value are arranged.
  • A “top_surface_color_block_length” field and a “top_surface_depth_block_length” field indicate the number of bytes of data expressing the front surface in the 3D object data.
  • the front surface refers to a portion exposed to the surface, namely, the surface seen by the user.
  • a “top_surface_color_block_length” field indicates the number of bytes of pixel value data
  • a “top_surface_depth_block_length” field indicates the number of bytes of depth information regarding the front surface.
  • A “hidden_surface_color_block_length” field and a “hidden_surface_depth_block_length” field indicate the number of bytes of code data used for expressing a hidden surface of the 3D object data.
  • the hidden surface refers to information regarding an area of the 3D object occluded (or otherwise not immediately viewable) by the front surface, namely, an area that can be filtered to be seen through the front surface when the front surface is set to be at least partially transparent or translucent.
  • the code data expressing such hidden surface includes pixel value data and depth information, like the data of the front surface.
  • a “hidden_surface_color_block_length” field indicates the number of bytes of pixel value data with respect to the hidden surface
  • a “hidden_surface_depth_block_length” field indicates the number of bytes of depth information regarding the hidden surface.
  • The bytes of the pixel data sub-block pixel-data_sub-block() regarding the front surface are sequentially inserted correspondingly according to the “top_surface_color_block_length” field value.
  • a color value with respect to each pixel constituting the front surface of the 3D object is expressed as a pseudo-color value, namely, as an input value of the CLUT.
  • The receiver extracts the pixel values of the respective pixels in the form of pseudo-color values from the pixel data sub-block pixel-data_sub-block(), and obtains an actual color for display and an applied transparency value by performing conversion using the CLUT.
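  • As a minimal illustration of this CLUT conversion, the following Python sketch maps pseudo-color values to display values; the table entries are placeholders, not values from the standard or this disclosure:

        # A toy CLUT; (Y, Cr, Cb, T) entries are illustrative placeholders.
        clut = {
            0: (0, 128, 128, 0),       # CLUT_entry_id 0 -> fully transparent (example)
            1: (235, 128, 128, 255),   # CLUT_entry_id 1 -> opaque white (example)
        }

        def resolve_pixel(pseudo_color: int) -> dict:
            """Convert a pseudo-color value extracted from pixel-data_sub-block()
            into the actual color and transparency used for display."""
            y, cr, cb, t = clut[pseudo_color]
            return {"Y": y, "Cr": cr, "Cb": cb, "T": t}
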
  • The syntax structure of the pixel-data_sub-block() is the same as that described in the ETSI EN 300 743 V1.3.1 technical standard regarding the DVB type subtitling system; thus, its detailed description will be omitted merely for the sake of brevity, but would be clearly understood by those skilled in the art.
  • the bytes of a 3D coordinates data sub-block z_data_sub-block_3D() regarding the front surface of the 3D object are sequentially inserted.
  • the 3D coordinates data sub-block z_data_sub-block_3D() is comprised of a byte string obtained by coding the 3D object, including coding data of depth coordinates with respect to each pixel of the front surface of the 3D object.
  • The depth coordinate regarding each pixel refers to the position in the z-axis direction of the corresponding pixel, and the receiver can use it to perform 3D rendering on the corresponding portion so as to render the display in a 3D manner.
  • The bytes of the pixel data sub-block pixel-data_sub-block() regarding the hidden surface are sequentially inserted correspondingly according to a “hidden_surface_color_block_length” field value.
  • The bytes of the 3D coordinates data sub-block z_data_sub-block_3D() regarding the hidden surface are sequentially inserted correspondingly according to a “hidden_surface_depth_block_length” field value.
  • the 3D coordinates data sub-block z_data_sub-block_3D() has a syntax similar to that of the pixel data sub-block pixel-data_sub-block(). In this case, however, as mentioned above, in one exemplary embodiment, in indicating depth coordinates of a 3D object, a method similar to the method in which a pixel value is expressed by using a CLUT input value may be employed.
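  • The four block-length fields make the payload self-describing; the following illustrative Python sketch slices the coded 3D object data accordingly, assuming the insertion order described above (top-surface color, top-surface depth, hidden-surface color, hidden-surface depth):

        def split_3d_object_blocks(payload: bytes, lengths: dict) -> dict:
            """Illustrative sketch: slice a 3D object's data into its four sub-blocks."""
            blocks, offset = {}, 0
            for name, key in (("top_surface_color", "top_surface_color_block_length"),
                              ("top_surface_depth", "top_surface_depth_block_length"),
                              ("hidden_surface_color", "hidden_surface_color_block_length"),
                              ("hidden_surface_depth", "hidden_surface_depth_block_length")):
                n = lengths[key]
                blocks[name] = payload[offset:offset + n]  # pixel-data or z_data sub-block bytes
                offset += n
            return blocks
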
  • A depth value look-up table (DVLUT) defining a reciprocal relationship between pseudo-depth values and physical depth coordinates is previously transferred to the receiver, and depth information is expressed by an input value of the DVLUT, namely, by a pseudo-depth value, in the 3D coordinates data sub-block z_data_sub-block_3D(), thereby reducing the transmission bandwidth.
  • the DVLUT may be defined by a DVLUT definition segment (DVLUTDS) and updated.
  • the DVLUT definition segment (DVLUTDS) may be previously stored in the receiver.
  • the word length can be adjusted by filling eight (or other appropriate number of) padding bits.
  • FIG. 13 illustrates an exemplary structure of the syntax of a depth value look-up table definition segment (DVLUTDS) for defining a DVLUT.
  • the DVLUT is a table defining a reciprocal relationship between pseudo-depth values from 0 to 255, each expressed as an 8-bit unsigned integer, and actual depth information.
  • FIGs. 14 and 15 illustrate examples of the structure of the DVLUT.
  • the DVLUT may store, pixel by pixel, the reciprocal relationship between input values (DVLUT_entry_id), namely pseudo-depth values within the range from 0 to 255, and physical depth values.
  • alternatively, the DVLUT may store, pixel by pixel, the reciprocal relationship between the input values (DVLUT_entry_id), namely pseudo-depth values within the range from 0 to 255, and horizontal disparity (or parallax) values.
  • the DVLUT may be separately defined for each object.
  • the DVLUT definition segment (DVLUTDS) of FIG. 13 is used to define or update the DVLUT.
  • the “DVLUT_id” field indicates a unique identifier with respect to the DVLUT.
  • the “DVLUT_version_number” field indicates the version of the DVLUTDS. When even one of the contents within this segment changes, the version number increases in a modulo-16 manner.
  • An “output_type” field indicates a type of an output value of the DVLUT defined by the DVLUTDS. In detail, if the “output_type” field value is 0, it indicates that an output value of the DVLUT defined by the DVLUTDS is a physical depth value. Meanwhile, if the “output_type” field value is 1, an output of the DVLUT defined by the DVLUTDS is a horizontal disparity (or parallax) value with respect to pixels.
  • a “DVLUT_entry_id” field indicates an input value of the DVLUT.
  • a first input value of the DVLUT has a value of ‘0’.
  • when the “output_type” field value is 0, indicating that an output value of the DVLUT is a physical depth value, namely, a z-axis directional position value with respect to pixels,
  • “output_num_value” field data and “output_den_value” field data are inserted in order to express the z-axis directional depth coordinate values corresponding to each DVLUT input value as a ratio of, or in multiples of, the screen width of the receiver.
  • the receiver may configure the DVLUT of FIG. 14 or that of FIG. 15, and interpret the pseudo-depth value transferred in the 3D depth coordinate data sub-block z_data_sub-block_3D() to render a particular 3D subtitle(s).
  • the receiver may calculate a physical depth value (z_value) according to Equation 1 shown below, by using the “output_num_value” field data and the “output_den_value” field data with respect to each DVLUT input value, namely, each pseudo-depth value, and store the result in the DVLUT.
  • Equation 1: z_value = width × (output_num_value / output_den_value), where “width” denotes the screen width.
  • the receiver converts a pseudo-depth value transmitted in the 3D depth coordinate data sub-block z_data_sub-block_3D() of the 3D_ODS into a physical depth value by using the DVLUT to obtain physical 3D information regarding each point of each object within the subtitle, and renders the same so as to be displayed on the 3D display device.
  • the “output_num_value” field value may include a positive or negative sign (or symbol), so the depth value (z_value) can have either a negative or a positive value.
  • when the depth value (z_value) is positive, a 3D image is formed at the front side, namely, toward the viewer, relative to the display reference face.
  • the absolute size of the depth value (z_value) represents a relative size based on the screen width, and an image is formed at the rear side or at the front side of the display reference face depending on whether the depth value (z_value) is negative or positive.
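  • as a concrete illustration of Equation 1, the following minimal Python sketch builds a DVLUT whose outputs are physical depth values and resolves pseudo-depth values through it; the field names mirror the segment syntax above, but the sample entries and the build_dvlut() helper are illustrative assumptions, not part of any standard.

    # Minimal sketch: building a DVLUT with physical depth outputs
    # (output_type == 0) and resolving pseudo-depth values through it.
    # The sample entries below are hypothetical; real ones arrive in a DVLUTDS.

    def build_dvlut(entries, screen_width):
        """entries: list of (DVLUT_entry_id, output_num_value, output_den_value).
        Returns a dict mapping each pseudo-depth value to a physical z_value,
        where z_value = screen_width * output_num_value / output_den_value
        (Equation 1). A signed output_num_value places the image behind
        (negative) or in front of (positive) the display reference face."""
        return {entry_id: screen_width * num / den
                for entry_id, num, den in entries}

    # Hypothetical DVLUTDS payload: three of the 256 possible entries (0..255).
    sample_entries = [(0, -1, 20), (128, 0, 1), (255, 1, 10)]

    dvlut = build_dvlut(sample_entries, screen_width=1920)
    print(dvlut[255])  # 192.0 -> formed 1/10 of the screen width toward the viewer
    print(dvlut[0])    # -96.0 -> formed behind the display reference face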
  • when the output is a horizontal disparity value, the receiver stores each DVLUT input value, namely each pseudo-depth value, paired with its horizontal disparity value in the DVLUT.
  • the receiver regards the image expressed by the pixel data sub-block pixel-data_sub-block() as a base view (e.g., a left image) among the pair of 2D images, and shifts the pixels of the left image by the horizontal disparity value with respect to the corresponding pixels to configure a subtitle object image with respect to an extended view.
  • the horizontal disparity value is preferably expressed in units of pixels.
  • if the horizontal disparity value is 0, it indicates the same position as the display reference face (e.g., the rear face of the object region whose z-axis coordinate is the “region_z_address” field value). If the horizontal disparity value is negative, the image is focused at the front side of the display reference face; if it is positive, the image is focused at the rear side of the display reference face.
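  • the shift described above can be pictured with the following minimal Python sketch, which derives the extended view from the base view pixel by pixel; the array layout, the TRANSPARENT marker, and the render_extended_view() helper are assumptions made for illustration, not a mandated implementation.

    # Minimal sketch: deriving an extended-view subtitle image from a base
    # view by shifting each pixel by its horizontal disparity (in pixels).
    # Negative disparity pulls the pixel in front of the display reference
    # face; positive disparity pushes it behind. Data shapes are illustrative.

    TRANSPARENT = 0  # assumed pseudo-color meaning "no subtitle pixel here"

    def render_extended_view(base_view, disparity_map):
        """base_view: 2D list of pixel values (e.g., the left image).
        disparity_map: 2D list of per-pixel horizontal disparity values.
        Returns the extended view (e.g., the right image)."""
        height, width = len(base_view), len(base_view[0])
        extended = [[TRANSPARENT] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                if base_view[y][x] == TRANSPARENT:
                    continue
                nx = x + disparity_map[y][x]  # shift by the disparity value
                if 0 <= nx < width:
                    extended[y][nx] = base_view[y][x]
        return extended

    base = [[0, 7, 7, 0]]
    disp = [[0, -1, -1, 0]]  # negative: focused in front of the screen
    print(render_extended_view(base, disp))  # [[7, 7, 0, 0]]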
  • FIG. 16 is an exemplary schematic block diagram of a television receiver according to an exemplary embodiment.
  • such a television receiver may be suitable for receiving broadcast signals based on one or more DVB technical standards and reproducing images and video.
  • a broadcast signal receiving unit 190 may be configured to receive broadcast signals including 3D image signals, subtitle data, depth-related information related to the subtitle data, and 3D region composition information defining a display region of the subtitle data.
  • a demodulation and channel decoding unit 200 (or other equivalent component), which cooperates with the broadcast signal receiving unit, selects a broadcast signal of one channel from among a plurality of broadcast signals, demodulates the selected broadcast signal, and error-correction-decodes the demodulated broadcast signal to output a transport stream (TS).
  • demodulation and channel decoding unit 200 may be comprised of a demodulating unit (or other equivalent component) configured to demodulate at least portions of the broadcast signals received by the broadcast signal receiving unit, and a decoding unit (or other equivalent component) configured to decode at least portions of the broadcast signals demodulated by the demodulation unit.
  • the decoding unit may also include a demultiplexing unit 202, a voice decoding unit 204, and an image decoding unit 206, which will be explained further below.
  • the demultiplexing unit 202 demultiplexes the TS to separate a video PES, an audio PES, and a subtitle PES, and extracts PSI/SI information including a program map table (PMT).
  • a depacketization unit releases packets of the video PES and the audio PES to restore a video ES and an audio ES.
  • the voice decoding unit 204 decodes the audio ES to output a digital audio bit stream.
  • the audio bit stream is converted into an analog audio signal by a digital-to-analog converter, amplified by an amplifier, and then outputted through a speaker (or other output means).
  • the image decoding unit 206 parses the video ES to extract header data and an MPEG-2 video bit stream.
  • the image decoding unit 206 also decodes the MPEG-2 video bit stream and outputs left and right broadcast image signals for implementing and displaying stereoscopic 3D images.
  • a selection filter 208, a subtitle decoding unit 210, a CLUT 212, a pixel buffer 214, a composition buffer 216, a DVLUT 218, and a 3D graphic engine 220 constitute a circuit (or other scheme or means of hardware, software and/or a combination thereof) for decoding the subtitle stream to generate a 3D subtitle bit map image.
  • the selection filter 208 receives the subtitle stream, namely, subtitle PES packets, from the demultiplexing unit 202, separates a header to depacketize the packets, and restores subtitle segments.
  • the selection filter 208 extracts a presentation time stamp (PTS) (or similar component) from the header of each PES packet and stores it in a memory, so that the data can be referred to in a subtitle reproduction process.
  • alternatively, the PTS may be extracted by an additional processor rather than directly by the selection filter 208.
  • the selection filter 208 may receive a PMT from the demultiplexing unit 202 and parse it to extract a subtitling descriptor.
  • the selection filter 208 classifies the subtitle segments based on a page identifier (page_id) value.
  • an object data segment (3D_ODS) is provided to the subtitle decoding unit 210 and decoded.
  • a display definition segment (3D_DDS), a page composition segment (3D_PCS), and a region composition segment (3D_RCS) are provided to the composition buffer 216 and used for decoding of the object data segment (3D_ODS) and rendering of a 3D subtitle.
  • the CLUTDS is used to generate or update a CLUT, and the CLUT may be stored in the composition buffer 216 or in an additional memory.
  • the DVLUTDS is used to configure or update the DVLUT, and in this case, the DVLUT may also be stored in the composition buffer 216 or in the additional memory. Meanwhile, the segments such as 3D_DDS, 3D_PCS, 3D_RCS, CLUTDS, DVLUTDS, and the like, may be decoded by the subtitle decoding unit 210 or an additional processor and then provided to corresponding units, instead of being directly provided from the selection filter 208 to the units.
  • the subtitle decoding unit 210 decodes the object data segment (3D_ODS) with reference to the CLUT, 3D_DDS, 3D_PCS, and 3D_RCS, and temporarily stores the decoded pixel data in the pixel buffer 214.
  • the subtitle decoding unit 210 decodes the pixel data sub-block pixel-data_sub-block() with respect to a top field and the pixel data sub-block pixel-data_sub-block() with respect to a bottom field, and stores the decoded pixel data in the pixel buffer 214 by the pixels.
  • the subtitle decoding unit 210 decodes the character code, generates a bit map image for the corresponding character string object, and stores the same in the pixel buffer 214.
  • the subtitle decoding unit 210 decodes the pixel data sub-block pixel-data_sub-block() with respect to the front surface and hidden surface of the object and stores the decoded pixel data in the pixel buffer 214.
  • the subtitle decoding unit 210 converts a pseudo-color value expressing a color value of each pixel into an actual color value with reference to the CLUT 212 and stores the same.
  • the subtitle decoding unit 210 decodes 3D coordinate data sub-blocks z_data_sub-block_3D() with respect to the front surface and hidden surface of the object, and stores the decoded 3D coordinate data in the pixel buffer 214.
  • the subtitle decoding unit 210 converts a pseudo-depth value expressing a depth value of each pixel into a physical depth value with reference to the DVLUT 218, and stores the same. In this manner, when the 3D-coded object is decoded, the map of depth coordinate values or horizontal disparity values for each pixel is stored together with the 2D pixel bit map in the pixel buffer 214.
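  • the two look-ups performed by the subtitle decoding unit (pseudo-color through the CLUT and pseudo-depth through the DVLUT) can be pictured with the short Python sketch below; the table contents and the buffer structure are invented for illustration only.

    # Minimal sketch of the decoding step: each decoded pixel carries a
    # pseudo-color value and a pseudo-depth value, which are resolved
    # through the CLUT and the DVLUT before being stored in the pixel
    # buffer. All table entries here are hypothetical placeholders.

    clut = {1: (235, 128, 128, 0)}  # pseudo-color -> (Y, Cr, Cb, T)
    dvlut = {200: 96.0}             # pseudo-depth -> physical z_value

    pixel_buffer = []

    def decode_pixel(pseudo_color, pseudo_depth):
        """Resolve one decoded pixel and store it in the pixel buffer."""
        color = clut[pseudo_color]   # actual color plus transparency
        depth = dvlut[pseudo_depth]  # physical depth coordinate
        pixel_buffer.append((color, depth))

    decode_pixel(1, 200)
    print(pixel_buffer)  # [((235, 128, 128, 0), 96.0)]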
  • the composition buffer 216 temporarily stores and updates the data included in the 3D_DDS, the 3D_PCS, and the 3D_RCS, so that the subtitle decoding unit 210 can refer to them in decoding the object data segment (3D_ODS).
  • the data stored in the composition buffer 216 is used when the 3D graphic engine 220 renders the 3D subtitle.
  • the DVLUT 218 stores the depth value look-up table.
  • the subtitle decoding unit 210 may refer to the depth value look-up table in decoding the object data segment (3D_ODS). Also, the depth value look-up table in the DVLUT 218 can be referred to when the 3D graphic engine 220 performs rendering.
  • the 3D graphic engine 220 (or other equivalent component such as a graphics acceleration chip or processor) configures a subtitle page and object regions constituting the page with reference to the display definition segment (3D_DDS), the page composition segment (3D_PCS), and the region composition segment (3D_RCS) stored in the composition buffer 216 and the presentation time stamp (PTS) stored in the memory.
  • the 3D graphic engine 220 receives pixel bit map data and pixel depth map data from the pixel buffer 214 with respect to each object corresponding to each object region, and performs 3D rendering based on the received data to generate a 3D subtitle image signal.
  • when the television receiver displays a 3D image in a holographic or volumetric manner, the 3D graphic engine 220 outputs 3D graphic data fitting that format.
  • when the television receiver is of a stereoscopic type, the 3D graphic engine 220 outputs a pair of subtitle OSD images to be output to the left and right image screen planes.
  • the pixel depth map data is stored in the pixel buffer 214, and the pixel depth map data includes a depth coordinates value or the horizontal disparity value with respect to each pixel.
  • the depth coordinates value or the horizontal disparity value with respect to each pixel is converted from a pseudo-depth value into a physical depth value by the subtitle decoding unit 210 and then stored.
  • alternatively, the depth coordinates value or the horizontal disparity value with respect to each pixel may be stored in the form of a pseudo-depth value in the pixel buffer 214.
  • in that case, the 3D graphic engine 220 may perform the 3D rendering operation while converting the pseudo-depth value into a physical depth value with reference to the DVLUT 218.
  • the actual 3D rendering operation may be implemented by using one of the existing 3D rendering schemes, a scheme that may be proposed in the future, or any combination of applicable schemes.
  • those skilled in the art can easily implement such techniques, and thus their detailed description will be omitted merely for the sake of brevity.
  • a mixer/formatter 222 (or other equivalent component) mixes the 3D subtitle image signal transferred from the 3D graphic engine 220 with the left and right broadcast image signals transferred from the image decoding unit 206, and outputs the mixed signal to the screen plane 224. Accordingly, the 3D subtitle included in the stereoscopic region is output so as to overlap the 3D image on the screen plane 224.
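  • the mixing step can be thought of as a per-pixel overlay of the rendered subtitle plane onto each decoded broadcast image, as in the Python sketch below; the alpha convention and the mix() helper are plausible illustrations, not the only arrangement the description permits.

    # Minimal sketch of the mixer/formatter stage: a rendered subtitle plane
    # is overlaid on a decoded broadcast image (done once per view for a
    # stereoscopic pair). Fully transparent subtitle pixels (alpha == 0)
    # leave the video untouched; the alpha convention is an assumption.

    def mix(broadcast_image, subtitle_plane):
        """broadcast_image: 2D list of pixel values.
        subtitle_plane: 2D list of (pixel_value, alpha) tuples."""
        out = [row[:] for row in broadcast_image]
        for y, row in enumerate(subtitle_plane):
            for x, (value, alpha) in enumerate(row):
                if alpha > 0:  # subtitle pixel is visible here
                    out[y][x] = value
        return out

    left_out = mix([[10, 10]], [[(99, 255), (0, 0)]])
    print(left_out)  # [[99, 10]] -> subtitle overwrites only where opaque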
  • a program map table (PMT) is extracted from a DVB broadcast stream and a subtitling descriptor within the PMT is read to recognize basic information regarding a subtitle.
  • it is determined, through the “subtitling_type” field within the subtitling descriptor, whether a 3D subtitle service is provided (S250).
  • the PMT is parsed to recognize a PID value of a stream having the “stream_type” value of ‘0x06’ (S252).
  • the “stream_type” value of ‘0x06’ indicates a TS transferring a PES packet including private data in the ISO/IEC 13818-1 standard regarding MPEG-2. Because the DVB subtitling stream is transferred through the private data PES packet, it can be a candidate for the subtitle PES packet detected based on the “stream_type” value.
  • the DVB subtitle PES packets have a “data_identifier” field set to ‘0x20’ and a “subtitle_stream_id” field set to ‘0x00’. Accordingly, in step S254, a PES packet whose “data_identifier” field has the value ‘0x20’ and whose “subtitle_stream_id” field has the value ‘0x00’ is detected.
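  • a receiver-side filter for these identification steps might look like the following Python sketch; the field values (‘0x06’, ‘0x20’, ‘0x00’) come from the text above, while the dict-based packet model and the helper names are simplifying assumptions.

    # Minimal sketch of steps S252-S254: pick subtitle PES candidates by
    # stream_type, then confirm by data_identifier and subtitle_stream_id.

    def find_subtitle_pids(pmt_streams):
        """pmt_streams: list of dicts with 'stream_type' and 'pid' keys.
        stream_type 0x06 marks PES packets carrying private data."""
        return [s["pid"] for s in pmt_streams if s["stream_type"] == 0x06]

    def is_dvb_subtitle_pes(pes_packet):
        """A DVB subtitle PES packet carries data_identifier 0x20 and
        subtitle_stream_id 0x00."""
        return (pes_packet["data_identifier"] == 0x20
                and pes_packet["subtitle_stream_id"] == 0x00)

    pmt = [{"stream_type": 0x06, "pid": 0x101},
           {"stream_type": 0x02, "pid": 0x100}]
    print(find_subtitle_pids(pmt))  # [257] -> the private-data stream
    print(is_dvb_subtitle_pes({"data_identifier": 0x20,
                               "subtitle_stream_id": 0x00}))  # True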
  • the segment data is classified and extracted according to the “segment_type” field value (S256).
  • depending on the “segment_type” field value, a segment is classified as a 3D page composition segment (3D_PCS), a 3D region composition segment (3D_RCS), or a CLUT definition segment (CLUTDS);
  • when the “segment_type” field has a value of ‘0x43’, the segment is classified as a 3D object data segment (3D_ODS); and
  • when the “segment_type” field has a value of ‘0x44’, the segment is classified as a DVLUT definition segment (DVLUTDS), as summarized in the dispatch sketch below.
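  • step S256 thus amounts to a dispatch on the “segment_type” field, as in this Python sketch; only ‘0x43’ and ‘0x44’ appear in the text above, so the remaining codes are hypothetical placeholders (0x12 is borrowed from the 2D DVB CLUT definition segment) and do not come from this description.

    # Minimal sketch of step S256: route each extracted segment to its
    # handler by "segment_type". Codes marked hypothetical are assumptions.

    SEGMENT_TYPES = {
        0x43: "3D_ODS",   # per the text: 3D object data segment
        0x44: "DVLUTDS",  # per the text: DVLUT definition segment
        0x40: "3D_DDS",   # hypothetical placeholder
        0x41: "3D_PCS",   # hypothetical placeholder
        0x42: "3D_RCS",   # hypothetical placeholder
        0x12: "CLUTDS",   # value used by the 2D DVB standard (assumed reused)
    }

    def classify_segment(segment_type):
        return SEGMENT_TYPES.get(segment_type, "reserved/unknown")

    print(classify_segment(0x44))  # DVLUTDS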
  • in step S258, a window space (or region) on which a 3D subtitle is to be displayed, a page space, the size and position of an object region space, and 3D object composition information are recognized by using the 3D_DDS, the 3D_PCS, and the 3D_RCS.
  • in step S260, the pixel data sub-block pixel-data_sub-block() and the 3D coordinates data sub-block z_data_sub-block_3D() included in the 3D_ODS are decoded to acquire a pseudo-color value and a pseudo-depth value with respect to a 3D subtitle object.
  • the pseudo-color value is converted into a color value to be actually outputted in the 3D display by using the CLUT.
  • the pseudo-depth value is converted into a depth value (z-position) to be actually outputted in the 3D display by using the DVLUT (S262).
  • 3D rendering is performed to generate a 3D subtitle bit map, which is formatted according to the 3D display scheme and then output (S264).
  • the horizontal disparity value at the level of pixels is used as an example of information additionally transferred to the receiver.
  • the horizontal disparity value may be provided as a value between the pair of stereoscopic images, for example, between the left and right images.
  • a horizontal disparity value with respect to an extended view, namely, the other view among the left and right images, relative to a base view among the left and right images, may be provided through each segment as the z-axis directional position information included in the 3D_DDS, the 3D_PCS, and the 3D_RCS.
  • an image with respect to the extended view is generated by shifting an image with respect to the base view by the horizontal disparity value included in each segment, and synthesized (i.e., composed, combined, etc.) with a broadcast image so as to be output to the stereoscopic display.
  • the 3D coordinate data sub-block z_data_sub-block_3D() may not be transmitted.
  • in this case, the “top_surface_depth_block_length” field and the “hidden_surface_depth_block_length” field are set to 0.
  • the receiver may acquire horizontal disparity information of the subtitle by using the 3D_DDS, the 3D_PCS, the 3D_RCS and the DVLUT, and control an output of the subtitle in the stereoscopic display by using them.
  • the DVLUT is used to interpret spatial information for providing a cubic effect to the 2D subtitle with respect to each object.
  • the pseudo-depth value utilizing the DVLUT may be used to indicate the “display_window_z-position_minimum” and “display_window_z-position_maximum” field values in the display definition segment (3D_DDS) and/or the “region_z_address” field value in the object region composition segment (3D_RCS).
  • particular features of the exemplary embodiments can be part of an apparatus (e.g., control device, circuitry, dedicated processors, integrated chip, and/or implemented together with software, hardware, and/or a combination thereof having appropriate coding/commands stored in a storage medium to be executed by a microprocessor or the like) comprising at least a selector (208) or other equivalent component, a 3D subtitle decoder (210) or other equivalent component, and a 3D graphics engine (220) or other equivalent component.
  • the selector (208) can receive subtitle data streams obtained from broadcast multimedia signals and classify various subtitle segments in the received subtitle data streams into a 3D object data segment used in defining subtitle data with respect to graphical objects, at least three 3D display characteristic segments used in transferring 3D region configuration information defining a subtitle display region, a depth-related definition segment used in generating/updating a depth value look-up table, and a color-related definition segment used in generating/updating a color look-up table.
  • the 3D subtitle decoder (210) cooperating with said selector can perform decoding of said 3D object data segment with reference to said color look-up table and to said 3D display characteristic segments, and decoding of 2D/3D objects, coded in terms of pixels, with reference to said depth value look-up table for converting pseudo-depth values into physical depth values for each pixel, to thus generate 3D subtitle bit map image information.
  • the 3D graphics engine (220) cooperating with said 3D subtitle decoder can process said 3D subtitle bit map image information, based on said depth value look-up table and said 3D display characteristic segments, into 3D subtitle image signals used for graphically rendering subtitles to have a three-dimensional visual display effect.
  • the apparatus may further comprise a processing unit (222) (or other equivalent component) that receives said 3D subtitle image signals generated from said 3D graphics engine and receives decoded 3D image signals obtained from said broadcast multimedia signals, and processes said 3D subtitle image signals and said decoded 3D image signals to be suitable for displaying images and subtitles together in a three-dimensional manner.
  • the apparatus may further comprise a storage medium comprised of a composition buffer (216) to store at least said 3D display characteristic segments comprising a 3D display definition segment, a 3D page composition segment, a 3D region composition segment; a pixel buffer (214) to store at least said 3D subtitle bit map image information; said depth value look-up table (DVLUT) (218) to store at least depth information related to pixels; and said color look-up table (CLUT) (212) to store at least color information related to pixels.
  • the apparatus having such selector, such storage medium, such 3D subtitle decoder, such 3D graphics engine, and such processing unit can be implemented in a three-dimensional display device.
  • any appropriate 3D graphics processing technology (e.g., OpenGL standards, X3D standards, Mobile Graphics Standard, etc.) and 3D display-related technology (e.g., 3D-NTSC, 3D-PAL, 3D-SECAM, MUTED: Multi-User 3D Television Display, 3D-TV, 3D-HD, 3D-PDPs, 3D-LCDs, etc.) may be employed in implementing the features described herein.
  • the television receiver (or other type of digital content reception means) can display subtitles or other textual information with a cubic or 3D effect such that the subtitles naturally blend in with the 3D images or video. Accordingly, the utility and attractiveness of subtitles can be increased. Also, because additional parameters are supplementarily added to the existing subtitle signal transmission/reception method, backward compatibility with the existing technical standards can be achieved.
  • the various features described herein can be implemented for any display device that has a 3D image display capability and needs to have a closed caption (i.e. subtitle, textual information, etc.) display function.
  • the present features can be particularly useful for a stereoscopic display device regardless of a formatting type, such as a dual-mode display, a time sequence-mode display, or the like.

Abstract

A three-dimensional (3D) subtitle display method in a 3D display device is disclosed to display a subtitle such that the subtitle naturally blends with a 3D image. In a method for displaying three-dimensional (3D) subtitles in a 3D display device, 3D image signals, subtitle data, depth-related information related to the subtitle data, and 3D region composition information defining a display region of the subtitle data are received. The subtitle data is formed to be three-dimensional using the received depth-related information and the 3D region composition information, and the 3D image signals are displayed together with the formed subtitle data.

Description

    THREE-DIMENSIONAL SUBTITLE DISPLAY METHOD AND THREE-DIMENSIONAL DISPLAY DEVICE FOR IMPLEMENTING THE SAME
  • This disclosure relates to a three-dimensional subtitle display method and a three-dimensional display device for implementing the same.
  • For displaying text information (e.g., subtitles, closed captions, etc.) related to a broadcast program on a screen, the broadcast program may be produced by including text information (i.e. subtitles) into a broadcast signal itself and transmitted together therewith, or text information (subtitles) not integrated with the broadcast signal may be separately transmitted to allow a broadcast receiver to selectively display such subtitles. So-called closed caption broadcasting can display speech-to-text outputs, lyrics of songs, film script translations, online TV guide information, emergency broadcast data, and other text-type services to viewers. Recently, as closed caption broadcasting tends to be limitedly compulsory in terms of media access rights and providing comprehensive services, its utilization is expected to drastically increase.
  • In particular, according to the digital video broadcasting (DVB) standard stipulated by the ETSI (European Telecommunications Standards Institute), auxiliary images that are additionally provided to the receiver may include graphic elements, beyond simple text, thereby increasing the utility of supplementary images (See ‘ETSI EN 300 468 V1.9.1’ standard regarding the standard of service information of DVB systems and ‘ETSI EN 300 743 V1.2.1’ and ‘ETSI EN 300 743 V1.3.1’ standards regarding a DVB subtitling system, etc.). In these standards, such supplementary images including text and/or graphic elements are referred to as ‘subtitles’, and lately, the term ‘subtitle(s)’ is more commonly used in relation to an image display device as well as to DVB technology. Here, the term ‘subtitling’ is used to denote the overall processing used for displaying subtitles (and/or other textual information).
  • Meanwhile, the advancement of television technology has reached a level of implementing a device for displaying stereoscopic images (or three-dimensional (3D) images), and in particular, full-scale commercialization of a stereoscopic type 3D television is underway. In a stereoscopic 3D display system, two images are captured by using two image sensors spaced apart by about 65 millimeters, which simulates the positioning of a pair of human eyes, and then transmitted as broadcast signals to a receiver. Then, the receiver produces the two images to be viewed by the left and right eyes of the viewer, thus simulating a binocular disparity to allow for depth perception and stereoscopic viewing.
  • When subtitles (i.e., textual information, etc.) are desired to be implemented into a stereoscopic-type 3D television (or other types of three-dimensional display devices), subtitle data (or other information related to subtitles or text) should also be implemented with 3D effects such that degradation to the overall image quality of the 3D images is minimized. Implementation of stereoscopic 3D images together with subtitles may be achieved by, for example, simultaneously displaying the subtitles on left and right images being alternately displayed. However, the processing and displaying of subtitles with 3D effects is technically difficult to achieve in practice.
  • The technical details involved in transmitting such subtitle information are defined in the above-mentioned technical standards stipulating DVB technology. However, the contents defined in these standards are suitable only for transmitting subtitle data with respect to a general 2D television, but not for transmitting so-called 3D television signals. If so-called three-dimensional (3D) subtitle data is intended to be transmitted according to the above-mentioned standards, additional subtitle data (or information) corresponding to a pair of images for implementing a 3D image must be transmitted, which results in at least twice the amount of information that needs to be handled, thus causing problems of ineffective use of resources in signal processing, signal transmission and signal reception.
  • A method in which the broadcast station transmits 2D subtitle image data and the receiver itself renders the desired 3D subtitle images based on the received 2D subtitle data can be considered, but properly defining and rendering the various 3D attributes (e.g., the thickness and stereoscopic color of caption/subtitle text, the color and transparency of a caption/subtitle text display area, and the like) with respect to continuously inputted subtitles would significantly increase the calculation burden at the receiver. A method in which the receiver previously determines the 3D attributes to be indiscriminately applied to subtitles and performs 3D rendering on the subtitles according to the fixed 3D attributes may also be considered. In this case, although the calculation burden could be somewhat reduced, the aesthetic nature of the displayed 3D subtitles may deteriorate, and thus the displayed 3D subtitles may not meet viewer satisfaction.
  • Accordingly, there is a need to develop a method for effectively displaying so-called three-dimensional subtitles (or other textual information) that correspond with 3D images in a receiver, with minimal image degradation, while using limited bandwidth resources in an effective manner and minimizing the calculation burden at the receiver.
  • The present inventors recognized the above-identified needs and drawbacks and based upon such problem recognition, conceived the various features described hereafter. As a result, a method for effectively displaying 3D images and information that allow subtitles and other textual information to be effectively blended with a 3D image such that the subtitles properly correspond with the 3D image has been developed as per the embodiments described hereafter.
  • Another aspect of the embodiments herein is to provide a 3D display device suitable for implementing such display method.
  • To achieve the above technical aspects, there is provided a method for displaying three-dimensional (3D) subtitles in a 3D display device in which 3D image signals, subtitle data, depth-related information related to the subtitle data, and 3D region composition information defining a display region of the subtitle are received. The subtitle data is then formed (i.e., generated, synthesized, produced, etc.) to be three-dimensional using the received depth-related information and the 3D region composition information. Thereafter, the 3D image signals are displayed together with the formed subtitle data.
  • The 3D image signal, the subtitle data, the depth-related information and the 3D region composition information may be received via broadcast signals.
  • According to the embodiments described hereafter, because the image signals and subtitle (text) data can be expressed in terms of pixels, the depth-related information may be also processed in terms of pixels.
  • In one exemplary embodiment, the 3D subtitle display method may further include generating a depth value look-up table for storing the reciprocal relationship between pseudo-depth information and actual depth information. In this case, the depth-related information may be expressed as pseudo-depth information with respect to each pixel, and in the displaying step, the pseudo-depth information may be converted into the actual depth information with reference to the depth value look-up table. Meanwhile in this embodiment, look-up table definition information for generating or updating the depth value look-up table may be included in the broadcast signal. As a modification thereto, the display device may pre-store the depth value look-up table for later use.
  • In one embodiment, the actual depth information regarding pixels may be a depth value in a forward/backward direction with respect to the pixels. Here, the ‘forward/backward direction’ can refer to a direction that is relatively perpendicular to a display screen of a display device at the receiver. The look-up table definition information may indicate a reciprocal relationship between the magnifications with respect to the pseudo-depth information and the display screen of a receiver, based on which the depth value look-up table may be generated to indicate the reciprocal relationship between the pseudo-depth information and the actual depth information. Meanwhile, in a different embodiment, the actual depth information may be a horizontal disparity value with respect to the pixels.
  • In another embodiment, the subtitle data may be received in units of subtitle objects, which may include characters, a character string or a graphic element. A display region of the subtitle data may be set in units of subtitle objects. The display region may be a 3D object space obtained by extending an object region in a forward/backward direction under, for example, the DVB standard. Meanwhile, the display region of the subtitle data may be set to include a plurality of subtitle objects. This display region may be a 3D page space obtained by extending a page in a forward/backward direction under, for example, the DVB standard.
  • Additionally, there is provided a 3D display device including a broadcast signal receiving unit and a composing and outputting unit. The broadcast signal receiving unit may receive a broadcast signal including a 3D image signal, subtitle data, depth-related information related to the subtitle, and 3D region composition information defining a display region of the subtitle data, and demodulates and decodes the broadcast signal. The composing and outputting unit may form (i.e., compose, synthesize, generate, etc.) the subtitle data to be three-dimensional using the depth-related information and the 3D region composition information, and display the 3D images together with the subtitle data that was formed to be three-dimensional.
  • The 3D display device may further include a memory for storing a depth value look-up table indicating a reciprocal relationship between pseudo-depth information and actual depth information. The depth-related information included in the broadcast signal may be expressed as pseudo-depth information regarding each pixel. In this case, the composing and outputting unit may convert the pseudo-depth information into actual depth information with reference to the depth value look-up table and configure the subtitle data based on the actual depth information.
  • In the exemplary embodiments, subtitles (as well as other types of textual information) can be displayed to have a particular visual effect, such as a cubic effect or a three-dimensional effect, such that the subtitles correspond with a 3D image without drastically increasing the calculation burden required for performing 3D rendering at a television receiver. Thus, the utility and visual attractiveness of the subtitles can be greatly increased. Also, additional parameters can be supplemented and provided to describe the 3D subtitle display region, the depth information, and the like, based upon the technical standards that apply to existing subtitle signal transmission and reception techniques, to thus accomplish backward compatibility with particular existing technical standards.
  • FIG. 1 illustrates a schematic block diagram of a broadcasting system according to an exemplary embodiment;
  • FIG. 2 illustrates the exemplary syntax of a subtitling descriptor;
  • FIG. 3 illustrates an example of allocating certain field values to subtitling type fields in the subtitling descriptor of FIG. 2;
  • FIG. 4 illustrates the syntax of a general subtitle packet data;
  • FIG. 5 illustrates some exemplary types of subtitle segments used according to an exemplary embodiment;
  • FIG. 6 illustrates an exemplary structure of a common syntax of subtitling segments;
  • FIG. 7 illustrates an exemplary structure of the syntax of a 3D display definition segment (3D_DDS);
  • FIG. 8 illustrates an exemplary structure of the syntax of a 3D page composition segment (3D_PCS);
  • FIG. 9 illustrates an exemplary structure of the syntax of a 3D region composition segment (3D_RCS);
  • FIG. 10 illustrates exemplary dimension and reference point coordinates of an object region space defined in implementing a 3D subtitling according to an exemplary embodiment;
  • FIGs. 11 and 12 illustrate exemplary structures of the syntax of a 3D object data segment (3D_ODS);
  • FIG. 13 illustrates an exemplary structure of the syntax of a depth value look-up table definition segment (DVLUTDS) for defining a DVLUT;
  • FIG. 14 illustrates an example of the structure of a DVLUT;
  • FIG. 15 illustrates another example of the structure of a DVLUT;
  • FIG. 16 is a schematic block diagram of a television receiver according to an exemplary embodiment; and
  • FIG. 17 is a flow chart illustrating an exemplary process of displaying 3D subtitles in the television receiver illustrated in FIG. 16.
  • Regarding 3D (three-dimensional) video standards technology, there are basically five main techniques by which 3D/stereoscopic imagery can be encoded onto a standard video signal. These can be characterized as field-sequential, side-fields (side-by-side), sub-fields (over-under), separate channels, and anaglyph. It can be said that the field-sequential and side-field methods are the most commonly used today.
  • Also, for a video signal to be converted to another standard, it can be said that three aspects of the video signal may need to be changed: field rate, lines/frame and color encoding. To do so, field/line omission and/or duplication techniques, field/line interpolation techniques, and motion estimation techniques may need to be performed.
  • All of the above-mentioned techniques are applicable to the following features in the exemplary embodiments.
  • FIG. 1 illustrates a schematic block diagram of a broadcasting system according to an exemplary embodiment. The illustrated system can support at least one type of existing (or developing) DVB standard, and includes a 3D image/video capture means (such as a binocular camera 100), a processing means (such as a preprocessing unit 102), a coding means (such as a program coding unit 104), a control means (such as a controller 114), and a channel processing means (such as a channel adapter 120). The exemplary labels or names for these and other elements are not meant to be limiting, as other equivalent and/or alternative elements may be implemented as well.
  • The (binocular) camera 100 includes two lenses and corresponding image pickup devices that are used to capture a pair of 2D images of a front scene. The two lenses and the image pickup devices are disposed to have a distance of about 65 millimeters (mm) like that of human eyes, and accordingly, the camera 100 acquires two 2D images having a binocular disparity. In the following description, among the two 2D images constituting a pair of stereoscopic images, the image acquired by the left lens (and its image pickup device) will be referred to as a left image, and the image acquired by the right lens (and its image pickup device) will be referred to as a right image.
  • The preprocessing unit 102 performs appropriate processing to cancel (or at least minimize) any noise or other type of signal interference that may be present in the original left and right images acquired by the camera 100, then performs image processing to make any corrections to such images, and resolves any imbalance in the luminance components. The images before and/or after the preprocessing performed by the preprocessor 102 may be stored in a storage unit (or other memory device), and editing or other further image processing thereto may be performed. Accordingly, there may be some time delay between when the camera 100 captures the images and when the program coding unit 104 performs coding for the captured images.
  • In the program coding unit 104, a voice/audio coding unit 106 receives voice/audio signals from a plurality of microphones (or other audio pick-up device) installed at proper locations with respect to an image capturing area/region and codes the voice/audio signals according to an appropriate technical standard (such as an AC-3 standard) to generate an audio elementary stream (ES) output.
  • The image coding unit 108 codes the images acquired by the camera 100 according to a certain technical standard and compresses the coded images by removing the temporal and spatial redundancy to generate a video elementary stream (ES) output. In one exemplary embodiment, the image coding unit 108 codes the image signals according to the MPEG-2 standard of ISO/IEC 13818-2 and a digital video broadcasting (DVB) standard stipulated by ETSI. Also, the image coding unit 108 may code the images according to the H.264/AVC standard stipulated by the Joint Video Team (JVT) of ISO/IEC JTC1/SC29/WG11 and ITU-T SG16 Q.6, or other various coding schemes.
  • The subtitle coding unit 110 receives the subtitle data from the controller 114, compresses and codes the received subtitle data, and outputs a subtitle stream. The coding process by the subtitle coding unit 110 and the coding process by the image coding unit 108 may be performed in a similar manner.
  • A packet generating unit (or other type of data packet processing means) packetizes the audio ES outputs, the video ES outputs, and the subtitle streams to generate a packetized elementary stream (PES) output.
  • A transport multiplexing unit 112 receives a voice PES, an image PES, and a subtitle PES, and also receives program specific information (PSI) and service information (SI) from the controller 114, and multiplexes the PES packets and the PSI/SI information to generate a transport stream (TS) output.
  • The controller 114, which includes a subtitle generating unit 116 and a PSI/SI generating unit 118, also controls the general operation of the overall system and generates subtitle data and PSI/SI data.
  • The subtitle generating unit 116 generates time-coded subtitle information and provides the same to the subtitle coding unit 110. In one modification, the subtitle coding unit 110 may be integrated with the subtitle generating unit 116. Meanwhile, the subtitle generating unit 116 also provides information regarding a subtitle service to the PSI/SI generating unit 118. In particular, according to an exemplary embodiment, the subtitle service information may include information indicating that the subtitles are provided in a three-dimensional manner.
  • The PSI/SI generating unit 118 operates to generate PSI/SI data. In particular, in the PSI/SI data, a program map table (PMT) includes a subtitling descriptor (or other type of means for providing descriptive information or indicators) for signaling (or describing) subtitle service information. In one exemplary embodiment, the subtitling descriptor is generated based on the ETSI EN 300 468 V1.9.1 standard, which is a technical standard for service information (SI) of a DVB system. A detailed syntax structure of the subtitling descriptor will be described later in this disclosure.
  • A channel adapter 120 performs error correction coding on the transport stream (TS) such that any errors that may be caused by noise (or other interference) via a transport channel can be detected and appropriately corrected by the receiver. Then, appropriate modulation according to a particular modulation scheme (e.g., an OFDM modulation scheme) that is adopted by the system is performed, and the modulated signals are transmitted. In an exemplary embodiment, a source coding and modulation process by the channel adapter 120 is performed based on ETSI EN 300 744 V1.6.1, which is a technical standard for a source coding and modulation scheme applicable to digital radio (wireless) channel / over-the-air (OTA) interface transmissions.
  • In the system of FIG. 1, the subtitle stream carries or transfers one or more subtitles (i.e., subtitle data), and each subtitle service (or other content service) includes text and/or graphic information required to properly display the subtitles. Each subtitle service includes one or more object pages (or other form of graphical representation) that are displayed to overlap on a broadcast image. Each (subtitle) object page may include one or more object regions (or areas), and each object region may have a rectangular or box-like shape having particular attributes. Graphic objects can be disposed within the object regions over the background image. Each graphic object may be comprised of a character (letter), a word, a sentence, or may be a logo, an icon, any other type of graphical element, or any combination thereof. According to an exemplary embodiment, when transmitting at least one pixel value (or other graphic element unit) with respect to each graphic object, at least one depth value (or other value that represents a specific 3D graphical/image characteristic) of each pixel may be provided, or a horizontal disparity value (or other value that represents a graphical difference, discrepancy, inconsistency, inequality, or the like) between 2D images for implementing a stereoscopic 3D image may be provided, such that each graphic object can be properly rendered and displayed in a three-dimensional manner in the receiver.
  • In the subtitling system based on a particular DVB subtitle technical standard according to at least one exemplary embodiment described herein, a page (or other graphical layout scheme) that provides an arrangement or combination of object regions for displaying each object is defined, based upon which subtitles are to be displayed. To this end, a page identifier (e.g., page_id or other type of identification means) is assigned to each page, and when certain definition information regarding object regions or objects is transferred to the receiver, a page identifier indicating the specific page associated with such corresponding information is included. Apart from transmission of information for defining or updating a page through a PES packet, in the system of FIG. 1, the language of a subtitle and a page identifier are signaled (or otherwise indicated) through a subtitling descriptor (or other form of parameter) within the PMT, and an accurate display point (with respect to a display location and/or display time) is designated through a presentation time stamp (PTS) or other type of time-related parameter provided within a PES packet header (or other portion of the packet).
  • Meanwhile, one or more portions of the subtitle data (or information) may be shared among two or more subtitle services within the same subtitle stream. Namely, within the subtitle stream, each data unit (i.e., a segment to be described hereinbelow) may include data applied only to a single particular subtitle service or may include data shared by two or more subtitle services. An example of data shared by two or more subtitle services may be a segment transmitting a logo (or other graphic element) commonly applied to subtitle services in various languages. Accordingly, a page identifier is assigned to each segment. The page identifier may include a composition page identifier (or other type of indication) for signaling or identifying a segment applied only to a single subtitle service and an ancillary page identifier (or other type of indication) for signaling or identifying a data segment shared among a plurality of subtitle services. A subtitling descriptor can send the page identifier values of segments required for decoding each subtitle service.
  • FIG. 2 shows an exemplary structure of the syntax of a subtitling descriptor. Although various types of fields or parameters having various bit lengths may be employed, some exemplary syntax characteristics will be explained as follows:
  • A “descriptor_tag” field is an 8-bit descriptor identifier. In case of a subtitling descriptor, it may have a value of ‘0x59’. A “descriptor_length” is an 8-bit field indicating a total number of bytes of a descriptor part following this field value. An “ISO_639_language_code” is a 24-bit field indicating a subtitle language as a three-character language code according to the ISO-639 technical standard. A “subtitling_type” is an 8-bit field for transmitting contents of a subtitle and information regarding an intended screen ratio. Here, it is clear that the above-mentioned examples are not meant to be limiting, as numerous other types of tags, fields, bit lengths, parameters, or the like can be used and implemented for the exemplary embodiments described herein.
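  • The descriptor fields just listed can be parsed with the short Python sketch below; the 8-byte loop layout follows the field sizes stated above and in FIG. 2, while the parse_subtitling_descriptor() function and the sample bytes are illustrative assumptions.

    # Minimal sketch: parsing a DVB subtitling descriptor from raw bytes.
    # Each 8-byte loop entry carries ISO_639_language_code (3 bytes),
    # subtitling_type (1 byte), composition_page_id and ancillary_page_id
    # (2 bytes each). The sample bytes are fabricated for illustration.

    def parse_subtitling_descriptor(data: bytes):
        assert data[0] == 0x59            # descriptor_tag for subtitling
        length = data[1]                  # descriptor_length in bytes
        services, pos = [], 2
        while pos < 2 + length:
            services.append({
                "language": data[pos:pos + 3].decode("ascii"),
                "subtitling_type": data[pos + 3],
                "composition_page_id": int.from_bytes(data[pos + 4:pos + 6], "big"),
                "ancillary_page_id": int.from_bytes(data[pos + 6:pos + 8], "big"),
            })
            pos += 8
        return services

    sample = bytes([0x59, 0x08]) + b"eng" + bytes([0xB2, 0x00, 0x01, 0x00, 0xFF])
    print(parse_subtitling_descriptor(sample))
    # [{'language': 'eng', 'subtitling_type': 178,
    #   'composition_page_id': 1, 'ancillary_page_id': 255}]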
  • FIG. 3 illustrates an example of field values allocated to a “subtitling_type” field. Although various other values may be employed, some exemplary field values will be explained as follows:
  • According to the ETSI EN 300 468 V1.9.1 technical standard with respect to a particular technical standard for service information of the DVB system, in the “subtitling_type” field, a field value of ‘0x00’ is reserved for later use, a field value of ‘0x01’ indicates a European Broadcasting Union (EBU) tele-text subtitle service, a field value of ‘0x02’ indicates a service associated with an EBU tele-text service, a field value of ‘0x03’ indicates vertical blanking interval data, and field values from ‘0x04’ to ‘0x0F’ are reserved for later use. A field value of ‘0x10’ indicates a (general) DVB subtitle without restriction to a screen ratio, a field value of ‘0x11’ indicates a (general) DVB subtitle to be displayed on a monitor having a screen ratio of 4:3, a field value of ‘0x12’ indicates a (general) DVB subtitle to be displayed on a monitor having a screen ratio of 16:9, a field value of ‘0x13’ indicates a (general) DVB subtitle to be displayed on a monitor having a screen ratio of 2.21:1, and a field value of ‘0x14’ indicates a (general) DVB subtitle to be displayed on an HD (High Definition) monitor. Field values from ‘0x15’ to ‘0x1F’ are reserved for later use. A field value of ‘0x20’ indicates a DVB subtitle (for the hearing impaired) without restriction to a screen ratio, a field value of ‘0x21’ indicates a DVB subtitle (for the hearing impaired) to be displayed on a monitor having a screen ratio of 4:3, a field value of ‘0x22’ indicates a DVB subtitle (for the hearing impaired) to be displayed on a monitor having a screen ratio of 16:9, a field value of ‘0x23’ indicates a DVB subtitle (for the hearing impaired) to be displayed on a monitor having a screen ratio of 2.21:1, and a field value of ‘0x24’ indicates a DVB subtitle (for the hearing impaired) to be displayed on a high definition (HD) monitor. Field values from ‘0x25’ to ‘0x2F’ are reserved for later use. A field value of ‘0x30’ indicates an open language translation service for the hearing impaired, and a field value of ‘0x31’ indicates a closed language translation service for the hearing impaired. Field values from ‘0x32’ to ‘0xAF’ are reserved for later use, field values from ‘0xB0’ to ‘0xFE’ are allowed for the user to define and use, and a field value of ‘0xFF’ is reserved for later use.
  • Here, it is clear that the above-mentioned field values are not meant to be limiting, as numerous other types of tags, fields, bit lengths, parameters, or the like can be used and implemented for the exemplary embodiments described herein.
  • In one embodiment, some field values, for example, ‘0xB0’ to ‘0xB4’, are allowed for the user to define and use, and to indicate that subtitle segments (to be described hereafter) include 3D subtitle information. In particular, in the “subtitling_type” field, a field value of ‘0xB0’ indicates a 3D subtitle without restriction to a screen ratio, a field value of ‘0xB1’ indicates a 3D subtitle service to be displayed on the monitor having a screen ratio of 4:3, a field value of ‘0xB2’ indicates a 3D DVB subtitle to be displayed on the monitor having a screen ratio of 16:9, a field value of ‘0xB3’ indicates a 3D subtitle service to be displayed on the monitor having a screen ratio of 2.21:1, and a field value of ‘0xB4’ indicates a 3D subtitle service to be displayed on the HD monitor. Clearly, other field values may also be used additionally and/or alternatively to those described above.
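  • These user-defined type values lend themselves to a simple lookup, as in the Python sketch below; the value-to-meaning mapping comes from the text above, while the function wrapper is illustrative.

    # Minimal sketch: interpreting the user-defined "subtitling_type"
    # values (0xB0-0xB4) that signal 3D subtitle services.

    SUBTITLING_TYPE_3D = {
        0xB0: "3D subtitle, no aspect-ratio restriction",
        0xB1: "3D subtitle for 4:3 monitors",
        0xB2: "3D subtitle for 16:9 monitors",
        0xB3: "3D subtitle for 2.21:1 monitors",
        0xB4: "3D subtitle for HD monitors",
    }

    def is_3d_subtitle_service(subtitling_type):
        """True when the descriptor announces a 3D subtitle service."""
        return subtitling_type in SUBTITLING_TYPE_3D

    print(is_3d_subtitle_service(0xB2))  # True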
  • In FIG. 2, “composition_page_id” is a 16-bit field for discriminating a page, namely, a composition page, including data applied only to a single subtitle service. This field may be used for segments, namely, a 3D page composition segment (3D_PCS) and a 3D region composition segment (3D_RCS), which define a data structure of a subtitle screen. Meanwhile, “ancillary_page_id” is a 16-bit field used for discriminating a page, namely, an ancillary page, including data shared by two or more services. This field is preferably not used for a composition segment, and selectively used only for a color look-up table definition segment (CLUTDS), a 3D object data segment (3D_ODS), a depth value look-up table definition segment (DVLUTDS), or the like.
  • In this manner, the subtitling descriptor (or other type of means for providing descriptive information or indicators) of the exemplary embodiments is adapted (or configured) to provide indications (or signals) with respect to at least a subtitle language(s), a subtitle type(s), a composition_page_id value required for decoding a service, and an ancillary_page_id value with respect to each service included in a stream.
  • As mentioned above, a basic building block or unit of a subtitle stream is a subtitle segment. Subtitle segments are included in PES packets, and the PES packets are included in transmission packets of a TS and transferred to the receiver. A display time point of a subtitle (i.e., the time when the subtitle should be displayed) may be determined by a presentation time stamp (PTS) or similar time information within a header of the PES packet. The PES packet includes a packet header and packet data, and subtitle data is coded in the form of the syntax of PES_data_field() within the packet data (or packet header). In FIG. 4, in case of a DVB subtitle stream, a “data_identifier” field is coded into a value of ‘0x20’. A “subtitle_stream_id” field, which is identification information of a subtitle stream within the PES packet, has a value of ‘0x00’ in case of the DVB subtitle stream. In a while-loop, subtitle data is formatted to be arranged according to the syntax of subtitling_segment() starting from a bit stream of ‘0000 1111’. An “end_of_PES_data_field_marker” field is a data end identifier. A complete set of segments of subtitle services associated with the same PTS is called a ‘display set’, and the “end_of_PES_data_field_marker” field indicates that a final segment of the display set is finished. Here, it can be clearly understood that the particular fields and values therein may be changed accordingly.
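  • The PES_data_field() layout just described can be walked with a small parser sketch like the one below; the segment header layout (sync byte, segment type, page id, segment length) follows the DVB subtitling syntax, while the function name and the sample bytes are illustrative assumptions.

    # Minimal sketch: walking the PES_data_field() of a DVB subtitle PES
    # packet. Each subtitling_segment() starts with the sync byte 0x0F
    # ('0000 1111') and carries segment_type (1 byte), page_id (2 bytes)
    # and segment_length (2 bytes) before its payload.

    def parse_pes_data_field(data: bytes):
        assert data[0] == 0x20      # data_identifier for DVB subtitles
        assert data[1] == 0x00      # subtitle_stream_id
        pos, segments = 2, []
        while pos < len(data) and data[pos] == 0x0F:   # sync byte
            seg_type = data[pos + 1]
            page_id = int.from_bytes(data[pos + 2:pos + 4], "big")
            length = int.from_bytes(data[pos + 4:pos + 6], "big")
            segments.append((seg_type, page_id, data[pos + 6:pos + 6 + length]))
            pos += 6 + length
        return segments  # data[pos] is then the end_of_PES_data_field_marker

    sample = bytes([0x20, 0x00,              # PES_data_field header
                    0x0F, 0x44, 0x00, 0x01,  # sync byte, type, page_id
                    0x00, 0x02, 0xAA, 0xBB,  # length, payload
                    0xFF])                   # end marker
    print(parse_pes_data_field(sample))  # [(68, 1, b'\xaa\xbb')]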
  • FIG. 5 illustrates some exemplary types of subtitle segments used according to an exemplary embodiment. A 3D display definition segment (3D_DDS), a 3D page composition segment (3D_PCS), and a 3D region composition segment (3D_RCS) are segments for transferring a 3D region configuration information defining a display region of a subtitle. A 3D object data segment (3D_ODS) is a segment defining subtitle data with respect to each object and its depth-related information. The CLUTDS and the DVLUTDS are used for transmitting data to be referred to when coding data with respect to an object is interpreted, which serves to reduce the bandwidth required for transferring the subtitle data and depth-related information. An end of display set segment may be used to explicitly indicate that one display set (of information) has been finished.
  • The subtitle service may be fabricated with a size different from an overall screen size of the receiver, and accordingly, in transmitting a subtitle stream, a display size fabricated in consideration of the subtitle service can be explicitly designated. The 3D display definition segment (3D_DDS) can be selectively used to define a maximum range of an image region for which a subtitle can be rendered in the receiver. In particular, according to the present exemplary embodiment, the subtitles can be provided in a three-dimensional manner, and thus the rendering available range can be defined by designating a maximum value and a minimum value in a three-dimensional manner, namely, in three axial directions of a 3D rectangular coordinates system.
  • As mentioned above, subtitles can be fabricated in units of (object) pages as an arrangement or combination of object regions for displaying graphical objects, which are then transmitted and displayed at the receiver. The 3D page composition segment (3D_PCS) defines a list of object regions constituting a page and a position of each object region in a 3D space having a particular reference point. Here, the respective object regions are disposed such that horizontal scan lines do not overlap. Also, the page composition segment (3D_PCS) includes state information of a page, namely, information regarding whether data transferred through a corresponding segment is to update a portion of the page (“normal case”), whether every element constituting the page is to be newly transmitted in order to correct an existing page (“acquisition point”), or whether an existing page is discarded and a completely new page is defined (“mode change”). Here, the “mode change” state is rarely used, for example, only at a start point of a program or only when there is a significant difference in the form of subtitles. Meanwhile, the page composition segment (3D_PCS) may further include time-out information regarding the particular page, namely, information regarding a valid term of the particular instance of the page.
  • The 3D region composition segment (3D_RCS) defines the size of an individual object region in the 3D space, its attributes such as CLUT (Color Look-Up Table) designation information used for expressing color, and a list of objects to be displayed within the object region. In the present exemplary embodiment, each object region may have a solid form, not a planar form, and accordingly, it may have a virtual box shape within a 3D display space provided by the 3D television. In consideration of this, the 3D region composition segment according to the present exemplary embodiment includes attribute definition fields regarding the side surfaces of such a box as well as the plane facing in the direction of the user.
  • The 3D object data segment (3D_ODS) is used to describe certain coding data for each object. According to the present exemplary embodiment, the object data segment includes solid configuration information for each object, which allows the receiver to properly render a 3D object based on the solid configuration information.
  • A color look-up table (CLUT) for defining pixel values of particular pixels (namely, values related to color and transparency) as a mapping relationship between pseudo-color (CLUT_entry_id) values and actual colors (Y, Cr, Cb, and T) is associated with each of the object regions, such that the receiver can determine an actual display color of pseudo-color values included in a subtitle stream with reference to the CLUT. Also, the CLUT definition segment (CLUTDS) is used to transfer information for configuring the CLUT to the receiver. A particular CLUT is applied to each object region, and a new definition of the CLUT may be transferred to update the mapping relationship between the pseudo-color values and the actual colors. In one embodiment, the CLUTDS follows the ETSI EN 300 743 V1.3.1 technical standard with respect to a particular type of DVB subtitling system, and thus a description of the particular details thereof will be omitted merely for the sake of brevity, but would be understood by those skilled in the art.
  • In the present exemplary embodiment, in a similar manner to the method in which the color with respect to pixels is displayed by using the pseudo-color value of the CLUT, solid coordinates information of a 3D subtitle object can be expressed in terms of pseudo-depth values. Namely, a reciprocal relationship between pseudo-depth value(s) and physical depth coordinates information is stored in the depth value look-up table (DVLUT) in the receiver. In transmitting the 3D subtitle information, the depth of pixels is represented as one or more pseudo-depth values, and the receiver converts such pseudo-depth values into physical depth coordinates with reference to the DVLUT, thereby reducing the required transmission bandwidth. The DVLUT definition segment (DVLUTDS) is used for transferring information for configuring the DVLUT to the receiver. Here, the DVLUT may be previously determined when the receiver is fabricated, and the DVLUTDS may or may not be transferred separately. Also, in this case, of course, the DVLUT may be updated through the DVLUTDS.
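  • As a toy illustration of this analogy (the table entries below are made up, not values from any actual stream), both pseudo-color and pseudo-depth act as small indices that the receiver expands via locally held tables:

```python
# Toy tables with made-up entries: the stream carries small entry ids, and
# the receiver expands them using locally held look-up tables.
CLUT = {0: (16, 128, 128, 255), 1: (235, 128, 128, 0)}  # id -> (Y, Cr, Cb, T)
DVLUT = {0: -0.05, 255: 0.10}  # pseudo-depth -> z as a fraction of screen width

pixel = {"color_id": 1, "depth_id": 255}   # values as decoded from a stream
y, cr, cb, t = CLUT[pixel["color_id"]]     # actual color and transparency
z = DVLUT[pixel["depth_id"]]               # physical depth coordinate
```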
  • Meanwhile, the subtitling segments may include a common part in the structure of the syntax. Such common syntax structure will now be described with reference to FIG. 6 prior to explaining each segment.
  • FIG. 6 illustrates the structure of a common syntax of certain subtitling segments.
  • The “sync_byte” is an 8-bit synchronization field coded with a value of ‘0000 1111’. When a decoder parses a segment based on a “segment_length” field within a PES packet, it may determine whether or not a transmission packet has a missing part by verifying its synchronization by using the “sync_byte” field.
  • The “segment_type” field indicates a type of data within the segment_data_field(). Regarding a particular subtitling procedure, if the “segment_type” field has a value of ‘0x10’, it indicates that the segment carries page composition segment data (3D_PCS). Also, field values of ‘0x11’, ‘0x12’, ‘0x13’, and ‘0x14’ indicate that the segment is a region composition segment (3D_RCS), a CLUT definition segment (CLUTDS), an object data segment (3D_ODS), and a display definition segment (3D_DDS), respectively. A field value of ‘0x80’ can indicate an end of display set segment. The DVLUT definition segment (DVLUTDS) may be indicated as, for example, a field value of ‘0x40’, which is one type of value that is reserved for later use in the ETSI EN 300 743 V1.3.1 technical standard.
  • A “page_id” value discriminates the subtitle service of data included in a subtitling segment through a comparison with a value included in the subtitling descriptor. Here, segments having a page_id signaled as a composition page identifier in the subtitling descriptor are used to transfer subtitling data that is particularly applied to a single subtitle service. In comparison, segments having a page identifier (e.g., page_id) signaled as an ancillary page identifier in the subtitling descriptor can be used to transfer subtitling data shared by a plurality of subtitle services.
  • A “segment_length” field indicates the number of bytes included in the segment_data_field(), which is disposed (or placed) behind the “segment_length” field.
  • The segment_data_field() is the payload of the corresponding segment. The syntax of the payload varies according to segment type, the details of which will be described in turn hereinafter.
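  • A minimal Python sketch of parsing this common header follows; the bit layout assumed here (8-bit sync_byte and segment_type followed by 16-bit page_id and segment_length) follows ETSI EN 300 743, and the segment_type constants mirror the values listed above:

```python
from typing import NamedTuple

# segment_type values listed above (0x40 is taken from a reserved range)
SEGMENT_TYPES = {0x10: "3D_PCS", 0x11: "3D_RCS", 0x12: "CLUTDS",
                 0x13: "3D_ODS", 0x14: "3D_DDS", 0x40: "DVLUTDS",
                 0x80: "end_of_display_set"}

class SegmentHeader(NamedTuple):
    segment_type: int
    page_id: int
    segment_length: int

def parse_segment_header(seg: bytes) -> SegmentHeader:
    """Parse the common header of FIG. 6 from one raw subtitling segment."""
    if seg[0] != 0x0F:  # sync_byte: verify synchronization
        raise ValueError("bad sync_byte: possible missing part in the packet")
    header = SegmentHeader(seg[1],
                           int.from_bytes(seg[2:4], "big"),
                           int.from_bytes(seg[4:6], "big"))
    if len(seg) < 6 + header.segment_length:
        raise ValueError("truncated segment_data_field()")
    return header
```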
  • FIG. 7 illustrates the structure of the syntax of a 3D display definition segment (3D_DDS).
  • A “dds_version_number” field indicates the version of the display definition segment. When any of the contents of the display definition segment change, the version number may be increased in a modulo-16 manner.
  • When a “display_window_flag” field is set as 1, such indicates that a subtitle display set associated with the DDS should be rendered within a maximum rendering available range (referred to as a ‘window region’, hereinafter) set in the display. The size and position of the window region are defined by the following parameters, namely, by the “display_window_horizontal_position_maximum”, “display_window_horizontal_position_minimum”, “display_window_vertical_position_minimum”, “display_window_vertical_position_maximum”, “display_window_z-position_minimum”, and “display_window_z-position_maximum” fields. Meanwhile, when the “display_window_flag” field is set as 0, such indicates that a subtitle display set associated with the DDS must (or should) be directly rendered in a front and rear space of a display plane defined by a “display_width” field and a “display_height” field.
  • The “display_width” field indicates a maximum horizontal directional width of the display assumed by a subtitle stream associated with the segment. Meanwhile, the “display_height” field indicates a maximum vertical directional height of the display assumed by the subtitle stream associated with the segment.
  • The “display_window_horizontal_position_minimum” field can indicate the leftmost pixel of the subtitle window region based on the leftmost pixel of the display. The “display_window_horizontal_position_maximum” field can indicate the rightmost pixel of the subtitle window region based on the leftmost pixel of the display.
  • The “display_window_vertical_position_minimum” field can indicate the uppermost line of the subtitle window region based on the uppermost scan line of the display. The “display_window_vertical_position_maximum” field can indicate the lowermost line of the subtitle window region based on the uppermost scan line of the display.
  • According to an exemplary embodiment, because the maximum rendering available range (namely, the subtitle window region) is defined in a three-dimensional manner, the display definition segment (3D_DDS) can additionally include two particular fields, namely, the “display_window_z-position_minimum” and the “display_window_z-position_maximum” fields, in addition to the four two-dimensional fields described in the ETSI EN 300 743 V1.3.1 technical standard.
  • The “display_window_z-position_minimum” field indicates a minimum coordinate value on the z-axis of the window region. Namely, this field value indicates the position farthest from a viewer in the range of z-axis values with respect to a subtitle expressed in a 3D manner. The unit of this field value may be the same as the single pixel size value used in the two-dimensional fields.
  • The “display_window_z-position_maximum” field indicates a maximum coordinates value on the z-axis of the window region. Namely, this field value indicates a value of a position nearest to the viewer in the range of the z-axis value with respect to the subtitle expressed in a 3D manner.
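  • As a sketch of how a receiver might hold these bounds, the following Python fragment models the 3D window region; the flat integer representation and the containment test are illustrative assumptions, not part of the segment syntax (units follow the two-dimensional pixel fields):

```python
from dataclasses import dataclass

@dataclass
class WindowRegion3D:
    """3D window region of a 3D_DDS when display_window_flag == 1."""
    h_min: int   # display_window_horizontal_position_minimum
    h_max: int   # display_window_horizontal_position_maximum
    v_min: int   # display_window_vertical_position_minimum
    v_max: int   # display_window_vertical_position_maximum
    z_min: int   # display_window_z-position_minimum (farthest from viewer)
    z_max: int   # display_window_z-position_maximum (nearest to viewer)

    def contains(self, x: int, y: int, z: int) -> bool:
        """True if a subtitle point at (x, y, z) lies inside the window."""
        return (self.h_min <= x <= self.h_max
                and self.v_min <= y <= self.v_max
                and self.z_min <= z <= self.z_max)
```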
  • FIG. 8 illustrates the structure of the syntax of a 3D page composition segment (3D_PCS).
  • A “page_time_out” field indicates, in units of seconds, the period after which a page instance is erased from the screen because it is no longer valid.
  • A “page_version_number” field indicates the version of the page composition segment. When any of the contents of the page composition segment change, the version number increases in a modulo-16 manner.
  • A “page_state” field signals a state of a subtitling page instance described in a page composition segment. The values of the “page_state” field are defined as shown in Table 1 shown below (See ETSI EN 300 743 V1.3.1 technical standard regarding the DVB type subtitling system).
  • Table 1
  • When the “page_state” field value indicates the “mode change” or the “acquisition point”, a display set must (or should) include a region composition segment (3D_RCS) with respect to each of object regions constituting a page associated with the page composition segment (3D_PCS).
  • Within the number of accumulated bytes processed by a decoder, namely, within the while-loop repeatedly performed while the processed_length is smaller than the “segment_length” value, information regarding each object region is arranged in the order of increasing “region_vertical_address” value, and information regarding a single object region is expressed at each repetition.
  • Within the while-loop, a “region_id” field is a unique identifier with respect to a single object region. Each object region is displayed within a page instance defined in the page composition segment. A “region_horizontal_address” field indicates a horizontal address of the uppermost left pixel of an object region, and a “region_vertical_address” field indicates a vertical address of the uppermost line of the object region.
  • In one exemplary embodiment, because each object region may have a format which is three-dimensionally defined, the object region location information described in the page composition segment (3D_PCS) additionally includes a “region_z_address” field. The “region_z_address” field indicates a z-axis coordinate value with respect to the rear plane of the object region. In this case, if the object region does not have a planar form or a uniform face, the “region_z_address” field indicates a minimum value of the z coordinate.
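  • For illustration, a minimal Python model of one entry of this while-loop (a field subset only; names mirror the fields above, and the defensive sort simply reflects the transmission order described):

```python
from dataclasses import dataclass

@dataclass
class RegionPosition:
    """One iteration of the 3D_PCS while-loop (illustrative subset)."""
    region_id: int
    region_horizontal_address: int  # uppermost-left pixel of the region
    region_vertical_address: int    # uppermost line of the region
    region_z_address: int           # minimum z of the region's rear plane

def order_regions(regions):
    """Entries arrive in increasing region_vertical_address order; sorting
    defensively lets a receiver verify (or restore) that order."""
    return sorted(regions, key=lambda r: r.region_vertical_address)
```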
  • FIG. 9 illustrates the structure of the syntax of a 3D region composition segment (3D_RCS).
  • A “region_id” field is an 8-bit unique identifier of the object region whose information is included in the RCS.
  • A “region_version_number” field indicates the version of the object region. When a “region_fill_flag” is set to 1, when the color look-up table (CLUT) of the object region is changed, or when the object region has an object list whose length is not 0, the version number increases in the modulo-16 manner.
  • A “region_fill_flag” field indicates that a front face of the object region should be filled by a background color defined by a “region_8-bit_pixel-code” field.
  • A “region_width” field indicates a horizontal directional length of the object region in pixels, and a “region_height” field indicates a vertical directional length of the object region in pixels.
  • A “region_z-length” field, added as one of the 3D attributes of the object region, indicates the length of the 3D object region on the z-axis. Accordingly, the size of the 3D object region space is determined by “region_width”, “region_height”, and “region_z-length”.
  • FIG. 10 illustrates the dimension of a 3D object region and reference point coordinates of an object region space defined by the page composition segment (3D_PCS) in implementing a 3D subtitling according to an exemplary embodiment.
  • A “region_level_of_compatibility” field indicates a minimum CLUT type required for the decoder to decode the object region. If this field has a value of ‘0x01’, it indicates that 2-bit/input CLUT is required; if this field has a value of ‘0x02’, it indicates that a 4-bit/input CLUT is required; and if this field has a value of ‘0x03’, it indicates that an 8-bit/input CLUT is required.
  • A “region_depth” field indicates a pixel color depth intended for the object region. If this field has a value of ‘0x01’, it indicates that the pixel color depth is 2 bits; if this field has a value of ‘0x02’, it indicates that the pixel color depth is 4 bits; and if this field has a value of ‘0x03’, it indicates that the pixel color depth is 8 bits.
  • A “CLUT_id” field discriminates the CLUT applied to the particular object region.
  • A “region_8-bit_pixel-code” field indicates an input value (or entry), namely, a pseudo-color value, in an 8-bit CLUT to be applied as a background color (or other graphical display characteristic) with respect to the object region when the “region_fill_flag” field is set. When 2 bits or 4 bits are applied as the pixel depth value, the value of the “region_8-bit_pixel-code” field is not defined.
  • A “region_4-bit_pixel-code” field indicates an input value (or entry) in a 4-bit CLUT to be applied as a background color (or other graphical display characteristic) with respect to the object region when the “region_fill_flag” field is set, in case where the color depth of the object region is 4 bits or in case where the color depth of the object region is 8 bits and the “region_level_of_compatibility” field indicates that the 4-bit/input CLUT meets the minimum requirements. In other cases, the value of this field is not defined.
  • A “region_2-bit_pixel-code” field indicates an input value (or entry) in a 2-bit CLUT to be applied as a background color (or other graphical display characteristic) with respect to the object region when the “region_fill_flag” field is set, in case where the color depth of the object region is 2 bits or in case where the color depth of the object region is 4 bits or 8 bits and the “region_level_of_compatibility” field indicates that the 2-bit/input CLUT meets the minimum requirements. In other cases, the value of this field is not defined.
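  • The selection among these three pixel-code fields can be summarized by the following Python sketch; the codes dict is a hypothetical container for the three fields, and the coded values follow the descriptions above:

```python
def fill_pixel_code(region_depth: int, compat: int, codes: dict) -> int:
    """Pick which region_*_pixel-code supplies the background fill.

    region_depth and compat carry the coded values described above
    (0x01 -> 2-bit, 0x02 -> 4-bit, 0x03 -> 8-bit); `codes` is a hypothetical
    dict mapping CLUT width (2, 4, 8) to the corresponding pixel-code field.
    """
    if region_depth == 0x01 or compat == 0x01:
        return codes[2]   # region_2-bit_pixel-code applies
    if region_depth == 0x02 or compat == 0x02:
        return codes[4]   # region_4-bit_pixel-code applies
    return codes[8]       # 8-bit depth with 8-bit CLUT compatibility
```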
  • A “DVLUT_id” field discriminates a DVLUT applied to the object region.
  • Within the number of accumulated bytes processed by the decoder, namely, within the while-loop repeatedly performed while the processed_length is smaller than the “segment_length” value, information regarding each object to be displayed in the object region is arranged in an appropriate manner.
  • Within the while-loop, an “object_id” field is a unique identifier of an object displayed within the object region. An “object_type” field indicates the type of the object: when the “object_type” field has a value of ‘0x00’, it indicates a bit map object; if the “object_type” field has a value of ‘0x01’, it indicates a character object; and if the “object_type” field has a value of ‘0x02’, it indicates a character string object.
  • An “object_provider_flag” is a 2-bit flag indicating how the object is provided. If this field has a value of ‘0x00’, it indicates that the object is provided as a subtitling stream; and if this field has a value of ‘0x01’, it indicates that the object is provided in a state in which it is stored in a ROM in the decoder of the receiver.
  • An “object_horizontal_position” field indicates the horizontal position of the leftmost pixel at the uppermost end of an object, in units of pixels, and an “object_vertical_position” field indicates the vertical position of the leftmost pixel at the uppermost end of the object, in units of pixels.
  • In one embodiment, each object can have a 3D shape, and thus the object position information described in the region composition segment (3D_RCS) additionally includes an “object_z_position” field. This field indicates the coordinate on the z-axis of the rear surface of a subtitle object. When an object has a non-uniform surface, it indicates a minimum value on the z-axis of the object space. This field must have a value ranging from 0 to region_z-length - 1; if a value outside this range is received, it is erroneous information, so the receiver is requested to perform error processing by itself.
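  • One possible receiver-side check, sketched in Python; since the error processing is left to the receiver, the clamping shown here is merely one example policy:

```python
def checked_object_z(object_z_position: int, region_z_length: int) -> int:
    """object_z_position must lie in [0, region_z-length - 1]; out-of-range
    values are erroneous, and error processing is receiver-defined
    (clamping below is just one illustrative policy)."""
    if 0 <= object_z_position < region_z_length:
        return object_z_position
    return max(0, min(object_z_position, region_z_length - 1))
```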
  • Meanwhile, if the “object_type” field value is ‘0x01’ indicating that the corresponding object is a character object or when the field value is ‘0x02’ indicating that the corresponding object is a character string object, foreground or background color information (or other graphical characteristic) regarding the corresponding object is provided. A “foreground_pixel_code” field indicates an input value, namely, a pseudo-color value, in the 8-bit CLUT selected as a foreground color of a character or a character string, and a “background_pixel_code” field indicates a pseudo-color value selected as a background color of the character or the character string, namely, as the background color of the object region.
  • According to one embodiment, information regarding the types of the top surface and the side surfaces is additionally provided, in consideration of how the coordinate axes are set. A “top_surface_type” field indicates a type of the top surface of a 3D character and has a value corresponding to a ‘uniform plane’, ‘rounded’, or other graphical characteristic. A “side_surface_type” field indicates a type of the sides in contact with the top surface of the 3D character, having a value corresponding to ‘shaded’, ‘tilted’, ‘non-tilted’, or the like.
  • FIGs. 11 and 12 illustrate exemplary structures of the syntax of a 3D object data segment (3D_ODS).
  • An “object_id” field is an 8-bit unique identifier with respect to an object to which the segment data is related.
  • An “object_version_number” field indicates the version of the segment data. When any content within the segment changes, the version number can be increased in a modulo-16 manner.
  • An “object_coding_method” field indicates a method in which an object is coded. If this field has a value of ‘0x00’, it indicates that a 2D object has been coded by pixels, and if this field has a value of ‘0x01’, it indicates that a 2D object has been coded into a character string. In one embodiment, when this field has a value of ‘0x02’, it indicates that a 3D object has been coded by pixels, and when this field has a value of ‘0x03’, it indicates that a character string to be displayed as 3D has been coded.
  • When a “non_modifying_color_flag” field is set as 1, it indicates that the input value 1 of the CLUT is a color that cannot be corrected (i.e., a correction-unavailable color). When the correction-unavailable color is assigned to an object pixel, the color of the background or object on which the corresponding pixel is positioned is not modified. This scheme can be used to generate a ‘transparent hole’ in the object.
  • Meanwhile, it should be noted that pixel coding data might also be included. When an “object_coding_method” field has a field value of ‘0x00’ indicating a 2D object coded by pixels, pixel coding data can be inserted. If the “object_coding_method” field has a value of ‘0x01’ indicating a 2D object coded by character string, a character string coding data can be inserted. When the “object_coding_method” field has a value of ‘0x02’ or ‘0x03’ indicating a 3D coded object, 3D pixel coding data can be inserted.
  • In more detail, as for pixel coding data with respect to a 2D object, a “top_field_data_block_length” field indicates the number of bytes included in pixel data sub-blocks with respect to an odd-numbered scan line screen image (top field) among two interlace-scanned screen images. A “bottom_field_data_block_length” field indicates the number of bytes included in pixel data sub-blocks with respect to an even-numbered scan line screen image (bottom field) among the two interlace-scanned screen images. Subsequently, the bytes of the pixel data sub-blocks pixel-data_sub-block() with respect to the top field, corresponding to the “top_field_data_block_length” field value, are sequentially inserted. And, the bytes of the pixel data sub-blocks pixel-data_sub-block() with respect to the bottom field, corresponding to the “bottom_field_data_block_length” field value, are sequentially inserted. After the pixel data sub-blocks are inserted, if a word alignment has not been made, namely, if the total number of bytes is not a multiple of the number of bytes needed to constitute a word, the word length can be adjusted by filling eight (or other appropriate number of) padding bits, as sketched below.
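  • A small Python sketch of this alignment rule; the 16-bit word size is an assumption for illustration only:

```python
WORD_BYTES = 2  # assumed word size for illustration

def padding_bits(total_bytes: int, word_bytes: int = WORD_BYTES) -> int:
    """Bits of padding needed so the object data ends on a word boundary;
    with 2-byte words this yields the eight padding bits mentioned above."""
    remainder = total_bytes % word_bytes
    return 0 if remainder == 0 else 8 * (word_bytes - remainder)
```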
  • In the coding data with respect to a 2D character string object, a “number_of_codes” field indicates the number of code bytes to be processed by the decoder. Subsequent to this field, character codes corresponding to the field value are arranged.
  • In the 3D pixel coding data, a “top_surface_color_block_length” field and a “top_surface_depth_block_length” field indicate the number of bytes of data expressing the front surface in the 3D object data. Here, the front surface refers to a portion exposed to the surface, namely, the surface seen by the user. In particular, the “top_surface_color_block_length” field indicates the number of bytes of pixel value data, and the “top_surface_depth_block_length” field indicates the number of bytes of depth information regarding the front surface.
  • A “hidden_surface_color_block_length” field and a “hidden_surface_depth_block_length” field indicate the number of bytes of code data used for expressing a hidden surface of the 3D object data. Here, the hidden surface refers to information regarding an area of the 3D object occluded (or otherwise not immediately viewable) by the front surface, namely, an area that can be filtered to be seen through the front surface when the front surface is set to be at least partially transparent or translucent. The code data expressing such a hidden surface includes pixel value data and depth information, like the data of the front surface. In particular, a “hidden_surface_color_block_length” field indicates the number of bytes of pixel value data with respect to the hidden surface, and a “hidden_surface_depth_block_length” field indicates the number of bytes of depth information regarding the hidden surface.
  • Subsequently, the bytes of the pixel data sub-block pixel-data_sub-block() regarding the front surface are sequentially inserted, corresponding to the “top_surface_color_block_length” field value. In the pixel data sub-block pixel-data_sub-block(), a color value with respect to each pixel constituting the front surface of the 3D object is expressed as a pseudo-color value, namely, as an input value of the CLUT. Accordingly, the receiver extracts the pixel values of the respective pixels in the form of pseudo-color values from the pixel data sub-block pixel-data_sub-block(), and obtains an actual color for display and an applied transparency value by performing conversion using the CLUT. In one embodiment, the syntax structure of the pixel-data_sub-block() is the same as that described in the ETSI EN 300 743 V1.3.1 technical standard regarding the DVB type subtitling system; thus, that standard is referred to, and its detailed description will be omitted merely for the sake of brevity, but would be clearly understood by those skilled in the art.
  • Next, the bytes of a 3D coordinates data sub-block z_data_sub-block_3D() regarding the front surface of the 3D object are sequentially inserted. Here, the 3D coordinates data sub-block z_data_sub-block_3D() is comprised of a byte string obtained by coding the 3D object, including coding data of depth coordinates with respect to each pixel of the front surface of the 3D object. The depth coordinate regarding each pixel refers to the position in the z-axis direction of the corresponding pixel, and the receiver can use it to perform 3D rendering on the corresponding portion.
  • Finally, the bytes of the pixel data sub-block pixel-data_sub-block() regarding the hidden surface are sequentially inserted, corresponding to the “hidden_surface_color_block_length” field value. Then, the bytes of the 3D coordinates data sub-block z_data_sub-block_3D() regarding the hidden surface are sequentially inserted, corresponding to the “hidden_surface_depth_block_length” field value.
  • A method of describing the 3D coordinates data sub-block z_data_sub-block_3D() will now be described. The 3D coordinates data sub-block z_data_sub-block_3D() has a syntax similar to that of the pixel data sub-block pixel-data_sub-block(). In this case, however, as mentioned above, in one exemplary embodiment, in indicating depth coordinates of a 3D object, a method similar to the method in which a pixel value is expressed by using a CLUT input value may be employed. Namely, a depth value look-up table (DVLUT) defining a reciprocal relationship between a pseudo-depth value and physical depth coordinates is previously transferred to the receiver, and depth information is displayed by an input value of the DVLUT, namely, by a pseudo-depth value, in the 3D coordinates data sub-block z_data_sub-block_3D(), thereby reducing the transmission bandwidth. Here, because each 3D display device has a different z-axis range that can be expressed, and there is a possibility that the physically rendered 3D depth with respect to the same depth value can be differently interpreted, the depth value transferred through the DVLUT may be expressed as a relative value based on the width of the screen. In one embodiment, the DVLUT may be defined by a DVLUT definition segment (DVLUTDS) and updated. However, in a modification, the DVLUT may be previously stored in the receiver.
  • After describing all the object data, if a word alignment has not been made, namely, if the total number of bytes is not a multiple of the number of bytes constituting the word, the word length can be adjusted by filling eight (or other appropriate number of) padding bits.
  • FIG. 13 illustrates an exemplary structure of the syntax of a depth value look-up table definition segment (DVLUTDS) for defining a DVLUT. In one embodiment, the DVLUT is a table defining a reciprocal relationship between pseudo-depth values from 0 to 255, expressed as 8-bit non-coded integers, and actual depth information.
  • FIGs. 14 and 15 illustrate examples of the structure of the DVLUT. As illustrated in FIG. 14, the DVLUT may store the reciprocal relationship, with respect to pixels, between input values (DVLUT_entry_id), namely, pseudo-depth values within the range from 0 to 255, and physical depth values. Alternatively, as shown in FIG. 15, the DVLUT may store the reciprocal relationship, with respect to pixels, between the input values (DVLUT_entry_id), namely, pseudo-depth values within the range from 0 to 255, and horizontal disparity (or parallax) values. The DVLUT may be separately defined for each object. The DVLUT definition segment (DVLUTDS) of FIG. 13 is used to define or update the DVLUT.
  • Referring back to FIG. 13, the “DVLUT_id” field indicates a unique identifier with respect to the DVLUT. The “DVLUT_version_number” field indicates the version of the DVLUTDS. When even one of the contents within this segment changes, the version number increases in a modulo-16 manner.
  • An “output_type” field indicates a type of an output value of the DVLUT defined by the DVLUTDS. In detail, if the “output_type” field value is 0, it indicates that an output value of the DVLUT defined by the DVLUTDS is a physical depth value. Meanwhile, if the “output_type” field value is 1, an output of the DVLUT defined by the DVLUTDS is a horizontal disparity (or parallax) value with respect to pixels.
  • Within the number of accumulated bytes processed by the decoder, namely, within the while-loop repeatedly performed while the processed_length is smaller than the “segment_length” value, information regarding each DVLUT mapping can be arranged appropriately.
  • Among the data regarding the DVLUT mapping information, a “DVLUT_entry_id” field indicates an input value of the DVLUT. A first input value of the DVLUT has a value of ‘0’. When the “output_type” field value is 0, indicating that an output value of the DVLUT is a physical depth value, namely, a z-axis directional position value with respect to pixels, “output_num_value” field data and “output_den_value” field data are inserted, corresponding to the DVLUT input value, in order to express the z-axis directional depth coordinate value as a ratio (or multiple) of the screen width of the receiver. Meanwhile, if the “output_type” field value is 1, indicating that an output value of the DVLUT is a horizontal disparity value, “parallax value” field data indicating a horizontal disparity value with respect to pixels is inserted, corresponding to the DVLUT input value.
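  • As an illustrative Python sketch (the entry tuple formats are assumptions based on the fields above, not the exact bit syntax), a receiver might accumulate these mappings as follows:

```python
def build_dvlut(output_type: int, entries):
    """Accumulate DVLUTDS mapping entries into a DVLUT (FIG. 14 or FIG. 15).

    entries is an iterable keyed by DVLUT_entry_id (pseudo-depth 0..255):
    for output_type 0, (entry_id, num, den) ratios of the screen width;
    for output_type 1, (entry_id, parallax) horizontal disparities in pixels.
    """
    dvlut = {}
    if output_type == 0:
        for entry_id, num, den in entries:   # output_num/den_value fields
            dvlut[entry_id] = (num, den)
    else:
        for entry_id, parallax in entries:   # parallax value field
            dvlut[entry_id] = parallax
    return dvlut
```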
  • Based on the transferred DVLUT mapping information data, the receiver may configure the DVLUT of FIG. 14 or that of FIG. 15, and interpret the pseudo-depth value transferred in the 3D depth coordinate data sub-block z_data_sub-block_3D() to render a particular 3D subtitle(s).
  • In more detail, when the “output_type” field value is 0, the receiver may calculate a physical depth value (z_value) according to Equation 1 shown below by using the “output_num_value” field data and the “output_den_value” field data with respect to the respective DVLUT input values, namely, the pseudo-depth values, and stores the same in the DVLUT.
  • z_value = (output_num_value / output_den_value) × width … (Equation 1)
  • Here, the “width” denotes the screen width. Because the depth value is expressed relative to the display size of each receiver, every device can guarantee a common cubic effect regardless of its screen size. For each object, the receiver converts a pseudo-depth value transmitted in the 3D depth coordinate data sub-block z_data_sub-block_3D() of the 3D_ODS into a physical depth value by using the DVLUT to obtain physical 3D information regarding each point of each object within the subtitle, and renders the same so as to be displayed on the 3D display device.
  • The “output_num_value” value may include a positive or negative sign (or symbol), so the depth value (z-value) can have either a negative or a positive value. In case of a negative number, a 3D image is formed at the rear side based on the display reference face (z=0), and in case of a positive number, a 3D image is formed at the front side, namely, toward the viewer, based on the display reference face. In this manner, the absolute size of the depth value (z-value) refers to the relative size based on the screen width, and an image is formed at the rear side or at the front side of the display reference face depending on whether the depth value (z-value) is negative or positive.
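  • A one-line Python rendering of Equation 1, including the sign convention just described (a sketch, not a normative formula):

```python
def z_value(output_num_value: int, output_den_value: int, width: float) -> float:
    """Equation 1: physical depth as a signed fraction of the screen width.

    Negative results fall behind the display reference face (z = 0);
    positive results fall in front of it, toward the viewer.
    """
    return width * output_num_value / output_den_value
```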
  • Meanwhile, if the “output_type” field value is 1, the receiver stores, for each DVLUT input value, the pair of the pseudo-depth value and the horizontal disparity value in the DVLUT. The receiver regards the image expressed by the pixel data sub-block pixel-data_sub-block() as a base view (e.g., a left image) among the pair of 2D images, and shifts the pixels of the left image by the horizontal disparity value with respect to the corresponding pixels to configure a subtitle object image with respect to an extended view. Here, the unit of the horizontal disparity value is preferably expressed in pixels. If the horizontal disparity value is 0, it indicates the same position as that of the display reference face (e.g., the rear face of the object region in which the z-axis coordinate is the “region_z_address” field value). If the horizontal disparity value is a negative value, it indicates that an image is focused at the front side of the display reference face. If the horizontal disparity value is a positive value, it indicates that an image is focused at the rear side of the display reference face.
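  • For illustration, a simplified Python sketch of deriving one scan line of the extended view from the base view; a uniform per-line disparity and a transparent fill pseudo-color are simplifying assumptions of this sketch:

```python
def extended_view_line(base_line, disparity: int, transparent=0):
    """Shift one scan line of the base view (e.g., the left image) to form
    the extended view; vacated positions are filled with a transparent
    pseudo-color in this simplified sketch."""
    out = [transparent] * len(base_line)
    for x, pixel in enumerate(base_line):
        nx = x + disparity
        if 0 <= nx < len(base_line):
            out[nx] = pixel
    return out
```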
  • FIG. 16 is an exemplary schematic block diagram of a television receiver according to an exemplary embodiment. Such television receiver may be suitable for receiving broadcast signals based on one or more DVB technical standards used to reproduce images and video.
  • A broadcast signal receiving unit 190 (or other equivalent component) may be configured to receive broadcast signals including 3D image signals, subtitle data, depth-related information related to the subtitle data, and 3D region composition information that defines a display region of the subtitle data.
  • A demodulation and channel decoding unit 200 (or other equivalent component), which cooperates with the broadcast signal receiving unit, selects a broadcast signal of one channel from among a plurality of broadcast signals, demodulates the selected broadcast signal, and error-correction-decodes the demodulated broadcast signal to output a transport stream (TS). Here, the demodulation and channel decoding unit 200 may be comprised of a demodulating unit (or other equivalent component) configured to demodulate at least portions of the broadcast signals received by the broadcast signal receiving unit, and a decoding unit (or other equivalent component) configured to decode at least portions of the broadcast signals demodulated by the demodulating unit. Here, the decoding unit may also include a demultiplexing unit 202, a voice decoding unit 204, and an image decoding unit 206, which will be explained further below.
  • The demultiplexing unit 202 (or other equivalent component) demultiplexes the TS to separate a video PES, an audio PES, and a subtitle PES, and extracts PSI/SI information including a program map table (PMT). A depacketization unit (or other equivalent component) releases packets of the video PES and the audio PES to restore a video ES and an audio ES.
  • The voice decoding unit 204 (or other equivalent component) decodes the audio ES to output a digital audio bit stream. The audio bit stream is converted into an analog audio signal by a digital-to-analog converter, amplified by an amplifier, and then outputted through a speaker (or other output means).
  • The image decoding unit 206 (or other equivalent component) parses the video ES to extract header data and an MPEG-2 video bit stream. The image decoding unit 206 also decodes the MPEG-2 video bit stream and outputs left and right broadcast image signals for implementing and displaying stereoscopic 3D images.
  • A selection filter 208, a subtitle decoding unit 210, a CLUT 212, a pixel buffer 214, a composition buffer 216, a DVLUT 218, and a 3D graphic engine 220 (together with other additional and/or alternative components) constitute a circuit (or other scheme or means of hardware, software and/or a combination thereof) for decoding the subtitle stream to generate a 3D subtitle bit map image.
  • The selection filter 208 (or other equivalent component) receives the subtitle stream, namely, subtitle PES packets, from the demultiplexing unit 202, separates a header to depacketize the packets, and restores subtitle segments. In the de-packetization process, the selection filter 208 extracts a presentation time stamp (PTS) (or similar component) from the header of each PES packet and stores it in a memory, so that the data can be referred to in a subtitle reproduction process. In a modification, the selection filter 208 may not directly extract the PTS and an additional processor may extract the PTS. Also, the selection filter 208 may receive a PMT from the demultiplexing unit 202 and parse it to extract a subtitling descriptor.
  • The selection filter 208 classifies the subtitle segments based on a page identifier (page_id) value. Among the segments classified by the selection filter 208, an object data segment (3D_ODS) is provided to the subtitle decoding unit 210 and decoded. A display definition segment (3D_DDS), a page composition segment (3D_PCS), and a region composition segment (3D_RCS) are provided to the composition buffer 216 and used for decoding of the object data segment (3D_ODS) and rendering of a 3D subtitle. The CLUTDS is used to generate or update a CLUT, and the CLUT may be stored in the composition buffer 216 or in an additional memory. The DVLUTDS is used to configure or update the DVLUT, and in this case, the DVLUT may also be stored in the composition buffer 216 or in the additional memory. Meanwhile, the segments such as 3D_DDS, 3D_PCS, 3D_RCS, CLUTDS, DVLUTDS, and the like, may be decoded by the subtitle decoding unit 210 or an additional processor and then provided to corresponding units, instead of being directly provided from the selection filter 208 to the units.
  • The subtitle decoding unit 210 (or other equivalent component) decodes the object data segment (3D_ODS) with reference to the CLUT, 3D_DDS, 3D_PCS, and 3D_RCS, and temporarily stores the decoded pixel data in the pixel buffer 214.
  • When the “object_coding_method” field has a value of ‘0x00’, indicating a 2D object coded by pixels, the subtitle decoding unit 210 decodes the pixel data sub-block pixel-data_sub-block() with respect to the top field and the pixel data sub-block pixel-data_sub-block() with respect to the bottom field, and stores the decoded pixel data in the pixel buffer 214 pixel by pixel. When the “object_coding_method” field has a value of ‘0x01’, indicating a 2D object coded by character string, the subtitle decoding unit 210 decodes the character code, generates a bit map image for the corresponding character string object, and stores the same in the pixel buffer 214.
  • Meanwhile, when the “object_coding_method” field has a value of ‘0x02’ or ‘0x03’, indicating a 3D coded object, the subtitle decoding unit 210 decodes the pixel data sub-blocks pixel-data_sub-block() with respect to the front surface and hidden surface of the object and stores the decoded pixel data in the pixel buffer 214. In particular, in this step, the subtitle decoding unit 210 converts a pseudo-color value expressing a color value of each pixel into an actual color value with reference to the CLUT 212 and stores the same. In addition, the subtitle decoding unit 210 decodes the 3D coordinate data sub-blocks z_data_sub-block_3D() with respect to the front surface and hidden surface of the object, and stores the decoded 3D coordinate data in the pixel buffer 214. In particular, in this step, the subtitle decoding unit 210 converts a pseudo-depth value expressing a depth value of each pixel into a physical depth value with reference to the DVLUT 218, and stores the same. In this manner, when the 3D-coded object is decoded, the map of depth coordinate values or horizontal disparity values with respect to each pixel is stored together with the 2D pixel bit map in the pixel buffer 214.
  • The composition buffer 216 temporarily stores and updates the data included in the 3D_DDS, 3D_PCS, and 3D_RCS, so that the subtitle decoding unit 210 can refer to them in decoding the object data segment (3D_ODS). In addition, the data stored in the composition buffer 216 is used when the 3D graphic engine 220 renders the 3D subtitle.
  • The DVLUT 218 stores the depth value look-up table. The subtitle decoding unit 210 may refer to the depth value look-up table in decoding the object data segment (3D_ODS). Also, when the 3D graphic engine 220 performs rendering, the depth value look-up table in the DVLUT 218 can be referred to.
  • The 3D graphic engine 220 (or other equivalent component such as a graphics acceleration chip or processor) configures a subtitle page and object regions constituting the page with reference to the display definition segment (3D_DDS), the page composition segment (3D_PCS), and the region composition segment (3D_RCS) stored in the composition buffer 216 and the presentation time stamp (PTS) stored in the memory. In addition, the 3D graphic engine 220 receives pixel bit map data and pixel depth map data from the pixel buffer 214 with respect to each object corresponding to each object region, and performs 3D rendering based on the received data to generate a 3D subtitle image signal. In an exemplary embodiment, the television receiver displays a 3D image in a holographic/volumetric manner, and the 3D graphic engine 220 outputs 3D graphic data fitting the format. In a modification of displaying a 3D image in a stereoscopic manner, the 3D graphic engine 220 outputs a pair of subtitle OSD images to be outputted to the left and right image screen plane.
  • As mentioned above, the pixel depth map data is stored in the pixel buffer 214, and the pixel depth map data includes a depth coordinate value or a horizontal disparity value with respect to each pixel. In a preferred embodiment, the depth coordinate value or the horizontal disparity value with respect to each pixel is converted from the pseudo-depth value to a physical depth value by the subtitle decoding unit 210 and then stored. In a different embodiment, the depth coordinate value or the horizontal disparity value with respect to each pixel may be stored in the form of a pseudo-depth value in the pixel buffer 214. In this case, the 3D graphic engine 220 may perform the 3D rendering operation while converting the pseudo-depth value into a physical depth value with reference to the DVLUT 218.
  • The substantial 3D rendering operation may be implemented by using one of the existing 3D rendering schemes or a scheme that may be proposed in the future, or by using any combination of applicable schemes together. Those skilled in the art can easily implement such techniques and thus its detailed description will be omitted merely for the sake of brevity.
  • A mixer/formatter 222 (or other equivalent component) mixes the 3D subtitle image signal transferred from the 3D graphic engine 220 with the left and right broadcast image signals transferred from the image decoding unit 206, and outputs the mixed signal to the screen plane 224. Accordingly, the 3D subtitle included in the stereoscopic region is outputted in an overlapping manner on the 3D image of the screen plane 224.
  • The process of receiving subtitle information and displaying a 3D subtitle in the television receiver as shown in FIG. 16 will now be described in detail with reference to FIG. 17.
  • First, a program map table (PMT) is extracted from a DVB broadcast stream and a subtitling descriptor within the PMT is read to recognize basic information regarding a subtitle. In particular, whether or not the subtitle service is a 3D service is recognized by using the “subtitling_type” field within the subtitling descriptor (S250).
  • Next, the PMT is parsed to recognize a PID value of a stream having the “stream_type” value of ‘0x06’ (S252). The “stream_type” value of ‘0x06’ indicates a TS transferring a PES packet including private data in the ISO/IEC 13818-1 standard regarding MPEG-2. Because the DVB subtitling stream is transferred through the private data PES packet, it can be a candidate of the subtitle PES packet detected based on the “stream_type” value.
  • Among the PES packets, the DVB subtitle PES packets have the “data_identifier” field having a value set as ‘0x20’ and the “subtitle_stream_id” field having a value set as ‘0x00’. Accordingly, in step S254, a PES packet having the “data_identifier” field having the value of ‘0x20’ and the “subtitle_stream_id” field having the value of ‘0x00’ is detected.
  • Subsequently, the segment data is classified and extracted according to the “segment_type” field value (S256). Here, if the “segment_type” field has a value of ‘0x40’, the segment is classified as a 3D page composition segment (3D_PCS). If the “segment_type” field has a value of ‘0x41’, the segment is classified as a 3D region composition segment (3D_RCS). If the “segment_type” field has a value of ‘0x12’, the segment is classified as a CLUT definition segment (CLUTDS). If the “segment_type” field has a value of ‘0x42’, the segment is classified as a 3D object data segment (3D_ODS). If the “segment_type” field has a value of ‘0x43’, the segment is classified as a 3D display definition segment (3D_DDS). If the “segment_type” field has a value of ‘0x44’, the segment is classified as a DVLUT definition segment (DVLUTDS).
  • In step S258, a window space (or region) on which a 3D subtitle is to be displayed, a page space, the size and position of an object region space, and 3D object composition information are recognized by using the 3D_DDS, the 3D_PCS, and the 3D_RCS. In step S260, the pixel data sub-block pixel-data_sub-block() and the 3D coordinates data sub-block z_data_sub-block_3D() included in the 3D_ODS are decoded to acquire a pseudo-color value and a pseudo-depth value with respect to a 3D subtitle object. Subsequently, the pseudo-color value is converted into a color value to be actually outputted in the 3D display by using the CLUT. Also, the pseudo-depth value is converted into a depth value (z-position) to be actually outputted in the 3D display by using the DVLUT (S262).
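  • Step S262 amounts to two table look-ups per decoded pixel, as in the following Python sketch (plain dicts stand in for the receiver's CLUT and DVLUT):

```python
def convert_object(pseudo_colors, pseudo_depths, clut, dvlut):
    """Step S262 as two look-ups per pixel: clut maps a pseudo-color id to
    (Y, Cr, Cb, T) and dvlut maps a pseudo-depth to a z-position; both
    tables are illustrative dict stand-ins in this sketch."""
    colors = [clut[c] for c in pseudo_colors]
    depths = [dvlut[d] for d in pseudo_depths]
    return colors, depths
```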
  • Finally, 3D rendering is performed to generate a 3D subtitle bit map, which is formatted according to a 3D display scheme and then outputted (S264).
  • The features of the exemplary embodiments can be variously modified without altering the technical idea or essential characteristics described herein, and may be implemented in various substantial forms.
  • For example, in the above description, in order to provide a 3D attribute to a 2D subtitle, the horizontal disparity value at the level of pixels is used as an example of information additionally transferred to the receiver. However, in a different embodiment, the horizontal disparity value may be provided as a value between the pair of stereoscopic images, for example, between the left and right images. Namely, instead of the spatial coordinates with respect to each plane, a horizontal disparity value with respect to an extended view, namely, another view among the left and right images, based on a base view among the left and right images, may be provided as the z-axis directional position information included in the 3D_DDS, the 3D_PCS, and the 3D_RCS through each segment. In this case, with respect to the 3D_DDS, the 3D_PCS, and the 3D_RCS, an image with respect to the extended view is generated by shifting an image with respect to the base view by the horizontal disparity value included in each segment, and synthesized (i.e. composed, combined, etc.) with a broadcast image, so as to be outputted to the stereoscopic display. In this case, the 3D coordinate data sub-block z_data_sub-block_3D() may not be transmitted. In the 3D_ODS, the “top_surface_depth_block_length” field and the “hidden_surface_depth_block_length” field are set as 0. At this time, the receiver may acquire horizontal disparity information of the subtitle by using the 3D_DDS, the 3D_PCS, the 3D_RCS, and the DVLUT, and control an output of the subtitle in the stereoscopic display by using them.
  • Meanwhile, in the above description, the DVLUT is used to interpret spatial information for providing a cubic effect to the 2D subtitle with respect to each object. In a modification, the pseudo-depth value utilizing the DVLUT may be used to indicate the “display_window_z-position_minimum” and the “display_window_z-position_maximum” field values in the display definition segment (3D_DDS) and/or the “region_z_address” field value in the page composition segment (3D_PCS).
  • Referring back to FIG. 16, particular features of the exemplary embodiments can be part of an apparatus (e.g., control device, circuitry, dedicated processors, integrated chip, and/or implemented together with software, hardware, and/or a combination thereof having appropriate coding/commands stored in a storage medium to be executed by a microprocessor or the like) comprising at least a selector (208) or other equivalent component, a 3D subtitle decoder (210) or other equivalent component, and a 3D graphics engine (220) or other equivalent component.
  • The selector (208) can receive subtitle data streams obtained from broadcast multimedia signals and classify various subtitle segments in the received subtitle data streams into a 3D object data segment used in defining subtitle data with respect to graphical objects, at least three 3D display characteristic segments used in transferring 3D region configuration information defining a subtitle display region, a depth-related definition segment used in generating/updating a depth value look-up table, and a color-related definition segment used in generating/updating a color look-up table.
  • The 3D subtitle decoder (210) cooperating with said selector can perform decoding of said 3D object data segment with reference to said color look-up table and to said 3D display characteristic segments, and decoding of 2D/3D objects, coded in terms of pixels, with reference to said depth value look-up table for converting pseudo-depth values into physical depth values for each pixel, to thus generate 3D subtitle bit map image information.
  • The 3D graphics engine (220) cooperating with said 3D subtitle decoder can process said 3D subtitle bit map image information, based on said depth value look-up table and said 3D display characteristic segments, into 3D subtitle image signals used for graphically rendering subtitles to have a three-dimensional visual display effect.
  • The apparatus may further comprise a processing unit (222) (or other equivalent component) that receives said 3D subtitle image signals generated from said 3D graphics engine and receives decoded 3D image signals obtained from said broadcast multimedia signals, and processes said 3D subtitle image signals and said decoded 3D image signals to be suitable for displaying images and subtitles together in a three-dimensional manner.
  • Additionally, the apparatus may further comprise a storage medium comprised of a composition buffer (216) to store at least said 3D display characteristic segments comprising a 3D display definition segment, a 3D page composition segment, a 3D region composition segment; a pixel buffer (214) to store at least said 3D subtitle bit map image information; said depth value look-up table (DVLUT) (218) to store at least depth information related to pixels; and said color look-up table (CLUT) (212) to store at least color information related to pixels.
  • Furthermore, it can be understood that the apparatus having such selector, such storage medium, such 3D subtitle decoder, such 3D graphics engine, and such processing unit can be implemented in a three-dimensional display device.
  • Meanwhile, it should be noted that various technical standards of the digital video broadcasting (DVB) standard stipulated by the ETSI (European Telecommunications Standards Institute) have been mentioned in this description. Those skilled in the art can clearly understand that the numerous features described herein may be implemented in compliance with additional and/or alternative technical standards related to digital multimedia technology (e.g., MPEG-related standards, Blu-ray™ 3D standards, MVC: Multiview Video Coding, AVC: Advanced Video Coding, SMPTE-related standards, IEEE-related standards, ITU-related standards, SCTE-related standards, DVB Mobile TV (DVB-H, DVB-SH, DVB-IPDC, etc.), as well as those standards covering NTSC, PAL, SECAM, ATSC, HDTV, Wireless HD Video, etc. technologies), three-dimensional graphics processing technology (e.g., OpenGL standards, X3D standards, Mobile Graphics Standard, etc.), 3D display-related technology (e.g., 3D-NTSC, 3D-PAL, 3D-SECAM, MUTED: Multi-User 3D Television Display, 3D-TV, 3D-HD, 3D-PDPs, 3D-LCDs, etc.), and the like, that are clearly applicable to at least some of the various features described herein.
  • As the exemplary embodiments may be implemented in several forms without departing from the characteristics thereof, it should also be understood that the above-described embodiments are not limited by any of the details of the foregoing description, unless otherwise specified, but rather should be construed broadly within its scope as defined in the appended claims. Therefore, various changes and modifications that fall within the scope of the claims, or equivalents of such scope are therefore intended to be embraced by the appended claims.
  • According to the exemplary embodiments described thus far, the television receiver (or other type of digital content reception means) can display subtitles or other textual information with a cubic or 3D effect such that the subtitles naturally blend in with the 3D images or video. Accordingly, the utility and attractiveness of subtitles can be increased. Also, because additional parameters are supplementarily added to the existing subtitle signal transmission/reception method, backward compatibility with the existing technical standards can be achieved.
  • The various features described herein can be implemented for any display device that has a 3D image display capability and needs to have a closed caption (i.e. subtitle, textual information, etc.) display function. In particular, the present features can be particularly useful for a stereoscopic display device regardless of a formatting type, such as a dual-mode display, a time sequence-mode display, or the like.

Claims (16)

  1. A method for displaying three-dimensional (3D) subtitles in a 3D display device, the method comprising:
    receiving 3D image signals, subtitle data, depth-related information related to the subtitle data, and 3D region composition information defining a display region of the subtitle data;
    forming the subtitle data to be three-dimensional using the received depth-related information and the 3D region composition information; and
    displaying the 3D image signals together with the formed subtitle data.
  2. The method of claim 1, wherein the 3D image signal, the subtitle data, the depth-related information and the 3D region composition information are received through a broadcast signal.
  3. The method of claim 2, further comprising:
    generating a depth value look-up table for storing the reciprocal relationship between pseudo-depth information and actual depth information,
    wherein the depth-related information is expressed as pseudo-depth information regarding each pixel, and the displaying step comprises converting the pseudo-depth information into the actual depth information with reference to the depth value look-up table.
  4. The method of claim 3, wherein look-up table definition information for generating or updating the depth value look-up table is included in the broadcast signal, and the receiving step comprises generating or updating the depth value look-up table according to the look-up table definition information.
  5. The method of claim 3, wherein the actual depth information regarding the pixels is a depth value in a forward/backward direction with respect to the pixels.
  6. The method of claim 3, wherein the actual depth information regarding the pixels is expressed as a magnification relative to the width of the screen plane of the 3D display device.
  7. The method of claim 4, wherein the look-up table definition information includes a correspondence between the pseudo-depth information regarding the pixels and magnifications relative to the display screen of a receiver.
  8. The method of claim 3, wherein the actual depth information is a horizontal disparity value with respect to the pixels.
  9. The method of claim 2, wherein, in the receiving step, the subtitle data is received in units of subtitle objects, and a display region of the subtitle data is set in units of subtitle objects.
  10. The method of claim 2, wherein, in the receiving step, the subtitle data is received in units of subtitle objects, and a display region of the subtitle data is set to include a plurality of subtitle objects.
  11. A three-dimensional (3D) display device comprising:
    a broadcast signal receiving unit configured to receive broadcast signals including 3D image signals, subtitle data, depth-related information related to the subtitle data, and 3D region composition information that defines a display region of the subtitle data;
    a demodulating unit configured to demodulate at least portions of the broadcast signals received by the broadcast signal receiving unit;
    a decoding unit configured to decode at least portions of the broadcast signals demodulated by the demodulating unit;
    a composing and outputting unit configured to form the subtitle data to be three-dimensional using the depth-related information and the 3D region composition information; and
    a display unit configured to display the 3D image signals together with the formed subtitle data.
  12. The device of claim 11, further comprising:
    a memory configured to store a depth value look-up table indicating a correspondence between pseudo-depth information and actual depth information,
    wherein the depth-related information is expressed as pseudo-depth information regarding each pixel, and the composing and outputting unit converts the pseudo-depth information into the actual depth information with reference to the depth value look-up table and forms the subtitle data to be three-dimensional based on the actual depth information.
  13. The device of claim 12, wherein the actual depth information regarding the pixels is a depth value in a forward/backward direction with respect to the pixels.
  14. The device of claim 12, wherein the actual depth information regarding the pixels is a horizontal disparity value with respect to the pixels.
  15. The device of claim 11, wherein the broadcast signal receiving unit receives the subtitle data in units of subtitle objects, and the composing and outputting unit sets a display region of the subtitle data in units of subtitle objects.
  16. The device of claim 11, wherein the broadcast signal receiving unit receives the subtitle data in units of subtitle objects, and the composing and outputting unit sets the display region of the subtitle data to include a plurality of subtitle objects.
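The depth value look-up table recited in claims 3 to 8 can be visualized with a short sketch. The Python fragment below is a minimal, hypothetical illustration rather than the patented implementation: the 8-bit pseudo-depth code range, the linear code-to-disparity mapping, and all names (DepthLookupTable, compose_3d_subtitle_region, etc.) are assumptions made for the example; in the claimed method the table contents would be generated or updated from look-up table definition information carried in the broadcast signal (claim 4).

```python
# Illustrative sketch only -- not the patented implementation. An 8-bit
# pseudo-depth code and a linear code-to-disparity mapping are assumed.

class DepthLookupTable:
    """Maps pseudo-depth codes to actual depth values (here, horizontal disparity in pixels)."""

    def __init__(self, entries):
        self.entries = dict(entries)

    @classmethod
    def from_definition(cls, codes, disparities):
        # In the claimed method, this definition data arrives in the broadcast
        # signal and is used to generate or update the table (claim 4).
        return cls(zip(codes, disparities))

    def to_disparity(self, pseudo_depth):
        # Converts pseudo-depth information into actual depth information
        # (claim 3); here the actual depth is a disparity value (claim 8).
        return self.entries[pseudo_depth]


def compose_3d_subtitle_region(region_x, pseudo_depth, lut):
    """Return the left-view and right-view x positions of a subtitle region.

    Shifting the region horizontally by +/- half the disparity in the two
    views places the subtitle at the intended apparent depth.
    """
    disparity = lut.to_disparity(pseudo_depth)
    return region_x + disparity / 2.0, region_x - disparity / 2.0


# Usage: a linear table mapping code 0 to the screen plane (0 px disparity)
# and code 255 to 32 px of positive disparity.
lut = DepthLookupTable.from_definition(range(256), [c * 32.0 / 255 for c in range(256)])
print(compose_3d_subtitle_region(region_x=100, pseudo_depth=128, lut=lut))
```

Signaling compact pseudo-depth codes and resolving them through a receiver-side table lets the same broadcast drive screens of different sizes, since each receiver can map a code to a disparity or magnification appropriate for its own display (claims 6 and 7).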
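The unit structure of the device in claim 11 can likewise be read as a simple pipeline. The stand-ins below (receive_broadcast, demodulate, decode, compose_subtitle, show, and the sample data) are hypothetical names sketching how the claimed units hand data to one another; the sketch reuses the DepthLookupTable example above.

```python
# Structural sketch of the device of claim 11. All function names and the
# sample data are hypothetical; 'lut' and 'compose_3d_subtitle_region'
# come from the look-up table sketch above.

def receive_broadcast():
    # Broadcast signal receiving unit: returns the raw broadcast signal.
    return "raw-rf-signal"

def demodulate(raw_signal):
    # Demodulating unit: recovers a transport stream from the raw signal.
    return {"transport_stream": raw_signal}

def decode(transport_stream):
    # Decoding unit: extracts the 3D image signal, subtitle data,
    # depth-related information and 3D region composition information.
    return {
        "video": "3d-image-signal",
        "subtitle": "Hello, 3D world",
        "pseudo_depth": 128,             # per-pixel codes in the real method
        "region": {"x": 100, "y": 500},  # display region of the subtitle data
    }

def compose_subtitle(elements, lut):
    # Composing and outputting unit: forms the subtitle data to be
    # three-dimensional using the depth-related information and the
    # 3D region composition information.
    left_x, right_x = compose_3d_subtitle_region(
        elements["region"]["x"], elements["pseudo_depth"], lut)
    return {"text": elements["subtitle"], "left_x": left_x, "right_x": right_x}

def show(video, subtitle_3d):
    # Display unit: renders the 3D image signal together with the subtitle.
    print(video, subtitle_3d)

# End-to-end flow through the claimed units.
streams = demodulate(receive_broadcast())
elements = decode(streams["transport_stream"])
show(elements["video"], compose_subtitle(elements, lut))
```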
EP10733627.3A 2009-01-20 2010-01-19 Three-dimensional subtitle display method and three-dimensional display device for implementing the same Withdrawn EP2389767A4 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US14595809P 2009-01-20 2009-01-20
PCT/KR2010/000345 WO2010085074A2 (en) 2009-01-20 2010-01-19 Three-dimensional subtitle display method and three-dimensional display device for implementing the same

Publications (2)

Publication Number Publication Date
EP2389767A2 2011-11-30
EP2389767A4 EP2389767A4 (en) 2013-09-25

Family ID: 42356315

Family Applications (1)

Application Number Title Priority Date Filing Date
EP10733627.3A Withdrawn EP2389767A4 (en) 2009-01-20 2010-01-19 Three-dimensional subtitle display method and three-dimensional display device for implementing the same

Country Status (3)

Country Link
EP (1) EP2389767A4 (en)
CN (1) CN102292993B (en)
WO (1) WO2010085074A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10659813B2 (en) 2013-04-10 2020-05-19 Zte Corporation Method, system and device for coding and decoding depth information

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102273210B (en) 2008-12-02 2014-08-13 Lg电子株式会社 Method for displaying 3d caption and 3d display apparatus for implementing the same
KR101622688B1 (en) 2008-12-02 2016-05-19 엘지전자 주식회사 3d caption display method and 3d display apparatus for implementing the same
US8817072B2 (en) 2010-03-12 2014-08-26 Sony Corporation Disparity data transport and signaling
CN105163105B (en) 2010-05-30 2018-03-27 Lg电子株式会社 The method and apparatus for handling and receiving the digital broadcast signal for 3-dimensional captions
EP2594079B1 (en) * 2010-07-12 2018-03-21 Koninklijke Philips N.V. Auxiliary data in 3d video broadcast
CN102137264B (en) * 2010-08-25 2013-03-13 华为技术有限公司 Method, device and system for controlling display of graphic text in three-dimensional television
JP2012120143A (en) * 2010-11-10 2012-06-21 Sony Corp Stereoscopic image data transmission device, stereoscopic image data transmission method, stereoscopic image data reception device, and stereoscopic image data reception method
KR101763263B1 (en) * 2010-12-24 2017-07-31 삼성전자주식회사 3d display terminal apparatus and operating method
US20120293636A1 (en) * 2011-05-19 2012-11-22 Comcast Cable Communications, Llc Automatic 3-Dimensional Z-Axis Settings
CN102202224B (en) * 2011-06-22 2013-03-27 清华大学 Caption flutter-free method and apparatus used for plane video stereo transition
JP2013066075A (en) * 2011-09-01 2013-04-11 Sony Corp Transmission device, transmission method and reception device
JP2013239833A (en) * 2012-05-14 2013-11-28 Sony Corp Image processing apparatus, image processing method, and program
WO2014166100A1 (en) * 2013-04-12 2014-10-16 Mediatek Singapore Pte. Ltd. A flexible dlt signaling method
WO2015139203A1 (en) * 2014-03-18 2015-09-24 Mediatek Singapore Pte. Ltd. Dlt signaling in 3d video coding
CN105657395A (en) * 2015-08-17 2016-06-08 乐视致新电子科技(天津)有限公司 Subtitle playing method and device for 3D (3-Dimensions) video

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004274125A (en) * 2003-03-05 2004-09-30 Sony Corp Image processing apparatus and method
JP4190357B2 (en) * 2003-06-12 2008-12-03 シャープ株式会社 Broadcast data transmitting apparatus, broadcast data transmitting method, and broadcast data receiving apparatus
KR100585966B1 (en) * 2004-05-21 2006-06-01 한국전자통신연구원 The three dimensional video digital broadcasting transmitter- receiver and its method using Information for three dimensional video
KR101096973B1 (en) * 2005-01-14 2011-12-20 파나소닉 주식회사 Content detection device in digital broadcast
KR100657322B1 (en) * 2005-07-02 2006-12-14 삼성전자주식회사 Method and apparatus for encoding/decoding to implement local 3d video
KR20070047665A (en) * 2005-11-02 2007-05-07 삼성전자주식회사 Broadcasting receiver, broadcasting transmitter, broadcasting system and control method thereof
WO2008038205A2 (en) * 2006-09-28 2008-04-03 Koninklijke Philips Electronics N.V. 3 menu display

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008044191A2 (en) * 2006-10-11 2008-04-17 Koninklijke Philips Electronics N.V. Creating three dimensional graphics data
WO2008115222A1 (en) * 2007-03-16 2008-09-25 Thomson Licensing System and method for combining text with three-dimensional content
US20090142041A1 (en) * 2007-11-29 2009-06-04 Mitsubishi Electric Corporation Stereoscopic video recording method, stereoscopic video recording medium, stereoscopic video reproducing method, stereoscopic video recording apparatus, and stereoscopic video reproducing apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of WO2010085074A2 *

Also Published As

Publication number Publication date
CN102292993A (en) 2011-12-21
WO2010085074A2 (en) 2010-07-29
EP2389767A4 (en) 2013-09-25
CN102292993B (en) 2015-05-13
WO2010085074A3 (en) 2010-10-21

Similar Documents

Publication Publication Date Title
WO2010085074A2 (en) Three-dimensional subtitle display method and three-dimensional display device for implementing the same
WO2010071291A1 (en) Method for 3d image signal processing and image display for implementing the same
WO2010064774A1 (en) 3d image signal transmission method, 3d image display apparatus and signal processing method therein
WO2010064784A2 (en) Method for displaying 3d caption and 3d display apparatus for implementing the same
CA2749064C (en) 3d caption signal transmission method and 3d caption display method
WO2010064853A2 (en) 3d caption display method and 3d display apparatus for implementing the same
WO2010126221A2 (en) Broadcast transmitter, broadcast receiver and 3d video data processing method thereof
WO2010117129A2 (en) Broadcast transmitter, broadcast receiver and 3d video data processing method thereof
US8860782B2 (en) Stereo image data transmitting apparatus and stereo image data receiving apparatus
US8963995B2 (en) Stereo image data transmitting apparatus, stereo image data transmitting method, stereo image data receiving apparatus, and stereo image data receiving method
WO2010041896A2 (en) Receiving system and method of processing data
US20110141233A1 (en) Three-dimensional image data transmission device, three-dimensional image data transmission method, three-dimensional image data reception device, and three-dimensional image data reception method
US20110149034A1 (en) Stereo image data transmitting apparatus and stereo image data transmittimg method
WO2010087621A2 (en) Broadcast receiver and video data processing method thereof
WO2010143820A2 (en) Device and method for providing a three-dimensional pip image
WO2011005025A2 (en) Signal processing method and apparatus therefor using screen size of display device
WO2012002690A2 (en) Digital receiver and method for processing caption data in the digital receiver
WO2011152633A2 (en) Method and apparatus for processing and receiving digital broadcast signal for 3-dimensional subtitle
KR20120038388A (en) Stereoscopic image data transmitter, method for transmitting stereoscopic image data, and stereoscopic image data receiver
EP2371140A2 (en) Broadcast receiver and 3d subtitle data processing method thereof
WO2011028024A2 (en) Cable broadcast receiver and 3d video data processing method thereof
WO2011046271A1 (en) Broadcast receiver and 3d video data processing method thereof
WO2011049372A2 (en) Method and apparatus for generating stream and method and apparatus for processing stream
US20120200565A1 (en) 3d-image-data transmission device, 3d-image-data transmission method, 3d-image-data reception device, and 3d-image-data reception method
WO2012063675A1 (en) Stereoscopic image data transmission device, stereoscopic image data transmission method, and stereoscopic image data reception device

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20110803

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR

DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20130828

RIC1 Information provided on ipc code assigned before grant

Ipc: H04N 13/00 20060101AFI20130822BHEP

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20180321

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20180801