EP1952629A1 - Method and apparatus for synchronizing visual and voice data in dab/dmb service system - Google Patents
Method and apparatus for synchronizing visual and voice data in dab/dmb service systemInfo
- Publication number
- EP1952629A1 (application EP06823659A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- synchronization
- web document
- speech
- document
- visual
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04H—BROADCAST COMMUNICATION
- H04H60/00—Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
- H04H60/76—Arrangements characterised by transmission systems other than for broadcast, e.g. the Internet
- H04H60/81—Arrangements characterised by transmission systems other than for broadcast, e.g. the Internet characterised by the transmission system itself
- H04H60/82—Arrangements characterised by transmission systems other than for broadcast, e.g. the Internet characterised by the transmission system itself the transmission system being the Internet
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/242—Synchronization processes, e.g. processing of PCR [Program Clock References]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/4302—Content synchronisation processes, e.g. decoder synchronisation
- H04N21/4307—Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
- H04N21/43072—Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen of multiple content streams on the same device
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
- H04N21/234318—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by decomposing into objects, e.g. MPEG-4 objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application
- H04N21/4782—Web browsing, e.g. WebTV
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/8106—Monomedia components thereof involving special audio data, e.g. different tracks for different languages
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/8126—Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8543—Content authoring using a description language, e.g. Multimedia and Hypermedia information coding Expert Group [MHEG], eXtensible Markup Language [XML]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8547—Content authoring involving timestamps for synchronizing content
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/04—Synchronising
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04H—BROADCAST COMMUNICATION
- H04H2201/00—Aspects of broadcast communication
- H04H2201/10—Aspects of broadcast communication characterised by the type of broadcast system
- H04H2201/20—Aspects of broadcast communication characterised by the type of broadcast system digital audio broadcasting [DAB]
Definitions
- the present invention relates to a digital data broadcasting service; and, more particularly, to a method for providing a Web service that can simultaneously input/output speech data along with visual data by integrating speech Web data with broadcasting Web sites
- DMB Digital Multimedia Broadcasting
- DAB Digital Audio Broadcasting
- BWS broadcasting Web sites
- HTML Hyper Text Markup Language
- the method can simply output the Web data defined by the HTML onto the screen. Therefore, the method cannot sufficiently transfer data in a broadcasting system for a mobile environment, such as a DAB-based DMB.
- an X+V (XHTML+Voice) method is under standardization and development to provide a multi-modal Web service.
- this method, too, operates based on a visual interface with XHTML as a host language, and it is somewhat inappropriate for a mobile environment.
- the present invention provides a method for synchronizing visual and speech Web data that can overcome the aforementioned drawbacks and provide users with a speech-directed Web service in a mobile environment or a fixed location environment, instead of a visual-directed Web service, and an apparatus thereof.
- BWS broadcasting Web sites
- an embodiment of the present invention defines a speech-directed Web language to provide a speech-directed Web service in consideration of a mobile environment, instead of a screen-directed Web service.
- another embodiment of the present invention provides a service capable of inputting/outputting speech data by integrating a conventional Web service framework, e.g., a BWS service, with a speech input/output module.
- a conventional Web service framework e.g., a BWS service
- yet another embodiment of the present invention provides a technology of synchronizing a content following a visual Web specification, e.g., HTML, and a VoiceXML content capable of providing a speech Web service, that is, a technology of synchronizing visual data with speech data.
- a visual Web specification e.g., HTML
- a VoiceXML content capable of providing a speech Web service
- processing of the documents should be synchronized, and a user input device should be synchronized for one document. It is the object of the present invention to provide a method and apparatus for these synchronizations.
- a method for synchronizing visual data with speech data to provide a broadcasting Web sites (BWS) service capable of simultaneously inputting/outputting speech data in a multimedia broadcasting service which includes the steps of: a) generating a visual Web document; b) generating a speech Web document including synchronization tags related to the visual Web document; and c) identifying the speech Web document and the visual Web document based on a sub-channel or a directory and transmitting the speech Web document and the visual Web document independently.
- BWS broadcasting Web sites
- an apparatus for synchronizing visual data with speech data to provide a BWS service capable of simultaneously inputting/outputting speech data in a multimedia broadcasting service which includes: a) a content data generator for generating a visual Web document and a speech Web document including synchronization tags related to the visual Web document; b) a multimedia object transfer (MOT) server for transforming both the generated visual Web document and the speech Web document into an MOT protocol; and c) a transmitting system for identifying the speech Web document and the visual Web document of the MOT protocol based on a sub-channel or a directory and transmitting the speech Web document and the visual Web document independently.
- a content data generator for generating a visual Web document and a speech Web document including synchronization tags related to the visual Web document
- a multimedia object transfer (MOT) server for transforming both the generated visual Web document and the speech Web document into an MOT protocol
- a transmitting system for identifying the speech Web document and the visual Web document of the MOT protocol based on a sub-channel or a directory and transmitting them independently
- a method for synchronizing visual data with speech data to provide a BWS service capable of simultaneously inputting/outputting speech data in a multimedia broadcasting service which includes the steps of: a) receiving and loading a visual Web document and a speech Web document including synchronization tags related to the visual Web document, the visual Web document and the speech Web document being identified based on a sub-channel or a directory and transmitted independently; and b) analyzing the synchronization tags when a synchronization event occurs and performing a corresponding synchronization operation.
- an apparatus for synchronizing visual data with speech data to provide a BWS service capable of simultaneously inputting/outputting speech data in a multimedia broadcasting service which includes: a) a baseband receiver for receiving broadcasting signals through a multimedia broadcasting network and performing channel decoding; b) a multimedia object transfer (MOT) decoder for decoding channel-decoded packets and restoring a visual Web document and a speech Web document including synchronization tags related to the visual Web document; and c) an integrated Web browser for analyzing the synchronization tag when a synchronization event occurs and executing a corresponding synchronization operation.
- a baseband receiver for receiving broadcasting signals through a multimedia broadcasting network and performing channel decoding
- a multimedia object transfer (MOT) decoder for decoding channel-decoded packets and restoring a visual Web document and a speech Web document including synchronization tags related to the visual Web document
- MOT multimedia object transfer
- an HTML document which is a visual Web document
- a VoiceXML content which is a speech Web document
- a multimedia broadcasting service user can conveniently access corresponding information by receiving both screen output and speech output for a Web data service and, if necessary, issuing a command by speech even in a mobile environment.
- the present invention has the advantage of ensuring backward compatibility with legacy services by authoring and transmitting the data individually to provide an integrated synchronization service, instead of integrating the markup languages and transmitting them as a single data format, as is generally done.
- the technology of the present invention adds synchronization-related elements to a host markup language to thereby maintain a conventional service framework.
- users can receive a conventional broadcasting Web site and, at the same time, access the Web by speech, listen to information, and control the Web by speech.
- Fig. 1 is an exemplary view illustrating how broadcasting Web site documents are authored to be synchronized and capable of speech input/output in accordance with an embodiment of the present invention
- Fig. 2 is a view describing broadcasting Web site documents capable of speech input/output and a data transmitting method in accordance with an embodiment of the present invention
- Fig. 3 is an exemplary view showing broadcasting Web site documents capable of speech input/output when a synchronization document is separately provided in accordance with an embodiment of the present invention
- Fig. 4 is a block view describing a Digital Multimedia Broadcasting (DMB) system configured based on Digital Audio Broadcasting (DAB) and providing a broadcasting Web sites (BWS) service capable of simultaneous speech input/output; and
- DMB Digital Multimedia Broadcasting
- DAB Digital Audio Broadcasting
- BWS broadcasting Web sites
- Fig. 5 is a block view illustrating an integrated Web browser of Fig. 4.
- broadcasting Web sites defined to provide a Web service in a multimedia broadcasting service, such as Digital Audio Broadcasting (DAB) and Digital Multimedia Broadcasting (DMB)
- DAB Digital Audio Broadcasting
- DMB Digital Multimedia Broadcasting
- the Web language that becomes the basis for providing the service includes a basic profile which adopts HTML 3.2 as a Web specification in consideration of a terminal with a relatively low specification, and a non-restrictive profile which has no restriction in consideration of a high-specification terminal, such as a personal computer (PC).
- PC personal computer
- since the profiles are based on HTML, which is a Web representation language, a Web browser is required to provide a terminal with the BWS service.
- the browser may be called a BWS browser, and it provides a Web service by receiving and decoding Web contents of txt, html, jpg, and png formats transmitted as objects through a multimedia object transfer (MOT) protocol.
- MOT multimedia object transfer
- the output is provided in the visual form. That is, texts or still images are displayed on a screen with a hyperlink function and they transit into the other contents transmitted together through the MOT to thereby provide a visual-based local Web service.
- since the specification includes a function of restoring a speech file or other multimedia files, it is possible to provide the output not only on the screen but also by speech.
- GUI Graphical User Interface
- VoiceXML is a Web language devised for an interactive speech response service of an Interactive Voice Response (IVR) type. When it is actually mounted on the terminal, it can provide a speech enabled Web service.
- the technology defines a markup language that can be transited into another application, document, or dialogue based on a dialogue obtained by modeling a conversation between a human being and a machine.
- the VoiceXML can provide a Web service that can input/output data by speech.
- Web information can be delivered by speech by applying a Text-To-Speech (TTS) technology, which transforms text data into speech data, and an Automatic Speech Recognition (ASR) technology, which performs speech recognition, to an input/output module; user input data are received by speech to process a corresponding command or execute a corresponding application.
- TTS Text To Speech
- ASR Automatic Speech Recognition
- the VoiceXML is effective in a mobile environment. It has the advantage that users can listen to a Web service provided without visual output on the screen and perform navigation by speech input at the desired information.
- however, there is a limitation in delivering Web information by speech alone; when speech input/output is combined with visual data on the screen, it is more convenient, and diverse additional data services can be provided.
- the present invention provides a transmission and synchronization method for providing a multi-modal Web service by integrating the conventional BWS Web specification, i.e., HTML, with a speech Web language, i.e., VoiceXML.
- a transmission and synchronization method will be described.
- the basic principle of the present invention is to generate a speech Web document including synchronization information related to a visual Web document and transmit the visual Web document and the speech Web document through another sub-channel or another directory of the same sub-channel.
- Fig. 1 is an exemplary view illustrating how broadcasting Web site documents are authored to be synchronized and capable of speech input/output in accordance with an embodiment of the present invention.
- a visual Web document and a speech Web document are separately created in the embodiment of the present invention.
- the visual Web document is an HTML or an xHTML content defined in the BWS
- the speech Web document is a document integrating elements or tags in charge of synchronization between the VoiceXML and the visual Web documents, a speech recognition module, and a component-related module such as a speech combiner and a receiver.
- Fig. 2 is a view describing broadcasting Web site documents capable of speech input/output and a data transmitting method in accordance with an embodiment of the present invention.
- the visual Web document and the speech Web document are transmitted and signaled through different sub-channels or, when in the same sub-channel, through different directories. This is to allow a terminal capable of receiving only the existing BWS service to continue receiving the conventional service, even when the BWS is combined with a speech Web document.
- the signaling for the speech BWS is additionally processed in the speech Web document, i.e., a speech module.
- the synchronization between the visual Web document and the speech Web document is processed by using the synchronization tags <esync>, <isync> and <fsync>.
- the synchronization tags are described in the speech Web document without exception. Also, the synchronization tags are identified by the following namespace.
- a VoiceXML forming a speech Web document has the following name space.
- the entire namespace including the visual Web document and the speech Web document may be designated as follows:
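The namespace values themselves are not reproduced in this text. As a sketch only, the root element of a speech Web document might declare the VoiceXML namespace as host together with a separate namespace for the synchronization tags; the synchronization namespace URI below is a hypothetical placeholder, not the one defined by this specification.

```xml
<!-- Sketch: hypothetical namespace declarations in the root element of a
     speech Web document. The sync namespace URI is a placeholder. -->
<vxml version="2.0"
      xmlns="http://www.w3.org/2001/vxml"
      xmlns:sync="http://example.org/2006/bws-sync">
  <!-- VoiceXML dialogues plus sync:esync / sync:isync / sync:fsync tags -->
</vxml>
```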
- the synchronization tags for processing synchronization between the visual Web document, i.e., HTML, and the speech Web document, i.e., the VoiceXML, should describe synchronization between an application, documents, and forms within the document.
- the synchronization tags used for this purpose are <esync>, <isync> and <fsync>.
- the tags ⁇ esync> and ⁇ isync> are in charge of synchronization between an application and documents, whereas the ⁇ fsync> tag is in charge of synchronization between forms.
- the synchronization between the application and the documents means that they should be simultaneously loaded, interpreted and rendered in the initial period when the application starts.
- the synchronization between the forms signifies that user input data are simultaneously inputted to a counterpart form.
- the ⁇ esync> tag is used to describe synchronization information between applications or between documents, when synchronization related information, i.e., ⁇ esync>, and related attributes do not exist in the speech Web document but exist in an independent external document, e.g., a document with an extension name of '.sync'.
- the ⁇ esync> tag supports the synchronization function based on the attributes shown in the following Table 1.
- the external document synchronization using the <esync> tag requires metadata which provide the speech Web document with information on the external synchronization document.
- the metadata used is a <metasync> tag having attributes defined as shown in Table 2.
- the ⁇ metasync> tag should be positioned in the speech Web document and it provides metadata to the ⁇ esync> tags stored in the external document.
- the entire operation mechanism is as shown in Fig. 3. That is, the synchronization document and the related <esync> tags are interpreted through the <metasync> tag described in the speech Web document, and then the related BWS document is simultaneously loaded and rendered.
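Since Tables 1 and 2 are not reproduced in this text, the following is only an illustrative sketch of the mechanism just described: a <metasync> tag inside the speech Web document points at the external '.sync' document, and each <esync> entry in that document pairs a speech dialogue with the BWS page to be loaded alongside it. All attribute names and file names here are assumptions, not the patented syntax.

```xml
<!-- In the speech Web document: metadata pointing at the external
     synchronization document (attribute names are assumed). -->
<metasync src="service.sync"/>

<!-- service.sync (external document): each esync entry pairs a VoiceXML
     dialogue with the visual (BWS) document that is loaded and rendered
     at the same time. -->
<esync dialog="service_main_intro" href="main.html"/>
<esync dialog="hotnews_intro" href="hotnews.html"/>
```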
- the ⁇ isync> tag indicates a synchronization method of a document. Differently from the ⁇ esync> tag, it is not authored in a separate document but it is formed by directly describing related synchronization tags within a predetermined form.
- the form includes a <form> tag and a <menu> tag of the VoiceXML in a speech Web document. This supports the synchronization that occurs when a predetermined form of the speech Web document should be synchronized with a BWS Web document and when a predetermined document needs to be transited.
- when there are a plurality of forms in one speech Web document and each form requires BWS documents having multiple pages and a synchronized operation with those BWS documents, this can be resolved by describing related <isync> tags in each form.
- the tag ⁇ isync> may be described in tags ⁇ link> or ⁇ goto> of the VoiceXML and secure synchronized transit.
- the attributes of the <isync> tag for realizing synchronization are as shown in the following Table 3.
- the following shows an example of synchronizing an application document authored using the <esync> and <isync> tags.
- when the "service_main_intro" dialogue is output by speech, a corresponding HTML page that affects the entire document, for example, a main page of the entire service, is synchronized at the same time the dialogue is executed.
- when the "hotnews_intro" dialogue is executed, the corresponding BWS Web document is loaded together with it.
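The original listing for this example is not reproduced in this text; the following reconstruction is a sketch under assumed tag, attribute and file names. The "service_main_intro" dialogue is bound to the service main page at the application level (via <esync>, e.g. in an external synchronization document), while the "hotnews_intro" form carries its own inline <isync> to its BWS page.

```xml
<!-- Sketch of the example: a speech Web document whose dialogues are
     synchronized with BWS (HTML) pages. The synchronization element
     names and attributes are assumptions. -->
<vxml version="2.0">
  <!-- Application-level synchronization: while this dialogue is spoken,
       the main page of the entire service is loaded and rendered. -->
  <form id="service_main_intro">
    <block>
      <prompt>Welcome to the mobile news service.</prompt>
      <goto next="#hotnews_intro"/>
    </block>
  </form>

  <!-- Inline synchronization: entering this dialogue also loads the
       corresponding BWS Web document. -->
  <form id="hotnews_intro">
    <isync href="hotnews.html"/>
    <block>
      <prompt>Here are today's top stories.</prompt>
    </block>
  </form>
</vxml>
```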
- the ⁇ fsync> tag is needed for inter-form synchronization between the speech Web document and the BWS Web document and it processes the user input data.
- the concept of the <fsync> tag is similar to document synchronization. It signifies that, when user input data are processed through speech recognition in the speech Web document, the processed data are transferred to and reflected in the <input> tag of the BWS document. Conversely, when data are input by the user on the BWS side, the content of the input is reflected in the <field> tag of the speech Web document.
- the ⁇ fsync> tag is a sort of executable contents. It may be positioned within the ⁇ form> of the VoiceXML or it may exist independently. If any, the scope of the ⁇ fsync> tag is limited to a document. When the ⁇ fsync> has a global scope over all documents, it should be specified in a root document. Then, it may be activated in all documents.
- the ⁇ field' attribute When the ⁇ field' attribute is not specified, it means that the ⁇ field> of a corresponding form is an object to be synchronized. In this case, the form should have only one ⁇ field> tag. If there are a plurality of ⁇ field> tags in one form, its attributes should be specified necessarily. Also, each field should have a unique name.
- the ⁇ fsync> tag has the attributes shown in the following Table 4 and it should be in charge of synchronization between forms.
Attribute | Function |
---|---|
field | This signifies a <field> name of a VoiceXML |
- Speech data input from the user should be updated in the <field> tag of the VoiceXML and the <input> tag of the BWS HTML.
- Visual data input from the user, such as data entered through a keyboard or a pen, should be updated in the <input> tag of the HTML and the <field> tag of the VoiceXML simultaneously.
- Visual data input from the user should satisfy the guard condition of the <field> tag of the VoiceXML.
- the ⁇ field> tag of VoiceXML should be matched one- to-one with the ⁇ input> tag of HTML in the moment when the inputted data are about to be reflected.
- the form synchronization should be carried out in parallel with the document synchronization. That is, the <field> or <input> tag to be synchronized may be validly updated only in a document that has already been synchronized.
- the tags of the two modules, which should receive the input data, should both exist in the synchronized documents. When they are described in an external document, only the 'root' document is allowed for general synchronization.
- the data should be mutually inputted only in the form of an activated speech Web document.
- Fig. 4 is a block view describing a Digital Multimedia Broadcasting (DMB) system providing a broadcasting Web sites (BWS) service capable of simultaneous speech input/output.
- DMB Digital Multimedia Broadcasting
- the DMB system for providing speech-based BWS service capable of simultaneous speech input/output can be divided into a DMB transmission part and a DMB reception part based on a DAB system.
- the DMB transmission part includes a content data generator 110, a multimedia object transfer server (MOT) 120, and a DMB transmitting system 130.
- the content data generator 110 generates speech contents (speech Web documents) and BWS contents (visual Web documents).
- the MOT server 120 transforms the directory and file objects of the speech contents and BWS contents into the MOT protocol before they are transmitted.
- the DMB transmitting system 130 multiplexes the respective MOT data of the transformed MOT protocol, which include both speech Web documents and visual Web documents, into different directories of the same sub-channel or into different sub-channels, and broadcasts them through a DMB broadcasting network.
- the present invention is not limited to them and a speech Web document and a visual Web document may be generated in an external device and transmitted from the external device.
- the DMB broadcasting reception part, i.e., the DMB receiving block 200, includes a DMB baseband receiver 210, an MOT decoder 220, and a DMB integrated Web browser 230.
- the DMB baseband receiver 210 receives DMB broadcasting signals from the DMB broadcasting network based on the DAB system, performs decoding for corresponding subchannels, and outputs data of the respective sub-channels.
- the MOT decoder 220 decodes packets transmitted from the DMB baseband receiver 210 and restores MOT objects.
- the DMB integrated Web browser 230 executes the restored MOT objects, which include directories and files, independently or based on a corresponding synchronization method.
- the restored objects include visual Web documents and speech Web documents related to the visual Web documents.
- the DMB integrated Web browser 230 analyzes the aforementioned synchronization tags when a synchronization event is generated and executes the synchronization function based on the synchronization tags.
- Fig. 5 is a block view illustrating an integrated Web browser of Fig. 4.
- the integrated Web browser 230 includes a speech Web browser 233, a BWS browser 235, and a synchronization management module 231.
- the speech Web browser 233 drives a speech markup language extended from the VoiceXML.
- the BWS browser 235 drives Web pages based on the HTML.
- the synchronization management module 231 manages synchronization between the speech Web browser 233 and the BWS browser 235.
- the speech Web browser 233 sequentially drives Web pages authored in the VoiceXML-based speech markup language, outputs speech to the user, and processes user input data through a speech device.
- the BWS browser 235 drives the Web pages authored in the HTML language defined in the DAB/DMB specifications and displays input/output on a screen, just as commercial browsers.
- the synchronization management module 231 receives synchronization events generated in the speech Web browser 233 and the BWS browser 235 and synchronizes corresponding pages and forms of each page based on the pre-defined synchronization protocol (synchronization tags).
- Examples of the DMB receiving block 200 include a personal digital assistant (PDA), a mobile communication terminal, and a settop box for a vehicle that can receive and restore DAB and DMB service.
- PDA personal digital assistant
- the method of the present invention can be realized in a computer-readable recording medium, such as CD-ROM, RAM, ROM, floppy disks, hard disks, magneto-optical disks and the like. Since the process can be easily implemented by those skilled in the art to which the present invention pertains, further description will not be provided herein.
Abstract
Description
Claims
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR20050111238 | 2005-11-21 | ||
PCT/KR2006/004901 WO2007058517A1 (en) | 2005-11-21 | 2006-11-21 | Method and apparatus for synchronizing visual and voice data in dab/dmb service system |
Publications (2)
Publication Number | Publication Date |
---|---|
EP1952629A1 true EP1952629A1 (en) | 2008-08-06 |
EP1952629A4 EP1952629A4 (en) | 2011-11-30 |
Family
ID=38048864
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP06823659A Ceased EP1952629A4 (en) | 2005-11-21 | 2006-11-21 | Method and apparatus for synchronizing visual and voice data in dab/dmb service system |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP1952629A4 (en) |
KR (1) | KR100862611B1 (en) |
WO (1) | WO2007058517A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111125065A (en) * | 2019-12-24 | 2020-05-08 | 阳光人寿保险股份有限公司 | Visual data synchronization method, system, terminal and computer readable storage medium |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100862611B1 (en) * | 2005-11-21 | 2008-10-09 | 한국전자통신연구원 | Method and Apparatus for synchronizing visual and voice data in DAB/DMB service system |
KR100902732B1 (en) * | 2007-11-30 | 2009-06-15 | 주식회사 케이티 | Proxy, Terminal, Method for processing the Document Object Model Events for modalities |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010023436A1 (en) * | 1998-09-16 | 2001-09-20 | Anand Srinivasan | Method and apparatus for multiplexing seperately-authored metadata for insertion into a video data stream |
US20020194388A1 (en) * | 2000-12-04 | 2002-12-19 | David Boloker | Systems and methods for implementing modular DOM (Document Object Model)-based multi-modal browsers |
WO2003067413A1 (en) * | 2002-02-07 | 2003-08-14 | Sap Aktiengesellschaft | Multi-modal synchronization |
US20030182622A1 (en) * | 2002-02-18 | 2003-09-25 | Sandeep Sibal | Technique for synchronizing visual and voice browsers to enable multi-modal browsing |
US20030187944A1 (en) * | 2002-02-27 | 2003-10-02 | Greg Johnson | System and method for concurrent multimodal communication using concurrent multimodal tags |
US20040128342A1 (en) * | 2002-12-31 | 2004-07-01 | International Business Machines Corporation | System and method for providing multi-modal interactive streaming media applications |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11215490A (en) * | 1998-01-27 | 1999-08-06 | Sony Corp | Satellite broadcast receiver and its method |
JP2001008173A (en) * | 1999-06-21 | 2001-01-12 | Mitsubishi Electric Corp | Data transmission equipment |
JP2001267945A (en) * | 2000-03-21 | 2001-09-28 | Clarion Co Ltd | Multiplex broadcast receiver |
KR20040063373A (en) * | 2003-01-07 | 2004-07-14 | 예상후 | Method of Implementing Web Page Using VoiceXML and Its Voice Web Browser |
KR100561228B1 (en) * | 2003-12-23 | 2006-03-15 | 한국전자통신연구원 | Method for VoiceXML to XHTML+Voice Conversion and Multimodal Service System using the same |
KR100629434B1 (en) * | 2004-04-24 | 2006-09-27 | 한국전자통신연구원 | Apparatus and Method for processing multimodal web-based data broadcasting, and System and Method for receiving multimadal web-based data broadcasting |
KR100862611B1 (en) * | 2005-11-21 | 2008-10-09 | 한국전자통신연구원 | Method and Apparatus for synchronizing visual and voice data in DAB/DMB service system |
- 2006-11-20 KR KR1020060114402A patent/KR100862611B1/en not_active IP Right Cessation
- 2006-11-21 EP EP06823659A patent/EP1952629A4/en not_active Ceased
- 2006-11-21 WO PCT/KR2006/004901 patent/WO2007058517A1/en active Application Filing
Non-Patent Citations (4)
Title |
---|
"Digital Audio Broadcasting (DAB); Broadcast website; Part 3: TopNews basic profile specification European Broadcasting Union Union Européenne de Radio-Télévision EBU·UER; ETSI TS 101 498-3", IEEE, LIS, SOPHIA ANTIPOLIS CEDEX, FRANCE, vol. BC, no. V2.1.1, 1 October 2005 (2005-10-01), XP014032284, ISSN: 0000-0001 * |
"Digital Audio Broadcasting (DAB); Multimedia Object Transfer (MOT) protocol; Draft EN 301 234", IEEE, LIS, SOPHIA ANTIPOLIS CEDEX, FRANCE, vol. BC, no. V1.2.1, 1 September 1998 (1998-09-01), XP014003046, ISSN: 0000-0001 * |
AMANN N ET AL: "Multimodal access position paper", INTERNET CITATION, 26 November 2001 (2001-11-26), XP002244151, Retrieved from the Internet: URL:http://www.w3.org/2002/mmi/2002/siemens-26nov01.pdf [retrieved on 2003-06-12] * |
See also references of WO2007058517A1 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111125065A (en) * | 2019-12-24 | 2020-05-08 | 阳光人寿保险股份有限公司 | Visual data synchronization method, system, terminal and computer readable storage medium |
CN111125065B (en) * | 2019-12-24 | 2023-09-12 | 阳光人寿保险股份有限公司 | Visual data synchronization method, system, terminal and computer readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
KR100862611B1 (en) | 2008-10-09 |
EP1952629A4 (en) | 2011-11-30 |
WO2007058517A1 (en) | 2007-05-24 |
KR20070053627A (en) | 2007-05-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP1143679B1 (en) | A conversational portal for providing conversational browsing and multimedia broadcast on demand | |
US6715126B1 (en) | Efficient streaming of synchronized web content from multiple sources | |
US9300505B2 (en) | System and method of transmitting data over a computer network including for presentations over multiple channels in parallel | |
JP3880517B2 (en) | Document processing method | |
US8645134B1 (en) | Generation of timed text using speech-to-text technology and applications thereof | |
KR20050100608A (en) | Voice browser dialog enabler for a communication system | |
JP4350868B2 (en) | Interactive broadcasting system using multicast data service and broadcast signal markup stream | |
US11197048B2 (en) | Transmission device, transmission method, reception device, and reception method | |
KR100833500B1 (en) | System and Method to provide Multi-Modal EPG Service on DMB/DAB broadcasting system using Extended EPG XML with voicetag | |
JP2003044093A5 (en) | ||
CN101617536A (en) | Represent the method for at least one content of serving and relevant equipment and computer program to terminal transmission from server | |
EP1952629A1 (en) | Method and apparatus for synchronizing visual and voice data in dab/dmb service system | |
KR100513045B1 (en) | Apparatus and Method for Providing EPG based XML | |
EP2447940B1 (en) | Method of and apparatus for providing audio data corresponding to a text | |
KR100576546B1 (en) | Data service apparatus for digital broadcasting receiver | |
Lee et al. | Mobile multimedia broadcasting applications: Speech enabled data services | |
EP1696342A1 (en) | Combining multimedia data | |
Kim et al. | An Extended T-DMB BWS for User-friendly Mobile Data Service | |
EP1696341A1 (en) | Splitting multimedia data | |
Matsumura et al. | Restoring semantics to BML content for data broadcasting accessibility | |
Guo et al. | A method of mobile video transmission based on J2ee | |
JP2002182684A (en) | Data delivery system for speech recognition and method and data delivery server for speech recognition | |
EP1958110A1 (en) | Method for authoring location-based web contents, and appapatus and method for receiving location-based web data service in digital mobile terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | PUAI | Public reference made under Article 153(3) EPC to a published international application that has entered the European phase | Free format text: ORIGINAL CODE: 0009012 |
2008-05-21 | 17P | Request for examination filed | |
 | AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR |
2011-11-03 | A4 | Supplementary search report drawn up and despatched | |
 | RIC1 | Information provided on IPC code assigned before grant | Ipc: H04H 60/82 20080101ALI20111027BHEP; Ipc: G06F 17/30 20060101ALI20111027BHEP; Ipc: H04M 3/493 20060101ALI20111027BHEP; Ipc: H04N 7/00 20110101AFI20111027BHEP |
 | DAX | Request for extension of the European patent (deleted) | |
2012-07-25 | 17Q | First examination report despatched | |
 | REG | Reference to a national code | Ref country code: DE; Ref legal event code: R003 |
 | STAA | Information on the status of an EP patent application or granted EP patent | Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED |
2014-02-05 | 18R | Application refused | |