CN1393107A - Transcript triggers for video enhancement - Google Patents
- Publication number
- CN1393107A CN01802881A
- Authority
- CN
- China
- Prior art keywords
- supplementary
- video program
- program
- transcript text
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/4104—Peripherals receiving signals from specially adapted client devices
- H04N21/4126—The peripheral being portable, e.g. PDAs or mobile phones
- H04N21/41265—The peripheral being portable, e.g. PDAs or mobile phones having a remote control device for bidirectional communication between the remote control device and client device
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/16—Analogue secrecy systems; Analogue subscription systems
- H04N7/162—Authorising the user terminal, e.g. by paying; Registering the use of a subscription channel, e.g. billing
- H04N7/163—Authorising the user terminal, e.g. by paying; Registering the use of a subscription channel, e.g. billing by receiver means only
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/73—Querying
- G06F16/735—Filtering based on additional data, e.g. user or group profiles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/78—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/783—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
- G06F16/7844—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using original textual content or text extracted from visual content or transcript of audio data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/235—Processing of additional data, e.g. scrambling of additional data or processing content descriptors
- H04N21/2353—Processing of additional data, e.g. scrambling of additional data or processing content descriptors specifically adapted to content descriptors, e.g. coding, compressing or processing of metadata
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
- H04N21/44008—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/442—Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
- H04N21/44213—Monitoring of end-user related data
- H04N21/44222—Analytics of user selections, e.g. selection of programs or purchase activity
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/4508—Management of client data or end-user data
- H04N21/4532—Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/454—Content or additional data filtering, e.g. blocking advertisements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/45—Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
- H04N21/462—Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
- H04N21/4622—Retrieving content or additional data from different sources, e.g. from a broadcast channel and the Internet
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/475—End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
- H04N21/4755—End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data for defining user preferences, e.g. favourite actors or genre
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application
- H04N21/4782—Web browsing, e.g. WebTV
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/478—Supplemental services, e.g. displaying phone caller identification, shopping application
- H04N21/4786—Supplemental services, e.g. displaying phone caller identification, shopping application e-mailing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/812—Monomedia components thereof involving advertisement data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/8126—Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts
- H04N21/8133—Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts specifically related to the content, e.g. biography of the actors in a movie, detailed information about an article seen in a video program
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/84—Generation or processing of descriptive data, e.g. content descriptors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/858—Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot
- H04N21/8586—Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot by using a URL
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/16—Analogue secrecy systems; Analogue subscription systems
- H04N7/173—Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
- H04N7/17309—Transmission or handling of upstream communications
- H04N7/17318—Direct or substantially direct transmission and handling of requests
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Social Psychology (AREA)
- Theoretical Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Library & Information Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Business, Economics & Management (AREA)
- Marketing (AREA)
- Human Computer Interaction (AREA)
- Computational Linguistics (AREA)
- Computer Security & Cryptography (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Television Systems (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
A system and method for retrieving information supplemental to video programming. Transcript text is searched for terms of interest and information associated with the terms is identified. Depending upon a user profile and the category of video segment being viewed, the supplemental information is formatted for display. Over time, the rules for associating the supplemental information with the terms of interest may be modified using a learning model.
Description
Background of the Invention
1. Field of the Invention
The present invention relates to the field of media technology, and in particular to video and associated transcript text.
2. Cross-Reference to Related Applications
The present invention uses transcript text to associate a video signal with supplemental information and to extract and augment textual information, as described in commonly assigned co-pending Application No. 09/351086, filed July 9, 1999, which is incorporated herein by reference.
3. Related Art
In recent years, the number of media sources has grown, and the amount of information available from each source continues to increase, exceeding what consumers can absorb. Most consumers have neither the time nor the inclination to sift through large volumes of information for content relevant to their needs. So-called "push technology" has therefore been developed. Web applications such as PointCast or BackWeb, or updated web browsers, can ask the user which kinds of information and which websites interest them. A web server then "pushes" information of interest to the user, rather than waiting for the user to request it. This is done periodically and unobtrusively.
Meanwhile, as media technology has advanced, the boundaries between video, audio and other media have blurred. These advances make it possible to deliver Internet information and other material to the consumer together with traditional television programming on a video display. Because the Internet has become a tool of electronic commerce, consumers will view combinations of video, audio and textual information on the same or related subjects. Consumers are familiar with the concept of hyperlinks, and with "drilling down" on the World Wide Web to retrieve additional information about a topic they are viewing.
At present, such additional information can be retrieved using closed-caption text, audio, and automatic story segmentation and identification. The Broadcast News Editor (BNE) from the Mitre Corporation makes this possible by automatically dividing a news broadcast into story segments and providing, for each segment, a summary drawn from the first line of the closed-caption text associated with that segment. Keywords are also identified from the closed-caption text or audio information for each story segment.
Similarly, the Broadcast News Navigator (BNN) from Mitre ranks program segments according to the number of occurrences of keywords matching a search the consumer has selected, thereby locating story segments the consumer is likely to find interesting. However, both BNN and BNE require the consumer to have a definite search in mind, which is typically not the case when channel surfing.
Patents that provide television program supplemental information to the user include U.S. Patent No. 5,809,471 to Brodsky, "Retrieval of additional information not found in interactive TV or telephony signal by application using dynamically extracted vocabulary," and U.S. Patent No. 6,005,565 to Legall, "Integrated search of electronic program guide, Internet and other information resources," among others. In the '471 patent, keywords are extracted from the television program or closed-caption text to produce a dynamically changing dictionary. The user requests information based on a word seen or heard in the television broadcast. The user's request is compared with the dictionary, and when a matching word is found, supplemental information is retrieved for display.
According to the '565 patent, the user selects the topics or sources to be searched. Based on the user's input, a search tool searches the electronic program guide, the World Wide Web and other information resources and displays the results. Both the '471 and '565 patents require the user to supply keywords of interest. Neither patent associates retrieved supplemental information with the overall content of the program (e.g., a news program), as opposed to the topic of the program (e.g., a stock market report).
Summary of the Invention
It would therefore be highly desirable to provide a method and a system that use transcript text to automatically supply supplemental multimedia information enhancing the consumer's television-viewing experience. So-called transcript text includes at least one of the following: videotext, text produced by speech-recognition software, program screen text, electronic program guide information, and closed-caption text containing all or part of the program information. Videotext is superimposed or overlay text displayed in the foreground against an image background. For example, a location name often appears as videotext. Videotext can also take the form of embedded text; for example, a street name can be recognized and extracted from the video image.
It would also be highly desirable to provide supplemental information that is specific not only to an individual consumer's known interests or profile, but also to the content of the program being watched. For example, a news segment might be associated with a link to a Cable News Network (CNN) web page, while an advertisement might be associated with other product information. The method and system use a learning model to continuously generate new associations between television content and other media content, while determining what supplemental information, and of which type, to display. In this way, supplemental information can be seamlessly integrated with the television program without disturbing viewers or requiring any action from them.
To meet these needs, the invention provides a system (that is, a method, an apparatus, and computer-executable process steps) for retrieving supplemental information related to a video segment and presenting it on the consumer's video display. The system includes a recognition engine that determines whether the closed-caption text accompanying the video segment, or text related to other transcript text, contains expanded keywords used to retrieve supplemental information. If a keyword is found, supplemental information is displayed according to stored rules, the information being chosen from a larger body of information according to the user profile and the context of the segment. The transcript-text keywords may also be expanded and then compared with the user profile. The context of the segment is determined automatically on the basis of classification data, including program category, object tracking, natural-language processing of the transcript text, and/or electronic program guide information.
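The recognition flow just summarized (matching profile trigger words against transcript keywords, then selecting supplemental information by segment context) might be sketched roughly as follows. All rule entries, function names and data here are illustrative assumptions, not the patent's actual implementation.

```python
# Stored association rules: (trigger word, segment context) -> retrieval action.
# These entries are invented examples in the spirit of the description.
ASSOCIATION_RULES = {
    ("clint eastwood", "advertisement"): "fetch movie description",
    ("clint eastwood", "news"): "fetch biography page",
    ("kosovo", "documentary"): "fetch regional map",
}

def find_triggers(transcript_keywords, profile_triggers):
    """Return the profile trigger words present among the transcript keywords."""
    kw = {k.lower() for k in transcript_keywords}
    return [t for t in profile_triggers if t.lower() in kw]

def select_supplemental(transcript_keywords, profile_triggers, context):
    """Apply the stored rules for every trigger found in the current segment."""
    actions = []
    for trigger in find_triggers(transcript_keywords, profile_triggers):
        action = ASSOCIATION_RULES.get((trigger.lower(), context))
        if action:
            actions.append(action)
    return actions

print(select_supplemental(
    ["Clint Eastwood", "premiere"],
    ["Clint Eastwood", "hockey"],
    "advertisement"))  # ['fetch movie description']
```

The same trigger word yields different actions in different contexts, which is the key distinction the summary draws against the '471 and '565 patents.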
The information is presented in a window, or unobtrusively superimposed on the main video segment. The information can also be forwarded, for example to a hand-held device or an e-mail account, stored in secondary storage, or cached in local memory. For story classification, the system automatically identifies the beginning and end of each segment, so that the subset of rules corresponding to the context of the program segment can be updated.
In another aspect of the invention, the set of rules that associates supplemental information with the video segment being watched is dynamic and is based on a learning model. The rule set is updated from a group of sources, including third-party sources, and makes the information available to the user according to the user's selections and behavior patterns. In one embodiment, the rules are transmitted from a personal digital assistant (PDA) with a wireless connection.
This summary is provided so that the reader can quickly understand the nature of the invention. For a more complete understanding, reference should be made to the following detailed description of the preferred embodiments and to the accompanying drawings.
Brief Description of the Drawings
Fig. 1 depicts a system employing the present invention.
Fig. 2 shows the units of the processor included in the system.
Figs. 3a and 3b are flow charts illustrating the operation of the invention.
Fig. 4 is a table showing supplemental-information trigger words (triggers) for a given video segment.
Fig. 4a shows how keywords and trigger words are expanded.
Fig. 5 shows an embodiment of the learning model of the invention.
Fig. 6 shows how the association-rule database is updated and maintained for retrieving supplemental information.
Fig. 7 shows how supplemental information is displayed.
Fig. 8 shows an embodiment of the invention employing a set-top box.
Fig. 9 shows another embodiment of the invention employing a television display.
Preferred Embodiments
Fig. 1 shows a representative embodiment of a system employing the present invention. In this embodiment, multimedia processor system 6 comprises processor 12, memory 10, input/output circuitry 8, and other circuits and elements known to those skilled in the art. An analog video signal or a digital stream is input to receiver 2. The stream may use MPEG or another proprietary broadcast format.
Under the MPEG standard, the video data is coded using the discrete cosine transform, divided into variable-length coded data packets, and transmitted. One MPEG standard, MPEG-2, is described in "Coding of Moving Pictures and Associated Audio," International Organization for Standardization, Motion Picture Experts Group, document ISO/IEC JTC1/SC29/WG11, July 1996. MPEG is only one example of a format that can be used with this system.
The transcript text carried in video signal 162 is extracted by transcript extractor 4 from analog video signal line 21 or from the user-data fields of the MPEG stream. Transcript extractor 4 also segments the video program. The transcript text of a particular frame can be stored in memory 10, or it can be analyzed as a real-time data stream.
Electronic program guide (EPG) information is also stored in memory 10. This information is downloaded at the user's request or at preprogrammed times, and provides television program information covering several days or weeks. It is transmitted by the local analog television broadcaster in the vertical blanking interval, or on a "barker" channel via MPEG-2 private tables; it can also be delivered over a telephone line or wirelessly. The EPG data includes information such as program category and subcategory, ratings, and a brief program description. The EPG data is used to determine the type of program: for example, whether it is a news program, a sponsored program, a soap opera, or a travel documentary.
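As a rough illustration of how the stored EPG data could drive this program-type determination, the following sketch uses assumed field names and categories; it is not an actual EPG schema.

```python
from dataclasses import dataclass

@dataclass
class EpgEntry:
    """One program's guide record; field names are illustrative assumptions."""
    title: str
    category: str       # e.g. "news", "soap opera", "documentary"
    subcategory: str    # e.g. "travel", or empty if none
    rating: str
    description: str

def program_type(entry: EpgEntry) -> str:
    """Combine category and subcategory into the type used by the rules."""
    if entry.subcategory:
        return f"{entry.subcategory} {entry.category}"
    return entry.category

guide = EpgEntry("Wild Coasts", "documentary", "travel", "G",
                 "A tour of remote shorelines.")
print(program_type(guide))  # travel documentary
```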
Personal profile information, in the form of keywords or "trigger words" indicating the user's interests, is stored in secondary storage 18 and available in memory 10. Typical trigger words might be "Clint Eastwood," "environment," "presidential election," or "hockey." In one aspect of the invention, these trigger words are expanded to include synonyms and related terms.
As is well known in the art, a profile of each user's interests is built by user input, automatically, or by a combination of both methods. For example, the TiVo™ personal television service allows users to indicate which programs they like using the "thumbs up" and "thumbs down" buttons on the TiVo™ remote control. TiVo™ then uses this information to select other related programs the user may subsequently wish to watch.
When a trigger word matches a keyword contained in the transcript text, supplemental data is retrieved, for example from the Internet 14 or a dedicated source 13 via communication device 17. Another source of supplemental data is, for example, another channel. The data is then shown on display 16 as a World Wide Web page or part of one, or superimposed on the main video in an unobtrusive manner. Alternatively, a simple uniform resource locator (URL) or informational message can be returned to the viewer.
The trigger words, along with the rules that associate supplemental data such as World Wide Web pages with them, are likewise stored in secondary storage 18 and available from memory 10. These rules are established from a default profile, which is updated according to the user's behavior or by a questionnaire that prompts the user to enter interest information and then generates a rule set. The rules can also be received via communication device 17 from a mobile device 15 such as a personal digital assistant (PDA) or cellular telephone. Depending on the context of the program segment being watched, the rules associate supplemental information with the trigger words. For example, if the program is an advertisement for a new Clint Eastwood movie, the context is advertising, and the supplemental data retrieved is a description of the movie being promoted. If the program segment describes a traffic accident involving Clint Eastwood, the context is news, and the supplemental data retrieved is a biography web page, or a link to www.cnn.com for more information about why he is in the news.
As mentioned above, the association rules may also depend on combinations of EPG fields. For example, if "Clint Eastwood" appears in the actor field of the EPG data, the context is advertising, and the closed-caption data reads "We'll return to Clint Eastwood and A Fistful of Dollars right after these messages," the association rule retrieves supplemental data related to the movie being shown. On the other hand, if "Clint Eastwood" does not appear in the actor field of the EPG data, the context is advertising, and the closed-caption data reads "High Plains Drifter, starring Clint Eastwood, will air on Friday," the association rule retrieves supplemental data such as the movie's broadcast time. These distinctions can be made, for example, by comparing the actor field with the text extracted from the closed-caption data: if they match, the program being advertised is the program being watched. Natural-language processing can also be used to recognize key phrases such as "we'll return," which likewise indicate that the program being advertised is the program being watched.
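The advertised-program test described above (comparing the EPG actor field against the closed-caption text, or spotting "returning to the program" cue phrases) could be sketched as follows; the field names and cue phrases are illustrative assumptions.

```python
# Cue phrases suggesting the ad interrupts the program being watched.
CUE_PHRASES = ("we'll be back", "we'll return", "right after these messages")

def ad_is_for_current_program(epg_actors, caption_text):
    """True if the advertised program appears to be the program being watched."""
    text = caption_text.lower()
    # An actor listed in the current program's EPG data appears in the captions,
    if any(actor.lower() in text for actor in epg_actors):
        return True
    # or a "returning to the program" cue phrase appears.
    return any(phrase in text for phrase in CUE_PHRASES)

# The advertised movie stars an actor listed for the current program:
print(ad_is_for_current_program(
    ["Clint Eastwood"],
    "More Clint Eastwood coming up."))  # True
```

When this function returns true, the rule set would retrieve data about the movie being shown; when false, data such as the advertised program's air time.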
In addition, if "Clint Eastwood" does not appear in the actor field of the EPG data, the context is advertising, and the closed-caption data says "Coming up, Clint Eastwood's latest movie," the association rule retrieves supplemental data by linking to a Clint Eastwood home page to find more information about the movie.
The association rules also determine the type of media to retrieve. For example, if "Kosovo" is a trigger word and the program is sponsored by National Geographic magazine, the association rule retrieves a map of the region. If the program segment context is news and the word "war" appears in the EPG data, the association rule retrieves the latest political history of the region.
In other embodiments, the system comprises a video display with processing and memory, or a separate set-top box for processing and storing information. These embodiments may include a communication device or an interface to one. Reception of the video signal and of Internet information can take place via wireless, satellite, cable, or other media. The system can instead transmit the supplemental information as an output signal via communication device 17, over a radio transmitter or by wireless means, with the signal embedded in carrier wave 160. Supplemental information can be forwarded to an e-mail contact list, and/or downloaded to the voice-mail facility of a mobile device 15 such as a cellular telephone, and/or forwarded to a hand-held device such as a Palm Pilot®.
Fig. 2 is a schematic diagram of the processor unit. Profile generator 50 produces and stores a profile of the user's known interests, including trigger-word information or keywords of interest. This is accomplished, for example, by user input, by having the user answer a series of questions, by generating a default profile, or by monitoring user activity to discover points of interest, subject to modification by the user. Rule generator 52 produces the association rules, which logically combine each trigger word with the various contexts and determine which supplemental information is shown to the user. Recognition engine 54 compares each trigger word with the transcript text and determines whether the trigger word is present as a keyword in the textual information. When a keyword matching a trigger word is found, retrieval unit 56 retrieves the supplemental information, and formatter 58 formats the data for display. Context monitor 60 monitors the context to see whether it has changed because a new program segment is being shown. When the context changes, context monitor 60 accesses secondary storage 18 and retrieves the new subset of association rules.
A data updater 62 updates the supplemental information, for example to incorporate new web sites or to reflect the results of various search engines. A repeat counter 64 counts how often a given item of information is requested, and a click-stream monitor 66 measures how often the user requests supplemental data. These intelligent agents work together with a retrieval modifier 68 to adjust the type and amount of information presented to the user.
Figs. 3a and 3b are flow charts illustrating the method of the invention. First, in step S201, an incoming video signal is input to the receiver. The video signal may be in analog or digital form. A screen-text extractor, either separate from or combined with the processor, extracts the screen text in step S202 and determines the beginning and end of each video segment. Next, in step S203, the processor extracts keywords from the screen text. Keyword extraction methods are well known in the art; one such method is described in U.S. Patent No. 5,809,471 to Brodsky, "Retrieval of Additional Information Not Found in Interactive TV or Telephony Signals by Application Specific Dynamically Generated Dictionary". As shown in Fig. 4a, the keywords 152 extracted from the screen text 154 are expanded with synonyms or related keywords, as illustrated at step S204 in Fig. 3a, to obtain more meaningful and complete results. A thesaurus or a database such as WordNet may be used for this purpose. WordNet is an on-line lexical database whose design was inspired by psycholinguistic theories of human lexical memory; words are organized into synonym sets ("synsets"), each representing one lexical concept.
Keywords can also be expanded by determining the topic of the screen text. For example, if the keywords "inflation", "Alan Greenspan", and "unemployment rate" occur together, the trigger word "economy" can be inferred from the screen text. Similarly, if the keyword "U.S. President" appears in the screen text, the trigger word "President Clinton" is present.
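Both expansion mechanisms can be sketched in a few lines. The synonym table and topic rules below are toy stand-ins for a lexical resource such as WordNet, not data from the patent.

```python
# Illustrative synonym table: each keyword maps to related keywords.
SYNONYMS = {
    "unemployment": {"jobless rate", "unemployment rate"},
    "inflation": {"price increases"},
}

# Illustrative topic rules: if all listed keywords co-occur, infer the topic trigger word.
TOPIC_RULES = {
    "economy": {"inflation", "alan greenspan", "unemployment rate"},
}

def expand(keyword):
    """Expand one keyword to itself plus its related keywords."""
    return {keyword} | SYNONYMS.get(keyword, set())

def infer_topics(keywords):
    """Infer topic trigger words whose required keywords all appear."""
    kws = {k.lower() for k in keywords}
    return {topic for topic, needed in TOPIC_RULES.items() if needed <= kws}

print(infer_topics({"inflation", "Alan Greenspan", "unemployment rate"}))
```

A real system would draw the synonym sets from a lexical database rather than a hand-written table.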
Special rules can be applied when supplemental data is available in reference tools such as dictionaries and encyclopedias, as shown in Fig. 4 at 114 and 132. In one mode, a trigger word is translated into a different keyword according to the viewer's level of comprehension. For example, if the viewer is a child or a non-native speaker, the trigger word "unemployment" may be translated into the keyword "no work", but not into the keyword "unnecessary". In another mode, the keywords are expanded in the manner described above.
Parental control can be applied at the segment level or at the context level. Parents therefore need not worry when, for example, a commercial unsuitable for children is broadcast during an otherwise suitable cartoon. While the commercial plays, only a substitute frame is shown to the child. The substitute frame can take the form of a toy advertisement, rather than the usual blanked screen. The blocking trigger words can also be expanded to make the blocking more effective. For example, if parents do not want a child to see video clips related to war, the trigger word "war" is expanded into keywords and phrases such as "armed conflict" and "bombing". An example of trigger-word expansion is given in Fig. 4a at 102 and 156.
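A hedged sketch of this segment-level blocking, assuming the blocked-word expansion table and the frame names are purely illustrative:

```python
# Each blocked trigger word expands to related terms, as in the "war" example.
BLOCK_EXPANSION = {"war": {"war", "armed conflict", "bombing"}}

def frame_for_segment(screen_text, blocked, substitute="toy_advertisement"):
    """Return the substitute frame when a blocked term appears, else pass the segment through."""
    text = screen_text.lower()
    for trigger in blocked:
        if any(term in text for term in BLOCK_EXPANSION.get(trigger, {trigger})):
            return substitute
    return "program_video"

print(frame_for_segment("Heavy bombing was reported overnight", {"war"}))
```

In practice the substitute would be an actual frame or clip chosen by the parent, not a string label.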
Returning to Fig. 3a, in step S205 the personal profile containing the trigger words is read. In step S206, the processor compares the keywords obtained from the screen text with the trigger words contained in the user profile. If there is no match, the processor simply continues extracting further screen text.
If there is a match, the context of the video program being broadcast is identified in step S207 of Fig. 3b. This can be done in several ways, using closed-caption data, EPG data, object tracking, or low-level feature extraction based on color, motion, texture, or shape. Natural-language techniques can also be used to extract the context of the program segment from the screen text. For example, Microsoft has developed software that learns by analyzing text, including on-line dictionaries and encyclopedias, and automatically acquires knowledge from that analysis. The knowledge is then used to constrain the interpretation of the word "plane" in a sentence such as "the plane in the air may be dangerous", and to conclude that the sentence concerns aviation rather than woodworking.
The software also uses discourse analysis to determine the structure of the closed-caption text and its context, operating at the level of the discourse. For example, a news program can be identified as such because it typically reports the most important facts first, stating the "who, what, when, where, and how" at the outset. Thus a program beginning "A gun battle involving Clint Eastwood broke out in the streets of Carmel, California at 7 o'clock this morning and was captured on home video by a witness" would be identified as a news story. Context can also be obtained from the genre and sub-genre fields of the EPG data of the type described above.
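A toy stand-in for step S207's context identification, using only the EPG genre field (the patent also describes closed-caption, discourse-analysis, and low-level visual cues, which are omitted here; the field name and genre list are assumptions for the sketch):

```python
KNOWN_CONTEXTS = {"news", "talk show", "commercial", "documentary"}

def identify_context(epg_entry):
    """Map an EPG entry's genre field to a segment context, if recognized."""
    genre = epg_entry.get("genre", "").lower()
    return genre if genre in KNOWN_CONTEXTS else "unknown"

print(identify_context({"genre": "News"}))
```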
Next, in step S208, the association rules are read. The association rules determine, based on the keywords and the context, which supplemental data is retrieved from the stored database. In step S209, the customized display modules are read. These modules let the user limit the type of information to be viewed, and thereby the amount of information. For example, the user may wish to see only the uniform resource locator (URL) of a World Wide Web page, only the page title, a page summary, or the complete page. The user can also select the preferred sources of supplemental information and give those sources higher priority.
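The URL/title/summary/full-page choice of the customized display module can be sketched like this; the dictionary field names are illustrative, not from the patent.

```python
def render(page, mode="url"):
    """Render a retrieved web page at the level of detail the viewer selected."""
    if mode == "url":
        return page["url"]
    if mode == "title":
        return page["title"]
    if mode == "summary":
        return page["summary"]
    return page["body"]  # full page

page = {"url": "www.cnn.com", "title": "Kosovo update",
        "summary": "Talks resume", "body": "full article text"}
print(render(page, "title"))
```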
In step S210, the supplemental data is retrieved from the database stored in memory. The database contains items of interest, or pointers to items of interest, keyed to the trigger words. For example, the database may contain any of the following: names of celebrities and public figures; geographic information such as countries, capitals, and presidents; product and brand names; and category topics.
The database is built, updated, and maintained from a set of sources. These sources include, for example, the Bloomberg web site, encyclopedias, dictionaries, other web sites, and search engines. Information from the EPG and the closed-caption data is also incorporated into the database.
A set of update and purge rules, illustrated in Figs. 5 and 6, is also stored in the database or in a viewer's profile and is maintained to manage the size of the database or profile and to keep its contents current. For example, after an election is over, "stale" entries such as the election results and links to polling and candidate information are deleted.
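A minimal purge-rule sketch: each entry carries an optional expiry date, and entries past that date (such as election results after the election) are removed. The entry layout is an assumption for illustration.

```python
from datetime import date

def purge(entries, today):
    """Keep entries with no expiry, or whose expiry date has not yet passed."""
    return [e for e in entries if e["expires"] is None or e["expires"] >= today]

db = [
    {"item": "election results", "expires": date(2000, 11, 8)},
    {"item": "encyclopedia entry", "expires": None},
]
print([e["item"] for e in purge(db, date(2001, 1, 1))])
```

The patent's purge rules can also be age-based (older than a given number of days, months, or years), which reduces to the same date comparison.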
Returning to Fig. 3b, in step S211 the supplemental information is formatted for display. The information may be presented in a window or unobtrusively superimposed on the main video segment. The information can also be formatted for transfer to a handheld device such as the Palm Pilot™ produced by Palm, Inc., or for transfer to an e-mail account.
Fig. 4 illustrates association rules 100 for several trigger words 102. In this table, the first column shows the trigger words 102 and the remaining columns show the possible contexts 104, 106, 108, 110 of each trigger word. The first association rule begins with the trigger word 102 "Clint Eastwood": when this trigger word from the user profile occurs, one of three different items of supplemental information 116, 118, 120 is retrieved for display, depending on the context in which Clint Eastwood appears in the video segment being watched. Although only one link is drawn from each circle, in practice multiple links may exist in such a table. If Clint Eastwood appears in a commercial, the system links to the www.imdb.com web page and displays it according to the customized display module. If Clint Eastwood appears in a talk show, the segment in which he appears is stored for retrieval 118, and/or a notification signal is sent to the viewer in real time. An off-line notification signal can also be sent after the viewing session, telling the viewer that the segment has been stored.
The notification signal can be generated automatically or manually. A notification signal can also be associated with a topic, so that it is displayed the next time a Clint Eastwood movie is broadcast. If Clint Eastwood appears in a news program, the system links to the www.cnn.com web page. Notification signals have priorities, so the user can select the situations in which notification is desired. For example, the user may wish to see only notifications related to severe weather warnings.
The second association rule 122, for the trigger word 102 "Macedonia", covers four different contexts. If the trigger word "Macedonia" appears in a commercial, the system connects to the www.travel.com web page 130. If Macedonia is the topic of a talk show, the system connects to the entry for "Macedonia" in Compton's Encyclopedia 132. If Macedonia is the topic of a news program, the user is tuned to a radio station broadcasting 134 the program. If Macedonia is the topic of a program sponsored by National Geographic magazine, the system links to www.yahoo.com/maps 136 to display a map of Macedonia.
Association rules 3-5 (124, 126, 128) are to be interpreted in the same way as the examples above. As the table shows, for certain trigger words 102 appearing in the screen text, such as "Meryl Streep", the system provides supplemental information only for specific contexts. For "Meryl Streep", supplemental information is provided only in the talk-show and news contexts. If desired, such a rule can be expanded to apply to a list of famous actors, or to all actors.
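The rule table of Fig. 4 amounts to a mapping from (trigger word, context) to an action. The sketch below encodes the Clint Eastwood and Macedonia examples from the text; the action strings and the Meryl Streep actions are illustrative placeholders, not from the patent.

```python
# (trigger word, context) -> supplemental action; missing pairs mean "do nothing".
RULES = {
    ("Clint Eastwood", "commercial"): "link:www.imdb.com",
    ("Clint Eastwood", "talk show"): "store-segment+notify",
    ("Clint Eastwood", "news"): "link:www.cnn.com",
    ("Macedonia", "commercial"): "link:www.travel.com",
    ("Macedonia", "talk show"): "encyclopedia:Macedonia",
    ("Macedonia", "news"): "tune:radio",
    ("Macedonia", "documentary"): "link:www.yahoo.com/maps",
    ("Meryl Streep", "talk show"): "store-segment+notify",  # placeholder action
    ("Meryl Streep", "news"): "store-segment+notify",       # placeholder action
}

def lookup(trigger_word, context):
    """Return the action for this trigger word in this context, or None."""
    return RULES.get((trigger_word, context))

print(lookup("Macedonia", "news"))
```

A sparse mapping like this naturally expresses rules such as Meryl Streep's, which fire only in some contexts.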
Fig. 4a illustrates how trigger words and keywords are expanded to retrieve supplemental information. For the example screen text 150 shown in the figure, the keyword 152 "Lyme disease" is extracted from the screen text 150. This keyword 152 is then expanded to the related keywords "ticks", "tick bites", "rash", and "deer tick bites". If any of these expanded keywords appears in the screen text, supplemental information related to Lyme disease is retrieved.
Fig. 4a also illustrates how trigger words are expanded. The trigger word 102 "Lyme disease" is expanded 156 to include the related terms "tick bites", "West Nile virus", and "mosquito spray". Thus, if the screen text 150 contains any of the expanded trigger words, the segment is stored.
Fig. 5 illustrates how the customized display modules and association rules are continuously updated with a learning model. A repeat counter 20 records the number of times the user requests the same supplemental data, for example by clicking on a URL. In addition, the retrieval section 56 of the processor shown in Fig. 2 can retrieve more than one item of supplemental information for each segment, and the user can select which of them to view. If the user requests a given item of supplemental data fewer than a predetermined number of times, a retrieval modifier 24 updates the stored association rules 26, either deleting that supplemental data from the rule or substituting a new source into the rule. A click-stream monitor 22 tracks how often the user requests any supplemental data at all. If the user selects supplemental data fewer than a predetermined number of times, the retrieval modifier 24 modifies that user's customized display module 28 so that less information is shown to the user.
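The learning loop above can be sketched in miniature: counters track how often the viewer actually requests each offered item, and items requested fewer than a threshold number of times are dropped from the rule. The threshold and data layout are assumptions for the example.

```python
from collections import Counter

def prune_rule(sources, request_counts, threshold=2):
    """Keep only the supplemental sources the viewer has requested often enough."""
    return [s for s in sources if request_counts[s] >= threshold]

clicks = Counter({"www.imdb.com": 5, "www.travel.com": 0})
print(prune_rule(["www.imdb.com", "www.travel.com"], clicks))
```

The same counts could drive the display-module adjustment, e.g. switching a rarely-used rule from full-page display down to URL-only.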
Fig. 6 illustrates how the dynamic association-rule database is updated and maintained. The database contains items of interest, or pointers to items of interest, which supply the supplemental information when a keyword in the screen text matches a trigger word in the user profile. The database is updated continuously over time to reflect current events and to keep pace with the ever-changing user profiles.
An existing set of data sources 36 illustrates the sources that feed the association-rule database 26. The data source set 36, which includes external data 38 from various public sources, proprietary information, and data from the Internet 14, is updated by a data updater 40 to incorporate, for example, new web sites or the results of various search engines. A set of refresh rules 32 keeps the size of the database within a predetermined range; information is deleted as needed according to an established set of priorities. A set of purge rules 34 is also maintained, specifying when "stale" information is to be deleted. Certain kinds of information are dated, and information older than a predetermined number of days, months, and/or years is deleted.
Fig. 7 illustrates an embodiment in which supplemental information 70 is displayed unobtrusively within the main video segment. The supplemental information appears at the bottom of the image.
Fig. 8 illustrates an embodiment in which a set-top box 75 includes a receiver 2 that receives the video programs and the screen text. A screen-text extractor and segmenter 4 extracts the screen text 150 from the video signal and associates it with video program segments such as commercials or cartoons. A processor system 6 comprises processing units well known in the art: an I/O section 8, a memory 10, and a processor 12. The processor system retrieves supplemental information for the video program from various sources through a communication device 17. Three such sources are drawn as examples: the Internet 14, a proprietary (non-public) database 13, and a mobile device 15 such as a PDA. The communication device 17 may connect to other devices, not shown, by wireless link, wire-line modem, digital subscriber line, or network. Secondary storage 18 stores the supplemental information and the rules used to retrieve it. The set-top box can be connected to a display such as a PC monitor or a television set.
Fig. 9 illustrates another embodiment, in which a television set 80 includes a receiver 2, a screen-text extractor and segmenter 4, a processor system 6, secondary storage 18, a communication device 17, and a display 16. The processor system 6 comprises processing units well known in the art: an I/O section 8, a memory 10, and a processor 12. The television set 80 is connected to the sources of supplemental information through the communication device 17, which connects to the Internet 14, proprietary sources 13, and mobile devices 15.
The present invention has been described with reference to particular illustrative embodiments. The invention is, of course, not limited to the embodiments described herein; those skilled in the art can modify, vary, and improve upon them without departing from the spirit and scope of the invention as set forth in the following claims.
Claims (36)
1. A method of retrieving supplemental information for a video program, comprising the steps of:
receiving the video program (2);
identifying at least one segment in the video program (4);
receiving classification data for said at least one segment (4, 2);
receiving screen text of the video program (4);
determining a user profile (50) for a viewer of the video program;
determining, in combination with the classification data, a set of rules (52) that associate supplemental information with the video program when the screen text and the user profile satisfy a given condition; and
automatically retrieving the supplemental information, on the basis of the set of rules, for display on a display (56).
2. The method of claim 1, wherein the set of rules (100) incorporates information from the user profile (102).
3. The method of claim 2, wherein the user profile contains at least one trigger word (102) identifying a topic of interest to the viewer of the video program.
4. The method of claim 3, wherein the given condition specifies that a recognition engine (54) retrieves the supplemental information only when a keyword in the screen text matches (S206) at least one trigger word (102) in the user profile.
5. The method of claim 1, wherein the screen text comprises closed-caption text, video text, on-screen program text, or electronic program guide information.
6. The method of claim 1, wherein the screen text (150) is produced by speech-recognition software.
7. the method for claim 1 also comprises from a mobile device (15) or a third party source (13) receiving the step of the part of this group rule (100) at least.
8. the process of claim 1 wherein to have at least the pointer of part supplementary and sensing supplementary to be stored in the database (26), perhaps be transmitted to personal digital assistant (15), perhaps be transmitted to an e-mail address (14).
9. the process of claim 1 wherein that the extraction of supplementary (116,118,120) is real-time.
10. the supplementary that the process of claim 1 wherein (116,118,120) is formatted at window (70) and goes up demonstration, perhaps is superimposed upon on the video frequency program of display (16).
11. the supplementary that the process of claim 1 wherein is a Word message (114) or from the webpage of World Wide Web (116).
12. the method for claim 5 also comprises from electronic program guide information (150) being each video frequency program section step of selective rule group (100) automatically.
13. the method for claim 3, also comprise by screen text (150) and carry out natural language processing each video frequency program section, selective rule group (100) automatically is used for determining that whether certain keyword (S203) in the screen text (4) is with the identical step of a trigger word (102) in the user profiles.
14. the method for claim 3, also comprise at least one keyword (S203,152) in definite screen text (150), this at least one keyword (S204,152) is extended to comprises correlation word (154), when this keyword or correlation word are complementary (S206) with at least one trigger word (102) in the user profiles, extract the step of supplementary (S210).
15. the method for claim 3, also comprise the analysis of talking of the screen text (150) of each video frequency program section, automatically produce one group of rule (52), be used for determining the step whether keyword (152) in the screen text (150) follows the trigger word (S206,102) in the user profiles to be complementary.
16. the method for claim 3, also comprise at least one trigger word (154) in the user profiles is extended to and comprise at least one word, determine at least one keyword in the screen text, when this trigger word or relevant word are complementary with at least one keyword in the screen text, extract the step of supplementary.
17. the method for claim 8 comprises also that database was added in deletion (40) to before the specific date or with the relevant supplementary (26) of the incident that finished or the step of supplementary pointer.
18. the method for claim 11 wherein has only the URL(uniform resource locator) (URL) (28,70) of webpage or is revealed less than a part of webpage (28) of whole webpage or webpage wherein (28) summary.
19. the method for claim 1 also comprises the supplementary amount that monitoring video program viewing person watches, the video frequency program beholder watches the frequency (20) of supplementary, changes the step that (24) formatd the supplementary amount that supplies demonstration according to predetermined rule.
20. the supplementary that the process of claim 1 wherein is included in the email message (15), perhaps is downloaded (17) and gives people's information manager (15) one by one.
21. An apparatus for retrieving supplemental information for a video program, the apparatus comprising:
a receiver (2) that receives the video program, classification data for the video program, and screen text of the video program;
a screen-text extractor (4) that identifies at least one segment in the video program and associates the screen text with said segment;
a context monitor (60, S207) that monitors the classification data (104, 106, 108, 110) of each segment to determine the context of each segment;
a profile generator (50) that creates a user profile for a viewer of the video program;
a rule generator (52) that, in combination with the classification data (102, 104, 106, 108, 110), creates a set of rules (100) associating supplemental information (116, 118, 120) with the video program when the screen text (50) and the user profile (102) satisfy a given condition;
a retrieval section (56) that retrieves the supplemental information (116, 118, 120) on the basis of the set of rules (100); and
a formatting section (58) that formats (S211) the retrieved supplemental information for display together with the video program.
22. The apparatus of claim 21, wherein the retrieval section retrieves (S210) the supplemental information (116, 118, 120) when a trigger word (102) in the user profile matches (S206) a keyword (152) in the screen text.
23. The apparatus of claim 22, wherein at least one trigger word (102) of the user profile is expanded (156) to include related words, and the trigger word and the related words are compared (S206) with the keyword (152).
24. The apparatus of claim 22, wherein at least one keyword (152) in the screen text (150) is expanded (154, S204) to include related words, and the trigger word (102) is compared with the keyword (154) and the related words.
25. The apparatus of claim 21, wherein the retrieval section (56) retrieves the information for a segment according to the context (S207, 104, 106, 108, 110) of the segment.
26. Computer-executable process steps for retrieving supplemental information for a video program, the computer-executable process steps being stored on a computer-readable medium (18) and comprising:
a receiving step (S210) of receiving the video program, classification data describing the video program, and screen text of the video program;
a context-determining step (S207) of determining at least one segment of the video program, and the context of the segment, on the basis of the classification data;
a keyword-determining step (S203) of determining keywords in the screen text of at least one segment of the video program;
a keyword-expanding step (S204) of expanding the keywords to include related words;
a profile-retrieving step (S205) of retrieving a user profile for a viewer of the video program;
a keyword-comparing step (S206) of comparing the keywords and related words with at least one trigger word in the user profile;
a rule-retrieving step (S208) of retrieving a set of rules specifying which supplemental information of the video program is to be retrieved on the basis of the determined context;
a retrieving step (S210) of retrieving the supplemental information, on the basis of the set of rules, when the keyword-comparing step succeeds; and
a formatting step (S211) of formatting the retrieved supplemental information for display.
27. A signal (60) embedded in a carrier wave, representing a video program (162) and supplemental information (116, 118, 120) therefor, comprising classification data (104, 106, 108, 110) of the video program; screen text (150); a user profile (102); and rules (100) that, in combination with the video program classification data, associate the supplemental information with the video program when the screen text and the user profile satisfy a given condition (S206).
28. An apparatus for retrieving and displaying supplemental information for a video program, comprising:
means (2) for receiving the video program (162);
means for identifying at least one segment (4) in the video program;
means for receiving program classification data describing said at least one segment (4, 2);
means for receiving screen text (150) of the video program and associating the screen text with said at least one segment (4);
means for retrieving a user profile of a viewer (50) of the video program;
means for determining (52) a set of rules (100) that, in combination with the classification data (104, 106, 108, 110), associate supplemental information (116, 118, 120) with the video program when the screen text and the user profile (102) satisfy a given condition (S206);
means for retrieving the supplemental information on the basis of the set of rules (56, S210); and
means for formatting (58) the supplemental information for display together with the video program.
29. A set-top box (75) for a viewer of video programs, comprising:
receiving means (2) for receiving a video program (102), video program classification data (104, 106, 108, 110), and video program screen text (150);
screen-text extracting and segmenting means (4) for identifying at least one segment in the video program and associating the screen text with said at least one segment;
communication means (17) connected to at least one information source (14, 13, 15) for receiving supplemental information (116, 118, 120) for the video program;
processor means (6) that
a) retrieves a user profile (50) of the viewer, containing at least one trigger word (102) reflecting the viewer's interests,
b) associates (60, S207) the classification data with said at least one segment,
c) determines, in combination with the classification data, a set of rules (52) associating the supplemental information with the segment,
d) searches (54) the screen text for the trigger words contained in the user profile,
e) retrieves (56) the supplemental information, on the basis of the set of rules (100) and using the communication means (17), when a trigger word (102) is contained in the screen text (150), and
f) formats (58) the retrieved supplemental information for display; and
storage means (18) for storing the screen text, the user profile, the set of rules, and the supplemental information.
30. The set-top box (75) of claim 29, wherein the receiving means receives digital video programs.
31. The set-top box (75) of claim 29, wherein the processor (12) decodes and formats the digital video programs for display on a compatible display.
32. The set-top box (75) of claim 29, wherein the viewer selects a destination (15) to which the supplemental information is transmitted through the communication means (17).
33. The set-top box (75) of claim 29, wherein the processor (12) retrieves more than one item of supplemental information (116, 118, 120) for each segment, the retrieved supplemental information is automatically ranked by priority according to the user profile (S209), and by default the supplemental information with the highest priority is formatted for display (S211).
34. The set-top box (75) of claim 29, wherein the processor (12) retrieves more than one item of supplemental information (116, 118, 120) for each segment, and the viewer selects which of the retrieved supplemental information to view.
35. A television set (80), comprising:
receiving means (2) for receiving a video program (162), video program classification data (104, 106, 108, 110), and video program screen text (150);
screen-text extracting and segmenting means (4) for identifying at least one segment in the video program and associating the screen text with said at least one segment;
communication means (17) connected to at least one information source for receiving supplemental information for the video program;
processor means (12) that
a) retrieves a user profile (50) of the viewer, containing at least one trigger word (102) reflecting the viewer's interests,
b) associates (4, 2) the classification data with said at least one segment,
c) determines, in combination with the classification data, a set (52) of rules (100) associating the supplemental information with the segment,
d) searches (54) the screen text for the trigger words (102) contained in the user profile,
e) retrieves the supplemental information (116, 118, 120), on the basis of the set of rules (100) and using the communication means (17), when a trigger word (102) is contained in the screen text, and
f) formats (58) the retrieved supplemental information for display;
storage means (18) for storing the screen text, the user profile, the set of rules, and the supplemental information; and
display means for displaying the video program and the retrieved and formatted supplemental information.
36. Computer-executable process steps for retrieving supplemental information for a video program, the computer-executable process steps being stored on a computer-readable medium (18) and comprising:
a receiving step (S201) of receiving the video program, classification data describing the video program, and screen-text data of the video program;
a segmenting step (S202) of determining at least one segment in the video program and the classification data of the segment;
a first determining step (S205) of determining a user profile of a viewer of the video program;
a second determining step (S208) of determining, in combination with the classification data, a set of rules that associate supplemental information with the video program when the screen text and the user profile satisfy a given condition; and
a retrieving step (S210) of automatically retrieving the supplemental information on the basis of the set of rules.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US62718800A | 2000-07-27 | 2000-07-27 | |
US09/627,188 | 2000-07-27 | | |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1393107A true CN1393107A (en) | 2003-01-22 |
CN1187982C CN1187982C (en) | 2005-02-02 |
Family
ID=24513587
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB018028810A Expired - Fee Related CN1187982C (en) | 2000-07-27 | 2001-07-11 | Transcript triggers for video enhancement |
Country Status (5)
Country | Link |
---|---|
EP (1) | EP1410637A2 (en) |
JP (1) | JP2004505563A (en) |
KR (1) | KR20020054325A (en) |
CN (1) | CN1187982C (en) |
WO (1) | WO2002011446A2 (en) |
Families Citing this family (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7095871B2 (en) | 1995-07-27 | 2006-08-22 | Digimarc Corporation | Digital asset management and linking media signals with related data using watermarks |
US9630443B2 (en) | 1995-07-27 | 2017-04-25 | Digimarc Corporation | Printer driver separately applying watermark and information |
US8332478B2 (en) | 1998-10-01 | 2012-12-11 | Digimarc Corporation | Context sensitive connected content |
US7657064B1 (en) | 2000-09-26 | 2010-02-02 | Digimarc Corporation | Methods of processing text found in images |
US6899475B2 (en) | 2002-01-30 | 2005-05-31 | Digimarc Corporation | Watermarking a page description language file |
JP2003259316A (en) | 2002-02-28 | 2003-09-12 | Toshiba Corp | Stream processing system and stream processing program |
US20030192047A1 (en) * | 2002-03-22 | 2003-10-09 | Gaul Michael A. | Exporting data from a digital home communication terminal to a client device |
JP4352653B2 (en) * | 2002-04-12 | 2009-10-28 | 三菱電機株式会社 | Video content management system |
US20030229895A1 (en) * | 2002-06-10 | 2003-12-11 | Koninklijke Philips Electronics N. V. Corporation | Anticipatory content augmentation |
US20040078807A1 (en) * | 2002-06-27 | 2004-04-22 | Fries Robert M. | Aggregated EPG manager |
US10721066B2 (en) | 2002-09-30 | 2020-07-21 | Myport Ip, Inc. | Method for voice assistant, location tagging, multi-media capture, transmission, speech to text conversion, photo/video image/object recognition, creation of searchable metatags/contextual tags, storage and search retrieval |
US6996251B2 (en) | 2002-09-30 | 2006-02-07 | Myport Technologies, Inc. | Forensic communication apparatus and method |
US7778438B2 (en) | 2002-09-30 | 2010-08-17 | Myport Technologies, Inc. | Method for multi-media recognition, data conversion, creation of metatags, storage and search retrieval |
US7360235B2 (en) | 2002-10-04 | 2008-04-15 | Scientific-Atlanta, Inc. | Systems and methods for operating a peripheral record/playback device in a networked multimedia system |
CN1723458A (en) * | 2002-12-11 | 2006-01-18 | 皇家飞利浦电子股份有限公司 | Method and system for utilizing video content to obtain text keywords or phrases for providing content related links to network-based resources |
US7487532B2 (en) | 2003-01-15 | 2009-02-03 | Cisco Technology, Inc. | Optimization of a full duplex wideband communications system |
GB0304763D0 (en) * | 2003-03-01 | 2003-04-02 | Koninkl Philips Electronics Nv | Real-time synchronization of content viewers |
US8014557B2 (en) | 2003-06-23 | 2011-09-06 | Digimarc Corporation | Watermarking electronic text documents |
ATE395788T1 (en) * | 2003-08-25 | 2008-05-15 | Koninkl Philips Electronics Nv | Real-time media dictionary |
US10635723B2 (en) | 2004-02-15 | 2020-04-28 | Google Llc | Search engines and systems with handheld document data capture devices |
US9008447B2 (en) | 2004-04-01 | 2015-04-14 | Google Inc. | Method and system for character recognition |
US9143638B2 (en) | 2004-04-01 | 2015-09-22 | Google Inc. | Data capture from rendered documents using handheld device |
US7990556B2 (en) | 2004-12-03 | 2011-08-02 | Google Inc. | Association of a portable scanner with input/output and storage devices |
US9116890B2 (en) | 2004-04-01 | 2015-08-25 | Google Inc. | Triggering actions in response to optically or acoustically capturing keywords from a rendered document |
US8874504B2 (en) | 2004-12-03 | 2014-10-28 | Google Inc. | Processing techniques for visual capture data from a rendered document |
US8620083B2 (en) | 2004-12-03 | 2013-12-31 | Google Inc. | Method and system for character recognition |
US8953908B2 (en) | 2004-06-22 | 2015-02-10 | Digimarc Corporation | Metadata management and generation using perceptual features |
US8346620B2 (en) | 2004-07-19 | 2013-01-01 | Google Inc. | Automatic modification of web pages |
US8307403B2 (en) * | 2005-12-02 | 2012-11-06 | Microsoft Corporation | Triggerless interactive television |
JP2007300497A (en) | 2006-05-01 | 2007-11-15 | Canon Inc | Program searching apparatus, and control method of program searching apparatus |
JP2009540404A (en) * | 2006-06-06 | 2009-11-19 | エクスビブリオ ベースローテン フェンノートシャップ | Contextual dynamic advertising based on captured rendered text |
EP2067102A2 (en) * | 2006-09-15 | 2009-06-10 | Exbiblio B.V. | Capture and display of annotations in paper and electronic documents |
US8447066B2 (en) | 2009-03-12 | 2013-05-21 | Google Inc. | Performing actions based on capturing information from rendered documents, such as documents under copyright |
EP2406767A4 (en) | 2009-03-12 | 2016-03-16 | Google Inc | Automatically providing content associated with captured information, such as information captured in real-time |
ES2352397B1 (en) * | 2009-06-24 | 2011-12-29 | Francisco Monserrat Viscarri | Device, method and system for generating audiovisual events |
US9081799B2 (en) | 2009-12-04 | 2015-07-14 | Google Inc. | Using gestalt information to identify locations in printed information |
JP5445085B2 (en) * | 2009-12-04 | 2014-03-19 | ソニー株式会社 | Information processing apparatus and program |
US9323784B2 (en) | 2009-12-09 | 2016-04-26 | Google Inc. | Image search using text-based elements within the contents of images |
US20130007807A1 (en) * | 2011-06-30 | 2013-01-03 | Delia Grenville | Blended search for next generation television |
GB2507097A (en) * | 2012-10-19 | 2014-04-23 | Sony Corp | Providing customised supplementary content to a personal user device |
US8839309B2 (en) * | 2012-12-05 | 2014-09-16 | United Video Properties, Inc. | Methods and systems for displaying contextually relevant information from a plurality of users in real-time regarding a media asset |
KR20150136316A (en) * | 2014-05-27 | 2015-12-07 | 삼성전자주식회사 | Electrical apparatus, method and system for providing information |
US10423727B1 (en) | 2018-01-11 | 2019-09-24 | Wells Fargo Bank, N.A. | Systems and methods for processing nuances in natural language |
WO2023220274A1 (en) * | 2022-05-13 | 2023-11-16 | Google Llc | Entity cards including descriptive content relating to entities from a video |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5614940A (en) * | 1994-10-21 | 1997-03-25 | Intel Corporation | Method and apparatus for providing broadcast information with indexing |
KR19980063435A (en) * | 1996-12-11 | 1998-10-07 | Jeffrey L. Forman | Method and system for interactively displaying and accessing program information on television |
IL127790A (en) * | 1998-04-21 | 2003-02-12 | Ibm | System and method for selecting, accessing and viewing portions of an information stream(s) using a television companion device |
2001
- 2001-07-11 EP EP01951665A patent/EP1410637A2/en not_active Withdrawn
- 2001-07-11 KR KR1020027003919A patent/KR20020054325A/en not_active Application Discontinuation
- 2001-07-11 WO PCT/EP2001/007965 patent/WO2002011446A2/en active Application Filing
- 2001-07-11 CN CNB018028810A patent/CN1187982C/en not_active Expired - Fee Related
- 2001-07-11 JP JP2002515840A patent/JP2004505563A/en not_active Withdrawn
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8200688B2 (en) | 2006-03-07 | 2012-06-12 | Samsung Electronics Co., Ltd. | Method and system for facilitating information searching on electronic devices |
US8863221B2 (en) | 2006-03-07 | 2014-10-14 | Samsung Electronics Co., Ltd. | Method and system for integrating content and services among multiple networks |
US8935269B2 (en) | 2006-12-04 | 2015-01-13 | Samsung Electronics Co., Ltd. | Method and apparatus for contextual search and query refinement on consumer electronics devices |
US8782056B2 (en) | 2007-01-29 | 2014-07-15 | Samsung Electronics Co., Ltd. | Method and system for facilitating information searching on electronic devices |
US8115869B2 (en) | 2007-02-28 | 2012-02-14 | Samsung Electronics Co., Ltd. | Method and system for extracting relevant information from content metadata |
CN101267518B (en) * | 2007-02-28 | 2011-05-18 | 三星电子株式会社 | Method and system for extracting relevant information from content metadata |
US8510453B2 (en) | 2007-03-21 | 2013-08-13 | Samsung Electronics Co., Ltd. | Framework for correlating content on a local network with information on an external network |
WO2008113287A1 (en) * | 2007-03-22 | 2008-09-25 | Huawei Technologies Co., Ltd. | An iptv system, media server, and iptv program search and location method |
US8209724B2 (en) | 2007-04-25 | 2012-06-26 | Samsung Electronics Co., Ltd. | Method and system for providing access to information of potential interest to a user |
US9286385B2 (en) | 2007-04-25 | 2016-03-15 | Samsung Electronics Co., Ltd. | Method and system for providing access to information of potential interest to a user |
US8843467B2 (en) | 2007-05-15 | 2014-09-23 | Samsung Electronics Co., Ltd. | Method and system for providing relevant information to a user of a device in a local network |
US8176068B2 (en) | 2007-10-31 | 2012-05-08 | Samsung Electronics Co., Ltd. | Method and system for suggesting search queries on electronic devices |
CN101605011B (en) * | 2008-06-13 | 2013-05-01 | 索尼株式会社 | Information processing apparatus and information processing method |
US8938465B2 (en) | 2008-09-10 | 2015-01-20 | Samsung Electronics Co., Ltd. | Method and system for utilizing packaged content sources to identify and provide information based on contextual information |
CN102473191B (en) * | 2009-08-07 | 2015-05-20 | 汤姆森许可贸易公司 | System and method for searching in internet on a video device |
CN102473191A (en) * | 2009-08-07 | 2012-05-23 | 汤姆森许可贸易公司 | System and method for searching in internet on a video device |
CN101930779A (en) * | 2010-07-29 | 2010-12-29 | 华为终端有限公司 | Video commenting method and video player |
WO2012016505A1 (en) * | 2010-08-02 | 2012-02-09 | 联想(北京)有限公司 | File processing method and file processing device |
US10210148B2 (en) | 2010-08-02 | 2019-02-19 | Lenovo (Beijing) Limited | Method and apparatus for file processing |
CN102567435A (en) * | 2010-12-31 | 2012-07-11 | 宏碁股份有限公司 | Integration method of multimedia information source and hyperlink device and electronic device thereof |
CN103096173A (en) * | 2011-10-27 | 2013-05-08 | 腾讯科技(深圳)有限公司 | Information processing method and device of network television system |
CN103096173B (en) * | 2011-10-27 | 2016-05-11 | 腾讯科技(深圳)有限公司 | The information processing method of network television system and device |
US20150127675A1 (en) | 2013-11-05 | 2015-05-07 | Samsung Electronics Co., Ltd. | Display apparatus and method of controlling the same |
CN105706454A (en) * | 2013-11-05 | 2016-06-22 | 三星电子株式会社 | Display apparatus and method of controlling the same |
US10387508B2 (en) | 2013-11-05 | 2019-08-20 | Samsung Electronics Co., Ltd. | Method and apparatus for providing information about content |
US11409817B2 (en) | 2013-11-05 | 2022-08-09 | Samsung Electronics Co., Ltd. | Display apparatus and method of controlling the same |
CN104079988A (en) * | 2014-06-30 | 2014-10-01 | 北京酷云互动科技有限公司 | Television program related information pushing device and method |
Also Published As
Publication number | Publication date |
---|---|
EP1410637A2 (en) | 2004-04-21 |
KR20020054325A (en) | 2002-07-06 |
CN1187982C (en) | 2005-02-02 |
JP2004505563A (en) | 2004-02-19 |
WO2002011446A3 (en) | 2002-04-11 |
WO2002011446A2 (en) | 2002-02-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN1187982C (en) | Transcript triggers for video enhancement | |
US9202523B2 (en) | Method and apparatus for providing information related to broadcast programs | |
EP2541963B1 (en) | Method for identifying video segments and displaying contextually targeted content on a connected television | |
US7248830B2 (en) | Method of and apparatus for generation/presentation of program-related contents | |
CN106156360B (en) | Method of using a multimedia player | |
US8478759B2 (en) | Information presentation apparatus and mobile terminal | |
US10225625B2 (en) | Caption extraction and analysis | |
JP4922245B2 (en) | Server, method and program for providing advertisement information related to viewed content | |
US20030101104A1 (en) | System and method for retrieving information related to targeted subjects | |
US20040117405A1 (en) | Relating media to information in a workflow system | |
CN101395627A (en) | Improved advertising with video ad creatives | |
US20190259063A1 (en) | Apparatus and method for synchronising advertisements | |
WO2006060311A1 (en) | Programming guide content collection and recommendation system for viewing on a portable device | |
CN109327714A (en) | Method and system for supplementing a live broadcast | |
CN101271454A (en) | Multimedia content association search and association engine system for IPTV | |
KR20030007727A (en) | Automatic video retriever genie | |
US20070162412A1 (en) | System and method using alphanumeric codes for the identification, description, classification and encoding of information | |
KR102244195B1 (en) | Providing Method for virtual advertisement and service device supporting the same | |
WO2003065229A1 (en) | System and method for the efficient use of network resources and the provision of television broadcast information | |
JP2007317217A (en) | Method for relating information, terminal device, server device, and program | |
Kaneko et al. | AI-driven smart production | |
Bywater et al. | Scalable and Personalised broadcast service | |
KR100786099B1 (en) | Search system and method for digital data broadcasting | |
JP2006350863A (en) | Method, device, system, and program for presenting subject, and a recording medium with program stored thereon | |
KR20090099440A (en) | Keyword advertising method and system based on tag information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C17 | Cessation of patent right | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 2005-02-02; Termination date: 2009-08-11 |