US20100251291A1 - System, Method and Computer Program Product for Processing Video Data - Google Patents

System, Method and Computer Program Product for Processing Video Data

Info

Publication number
US20100251291A1
Authority
US
United States
Prior art keywords
video content
user
keywords
content units
receiving
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/728,681
Inventor
Angelo J. Pino, JR.
Mohammed Sattar
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Application filed by Individual
Priority to US12/728,681
Publication of US20100251291A1
Priority to US15/194,842 (US10311102B2)
Legal status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval of video data
    • G06F16/73 Querying
    • G06F16/738 Presentation of query results
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 Retrieval using metadata automatically derived from the content
    • G06F16/7844 Retrieval using original textual content or text extracted from visual content or transcript of audio data
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/266 Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H04N21/26603 Automatically generating descriptors from content, e.g. when it is not made available by its provider, using content analysis techniques
    • H04N21/2668 Creating a channel for a dedicated end-user group, e.g. insertion of targeted commercials based on end-user profiles
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466 Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4668 Learning process for recommending content, e.g. movies
    • H04N21/47 End-user applications
    • H04N21/475 End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • H04N21/4756 End-user interface for rating content, e.g. scoring a recommended movie
    • H04N21/482 End-user interface for program selection
    • H04N21/4828 End-user interface for program selection for searching program descriptors
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84 Generation or processing of descriptive data, e.g. content descriptors
    • H04N21/8405 Descriptive data represented by keywords
    • H04N5/00 Details of television systems
    • H04N5/44 Receiver circuitry for the reception of television signals according to analogue transmission standards
    • H04N5/445 Receiver circuitry for displaying additional information
    • H04N21/81 Monomedia components thereof
    • H04N21/812 Monomedia components involving advertisement data

Definitions

  • the present invention generally relates to processing video data, and more particularly to a system, method and computer program product for processing, searching, sorting and rating video content.
  • Set-top boxes connect televisions and other displays to external sources of video content data.
  • Many interactive set-top boxes allow users to choose to view one of a plurality of video content units (e.g. movies, television programs, etc.).
  • Users can search for content by title or, in some instances, by other information.
  • the title, for example, is often not representative of the subject matter of the video content unit. Consequently, if a user does not know the title of the video content unit, he or she may not be able to find video content of interest.
  • the dialogue of a video content unit (spoken by persons in the video content unit) will contain audio content that is representative of the subject matter of the video content unit.
  • users are typically unable to search for video content based on phrases used within the audio portion of the video content.
  • even though the user has found a video content unit that the user believes may be of interest (e.g., contains the desired subject matter), he or she has no information about the quality of the video.
  • television systems today do not allow users to rate the quality of the video content units viewed or to see the ratings of other users so that the user can make an informed decision to view or not view the video content unit.
  • the present invention provides a system, method and computer program product for processing video.
  • the method includes storing a plurality of transcripts in a memory along with information identifying an associated video content unit, receiving a search request comprising one or more keywords originating from a user, searching at least some of the plurality of transcripts for the one or more keywords, determining search results that comprise a plurality of video content units that satisfy a similarity threshold with the search request, wherein the plurality of video content units that satisfy a similarity threshold with the search request comprise video content units having a transcript that includes at least some of the one or more keywords, and transmitting information identifying at least some of the video content units of the search results to a remote device for presentation to the user.
  • the method may include prompting users to rate video content units, storing an average user rating for video content units and displaying user ratings as part of the search results.
  • advertisements may be selected based on suggested keywords and keywords entered by the user.
  • FIG. 1 is a diagram illustrating a system for practicing some embodiments of the present invention.
  • FIG. 2 depicts an example of a video search user interface that includes suggested keywords, according to an example embodiment of the present invention.
  • FIG. 3 is a flow chart of a method for processing video data in accordance with an example embodiment of the present invention.
  • FIG. 4 depicts an example of a video search user interface that includes search results, according to an example embodiment of the present invention.
  • FIG. 5 is a flow chart of a method for responding to a search request in accordance with an example embodiment of the present invention.
  • the present invention includes a system, method and computer program product which processes a plurality of compressed video content units to create a transcript of the audio dialogue contained in each video.
  • the transcripts are stored in memory with information identifying the video from which each was processed.
  • the system searches the transcripts of the videos for the keywords and the videos satisfying a similarity threshold with the search request are sorted in accordance with the degree of their conformity to the search request, and then transmitted to the users.
  • the present invention also includes means for receiving and storing subjective rating data received from users (e.g., who have watched all or part of a video) that are then associated in memory with the videos, and may additionally (or instead) sort the search results based on the average user rating associated with each video.
  • the terms video, video content unit, and content unit are used interchangeably and refer to any available video content such as a movie, a television program (e.g., a sitcom, a television series, a comedy, an infomercial, a commercial, a documentary, news programming), a sporting event, etc., any of which may comprise video on demand, pay per view, and live (when possible) or recorded content.
  • video content may comprise multiple segments between which the broadcaster may insert other content units (e.g., commercials and/or news alerts).
  • FIG. 1 illustrates a system for practicing embodiments of the present invention.
  • the system includes a cable headend 112 , a database server 101 , a data processor 102 , a database 115 , a plurality of set-top boxes 104 a - c , one or more video sources (e.g., satellite), and the internet 110 .
  • Some embodiments may include fewer, additional, or different components. For example, not all embodiments need to make use of the Internet and instead may include a local or private network.
  • the database server 101 comprises a computer system (one or more co-located or distributed computers) or cluster executing one or more computer programs.
  • the database server 101 may comprise any database management system (DBMS) or relational database management system (RDBMS) such as Oracle, DB2, IMS, MYSQL, MSSQL or any distributed database (or data warehouse).
  • the database server 101 accesses the database 115 , which may store the transcript of each video and information identifying the video from which each transcript was obtained.
  • Database 115 may comprise a computer system (one or more co-located or distributed computers) executing one or more computer programs and also may store an average rating for each video content unit and the rating provided by each user for each video content unit.
  • a data warehouse comprised of multiple databases may be used as the storage system.
  • the database 115 may also store some videos. In other embodiments the videos may be stored elsewhere and remote from the location storing the transcripts.
  • the database server 101 may be configured to receive a search request from a user 103 that includes keywords 106 a - c .
  • the DBMS or RDBMS managing the textualized digital transcripts and video content units reduces the search request to a query execution plan using hash tables and the like.
  • the database server 101 is configured to search the transcripts to identify associated video content units found to satisfy a similarity threshold with the search request 109 .
  • the database 115 may store additional information about each video content unit in various fields. Such information may include the channels on which the content is to be broadcast (e.g., Discovery Channel®, ABC®, ESPN®, etc.), a title, a subject, description (e.g., episode information), the speakers or creators of the audio in the video content unit, Motion Picture Association of America (MPAA) rating information, duration, availability dates (and times) of the video content units, pricing information associated with the content units (if any), the media type (e.g., high definition (HD) or not HD), the genre of media featured in the content units (e.g., drama, comedy, horror, sports, news, etc.) and the like.
  • Users may provide search requests that include parameters for any of these fields in addition to keywords.
  • the user may search based on the user rating, such as searching for the highest rated content of a particular genre (e.g., comedy), the highest rated content at a particular time (e.g., at 2 PM, on now, or for the next time slot), or the highest rated content type (e.g., movie, television show, live broadcast, etc.).
  • the database 115 may also store (or the system may access) electronic program guide information for video content that is not yet available (but is available at a future time).
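To make the storage layout above concrete, the following is a minimal sketch (using SQLite purely for illustration) of how the transcript table and the additional per-video fields described above might be laid out; the table and column names are assumptions, not taken from the patent.

```python
# Minimal sketch (not the patent's schema): one way to lay out the
# per-video metadata fields and the transcripts described above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE video_content (
    video_id        INTEGER PRIMARY KEY,
    title           TEXT,
    channel         TEXT,     -- e.g. 'Discovery Channel'
    description     TEXT,     -- e.g. episode information
    speakers        TEXT,
    mpaa_rating     TEXT,     -- MPAA rating information
    duration_min    INTEGER,
    available_from  TEXT,     -- availability date/time
    price_cents     INTEGER,  -- pricing information, if any
    is_hd           INTEGER,  -- media type: HD or not
    genre           TEXT,     -- drama, comedy, horror, ...
    avg_user_rating REAL      -- average of user-supplied ratings
);
CREATE TABLE transcripts (
    video_id   INTEGER REFERENCES video_content(video_id),
    transcript TEXT            -- dialogue text extracted from captions
);
""")

conn.execute(
    "INSERT INTO video_content (video_id, title, channel, genre, is_hd) "
    "VALUES (1, 'UFO Files', 'Discovery Channel', 'documentary', 1)")
conn.execute(
    "INSERT INTO transcripts VALUES "
    "(1, 'tonight we examine a famous ufo sighting over the desert')")
conn.commit()
```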
  • the database server 101 (or other computer system) provides video content units and search results to the set-top boxes 104 a - c via the cable headend 112 , such as video content units that have been requested by users 103 a - c .
  • the video content units may be supplied from a first source and the search results from a second source.
  • the data processor 102 comprises a computer system (one or more co-located or distributed computers) executing one or more computer programs stored in a tangible medium.
  • the data processor 102 accesses each video content unit to process the video content unit to obtain a transcript of the dialogue of each video content unit for storage in the database 115 .
  • Other data, such as non-keyword parameters, for each video content unit may be provided from metadata from any suitable source such as, for example, metadata from the source of the video content unit itself.
  • the data processor 102 may also access live video content (e.g., received via a satellite receiver or other source) to convert the audio content to a transcript for storage.
  • the database 115 may include at least a portion of the transcript of the broadcasted video to allow users to search for keywords of live broadcasts currently available for viewing.
  • the data processor 102 may include the software and hardware necessary to access and process the video content such as a decoder to demultiplex MPEG compressed video content to obtain the closed caption content of the video.
  • the data processor 102 may be functionally capable of demultiplexing content units, such as MPEG compatible (or compliant) video content units; analyzing the streams forming the video content units, such as elementary streams and program streams; remultiplexing content units (if necessary); and extracting audio data (such as the closed caption) from the content units.
  • as used herein, the term "audio text" is meant to collectively refer to subtitles and/or closed caption content that forms part of a video content unit or is otherwise supplied apart from the system of the present invention.
  • the content units may be encoded in one of any number of various data encodings well-known to those of skill in the art created by various video compression codecs, such as H.261, RealVideo and WMV. While the present invention is described in the context of extracting the closed caption content from compressed video, other embodiments may obtain the transcripts from other types of video.
  • NTSC programming captions are encoded into Line 21 of the vertical blanking interval.
  • for ATSC (digital television) programming, three streams are encoded in the video: two are backward compatible Line 21 captions, and the third is a set of up to 63 additional caption streams encoded in EIA-708 format.
  • EIA-708 captions are textual like traditional Line 21 captions. Captioning is transmitted and stored differently in PAL and SECAM countries, where teletext is used rather than Line 21 , but the methods of preparation are similar.
  • the Line 21 data stream can consist of data from several data channels multiplexed together.
  • Field 1 has four data channels: two Captions (CC1, CC2) and two Text (T1, T2).
  • Field 2 has five additional data channels: two Captions (CC3, CC4), two Text (T3, T4), and Extended Data Services (XDS).
  • The XDS data structure is defined in EIA-608.
  • NTSC DVDs may carry closed captions in the Line 21 format; these are sent to the display device by the DVD player and can be displayed with a television's built-in decoder or a set-top decoder as usual.
  • video DVDs may also carry captions as a bitmap overlay which can be turned on and off via the DVD player, just like subtitles.
  • HD DVD and Blu-ray disc media cannot carry Line 21 closed captioning because of the design of the High-Definition Multimedia Interface (HDMI) specifications, which were designed to replace older analog and digital standards such as VGA, S-Video, and DVI.
  • Both Blu-ray disc and HD DVD can use either DVD bitmap subtitles (with extended definition) or advanced subtitles to carry SDH-type subtitling, the latter being an XML-based textual format which includes font, styling and positioning information as well as a Unicode representation of the text.
  • various means of obtaining the transcript of a video content unit may be used depending on the type of video content unit.
  • commercially available software may be employed such as, for example purposes only, Expert Caption® (or MPEGRepairHD) offered by PixelTools Corporation.
  • the data processor 102 is configured, in certain embodiments, to process the audio portions of the content units as they become available to the data processor 102 .
  • data processor 102 may process the newly available video content units to obtain and store the transcripts (and other associated data) of each.
  • the data processor 102 processes these video content units by demultiplexing packets within the content units.
  • the audio and video portions of the content unit may be interleaved together into program streams.
  • the program streams contain packets linked together by standard tables built when the content unit was multiplexed.
  • the data processor 102 creates a transcript from the caption data (which is thus a transcript of the dialog) and provides the transcript to the database server 101 , which stores the transcript in the database 115 in association with information identifying the video content unit from which the transcript was obtained.
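The step just described, turning extracted caption data into a transcript stored against the originating video content unit, could look roughly like the sketch below. It assumes the caption cues have already been pulled out of the video (e.g., by a captioning tool) as (start_seconds, text) pairs and that the transcripts table from the earlier sketch exists; both are illustrative assumptions, not the patent's implementation.

```python
# Illustrative sketch only: flatten timed caption cues into one transcript
# string and store it keyed by a video identifier.
def build_transcript(cues):
    """cues: iterable of (start_seconds, caption_text) pairs."""
    lines = [text.strip() for _, text in sorted(cues) if text.strip()]
    return " ".join(lines)

def store_transcript(db, video_id, cues):
    # 'transcripts' is the illustrative table from the earlier schema sketch
    db.execute("INSERT INTO transcripts (video_id, transcript) VALUES (?, ?)",
               (video_id, build_transcript(cues)))
    db.commit()

# Example usage with made-up cues:
# store_transcript(conn, 2, [(1.0, "Good evening."), (3.5, "Tonight: UFO sightings.")])
```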
  • the cable headend 112 may comprise a master facility for interacting with set top boxes and transmitting video content units over a video content delivery system, such as a cable television system (which may use coaxial cable, fiber, or any suitable medium).
  • the cable headend 112 may comprise all satellite and electronic equipment, systems, hardware, firmware (including conversion and transmission equipment), software, etcetera, necessary to receive and re-transmit content units over the infrastructure in communication with the cable headend 112 for this purpose and to interact with the set top boxes.
  • the cable headend 112 receives video content units from the satellite, the database server 101 , or another content source.
  • the selected video content units may be obtained by the cable headend 112 for delivery via any suitable means from any suitable source.
  • some video content may be stored locally, some may be received via satellite, and some may be received via a network.
  • the database server 101, processor 102 and database 115 may serve multiple cable headends 112.
  • the set-top boxes 104 a - c comprise interactive devices that connect televisions and other audio-visual devices (e.g., televisions, stereos, etc.) to external sources of video content units, such as the cable headend 112 , in a manner that allows users 103 to request and receive video content.
  • the display may comprise a television such as a cathode ray tube (“CRT”), a liquid crystal display (“LCD”), a plasma display panel (“PDP”), or the like.
  • the set top box 104 may comprise a stand alone device or a circuit card configured to be inserted into a television.
  • the keywords 106 a - c shown in FIG. 1 , comprise search terms entered into a set-top box 104 by a user 103 via a remote control.
  • the keywords 106 a - c form part of a search request and are received by the cable headend 112 and provided to the database server 101 .
  • the keywords 106 a - c may comprise phrases (one or more words) that relate to content which the user wishes to search, which content typically will relate to the subject matter of interest of the user.
  • suggested keywords are provided below the query box in a keyword suggestion menu 302. If the user sees the desired keyword in the keyword suggestion menu, the user can scroll to the desired keyword and select it to enter a query for that keyword. If the desired keyword is not present in the keyword suggestion menu 302, the user may enter the entire keyword manually (by inputting additional characters) and press the enter key (on the remote control) to enter the keyword. The keyword and any other search criteria are sent to the cable headend 112 by the set top box 104.
  • the suggested keywords populated into the keyword suggestion menu may be stored in local memory of the set top box 104.
  • the cable headend 112 may periodically (e.g., daily or weekly) update (transmit for storage by the set top box 104) the suggested keyword list (which may be stored in the database 115), based on new contemporary topics of interest. For example, Olympics might be included as a suggested keyword in memory of the set top box 104 only around the time of the Olympics. Instead or additionally, the suggested keyword list may be updated based on the keyword searches of other users.
  • for example, a first keyword entered by other users may be added to the suggested keyword list, which is transmitted to all of the set top boxes 104.
  • In response to reception of the search request including keywords 106 a-c, the headend 112 provides the search criteria to the database server 101, which searches the transcripts and other fields that may be required by the search request. It is worth noting that the video search interface of FIG. 2 is abbreviated and does not depict all of the search fields available to the user.
  • each time the user enters a character, the character is transmitted to the cable headend 112 and then to the database server 101.
  • the database server 101 may search and retrieve the appropriate suggested keyword list for the characters entered by the user and transmit the suggested keyword list to the user (via the cable headend 112 ) for display.
  • there is no need to update the suggested keyword list in each set top box because it only needs to be updated (if at all) in the database 115.
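A hedged sketch of the prefix-matching behavior behind the keyword suggestion menu, whether it runs in the set top box or at the server. The sample keyword list and the function name are illustrative only.

```python
# Illustrative suggested-keyword list; in practice it would be updated
# periodically by the headend or looked up in the database.
SUGGESTED_KEYWORDS = ["olympics", "ufo", "unemployment", "used cars", "election"]

def suggest(prefix, keywords=SUGGESTED_KEYWORDS, limit=5):
    """Return suggested keywords/phrases containing a word that starts
    with the characters entered so far (case-insensitive)."""
    p = prefix.lower()
    hits = [k for k in keywords
            if any(word.startswith(p) for word in k.lower().split())]
    return hits[:limit]

print(suggest("u"))    # ['ufo', 'unemployment', 'used cars']
print(suggest("ca"))   # ['used cars']
```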
  • the content library comprises the video content units and may form part of, or be separate from, the database 115.
  • the content library is in logical communication with the database server 101 (or cable headend 112 ) via a distributed network, or through other means well-known to those of skill in the art.
  • the content units (that are not received via broadcast) are stored in a computer readable database 115 controlled by the database server 101 , such as IBM DB2, Oracle, or SQL.
  • the keywords 106 a - c are formulated into a database query and then associated with a query execution plan which identifies video content units that satisfy the database query by searching the transcripts of the audio portions of the content units.
  • These database queries may be generated using various languages including SQL, XPATH, and the like.
  • the user may also supply other identifiers for creating, or identifying, the proper query execution plan such as the title, and the names of speakers featured in the content units, the ratings, price, genre, subject, channel, or other forms of additional information described above.
  • the query may include keywords as well other search criteria.
  • the keywords 106 a-c may be formulated into database queries which are dynamic (meaning the query is generated as needed by a user with a form that is unknown until the query is received by the database server 101 and which form is likely to change between requests) or static (meaning the database query is predefined and does not change form between requests, although the parametric data values of the query may change).
  • the dynamic database queries comprising the keywords 106 a-c may comprise more than just textualized words or phrases, but also logical operations that the database server 101 infers were intended by the user 103, such as NOR, OR, XOR, AND, NAND (or quotes and plus signs) and the like, as well as other information.
  • the users 103 a - c may be afforded a great deal of flexibility in searching the content library for not just words and phrases, but combinations of keywords, phrases, genre, channels, and orators featured throughout the content units.
  • the search results 109 a - c identify the video content units that satisfy a database query comprising the keywords 106 a - c (which database query was applied to the transcripts associated with the content units stored in the content library) and may take the form as shown in FIG. 4 .
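As a rough illustration of formulating keywords and non-keyword parameters into a database query, the sketch below builds a parameterized SQL statement against the tables sketched earlier. It uses simple AND semantics and LIKE matching; the patent's actual query execution plans, operator inference, and schema are not specified here, so the function and field names are assumptions.

```python
# Simplified sketch: turn a search request into a parameterized SQL query.
def build_query(keywords, genre=None, channel=None, min_rating=None):
    sql = ("SELECT v.video_id, v.title "
           "FROM video_content v JOIN transcripts t ON v.video_id = t.video_id "
           "WHERE 1=1")
    params = []
    for kw in keywords:                  # AND semantics; OR/NOT handling omitted
        sql += " AND t.transcript LIKE ?"
        params.append(f"%{kw.lower()}%")
    if genre:                            # example non-keyword parameters
        sql += " AND v.genre = ?"
        params.append(genre)
    if channel:
        sql += " AND v.channel = ?"
        params.append(channel)
    if min_rating is not None:
        sql += " AND v.avg_user_rating >= ?"
        params.append(min_rating)
    return sql, params

sql, params = build_query(["ufo"], genre="documentary")
# rows = conn.execute(sql, params).fetchall()   # using the earlier SQLite sketch
```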
  • FIG. 3 is a flow chart of an example method for processing video data in accordance with the present invention.
  • the method 200 begins by receiving one or more compressed video content units into memory such as into a content library from a content source (e.g., a satellite communication, a network communication, etc.).
  • the method 200 progresses by processing the video content units to create transcripts of the content units at process 204 .
  • the method 200 proceeds with the operation of storing the transcripts in memory, such as in database 115 at 206 .
  • the stored transcript of each video content unit is stored in memory in association with (e.g., indexed to) information identifying the video content unit from which it was processed as well as the other parameters available for search (e.g., the ratings, price, genre, subject, channel, time slot, title, etc.).
  • a user 103 enters a search request that includes keywords 106 (e.g., a static or dynamic search request) at the set-top box 104; the request (including keywords) is transmitted to the cable headend 112 and received at 208.
  • the method 200 comprises searching 210 the transcripts in memory for transcripts satisfying a similarity threshold with the keywords of the search request. For example, if a user provided a keyword “UFO,” the database server 101 may search the transcripts and identify the video content units that each have an associated transcript that includes the keyword “UFO.” If no transcripts include the keywords, the process may transmit a notification to the set top box 104 for display to the user.
  • a search request may include parameters other than keywords.
  • the user may search based on any one or more channels on which the video content unit is to be broadcast (e.g., Discovery Channel®, ABC®, ESPN®, etc.), text in a title, text in a subject (i.e., description), the names of speakers or creators of the audio in the video content unit, duration, Motion Picture Association of America (MPAA) rating information, availability dates of the content unit(s), pricing information associated with the video content units, the media type (e.g., high definition (HD) or not HD), the genre of media featured in the content units (e.g., drama, comedy, horror, sports, news, etc.) and the like.
  • the set top box 104 may provide a search form with fields to enter these parameters (and other fields, including keywords) to allow the user to provide a search request to search these parameters.
  • the transcripts of the video content units that satisfy all of the non-keyword parameters are searched.
  • information identifying the content units that satisfy the search request is sorted based on their conformity to the search request. For example, the content units that satisfy the non-keyword parameters and that include the highest number of the keywords (e.g., the twenty content units that include the most keywords) may be sorted in descending order (i.e., the video content units having the greatest number of keywords sorted at the top of the list).
  • the search results may be sorted based on the average user rating associated with each video content unit. For example, the twenty content units that include the most keywords (and that satisfy the other non-keyword parameters) may be sorted by average user rating (i.e., the video content units having the highest average ratings sorted at the top of the list), instead of (or in addition to) being sorted based only on keywords.
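The similarity threshold and the two sorting criteria described above (keyword count, then average user rating) can be illustrated with a small ranking function. The threshold value, field names, and the cap of twenty results are assumptions used only for illustration.

```python
# Illustrative sketch of the filter-and-sort step described above.
def rank_results(candidates, keywords, threshold=1):
    """candidates: list of dicts with 'title', 'transcript', 'avg_rating'."""
    scored = []
    for c in candidates:
        text = c["transcript"].lower()
        hits = sum(1 for kw in keywords if kw.lower() in text)
        if hits >= threshold:                # similarity threshold
            scored.append((hits, c["avg_rating"], c))
    # most keyword hits first, ties broken by the higher average user rating
    scored.sort(key=lambda s: (s[0], s[1]), reverse=True)
    return [c for _, _, c in scored[:20]]    # e.g., top twenty content units

results = rank_results(
    [{"title": "UFO Files", "transcript": "a famous UFO sighting", "avg_rating": 4.5},
     {"title": "Desserts",  "transcript": "regional desserts",     "avg_rating": 4.9}],
    ["ufo", "sighting"])
print([r["title"] for r in results])         # ['UFO Files']
```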
  • the database server 101 may also access and retrieve information of advertising to be displayed by the set top box concurrently with the search results.
  • the database server 101 may identify one or more video content units (such as video on demand or pay per view that requires the user to pay for the advertised content) to be advertised (listed) at the top of the display, above the sorted search results, even though these video content units may not satisfy (or may not fully satisfy) the search request.
  • the advertising may be selected based on the location of the user, the amount that an advertiser has agreed to pay (such as a bid if the advertisement is selected by the user or displayed to the user) given the keyword, location of the user, and other information.
  • the advertising may be selected in a manner that is similar to pay per impression or pay per click advertising, where advertisers bid on keywords to have their advertisement presented and, in some embodiments, only pay the operator if their advertisement is selected by the user.
  • the advertising may be for any commercially available product and is not limited to advertising of video content units.
  • the advertising may advertise audio content (MP3 files for downloading via the cable system), electronics, travel services, restaurants, local vendors, and any other service or product.
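A minimal sketch of keyword-driven advertisement selection along the lines described above: each advertiser bids on a keyword, and the highest bid among ads matching the search keywords wins. The ad records and the tie-breaking behavior are illustrative assumptions, not the patent's mechanism.

```python
# Illustrative ad inventory; bids are in cents and the creatives are made up.
ADS = [
    {"ad_id": "a1", "keyword": "ufo",    "bid_cents": 40, "creative": "UFO documentary on demand"},
    {"ad_id": "a2", "keyword": "travel", "bid_cents": 55, "creative": "Book a trip tonight"},
    {"ad_id": "a3", "keyword": "ufo",    "bid_cents": 25, "creative": "Sci-fi channel package"},
]

def select_ad(keywords, ads=ADS):
    """Pick the highest-bidding ad whose keyword matches the search keywords."""
    kws = {k.lower() for k in keywords}
    eligible = [a for a in ads if a["keyword"] in kws]
    return max(eligible, key=lambda a: a["bid_cents"], default=None)

print(select_ad(["UFO", "aliens"]))   # picks a1, the highest bid on "ufo"
```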
  • the search results which in this example embodiment may comprise a sorted list of video content units having transcripts that satisfy a similarity threshold with the search request (and any advertisements), are transmitted 214 to the set-top box 104 for presentation to a user 103 .
  • FIG. 4 depicts an example of a format for displaying search results that may be employed for some embodiments.
  • the user 103 may select a content unit identified in the search results 109 for viewing by scrolling through the search results list and selecting the desired video content unit (with the remote control). This user selection is transmitted at 218 by the set-top box 104 to the cable headend 112, which, in response, transmits the selected video content unit to the set-top box 104 for presentation to the user at 222.
  • the content units identified in the search results 109 may not be available for immediate viewing, and the user 103 may elect to automatically receive and record the content unit when it becomes available (by responding to prompts provided by the set top box 104 ).
  • the content units listed in the search results 109 may be associated with a price which the user 103 must agree to pay before the content unit itself is finally transmitted to the user 103 by the cable headend 112 .
  • the method 200 further comprises an operation of receiving rating data from the user 103 (e.g., after transmitting the content unit to the user 103 for viewing).
  • the rating data comprises feedback from the user 103 on the quality of the content unit, the desirability of the content unit, or on the subjective degree to which the content unit satisfied the user's selection (expectations).
  • Examples of rating data include scalable ranges of values, such as 1-100, 1-10, A-F, or 1-5 (e.g., 5 stars).
  • the rating data may be provided by users 103 through the set-top boxes 104 and supplied to the database server 101 by the cable headend 112.
  • the database server 101 may process each user rating to provide an average user rating which is stored in the database 115 in association with information identifying the video content unit (and transcript).
  • the search results may be sorted based on the average user rating.
  • the rating may be a non-keyword parameter that the user may include as part of a search request. For example, the user may elect to search for only four and five star movies having a particular keyword.
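Folding a newly received user rating into the stored average can be done incrementally, without retaining every individual rating, as in this small sketch. The 1-5 scale follows the examples above; the running-count approach is an assumption for illustration.

```python
# Minimal sketch of updating a stored average rating with one new rating.
def update_average(old_avg, old_count, new_rating):
    """Return (new_avg, new_count) after receiving one more rating."""
    new_count = old_count + 1
    new_avg = (old_avg * old_count + new_rating) / new_count
    return new_avg, new_count

avg, n = 4.2, 10
avg, n = update_average(avg, n, 5)    # one viewer rates the video 5 stars
print(round(avg, 2), n)               # 4.27 11
```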
  • FIG. 5 depicts a process of using an example embodiment of the present invention.
  • the set top box 104 receives a character for a keyword.
  • the set top box 104 searches the suggested keyword list to identify keywords (which may be a word or phrase) that include a word that begins with the text entered so far by the user 103 .
  • the suggested keyword list searching may be performed by the cable headend 112 , the database server 101 , another remote computer system, and/or a combination of these and the set top box 104 .
  • the suggested keywords (the results of the keyword list search) are displayed as shown in FIG. 2 . If the user enters another character, the process continues at 405 .
  • the keyword and other search criteria parameters (e.g., selected rating, time slot, category, genre, media type, duration, etc.) are sent to and received by the cable headend 112 and subsequently the database server 101 (at 420).
  • as the user enters each character, an advertisement may be received by the set top box 104 (or retrieved from its memory) and displayed.
  • the displayed advertisements may be selected based on one or more of the suggested keywords displayed (along with advertiser keyword bids), the user's location, the time of day, day of the week, non-keyword search criteria (e.g., a user provided rating, channel, category, etc.) entered by the user, and/or other data.
  • the advertisements selected may be selected by the database server 101 (from the database 115 ), the set top box (from local memory), the cable headend 112 , and/or another computer system.
  • the database server 101 constructs a database query based on the search criteria (keyword and other search criteria parameters).
  • the database server 101 queries the database 115 .
  • the data to be queried may be stored in multiple databases in practice.
  • the search results may be sorted based on the degree to which each video matches the search criteria and on the average user rating associated with each video.
  • all or a portion of the sorted search results are transmitted to the set top box 104 for display to the user along with one or more advertisements.
  • the user may request to see additional search results, which are transmitted from the cable headend 112 to the set top box for display (e.g., along with a new advertisement).
  • the set top box 104 receives a request for a video listed in the search results.
  • the requested video content is transmitted by the cable headend 112 (e.g., retrieved from a video source such as the content library or from a live broadcast) to the set top box 104 for display.
  • a request for a user rating may be displayed by the set top box 104 such as, for example, at the end of (or during the ending credits of) the presentation of the received video content.
  • the set top box 104 may display a dialog box asking the user to enter a number from 1-10 (or select one of 5 displayed stars) for video content just watched.
  • the prompt for a user rating may also be displayed intermittently while the video content is being displayed—directly over the video content or during a commercial between video segments.
  • the user rating is received via the set top box 104 by the database server 101, which may process the user rating and store a new average rating (that factors in the user provided rating) for the video content unit.
  • a separate computer system processes and stores the user rating data.
  • the user may select an advertisement.
  • data indicating selection of an advertisement is received.
  • advertisements are paid for on a per-click basis. Consequently, at 465 an account associated with the selected advertisement may be debited an amount associated with a bid (or the bid amount) supplied by the advertiser.
  • the advertiser's account may be debited each time the advertisement is presented. It is worth noting that the present invention may be used with both pay per impression and pay per click advertisements simultaneously (i.e., with both types of ads being displayed concurrently).
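The per-click and per-impression billing described above might be handled as in the following sketch; the account structure, pricing labels, and amounts are illustrative assumptions rather than the patent's billing system.

```python
# Illustrative billing sketch: charge the advertiser's account according to
# the ad's pricing model when an impression or click event arrives.
accounts = {"advertiser_1": 10_000}          # balances in cents

def debit(account_id, amount_cents):
    accounts[account_id] -= amount_cents

def handle_ad_event(ad, event):
    """event is 'impression' or 'click'."""
    if ad["pricing"] == "per_impression" and event == "impression":
        debit(ad["account"], ad["bid_cents"])
    elif ad["pricing"] == "per_click" and event == "click":
        debit(ad["account"], ad["bid_cents"])

ad = {"account": "advertiser_1", "bid_cents": 40, "pricing": "per_click"}
handle_ad_event(ad, "impression")   # no charge: this ad pays per click
handle_ad_event(ad, "click")        # debited when the user selects the ad
print(accounts["advertiser_1"])     # 9960
```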
  • the cable headend 112 processes the response to the advertisement, which may include (1) transmitting additional information for display by the set top box, (2) transmitting an email with additional information to an email address (stored in memory in association with the user) or sending a text message to a telephone of the user, (3) transmitting the advertised video unit or audio content for storage and/or presentation to the user, (4) causing information to be delivered to the home of the user (e.g., via postal mail), or (5) executing a purchase (e.g., debiting an account associated with the user) and causing a product to be delivered to the user's residence (e.g., food) or a service to be provided to the user (e.g., a new telephone service).

Abstract

A system, method and computer program product for processing video is provided. In one embodiment, the method includes storing a plurality of transcripts in a memory along with information identifying an associated video content unit, receiving a search request comprising one or more keywords originating from a user, searching at least some of the plurality of transcripts for the one or more keywords, determining search results that comprise a plurality of video content units that satisfy a similarity threshold with the search request, wherein the plurality of video content units that satisfy a similarity threshold with the search request comprise video content units having a transcript that includes at least some of the one or more keywords, and transmitting information identifying at least some of the video content units of the search results to a remote device for presentation to the user. In addition, the method may include prompting users to rate video content units, storing an average user rating for video content units and displaying user ratings as part of the search results. In addition, advertisements may be selected based on suggested keywords and keywords entered by the user.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application claims priority to U.S. Provisional Application No. 61/162,844, filed Mar. 24, 2009, which is hereby incorporated by reference in its entirety for all purposes.
  • FIELD OF THE INVENTION
  • The present invention generally relates to processing video data, and more particularly to a system, method and computer program product for processing, searching, sorting and rating video content.
  • BACKGROUND OF THE INVENTION
  • Set-top boxes connect televisions and other displays to external sources of video content data. Many interactive set-top boxes allow users to choose to view one of a plurality of video content units (e.g. movies, television programs, etc.). Each day the amount of available video content increases and, therefore, finding desirable video content is becoming more and more challenging to the user. Users can search for content by title or, in some instances, by other information. However, the title, for example, is often not representative of the subject matter of the video content unit. Consequently, if a user does not know the title of the video content unit, he or she may not be able to find video content of interest.
  • Often the dialogue of a video content unit (spoken by persons in the video content unit) will contain audio content that is representative of the subject matter of the video content unit. However, users are typically unable to search for video content based on phrases used within the audio portion of the video content.
  • In addition, in many instances even though the user has searched or otherwise found a video (hereinafter also referred to as a video content unit) that the user believes may be of interest (e.g., contains the desired subject matter), he or she has no information about the quality of the video. Generally, television systems today do not allow users to rate the quality of the video content units viewed or to see the ratings of other users so that the user can make an informed decision to view or not view the video content unit.
  • Thus, there exists a need for a system, method and computer program product that facilitates searching of the audio content of a plurality of video content units for one or more keywords. In addition, there is a need to allow users to rate video content units and to sort search results based on the user ratings. These and other features may be provided by one or more embodiments of the present invention.
  • These features and advantages of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter.
  • SUMMARY OF INVENTION
  • The present invention provides a system, method and computer program product for processing video. In one embodiment, the method includes storing a plurality of transcripts in a memory along with information identifying an associated video content unit, receiving a search request comprising one or more keywords originating from a user, searching at least some of the plurality of transcripts for the one or more keywords, determining search results that comprise a plurality of video content units that satisfy a similarity threshold with the search request, wherein the plurality of video content units that satisfy a similarity threshold with the search request comprise video content units having a transcript that includes at least some of the one or more keywords, and transmitting information identifying at least some of the video content units of the search results to a remote device for presentation to the user. In addition, the method may include prompting users to rate video content units, storing an average user rating for video content units and displaying user ratings as part of the search results. In addition, advertisements may be selected based on suggested keywords and keywords entered by the user.
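As a compact illustration of the claimed flow (store transcripts keyed to video content units, receive keywords, match against a similarity threshold, and return identifiers of matching units), the following self-contained sketch uses plain in-memory data structures; it is an illustrative simplification, not the patent's implementation.

```python
# Compact sketch of the claimed method using in-memory data structures.
transcripts = {}                     # video_id -> transcript text

def store(video_id, transcript):
    transcripts[video_id] = transcript.lower()

def search(keywords, threshold=1):
    """Return video ids whose transcripts contain at least `threshold` keywords."""
    kws = [k.lower() for k in keywords]
    results = []
    for video_id, text in transcripts.items():
        hits = sum(1 for k in kws if k in text)
        if hits >= threshold:
            results.append((hits, video_id))
    results.sort(reverse=True)       # best keyword matches first
    return [vid for _, vid in results]

store(1, "Tonight we examine a famous UFO sighting over the desert.")
store(2, "A cooking special featuring regional desserts.")
print(search(["ufo", "sighting"]))   # -> [1]
```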
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The invention is further described in the detailed description that follows, by reference to the noted drawings by way of non-limiting illustrative embodiments of the invention, in which like reference numerals represent similar parts throughout the drawings. As should be understood, however, the invention is not limited to the precise arrangements and instrumentalities shown. In the drawings:
  • FIG. 1 is a diagram illustrating a system for practicing some embodiments of the present invention.
  • FIG. 2 depicts an example of a video search user interface that includes suggested keywords, according to an example embodiment of the present invention.
  • FIG. 3 is a flow chart of a method for processing video data in accordance with an example embodiment of the present invention.
  • FIG. 4 depicts an example of a video search user interface that includes search results, according to an example embodiment of the present invention.
  • FIG. 5 is a flow chart of a method for responding to a search request in accordance with an example embodiment of the present invention.
  • DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS
  • In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular networks, communication systems, computers, terminals, devices, components, techniques, data and network protocols, software products and systems, operating systems, development interfaces, hardware, etc. in order to provide a thorough understanding of the present invention.
  • However, it will be apparent to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. Detailed descriptions of well-known networks, communication systems, computers, terminals, devices, components, techniques, data and network protocols, software products and systems, operating systems, development interfaces, and hardware are omitted so as not to obscure the description.
  • The present invention includes a system, method and computer program product which processes a plurality of compressed video content units to create a transcript of the audio dialogue contained in each video. The transcripts are stored in memory with information identifying the video from which each was processed. In response to search requests from users that include keywords and other parameters, the system searches the transcripts of the videos for the keywords and the videos satisfying a similarity threshold with the search request are sorted in accordance with the degree of their conformity to the search request, and then transmitted to the users. The present invention also includes means for receiving and storing subjective rating data received from users (e.g., who have watched all or part of a video) that are then associated in memory with the videos, and may additionally (or instead) sort the search results based on the average user rating associated with each video.
  • As used herein, the terms video, video content unit, and content unit are used interchangeably and refer to any available video content such as a movie, a television program (e.g., a sitcom, a television series, a comedy, an infomercial, a commercial, a documentary, news programming), a sporting event, etc., any of which may comprise video on demand, pay per view, and live (when possible) or recorded content. In addition, such content may comprise multiple segments between which the broadcaster may insert other content units (e.g., commercials and/or news alerts).
  • FIG. 1 illustrates a system for practicing embodiments of the present invention. The system includes a cable headend 112, a database server 101, a data processor 102, a database 115, a plurality of set-top boxes 104 a-c, one or more video sources (e.g., satellite), and the internet 110. Some embodiments may include fewer, additional, or different components. For example, not all embodiments need to make use of the Internet and instead may include a local or private network.
  • The database server 101, in the shown embodiment, comprises a computer system (one or more co-located or distributed computers) or cluster executing one or more computer programs. The database server 101 may comprise any database management system (DBMS) or relational database management system (RDBMS) such as Oracle, DB2, IMS, MYSQL, MSSQL or any distributed database (or data warehouse). The database server 101 accesses the database 115, which may store the transcript of each video and information identifying the video from which each transcript was obtained. Database 115 may comprise a computer system (one or more co-located or distributed computers) executing one or more computer programs and also may store an average rating for each video content unit and the rating provided by each user for each video content unit. In some embodiments, a data warehouse comprised of multiple databases may be used as the storage system. In this embodiment, the database 115 may also store some videos. In other embodiments the videos may be stored elsewhere and remote from the location storing the transcripts.
  • The database server 101 may be configured to receive a search request from a user 103 that includes keywords 106 a-c. In various embodiments, the DBMS or RDBMS managing the textualized digital transcripts and video content units reduces the search request to a query execution plan using hash tables and the like. In response to a search request, the database server 101 is configured to search the transcripts to identify associated video content units found to satisfy a similarity threshold with the search request 109.
  • In addition to storing transcripts, the database 115 may store additional information about each video content unit in various fields. Such information may include the channels on which the content is to be broadcast (e.g., Discovery Channel®, ABC®, ESPN®, etc.), a title, a subject, description (e.g., episode information), the speakers or creators of the audio in the video content unit, Motion Picture Association of America (MPAA) rating information, duration, availability dates (and times) of the video content units, pricing information associated with the content units (if any), the media type (e.g., high definition (HD) or not HD), the genre of media featured in the content units (e.g., drama, comedy, horror, sports, news, etc.) and the like. Users may provide search requests that include parameters for any of these fields in addition to keywords. In addition, the user may search based on the user rating, such as searching for the highest rated content of a particular genre (e.g., comedy), the highest rated content at a particular time (e.g., at 2 PM, on now, or for the next time slot), or the highest rated content type (e.g., movie, television show, live broadcast, etc.). Thus, the database 115 may also store (or the system may access) electronic program guide information for video content that is not yet available (but is available at a future time).
  • In the illustrated embodiment, the database server 101 (or other computer system) provides video content units and search results to the set-top boxes 104 a-c via the cable headend 112, such as video content units that have been requested by users 103 a-c. In many embodiments, the video content units may be supplied from a first source and the search results from a second source.
  • Like the database server 101, the data processor 102 comprises a computer system (one or more co-located or distributed computers) executing one or more computer programs stored in a tangible medium. In this example embodiment, the data processor 102 accesses each video content unit to process the video content unit to obtain a transcript of the dialogue of each video content unit for storage in the database 115. Other data, such as non-keyword parameters, for each video content unit may be provided from metadata from any suitable source such as, for example, metadata from the source of the video content unit itself. In addition, the data processor 102 may also access live video content (e.g., received via a satellite receiver or other source) to convert the audio content to a transcript for storage. Thus, anytime after a live broadcast has started, the database 115 may include at least a portion of the transcript of the broadcasted video to allow users to search for keywords of live broadcasts currently available for viewing.
  • The data processor 102 may include the software and hardware necessary to access and process the video content such as a decoder to demultiplex MPEG compressed video content to obtain the closed caption content of the video. The data processor 102 may be functionally capable of demultiplexing content units, such as MPEG compatible (or compliant) video content units; analyzing the streams forming the video content units, such as elementary streams and program streams; remultiplexing content units (if necessary); and extracting audio data (such as the closed caption) from the content units.
  • Most of the world does not distinguish captions from subtitles. Subtitles, in the US, however, assume the viewer can hear but cannot understand the language, so the subtitles translate dialogue and some on-screen text. Captions assume the viewer is hearing impaired and aim to describe all significant audio content including spoken dialogue, non-speech information (such as the identity of speakers and their manner of speaking), along with music or sound effects using words or symbols. While the disclosed embodiment uses captions, other embodiments may use subtitles or captions and subtitles. As used herein, the term “audio text” is meant to collectively refer to subtitles and/or closed caption content that forms part of a video content unit or is otherwise supplied apart from the system of the present invention.
  • The content units may be encoded in one of any number of various data encodings well-known to those of skill in the art created by various video compression codecs, such as H.261, RealVideo and WMV. While the present invention is described in the context of extracting the closed caption content from compressed video, other embodiments may obtain the transcripts from other types of video.
  • NTSC programming captions are encoded into Line 21 of the vertical blanking interval. For ATSC (digital television) programming, three streams are encoded in the video: two are backward compatible Line 21 captions, and the third is a set of up to 63 additional caption streams encoded in EIA-708 format. EIA-708 captions are textual like traditional Line 21 captions. Captioning is transmitted and stored differently in PAL and SECAM countries, where teletext is used rather than Line 21, but the methods of preparation are similar.
  • The Line 21 data stream can consist of data from several data channels multiplexed together. Field 1 has four data channels: two Captions (CC1, CC2) and two Text (T1, T2). Field 2 has five additional data channels: two Captions (CC3, CC4), two Text (T3, T4), and Extended Data Services (XDS). XDS data structure is defined in EIA-608.
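Purely to illustrate the channel layout just described, the Field 1 and Field 2 assignments could be modeled as a small lookup structure (a sketch only, not an implementation of EIA-608 decoding):

```python
# Line 21 data channels per NTSC field, as described above (EIA-608).
LINE21_CHANNELS = {
    1: ("CC1", "CC2", "T1", "T2"),
    2: ("CC3", "CC4", "T3", "T4", "XDS"),
}

def channels_for_field(field: int):
    """Return the caption, text, and XDS channels multiplexed into a field."""
    return LINE21_CHANNELS[field]
```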
  • NTSC DVDs may carry closed captions in the Line 21 format, which are sent to the display device by the DVD player and can be displayed with a television's built-in decoder or a set-top decoder as usual. Independent of Line 21, video DVDs may also carry captions as a bitmap overlay which can be turned on and off via the DVD player, just like subtitles. HD DVD and Blu-ray disc media cannot carry Line 21 closed captioning due to the design of High-Definition Multimedia Interface (HDMI) specifications that were designed to replace older analog and digital standards, such as VGA, S-Video, and DVI. Both Blu-ray disc and HD DVD can use either DVD bitmap subtitles (with extended definition) or advanced subtitles to carry SDH type subtitling, the latter being an XML based textual format which includes font, styling and positioning information as well as a Unicode representation of the text. Thus, various means of obtaining the transcript of a video content unit may be used depending on the type of video content unit. In many instances, commercially available software may be employed such as, for example purposes only, Expert Caption® (or MPEGRepairHD) offered by PixelTools Corporation.
  • The data processor 102 is configured, in certain embodiments, to process the audio portions of the content units as they become available to the data processor 102. Thus, as new video content becomes available, data processor 102 may process the newly available video content units to obtain and store the transcripts (and other associated data) of each. In one example embodiment, the data processor 102 processes these video content units by demultiplexing packets within the content units. In the case of MPEG formatted content units, the audio and video portions of the content unit may be interleaved together into program streams. The program streams contain packets linked together by standard tables built when the content unit was multiplexed.
  • The data processor 102 creates a transcript from the caption data (which is thus a transcript of the dialog) and provides the transcript to the database server 101, which stores the transcript in the database 115 in association with information identifying the video content unit from which the transcript was obtained.
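A minimal sketch of this step, assuming a hypothetical helper `extract_caption_cues()` stands in for the demultiplexing and caption decoding described above, and reusing the illustrative `video_content_units` table from the earlier sketch:

```python
def extract_caption_cues(video_path):
    # Hypothetical stand-in for the demultiplex/decode step; a real data
    # processor would pull closed caption (or subtitle) text out of the
    # video content unit. Here it simply yields (timestamp, text) pairs.
    yield from [(0.0, "Example caption line."), (2.5, "Another caption line.")]

def store_transcript(conn, video_id, video_path):
    """Flatten the caption cues into a plain-text transcript of the dialogue
    and store it in association with the video content unit's identifier."""
    transcript = " ".join(text for _, text in extract_caption_cues(video_path))
    conn.execute(
        "UPDATE video_content_units SET transcript = ? WHERE video_id = ?",
        (transcript, video_id),
    )
    conn.commit()
```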
  • The cable headend 112 may comprise a master facility for interacting with set top boxes and transmitting video content units over a video content delivery system, such as a cable television system (which may use coaxial cable, fiber, or any suitable medium). The cable headend 112 may comprise all satellite and electronic equipment, systems, hardware, firmware (including conversion and transmission equipment), software, etcetera, necessary to receive and re-transmit content units over the infrastructure in communication with the cable headend 112 for this purpose and to interact with the set top boxes.
  • In the illustrated example embodiment, the cable headend 112 receives video content units from the satellite, the database server 101, or another content source. As discussed, in other embodiments, the selected video content units may be obtained by the cable headend 112 for delivery via any suitable means from any suitable source. For example, some video content may be stored locally, some may be received via satellite, and some may be received via a network. In this example, the database server 101, processor 102 and database 115 may serve multiple cable headends 112.
  • The set-top boxes 104 a-c comprise interactive devices that connect televisions and other audio-visual devices (e.g., televisions, stereos, etc.) to external sources of video content units, such as the cable headend 112, in a manner that allows users 103 to request and receive video content. The display may comprise a television such as a cathode ray tube (“CRT”), a liquid crystal display (“LCD”), a plasma display panel (“PDP”), or the like. The set top box 104 may comprise a stand alone device or a circuit card configured to be inserted into a television.
  • The keywords 106 a-c, shown in FIG. 1, comprise search terms entered into a set-top box 104 by a user 103 via a remote control. Typically, the keywords 106 a-c form part of a search request and are received by the cable headend 112 and provided to the database server 101. The keywords 106 a-c may comprise phrases (one or more words) that relate to content which the user wishes to search, which content typically will relate to the subject matter of interest of the user.
  • As shown in FIG. 2, as the user enters the characters (via the remote control) of the keyword into the query box 301, suggested keywords are provided below the query box in a keyword suggestion menu 302. If the user sees the desired keyword in the keyword suggestion menu, the user can scroll to the desired keyword and select the desired keyword to enter a query for that keyword. If the desired keyword is not present in the keyword suggestion menu 302, the user may enter the entire keyword manually (by inputting additional characters) and press the enter key (on the remote control) to enter the keyword. The keyword and any other search criteria are sent to the cable headend 112 by the set top box 104.
  • In this embodiment, the suggested keywords populated into the keyword suggestion menu may be stored in local memory of the set top box 104. The cable headend 112 may update (transmit for storage by the set top box 104) the suggested keyword list (which may be stored in the database 115) periodically (e.g., daily or weekly) based on new contemporary topics of interest. For example, Olympics might be included as a suggested keyword in memory of the set top box 104 only around the time of the Olympics. Instead, or additionally, the suggested keyword list may be updated based on the keyword searches of other users. Thus, if the number of users who enter a first keyword (e.g., baseball) over a given time period (e.g., day, week or month) is greater than a threshold, the first keyword may be added to the suggested keyword list, which is transmitted to all of the set top boxes 104. In response to reception of the search request including keywords 106 a-c, the headend 112 provides the search criteria to the database server 101, which searches the transcripts and other fields that may be required by the search request. It is worth noting that the video search interface of FIG. 2 is abbreviated and does not depict all of the search fields available to the user.
  • In another embodiment, each time the user enters a character the character is transmitted to the cable headend 112 and then to the database server 101. The database server 101 may search and retrieve the appropriate suggested keyword list for the characters entered by the user and transmit the suggested keyword list to the user (via the cable headend 112) for display. Thus, in this embodiment there is no need to update the suggested keyword list in each set top box because it only needs to be updated (if at all) in the database 115.
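The prefix matching and threshold-based list update described in the two paragraphs above might look roughly like the following; the sample keywords and the threshold value are invented purely for illustration.

```python
from collections import Counter

suggested_keywords = ["baseball", "basketball", "olympics", "ufo sightings"]
search_counts = Counter()   # keyword -> number of users who have searched it
ADD_THRESHOLD = 1000        # illustrative threshold, not taken from the text

def suggest(prefix, limit=5):
    """Return suggested keywords containing a word that begins with the
    characters entered so far (compare the keyword suggestion menu 302)."""
    prefix = prefix.lower()
    return [k for k in suggested_keywords
            if any(word.startswith(prefix) for word in k.split())][:limit]

def record_search(keyword):
    """Add a keyword to the suggested list once enough users have searched it;
    the updated list would then be pushed to the set top boxes."""
    search_counts[keyword] += 1
    if search_counts[keyword] > ADD_THRESHOLD and keyword not in suggested_keywords:
        suggested_keywords.append(keyword)
```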
  • The content library comprises the video content units and may form part of, or be separate from, the database 115. In some embodiments, the content library is in logical communication with the database server 101 (or cable headend 112) via a distributed network, or through other means well-known to those of skill in the art. In the illustrated embodiment, the content units (that are not received via broadcast) are stored in a computer readable database 115 controlled by the database server 101, such as IBM DB2, Oracle, or SQL.
  • As discussed, the keywords 106 a-c are formulated into a database query and then associated with a query execution plan which identifies video content units that satisfy the database query by searching the transcripts of the audio portions of the content units. These database queries may be generated using various languages including SQL, XPATH, and the like. In addition to the keywords 106 a-c, the user may also supply other identifiers for creating, or identifying, the proper query execution plan such as the title, the names of speakers featured in the content units, the ratings, price, genre, subject, channel, or other forms of additional information described above. Thus, the query may include keywords as well as other search criteria.
  • The keywords 106 a-c may be formulated into database queries which are dynamic (meaning the query is generated as needed by a user with a form that is unknown until the query is received by the database server 101 and which form is likely to change between requests) or static (meaning the database query is predefined and does not change form between requests, although the parametric data values of the query may change).
  • The dynamic database queries comprising the keywords 106 a-c may comprise more than just textualized words or phrases, but also logical bitwise operations that the database server 101 infers were intended by the user 103 such as NOR, OR, XOR, AND, NAND, (or quotes and plus signs) and the like, as well as other information. The users 103 a-c may be afforded a great deal of flexibility in searching the content library for not just words and phrases, but combinations of keywords, phrases, genre, channels, and orators featured throughout the content units.
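As one hedged sketch of such dynamic query construction, the keywords and an inferred logical operator could be assembled into a parameterized SQL statement against the illustrative table used earlier; the function below is an assumption about how this might be done, not a description of the actual query planner.

```python
def build_query(keywords, operator="AND", genre=None, channel=None):
    """Assemble a parameterized SQL query from keywords plus other criteria.

    `operator` reflects the logical combination (e.g. AND / OR) inferred from
    the user's input; the table and column names reuse the earlier sketch.
    """
    keyword_clause = f" {operator} ".join("transcript LIKE ?" for _ in keywords)
    params = ["%" + k + "%" for k in keywords]
    sql = ("SELECT video_id, title FROM video_content_units "
           f"WHERE ({keyword_clause})")
    for column, value in (("genre", genre), ("channel", channel)):
        if value is not None:
            sql += f" AND {column} = ?"
            params.append(value)
    return sql, params
```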
  • The content units are described above in relation to the other components of the dataflow interconnection. In the illustrated example embodiment, the search results 109 a-c identify the video content units that satisfy a database query comprising the keywords 106 a-c (which database query was applied to the transcripts associated with the content units stored in the content library) and may take the form as shown in FIG. 4.
  • FIG. 3 is a flow chart of an example method for processing video data in accordance with the present invention. The method 200 begins by receiving one or more compressed video content units into memory such as into a content library from a content source (e.g., a satellite communication, a network communication, etc.).
  • The method 200 progresses by processing the video content units to create transcripts of the content units at process 204. Next, the method 200 proceeds with the operation of storing the transcripts in memory, such as in database 115 at 206. Specifically, the stored transcript of each video content unit is stored in memory in association with (e.g., indexed to) information identifying the video content unit from which it was processed as well as the other parameters available for search (e.g., the ratings, price, genre, subject, channel, time slot, title, etc.).
  • Subsequently, a user 103 enters a search request that includes keywords (e.g., a static or dynamic search request) 106 from the set-top box 104, which request (including keywords) is transmitted to the cable headend 112, which is received at 208.
  • At 210, the method 200 comprises searching 210 the transcripts in memory for transcripts satisfying a similarity threshold with the keywords of the search request. For example, if a user provided a keyword “UFO,” the database server 101 may search the transcripts and identify the video content units that each have an associated transcript that includes the keyword “UFO.” If no transcripts include the keywords, the process may transmit a notification to the set top box 104 for display to the user.
  • As discussed, a search request may include parameters other than keywords. For example, the user may search based on any one or more channels on which the video content unit is to be broadcast (e.g., Discovery Channel®, ABC®, ESPN®, etc.), text in a title, text in a subject (i.e., description), the names of speakers or creators of the audio in the video content unit, duration, Motion Picture Association of America (MPAA) rating information, availability dates of the content unit(s), pricing information associated with the video content units, the media type (e.g., high definition (HD) or not HD), the genre of media featured in the content units (e.g., drama, comedy, horror, sports, news, etc.) and the like. Thus, the set top box 104 may provide a search form with fields to enter these parameters (and other fields, including keywords) to allow the user to provide a search request to search these parameters. To reduce search time, in this example embodiment only the transcripts of the video content units that satisfy all of the non-keyword parameters are searched.
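The "filter on the non-keyword parameters first, then search only those transcripts" behavior described above could be sketched as a two-stage search; the details below are illustrative assumptions built on the earlier table layout.

```python
def two_stage_search(conn, keywords, genre=None, channel=None):
    # Stage 1: narrow the candidate set using the non-keyword parameters only.
    sql = "SELECT video_id, transcript FROM video_content_units"
    clauses, params = [], []
    if genre is not None:
        clauses.append("genre = ?")
        params.append(genre)
    if channel is not None:
        clauses.append("channel = ?")
        params.append(channel)
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    candidates = conn.execute(sql, params).fetchall()

    # Stage 2: search only the transcripts of those candidates for the keywords.
    hits = []
    for video_id, transcript in candidates:
        text = (transcript or "").lower()
        if all(k.lower() in text for k in keywords):
            hits.append(video_id)
    return hits
```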
  • Subsequently, at 213 information identifying the content units that satisfy the search request (e.g., a list) is sorted based on their conformity to the search request. For example, the content units that satisfy the non-keyword parameters and that include the highest number of the keywords (e.g., the twenty content units that include the most keywords) may be sorted in descending order (i.e., the video content units having the greatest number of keywords sorted at the top of the list).
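A small sketch of that sort order, counting how many of the requested keywords appear in each transcript (the limit of twenty follows the example above):

```python
def sort_by_keyword_matches(results, keywords, limit=20):
    """Sort (video_id, transcript) pairs so the content units whose transcripts
    contain the greatest number of the requested keywords appear first."""
    def match_count(item):
        _, transcript = item
        text = (transcript or "").lower()
        return sum(1 for k in keywords if k.lower() in text)
    return sorted(results, key=match_count, reverse=True)[:limit]
```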
  • In addition, in some embodiments the search results may be sorted based on the average user rating associated with each video content unit. For example, the twenty content units that include the most keywords (and that satisfy the other non-keyword parameters) may be sorted based on the highest average user rating (i.e., the video content units having the highest average ratings sorted at the top of the list), instead of (or in addition to) being sorted based only on keywords. Furthermore, in addition to the search results, the database server 101 may also access and retrieve information on advertising to be displayed by the set top box concurrently with the search results. For example, based on the search request (the keywords and non-keyword data), the database server 101 may identify one or more video content units (such as video on demand or pay per view that requires the user to pay for the advertised content) to be advertised (listed) at the top of the display above the sorted search results, even though these video content units may not satisfy (or not fully satisfy) the search request. In addition to being based on the information forming the search request, the advertising may be selected based on the location of the user, the amount that an advertiser has agreed to pay (such as a bid if the advertisement is selected by the user or displayed to the user) given the keyword, location of the user, and other information. Thus, the advertising may be selected in a manner that is similar to pay per impression or pay per click advertising, where advertisers bid on keywords to have their advertisement presented and, in some embodiments, only pay the operator if their advertisement is selected by the user. The advertising may be for any commercially available product and is not limited to advertising of video content units. For example, the advertising may advertise audio content (MP3 files for downloading via the cable system), electronics, travel services, restaurants, local vendors, and any other service or product.
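The keyword-bid style of advertisement selection mentioned above might, in the simplest case, reduce to picking the highest bid among advertisers who bid on any of the search keywords; the bid table below is entirely hypothetical.

```python
# Hypothetical advertiser bids keyed by keyword: (advertisement, bid in dollars).
ad_bids = {
    "ufo": [("Pay-per-view: UFO documentary", 0.75), ("Sci-fi channel promo", 0.40)],
}

def select_advertisement(keywords, user_location=None):
    """Pick the highest-bidding advertisement matching any search keyword.

    `user_location` mirrors the location-based selection described above but
    is left unused in this sketch.
    """
    candidates = []
    for keyword in keywords:
        candidates.extend(ad_bids.get(keyword.lower(), []))
    return max(candidates, key=lambda ad: ad[1]) if candidates else None
```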
  • At 214, the search results, which in this example embodiment may comprise a sorted list of video content units having transcripts that satisfy a similarity threshold with the search request (and any advertisements), are transmitted 214 to the set-top box 104 for presentation to a user 103. FIG. 4 depicts an example of a format for displaying search results that may be employed for some embodiments.
  • The user 103 may select a content unit identified in the search results 109 for viewing by scrolling through the search results list and selecting the desired video content unit (with the remote control). This user selection is transmitted at 218 by the set-top box 104 to the cable headend 112, which, in response, transmits the selected video content unit to the set-top box 104 for presentation to the user at 222. In some embodiments, the content units identified in the search results 109 may not be available for immediate viewing, and the user 103 may elect to automatically receive and record the content unit when it becomes available (by responding to prompts provided by the set top box 104). In still further embodiments, the content units listed in the search results 109 may be associated with a price which the user 103 must agree to pay before the content unit itself is finally transmitted to the user 103 by the cable headend 112.
  • In some embodiments, the method 200 further comprises an operation of receiving rating data from the user 103 (e.g., after transmitting the content unit to the user 103 for viewing). The rating data comprises feedback from the user 103 on the quality of the content unit, the desirability of the content unit, or on the subjective degree to which the content unit satisfied the user's 103 selection (expectations). Examples of rating data include scalable ranges of values, such as 1-100, 1-10, A-F, or 1-5 (e.g., 5 stars).
  • The rating data may be provided by users 103 through the set-top boxes 104 and supplied to the database server 101 by the cable headend 112. The database server 101 may process each user rating to provide an average user rating which is stored in the database 115 in association with information identifying the video content unit (and transcript). In such an example embodiment, the search results may be sorted based on the average user rating. In addition, the rating may be a non-keyword parameter that the user may include as part of a search request. For example, the user may elect to search for only four and five star movies having a particular keyword.
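One simple way the average rating could be maintained is to recompute it whenever a new user rating arrives, again against the illustrative table used earlier; the in-memory rating store is an assumption of this sketch.

```python
from statistics import mean

ratings_by_video = {}   # video_id -> list of individual user ratings (e.g. 1-5)

def record_rating(conn, video_id, rating):
    """Store a user's rating and refresh the stored average for that video."""
    ratings_by_video.setdefault(video_id, []).append(rating)
    average = mean(ratings_by_video[video_id])
    conn.execute(
        "UPDATE video_content_units SET avg_rating = ? WHERE video_id = ?",
        (average, video_id),
    )
    conn.commit()
    return average
```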
  • FIG. 5 depicts a process of using an example embodiment of the present invention. At 405, the set top box 104 receives a character for a keyword. At 410, the set top box 104 searches the suggested keyword list to identify keywords (which may be a word or phrase) that include a word that begins with the text entered so far by the user 103. In other embodiments, the suggested keyword list searching may be performed by the cable headend 112, the database server 101, another remote computer system, and/or a combination of these and the set top box 104. At 415, the suggested keywords (the results of the keyword list search) are displayed as shown in FIG. 2. If the user enters another character, the process continues at 405. If the user selects a suggested keyword or presses an enter key on the remote to enter the user provided keyword, the keyword and other search criteria parameters (e.g., selected rating, time slot, category, genre, media type, duration, etc.) are sent to and received by the cable headend 112 and subsequently the database server 101 (at 420). As the user enters each character, an advertisement may be received by the set top box 104 (or retrieved from memory) and displayed. The displayed advertisements may be selected based on one or more of the suggested keywords displayed (along with advertiser keyword bids), the user's location, the time of day, day of the week, non-keyword search criteria (e.g., a user provided rating, channel, category, etc.) entered by the user, and/or other data. The advertisements may be selected by the database server 101 (from the database 115), the set top box (from local memory), the cable headend 112, and/or another computer system.
  • At 425 the database server 101 (or alternately the set top box 104) constructs a database query based on the search criteria (keyword and other search criteria parameters). At 430, the database server 101 queries the database 115. As discussed above, the data to be queried may be stored in multiple databases in practice. The search results may be sorted based on the degree to which each video matches the search criteria and on the average user rating associated with each video. At 435 all or a portion of the sorted search results are transmitted to the set top box 104 for display to the user along with one or more advertisements. If only a portion of the search results is transmitted, the user may request to see additional search results, which are transmitted from the cable headend 112 to the set top box for display (e.g., along with a new advertisement). At 440 the set top box 104 receives a request for a video listed in the search results. At 445, the requested video content is transmitted by the cable headend 112 (e.g., retrieved from a video source such as the content library or from a live broadcast) to the set top box 104 for display. At 450 a request for a user rating may be displayed by the set top box 104 such as, for example, at the end of (or during the ending credits of) the presentation of the received video content. For example, as the user finishes watching the requested video the set top box 104 may display a dialog box asking the user to enter a number from 1-10 (or select one of 5 displayed stars) for the video content just watched. In addition, the prompt for a user rating may also be displayed intermittently while the video content is being displayed, directly over the video content or during a commercial between video segments. At 455 the user rating is received via the set top box 104 by the database server 101, which may process the user rating and store a new average rating (that factors in the user provided rating) for the video content unit. In other embodiments, a separate computer system processes and stores the user rating data.
  • Instead of selecting a video from the search results, the user may select an advertisement. At 460 data indicating selection of an advertisement is received. In this example embodiment, advertisements are paid for on a per-click basis. Consequently, at 465 an account associated with the selected advertisement may be debited an amount associated with a bid (or the bid amount) supplied by the advertiser. In other embodiments, such as those where advertisements are paid for on a per impression basis, the advertiser's account may be debited each time the advertisement is presented. It is worth noting that the present invention may be used with both pay per impression and pay per click advertisements simultaneously (i.e., with both types of ads being displayed concurrently).
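The per-click versus per-impression billing distinction could be sketched as follows; the account structure, balances, and event names are assumptions made for illustration.

```python
# Hypothetical advertiser accounts: balance, billing model, and bid amount.
accounts = {
    "advertiser_a": {"balance": 100.00, "model": "per_click", "bid": 0.75},
    "advertiser_b": {"balance": 250.00, "model": "per_impression", "bid": 0.02},
}

def process_ad_event(advertiser, event):
    """Debit the advertiser when the billing model matches the event:
    'per_click' accounts are charged on selection, 'per_impression' on display."""
    acct = accounts[advertiser]
    charge_on = "selected" if acct["model"] == "per_click" else "displayed"
    if event == charge_on:
        acct["balance"] -= acct["bid"]
    return acct["balance"]
```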
  • At 470 the cable headend 112 (or other computer system) processes the response to the advertisement, which may include (1) transmitting additional information for display by the set top box, (2) transmitting an email with additional information to an email address (stored in memory in association with the user) or sending a text message to a telephone of the user, (3) transmitting the advertised video unit or audio content for storage and/or presentation to the user, (4) causing information to be delivered to the home of the user (e.g., via postal mail), or (5) executing a purchase (e.g., debiting an account associated with the user) and causing a product to be delivered to the user's residence (e.g., food) or a service to be provided to the user (e.g., a new telephone service).
  • It is to be understood that the foregoing illustrative embodiments have been provided merely for the purpose of explanation and are in no way to be construed as limiting of the invention. Words used herein are words of description and illustration, rather than words of limitation. In addition, the advantages and objectives described herein may not be realized by each and every embodiment practicing the present invention. Further, although the invention has been described herein with reference to particular structure, materials and/or embodiments, the invention is not intended to be limited to the particulars disclosed herein. Rather, the invention extends to all functionally equivalent structures, methods and uses, such as are within the scope of the appended claims. Those skilled in the art, having the benefit of the teachings of this specification, may effect numerous modifications thereto, and changes may be made without departing from the scope and spirit of the invention.

Claims (32)

1. A method, implemented at least in part by a computer system, for processing video content, comprising:
processing audio portions of a plurality of video content units to create a transcript for each of the plurality of video content units that comprises a textual representation of the dialogue of each video content unit;
storing a transcript in memory for each of the plurality of video content units;
receiving a search request comprising one or more keywords originating from a user;
searching at least some of a plurality of transcripts stored in memory for the one or more keywords;
identifying search results that comprises a set of video content units having transcripts that satisfy a similarity threshold with the search request; and
transmitting at least a portion of the search results to a remote device for presentation to the user.
2. The method of claim 1, wherein the content units comprise MPEG video files compatible with an MPEG decoder.
3. The method of claim 1, wherein said processing comprises extracting one of closed caption data and embedded video text from the plurality of video content units.
4. The method of claim 1, wherein the search request is received from a set-top box.
5. The method of claim 4, further comprising:
receiving a request via the set-top box for a first video content unit in the search results; and
transmitting the first video content unit to the set-top box for presentation to the user.
6. The method of claim 1, further comprising sorting a list of the video content units in the search results based, at least in part, on the number of keywords present in the transcript of each video content unit in the search results.
7. The method of claim 1, further comprising searching only the transcripts of a subset of the plurality of video content units in accordance with at least one search criteria selected from the group comprising: genre, broadcast channel, MPAA rating, and actor identity.
8. The method of claim 1, wherein at least some of the plurality of video content units comprises live broadcast video content.
9. The method of claim 1, further comprising:
receiving a user rating for a multitude of the video content units from a plurality of viewers;
based on user ratings, storing an average rating in memory in association with each of the respective multitude of video content units; and
sorting the search results according to the average rating.
10. The method of claim 1, wherein said receiving the search request comprises receiving a plurality of textual characters, the method further comprising:
after receiving each of a group of the textual characters, displaying a plurality of suggested keywords.
11. The method of claim 10, wherein said receiving the search request further comprises receiving a selection of one of the suggested keywords.
12. The method of claim 1, further comprising:
selecting an advertisement based, at least in part, on the one or more keywords; and
transmitting the advertisement for display to the user.
13. A method, implemented at least in part by a computer system, for processing video content, comprising:
storing a plurality of transcripts in a memory along with information identifying an associated video content unit;
receiving a search request comprising one or more keywords originating from a user;
searching at least some of the plurality of transcripts for the one or more keywords;
determining search results that comprises a plurality of video content units that satisfy a similarity threshold with the search request;
wherein the plurality of video content units that satisfy a similarity threshold with the search request comprises video content units having a transcript that includes at least some of the one or more keywords; and
transmitting information identifying at least some of the video content units of the search results to a remote device for presentation to the user.
14. The method of claim 13, further comprising:
selecting an advertisement based, at least in part, on the one or more keywords; and
transmitting the advertisement to the remote device for display to the user concurrently with the information identifying at least some of the video content units of the search results.
15. The method of claim 13, further comprising sorting a list of the video content units in the search results based, at least in part, on the number of keywords present in the transcript of each video content unit in the search results.
16. The method of claim 13, further comprising sorting a list of the video content units in the search results based, at least in part, on an average user rating associated with each video content unit.
17. The method of claim 13, further comprising:
receiving user rating data for a multitude of video content units from a plurality of viewers; and
based on the user rating data, storing an average rating in memory in association with each of the respective multitude of video content units.
18. The method of claim 17, further comprising displaying a request to rate a video content unit and wherein said user rating data is received in response to said displaying the request.
19. The method of claim 18, wherein said displaying is performed while the video content unit is being displayed.
20. The method of claim 13, wherein said receiving the search request comprises receiving a plurality of textual characters, the method further comprising:
after receiving each character of a group of the textual characters, displaying a plurality of suggested keywords.
21. The method of claim 20, further comprising:
selecting an advertisement based, at least in part, on the one or more suggested keywords; and
displaying the selected advertisement to the user.
22. The method of claim 20, wherein said selecting and displaying is performed each time the user enters a character of at least a multitude of a plurality of characters entered.
23. The method of claim 13, further comprising selecting a pay per view video content unit based, at least in part, on the one or more keywords and displaying an advertisement for the selected pay per view video content unit to the user.
24. The method of claim 13, wherein in addition to one or more keywords the search request further includes at least one search criteria selected from the group of genre and MPAA rating.
25. A method, implemented at least in part by a computer system, for processing video content, comprising:
receiving a search request comprising one or more keywords originating from a user;
searching a plurality of transcripts for the one or more keywords;
wherein each of the plurality of transcripts comprises dialogue of an associated video content unit;
determining search results that comprises a plurality of video content units that satisfy a similarity threshold with the search request;
wherein the plurality of video content units that satisfy a similarity threshold with the search request comprises video content units having an associated transcript that includes at least some of the one or more keywords;
transmitting information identifying at least some of the video content units of the search results to a remote device for display to the user;
selecting a first advertisement based, at least in part, on the one or more keywords; and
transmitting the selected advertisement to the remote device for display to the user concurrently with the information identifying at least some of the video content units of the search results.
26. The method of claim 25, further comprising receiving a user selection of a selected video content unit that comprises one of the video content units displayed to the user; and
transmitting the selected video content unit to the remote device for presentation to the user.
27. The method of claim 25, wherein said receiving the search request comprises receiving a plurality of textual characters, the method further comprising:
after receiving each character of a group of the textual characters, displaying a plurality of suggested keywords.
28. The method of claim 27, further comprising:
selecting a second advertisement based, at least in part, on the one or more suggested keywords; and
displaying the second advertisement to the user concurrently with the one or more suggested keywords.
29. The method of claim 25, further comprising:
receiving user rating data for a multitude of video content units from a plurality of viewers; and
based on the user rating, storing an average rating in memory in association with each of the respective multitude of video content units.
30. The method of claim 29, further comprising displaying a request to rate a video content unit and wherein said user rating data is received in response to said displaying.
31. The method of claim 30, wherein said displaying is performed while the video content unit is being presented.
32. The method of claim 25, further comprising receiving a notification indicating a user response to the advertisement; and
in response, transmitting additional information to the remote device related to a product or service advertised in the first advertisement.
US12/728,681 2009-03-24 2010-03-22 System, Method and Computer Program Product for Processing Video Data Abandoned US20100251291A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US12/728,681 US20100251291A1 (en) 2009-03-24 2010-03-22 System, Method and Computer Program Product for Processing Video Data
US15/194,842 US10311102B2 (en) 2009-03-24 2016-06-28 System, method and computer program product for processing video data

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US16284409P 2009-03-24 2009-03-24
US12/728,681 US20100251291A1 (en) 2009-03-24 2010-03-22 System, Method and Computer Program Product for Processing Video Data

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/194,842 Continuation US10311102B2 (en) 2009-03-24 2016-06-28 System, method and computer program product for processing video data

Publications (1)

Publication Number Publication Date
US20100251291A1 true US20100251291A1 (en) 2010-09-30

Family

ID=42785960

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/728,681 Abandoned US20100251291A1 (en) 2009-03-24 2010-03-22 System, Method and Computer Program Product for Processing Video Data
US15/194,842 Active US10311102B2 (en) 2009-03-24 2016-06-28 System, method and computer program product for processing video data

Family Applications After (1)

Application Number Title Priority Date Filing Date
US15/194,842 Active US10311102B2 (en) 2009-03-24 2016-06-28 System, method and computer program product for processing video data

Country Status (1)

Country Link
US (2) US20100251291A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9781479B2 (en) * 2016-02-29 2017-10-03 Rovi Guides, Inc. Methods and systems of recommending media assets to users based on content of other media assets
US11028556B1 (en) 2020-02-03 2021-06-08 Cciip Llc Method of exposing a utility buried below a roadway and a bore hole cleaning device
US11359340B2 (en) 2020-06-19 2022-06-14 Cciip Llc Microtrenching system having a vacuum hose support and method of microtrenching
US11466428B1 (en) 2021-12-22 2022-10-11 Cciip Llc Microtrench plug and method for securing an optical fiber and/or microduct/innerduct in a microtrench
US11215781B1 (en) 2020-11-30 2022-01-04 Cciip Llc Roadway access hole cutter having a utility avoidance safety device, method of cutting a hole in a roadway, method of cutting a horizontal hole under a roadway
US11815728B2 (en) 2020-11-30 2023-11-14 Cciip Llc Roadway access hole cutter and method of cutting a square or rectangular roadway access hole
US11214450B1 (en) 2021-03-11 2022-01-04 Cciip Llc Method of proofing an innerduct/microduct and proofing manifold

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5835667A (en) * 1994-10-14 1998-11-10 Carnegie Mellon University Method and apparatus for creating a searchable digital video library and a system and method of using such a library
US6177931B1 (en) * 1996-12-19 2001-01-23 Index Systems, Inc. Systems and methods for displaying and recording control interface with television programs, video, advertising information and program scheduling information
US20010001160A1 (en) * 1996-03-29 2001-05-10 Microsoft Corporation Interactive entertainment system for presenting supplemental interactive content together with continuous video programs
US6282713B1 (en) * 1998-12-21 2001-08-28 Sony Corporation Method and apparatus for providing on-demand electronic advertising
US20020066097A1 (en) * 1997-04-30 2002-05-30 Sony Corporation Information broadcasting method, information broadcasting system, and receiving apparatus for transmitting data other than program through digital broadcasting
US6408270B1 (en) * 1998-06-30 2002-06-18 Microsoft Corporation Phonetic sorting and searching
US20030208758A1 (en) * 1995-10-02 2003-11-06 Starsight Telecast, Inc. Method and system for displaying panel advertisements in an electronic program guide
US20060212897A1 (en) * 2005-03-18 2006-09-21 Microsoft Corporation System and method for utilizing the content of audio/video files to select advertising content for display
US20060248078A1 (en) * 2005-04-15 2006-11-02 William Gross Search engine with suggestion tool and method of using same
US20070027751A1 (en) * 2005-07-29 2007-02-01 Chad Carson Positioning advertisements on the bases of expected revenue
US7269837B1 (en) * 2000-05-31 2007-09-11 Your Choice Interactive, Inc. Interactive television advertising method
US20080033806A1 (en) * 2006-07-20 2008-02-07 Howe Karen N Targeted advertising for playlists based upon search queries
US20080126303A1 (en) * 2006-09-07 2008-05-29 Seung-Taek Park System and method for identifying media content items and related media content items

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8060906B2 (en) * 2001-04-06 2011-11-15 At&T Intellectual Property Ii, L.P. Method and apparatus for interactively retrieving content related to previous query results
WO2005109905A2 (en) * 2004-04-30 2005-11-17 Vulcan Inc. Time-based graphical user interface for television program information
US20060143236A1 (en) * 2004-12-29 2006-06-29 Bandwidth Productions Inc. Interactive music playlist sharing system and methods
US20070112758A1 (en) * 2005-11-14 2007-05-17 Aol Llc Displaying User Feedback for Search Results From People Related to a User
US20080059453A1 (en) * 2006-08-29 2008-03-06 Raphael Laderman System and method for enhancing the result of a query
US20080172413A1 (en) * 2007-01-12 2008-07-17 Fu-Sheng Chiu Mobile multimedia content distribution and access
US20080182587A1 (en) * 2007-01-29 2008-07-31 Bennett Daniel Luke Attractions network and mobile devices for use in such network
US20080263103A1 (en) * 2007-03-02 2008-10-23 Mcgregor Lucas Digital asset management system (DAMS)
US8781996B2 (en) * 2007-07-12 2014-07-15 At&T Intellectual Property Ii, L.P. Systems, methods and computer program products for searching within movies (SWiM)
US9304994B2 (en) * 2007-08-30 2016-04-05 At&T Intellectual Property Ii, L.P. Media management based on derived quantitative data of quality
US20090271283A1 (en) * 2008-02-13 2009-10-29 Catholic Content, Llc Network Media Distribution
US9202460B2 (en) * 2008-05-14 2015-12-01 At&T Intellectual Property I, Lp Methods and apparatus to generate a speech recognition library
US8447643B2 (en) * 2008-06-02 2013-05-21 Melvin L. Barnes, Jr. System and method for collecting and distributing reviews and ratings
US9928242B2 (en) * 2008-11-05 2018-03-27 Oracle International Corporation Managing the content of shared slide presentations

Cited By (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10382723B2 (en) * 2005-11-30 2019-08-13 S.I.Sv.El. Societa Italiana Per Lo Sviluppo Dell'elettronica S.P.A. Method and system for generating a recommendation for at least one further content item
US20080295133A1 (en) * 2005-11-30 2008-11-27 Koninklijke Philips Electronics, N.V. Method and System for Generating a Recommendation for at Least One Further Content Item
US20220155913A1 (en) * 2009-03-11 2022-05-19 Sony Group Corporation Accessing item information for an item selected from a displayed image
US20110283324A1 (en) * 2010-04-14 2011-11-17 Electronics And Telecommunications Research Institute Method and apparatus of digital broadcasting service using automatic keyword generation
US11348148B2 (en) 2010-05-26 2022-05-31 Userzoom Technologies, Inc. Systems and methods for an intelligent sourcing engine for study participants
US11068374B2 (en) * 2010-05-26 2021-07-20 Userzoom Technologies, Inc. Generation, administration and analysis of user experience testing
US11544135B2 (en) 2010-05-26 2023-01-03 Userzoom Technologies, Inc. Systems and methods for the analysis of user experience testing with AI acceleration
US11704705B2 (en) 2010-05-26 2023-07-18 Userzoom Technologies Inc. Systems and methods for an intelligent sourcing engine for study participants
US11709754B2 (en) 2010-05-26 2023-07-25 Userzoom Technologies, Inc. Generation, administration and analysis of user experience testing
US11494793B2 (en) 2010-05-26 2022-11-08 Userzoom Technologies, Inc. Systems and methods for the generation, administration and analysis of click testing
US9223888B2 (en) * 2011-09-08 2015-12-29 Bryce Hutchings Combining client and server classifiers to achieve better accuracy and performance results in web page classification
US20130067590A1 (en) * 2011-09-08 2013-03-14 Microsoft Corporation Combining client and server classifiers to achieve better accuracy and performance results in web page classification
US9584875B2 (en) * 2012-09-05 2017-02-28 Verizon Patent And Licensing Inc. Integrated video content
US20140068678A1 (en) * 2012-09-05 2014-03-06 Verizon Patent And Licensing Inc. Integrated video content
US20140089288A1 (en) * 2012-09-26 2014-03-27 Farah Ali Network content rating
US20140280179A1 (en) * 2013-03-15 2014-09-18 Advanced Search Laboratories, lnc. System and Apparatus for Information Retrieval
CN104754373A (en) * 2013-12-27 2015-07-01 联想(北京)有限公司 Video acquisition method and electronic device
US10311101B2 (en) 2014-04-10 2019-06-04 Google Llc Methods, systems, and media for searching for video content
US9672280B2 (en) * 2014-04-10 2017-06-06 Google Inc. Methods, systems, and media for searching for video content
US20150293996A1 (en) * 2014-04-10 2015-10-15 Google Inc. Methods, systems, and media for searching for video content
CN103984740A (en) * 2014-05-23 2014-08-13 合一网络技术(北京)有限公司 Combination label based search page display method and system
US20180027296A1 (en) * 2015-02-11 2018-01-25 Samsung Electronics Co., Ltd. Image processing device, and method and system for controlling image processing device
US10055406B2 (en) * 2015-09-08 2018-08-21 Samsung Electronics Co., Ltd. Server, user terminal, and method for controlling server and user terminal
US20170068661A1 (en) * 2015-09-08 2017-03-09 Samsung Electronics Co., Ltd. Server, user terminal, and method for controlling server and user terminal
US10217489B2 (en) * 2015-12-07 2019-02-26 Cyberlink Corp. Systems and methods for media track management in a media editing tool
US20170162228A1 (en) * 2015-12-07 2017-06-08 Cyberlink Corp. Systems and methods for media track management in a media editing tool
US10860638B2 (en) * 2016-04-07 2020-12-08 Uday Gorrepati System and method for interactive searching of transcripts and associated audio/visual/textual/other data files
US20170293618A1 (en) * 2016-04-07 2017-10-12 Uday Gorrepati System and method for interactive searching of transcripts and associated audio/visual/textual/other data files
US10372883B2 (en) 2016-06-24 2019-08-06 Scripps Networks Interactive, Inc. Satellite and central asset registry systems and methods and rights management systems
US10769248B2 (en) 2016-06-24 2020-09-08 Discovery, Inc. Satellite and central asset registry systems and methods and rights management systems
US10452714B2 (en) 2016-06-24 2019-10-22 Scripps Networks Interactive, Inc. Central asset registry system and method
US11868445B2 (en) 2016-06-24 2024-01-09 Discovery Communications, Llc Systems and methods for federated searches of assets in disparate dam repositories
US20180113583A1 (en) * 2016-10-20 2018-04-26 Samsung Electronics Co., Ltd. Device and method for providing at least one functionality to a user with respect to at least one of a plurality of webpages
US10277953B2 (en) * 2016-12-06 2019-04-30 The Directv Group, Inc. Search for content data in content
WO2019134284A1 (en) * 2018-01-08 2019-07-11 武汉斗鱼网络科技有限公司 Method and apparatus for recognizing user, and computer device
CN108184148A (en) * 2018-01-08 2018-06-19 武汉斗鱼网络科技有限公司 A kind of method, apparatus and computer equipment for being used to identify user
EP3769207A4 (en) * 2018-03-23 2022-01-05 Nedl.Com, Inc. Real-time audio stream search and presentation system
US11342002B1 (en) * 2018-12-05 2022-05-24 Amazon Technologies, Inc. Caption timestamp predictor
US11909100B2 (en) 2019-01-31 2024-02-20 Userzoom Technologies, Inc. Systems and methods for the analysis of user experience testing with AI acceleration
CN109933691A (en) * 2019-02-11 2019-06-25 北京百度网讯科技有限公司 Method, apparatus, equipment and storage medium for content retrieval

Also Published As

Publication number Publication date
US20170032032A1 (en) 2017-02-02
US10311102B2 (en) 2019-06-04

Similar Documents

Publication Publication Date Title
US10311102B2 (en) System, method and computer program product for processing video data
US11758237B2 (en) Television related searching
US10979772B2 (en) Method and system for presenting additional content at a media system
US9635418B2 (en) Systems and methods for searching based on information in commercials
JP4625656B2 (en) Interactive content without triggers
US9258610B2 (en) Correlation of media metadata gathered from diverse sources
AU2009225834B2 (en) Method and apparatus for replacement of audio data in a recorded audio/video stream
CA3008715C (en) Systems and methods for detecting inconsistent user actions and providing feedback
US7849481B2 (en) Notification for interactive content
US7950033B2 (en) Utilization of relational metadata in a television system
US20030097657A1 (en) Method and system for delivery of targeted programming
US20100262992A1 (en) Methods and apparatus for overlaying content onto a common video stream
US20030226141A1 (en) Advertisement data store
KR20080085677A (en) Apparatus and method for providing contents
US20110191153A1 (en) Method and system for presenting advertisements
US10796089B2 (en) Enhanced timed text in video streaming
US20170238066A1 (en) Method and computer program product for selectively displaying advertisments during media playback
KR20080087058A (en) Offering system of contents and advertisement of keyword base and method thereof
Thomas Metadata for Enhanced Electronic Program Guides
KR101299851B1 (en) System for making a nonstop broadcasting program schedule comprising commercial bridge and Method using the same

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION