WO2013181756A1 - System and method for generating and disseminating digital video - Google Patents

System and method for generating and disseminating digital video

Info

Publication number
WO2013181756A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
synthesis
resources
generation engine
videos
Prior art date
Application number
PCT/CA2013/050427
Other languages
French (fr)
Inventor
Mohammad AAMIR
Ming Han
Original Assignee
Jugnoo Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jugnoo Inc. filed Critical Jugnoo Inc.
Publication of WO2013181756A1 publication Critical patent/WO2013181756A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/27 Server based end-user applications
    • H04N 21/274 Storing end-user multimedia data in response to end-user request, e.g. network recorder
    • H04N 21/2743 Video hosting of uploaded data from client
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/85 Assembly of content; Generation of multimedia applications
    • H04N 21/854 Content authoring

Definitions

  • the following relates generally to generating and disseminating digital video.
  • a video generation engine for generating videos for a plurality of users, the video generation engine characterized by: (a) a profiler enabling each of said plurality of users to generate a plurality of video profiles each for a respective video, each said video profile specifying a plurality of assets to be included in said video; and (b) a synthesizer operable to allocate video synthesis resources among said video profiles to generate said videos.
  • a method for generating videos for a plurality of users characterized by: (a) enabling each of said plurality of users to generate a plurality of video profiles each for a respective video, each said video profile specifying a plurality of assets to be included in said video; and (b) allocating video synthesis resources, comprising one or more processors, among said video profiles to generate said videos.
  • Fig. 1 is a networked system comprising a video generation engine
  • Fig. 2 is a more detailed example of a video generation engine
  • Fig. 3 is a method for generating and disseminating digital video.
  • any module, engine, component, server, computer, terminal or device exemplified herein that executes instructions may include or otherwise have access to computer readable media such as storage media, computer storage media, or data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape.
  • Computer storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by an application, module, or both. Any such computer storage media may be part of the device or accessible or connectable thereto. Any application or module herein described may be implemented using computer readable/executable instructions that may be stored or otherwise held by such computer readable media.
  • a system and method are provided for remotely generating and causing the dissemination of digital video.
  • the system and method provide a storyboarding unit enabling a user to apply visual or aural effect preferences (hereinafter collectively referred to as "visual effect preferences" though aural effect preferences are applied to audio assets) to a plurality of provided assets.
  • the assets are provided to a video generation engine that allocates resources among a plurality of generation tasks for a plurality of users to generate the videos and cause the dissemination of those videos. Users can demand video generation from the video generation engine which can operate in the cloud for a large number of concurrent users.
  • system and method described herein may be equally utilized by desktop and mobile users, and enable both of these types of users to generate similar quality digital video.
  • the system and method enable the generation and syndication of videos synthesized from still images, video clips, textual information and audio tracks, which can be distributed across the social web and consumed by the public.
  • the system comprises a video generation engine which may be operable to accept and store tokenized multimedia content (assets) obtained from a plurality of users by respective user interfaces, coordinate and allocate resources for video synthesis, orchestrate video synthesis by applying animation effects to the assets intelligently and distribute encoded videos with index preferences in real-time or near real-time.
  • the video generation engine generates a digital video which may comprise placement of assets comprising text, images, audio, video clips and typography; transition of assets; and entry and exit of assets in sequenced frames.
  • Each asset may be allocated a frame, such that frame length may vary, though it may be constant.
  • a video generation engine 100 is linked to a network 102, such as the internet.
  • the network may be in communication with a plurality of computers, referred to herein as the cloud 104.
  • the cloud 104 may be one or more servers providing one or more social networks 106.
  • the cloud 104 may be one or more servers and/or terminals providing content servers 108 that provide third party publicly accessible content.
  • One or more users are also connected to the network 102 by a client terminal 110.
  • the client terminal may be any of a desktop computer, server, laptop, mobile phone, smart phone, tablet, etc.
  • the client terminal comprises a web browser and/or is operable to execute third party applications.
  • the client terminal may be operable to exchange messages by any messaging medium including SMS, MMS, HTTP, FTP, etc.
  • SMS Short Message Service
  • MMS Multimedia Messaging Service
  • HTTP HyperText Transfer Protocol
  • FTP File Transfer Protocol
  • the video generation engine may comprise a user interface 202, upload optimizer 204, tokenizer 206, profiler 208, synthesizer 210, distributor 212, asset repository 214, effect repository 216 and video repository 218.
  • the video generation engine comprises at least one processor 220 and memory 222 operable to provide the functionality described herein.
  • a plurality of processors and/or memories are dedicated to the operation of the synthesizer and another plurality of processors and/or memory are dedicated to the operation of the distributor.
  • the video generation engine may comprise a plurality of distributed computers. These distributed computers may be linked to one another by network link (e.g., by the cloud) and may be subject to resource coordination by a single one of the distributed computers or on a distributed or hierarchical basis.
  • a user accesses the video generation engine via the user interface.
  • the user interface comprises a producer interface and a consumer interface.
  • the producer interface comprises an asset tool to enable a user to provide to the video generation engine one or more assets from which to generate a video.
  • Assets may comprise any digital content including images, video clips and audio tracks.
  • An asset may be further associated with one or more labels (textual information) which may be user-specified descriptions of the particular asset, or text that complements the asset in some other way.
  • Assets may be selected for upload by the user, selected from a user account, selected from a library (such as the asset repository 214, for example), or selected from the internet generally, such as from social networks 106 and/or content servers 108, for example.
  • the producer interface may associate each selected asset with the user by a user identifier.
  • Example asset file types for image, audio and video may include: image types: Bitmap (BMP, RLE, DIB), Camera raw (TIF, CRW, NEF, RAF, ORF, MRW, DCR, MOS, RAW, PEF, SRF, DNG, X3F, CR2, ERF), CompuServe GIF (GIF), Encapsulated PostScript (EPS), IFF (IFF, TDI), JPEG (JPG, JPE), PCX (PCX), PICT (PCT), Pixar (PXR), Portable Network Graphics (PNG), Radiance (HDR, RGBE, XYZE), SGI (SGI, BW, RGB), Softimage (PIC), Targa (TGA, VDA, ICB, VST), TIFF (TIF); video clip types: Adobe Flash Player (SWF), Adobe Flash Video (FLV, F4V), Animated GIF (GIF), H.264 (M4V) or a proprietary third party rendering such as from Google™ Earth™, Bing™ Maps™, etc. available through an Application Programming Interface; and audio types: MP3 (MP3, MPEG, MPG, MPA, MPE) and Waveform (WAV).
  • the producer interface may further comprise an editing tool to apply editing rules to, and to enable the user to selectively edit, any of the assets. Editing rules may generally be used for preparing assets for video synthesis.
  • a user may wish to crop, resize or zoom a specific image asset, or edit the text of a textual asset.
  • the editing tool may further comprise a plurality of visual filters to be automatically applied as an editing rule or applied selectively by a user to images and video files.
  • Filters may include, for example, filters to adjust vibrancy, adjust brightness, adjust contrast, adjust red/green/blue levels, adjust tone/feel, apply vintage, toggle black and white, apply a ripple effect, apply a two-tone effect, sharpen, blur, colorize, apply vignette, increase or reduce noise, remove dust, remove scratches and other blemishes, liquify, magnify, apply displacement map, and apply any proprietary filters.
  • Proprietary filters may apply preselected treatments to images, such treatments comprising applying one or more of the foregoing filters simultaneously to provide genre-based effects including, for example, “1920's silent film”, “Grungy Found Footage”, “Horror Movie”, “Super-8”, “Bad TV”, etc.
  • the editing tool may further apply editing rules to automatically, or enable the user to, rotate an image or video asset.
  • an image or video clip may be rotated from portrait to landscape if the image or clip has a length larger than its width, or from landscape to portrait if the width is larger than the length.
  • the intelligent rotating tool may provide rules for cropping, such as requiring a cropped area to be proportional to the frame dimensions (length and width of image proportional to length and width of frame). For example, if the frame is in landscape orientation, the cropped area should have a width greater than a height; if the frame is portrait orientation, the cropped area should have a width less than a height; if the frame is square, the cropped area should have equal width and height.
  • the editing tool may further apply editing rules to automatically, or enable the user to, adjust video assets.
  • These editing rules may, for example, trim video clips having a length that exceeds a maximum length of the frame.
  • Other video-based editing rules comprise cut, slide and cross-dissolve.
  • the available frame lengths may be predetermined. For example, a minimal available frame length may be set at 1 second and the maximum available frame length may be set at the entire video length, or a finite limited value (e.g., 30 seconds).
  • the intelligent video tool may also merge two or more video clips and allow trimming of the merged video clips.
  • the editing tool may further enable the user to apply one or more textual labels to each asset.
  • the producer interface further comprises a storyboarding unit to enable a user to organize assets and select one or more visual effect preferences to be applied to each asset.
  • the storyboarding unit provides a user-configurable timeline that corresponds to the timeline of the video to be generated.
  • Each asset may be placed in a frame, which is a group along the timeline.
  • Each frame defines a unit comprising one or more assets to which a particular set of visual effect preferences will be applied.
  • a frame may be considered analogous to a scene of the video.
  • Particular assets may be placed along multiple frames, which are preferably contiguous.
  • a voiceover audio asset may be placed along the entire video length, or the final n frames of the video.
  • the storyboarding unit may enable the user to select a theme for the video, or for one or more frames therein, wherein the theme defines one or more visual effect preferences to be applied to the assets of the video, frame or group of frames, as the case may be.
  • themes are typically selected by an experienced video expert by selecting visual effects that are believed to be aesthetically (or, in the case of audio, aurally) pleasing in a particular application.
  • sales videos may optimally apply a particular set of visual effect preferences in a particular order, while travel-based videos may apply another set and/or another order.
  • the visual effect preferences are generally applied in predefined sequence by applying colours, filters and animations to the assets in order to create full motion video in the render process.
  • the filters and animations may be any of the visual effects stored in the effect repository.
  • Themes may be predefined collections of visual effect preferences to be applied to particular frames to be generated based on the assets. For example, a first set of particular visual effects may be appropriate for a video that is a personal reflection, while a second set of visual effects may be appropriate for a sales pitch.
  • Themes may further specify a predetermined output video length or range of lengths by specifying a number of frames and frame lengths to make up the video. Alternatively, the length of an output video may be determined by the length required for assets.
  • Visual effect preferences may be a preconfigured group of effects selected from the effect repository.
  • the effects may comprise color schemes, text fonts and motions (transitions).
  • Example videos that comprise each visual effect preference and theme may be provided by the producer interface as an example for selection by the user.
  • effects for image may include crop, resize, rotate, zoom and application of one or more image filters described above.
  • Effects for video may comprise trim, loop, application of one or more image filters and application of animations.
  • Animations may comprise image compositions, text compositions, effects compositions, transition layers and colour pickers, for example.
  • Effects for audio may comprise mix audio, adjust audio levels (volume, tone, equalizer), trim audio (such as to fit a timeline), loop audio and fade in/out.
  • Effects for labels may comprise character fade on/off, word fade on/off, slow fade on/off, typewriter effect with characters on/off, character shuffle (randomly, for example), blur characters or words, evaporate text, foggy text, algorithm loop, screen roll, composite words over images and justification (left, right, center, top, bottom, justified).
  • visual effect preferences may comprise sound layers and transitions.
  • a transition may be applied to each asset and define how a frame that is based on an asset will appear and disappear in the course of the generated video. Transitions may comprise pan left, right, up, down, diagonal, arc or another shape or style; wipe in/out; scale in/out; fade in/out; zoom in/out; slide in/out; dissolve in/out; wipe left, right, up, down, diagonal, arc or another shape or style; or apply the next asset as the transition.
  • Sound layers may comprise mixing a plurality of audio assets that have been allocated to the frame, including auto-mixing comprising adjusting audio levels of each asset to normalize them, or to emphasize one or more of them (e.g., to emphasize a voiceover relative to background music).
  • Further effects may include adaptive effects that apply visual effects to one asset based on another asset. For example, the pacing of a video in a frame may be increased when paired with a fast-paced sound asset in that frame, or may be slowed for a slow-paced asset.
  • the upload optimizer may perform a file type verification algorithm which verifies that each asset being uploaded is one of the file types supported by the video generation engine.
  • the upload optimizer enables resource optimization for any assets that are provided to the video generation engine from any source other than the asset library. For example, if a user wishes to upload an asset or provide an asset from a content server, the asset must be electronically transmitted to the video generation engine. Typically, transmission of a plurality of multimedia assets is resource intensive.
  • the upload optimizer may segment assets to be uploaded in parallel and then re-sequenced in the video generation engine.
  • An example of a suitable upload optimizer is the Apache™ FileUpload™ tool.
  • the tokenizer may tokenize each asset.
  • the tokenizer generates a token by applying a tokenization algorithm to the asset.
  • a tokenization algorithm may be applied by the tokenizer to generate a unique semi-random identifier for the asset.
  • the token may be used for allocating a storage address for the asset in the asset library.
  • the tokenization algorithm may comprise the name of an asset, the asset type (image, text, audio, video), user identifier and access channel (e.g., desktop web interface versus mobile interface).
  • the name of the asset may be generated by the tokenizer or supplied by the user or the client terminal, e.g., derived from the file name, metadata for the file, a timestamp, etc.
  • the token may be provided in the form of: channel$type$category$userId$uuid-suffix$isThumbnail, where: channel represents the access channel; type represents the type of the asset, including image, audio, video, text, other; category; UUID-suffix represents the semi-random identifier; and isThumbnail may be used to represent image assets with a reduced size version thereof, where 0 means original image and 1 means thumbnail.
  • Each asset may be stored in the asset library in a location corresponding to its token.
  • the tokens may then be used for handling assets throughout the video generation process.
  • the tokenization scheme applied is one that is operable to maintain a link between each particular user and that user's assets.
  • Once assets have been provided and stored, the profiler generates a video profile by compiling assets and visual effect preferences.
  • the video profile comprises frame characteristics, setting out assets and visual effect preferences to be applied per frame or group of frames.
  • the video profile may comprise a plurality of sequentially indexed video meta-descriptors.
  • Each meta-descriptor may define the asset token and visual effect preferences for a respective video frame or group of frames.
  • the video meta descriptor may further define the maximum number of frames, maximum number of textual blocks per frame, and special keyframes.
  • the synthesizer obtains assets, visual effect preferences and themes as set out in the video meta-descriptor and corresponding annotations and performs a rendering process to generate a video.
  • Annotations may include metadata, such as a camera identifier that captures a particular image, date/time, size, geolocation, file size, color schema, etc.
  • the synthesizer applies a set of synthesis rules to generate the video.
  • the synthesis rules may comprise determining keyframes, applying image filtering and animation, generating a draft video, applying encoding and streaming options, and recording metadata.
  • the synthesizer may provide synthesis of all frames or select frames. The latter case may be beneficial if the user wishes to view rendered video of only one frame, for example, while still applying assets to other frames using the storyboarding unit.
  • Keyframes may comprise an opening frame, closing frame, and one or more frames that correspond to calls to action.
  • the opening frame may be a title frame.
  • An initial action frame may be the second frame of the video.
  • the remaining action frames may be distributed throughout the video as determined by the storyboard unit.
  • the closing frame may be a credit frame, displaying credits associated with generation of the video (e.g., author, etc.).
  • the keyframes may be overlaid with annotations, including textual and background color treatments. Auto-resizing may also be applied. For example, auto-resizing to 80% of the width of the frame may be applied to provide aesthetically appealing video.
  • Image filtering and animation may be applied to the video to modify the video based on the selected visual effect preferences and themes, including zooming, panning, tiling, rotating, 3D treatment, and motion graphics treatment.
  • the specific visual effect preference and themes may vary. These visual effects and themes may be predefined in visual effect preferences.
  • a draft video comprising reduced frame rate, reduced resolution, or compression, may be generated for enabling the user to preview the video prior to committing resources for full frame rate and high resolution video.
  • encoding and streaming options may be applied to the video output.
  • the user may specify channels on which to disseminate the video, for example social media, mobile application, video website, and the synthesizer may determine corresponding file formats to be generated (e.g., mp4, flv, etc.).
  • the synthesizer may select the most suitable format for each targeted channel. Encoding may include placing the moov atom at the beginning of the video file to support pseudo-streaming.
  • the video generation engine may apply a resource optimization process to enable the synthesizer to optimize use of computing resources during synthesis for all users having requested video synthesis.
  • the synthesizer may be operable to monitor in-process resources, idle resources, and load of in-process resources.
  • the synthesizer may allocate computing resources based on the number of processors and the number of nodes as a total consumable pool, taking into consideration the then-current processor load for each processor as well as in-process and idle resources.
  • a set of dedicated render resources can be allocated to produce draft or preview versions of the videos for fast turnaround, while other resources can be reserved for final production of the videos and HD videos, which may take much longer and be more CPU intensive.
  • Video meta descriptors can indicate the quality of video output and types to allocate the render resources.
  • pre-compiled and pre-cached assets for animation and motion graphics effects can be used to reduce use of resources.
  • Video render requests utilizing the same visual effects and animation sequence may additionally be pooled into the same set of resources.
  • historical resource consumption may be analyzed and allocation of resources can be adjusted to direct requests and tasks.
  • Resource optimization may further comprise monitoring the health of the synthesis tasks and render status and re-spawning the required resources in a timely manner to complete the tasks pursuant to a preconfigured service level. For example, resource optimization may monitor pending requests and resulting outputs, and correspondingly monitor various conditions during the render that result in long wait queuing, failure of corrupted asset files, and unexpected errors due to the physical resource constraints such as overflow of the render cache.
  • the synthesizer may comprise a monitor thread constantly monitoring all render resources, for example with a heartbeat.
  • the synthesizer may store video profiles in a monitor folder and allocate a status of "waiting" to each video profile, to indicate that it is not yet ready for synthesis.
  • the monitor folder may comprise all queued video profiles.
  • the monitor folder may apply a first-in-first-out (FIFO) queue, or may dynamically prioritize video profiles to be released from the monitor folder, for example based on least expected resource requirement (or most, or by prioritizing based on user, for example users that have paid a subscription or synthesis fee, or users that are frequent users of the system, or by another prioritization scheme).
  • FIFO first-in-first-out
  • the synthesizer may determine that its respective render resource is delayed or may have failed. In such a case, the synthesizer may stop and restart (re-spawn) the resource. If the failure recurs, the video profiles in queue for that resource may be moved to queues for other resources that are believed to be operating appropriately.
  • the synthesizer may queue pending tasks and render requests. As resources become idle, pending tasks can be released based on availability and forecasted available resources. Resources that may become available may be forecast based on historical traffic and pre-scheduled special events. The forecast may be used to reconfigure the render resources to submit new requests and tasks. Once sufficient render resources are available, and provided a "ready" video profile is present, the synthesizer allocates resources to that profile, locates and loads all assets, visual effect preferences and themes in the profile, and synthesizes the video. Finally, the synthesizer may associate a plurality of metadata with the video, comprising location, category, length, type, channel/community, user identifier, etc.
  • the video generation engine may constantly monitor the progress of video rendering and determine whether the video has been successfully created.
  • the video may not successfully render if, for example, an asset was of an unsupported file format, was corrupt, did not upload successfully, etc.
  • the render program may assign successful videos to a processed folder, and mark error profiles as being in error and store such profiles in an error folder.
  • the generated video(s) may be stored to the video repository. Once video has been synthesized for each channel, the distributor may disseminate the video to desired channels. Distribution may comprise a batch upload, sequential upload or resource-allocated upload process, such as a scheduled upload to push the content across channels, in which scheduling may comprise near-immediate or scheduled dissemination to the channel, or dissemination to the channel at a time when a large number of users is likely to be online based on analytics.
  • the consumer interface enables a user to view generated video.
  • the consumer interface may be provided as a website, smartphone application or other proprietary software interface.
  • the consumer interface may enable community-based social interaction for generated video, and may further enable users to: define visual effect preferences (templates); provide assets (image, video and audio libraries, etc.); display videos created for the communities; explore all or categorized videos and community-specific videos; tag and search videos based on hashtag, categories, and location; comment on and review comments for videos; share videos to various social media networks based on the user's profile settings for each social network; share videos to centralized social media network channels based on the user's consent; recommend videos for viewing and sharing that are similar in terms of using the same motion graphics libraries, common categories, and hashtags, or that are from the same users; and flag videos for abuse.

Abstract

A system and method for generating and disseminating a video over the cloud. The system includes at least a user interface, video profiler and synthesizer. The user interface comprises a storyboarding unit that enables a user to prepare a timeline of one or more keyframes comprising one or more assets and one or more preconfigured visual effect preferences. The video profiler generates a video profile based on applying the visual effect preferences to the one or more assets in accordance with the timeline. The synthesizer is operable to generate the video using the video profile.

Description

SYSTEM AND METHOD FOR GENERATING AND DISSEMINATING DIGITAL VIDEO CROSS REFERENCE
[0001] The present application claims priority to United States provisional patent application no. 61/657,509 filed June 8, 2012, which is incorporated herein by reference.
TECHNICAL FIELD
[0002] The following relates generally to generating and disseminating digital video.
BACKGROUND
[0003] Social networking has increased the amount of multimedia content shared on the internet. Image sharing and video sharing websites receive millions of visits monthly as users increasingly wish to consume content relevant to them. Users also now increasingly share video captured using mobile devices.
[0004] Most content uploaded to social networks by typical users lacks the production quality of content that could be reasonably produced by a sophisticated content provider. The quality of video content, in particular, is starkly different between that uploaded by the typical user and that uploaded by a sophisticated content provider. Usually, the typical user would upload a video as captured on a handheld video camera or mobile phone, without any post production. Most such video would be considered of low quality. Alternatively, if the user wanted to share higher quality video, the user might first upload the video to her desktop computer and perform cumbersome and time consuming post processing using third party post production software, which likely requires significant expertise and available computing resources.
[0005] Furthermore, it is a challenge to provide a resource enabling the output of a plurality of generated video clips for a great number of users, which could demand the resource at any moment. Generation of video can be resource intensive and, therefore, providing a relatively high quality of service to a large number of video generation users and tasks is a challenge.
SUMMARY
[0006] In one aspect, a video generation engine for generating videos for a plurality of users is provided, the video generation engine characterized by: (a) a profiler enabling each of said plurality of users to generate a plurality of video profiles each for a respective video, each said video profile specifying a plurality of assets to be included in said video; and (b) a synthesizer operable to allocate video synthesis resources among said video profiles to generate said videos. [0007] In another aspect, a method for generating videos for a plurality of users is provided, the method characterized by: (a) enabling each of said plurality of users to generate a plurality of video profiles each for a respective video, each said video profile specifying a plurality of assets to be included in said video; and (b) allocating video synthesis resources, comprising one or more processors, among said video profiles to generate said videos.
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] Embodiments will now be described by way of example only with reference to the appended drawings wherein:
[0009] Fig. 1 is a networked system comprising a video generation engine;
[0010] Fig. 2 is a more detailed example of a video generation engine; and
[0011] Fig. 3 is a method for generating and disseminating digital video.
DETAILED DESCRIPTION OF THE DRAWINGS
[0012] Embodiments will now be described with reference to the figures. It will be appreciated that for simplicity and clarity of illustration, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein may be practiced without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to obscure the embodiments described herein. Also, the description is not to be considered as limiting the scope of the embodiments described herein.
[0013] It will also be appreciated that any module, engine, component, server, computer, terminal or device exemplified herein that executes instructions may include or otherwise have access to computer readable media such as storage media, computer storage media, or data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Computer storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by an application, module, or both. Any such computer storage media may be part of the device or accessible or connectable thereto. Any application or module herein described may be implemented using computer readable/executable instructions that may be stored or otherwise held by such computer readable media.
[0014] A system and method are provided for remotely generating and causing the dissemination of digital video. The system and method provide a storyboarding unit enabling a user to apply visual or aural effect preferences (hereinafter collectively referred to as "visual effect preferences" though aural effect preferences are applied to audio assets) to a plurality of provided assets. The assets are provided to a video generation engine that allocates resources among a plurality of generation tasks for a plurality of users to generate the videos and cause the dissemination of those videos. Users can demand video generation from the video generation engine which can operate in the cloud for a large number of concurrent users.
[0015] Importantly, the system and method described herein may be equally utilized by desktop and mobile users, and enable both of these types of users to generate similar quality digital video. The system and method enable the generation and syndication of videos synthesized from still images, video clips, textual information and audio tracks, which can be distributed across the social web and consumed by the public.
[0016] The system comprises a video generation engine which may be operable to accept and store tokenized multimedia content (assets) obtained from a plurality of users by respective user interfaces, coordinate and allocate resources for video synthesis, orchestrate video synthesis by applying animation effects to the assets intelligently and distribute encoded videos with index preferences in real-time or near real-time.
[0017] The video generation engine generates a digital video which may comprise placement of assets comprising text, images, audio, video clips and typography; transition of assets; and entry and exit of assets in sequenced frames. Each asset may be allocated a frame, such that frame length may vary, though it may be constant.
[0018] Referring now to Fig. 1, a video generation engine 100 is linked to a network 102, such as the internet. The network may be in communication with a plurality of computers, referred to herein as the cloud 104. Among the cloud 104 may be one or more servers providing one or more social networks 106. Also among the cloud 104 may be one or more servers and/or terminals providing content servers 108 that provide third party publicly accessible content. [0019] One or more users are also connected to the network 102 by a client terminal 110. The client terminal may be any of a desktop computer, server, laptop, mobile phone, smart phone, tablet, etc. Preferably, the client terminal comprises a web browser and/or is operable to execute third party applications. Alternatively, the client terminal may be operable to exchange messages by any messaging medium including SMS, MMS, HTTP, FTP, etc. As seen in Fig. 1, a large number of users could be connected to the network 102 at any given time through respective client terminals 110.
[0020] Referring now to Fig. 2, the video generation engine 100 is shown in more detail. The video generation engine may comprise a user interface 202, upload optimizer 204, tokenizer 206, profiler 208, synthesizer 210, distributor 212, asset repository 214, effect repository 216 and video repository 218. The video generation engine comprises at least one processor 220 and memory 222 operable to provide the functionality described herein. Preferably, a plurality of processors and/or memories are dedicated to the operation of the synthesizer and another plurality of processors and/or memory are dedicated to the operation of the distributor. It will be understood that the video generation engine may comprise a plurality of distributed computers. These distributed computers may be linked to one another by network link (e.g., by the cloud) and may be subject to resource coordination by a single one of the distributed computers or on a distributed or hierarchical basis.
[0021] Interaction between users and the video generation engine will now be described with reference generally to a single user, however it will be appreciated that the following applies to interaction between multiple users and the video generation engine in an intelligent manner wherein the video generation engine allocates resources among the users and maintains association between each user and the assets, data and resources for that user.
[0022] Referring now to Fig. 3, a user accesses the video generation engine via the user interface. The user interface comprises a producer interface and a consumer interface. The producer interface comprises an asset tool to enable a user to provide to the video generation engine one or more assets from which to generate a video. Assets may comprise any digital content including images, video clips and audio tracks. An asset may be further associated with one or more labels (textual information) which may be user-specified descriptions of the particular asset, or text that complements the asset in some other way.
[0023] Assets may be selected for upload by the user, selected from a user account, selected from a library (such as the asset repository 214, for example), or selected from the internet generally, such as from social networks 106 and/or content servers 108, for example. The producer interface may associate each selected asset with the user by a user identifier.
[0024] Example asset file types for image, audio and video may include: image types: Bitmap (BMP, RLE, DIB), Camera raw (TIF, CRW, NEF, RAF, ORF, MRW, DCR, MOS, RAW, PEF, SRF, DNG, X3F, CR2, ERF), CompuServe GIF (GIF), Encapsulated PostScript (EPS), IFF (IFF, TDI), JPEG (JPG, JPE), PCX (PCX), PICT (PCT), Pixar (PXR), Portable Network Graphics (PNG), Radiance (HDR, RGBE, XYZE), SGI (SGI, BW, RGB), Softimage (PIC), Targa (TGA, VDA, ICB, VST), TIFF (TIF); video clip types: Adobe Flash Player (SWF), Adobe Flash Video (FLV, F4V), Animated GIF (GIF), H.264 (M4V) or a proprietary third party rendering such as from Google™ Earth™, Bing™ Maps™, etc. available through an Application Programming Interface; and audio types: MP3 (MP3, MPEG, MPG, MPA, MPE) and Waveform (WAV).
[0025] The producer interface may further comprise an editing tool to apply editing rules to, and to enable the user to selectively edit, any of the assets. Editing rules may generally be used for preparing assets for video synthesis.
[0026] For example, a user may wish to crop, resize or zoom a specific image asset, or edit the text of a textual asset. The editing tool may further comprise a plurality of visual filters to be automatically applied as an editing rule or applied selectively by a user to images and video files. Filters may include, for example, filters to adjust vibrancy, adjust brightness, adjust contrast, adjust red/green/blue levels, adjust tone/feel, apply vintage, toggle black and white, apply a ripple effect, apply a two-tone effect, sharpen, blur, colorize, apply vignette, increase or reduce noise, remove dust, remove scratches and other blemishes, liquify, magnify, apply displacement map, and apply any proprietary filters. Proprietary filters may apply preselected treatments to images, such treatments comprising applying one or more of the foregoing filters simultaneously to provide genre-based effects including, for example, "1920's silent film", "Grungy Found Footage", "Horror Movie", "Super-8", "Bad TV", etc.
[0027] The editing tool may further apply editing rules to automatically, or enable the user to, rotate an image or video asset. For example, an image or video clip may be rotated from portrait to landscape if the image or clip has a length larger than its width, or from landscape to portrait if the width is larger than the length. The intelligent rotating tool may provide rules for cropping, such as requiring a cropped area to be proportional to the frame dimensions (length and width of image proportional to length and width of frame). For example, if the frame is in landscape orientation, the cropped area should have a width greater than a height; if the frame is portrait orientation, the cropped area should have a width less than a height; if the frame is square, the cropped area should have equal width and height.
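The orientation and cropping rules described in paragraph [0027] can be illustrated with a short sketch. The function names and the use of width/height comparisons below are illustrative assumptions, not language taken from the patent.

```python
def needs_rotation(asset_w: int, asset_h: int, frame_w: int, frame_h: int) -> bool:
    """Rotate when the asset's orientation disagrees with the frame's orientation."""
    asset_portrait = asset_h > asset_w
    frame_portrait = frame_h > frame_w
    return asset_portrait != frame_portrait

def crop_box(asset_w: int, asset_h: int, frame_w: int, frame_h: int):
    """Return (w, h) of the largest crop proportional to the frame dimensions."""
    frame_ratio = frame_w / frame_h
    if asset_w / asset_h > frame_ratio:
        # Asset is relatively wider than the frame: limit the width.
        h = asset_h
        w = int(round(h * frame_ratio))
    else:
        # Asset is relatively taller than the frame: limit the height.
        w = asset_w
        h = int(round(w / frame_ratio))
    return w, h

# Example: a 1080x1920 portrait photo destined for a 1280x720 landscape frame.
print(needs_rotation(1080, 1920, 1280, 720))  # True
print(crop_box(1920, 1080, 1280, 720))        # (1920, 1080), already 16:9
```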
[0028] The editing tool may further apply editing rules to automatically, or enable the user to, adjust video assets. These editing rules may, for example, trim video clips having a length that exceeds a maximum length of the frame. Other video-based editing rules comprise cut, slide and cross-dissolve. The available frame lengths may be predetermined. For example, a minimal available frame length may be set at 1 second and the maximum available frame length may be set at the entire video length, or a finite limited value (e.g., 30 seconds). The intelligent video tool may also merge two or more video clips and allow trimming of the merged video clips.
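A minimal sketch of the clip-length editing rule, assuming the 1-second minimum and 30-second cap mentioned in paragraph [0028]; the helper name and clamping behaviour are illustrative assumptions.

```python
MIN_FRAME_SECONDS = 1.0
MAX_FRAME_SECONDS = 30.0   # or the full video length, per paragraph [0028]

def trim_bounds(clip_seconds: float, frame_seconds: float):
    """Return (start, end) in seconds for a clip trimmed to fit its frame."""
    allowed = min(frame_seconds, MAX_FRAME_SECONDS)
    allowed = max(allowed, MIN_FRAME_SECONDS)
    end = min(clip_seconds, allowed)
    return 0.0, end

print(trim_bounds(clip_seconds=42.5, frame_seconds=10.0))  # (0.0, 10.0)
```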
[0029] The editing tool may further enable the user to apply one or more textual labels to each asset.
[0030] The producer interface further comprises a storyboarding unit to enable a user to organize assets and select one or more visual effect preferences to be applied to each asset. The storyboarding unit provides a user-configurable timeline that corresponds to the timeline of the video to be generated. Each asset may be placed in a frame, which is a group along the timeline. Each frame defines a unit comprising one or more assets to which a particular set of visual effect preferences will be applied. For the purposes of comprehension, a frame may be considered analogous to a scene of the video. Particular assets may be placed along multiple frames, which are preferably contiguous. For example, a voiceover audio asset may be placed along the entire video length, or the final n frames of the video.
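As a rough illustration of the storyboard model described in paragraph [0030], the following sketch represents a timeline of frames, each grouping assets and the visual effect preferences applied to them, and shows an asset (such as a voiceover) placed across a contiguous run of frames. All field names are assumptions made for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Frame:
    """One 'scene' of the storyboard: assets plus the effects applied to them."""
    index: int
    length_seconds: float
    asset_tokens: List[str] = field(default_factory=list)
    visual_effects: List[str] = field(default_factory=list)

@dataclass
class Storyboard:
    frames: List[Frame] = field(default_factory=list)

    def place_asset(self, token: str, first_frame: int, last_frame: int) -> None:
        """Place an asset (e.g., a voiceover) across a contiguous run of frames."""
        for frame in self.frames[first_frame:last_frame + 1]:
            frame.asset_tokens.append(token)

story = Storyboard(frames=[Frame(i, 5.0) for i in range(4)])
story.place_asset("web$audio$music$42$ab12cd34ef56$0", first_frame=0, last_frame=3)
story.frames[0].visual_effects.append("fade-in")
```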
[0031] The storyboarding unit may enable the user to select a theme for the video, or for one or more frames therein, wherein the theme defines one or more visual effect preferences to be applied to the assets of the video, frame or group of frames, as the case may be. Generally speaking, themes are typically selected by an experienced video expert by selecting visual effects that are believed to be aesthetically (or, in the case of audio, aurally) pleasing in a particular application. For example, sales videos may optimally apply a particular set of visual effect preferences in a particular order, while travel-based videos may apply another set and/or another order. The visual effect preferences are generally applied in predefined sequence by applying colours, filters and animations to the assets in order to create full motion video in the render process. The filters and animations may be any of the visual effects stored in the effect repository. [0032] Themes may be predefined collections of visual effect preferences to be applied to particular frames to be generated based on the assets. For example, a first set of particular visual effects may be appropriate for a video that is a personal reflection, while a second set of visual effects may be appropriate for a sales pitch. Themes may further specify a predetermined output video length or range of lengths by specifying a number of frames and frame lengths to make up the video. Alternatively, the length of an output video may be determined by the length required for assets.
[0033] Visual effect preferences may be a preconfigured group of effects selected from the effect repository. The effects may comprise color schemes, text fonts and motions (transitions). Example videos that comprise each visual effect preference and theme may be provided by the producer interface as an example for selection by the user.
[0034] Examples of various effects available in the effect repository may include effects for image, video, audio and labels. Effects for images may comprise crop, resize, rotate, zoom and application of one or more image filters described above.
[0035] Effects for video may comprise trim, loop, application of one or more image filters and application of animations. Animations may comprise image compositions, text compositions, effects compositions, transition layers and colour pickers, for example.
[0036] Effects for audio may comprise mix audio, adjust audio levels (volume, tone, equalizer), trim audio (such as to fit a timeline), loop audio and fade in/out.
[0037] Effects for labels may comprise character fade on/off, word fade on/off, slow fade on/off, typewriter effect with characters on/off, character shuffle (randomly, for example), blur characters or words, evaporate text, foggy text, algorithm loop, screen roll, composite words over images and justification (left, right, center, top, bottom, justified).
[0038] Additionally, visual effect preferences may comprise sound layers and transitions. A transition may be applied to each asset and define how a frame that is based on an asset will appear and disappear in the course of the generated video. Transitions may comprise pan left, right, up, down, diagonal, arc or another shape or style; wipe in/out; scale in/out; fade in/out; zoom in/out; slide in/out; dissolve in/out; wipe left, right, up, down, diagonal, arc or another shape or style; or apply the next asset as the transition. Sound layers may comprise mixing a plurality of audio assets that have been allocated to the frame, including auto-mixing comprising adjusting audio levels of each asset to normalize them, or to emphasize one or more of them (e.g., to emphasize a voiceover relative to background music). [0039] Further effects may include adaptive effects that apply visual effects to one asset based on another asset. For example, the pacing of a video in a frame may be increased when paired with a fast-paced sound asset in that frame, or may be slowed for a slow-paced asset.
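As an illustration of the auto-mixing idea in paragraph [0038], one simple policy is to normalize each audio layer toward a target level while keeping the voiceover a few decibels louder than background music; the target levels, headroom value and field names below are assumptions, not values given in the patent.

```python
TARGET_DBFS = -16.0            # assumed loudness target
VOICEOVER_HEADROOM_DB = 6.0    # assumed emphasis of voice over music

def auto_mix(layers):
    """layers: list of dicts like {'name': 'voiceover', 'measured_dbfs': -23.0, 'role': 'voice'}.
    Returns the gain in dB to apply to each layer."""
    gains = {}
    for layer in layers:
        target = TARGET_DBFS if layer["role"] == "voice" else TARGET_DBFS - VOICEOVER_HEADROOM_DB
        gains[layer["name"]] = target - layer["measured_dbfs"]
    return gains

print(auto_mix([
    {"name": "voiceover", "measured_dbfs": -23.0, "role": "voice"},
    {"name": "music",     "measured_dbfs": -14.0, "role": "music"},
]))  # {'voiceover': 7.0, 'music': -8.0}
```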
[0040] The upload optimizer may perform a file type verification algorithm which verifies that each asset being uploaded is one of the file types supported by the video generation engine.
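A minimal sketch of the file type verification in paragraph [0040], checking an uploaded file's extension against an allow-list drawn from the supported types listed in paragraph [0024]; a production implementation would likely also inspect file contents, and the exact lists here are abbreviated assumptions.

```python
SUPPORTED_EXTENSIONS = {
    "image": {"bmp", "gif", "jpg", "jpe", "png", "tif", "tga", "pcx"},
    "video": {"swf", "flv", "f4v", "gif", "m4v"},
    "audio": {"mp3", "mpeg", "mpg", "mpa", "mpe", "wav"},
}

def verify_file_type(filename: str) -> str:
    """Return the asset type for a supported file, or raise if unsupported."""
    ext = filename.rsplit(".", 1)[-1].lower()
    for asset_type, extensions in SUPPORTED_EXTENSIONS.items():
        if ext in extensions:
            return asset_type
    raise ValueError(f"Unsupported asset file type: {ext}")

print(verify_file_type("holiday.m4v"))  # "video"
```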
[0041] The upload optimizer enables resource optimization for any assets that are provided to the video generation engine from any source other than the asset library. For example, if a user wishes to upload an asset or provide an asset from a content server, the asset must be electronically transmitted to the video generation engine. Typically, transmission of a plurality of multimedia assets is resource intensive.
[0042] The upload optimizer may segment assets to be uploaded in parallel and then re-sequenced in the video generation engine. An example of a suitable upload optimizer is the Apache™ FileUpload™ tool.
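The segmentation step in paragraph [0042] might look roughly like the following: the file is split into fixed-size chunks, the chunks are uploaded concurrently, and the receiving side re-sequences them by index. The chunk size and helper names are assumptions; the patent only names the Apache FileUpload tool as one suitable option.

```python
import concurrent.futures

CHUNK_SIZE = 1 << 20  # 1 MiB per segment (illustrative choice)

def split_into_chunks(data: bytes, size: int = CHUNK_SIZE):
    return [(i, data[off:off + size]) for i, off in enumerate(range(0, len(data), size))]

def upload_chunk(indexed_chunk):
    index, chunk = indexed_chunk
    # A real implementation would POST the chunk to the video generation engine here.
    return index, chunk

def upload_in_parallel(data: bytes) -> bytes:
    """Upload chunks concurrently, then re-sequence them by index on the receiving side."""
    chunks = split_into_chunks(data)
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        received = list(pool.map(upload_chunk, chunks))
    received.sort(key=lambda pair: pair[0])          # re-sequencing step
    return b"".join(chunk for _, chunk in received)

assert upload_in_parallel(b"x" * (3 * CHUNK_SIZE + 10)) == b"x" * (3 * CHUNK_SIZE + 10)
```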
[0043] Once uploaded, the tokenizer may tokenize each asset. In one example, the tokenizer generates a token by applying a tokenization algorithm to the asset. For example, a hash algorithm may be applied by the tokenizer to generate a unique semi-random identifier for the asset. The token may be used for allocating a storage address for the asset in the asset library. The tokenization algorithm may comprise the name of an asset, the asset type (image, text, audio, video), user identifier and access channel (e.g., desktop web interface versus mobile interface).
[0044] The name of the asset may be generated by the tokenizer or supplied by the user or the client terminal, e.g., derived from the file name, metadata for the file, a timestamp, etc. The token may be provided in the form of: channel$type$category$userId$uuid-suffix$isThumbnail, where: channel represents the access channel; type represents the type of the asset, including image, audio, video, text, other; category; UUID-suffix represents the semi-random identifier; and isThumbnail may be used to represent image assets with a reduced size version thereof, where 0 means original image and 1 means thumbnail. Each asset may be stored in the asset library in a location corresponding to its token. The tokens may then be used for handling assets throughout the video generation process. As will be appreciated, the tokenization scheme applied is one that is operable to maintain a link between each particular user and that user's assets. [0045] Once assets have been provided and stored, the profiler generates a video profile by compiling assets and visual effect preferences. The video profile comprises frame characteristics, setting out assets and visual effect preferences to be applied per frame or group of frames.
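A sketch of how a token following the format set out in paragraph [0044] might be produced; the use of uuid4 for the semi-random suffix is an assumption (the patent says only that a hash or similar algorithm generates a unique semi-random identifier).

```python
import uuid

def make_token(channel: str, asset_type: str, category: str,
               user_id: str, is_thumbnail: bool = False) -> str:
    """Build a token of the form channel$type$category$userId$uuid-suffix$isThumbnail."""
    suffix = uuid.uuid4().hex[:12]          # semi-random identifier (assumed mechanism)
    return "$".join([channel, asset_type, category, user_id,
                     suffix, "1" if is_thumbnail else "0"])

token = make_token("web", "image", "travel", "42")
print(token)  # e.g. web$image$travel$42$1a2b3c4d5e6f$0
# The token can then double as a storage location inside the asset repository:
storage_path = "/assets/" + token.replace("$", "/")
```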
[0046] The video profile may comprise a plurality of sequentially indexed video meta-descriptors. Each meta-descriptor may define the asset token and visual effect preferences for a respective video frame or group of frames. The video meta-descriptor may further define the maximum number of frames, maximum number of textual blocks per frame, and special keyframes.
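A video profile with sequentially indexed meta-descriptors, as described in paragraph [0046], could be serialized along the following lines; every key name is an illustrative assumption rather than a format defined by the patent.

```python
video_profile = {
    "user_id": "42",
    "theme": "travel",
    "max_frames": 20,
    "max_text_blocks_per_frame": 3,
    "keyframes": {"opening": 0, "closing": 19, "calls_to_action": [1, 10]},
    "meta_descriptors": [
        {   # descriptor index 0 -> opening (title) frame
            "index": 0,
            "frames": [0],
            "asset_tokens": ["web$text$travel$42$aa11bb22cc33$0"],
            "visual_effects": ["fade-in", "typewriter"],
        },
        {   # descriptor index 1 -> a group of frames sharing one image asset
            "index": 1,
            "frames": [1, 2],
            "asset_tokens": ["web$image$travel$42$dd44ee55ff66$0"],
            "visual_effects": ["pan-left", "vintage"],
        },
    ],
}
```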
[0047] The synthesizer obtains assets, visual effect preferences and themes as set out in the video meta-descriptor and corresponding annotations and performs a rendering process to generate a video. Annotations may include metadata, such as a camera identifier that captures a particular image, date/time, size, geolocation, file size, color schema, etc.
[0048] The synthesizer applies a set of synthesis rules to generate the video. The synthesis rules may comprise determining keyframes, applying image filtering and animation, generating a draft video, applying encoding and streaming options, and recording metadata.
[0049] The synthesizer may provide synthesis of all frames or select frames. The latter case may be beneficial if the user wishes to view rendered video of only one frame, for example, while still applying assets to other frames using the storyboarding unit.
[0050] Keyframes may comprise an opening frame, closing frame, and one or more frames that correspond to calls to action. The opening frame may be a title frame. An initial action frame may be the second frame of the video. The remaining action frames may be distributed throughout the video as determined by the storyboard unit. The closing frame may be a credit frame, displaying credits associated with generation of the video (e.g., author, etc.). The keyframes may be overlaid with annotations, including textual and background color treatments. Auto-resizing may also be applied. For example, auto-resizing to 80% of the width of the frame may be applied to provide aesthetically appealing video.
[0051] Image filtering and animation may be applied to the video to modify the video based on the selected visual effect preferences and themes, including zooming, panning, tiling, rotating, 3D treatment, and motion graphics treatment. The specific visual effect preference and themes may vary. These visual effects and themes may be predefined in visual effect preferences. [0052] A draft video, comprising reduced frame rate, reduced resolution, or compression, may be generated for enabling the user to preview the video prior to committing resources for full frame rate and high resolution video.
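For instance, the draft video in paragraph [0052] could be produced by re-encoding at a lower frame rate and resolution; the ffmpeg invocation below is one possible way to do this and is not a tool prescribed by the patent.

```python
import subprocess

def render_draft(source: str, draft: str) -> None:
    """Render a low-cost preview: half resolution, 12 fps, heavier compression."""
    subprocess.run([
        "ffmpeg", "-y", "-i", source,
        "-vf", "scale=iw/2:-2",   # halve the resolution
        "-r", "12",               # reduced frame rate
        "-crf", "32",             # stronger compression for a smaller draft file
        draft,
    ], check=True)

# render_draft("full_profile_render.mp4", "draft_preview.mp4")
```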
[0053] Once the user has signalled that the draft video is satisfactory, encoding and streaming options may be applied to the video output. The user may specify channels on which to disseminate the video, for example social media, mobile application, video website, and the synthesizer may determine corresponding file formats to be generated (e.g., mp4, flv, etc.).
[0054] To ensure compatibility of the video player across different platforms, ranging from web browsers to mobile devices, the synthesizer may select the most suitable format for each targeted channel. Encoding may include placing the moov atom at the beginning of the video file to support pseudo-streaming.
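Placing the moov atom at the start of an MP4, as described in paragraph [0054], is commonly done with ffmpeg's faststart flag, sketched below; the patent does not name a specific encoding tool, so this is one possible realization.

```python
import subprocess

def encode_for_pseudo_streaming(source: str, output: str) -> None:
    """Encode to MP4 with the moov atom relocated to the front of the file."""
    subprocess.run([
        "ffmpeg", "-y", "-i", source,
        "-c:v", "libx264", "-c:a", "aac",
        "-movflags", "+faststart",   # rewrites the file so the moov atom comes first
        output,
    ], check=True)

# encode_for_pseudo_streaming("final_render.mov", "final_render_web.mp4")
```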
[0055] The video generation engine may apply a resource optimization process to enable the synthesizer to optimize use of computing resources during synthesis for all users having requested video synthesis. Provided the video generation engine makes available to the synthesizer more than one processor and/or pipelining, the synthesizer may be operable to monitor in-process resources, idle resources, and load of in-process resources.
[0056] To accommodate the video synthesis demands from a plurality of users having demanded concurrent or near-concurrent synthesis, the synthesizer may allocate computing resources based on the number of processors and the number of nodes as a total consumable pool, taking into consideration the then-current processor load for each processor as well as in-process and idle resources. A set of dedicated render resources can be allocated to produce draft or preview versions of the videos for fast turnaround, while other resources can be reserved for final production of the videos and HD videos, which may take much longer and be more CPU intensive.
[0057] The synthesizer may allocate the computing resources based on an analysis of the availability timing of each resource, derived from the video meta descriptors and historical performance. The video meta descriptors can indicate the quality and type of video output, which inform how the render resources are allocated.
[0058] In addition, pre-compiled and pre-cached assets for animation and motion graphics effects can be used to reduce resource use. Video render requests using the same visual effects and animation sequence may additionally be pooled onto the same set of resources. Further, historical resource consumption may be analyzed, and the allocation of resources adjusted to direct incoming requests and tasks.
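As a hedged illustration, pre-compiled effects can be memoized and requests sharing an effect grouped together; the identifiers and helper names below are assumptions:

# Sketch: cache pre-compiled motion-graphics effects and pool requests that share an effect.
from collections import defaultdict
from functools import lru_cache

@lru_cache(maxsize=128)
def compiled_effect(effect_id):
    # Placeholder for an expensive pre-compilation step; the result is cached after first use.
    return "compiled:" + effect_id

def pool_requests_by_effect(requests):
    # requests are (profile_id, effect_id) pairs; group profiles by shared effect.
    pools = defaultdict(list)
    for profile_id, effect_id in requests:
        compiled_effect(effect_id)                 # warm or reuse the cached effect
        pools[effect_id].append(profile_id)
    return dict(pools)

print(pool_requests_by_effect([("p1", "zoom"), ("p2", "pan"), ("p3", "zoom")]))
# {'zoom': ['p1', 'p3'], 'pan': ['p2']}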
[0059] Resource optimization may further comprise monitoring the health of the synthesis tasks and render status, and re-spawning the required resources in a timely manner to complete the tasks pursuant to a preconfigured service level. For example, resource optimization may monitor pending requests and resulting outputs, and correspondingly monitor conditions during the render that result in long wait queuing, failures due to corrupted asset files, and unexpected errors arising from physical resource constraints such as overflow of the render cache.
[0060] The synthesizer may comprise a monitor thread that constantly checks all render resources, for example by heartbeat. As video profiles are generated by the plurality of users, the synthesizer may store video profiles in a monitor folder and allocate a status of "waiting" to each video profile, to indicate that it is not yet ready for synthesis. Thus, the monitor folder may comprise all queued video profiles. The monitor folder may apply a first-in-first-out (FIFO) queue, or may dynamically prioritize video profiles to be released from the monitor folder, for example based on the least (or greatest) expected resource requirement, by prioritizing based on user (for example, users that have paid a subscription or synthesis fee, or users that are frequent users of the system), or by another prioritization scheme. Once a video profile is complete, it may be allocated a status of "ready" to indicate that it is ready for synthesis.
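One minimal sketch of such a monitor-folder queue, assuming hypothetical status fields and a priority key that favours paying users and then the lowest expected resource requirement, with first-in-first-out order as the tie-break:

# Sketch of the monitor-folder queue with "waiting"/"ready" statuses (all names hypothetical).
import heapq
import itertools

class MonitorFolder:
    def __init__(self):
        self._heap = []
        self._order = itertools.count()
        self.status = {}

    def add(self, profile_id, expected_cost=0, paid_user=False):
        self.status[profile_id] = "waiting"
        priority = (0 if paid_user else 1, expected_cost, next(self._order))
        heapq.heappush(self._heap, (priority, profile_id))

    def mark_ready(self, profile_id):
        self.status[profile_id] = "ready"

    def release_next(self):
        # Pop the highest-priority profile whose status is "ready"; re-queue the rest.
        deferred, released = [], None
        while self._heap:
            item = heapq.heappop(self._heap)
            if self.status.get(item[1]) == "ready":
                released = item[1]
                break
            deferred.append(item)
        for item in deferred:
            heapq.heappush(self._heap, item)
        return released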
[0061] If any particular video profile has retained a status of "waiting" for longer than a predetermined threshold, the synthesizer may determine that its respective render resource is delayed or may have failed. In such a case, the synthesizer may stop and restart (re-spawn) the resource. If the failure recurs, the video profiles in the queue for that resource may be moved to queues for other resources that are believed to be operating appropriately.
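A minimal sketch of this watchdog behaviour, assuming hypothetical resource records and a fixed waiting threshold:

# Sketch: detect stalled "waiting" profiles, request a re-spawn, and migrate the queue
# to a healthy resource after repeated failures (data structures are assumptions).
import time

WAIT_THRESHOLD_SECONDS = 300

def check_resource(resource, now=None):
    now = now if now is not None else time.time()
    stalled = [p for p in resource["queue"]
               if p["status"] == "waiting" and now - p["queued_at"] > WAIT_THRESHOLD_SECONDS]
    if stalled:
        resource["failures"] += 1
        resource["restart_requested"] = True   # stop and re-spawn this render resource
    return resource

def migrate_if_failing(resource, healthy_resources, max_failures=2):
    if resource["failures"] >= max_failures and healthy_resources:
        healthy_resources[0]["queue"].extend(resource["queue"])  # move pending profiles
        resource["queue"].clear()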
[0062] If the available computing resources are less than the required resources, the synthesizer may queue pending tasks and render requests. As resources become idle, pending tasks can be released based on availability and forecasted available resources. Available resources may be forecast based on historical traffic and pre-scheduled special events, and the forecast may be used to reconfigure the render resources to accept new requests and tasks. Once sufficient render resources are available, and provided a "ready" video profile is present, the synthesizer allocates resources to that profile, locates and loads all assets, visual effect preferences and themes in the profile, and synthesizes the video. Finally, the synthesizer may associate a plurality of metadata with the video, comprising location, category, length, type, channel/community, user identifier, etc.
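As an illustrative sketch only, idle capacity for a coming hour could be forecast from historical hourly load and any pre-scheduled special events; the inputs and their units are assumptions:

# Sketch: forecast idle render capacity for a given hour of day (inputs are assumptions).
def forecast_idle_capacity(total_capacity, hourly_load_history, hour, special_event_load=0):
    # hourly_load_history is a flat list of past hourly loads, index 0 = hour 0 of day 1.
    samples = hourly_load_history[hour::24]
    expected_load = sum(samples) / max(len(samples), 1)
    return max(total_capacity - expected_load - special_event_load, 0)

print(forecast_idle_capacity(100, [40] * 48, hour=10, special_event_load=20))   # 40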
[0063] The video generation engine may constantly monitor the progress of video rendering and determine whether the video has been successfully created. The video may not render successfully if, for example, an asset was of an unsupported file format, was corrupt, or did not upload successfully. The render program may assign successful videos to a processed folder, and mark failed profiles as being in error and store such profiles in an error folder.
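A minimal sketch of routing results to the processed and error folders, assuming placeholder directory names:

# Sketch: move a finished profile to a "processed" folder or an "error" folder.
from pathlib import Path
import shutil

def route_render_result(profile_path, succeeded, root="render"):
    destination = Path(root) / ("processed" if succeeded else "error")
    destination.mkdir(parents=True, exist_ok=True)
    shutil.move(str(profile_path), str(destination / Path(profile_path).name))
    return destination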
[0064] The generated video(s) may be stored to the video repository. Once the video has been synthesized for each channel, the distributor may disseminate the video to the desired channels. Distribution may comprise a batch upload, a sequential upload, or a resource-allocated upload process, such as a scheduled upload that pushes the content across channels; scheduling may comprise near-immediate or scheduled dissemination to the channel, or dissemination at a time when a large number of users is likely to be online based on analytics.
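One hedged sketch of choosing a dissemination time for a channel (near-immediate, explicitly scheduled, or at an analytics-derived peak hour); the analytics source and parameter names are assumptions:

# Sketch: pick a dissemination time for a channel.
from datetime import datetime, timedelta

def dissemination_time(mode, peak_hour=None, scheduled_for=None):
    now = datetime.utcnow()
    if mode == "scheduled" and scheduled_for is not None:
        return scheduled_for
    if mode == "peak" and peak_hour is not None:
        candidate = now.replace(hour=peak_hour, minute=0, second=0, microsecond=0)
        return candidate if candidate > now else candidate + timedelta(days=1)
    return now    # near-immediate dissemination by default

print(dissemination_time("peak", peak_hour=20))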
[0065] The consumer interface enables a user to view generated video. The consumer interface may be provided as a website, a smartphone application, or another proprietary software interface. The consumer interface may enable community-based social interaction around generated video, and may further enable users to: define visual effect preferences (templates); provide assets (image, video and audio libraries, etc.); display videos created for the communities; explore all, categorized, and community-specific videos; tag and search videos based on hashtag, category, and location; comment on videos and review comments for videos; share videos to various social media networks based on the user's profile settings for each social network; share videos to centralized social media network channels based on the user's consent; receive recommendations of videos for viewing and sharing that are similar in terms of using the same motion graphics libraries, common categories, or hashtags, or that are from the same users; and flag abusive videos.
[0066] Although the above has been described with reference to certain specific example embodiments, various modifications thereof will be apparent to those skilled in the art without departing from the scope of the claims appended hereto.

Claims

We claim:
1. A video generation engine for generating videos for a plurality of users characterized by: a) a profiler enabling each of said plurality of users to generate a plurality of video profiles each for a respective video, each said video profile specifying a plurality of assets to be included in said video; and
b) a synthesizer operable to allocate video synthesis resources among said video profiles to generate said videos.
2. The video generation engine of claim 1, characterized in that said video generation engine comprises a plurality of processors, said synthesizer allocating said video synthesis resources based on: (a) a total consumable pool of said processors; and (b) a then-current processor load for said processors.
3. The video generation engine of claim 1, characterized in that said video generation engine synthesizes one of said videos upon said synthesizer determining that sufficient video synthesis resources are available and allocating said video synthesis resources to a corresponding one of said video profiles.
4. The video generation engine of claim 3, further characterized in that said synthesizer obtains said assets corresponding to said one of said video profiles prior to allocating said video synthesis resources.
5. The video generation engine of claim 4, characterized in that said profiler further enables each said user to specify, by said video profile, visual effect preferences and themes to be applied to the video, and that said synthesizer obtains said visual effect preferences and themes corresponding to said one of said video profiles prior to allocating said video synthesis resources.
6. The video generation engine of claim 1, characterized in that said synthesizer is operable to generate preview versions of said videos, synthesis of said preview versions requiring fewer resources than synthesis of said videos.
7. The video generation engine of claim 1, characterized in that said synthesizer allocates said video synthesis resources based in part on historical performance of video synthesis.
8. The video generation engine of claim 1, characterized in that said synthesizer monitors health of synthesis tasks and is operable to respawn select ones of said video synthesis resources if said health is insufficient.
9. The video generation engine of claim 1, characterized in that said synthesizer prioritizes said video profiles to be allocated said video synthesis resources.
10. The video generation engine of claim 9, characterized in that said prioritization is by first-in-first-out.
11. A method for generating videos for a plurality of users characterized by:
a) enabling each of said plurality of users to generate a plurality of video profiles each for a respective video, each said video profile specifying a plurality of assets to be included in said video; and
b) allocating video synthesis resources, comprising one or more processors, among said video profiles to generate said videos.
12. The method of claim 11, characterized in that said synthesis comprises allocating said video synthesis resources based on: (a) a total consumable pool of said processors; and (b) a then-current processor load for said processors.
13. The method of claim 11, characterized in that said one of said videos is allocated resources upon determining that sufficient video synthesis resources are available and allocating said video synthesis resources to a corresponding one of said video profiles.
14. The method of claim 13, further characterized by obtaining said assets corresponding to said one of said video profiles prior to allocating said video synthesis resources.
15. The method of claim 14, further characterized by enabling each said user to specify, by said video profile, visual effect preferences and themes to be applied to the video, and by obtaining said visual effect preferences and themes corresponding to said one of said video profiles prior to allocating said video synthesis resources.
16. The method of claim 11, further characterized by generating a preview version of said videos, synthesis of said preview versions requiring fewer resources than synthesis of said videos.
17. The method of claim 11, characterized in that said allocation is based in part on historical performance of video synthesis.
18. The method of claim 11, further characterized by monitoring health of synthesis tasks and respawning select ones of said video synthesis resources if said health is insufficient.
19. The method of claim 11, further characterized by prioritizing said video profiles to be allocated said video synthesis resources.
20. The method of claim 19, characterized in that said prioritization is by first-in-first-out.
PCT/CA2013/050427 2012-06-08 2013-06-04 System and method for generating and disseminating digital video WO2013181756A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201261657509P 2012-06-08 2012-06-08
US61/657,509 2012-06-08

Publications (1)

Publication Number Publication Date
WO2013181756A1 true WO2013181756A1 (en) 2013-12-12

Family

ID=49711247

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2013/050427 WO2013181756A1 (en) 2012-06-08 2013-06-04 System and method for generating and disseminating digital video

Country Status (1)

Country Link
WO (1) WO2013181756A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5553248A (en) * 1992-10-02 1996-09-03 Compaq Computer Corporation System for awarding the highest priority to a microprocessor releasing a system bus after aborting a locked cycle upon detecting a locked retry signal
US5504670A (en) * 1993-03-31 1996-04-02 Intel Corporation Method and apparatus for allocating resources in a multiprocessor system
US6651185B1 (en) * 1999-12-16 2003-11-18 Hewlett-Packard Development Company, L.P. High availability platform with fast recovery from failure by reducing non-response time-outs
US20020116716A1 (en) * 2001-02-22 2002-08-22 Adi Sideman Online video editor
US20030115244A1 (en) * 2001-12-17 2003-06-19 International Business Machines Corporation Automatic data interpretation and implem entation using performance capacity management framework over many servers
US20100080528A1 (en) * 2008-09-22 2010-04-01 Ed Yen Online video and audio editing

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CASARES ET AL.: "Simplifying Video Editing Using Metadata", Carnegie Mellon University Research Showcase: Human-Computer Interaction Institute, School of Computer Science, 1 January 2002 (2002-01-01) *
JOSH LOWENSOHN, 12 July 2010 (2010-07-12), Retrieved from the Internet <URL:http://howto.cnet.com/8301-11310_39-20010127-285/how-to-use-youtubes-video-editor> *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110557649A (en) * 2019-09-12 2019-12-10 广州华多网络科技有限公司 Live broadcast interaction method, live broadcast system, electronic equipment and storage medium
CN110557649B (en) * 2019-09-12 2021-12-28 广州方硅信息技术有限公司 Live broadcast interaction method, live broadcast system, electronic equipment and storage medium
CN112261314A (en) * 2020-09-24 2021-01-22 北京美摄网络科技有限公司 Video description data generation system, method, storage medium and equipment
US11445248B1 (en) * 2021-04-19 2022-09-13 Synamedia Limited Pooling user interface (UI) engines for cloud UI rendering

Similar Documents

Publication Publication Date Title
US8868465B2 (en) Method and system for publishing media content
US8411758B2 (en) Method and system for online remixing of digital multimedia
US8990693B2 (en) System and method for distributed media personalization
US9460752B2 (en) Multi-source journal content integration systems and methods
US9002175B1 (en) Automated video trailer creation
JP5068440B2 (en) Image-based video generation using stepped images
US8265457B2 (en) Proxy editing and rendering for various delivery outlets
US20100061695A1 (en) Method and apparatus for producing re-customizable multi-media
US20070169158A1 (en) Method and system for creating and applying dynamic media specification creator and applicator
JP2023518388A (en) Video special effects processing method, apparatus, electronic equipment and computer program
US9473756B2 (en) Systems and methods for generation of composite video from multiple asynchronously recorded input streams
US20090103835A1 (en) Method and system for combining edit information with media content
US20140147100A1 (en) Methods and systems of editing and decoding a video file
CN110418191A (en) A kind of generation method and device of short-sighted frequency
WO2013181756A1 (en) System and method for generating and disseminating digital video
US20190141366A1 (en) System and method for insertion of an asset into a source dynamic media
US20170163555A1 (en) Video file buffering method and system
WO2007084870A2 (en) Method and system for recording edits to media content
CN114218409A (en) Front-end animation performance optimization method and device and computer equipment
US20120290437A1 (en) System and Method of Selecting and Acquiring Still Images from Video
CN115022677B (en) Video playing method, device, equipment and medium
KR102445589B1 (en) Systems, methods, and devices for managing segmented media content
WO2024007988A1 (en) Image processing method and apparatus, electronic device, medium, and program product
US11871095B2 (en) Methods and systems for providing dynamically composed personalized media assets
US8707152B2 (en) Presenting images from slow image-event stream

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13799934

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 20/04/2015)

122 Ep: pct application non-entry in european phase

Ref document number: 13799934

Country of ref document: EP

Kind code of ref document: A1