CN104272377A - Motion picture project management system - Google Patents

Motion picture project management system

Info

Publication number
CN104272377A
CN104272377A (application CN201380018690.7A)
Authority
CN
China
Prior art keywords
frame
mask
camera lens
project
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201380018690.7A
Other languages
Chinese (zh)
Other versions
CN104272377B (en)
Inventor
贾里德·桑德鲁
安东尼·洛佩兹
蒂莫西·传奎因
克雷格·切萨雷奥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Legend3D Inc
Original Assignee
Legend3D Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US13/366,899 (US9031383B2)
Application filed by Legend3D Inc
Publication of CN104272377A
Application granted
Publication of CN104272377B
Expired - Fee Related
Anticipated expiration

Classifications

    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B 27/34 Indicating arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 Administration; Management
    • G06Q 10/10 Office automation; Time management
    • G06Q 10/103 Workflow collaboration or project management

Abstract

A motion picture project management system for reviewers, coordinators and artists. Artists use image analysis, image enhancement and computer graphics processing, for example to convert two-dimensional images into three-dimensional images, or otherwise to create or alter motion pictures. The system enables efficient management of motion-picture-related projects, allowing an enterprise to manage assets, control costs, predict budgets and profit margins, reduce archival storage, and provide displays tailored to specific roles to increase worker efficiency.

Description

Motion picture project management system
This application is a continuation-in-part of U.S. utility patent application Serial No. 13/029,862, filed February 17, 2011, which is a continuation-in-part of U.S. utility patent application Serial No. 12/976,970, filed December 22, 2010, which is a continuation-in-part of U.S. utility patent application Serial No. 12/913,614, filed October 27, 2010, which is a continuation-in-part of U.S. utility patent application Serial No. 12/542,498, filed August 17, 2009, which is a continuation-in-part of U.S. utility patent application Serial No. 12/032,969, filed February 18, 2008, issued as U.S. Patent 7,577,312. Application No. 12/032,969 is a continuation of U.S. Patent 7,333,670, filed January 4, 2006, which is a divisional of U.S. Patent 7,181,081, filed June 18, 2003, which is a national-phase entry of PCT application No. PCT/US02/14192, filed May 6, 2003, which claims the benefit of U.S. Provisional Patent Application 60/288,929, filed May 4, 2001. The specifications of these documents are all incorporated herein by reference.
Background of invention
Technical field
One or more embodiments of the invention relate to the field of project management in the motion picture industry, and in particular to reviewers and to the production managers who manage artists. A production manager is also known as a "producer". Artists use image analysis, image enhancement and computer graphics processing, for example to convert two-dimensional images into three-dimensional images associated with a motion picture, or otherwise to create or alter motion pictures. More particularly, but not by way of limitation, one or more embodiments of the invention enable a motion picture project management system configured to manage motion-picture-related projects efficiently: managing assets, controlling costs, forecasting budgets and profit margins, reducing archival storage, and providing displays suited to specific roles to improve worker efficiency.
Background art
Known methods for colorizing black-and-white feature films involve identifying gray-scale regions within a picture, applying a pre-selected color transform or lookup table for the range of gray levels in each region bounded by a masking operation that covers each selected region, and then propagating those masked regions from one frame to many subsequent frames. The key differences between U.S. Patent No. 4,984,072 (System and method for color image enhancement) and U.S. Patent No. 3,705,762 (Method for converting black-and-white films to color films) lie in the way regions of interest (ROIs) are isolated and masked, how that information is transferred to subsequent frames, and how the mask information is modified to accommodate changes in the underlying image data. In the 4,984,072 system, regions are masked by an operator painting a single-bit overlay, which the operator manipulates frame by frame with a digital paintbrush method to match motion. In the 3,705,762 process, each region is outlined or rotoscoped by an operator using vector polygons, which the operator then adjusts frame by frame to animate the masked ROIs. Various masking techniques of this kind are also commonly used in 2D-to-3D film conversion.
In both systems described above, successive frames require the manual application and modification of the pre-selected color-transform lookup tables and regions to compensate for changes in the image data that the operator detects visually. All changes in underlying luminance/gray level and all motion are detected subjectively by the operator, who corrects the masks frame by frame, using an interface device such as a mouse to move or adjust mask shapes to compensate for the detected motion. In every case, the underlying gray level is a passive recipient of the mask carrying the pre-selected color transform, and every modification of the mask is subject to operator detection and amendment. In these prior inventions the mask information contains nothing specific to the underlying luminance/gray level, so automatic positioning and shape correction of masks to follow image features that shift and distort from one frame to the next is impossible.
Existing systems used to convert two-dimensional images to three-dimensional images may also require creating wireframe models that define the 3D shapes of the masked objects in an image. Creating wireframe models is an enormous undertaking in terms of labor. These systems likewise do not exploit the underlying luminance/gray level of objects in the image to automatically position and correct the shapes of masks so as to follow image features that shift and distort from one frame to the next. A great deal of work is therefore needed to manually shape and reshape the masks used to apply depth, or Z-dimension, data to objects. Objects that move from frame to frame thus demand substantial human intervention. In addition, there are no known solutions for enhancing two-dimensional images into three-dimensional images that use a composite background built from multiple images to propagate depth information to the background and to the masked objects in a frame. Such a composite includes data from background objects whether pre-existing or not; for missing data, that is, occluded background regions that the moving objects never uncover, known systems perform gap filling using algorithms that insert image data where none exists, which produces artifacts.
Current methods for converting a film containing computer-generated elements or effects from 2D to 3D generally work only from the final 2D image sequence that makes up the film. This is the current approach for converting any film from 2D data into left- and right-image pairs for stereoscopic viewing. No known current method obtains and uses the metadata associated with the computer-generated elements of the film to be converted. This is so because studios holding older 2D films may not have preserved the intermediate data for the film, that is, the metadata associated with the computer-generated elements: data volumes in the past were so large that studios could retain only the final film data with the rendered computer graphics elements, discarding the metadata. For films whose associated metadata has survived (i.e., intermediate data associated with the computer-generated elements, such as masks, alpha and/or depth information), use of that metadata would greatly accelerate the depth conversion process.
In addition, typical industrial methods for converting a film from 2D to 3D, which must handle the thousands of frames of a film with large amounts of labor or computing power, employ an iterative workflow. The iterative workflow consists of masking the objects in each frame, adding depth, and then rendering the frame into the left and right viewpoints, or left- and right-image pair, that form a stereoscopic image. If there is an error, for example at the edge of a masked object, the typical workflow involves an "iteration": the frame is sent back to the workgroup responsible for masking the object (which may be in a country on the other side of the world with inexpensive unskilled labor), after which the mask is sent to the workgroup responsible for rendering the image (possibly in yet another country), after which the rendered image pair is sent back to the quality assurance group. In such a workflow environment it is not uncommon for complex frames to go through many iterations. This is known as a "throw it over the fence" workflow, since the separate workgroups each work independently to minimize their own current workload rather than considering overall efficiency as a team. With thousands of frames in a film, the time spent iterating on frames containing artifacts can grow large and delay the overall project. Even if the rendering process is only partially repeated, re-rendering or ray tracing all the images of a scene can require a great deal of processing and thus delays of at least hours in magnitude. Eliminating such iterations would yield enormous savings in the end-to-end, or wall-clock, time a conversion project takes, increasing profits and minimizing the labor required to realize the workflow.
Common, simple project management concepts have long been known, but project management in its systematic form, applied to large, complex engineering projects, began in the mid-20th century. Project management generally involves at least planning and managing resources and workers to complete a templated set of activities known as a project. Projects are normally time-driven and are also constrained by scope and budget. Frederick Winslow Taylor, his student Henry Gantt, and Henri Fayol first described project management in a systematic way. Industry and defense programs initially used work breakdown structures (WBS) and Gantt charts, and later developed the critical path method (CPM) and the program evaluation and review technique (PERT) respectively. Project cost estimation followed closely on these developments. Basic project management generally includes initiation, planning, execution, monitoring/control and closing. More sophisticated project management techniques may pursue further objectives, for example ensuring defined, quantitatively managed and optimized management processes as described in the Capability Maturity Model Integration approach.
As described above, an industrial motion picture project typically involves thousands of frames; beyond that, projects of this type may also consume vast amounts of storage, potentially including hundreds of mask layers per frame, and employ hundreds of workers. Projects of this type have so far been managed in rather ad hoc ways in which costs are difficult to predict, and they make minimal use of the financial control feedback, asset management and other best practices by which most successful projects are steered. Furthermore, the project management tools in use are off-the-shelf tools that lack the project management details specific to a unique vertical industry such as motion picture effects and conversion projects. Predictable cost and quality and repeatable project execution have therefore been difficult to achieve in the film industry. For example, existing motion picture projects sometimes require three people to review an edited frame: one person to locate the resource among vast numbers of resources, one person to review the resource, and another to annotate it for feedback and rework. Although standalone tools exist that perform these tasks, they are generally not integrated and are difficult for personnel in different roles to use.
Regardless of these known techniques, no optimized implementation of a project management solution exists that accounts for the unique requirements of the motion picture industry. There is therefore a need for a motion picture project management system.
Summary of the invention
Embodiments of the invention are generally directed at project management related to the production, processing or alteration of motion pictures. A large motion picture project typically employs workers in several roles to process each of the images that make up the motion picture, which may number in the thousands of frames. One or more embodiments of the invention enable a computer and database configured to accept the assignment of tasks to artists, time estimates for artists' tasks, coordinator review of artists' time and performance (also known as "production"), and editorial review of the work product. The system thus allows the artists working on the managed shots, each composed of multiple images, to complete a project successfully within budget, while minimizing the typically enormous storage requirements for motion picture assets, and allows project costs to be predicted for future bids given the quality ratings of the workers used and scheduled.
The tasks involved in a motion picture project generally include project evaluation, project ingestion, task assignment, performing the project work, reviewing the work product, and archiving and shipping the project's work product. One or more embodiments of the invention enable workers of different "roles" to view project tasks in a manner consistent with, and supportive of, their role. This is unknown in the motion picture industry. Roles used in one or more embodiments of the invention include "editor", "asset manager", "visual effects supervisor", "coordinator" or "producer", "artist", "art lead", "stereographer", "compositor", "reviewer" and "production assistant". In a simpler sense, for ease of illustration, three broad categories are involved: production workers, who manage the artists; artists, who perform most of the work on the work product; and editors, who review the work product and provide feedback. Each of these roles may use unique or shared views of the motion picture image frames, and/or of the information related to each image or to the other assets that their role is assigned to work on.
General workflow for the evaluation stage
Generally, the editor and/or asset manager and/or visual effects supervisor roles use a tool that displays the motion picture on a computer display. This tool, for example, lets the various roles involved at this stage break the motion picture down into the scenes or shots to be worked on. One such tool is the commercially available "FRAME".
General workflow for the ingestion stage
Generally, the asset manager inputs into a database the various scene breakdowns and other resources, such as alpha masks, layers of computer-generated elements, or any other resource associated with the scenes in the motion picture. One or more embodiments of the invention may use any type of database. One such tool that can store information related to motion picture assets, including information for project management, is the project management database "TACTIC™", commercially available from Southpaw Technology™. Any database may be used in one or more embodiments of the invention, so long as the motion-picture-specific features are included in the project management database. One or more embodiments of the invention update the "snapshot" and "file" tables in the project management database. The schema of the project management database is described briefly in this section, and in more detail in the embodiments section below.
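To illustrate the "snapshot" and "file" tables mentioned above, the following is a minimal sketch of an asset check-in against a TACTIC-like schema. The table and column names here are assumptions for illustration only, not the actual TACTIC schema:

```python
import sqlite3

# Assumed, simplified TACTIC-like schema: a "snapshot" row per asset
# check-in, with its files recorded in a "file" table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE snapshot (
    id INTEGER PRIMARY KEY,
    shot_code TEXT,      -- which shot/scene this asset belongs to
    asset_type TEXT,     -- e.g. 'alpha_mask', 'cg_layer', 'plate'
    version INTEGER
);
CREATE TABLE file (
    id INTEGER PRIMARY KEY,
    snapshot_id INTEGER REFERENCES snapshot(id),
    path TEXT
);
""")

def check_in(shot_code, asset_type, version, path):
    """Register one asset check-in: a snapshot row plus its file row."""
    cur = conn.execute(
        "INSERT INTO snapshot (shot_code, asset_type, version) VALUES (?, ?, ?)",
        (shot_code, asset_type, version))
    conn.execute("INSERT INTO file (snapshot_id, path) VALUES (?, ?)",
                 (cur.lastrowid, path))

check_in("SC010_SH020", "alpha_mask", 1, "/proj/sc010/sh020/mask_v001.exr")
rows = conn.execute(
    "SELECT s.shot_code, s.asset_type, f.path FROM snapshot s "
    "JOIN file f ON f.snapshot_id = s.id").fetchall()
```

The point of the sketch is only that ingestion writes both an asset record and its file locations, so later stages can locate any resource by shot and type.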
General workflow for the assignment stage
Generally, production workers use an interface coupled to the project management database to assign particular workers to the particular tasks associated with their roles, and to assign workers the images associated with a given shot or scene in the motion picture. One or more embodiments of the invention use the base project management database's digital asset management tables with added fields that extend the base project management functionality, optimizing the project management process for the motion picture industry. One or more embodiments of the invention update the "task" table in the project management database.
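A minimal sketch of the "added fields" idea above: extending a generic task table with motion-picture-specific columns and assigning a shot's frame range to an artist. The column names and values are illustrative assumptions, not the actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A generic, base task table such as an off-the-shelf tool might provide.
conn.execute("CREATE TABLE task (id INTEGER PRIMARY KEY, assignee TEXT, status TEXT)")

# Assumed motion-picture-specific extensions: role, shot and frame range.
for col in ("role TEXT", "shot_code TEXT", "frame_start INTEGER", "frame_end INTEGER"):
    conn.execute(f"ALTER TABLE task ADD COLUMN {col}")

conn.execute(
    "INSERT INTO task (assignee, status, role, shot_code, frame_start, frame_end) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    ("artist_01", "assigned", "artist", "SC010_SH020", 101, 250))
# Frame counts per task are what make per-shot cost prediction possible.
task = conn.execute(
    "SELECT assignee, role, frame_end - frame_start + 1 FROM task").fetchone()
```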
General workflow for the project work stage
Generally, artists, stereographers and compositors perform the bulk of the overall work on the motion picture. These roles typically use a time-clock facility to obtain their tasks, and to set task status and the start and stop times for a task. Typically, artists perform the masking and region design of frames and the initial depth augmentation. Artists commonly use NUKE™, commercially available from THE FOUNDRY™, together with a rendering program that may include, for example, automated mask tracking for mask cleanup. Once the client approves the visual effects and/or depth work in a scene, the compositors use the same tools the artists used, typically along with other commercially available tools, to finish the scene. In one or more embodiments of the invention, the person working on a particular asset is stored, for example, in a custom field in the project management database.
In some workflow schemes, the workers performing region design classify the elements in a scene into, for example, two separate categories. A scene generally includes two or more images in chronological sequence. The two categories are background elements (i.e., static sets and foreground elements) and motion elements (e.g., actors, automobiles, etc.) that move throughout the scene. In embodiments of the invention, these background elements and motion elements are treated separately, in a manner similar to traditional animation. In addition, many films now include computer-generated elements (also known as computer graphics or CG, or computer-generated imagery, CGI), which include objects that do not actually exist, such as robots or spaceships, or that are added to the film as effects, such as dust, smoke, clouds, etc. Computer-generated elements may include background elements or motion elements.
Motion elements: the motion elements are displayed as a series of sequentially tiled frame sets, or thumbnail images arranged with the background elements. Motion elements are masked using the numerous operator interface tools common to paint systems, along with key-frame tools and unique tools such as relative bimodal thresholding, in which masks are selectively applied to the adjacent bright or dark regions split by a cursor brush. After the key frame has been fully designed and masked, the mask information from the key frame is applied to all frames in the display using mask fitting techniques that include:
1. Automatic mask fitting based on luminance and pattern matching, using fast Fourier transforms and a gradient descent algorithm, which references the same masked region of the key frame and then successively references each existing subsequent frame. Because a computer system implementing embodiments of the invention can at least reshape the outline of a mask from frame to frame, a great deal of work traditionally done by hand can be saved. In a 2D-to-3D conversion project, when a human-recognizable object rotates, for example, the masks within the region of interest can be shifted manually and the process can be "tweened", so that the computer system automatically adjusts the sub-masks between key frames from frame to frame, saving additional work.
2. Edge detection as an automatic animation guide for Bezier animation.
3. Edge detection as an automatic animation guide for polygon animation.
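The coarse, translation-only step of the FFT-based mask fitting in item 1 above can be sketched with phase correlation: the shift that best aligns the key frame's masked region with the next frame appears as the peak of the inverse-transformed cross-power spectrum. The gradient-descent shape refinement is not shown, and the function and array sizes are illustrative assumptions:

```python
import numpy as np

def estimate_shift(ref, cur):
    """Estimate the integer (dy, dx) translation of `cur` relative to `ref`
    by FFT phase correlation; the peak of the correlation surface marks
    where the key-frame mask should be repositioned in the new frame."""
    F = np.fft.fft2(ref)
    G = np.fft.fft2(cur)
    R = G * np.conj(F)                      # cross-power spectrum
    R /= np.maximum(np.abs(R), 1e-12)       # keep phase only
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2:                         # wrap into signed range
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

# Demo: a synthetic key-frame region and the same content shifted.
ref = np.random.default_rng(0).random((32, 32))
shift = estimate_shift(ref, np.roll(ref, (3, -5), axis=(0, 1)))
```

In a real mask-fitting pass this coarse shift would seed a per-contour refinement (the gradient descent step named above) rather than being the final answer.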
In one or more embodiments of the invention, computer-generated elements are imported using RGBAZ files, which contain an optional alpha mask and/or depth for the computer-generated elements on a per-pixel or per-sub-pixel basis. One example of such a file format is the EXR file format. Any other file format carrying depth and/or alpha information may be imported within the spirit of the invention. Embodiments of the invention import files of any type associated with computer-generated elements to provide immediate depth values for the parts of an image associated with those elements. In this way, no mask fitting or reshaping from frame to frame is required for computer-generated elements, because per-pixel or per-sub-pixel alpha and depth exist for them, or are otherwise imported or obtained. For complex films with large numbers of computer-generated elements, importing alpha and depth makes the conversion of two-dimensional images into image pairs for right- and left-eye viewing economically feasible for the computer-generated elements. One or more embodiments of the invention allow background elements and motion elements to have depth associated with them, or otherwise set or adjusted, so that all objects other than computer-generated objects receive artistic depth adjustment. In addition, embodiments of the invention allow the depths imported, for example, from RGBAZ files associated with computer-generated objects to be translated, scaled or normalized, so as to maintain the relative integrity of depth for all elements in a frame or frame sequence. Furthermore, any other metadata that exists for the photographed elements of the images making up the film, such as feature mattes or alpha or other masks, can also be imported and used to improve the masks defined for the conversion operation. One file format that may be imported to obtain metadata for the photographed elements in a scene is the RGBA file format. The final depth-enhanced image pairs are created, based on the input images and any computer-generated element metadata, by stacking the different object layers from deepest to nearest, i.e., "layering", applying any alpha or mask of each element, and translating the nearest objects the most horizontally for the left and right images.
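The "layering" step described above, stacking element layers from deepest to nearest and applying each element's alpha, can be sketched as a standard alpha-over composite. The layer data layout here is an assumption for illustration:

```python
import numpy as np

def composite(layers):
    """Composite element layers ordered by depth, farthest first.
    Each layer is a dict with 'rgb' (H,W,3), 'alpha' (H,W) and a scalar
    'depth'; nearer layers are applied later and so cover deeper ones."""
    out = np.zeros_like(layers[0]["rgb"])
    for layer in sorted(layers, key=lambda l: -l["depth"]):  # deepest first
        a = layer["alpha"][..., None]
        out = layer["rgb"] * a + out * (1.0 - a)             # alpha-over
    return out

# Demo: a full-coverage background and a CG element covering one pixel.
bg = {"rgb": np.full((2, 2, 3), 0.2), "alpha": np.ones((2, 2)), "depth": 10.0}
fg_alpha = np.zeros((2, 2))
fg_alpha[0, 0] = 1.0
fg = {"rgb": np.full((2, 2, 3), 0.8), "alpha": fg_alpha, "depth": 2.0}
frame = composite([fg, bg])   # list order does not matter; depth does
```

For a stereo pair, the same stacking would be run twice with each layer horizontally offset in proportion to its nearness, which is the per-eye translation the paragraph describes.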
In another embodiment of the invention, these background elements and motion elements are separately combined into single-frame representations of multiple frames, either as tiled frame sets or as a single-frame composite of all elements (i.e., including motion and background/foreground), which then become a visual reference database for the computer-controlled application of masks within the sequence made up of the numerous frames. Each pixel address within the reference visual database corresponds to a mask/lookup-table address within the digital frame and to the X, Y, Z positions of the subsequent "raw" frames used to create the reference visual database. Masks are applied to subsequent frames based on various differencing image processing methods, such as edge detection combined with pattern recognition and other sub-mask analyses, aided by operator-segmented regions of interest and operator-guided detection of the subsequent regions corresponding to the original regions of interest in the reference object or frame. In this manner, the underlying gray-scale actively determines the location and shape of each mask applied from frame to frame (and the corresponding color lookup for a colorization project, or the depth information for a 2D-to-3D conversion project), in a keying fashion within predetermined and operator-controlled regions of interest.
Camera pan backgrounds and static foreground elements: a series of phase correlation, image fitting and focal length estimation techniques are used to combine the static foreground and background elements contained in the several sequential images of a camera pan, fitting them together to create a composite single frame that represents the image sequence used in its construction. During this construction process, motion elements are removed through operator-adjusted global settings applied to the overlapped successive frames.
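One way to sketch the removal of motion elements during composite construction, assuming the per-frame pan offsets have already been estimated and are integral: align the frames by their offsets and take a per-pixel median, so that transient moving elements drop out of the background. The function and data are illustrative, not the patent's exact method:

```python
import numpy as np

def pan_background(frames, offsets):
    """Build a single background frame from a pan: shift each frame back
    by its pan offset, then take a per-pixel median so that moving
    elements, which occupy different aligned pixels per frame, vanish.
    (Sketch only: integer offsets with wrap-around edges.)"""
    aligned = [np.roll(f, (-dy, -dx), axis=(0, 1))
               for f, (dy, dx) in zip(frames, offsets)]
    return np.median(aligned, axis=0)

# Demo: three pan frames of the same background, each containing a
# transient "moving element" (value 9.0) at a different location.
rng = np.random.default_rng(1)
base = rng.random((8, 8))
frames, offsets = [], []
for i, pos in enumerate([(1, 1), (3, 2), (5, 3)]):
    f = np.roll(base, (0, i), axis=(0, 1))
    f[pos] = 9.0
    frames.append(f)
    offsets.append((0, i))
bg = pan_background(frames, offsets)
```

The median is one simple stand-in for the "operator-adjusted global settings" named above; any robust per-pixel vote over the aligned frames serves the same purpose.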
For colorization projects, color design is performed on the single background image representing the camera pan image sequence, limited only by the number of pixels in the display, using multiple color-transform lookup tables. This allows the designer to include as much detail as desired, including air-brushing of mask information and other mask application techniques that provide maximum creative expression. For depth conversion projects (i.e., 2D-to-3D movie conversion, for example), the single background image representing the camera pan image sequence can be used to set the depths of the different items in the background. Once the background color/depth design is complete, the mask information is automatically transferred to all of the frames that were used to create the single composite image. In this way, color or depth is applied once per scene or per group of images rather than once per frame, and the color/depth information is automatically propagated to each frame by embodiments of the invention. Masks from a colorization project may be combined or grouped differently for depth conversion projects, since color masks may contain more sub-regions than depth conversion masks. For example, for a colorization project a face may have several masks applied to regions such as the lips, eyes and hair, whereas a depth conversion project may only require the outline of a person's head, or the person's nose outline, or a few geometric sub-masks, to apply depth to. Masks from a colorization project can be used as the starting point for a depth conversion project, since defining the outlines of human-recognizable objects is itself time-consuming, and reusing them to begin the depth conversion masking process saves time. Any computer-generated elements at the background level can be applied to the single background image.
In one or more embodiments of the invention, while the single composite image representing the pan is being created, the image displacement information relative to each frame is registered in a text file and used to apply the single composite mask to the frames that were used to create the composite image.
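A sketch of applying the single composite mask back to each source frame using the displacement text file; the "frame_number dx dy" line format is an assumption, as is the use of integer offsets:

```python
import numpy as np

def masks_for_frames(composite_mask, offset_lines):
    """Re-apply one composite mask to every source frame using the
    per-frame displacements registered in a text file (assumed format:
    'frame_number dx dy' per line)."""
    masks = {}
    for line in offset_lines:
        frame, dx, dy = (int(v) for v in line.split())
        masks[frame] = np.roll(composite_mask, (dy, dx), axis=(0, 1))
    return masks

# Demo: a one-pixel composite mask propagated to two pan positions.
mask = np.zeros((4, 4))
mask[0, 0] = 1.0
per_frame = masks_for_frames(mask, ["1 1 0", "2 2 0"])
```

The design point is that the mask is authored once against the composite and only ever translated per frame, which is what makes one design pass cover the whole pan.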
Since the foreground motion elements have been masked separately before the background mask is applied, the background mask information is applied only where no pre-existing mask information is present.
Static camera scenes, with or without film weave, minor camera following motion and camera drift: for scenes with minor camera motion or film weave caused by the sprocket transfer from 35mm or 16mm film to digital format, the moving objects are first completely masked using the techniques listed above. All frames in the scene are then processed automatically to create a single image representing the static foreground and background elements, eliminating all masked moving objects where they both occlude and expose the background.
Wherever the masked moving objects expose the background or foreground, the previously occluded background and foreground takes precedence and is copied into the single image with the appropriate offset to compensate for camera motion. The offset information is included in a text file associated with each single representation of the background, so that the resulting mask information can be applied to each frame in the scene with the appropriate mask offsets.
Color design is performed only once, on a single background image representing a series of still-camera frames, limited only by the number of pixels in the display, using multiple color-transform lookup tables. Over the series of successive frames, moving elements continuously occlude background elements; these occlusions appear as black silhouettes that are ignored during masking. The black objects are ignored during the masking operation in colorization-only projects, because the resulting background mask is later applied, wherever no mask already exists, to all of the frames used to create the single background representation. If background information exists for a region that is never exposed, that data is treated like any other background data scattered across the series of images from which the composite background was built. This enables 2D-to-3D conversion with minimal or no artifacts, because missing data never has to be covered by stretching objects or spreading pixels: during the depth-conversion process, image data that is believable to a human viewer is generated for occluded regions as needed and then drawn from those regions. Thus, for moving elements and computer-generated elements, realistic data can be used for the regions behind those elements even where nothing was ever exposed. This lets the designer include as much detail as desired, including airbrushing of mask information and other mask-application techniques that allow maximum creative expression. Once the background color design is complete, the mask information is automatically transferred to all frames used to create the single composite image. For depth projects, the distance from the camera to each item in the composite frame is automatically transferred to all frames used to create the composite. Masked background objects are shifted more or less horizontally in the secondary-viewpoint frame corresponding to each frame in the scene, thereby setting their perceived depth. This horizontal shifting can use artist-generated data for the occluded portions; alternatively, in one or more embodiments of the invention, a user-defined color marker indicating missing data can be used in the second viewpoint wherever image data still does not exist, to guarantee that no artifacts appear during the 2D-to-3D conversion process. In embodiments of the invention, any known technique can be used to cover regions of the background where data is unknown (displayed, for example, in a color that indicates missing data), for example by letting the artist create a complete background using material from another scene/frame, or by having the artist paint the smaller occluded regions behind objects directly. After depth has been assigned to the objects in the composite background, or imported with the computer-generated elements associated with background depths, a second-viewpoint image can be created for every image in the scene, for example by translating foreground objects horizontally for the second viewpoint, or alternatively by translating foreground objects horizontally both left and right to create two viewpoints offset from the original viewpoint, thereby producing a stereoscopic view of the film; in the first case, for example, the original frame in the scene may be assigned to one eye, such as the right-eye viewpoint.
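A minimal sketch of the horizontal-translation step, assuming integer per-pixel disparities: each pixel is shifted horizontally in proportion to its assigned depth, and any destination pixel never covered by a shifted source keeps a user-defined missing-data marker, flagging the regions that need generated or painted background data. This is an illustration, not the patent's renderer.

```python
import numpy as np

MISSING = -1  # user-defined marker for never-exposed pixels

def second_viewpoint(frame, depth, scale=1.0):
    """Create a second-viewpoint frame by shifting each pixel
    horizontally in proportion to its depth (nearer = larger shift).

    frame: (H, W) array of pixel values (small ids here, for clarity).
    depth: (H, W) integer disparity per pixel; 0 = at screen plane.
    """
    h, w = frame.shape
    out = np.full((h, w), MISSING, dtype=frame.dtype)
    # paint far-to-near so nearer (larger-disparity) pixels overwrite
    for d in sorted(set(depth.ravel())):
        ys, xs = np.nonzero(depth == d)
        nx = xs + int(round(d * scale))
        ok = (nx >= 0) & (nx < w)
        out[ys[ok], nx[ok]] = frame[ys[ok], xs[ok]]
    return out

# Demo: one row; the middle pixel is foreground with disparity 2.
frame = np.array([[10, 11, 12, 13, 14]])
depth = np.array([[0, 0, 2, 0, 0]])
view = second_viewpoint(frame, depth)
```

In the demo the foreground pixel lands two positions to the right, and the position it vacated is marked MISSING, exactly the gap that the text says must be filled with generated background data or flagged by a marker color.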
One or more tools employed by the system can minimize or eliminate iterative workflow paths that send work back to a different work group, for example by generating portable pixel-editing files, such as translation files, that allow a 3D image to be edited in real time without re-rendering, e.g., to change layers/colors/masks and/or to remove artifacts. For example, the masking group takes the source images and creates masks for the items, regions, or human-recognizable objects in every frame of the image sequence that makes up the film. The depth-augmentation group applies depth, and shapes for example, to the masks created by the masking group. When the images are rendered, left and right viewpoint images and left and right translation files can be generated by one or more embodiments of the invention. The left and right viewpoint images allow 3D viewing of the original 2D image. A translation file specifies the pixel offset for each source pixel in the original 2D image, for example in the form of a UV or U map. These files are generally associated with an alpha mask for each layer, e.g., a layer for an actress, a layer for a door, a layer for the background, and so on. These translation files or maps are passed from the depth-augmentation group, which renders the 3D images, to the quality-assurance group. This allows the quality-assurance group (or another work group, such as the depth-augmentation group) to perform real-time edits of the 3D image without re-rendering, e.g., to change layers/colors/masks and/or to remove artifacts such as masking errors, and without the processing time of re-rendering and/or the delay associated with an iterative workflow that would otherwise require re-rendering or sending the masks back to the masking group for rework; the masking group may, for example, be located in a third-world country with unskilled labor on the other side of the globe. In addition, when the left and right images, i.e., the 3D images, are rendered, the Z depths of regions in the image, such as an actor, can also be passed to the quality-assurance group along with the alpha masks, and the quality-assurance group can then also adjust depths without re-rendering with the original rendering software. This can be performed, for example, using generated occluded-background data from any layer, so as to allow "downstream" real-time editing without, for example, re-rendering or ray-tracing. Quality assurance can provide feedback to individual masking or depth-augmentation workers, who can be instructed to rework anything desired in the work product for a given project without the downstream groups waiting for, or requiring, the rework before the current project is completed. This allows feedback while eliminating the delays associated with sending work product back for iterative rework and waiting for the reworked result. Eliminating such iterations provides enormous savings in the end-to-end, or wall-clock, time and cost of a conversion project, thereby increasing profit and minimizing the labor required to implement the workflow.
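The role of a translation (U) map can be sketched as follows: per source pixel, the map stores the horizontal offset used when compositing one eye view, so a downstream group can re-composite a layer (for example after nudging its depth, i.e., adding a constant to its offsets) without re-rendering. The map convention and names are assumptions; a real pipeline would use sub-pixel offsets and vectorized compositing.

```python
import numpy as np

def apply_u_map(source, u_map, alpha, base):
    """Composite one layer into an eye view using its translation (U) map.

    source: (H, W) layer pixel values from the original 2D image.
    u_map:  (H, W) per-pixel horizontal offsets recorded at render time.
    alpha:  (H, W) floats in [0, 1]; the layer's alpha mask.
    base:   (H, W) eye-view image composited so far (farther layers).
    """
    h, w = source.shape
    out = base.astype(float).copy()
    for y in range(h):
        for x in range(w):
            nx = x + int(u_map[y, x])
            if 0 <= nx < w and alpha[y, x] > 0:
                a = alpha[y, x]
                # alpha-blend the shifted layer pixel over the base
                out[y, nx] = a * source[y, x] + (1 - a) * out[y, nx]
    return out

# Demo: shift an opaque one-pixel "actress" layer 1 px right over a
# flat background, as QA might do in real time after an edit.
bg = np.zeros((1, 4))
layer = np.array([[0.0, 5.0, 0.0, 0.0]])
u = np.array([[0, 1, 0, 0]])
a = np.array([[0.0, 1.0, 0.0, 0.0]])
eye = apply_u_map(layer, u, a, bg)
```

Because the offsets and alpha are stored per layer, editing one layer requires only re-running this composite, not re-rendering the full scene.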
General workflow for the review stage
Regardless of the type of project work performed on a given asset, the asset is reviewed, for example, using an interface coupled with the project-management database that allows the work product to be checked. Typically, users in the editorial role use this interface the most, artists and stereographers next, and art directors the least. Review notes and images can be viewed simultaneously, for example by overlaying clear, background-surrounded text on the image or scene, allowing workers with a given role to review and give feedback quickly. Other improvements to the project-management database include artist ratings and asset difficulty. These fields allow worker ratings and expected costs to be forecast when bidding on a project, which is unknown in the field of motion-picture project planning.
General workflow for the archiving and delivery stages
An asset manager can delete and/or compress any assets that can be regenerated, which can save hundreds of terabytes of disk space for a typical motion picture. This allows enormous savings in disk-drive hardware purchases and is unknown in the art.
One or more embodiments of the system can be implemented with a computer and a database coupled with the computer. Any computer architecture, with any number of computers coupled, for example, via a computer communication network, is in keeping with the spirit of the invention. The database coupled with the computer includes at least a project table, a shot table, a task table, and a time-entry table. The project table generally includes a project identifier and a project description associated with a motion picture. The shot table generally includes a shot identifier and references a plurality of images having a start-frame value and an end-frame value, where the plurality of images is associated with the motion picture associated with the project. The shot table generally includes at least one shot having a status associated with the work progress performed on the shot. The task table generally references the project using the project identifier that also resides in the project table. The task table generally includes at least one task, which generally includes a task identifier and an assignee, e.g., an artist, and may also include a context, selected for example from region design, setup, motion, compositing, and review, associated with the type of task performed on the motion picture. The at least one task generally includes the time allocated for the at least one task. The time-entry table generally references the project identifier in the project table and the task identifier in the task table. The time-entry table generally includes at least one time entry comprising a start time and an end time. In one or more embodiments of the invention, the computer is configured to present a first display configured to be viewed by an artist, where the first display includes at least one daily assignment having a context, a project, a shot, a status input configured to update the status in the task table, and a timer input configured to update the start time and end time in the time-entry table. The computer is generally configured to present a second display viewed by a coordinator or "production" worker, which includes a search display having context, project, shot, status, and artist, and which further includes a list of artists and a performance metric per status, based on the time spent in the at least one time entry relative to the time allocated to the at least one task associated with the at least one shot. The computer is generally also configured to present a third display configured to be viewed by an editor, which includes an annotation frame configured to accept a comment, a drawing, or both a comment and a drawing regarding at least one image of the plurality of images associated with the at least one shot. One or more embodiments of the computer can be configured to provide the third display, configured to be viewed by an editor, with the annotation overlaid on at least one of the plurality of images. This capability provides, on one display, information that would normally require three workers to integrate in known systems, and is itself novel.
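A minimal relational sketch of the four core tables just described (project, shot, task, time entry), using SQLite; the column names are illustrative assumptions, not the patent's actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE project (
    project_id      INTEGER PRIMARY KEY,
    description     TEXT    -- description of the motion picture
);
CREATE TABLE shot (
    shot_id         INTEGER PRIMARY KEY,
    project_id      INTEGER REFERENCES project(project_id),
    start_frame     INTEGER,
    end_frame       INTEGER,
    status          TEXT    -- work progress on the shot
);
CREATE TABLE task (
    task_id         INTEGER PRIMARY KEY,
    project_id      INTEGER REFERENCES project(project_id),
    shot_id         INTEGER REFERENCES shot(shot_id),
    artist          TEXT,   -- assignee
    context         TEXT,   -- e.g. region design, setup, motion,
                            -- compositing, review
    allocated_hours REAL
);
CREATE TABLE time_entry (
    entry_id        INTEGER PRIMARY KEY,
    project_id      INTEGER REFERENCES project(project_id),
    task_id         INTEGER REFERENCES task(task_id),
    start_time      TEXT,
    end_time        TEXT
);
""")
conn.execute("INSERT INTO project VALUES (1, 'Feature film conversion')")
conn.execute("INSERT INTO shot VALUES (10, 1, 100, 250, 'in progress')")
conn.execute("INSERT INTO task VALUES (7, 1, 10, 'artist_a', 'motion', 8.0)")
conn.execute("INSERT INTO time_entry VALUES "
             "(1, 1, 7, '2013-01-01T09:00', '2013-01-01T13:00')")
rows = conn.execute(
    "SELECT t.artist, s.status FROM task t JOIN shot s ON t.shot_id = s.shot_id"
).fetchall()
```

The joins shown here are what the artist, production, and editorial displays would draw on: a task row links an artist and context to a shot, and time entries accumulate against the task's allocated hours.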
Embodiments of the database may also include a snapshot table that includes a snapshot identifier and a search type and holds a snapshot of the at least one shot, e.g., a subset of the at least one shot, where the snapshot is cached on the computer to reduce accesses to the shot table. Embodiments may also include other context settings for other types of task categories, such as source- and cleanup-related tasks. Any other context setting or value related to motion-picture work that keeps within the spirit of the invention may also be included. Embodiments of the database may also include an asset-request table that includes an asset-request identifier and a shot identifier and that can be used to request work on an asset, or to request the asset itself, for example as worked on or created by another worker. Embodiments of the database may also include a request table that includes a mask-request identifier and a shot identifier and that can be used to request any type of action, for example from another worker. Embodiments of the database may also include a notes table that includes a note identifier, references the project identifier, and contains at least one note related to at least one image of the plurality of images from the motion picture. Embodiments of the database may also include a delivery table that includes a delivery identifier, references the project identifier, and contains information related to the delivery of the motion picture.
One or more embodiments of the computer are configured to accept rating inputs for the work performed by an artist from production or editorial, optionally in a blind-review mode in which the reviewer does not know the artist's identity, to prevent favoritism, for example. One or more embodiments of the computer are configured to accept the difficulty of the at least one shot and to calculate a rating based on the work performed by the artist, the difficulty of the shot, and the time spent on the shot. One or more embodiments of the computer are configured to accept a rating input for the artist's work from production or editorial (i.e., an editor), or to accept the difficulty of the at least one shot and calculate the rating based on the artist's work, the shot's difficulty, and the time spent on the shot, and to display incentives for the artist based on the accepted or computer-calculated rating. One or more embodiments of the computer are configured to estimate the remaining cost based on a performance metric, where the performance metric is based on the total time spent on all tasks associated with all shots in the project relative to the total time allocated to all tasks associated with all shots in the project. One or more embodiments of the computer are configured to compare the performance metric associated with a first project with the performance metric associated with a second project, and to indicate at least one worker to reassign from the first project to the second project based on at least one rating assigned to a first worker on the first project. One or more embodiments of the computer are configured to analyze a prospective project having a number of shots, estimate the difficulty of each shot, and, based on the performance metric associated with a project, calculate a forecast cost for the prospective project. One or more embodiments of the computer are configured to analyze a prospective project having a number of shots, estimate the difficulty of each shot, and, based on the performance metrics of a previously performed first project and a previously performed second project completed after the first project, calculate the derivative of the performance metric and, based on that derivative, calculate a forecast cost for the prospective project. For example, as processes, tools, and workers improve, work efficiency improves; budgeting and bidding can account for this by computing the relationship between efficiency and time and using that rate of change to forecast the cost of a prospective project. One or more embodiments of the computer are configured to analyze the performance metric associated with the project and, using the shots completed divided by the total shots associated with the project, provide a completion date for the project. One or more embodiments of the computer are configured to analyze the performance metric associated with the project, provide a completion date using the shots completed divided by the total shots associated with the project, accept an input of at least one additional artist with a rating, accept a number of shots to assign to the additional artist, calculate a time saving based on the at least one additional artist and the number of shots, subtract that time saving from the project completion date, and provide an updated project completion time. One or more embodiments of the computer are configured to calculate the amount of disk space that would be used to archive the project and to indicate that at least one asset can be rebuilt from other assets so as to avoid archiving that asset. One or more embodiments of the computer are configured to display an error message when an artist works on a frame number that is not in the at least one shot. This may occur, for example, when a fade-in, fade-out, or other effect lengthens a shot, in which case the shot contains frames that are not in the original source asset.
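The "performance metric" arithmetic described above (time spent versus time allocated, scaled onto the remaining budget, plus shot-count completion tracking) can be sketched as follows; the function names and the linear scaling are illustrative assumptions.

```python
def performance_metric(tasks):
    """Ratio of hours spent to hours allocated across all tasks.

    tasks: list of (hours_spent, hours_allocated) pairs.
    A value > 1.0 means the project is running over its time budget.
    """
    spent = sum(s for s, _ in tasks)
    allocated = sum(a for _, a in tasks)
    return spent / allocated

def forecast_remaining_cost(tasks, remaining_allocated_hours, hourly_rate):
    """Scale the remaining budgeted hours by observed efficiency."""
    return performance_metric(tasks) * remaining_allocated_hours * hourly_rate

def completion_fraction(shots_done, shots_total):
    """Shots completed divided by total shots, used to project a date."""
    return shots_done / shots_total

# Demo: one task over budget, one under; 100 budgeted hours remain.
tasks = [(10.0, 8.0), (6.0, 8.0)]
metric = performance_metric(tasks)            # 16 spent / 16 allocated
cost = forecast_remaining_cost(tasks, 100.0, 50.0)
frac = completion_fraction(30, 120)
```

A derivative-based forecast, as described above, would compute this metric for two completed projects and extrapolate the trend rather than using a single project's ratio.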
Brief Description of the Drawings
Figure 1 shows a plurality of feature-film or television frames representing a scene, or cut, in which a single instance or perception of the background exists.
Figure 2 shows a processed scene in which the background has been isolated from the plurality of frames shown in Figure 1, using various subtraction and differencing techniques to remove all moving elements. The single background image is then used to create a background mask overlay representing designer-selected color lookup tables, in which dynamic pixel colors automatically compensate for, or adjust to, moving shadows and other changes in luminance.
Figure 3 shows that a representative sample of each moving object (M-object) in the scene receives a mask overlay representing designer-selected color lookup tables, in which dynamic pixel colors automatically compensate for, or adjust to, moving shadows and other changes in luminance as the moving object moves through the scene.
Figure 4 shows that all masked elements of the scene are then rendered to create a fully colored frame, in which the moving-object masks are applied to each appropriate frame in the scene, followed by the background mask, applied in a Boolean manner only where no pre-existing mask is present.
Figures 5A and 5B show a series of successive frames loaded into display memory, in which one frame is fully masked with the background (the key frame) and ready for mask propagation to subsequent frames via an automatic mask-fitting method.
Figures 6A and 6B show a subwindow displaying an enlarged and scalable single image from the series of successive images in display memory. This subwindow enables the operator to manipulate masks interactively on a single frame, or across multiple frames during real-time or slowed motion.
Figures 7A and 7B show a single mask (of a human figure) automatically propagated to all frames in display memory.
Figure 8 shows all masks associated with a moving object propagated to all successive frames in display memory.
Figure 9A shows a picture of a face.
Figure 9B shows features of the face in Figure 9A, in which the "small dark" pixels shown in Figure 9B are used to calculate a weighted index using bilinear interpolation.
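Bilinear interpolation of the kind referenced for Figure 9B can be sketched as follows: the four pixels surrounding a fractional position are weighted by the fractional distances to that position. This is the standard formula, shown for illustration only.

```python
def bilinear(img, x, y):
    """Bilinearly interpolate img (a list of rows) at fractional (x, y).

    The four surrounding pixels are weighted by the fractional
    distances, which is how a weighted index is computed between
    pixel centers.
    """
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    p00 = img[y0][x0]          # top-left neighbor
    p10 = img[y0][x0 + 1]      # top-right neighbor
    p01 = img[y0 + 1][x0]      # bottom-left neighbor
    p11 = img[y0 + 1][x0 + 1]  # bottom-right neighbor
    return (p00 * (1 - fx) * (1 - fy) + p10 * fx * (1 - fy)
            + p01 * (1 - fx) * fy + p11 * fx * fy)

img = [[0, 10],
       [20, 30]]
center = bilinear(img, 0.5, 0.5)   # average of the four corners
```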
Figures 10A-D show a best fit on the search error surface: in the gradient-descent search method, the error-surface calculation involves the mean squared difference between the pixels of a square fit box, centered on reference-image pixel (x0, y0) in the reference image frame, and the corresponding (offset) box at position (x, y) on the search image frame.
Figures 11A-C show a second fit box derived by gradient descent along the (individually evaluated) error surface, for which the evaluated error function decreases relative to the original reference frame and is possibly minimized (as is evident from a visual comparison of the reference boxes in these figures and in Figures 10A-D).
Figure 12 depicts gradient component evaluation. The error-surface gradient is calculated according to the definition of the gradient. Vertical and horizontal error deviations are evaluated at four positions in the fit box near the center position and combined to provide an estimate of the error gradient at that position.
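The error-surface and gradient computations of Figures 10-12 can be sketched as follows, assuming a square fit box and a mean-squared-error metric as described; the patent's exact search schedule is not reproduced here.

```python
import numpy as np

def box_mse(ref, search, cx, cy, ox, oy, r):
    """Mean squared error between the (2r+1)^2 box centered at (cx, cy)
    in the reference frame and the box centered at (cx+ox, cy+oy) in
    the search frame -- one sample of the error surface."""
    a = ref[cy - r:cy + r + 1, cx - r:cx + r + 1]
    b = search[cy + oy - r:cy + oy + r + 1, cx + ox - r:cx + ox + r + 1]
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def error_gradient(ref, search, cx, cy, ox, oy, r):
    """Central-difference estimate of the error-surface gradient, from
    horizontal and vertical MSE evaluations around the current offset."""
    gx = (box_mse(ref, search, cx, cy, ox + 1, oy, r)
          - box_mse(ref, search, cx, cy, ox - 1, oy, r)) / 2.0
    gy = (box_mse(ref, search, cx, cy, ox, oy + 1, r)
          - box_mse(ref, search, cx, cy, ox, oy - 1, r)) / 2.0
    return gx, gy

# Demo: the search frame is the reference shifted 2 px right, so the
# error surface has its minimum at offset (2, 0).
ref = np.zeros((9, 9))
ref[3:6, 2:5] = 1.0                  # a small bright block
search = np.roll(ref, 2, axis=1)
e_wrong = box_mse(ref, search, 3, 4, 0, 0, 1)
e_right = box_mse(ref, search, 3, 4, 2, 0, 1)
```

A descent step moves the offset against the gradient; in the demo, the horizontal gradient at offset (0, 0) is negative, correctly pointing toward the true offset (2, 0).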
Figure 13 shows propagated masks in a first sequential instance, where there is little difference between the underlying image data and the mask data. It can be clearly seen that the dress mask and hand masks fit closely relative to the image data.
Figure 14 shows mask data adjusted to the image data by the automatic mask-fitting routine, with reference to the underlying image data in the preceding image.
Figure 15 shows mask data in a later image in the sequence exhibiting significant differences relative to the underlying image data. The eye-makeup, lipstick, blush, hair, face, dress, and hand image data have all shifted relative to the mask data.
Figure 16 shows mask data automatically adjusted to the underlying image data based on the previous mask and the underlying image data.
Figure 17 shows the mask data from Figure 16 displayed with the appropriate color transforms after whole-frame automatic mask fitting. The mask data are adjusted to fit the underlying luminance pattern based on data from the previous frame or from the initial key frame.
Figure 18 shows polygons used to outline a region of interest for masking in frame 1. The square polygon points snap to the edges of the object of interest. Using Bezier curves, the Bezier points snap to the object of interest and the control points/curves fit to the edges.
Figure 19 shows the entire polygon or Bezier curve carried to the last frame selected in display memory, where the operator adjusts the polygon points or Bezier points and curves using the automatic snap function, which snaps the points and curves to the edges of the object of interest.
Figure 20 shows that, with operator interactive adjustment, if there are significant differences between the points and curves of the intermediate frames between the two frames, the operator further adjusts whichever of the intermediate frames has the greatest fitting error.
Figure 21 shows that when the polygons or Bezier curves between the two adjusted frames are determined to animate correctly, the appropriate masks are applied to all frames.
Figure 22 shows the masks that result from polygon or Bezier animation in which the points and curves automatically snap to edges. The brown masks are color transforms and the green masks are arbitrary see-through masks.
Figure 23 shows an example of the first pass of a two-pass blend: the purpose of the two-pass blend is to eliminate moving objects from the final blended mosaic. This is done by first blending the frames so that the moving object is completely removed from the left side of the background mosaic. As shown in Figure 23, the character is removed from the scene but can still be seen on the right side of the background mosaic.
Figure 24 shows the second blending pass. A second background mosaic is generated, using a blend position and width chosen so that the moving object is removed from the right side of the final background mosaic. As shown in Figure 24, the character is removed from the scene but can still be seen on the left side of the background mosaic; in this second pass, the moving character appears on the left side.
Figure 25 shows the final background corresponding to Figures 23-24. The two background mosaics are blended to generate the final blend, from which the moving object has been removed. As shown in Figure 25, the final blended background is free of the moving character.
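The final combination step of the two-pass blend (Figures 23-25) can be sketched as follows, using a hard split instead of a feathered crossfade for clarity; the split position and the toy pixel values are assumptions.

```python
import numpy as np

MOVER = 9.0  # pixel value standing in for the moving character

def combine_passes(pass1, pass2, split):
    """Final step of the two-pass blend: keep the left of the first
    mosaic (built so the mover never reaches its left side) and the
    right of the second (built so the mover never reaches its right)."""
    out = pass1.copy()
    out[:, split:] = pass2[:, split:]
    return out

# Toy one-row mosaics over a background of value 1.0:
pass1 = np.ones((1, 6)); pass1[0, 4] = MOVER   # mover survives on the right
pass2 = np.ones((1, 6)); pass2[0, 1] = MOVER   # mover survives on the left
background = combine_passes(pass1, pass2, 3)   # mover-free final background
```

Each pass leaves the mover visible on one side only, so taking the clean side of each mosaic yields a background with the mover fully removed, which is the result shown in Figure 25.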
Figure 26 shows the edit-frame pairing window.
Figure 27 shows successive frames of a camera pan loaded into memory. The moving object (a butler moving left toward a door) is masked with a series of color-transform information, leaving the black-and-white background with no mask or color-transform information applied.
Figure 28 shows six representative successive frames of the pan above, shown for clarity.
Figure 29 shows a composite, or montage, image of the entire camera pan built using phase-coherence techniques. The moving object (the butler) is averaged out by the phase correlation in both directions and is retained as a transparency for reference by keeping the first and last frames. Color design is performed on the single montage representation of the pan using the same color-transform masking techniques used for foreground objects.
Figure 30 shows the montage background mask color transforms applied to every frame in the camera pan that was used to create the montage. The masks are applied wherever no mask already exists, thereby retaining the moving-object masks and color-transform information while applying the background information with the appropriate offsets.
Figure 31 shows, for clarity, a selected sequence of frames in the pan after the colored background masks have been automatically applied to the frames where no masks previously existed.
Figure 32 shows a sequence of frames in which all moving objects (actors) are masked with separate color transforms.
Figure 33 shows, for clarity, a selected sequence of frames prior to the background mask information. All moving elements have been fully masked using the automatic mask-fitting algorithm.
Figure 34 shows the static background and foreground information with the previously masked moving objects subtracted out. In this case, the single representation of the complete background is masked with color transforms in a manner similar to the moving objects. Note that the outlines of the removed foreground objects appear truncated and unrecognizable because they span the interval of the frame sequence; that is, the black objects in the frame represent regions in which the moving objects (actors) never expose the background and foreground. In colorization-only projects, the black objects are ignored during the masking operation, because the resulting background masks are later applied, wherever no mask already exists, to all frames used to create the single background representation. In depth-conversion projects, the missing-data regions can be displayed so that image data can be obtained or generated for them, providing visually believable image data when the foreground objects are translated horizontally to create the second viewpoint.
Figure 35 shows successive frames of a still-camera scene cut after the background mask information has been applied to each frame with the appropriate offsets, wherever no pre-existing mask information exists.
Figure 36 shows a representative sample of frames from the still-camera scene cut after the background information has been applied with the appropriate offsets, wherever no pre-existing mask information exists.
Figures 37A-C show an embodiment of the mask-fitting function, including calculating the fit grid and interpolating the mask on the fit-grid points.
Figures 38A-B show an embodiment of the extract-background function.
Figures 39A-C show an embodiment of the snap-point function.
Figures 40A-C show an embodiment of the bimodal-threshold masking function, in which Figure 40C corresponds to step 2.1 in Figure 40A, i.e., "create image of light/dark cursor shape", and Figure 40B corresponds to step 2.2 in Figure 40A, i.e., "apply light/dark shape to mask".
Figures 41A-B show an embodiment of the calculate-fit-value function.
Figure 42 shows two image frames, some frames apart in time, of a person with a floating crystal ball, in which each of the various objects in the image frames is to be converted from a two-dimensional object to a three-dimensional object.
Figure 43 shows the masking of the first object in the first image frame to be converted from a two-dimensional image into a three-dimensional view.
Figure 44 shows the masking of a second object in the first image frame.
Figure 45 shows two see-through masks that allow the portions of the first image frame associated with the masks to be viewed.
Figure 46 shows the masking of a third object in the first image frame.
Figure 47 shows three see-through masks that allow the portions of the first image frame associated with the masks to be viewed.
Figure 48 shows the masking of a fourth object in the first image frame.
Figure 49 shows the masking of a fifth object in the first image frame.
Figure 50 shows the control panel for creating three-dimensional images, including the association of layers and three-dimensional objects with the masks in the image frame, and specifically shows the creation of a plane layer for the sleeve of the person in the image.
Figure 51 shows a three-dimensional view of the various masks shown in Figures 43-49, in which the mask associated with the person's sleeve is shown, on the right of the page, as a plane layer rotated toward the left and right viewpoints.
Figure 52 shows a slightly rotated view of Figure 51.
Figure 53 shows another slightly rotated view of Figure 51.
Figure 54 shows the control panel, specifically showing the creation of a sphere object for the crystal ball in front of the person in the image.
Figure 55 shows the flat mask of the crystal ball applied to the sphere object, showing the projection onto the front and back of the sphere to illustrate the depth assigned to the crystal ball across the sphere.
Figure 56 shows a top view of the three-dimensional representation of the first image frame, showing the Z dimension assigned to the crystal ball, with the crystal ball in front of the person in the scene.
Figure 57 shows the sleeve plane rotated about the X axis so that the sleeve appears to extend further out of the image.
Figure 58 shows the control panel, specifically showing the creation of a head object to be applied to the face in the image, i.e., to give the face a realistic depth without requiring, for example, a wireframe model.
Figure 59 shows the head object in a three-dimensional view, too large and not aligned with the actual human head.
Figure 60 shows the head object in a three-dimensional view, resized to fit the face and adjusted, e.g., moved, to the position of the actual human head.
Figure 61 shows the head object in a three-dimensional view, with the Y-axis rotation indicated by the circle; the Y axis has its origin at the person's head, allowing the head object to be rotated correctly to correspond to the orientation of the face.
Figure 62 shows the head object also rotated slightly clockwise about the Z axis to correspond to the slight tilt of the person's head.
Figure 63 shows the masks propagated to the second and final image frame.
Figure 64 shows the original position of the mask corresponding to the person's hand.
Figure 65 shows the reshaping of the mask, which can be performed automatically and/or manually, in which any intermediate frames obtain interpolated depth information between the mask of the first image frame and the mask of the second image frame.
Figure 66 shows the missing information for the left viewpoint as a colored highlight over the underlying image on the left side of the masked object, where the foreground object (here the crystal ball) moves to the right.
Figure 67 shows the missing information for the right viewpoint as a colored highlight over the underlying image on the right side of the masked object, where the foreground object (here the crystal ball) moves to the left.
Figure 68 shows an anaglyph of the final depth-enhanced first image frame, viewable with red/blue 3D glasses.
Figure 69 shows an anaglyph of the final depth-enhanced second and last image frame, viewable with red/blue 3D glasses; note the rotation of the person's head, the movement of the person's hand, and the movement of the crystal ball.
Figure 70 shows the right side of the crystal ball "smeared" with the fill mode, where missing information exists for the left viewpoint; that is, the pixels on the right side of the crystal ball are taken from the right edge of the missing image pixels and "smeared" horizontally to cover the missing information.
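The "smear" fill of Figure 70 can be sketched as follows: the known pixel at the edge of a run of missing pixels is copied horizontally across the gap. Here the right-edge pixel is copied leftward, matching the figure; this is the last-resort fill used when no generated background data exists for the gap, and the function name is an assumption.

```python
def smear_fill(row, missing):
    """Fill each run of missing pixels by copying the known pixel at
    the right edge of the gap leftward ('smearing')."""
    out = list(row)
    nxt = None
    for i in range(len(out) - 1, -1, -1):   # scan right-to-left
        if out[i] != missing:
            nxt = out[i]                    # remember last known pixel
        elif nxt is not None:
            out[i] = nxt                    # copy it across the gap
    return out

row = [7, 7, -1, -1, 3, 3]   # -1 marks pixels occluded in every frame
filled = smear_fill(row, -1)
```

As the surrounding text notes, this stretching is visible as an artifact (Figure 75), which is why generated background data (Figures 73 and 73A) is preferred wherever it is available.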
Figure 71 shows for the upper body of performer and the mask of head (and transparent wing) or α plane.Mask can comprise show for black zone of opacity and show transparent region for gray area.
Figure 72 shows occlusion area, and it is corresponding to the performer of Figure 71, and it illustrates the region of the background exposed in any frame of scene never.This can be such as synthesize background.
Figure 73 shows upper reproduction of art and is used in the background complete and true to nature in 2 d-to-3 d conversion to allow the occlusion area without pseudomorphism conversion with generation.
Figure 73 A shows and partly draws or otherwise reproduce to generate the occlusion area of the background enough true to nature be used in the pseudomorphism minimizing 2 d-to-3 d conversion.
Figure 74 shows the bright area of the shoulder on the right side of Figure 71, and it represents when foreground target is moved on to the left side to use the gap at (it is also shown in Figure 70) place that stretches when creating right viewpoint.The dark-part of figure take from data at least one frame of its Scene can background.
Figure 75 shows when not using the background of generation, if namely do not have background data to be used in the region be blocked in all frames of scene, and the example of the stretching (namely smearing) of the pixel corresponding to the bright area in Figure 74.
The edge that Figure 76 shows the shoulder of people does not have the result of the right viewpoint of pseudomorphism, wherein dark areas comprises pixel available in one or more frames of scene, and the data of generation for the region of always blocking of scene.
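As a concrete illustration of the horizontal "smearing" described for Figures 70 and 75, a gap of missing pixels can be covered by repeating the nearest valid pixel across the hole. The following is a minimal sketch, not the patented implementation; the function and variable names are illustrative:

```python
import numpy as np

def smear_fill(row: np.ndarray, hole: np.ndarray) -> np.ndarray:
    """Fill a horizontal run of missing pixels by repeating the nearest
    valid pixel at the edge of the hole ("stretching"/"smearing")."""
    out = row.copy()
    last_valid = None
    for x in range(len(out)):
        if hole[x]:
            if last_valid is not None:
                out[x] = last_valid  # smear the edge pixel across the gap
        else:
            last_valid = out[x]
    return out
```

As the document notes, this produces visible artifacts when the occluded area is large, which is why an artistically rendered background (Figure 73) is preferred.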
Figure 77 shows an example of a computer-generated element (here a robot), modelled in three dimensions and projected as a two-dimensional image. If metadata such as alpha, mask, depth or any combination thereof exists, that metadata can be utilised to accelerate the conversion process from a two-dimensional image to a pair of two-dimensional images, one for each eye, for stereoscopic viewing.
Figure 78 shows the imported colour and depth of the computer-generated element (i.e. the robot, with depth set automatically via the imported depth metadata), together with the original image separated into background and foreground elements (the mountains and sky in the background, and the soldier at the lower left; see also Figure 79). As shown in the background, any area occluded for the whole scene can be rendered artistically, for example to provide believable missing data, as shown in Figure 73 based on the missing data of Figure 73A, which yields artifact-free edges such as those shown in Figure 76.
Figure 79 shows masks associated with the photograph of the soldier in the foreground, applied to the different parts of the soldier, which is positioned in depth in front of the computer-generated element (i.e. the robot). The dotted lines extending horizontally from the mask areas show that horizontal translation of the foreground object has occurred, and illustrate that when metadata exists for other elements of the film, for example when alpha exists for an object appearing in front of the computer-generated element, the imported metadata can be utilised to automatically correct the depth of the object wherever the masking or colour over-painting is inexact. One type of file that can be utilised to obtain this kind of mask-edge data is a file with alpha and/or mask data, such as an RGBA file.
Figure 80 shows an imported alpha layer, which can also be used as a mask layer, applied to the edges of three soldiers A, B and C to constrain the possibly coarser operator-defined masks. In addition, along the line labelled "dust", a computer-generated dust element can be inserted into the scene to increase the realism of the scene.
Figure 81 shows the result when operator-defined masks are used without adjustment for a moving element, such as a soldier, occluding a computer-generated element, such as the robot. Applying the alpha metadata of Figure 80 to the operator-defined mask edges of Figure 79 allows artifact-free edges to be achieved over the overlapping areas.
Figure 82 shows a source image to be depth-enhanced and delivered together with left and right translation files and alpha masks, so that downstream workgroups can edit the 3D image in real time without re-rendering, for example to change layers/colours/masks, remove artifacts and/or adjust depth, without returning through an iterative workflow path to the original workgroup.
Figure 83 shows masks generated by the masking workgroup for the depth-augmentation group to apply depth with, wherein the masks are associated with objects, e.g. human-recognisable objects, in a source image such as that of Figure 82.
Figure 84 shows areas where depth has been applied, generally darker for nearer objects and lighter for farther objects.
Figure 85A shows a left UV map containing a horizontal translation, or offset, for each source pixel.
Figure 85B shows a right UV map containing a horizontal translation, or offset, for each source pixel.
Figure 85C shows the moving portion of the left UV map of Figure 85A, with black levels adjusted to reveal the small offsets therein.
Figure 85D shows the moving portion of the right UV map of Figure 85B, with black levels adjusted to reveal the small offsets therein.
Figure 86A shows a left U map containing a horizontal translation, or offset, for each source pixel.
Figure 86B shows a right U map containing a horizontal translation, or offset, for each source pixel.
Figure 86C shows the moving portion of the left U map of Figure 86A, with black levels adjusted to reveal the small offsets therein.
Figure 86D shows the moving portion of the right U map of Figure 86B, with black levels adjusted to reveal the small offsets therein.
Figure 87 shows a known application of UV maps, wherein a three-dimensional model is unwrapped so that an image in UV space can be painted onto the 3D model using the UV map.
Figure 88 shows a disparity map illustrating the areas where the difference between the left and right translation maps is greatest.
Figure 89 shows the left-eye rendering of the source image of Figure 82.
Figure 90 shows the right-eye rendering of the source image of Figure 82.
Figure 91 shows the anaglyph of the images of Figures 89 and 90, for use with red/blue glasses.
Figure 92 shows an image that has been masked and is in the process of being depth-enhanced for each of its different layers.
Figure 93 shows the UV map overlaid on the alpha mask associated with the actress shown in Figure 92, wherein the translation offsets in the resulting left and right UV maps are set based on the depth settings of the different pixels in the alpha mask.
Figure 94 shows a workspace, generated by a second depth-enhancement program or, for example, a compositing program, for each of the different layers shown in Figure 92, i.e. a left and right UV translation map for each alpha. This workspace allows quality-assurance personnel (or other workgroups) to edit the 3D image in real time without re-rendering, for example so that layer/colour/mask changes, artifact removal or other mask adjustments, and the resulting changes to the 3D image pair (or anaglyph), can be made without iteratively sending fixes to any other workgroup.
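The left and right UV translation maps described above encode, for each output pixel, where in the source image to sample. A simplified sketch of applying such a map, assuming integer horizontal offsets and ignoring sub-pixel filtering (the function name is illustrative, not from the patent):

```python
import numpy as np

def apply_u_map(src: np.ndarray, u_offsets: np.ndarray) -> np.ndarray:
    """Generate one eye's view by sampling each output pixel from the
    source at a horizontally translated location given by a U offset map."""
    h, w = u_offsets.shape
    out = np.empty_like(src)
    xs = np.arange(w)
    for y in range(h):
        # Clamp sample positions to the image so edge offsets stay valid.
        sample_x = np.clip(xs + u_offsets[y], 0, w - 1).astype(int)
        out[y] = src[y, sample_x]
    return out
```

Running the same source through the left and right maps yields the two renderings of Figures 89 and 90; areas where the two maps differ most correspond to the disparity map of Figure 88.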
Figure 95 shows an iterative correction workflow.
Figure 96 shows an embodiment of the workflow enabled by one or more embodiments of the system, wherein each workgroup can edit the 3D image in real time without re-rendering, for example to change layers/colours/masks, remove artifacts, or otherwise correct the work product from another workgroup, without the iterative delays associated with re-rendering/ray-tracing or with sending work product back through the workflow for correction.
Figure 97 illustrates an architectural view of an embodiment of the invention.
Figure 98 illustrates an annotated view of the session-manager window used to define the set of images to work on or to assign work on.
Figure 99 illustrates a view of the production display, which shows projects, shots, and the tasks related to a selected shot, together with the state of each task context associated with the shot.
Figure 100 illustrates a view of the actuals associated with a shot in a project for each task context associated with the shot, wherein under-bid task actuals are shown in a first manner, tasks within a predefined percentage of the bid amount are shown in a second manner, and over-bid tasks are shown in a third manner.
Figure 101 illustrates the amount of disk space that can be reclaimed, saving disk-drive expense, by deleting, for example after a project completes, files that can be rebuilt from other files.
Figure 102 illustrates a view of the artist display, which shows task context, project, shot, state, tool, a start-time button, and check-in, render, internal shot-review, meals, start/time/stop, review and submit inputs.
Figure 103 illustrates an annotated view of the menu bar of the artist display.
Figure 104 illustrates an annotated view of a task row of the artist display.
Figure 105 illustrates an annotated view of the main portion of the user interface of the artist display.
Figure 106 illustrates the build-timeline display of the artist display, used to create a timeline to work on.
Figure 107 illustrates the browse-snapshot display of the artist display, which lets the artist view snapshots of a shot, or otherwise caches important shot-related information so that frequently used data need not be requested from the database on demand.
Figure 108 illustrates the artist actuals window, which shows actuals, such as the time a task has taken relative to the time allocated to the task, with a drop-down menu for specific timesheets.
Figure 109 illustrates the notes display of the artist display, which lets the artist enter notes related to a shot.
Figure 110 illustrates the check-in display of the artist display, which allows work to be checked in after work on a shot completes.
Figure 111 illustrates a view of the editorial display, which shows project, filter inputs, timeline input and search results, and shows shots together with their work contexts and assignees in the main portion of the window.
Figure 112 illustrates a view of the session-manager display of the editorial display, used to select shots to review.
Figure 113 illustrates a view of the advanced-search display of the editorial display.
Figure 114 illustrates a view of the simple-search display of the editorial display.
Figure 115 illustrates a view of the review pane for a shot, which also shows integrated notes and/or snapshot information in the same frame.
Figure 116 illustrates a view of the timeline selection for review and/or check-in after revision.
Figure 117 illustrates annotations added to a frame for feedback using the annotation tools.
Embodiments
Figure 97 illustrates an architectural view of an embodiment of the invention. One or more embodiments of the system comprise a computer 9702 and a database 9701 coupled to the computer 9702. Any computer architecture, for example any number of computers coupled via a computer communication network, is in keeping with the spirit of the invention. The database 9701 coupled to the computer 9702 comprises at least a project table, a shot table, a task table and a timesheet table. The project table generally includes a project identifier and a project description related to a motion picture. The shot table generally includes a shot identifier and references a plurality of images having a start-frame value and an end-frame value, wherein the plurality of images is associated with the motion picture associated with the project. The shot table generally includes at least one shot having a state related to the progress of work performed on the shot. The task table generally references the project using the project identifier also found in the project table. The task table generally includes at least one task, which generally includes a task identifier and an assignee, e.g. an artist, and may also include a context associated with a type of task related to motion picture work, selected from, for example, region design, set-up, motion, compositing and review (or any other set of motion-picture-related task types). The context settings may also imply, or have, a default workflow; for example, region design may flow into depth work for a complex shot. This enables the system to assign the next task type, or context, in which work on the shot will be performed. This flow may be linear, or may iterate, e.g. for rework. The at least one task generally includes a time allocated to it. The timesheet-entry table generally references the project identifier in the project table and the task identifier in the task table. The timesheet table generally includes at least one timesheet entry comprising a start time and an end time. In one or more embodiments, the context of a task may be set to the next task to complete in a workflow sequence, and the system may automatically notify the next worker in the workflow based on the next work context to be performed; workers may work under different contexts as described earlier. In one or more embodiments, a context may have sub-contexts: depending on the desired workflow for the specific type of work performed on the motion picture project, region design may, for example, break down into masking and outsourced masking, and depth may break down into key-frame and motion contexts.
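For illustration only, the core tables described above can be sketched as simple records. All type and field names here are assumptions, since the source defines the schema only at the level of field lists:

```python
from dataclasses import dataclass

@dataclass
class Project:
    project_id: int
    description: str        # motion-picture-related project description

@dataclass
class Shot:
    shot_id: int
    project_id: int
    start_frame: int
    end_frame: int
    state: str              # progress of work performed on the shot

@dataclass
class Task:
    task_id: int
    project_id: int
    assignee: str           # e.g. an artist
    context: str            # e.g. "region design", "motion", "compositing"
    allocated_hours: float  # time allocated to the task

@dataclass
class TimesheetEntry:
    project_id: int
    task_id: int
    start_time: float
    end_time: float
```

Comparing `TimesheetEntry` durations against `Task.allocated_hours` is what drives the actuals displays described later.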
Embodiments of the database may also comprise a snapshot table, including a snapshot identifier and a search type and including a snapshot of the at least one shot, e.g. a subset of the at least one shot, wherein the snapshot is cached on the computer to reduce accesses to the shot table. Other embodiments of the snapshot table track resources on the network, store information about those resources and track version management of assets. Embodiments may also include context settings for other types of task categories, such as source- and clean-up-related tasks. Any other context setting or value related to motion picture work that keeps within the spirit of the invention may also be included. Embodiments of the database may also comprise an asset-request table, including an asset-request identifier and a shot identifier, which may be utilised to request work on an asset, or to request that the asset itself be worked on or created, e.g. by another worker. Embodiments of the database may also comprise a request table, including a mask-request identifier and a shot identifier, which may be utilised to request an action of any type, e.g. of another worker. Embodiments of the database may also comprise a notes table, including a note identifier, referencing the project identifier, and including at least one note related to at least one image of the plurality of images of the motion picture. Embodiments of the database may also comprise a delivery table, including a delivery identifier, referencing the project identifier, and including information related to delivery of the motion picture.
One or more embodiments of the database may utilise the following schema as specifically programmed into computer 9702, or any other schema capable of supporting the functions of the invention, in any combination or sub-combination of the fields described below, so long as motion picture project management can be performed as detailed herein, or so long as the exemplary fields here are otherwise used to better manage motion picture projects.
Project table
Unique project identifier, project code (text name), motion picture title, project type (test or for hire), date and time of the most recent database update, state (retired or active), latest database revision, project type (colourisation, effects, 2D->3D conversion, feature film, catalogue), supervisor, review drive (where review shots are stored).
Task table
Unique task identifier, assignee, description (what is to be done), state (accepted, waiting, complete, returned, approved), bid start date, bid end date, bid duration, actual start date, actual end date, priority, context (stacking, assets, motion, motion visual effects, outsourcing, clean-up, alpha generation, compositing, masking, clean plate, set-up, key frame, quality control), project code from the project table, supervisor or production or editorial, time taken per process.
Snapshot table
Unique snapshot identifier, search type (which items to search), description (notes related to the shot), login (the worker associated with the snapshot), timestamp, context (set-up, clean-up, motion, compositing, ...), version of the snapshot of the shot, snapshot type (catalogue, file, information, review), project code, review sequence data (where the data is stored on the network), asset name (alpha, mask, ...), snapshot codes used (of the other snapshots used to make this snapshot), check-in path (path to the place from which the data was checked in), tool version, review date, archived, rebuildable (true or false), source deleted, source-deletion date, source-deletion login.
Notes table
Unique note identifier, project code, search type, search id, login (worker id), context (compositing, review, motion, editorial, ...), timestamp, notes (text describing the notes associated with the image set defined by the search).
Delivery table
Unique delivery identifier, login (worker's), timestamp, state (retired or not), delivery method (how it is delivered), description of the type of medium used for the project, returned (true or false), drive (serial number of the drive), case (serial number of the case), delivery date, project identifier, client (text name of the client), producer (producer's name).
Delivery-item table
Unique delivery-item identifier, timestamp, delivery code, project identifier, file path (where the delivery items are stored).
Timesheet table
Unique timesheet identifier, login (worker), timestamp, total time, timesheet approval, start time, end time, meal 1 (start time of a half-hour break), meal 2 (start time of a half-hour break), state (pending or approved).
Timesheet-entry table
Unique timesheet-entry identifier, login (worker), timestamp, context (region design, compositing, rendering, motion, management, mask clean-up, training, clean-up, administration), project identifier, timesheet identifier, start time, end time, state (pending or approved), approval (worker), task identifier.
Sequence table
Unique sequence identifier, login (the worker defining the sequence), timestamp, shot order (the shots forming the sequence).
Shot table
Unique shot identifier, login (the worker defining the shot), timestamp, shot state (in progress, final, final client approval), client state (compositing in progress, depth client review, compositing client review, final), description (a text description of the shot, e.g. "2 airplanes fly near each other"), first frame number, last frame number, number of frames, assignee, region design, depth target date, assigned depth worker, compositing supervisor, compositing lead, compositing target date.
Asset-request table
Unique asset identifier, timestamp, assigned asset worker, state (pending or resolved), shot identifier, description of the problem, production worker, assigned lead, priority, due date.
Mask-request table
Unique mask-request identifier, login (the worker making the mask request), timestamp, depth artist, depth lead, depth coordinator or production worker, masking problem, mask (the version having the problem related to the mask request), source used, due date, rework notes.
In one or more embodiments of the invention, the computer is generally configured to present a session manager for selecting a series of images, or a shot, to work on, to assign tasks for, or to review. The computer is generally configured to present a first display, configured to be viewed by production, that includes a search display having context, project, shot, state and artist, and wherein a second display further includes a list of artists and, based on the time spent in the at least one timesheet entry relative to the time allocated to the at least one task associated with the at least one shot, the corresponding actuals shown according to their states.
Figure 98 illustrates an annotated view of the session-manager window used to define a set of images, e.g. to work on, to assign work on, or to review. The computer 9702 accepts input for the project and the sequence (e.g. a motion picture, or a trailer utilising shots in a particular sequence), together with the respective offsets for each shot and the mask version, and optionally, for example, downloads the images to the local computer for local processing. Each field in the figure is annotated with further detail.
Figure 99 illustrates a view of the production display, which shows projects, shots, and the tasks related to a selected shot, together with the state of each task context associated with the shot. As shown on the left, a shot forming part of the project can be selected, which redirects the main portion of the window to display information related to the selected shot, including "shot info", "assets" used in the shot, selection of frames to view, and notes, task information, actuals, check-in and data-integrity tabs. As shown in the main portion of the window, several task contexts are shown together with their associated states, assignees, etc. Production utilises this display, and the computer accepts input from a production worker (e.g. a user) via this display to set up tasks for artists together with allocated times. A thumbnail view of the shot is shown at the lower left of the display to give the production worker a view of the motion picture shot to which the task and context settings relate. One potential purpose of the production role is to assign tasks and review states and actuals; one potential purpose of the artist role is to use a tool set to manipulate images; and one potential purpose of the reviewer role is to have higher-resolution imagery, with integrated metadata related to the shot and its state, for review. In other words, the displays in the system are role-customised: the information of primary importance to each role in the motion picture creation/conversion process is integrated into that role's display.
Figure 100 illustrates a view of the actuals associated with a shot in a project for each task context associated with the shot, wherein under-bid task actuals are shown in a first manner, tasks within a predefined percentage of the bid amount are shown in a second manner, and over-bid tasks are shown in a third manner.
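One plausible reading of the three display manners for task actuals is a simple banding of time spent against the bid. The tolerance value and function name below are assumptions, not specified by the source:

```python
def actuals_band(spent_hours: float, bid_hours: float,
                 tolerance: float = 0.10) -> str:
    """Classify a task's actuals against its bid: under bid, within a
    predefined percentage of the bid, or over bid (three display manners)."""
    if spent_hours < bid_hours * (1 - tolerance):
        return "under"    # shown in a first manner
    if spent_hours <= bid_hours * (1 + tolerance):
        return "within"   # shown in a second manner
    return "over"         # shown in a third manner
```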
Figure 101 illustrates the amount of disk space that can be reclaimed, saving disk-drive expense, by deleting, for example after a project completes, files that can be rebuilt from other files. As shown in the figure, the rebuildable amount for a project that is only partially complete can easily be in the terabyte range. By safely deleting rebuildable assets after the project completes, and compressing other assets, an enormous amount of disk-drive space can be saved. In one or more embodiments, the computer accesses the database and determines which resources depend on other resources, whether they can be compressed, and at what compression ratio, which can be calculated in advance and/or based on, for example, the typical compression achieved on other projects. The computer then calculates the total storage and the amount of storage that can be freed through compression and/or resource deletion, and displays this information, for example on a computer display.
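The storage calculation described above can be sketched as summing full sizes for rebuildable assets (safe to delete) and compression savings for the rest. The tuple layout and ratio semantics below are assumptions for illustration:

```python
def reclaimable_bytes(assets) -> int:
    """Sum the disk space that could be freed after a project completes.

    Each asset is (size_bytes, rebuildable, est_compression_ratio), where
    the ratio is compressed/original size for non-rebuildable assets."""
    freed = 0
    for size, rebuildable, ratio in assets:
        if rebuildable:
            freed += size                      # safe to delete entirely
        else:
            freed += int(size * (1 - ratio))   # savings from compression
    return freed
```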
The computer is also generally configured to present a second display, configured to be viewed by an artist, that includes at least one daily assignment having context, project and shot, a state input configured to update the state in the task table, and a timer input configured to update the start time and end time in the timesheet-entry table.
Figure 102 illustrates a view of the artist display, which shows task context, project, shot, state, tool, a start-time button, and check-in, render, internal shot-review, meals, start/time/stop, review and submit inputs. This allows motion-picture-related tasks to be viewed and updated as work on the project proceeds, giving production a motion-picture-specific project-management view of state for a special-effects film project and/or a conversion project.
Figure 103 illustrates an annotated view of the menu bar of the artist display. This menu bar is shown at the upper left of the display in Figure 102.
Figure 104 illustrates an annotated view of a task row of the artist display. This annotated view shows only one row; however, multiple rows may be shown, as in Figure 102.
Figure 105 illustrates an annotated view of the main portion of the user interface of the artist display of Figure 102.
Figure 106 illustrates the build-timeline display of the artist display, used to create a timeline to work on.
Figure 107 illustrates the browse-snapshot display of the artist display, which lets the artist view snapshots of a shot, or otherwise caches important shot-related information so that frequently used data need not be requested from the database on demand. A snapshot tracks the location of each of the different files associated with a shot, and tracks other information related to the work product of the shot, i.e. sources, masks, resolutions and file types. In addition, snapshots track the version management of each of the different files and file types, and optionally the version of the tool used to work on each of the different files.
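The snapshot mechanism described above is essentially a local cache in front of the database. A minimal sketch, with the fetch callable standing in for an actual database query (class and method names are illustrative):

```python
class SnapshotCache:
    """Cache shot-related snapshot records locally so that frequently
    used data need not be fetched from the database on every request."""

    def __init__(self, fetch):
        self._fetch = fetch   # callable that hits the database
        self._cache = {}

    def get(self, shot_id):
        if shot_id not in self._cache:
            self._cache[shot_id] = self._fetch(shot_id)  # one DB hit
        return self._cache[shot_id]                      # cached thereafter
```

A real implementation would also invalidate entries when a new snapshot version is checked in.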
Figure 108 illustrates the artist actuals window, which shows actuals, such as the time a task has taken relative to the time allocated to the task, with a drop-down menu for specific timesheets.
Figure 109 illustrates the notes display of the artist display, which lets the artist enter notes related to a shot.
Figure 110 illustrates the check-in display of the artist display, which allows work to be checked in after work on a shot completes.
The computer is generally also configured to present a third display, configured to be viewed by an editor (i.e. editorial), that includes an annotation frame configured to accept comments, drawings, or both, regarding at least one image of the plurality of images associated with the at least one shot. One or more embodiments of the computer may be configured to provide the third display with the annotation overlaid on at least one image of the plurality of images. This capability provides, on one display, information that would typically require three workers to integrate in known systems, and is itself novel.
Figure 111 illustrates a view of the editorial display, which shows project, filter inputs, timeline input and search results, and shows shots together with their work contexts and assignees in the main portion of the window.
Figure 112 illustrates a view of the session-manager display of the editorial display, used to select shots to review.
Figure 113 illustrates a view of the advanced-search display of the editorial display.
Figure 114 illustrates a view of the simple-search display of the editorial display.
Figure 115 illustrates a view of the review pane for a shot, which also shows integrated notes and/or snapshot information in the same frame. Creating this view would previously typically have required three workers; the integrated view saves considerable time and greatly accelerates the review process. Any type of information can be overlaid on the image, allowing an integrated view of the image and its related data on one display.
Figure 116 illustrates a view of the timeline selection for review and/or check-in after revision.
Figure 117 illustrates annotations added to a frame for feedback using the annotation tools.
One or more embodiments of the computer are configured to accept rating inputs from production or editorial staff based on the work performed by an artist, optionally in a blind review mode in which the reviewer does not know the artist's identity, for example to prevent favoritism. One or more embodiments of the computer are configured to accept a difficulty for the at least one shot and to calculate a rating based on the work performed by the artist, the difficulty of the shot, and the time spent on the shot. One or more embodiments of the computer are configured to accept a rating input from production or editorial based on the work performed by the artist, or to accept the difficulty of the at least one shot and calculate a rating based on the work performed by the artist, the difficulty of the shot, and the time spent on the shot, and to display an incentive for the artist based on the accepted or computer-calculated rating. One or more embodiments of the computer are configured to estimate the remaining cost based on actuals, where the actuals are the total time spent on all tasks associated with at least one shot of all shots in the project relative to the time bid for all tasks associated with at least one shot of all shots in the project. One or more embodiments of the computer are configured to compare the actuals associated with a first project to the actuals associated with a second project, and to indicate that at least one worker should be reassigned from the first project to the second project based on at least one rating assigned to a first worker on the first project. One or more embodiments of the computer are configured to analyze a prospective project having a number of shots, estimate the difficulty of each shot, and calculate a projected cost for the prospective project based on the actuals associated with a project. One or more embodiments of the computer are configured to analyze a prospective project having a number of shots, estimate the difficulty of each shot, calculate the derivative of the actuals based on the actuals associated with a previously performed first project and with a previously performed second project completed after the first, and calculate a projected cost for the prospective project based on the derivative of the actuals. For example, as processes improve, tools improve, and workers improve, work efficiency increases; by calculating the relationship of efficiency over time, budgeting and bidding processes can take this into account and use this rate of change to forecast the cost of a prospective project. One or more embodiments of the computer are configured to analyze the actuals associated with the project and, by dividing the shots completed by the total shots associated with the project, provide a completion date for the project. One or more embodiments of the computer are configured to analyze the actuals associated with the project and, by dividing the shots completed by the total shots associated with the project, provide a completion date for the project, accept an input of at least one additional artist having a rating, accept a number of shots on which the additional artist is to be used, calculate a time saving based on the at least one additional artist and the number of shots, subtract this time saving from the completion date of the project, and provide an updated completion date for the project. One or more embodiments of the computer are configured to calculate the amount of disk space that would be used to archive a project, and to indicate that at least one asset can be rebuilt from other assets so that archiving of that at least one asset can be avoided. One or more embodiments of the computer are configured to display an error message when an artist works on a frame number that is not within the at least one shot. This may occur, for example, when a fade-in, fade-out, or other effect lengthens a shot, so that the shot contains frames not present in the original source assets.
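The completion-date arithmetic described above can be sketched as follows. This is a minimal illustration under assumed formulas (the function names, the throughput model, and the rating-proportional time saving are all assumptions, not the patent's actual implementation):

```python
# Illustrative sketch (names and formulas are assumptions, not the patent's
# actual implementation): estimate a project completion date from progress
# (shots completed vs. total) and the time saved by an additional artist.

def remaining_shots(completed, total):
    return total - completed

def completion_days(completed, total, shots_per_day):
    """Days left at the observed throughput (shots completed per day)."""
    return remaining_shots(completed, total) / shots_per_day

def updated_completion_days(completed, total, shots_per_day,
                            extra_artist_rating, extra_shots):
    """Assumed model: a rated additional artist saves time in proportion
    to the rating on the shots assigned to that artist."""
    base = completion_days(completed, total, shots_per_day)
    saved = (extra_shots / shots_per_day) * extra_artist_rating
    return max(base - saved, 0.0)

if __name__ == "__main__":
    # 400 of 1000 shots done; the team completes 10 shots per day.
    print(completion_days(400, 1000, 10))                    # 60.0 days
    # One additional artist (rating 0.5) takes on 100 shots.
    print(updated_completion_days(400, 1000, 10, 0.5, 100))  # 55.0 days
```

The same structure would extend to the cost-derivative forecast: fit the change in actuals between two completed projects and extrapolate it onto a prospective project's estimated per-shot difficulties.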
Summary of each of the different motion picture workflows
Data preparation for feature films and TV series enhanced for colorization/depth: a feature film is telecined, or transferred from 35mm or 16mm film to HDTV (1920x1080 24P) using a high-resolution scanner such as a 10-bit SPIRIT DataCine or similar device, or the film is digitized in a larger format at 2000 to 4000 lines and up to 16 bits of grayscale by a laser film scanner such as those manufactured in the U.S. The higher-resolution frame files are then converted into standard digital files, typically in 16-bit or 8-bit three-channel linear format, such as uncompressed TIF files or uncompressed TGA files. If the source data is HDTV, the 10-bit HDTV frame files are converted into 16-bit-per-channel or 8-bit-per-channel uncompressed files such as TIF or TGA. The pixels of each frame are then averaged, merging the three channels to create a single 16-bit or 8-bit channel respectively. Any other scanning technique capable of converting existing film to a digital format can be used. Currently, many films are generated entirely in digital format, and can therefore be used without scanning film. For digital movies having associated metadata, such as movies with computer-generated characters, backgrounds, or any other elements, this metadata can be imported, for example to obtain the alpha and/or mask and/or depth for the computer-generated elements on a per-pixel or per-sub-pixel basis. One file format that includes alpha/mask and depth data is the RGBAZ file format, one implementation of which is the EXR file format.
A telecine of 35mm or 16mm negative or positive film, forming resolution-independent individual color elements, is digitized in a high-resolution film scanner at various resolutions and bit depths, for example using SPIRIT and EASTMAN equipment, which delivers, for example, 525 or 625 formats, HDTV, progressive (HDTV) 1280x720/60Hz, 2K, and DTV (ATSC) formats such as progressive 1920x1080/24Hz/25Hz, segmented-frame 1920x1080/48Hz/50Hz, or 1920x1080 50i. The invention provides an improved method for conforming film into a motion picture. Visual images are transferred from the developed motion picture film to a high-definition video storage medium, a storage medium adapted to store images and to display images in conjunction with display equipment, having a scanning density much greater than that of NTSC-compatible video storage media and their associated display equipment. The visual images are also transferred from the motion picture film or the high-definition video storage medium to a digital data storage format adapted for use with digital nonlinear motion picture editing equipment. After the visual images have been transferred to the high-definition video storage medium, the digital nonlinear motion picture editing equipment is used to generate an edit decision list, to which the motion picture film is then conformed. The high-definition video storage medium is typically adapted to store and display visual images having at least 1080 lines of horizontal scanning density. Electronic or optical transformation may be used to allow use of visual aspect ratios that make full use of the storage formats used in the method. The digitized film data is input, for example from one of the numerous film formats transferred to, for example, HDTV, into a converting system such as an HDTV STILL store manufactured by a digital technology company. Such large-scale digital buffers and data converters can convert digital pictures to all standard formats, such as 1080i HDTV, 720p, and 1080p/24. An asset management system server provides powerful local and server backup and archiving to standard SCSI devices, C2-level security, streamlined menu selection, and multiple-criteria database searches.
During the process of digitizing images from motion picture film, the mechanical positioning of the film frames in the telecine suffers from an inaccuracy known as "film weave" that cannot be completely eliminated. However, various film registration and ironing or flattening gate assemblies can be used, such as the assembly implemented in U.S. Patent No. 5,328,073 (Film Registration and Ironing Gate Assembly), which relates to a focal positioning gate having a positioning location or aperture for an image frame of edge-perforated strip film. First and second undersized pins enter a laterally aligned pair of perforations in the film to register the image frame with the aperture. A third undersized pin enters a third perforation spaced along the film from the second pin, and the film is then drawn obliquely toward a reference line extending between the first and second pins so that the first and second pins seat in their perforations there, precisely registering the image frame at the positioning location or aperture. A pair of flexible bands extending along the film edges adjacent the positioning location moves progressively, incrementally increasing contact with the film to iron it and to clamp its perforations against the gate. The pins precisely register the image frame with the positioning location, and the bands maintain the image frame in a precise focal position. The image capture can be further enhanced by precise mechanical positioning methods such as the method implemented in U.S. Patent No. 4,903,131 (Method for the Automatic Correction of Errors in Image Registration During Film Scanning).
In order to remove or reduce the random structure known as grain superimposed on the image in exposed feature films, as well as scratches, dust particles, or other debris that obscure the transmitted light, various algorithms are used, such as those implemented in U.S. Patent No. 6,067,125 (Structure and Method for Film Grain Noise Reduction) and U.S. Patent No. 5,784,176 (Method of Image Noise Reduction Processing).
Creating the prepared film elements as a visible database in preparation for reverse editing:
The digital movie is broken down into scenes and cuts. The entire film is then processed sequentially to automatically detect scene changes, including fades, wipes, and cuts. These changes are further broken down into camera pans, camera zooms, and static scenes representing little or no motion. All of the databases described above will be referenced to an edit decision list (EDL) input into the database based on standard SMPTE time code or another suitable sequential naming convention. A large number of techniques exist for detecting both dramatic and subtle transitions in movie content, such as:
US 5,959,697 09/28/1999 Method and system for detecting fade transitions in a video signal
US 5,920,360 07/06/1999 Method and system for detecting dissolve (fade-in/fade-out) transitions in a video signal
US 5,841,512 11/24/1998 Methods of previewing and editing motion pictures
US 5,835,163 11/10/1998 Apparatus for detecting a cut in a video
US 5,767,923 06/16/1998 Method and system for detecting cuts in a video signal
US 5,778,108 07/06/1996 Method and system for detecting transitional markers such as uniform fields in a video signal
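The automatic cut detection referred to above can be illustrated with a small sketch. This is a hedged toy example, not any of the patented methods listed: it compares luminance histograms of consecutive frames, flagging a hard cut where the histograms jump sharply (the bin count and threshold are assumptions):

```python
# Hedged sketch (histogram metric and threshold are assumptions, not any of
# the patented methods listed above): detect hard cuts by comparing the
# luminance histograms of consecutive frames; a large jump marks a cut.

def histogram(frame, bins=4, max_val=256):
    """Coarse luminance histogram of a frame (flat list of 0-255 pixels)."""
    h = [0] * bins
    for px in frame:
        h[px * bins // max_val] += 1
    return h

def detect_cuts(frames, threshold=0.5):
    """Return indices i where a cut occurs between frames i-1 and i."""
    cuts = []
    for i in range(1, len(frames)):
        h1, h2 = histogram(frames[i - 1]), histogram(frames[i])
        diff = sum(abs(a - b) for a, b in zip(h1, h2))
        if diff / len(frames[i]) > threshold:
            cuts.append(i)
    return cuts

if __name__ == "__main__":
    dark = [10] * 8       # frames from a dark shot
    bright = [240] * 8    # frames from a bright shot
    print(detect_cuts([dark, dark, bright, bright]))  # [2]
```

Dissolves and fades would require a temporal model (a sustained, gradual histogram drift) rather than a single-step difference, which is why the listed patents treat them separately from hard cuts.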
All groups of cuts that represent the same content, such as cuts in which the camera appears to alternate between two speakers in a dialogue between two or more people, are combined into a single file entry for later batch processing.
The operator visually checks all database entries to ensure that:
1. scenes are broken down by camera movement
2. cuts are merged into single batch elements where appropriate
3. motion is broken down into simple and complex movement according to occluding elements, the number of moving objects, and lens quality (e.g., softness of elements, etc.).
Pre-production---scene analysis and scene breakdown for reference frame identification and creation of the database:
Files are numbered using sequential SMPTE time code or another sequential naming convention. Using After Effects or a similar program, the image files are edited together to DVD at 24 frames per second (without the 3/2 pulldown fields relevant to the NTSC 30 frame/second video standard) to create a running video with the audio of the feature film or TV series. This is used to assist scene analysis and scene breakdown.
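The sequential numbering scheme above can be sketched as a simple conversion between absolute frame counts and 24 fps SMPTE time code (a minimal illustration under the stated 24 fps, no-pulldown assumption; the function names are invented):

```python
# Minimal sketch (an illustration, not the patent's tooling): sequential
# frame numbers mapped to and from 24 fps SMPTE time code, as used to
# reference scenes and cuts in the database.

FPS = 24  # 24 frames/second, no 3/2 pulldown

def frame_to_smpte(frame):
    """Convert an absolute frame count to HH:MM:SS:FF time code."""
    ff = frame % FPS
    total_seconds = frame // FPS
    ss = total_seconds % 60
    mm = (total_seconds // 60) % 60
    hh = total_seconds // 3600
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

def smpte_to_frame(tc):
    """Inverse mapping: HH:MM:SS:FF time code to absolute frame count."""
    hh, mm, ss, ff = (int(p) for p in tc.split(":"))
    return ((hh * 3600 + mm * 60 + ss) * FPS) + ff

if __name__ == "__main__":
    print(frame_to_smpte(0))              # 00:00:00:00
    print(frame_to_smpte(1441))           # 00:01:00:01
    print(smpte_to_frame("00:01:00:01"))  # 1441
```

Because the scheme is strictly sequential at 24 fps, the mapping is exactly invertible, which is what makes time code usable as a database key for scenes and cuts.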
Scene and cut breakdown:
1. A database allows input of scene, cut, design, and key frame time-code-critical data and descriptors for each scene and cut.
2. Each scene cut is identified with respect to camera technique. Time codes are used for pans, zooms, static backgrounds, static backgrounds with unsteady or drifting cameras, and unusual camera cuts requiring special attention.
3. Designers and assistant designers research the feature film for color clues and color references for colorization projects, or for depth cues for depth projects, typically studying the film against objects of standard size. The research provides for color/depth accuracy where available. The Internet can be used, for example, to determine the color of particular items or the size of particular items. For depth projects, knowing the size of an object allows, for example, the depth of items in a scene to be calculated. For depth projects relevant to converting a two-dimensional movie into a three-dimensional movie, where depth metadata is available for computer-generated elements in the film, the depth metadata can be scaled or translated or otherwise normalized to the coordinate system or units used, for example, for background and moving elements.
4. A single frame is selected from each scene to serve as a design frame. These frames are color designed, or the metadata for the depth of elements and/or masks and/or alpha of computer-generated elements is imported, or depth is assigned (see Figures 42-70) to the background and moving elements in the frame to represent the overall look of the feature film. For a feature film, approximately 80-100 design frames are typical.
5. In addition, a single frame from each cut of the feature film, called a key frame, is selected that includes all elements within each cut requiring color/depth consideration. There may be up to 1000 key frames. These frames will include all of the color/depth transform information necessary to apply color/depth correctly to all subsequent frames in each cut without additional color selection.
Color/depth selection:
Historical references, studio archives, and film analysis provide color references to the designer. Using an input device such as a mouse, the designer masks features comprising multiple pixels in the selected single frames and assigns color to them using an HSL color space model, based on creative intent and on the grayscale and luminance distribution underlying each mask. One or more base colors are selected for the image data underlying each mask and applied to the particular luminance pattern attributes of the selected image feature. Each selected color is applied to a particular feature in the luminance pattern of the entire masked object or objects based on the unique grayscale values of the feature underlying the mask.
A look-up table or color transform for the unique luminance pattern of the object or feature is thereby created, representing the colors applied to the object versus its luminance values. Because the colors applied to a feature span the full range of potential grayscale values from dark to light, the designer can ensure that the color for each feature remains consistent and uniform even when, for example with the introduction of shadows or light, the grayscale values of a midtone pattern shift evenly into the dark or light regions in subsequent frames of the film, correctly lightening or darkening the pattern to which the color is applied.
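The luminance-to-color look-up table described above can be sketched as follows. This is a hedged toy example (the HSL encoding, data, and mapping are invented for illustration): one LUT entry per gray value, holding hue and saturation fixed while lightness follows the underlying luminance, so the same feature color lightens and darkens with the pattern:

```python
# Hedged sketch (data and mapping are invented for illustration): a
# per-feature look-up table mapping underlying grayscale luminance (0-255)
# to an HSL color, so one hue lightens or darkens with the luminance.

def build_lut(hue, saturation):
    """One LUT entry per luminance value: fixed hue/saturation, with a
    lightness that follows the underlying gray level."""
    return [(hue, saturation, lum / 255.0) for lum in range(256)]

def apply_lut(lut, gray_pixels):
    """Colorize masked pixels by looking up their gray values."""
    return [lut[g] for g in gray_pixels]

if __name__ == "__main__":
    skin_lut = build_lut(hue=20, saturation=0.4)
    colored = apply_lut(skin_lut, [64, 128, 192])
    print(colored[0])  # same hue and saturation, dark lightness
```

Because the table covers the full 0-255 range, a feature that falls into shadow in a later frame simply indexes lower entries of the same table, which is the consistency property the passage describes.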
Depth can be imported for computer-generated objects where metadata exists, and/or embodiments of the invention can be used to assign depth to objects, adjusted using an input device such as a mouse, including contour depths, for example assigning a geometric shape such as an ellipsoid to a face. This allows objects to appear natural when converted into three-dimensional images. For computer-generated elements, the imported depth and/or alpha and/or mask shape can be adjusted if desired. Assigning a fixed distance to a foreground object tends to make the object appear as a cutout, i.e., flat. See also Figures 42-70.
Propagating mask color transform/depth information from one frame to a series of subsequent frames:
The masks representing the color transform/depth assignments of the design selections in a single design frame are then copied to all subsequent frames in the series of motion picture frames by one or more methods, such as automatically fitting Bezier curves to edges; automatic mask fitting based on fast Fourier transforms and gradient descent algorithms tied to the luminance patterns in subsequent frames relative to the design frame or the immediately preceding frame; masking multiple successive frames by painting over an object in only one frame; automatically fitting vector points to edges; and copying and pasting individual masks or multiple masks into selected subsequent frames. In addition, depth information can be "tweened" to account for forward/backward camera capture position or zoom. For computer-generated elements, the alpha and/or mask data are generally correct and can be skipped in the reshaping process, because the metadata associated with computer-generated elements is obtained digitally from the original model of the object and therefore generally requires no adjustment. (For setting mask fit locations to the borders of CG elements, potentially skipping the extensive processing of fitting masks in subsequent frames to reshape edges into alignment with photographic elements, see Figure 37, step C 3710.) Alternatively, computer-generated elements can be warped or reshaped to provide special effects not originally present in the film scene.
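The luminance-pattern-based mask fitting described above can be illustrated with a toy stand-in (this is not the patented FFT/gradient-descent algorithm; the exhaustive small-offset search below merely shows the idea of relocating a mask to the best luminance match in the next frame):

```python
# Illustrative sketch (a toy stand-in, not the patented fitting algorithm):
# propagate a mask to the next frame by finding the small offset whose
# luminance pattern best matches the pattern under the mask.

def patch_error(frame, ref_patch, top, left):
    """Sum of absolute luminance differences between a reference patch
    and the same-sized window of the next frame at (top, left)."""
    err = 0
    for r, row in enumerate(ref_patch):
        for c, val in enumerate(row):
            err += abs(frame[top + r][left + c] - val)
    return err

def fit_mask_offset(next_frame, ref_patch, search=2):
    """Exhaustive search over offsets in [-search, +search]; returns the
    best (dy, dx) for relocating the mask."""
    best, best_err = (0, 0), patch_error(next_frame, ref_patch, search, search)
    for dy in range(2 * search + 1):
        for dx in range(2 * search + 1):
            e = patch_error(next_frame, ref_patch, dy, dx)
            if e < best_err:
                best_err, best = e, (dy - search, dx - search)
    return best

if __name__ == "__main__":
    # A bright 2x2 feature shifted one pixel right between frames.
    ref = [[200, 200], [200, 200]]
    nxt = [[0, 0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0, 0],
           [0, 0, 0, 200, 200, 0],
           [0, 0, 0, 200, 200, 0],
           [0, 0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0, 0]]
    print(fit_mask_offset(nxt, ref))  # (0, 1): mask moves one pixel right
```

A gradient-descent variant would follow the error slope instead of testing every offset, and an FFT-based variant would compute the same correlation surface in the frequency domain; both converge on the same best-fit location.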
Single-frame set design and colorization:
In an embodiment of the present invention, camera motion is consolidated, and separated from the moving elements in each scene, by creating a montage or composite image of the background from a series of successive frames into a single frame comprising all background elements of each scene and cut. The resulting single frame becomes a representation of the entire common background of numerous frames in the film, creating a visible database of all elements in those frames together with the camera offset information.
In this manner, a single-frame montage can be used to design and apply color/depth enhancement to most of the set background in one pass. Each montage is masked without regard to the foreground moving objects, which are masked separately. The montage background masks are then automatically extracted from the single background montage image and applied to the subsequent frames used to create the single montage, using all of the offsets stored with the image data to align the masks correctly with each subsequent frame.
There is a basic formula in filmmaking that varies little within and between feature films (except for films employing a large number of hand-held or stabilized shots). Scenes are composed of cuts, which are combinations of standard camera moves (i.e., pans, zooms) and static or locked-off camera angles, and stops of those moves. A cut is either a single take, or a combination of cutaways (cut-a-ways), as where a dialogue between two individuals repeatedly returns to a particular camera shot. Such cutaways can be treated as a single scene sequence or a simple collection of cuts and consolidated in one image-processing pass.
Pans can be consolidated into a single-frame visible database using special panorama stitching techniques, but without lens compensation. Each frame in a pan involves:
1. some information lost on one side, top, and/or bottom of the frame,
2. information common to most immediately preceding and following frames, and
3. new information on the other side, top, and/or bottom of the frame.
By stitching these frames together based on the common elements in successive frames, thereby forming a panorama of the background elements, a visible database is created that has all of the pixel offsets referenced across the entire set of successive frames, allowing a single mask overlay to be applied to all of them.
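The stitching and offset bookkeeping above can be sketched in one dimension (a toy model under stated assumptions: frames are 1-D pixel rows and the per-frame offsets are already known, whereas a real stitcher would estimate them from the common pixels):

```python
# Toy sketch (assumed 1-D model, not the patent's stitcher): successive pan
# frames share common pixels at known offsets; compositing them yields one
# background "panorama", and the stored offsets map a mask painted once on
# the panorama back onto every source frame.

def stitch(frames, offsets):
    """frames: list of 1-D pixel rows; offsets[i]: shift of frame i
    relative to frame 0. Returns the composite panorama row."""
    width = max(off + len(f) for f, off in zip(frames, offsets))
    pano = [None] * width
    for f, off in zip(frames, offsets):
        for i, px in enumerate(f):
            pano[off + i] = px  # common pixels overwrite with equal values
    return pano

def mask_for_frame(pano_mask, offset, frame_len):
    """Project a mask painted once on the panorama back onto one frame."""
    return pano_mask[offset:offset + frame_len]

if __name__ == "__main__":
    frames = [[1, 2, 3, 4], [3, 4, 5, 6], [5, 6, 7, 8]]
    offsets = [0, 2, 4]
    pano = stitch(frames, offsets)
    print(pano)                        # [1, 2, 3, 4, 5, 6, 7, 8]
    mask = [0, 0, 1, 1, 1, 0, 0, 0]   # painted once on the panorama
    print(mask_for_frame(mask, 2, 4))  # [1, 1, 1, 0] for frame 1
```

The key property is the second function: because every panorama pixel records which frame offsets reference it, one masking pass on the composite resolves to a correctly aligned mask in every original frame.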
Creation of the visible database:
Because each pixel in the single-frame visible database of the background corresponds to a proper address in the corresponding "original" (consolidated) frames from which it was created, any masking operation determined by the designer, and the corresponding look-up table designation applied to the visible database, will be correctly applied to the proper addresses of the corresponding masked pixels in the original film frames used to create the single-frame composite.
In this manner, each scene and each set of cuts is represented by a single frame (the visible database) in which each pixel has a single or multiple representation in the series of original frames from which it was derived. Each masking of the single visible database frame creates a region represented by a 1-bit mask with an appropriate look-up table, the look-up table corresponding to the common or unique pixel addresses in the successive frames from which the single composite frame was created. The masked pixels defined by these addresses are applied to the full-resolution frames, where the overall masking is automatically checked and adjusted where necessary using feature, edge detection, and pattern recognition routines. Where adjustment is needed, i.e., where the applied mask region edges do not correspond to most of the edge features in the grayscale image, a "red flag" exception notice signals the operator that frame-by-frame adjustment may be necessary.
Single-frame representation of motion across multiple frames:
Difference algorithms for detecting moving objects can usually distinguish the sharp pixel-region changes from frame to frame that represent a moving object. Where a shadow cast by a moving object onto the background might be confused with the moving object itself, the resulting mask will be assigned to a default alpha layer that makes that part of the moving object mask transparent. In some cases an operator, using one or more vector or paint tools, will designate the delineation between the moving object and its shadow. In most cases, however, the shadow will be detected as a surface relative to the two key moving objects. In the present invention, shadows are handled by the background look-up table, which automatically adjusts the color intensity values along the spectrum determined by the light-to-dark grayscale values in the image.
The action in each frame is separated via machine vision techniques that model objects and their behavior, including (directional and velocity) difference methods where the action occurs within a pan, or frame-to-frame subtraction techniques.
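The frame-to-frame subtraction technique named above can be sketched minimally (the threshold value and sample data are assumptions for illustration):

```python
# Minimal sketch (threshold and data are assumptions): frame-to-frame
# subtraction flags the pixels that changed sharply between two frames,
# i.e. the candidate moving-object regions described above.

def moving_pixels(prev, curr, threshold=30):
    """Return a 1-bit mask: 1 where luminance changed by more than the
    threshold between consecutive frames, else 0."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

if __name__ == "__main__":
    prev = [[10, 10, 10, 10],
            [10, 200, 10, 10]]
    curr = [[10, 10, 10, 10],
            [10, 10, 200, 10]]  # bright object moved one pixel right
    print(moving_pixels(prev, curr))
    # [[0, 0, 0, 0], [0, 1, 1, 0]] - both old and new positions flagged
```

Note that plain subtraction flags both the vacated and the newly occupied pixels, which is one reason the passage pairs it with behavior modeling and, for pans, with directional/velocity difference methods that first compensate the camera motion.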
The differenced pixels are then composited into a single frame (or isolated in a tiled mode) representing the numerous frames, allowing the operator to window regions of interest and otherwise direct the computer-controlled image processing operations used for subsequent-frame masking.
As with the set or background montages discussed above, the action occurring across multiple frames in a scene can be represented by a single-frame visible database, in which each unique pixel location undergoes an appropriate 1-bit masking and from which a corresponding look-up table is applied. However, unlike the set or background montages, in which all color/depth is designated in a single-frame pass, the purpose of creating the action composite visible database is to window or otherwise designate each feature or region of interest that is to receive a particular treatment, and to apply region-of-interest vectors from one key frame element to subsequent key frame elements, thereby assisting the operator by having the computer track each region of interest.
During the design phase, masks are applied to the regions of interest designated by the designer, with the moving object appearing as a single instance within the background (i.e., single-frame action appears at the appropriate x,y coordinates within the background or stitched composite background corresponding to the single frame of action from which it is derived). Using an input device such as a mouse, the operator uses the following tools in creating the masked regions of interest. Alternatively, for projects having associated computer-generated element metadata, that metadata can be imported and, if necessary, scaled to the units used for depth in the project. Because those masks are created digitally, they can be assumed to be accurate throughout the scene, and the contours and depths of the computer-generated regions can therefore be ignored for reshaping operations. Elements adjacent to those objects can thus be reshaped more accurately, since the contours of computer-generated elements are taken to be correct. Therefore, even for computer-generated elements having the same underlying gray levels as adjacent moving or background elements, the shape of the mask at the junction can be considered accurate even where no visual difference exists at the junction. Again, for setting mask fit locations to the borders of CG elements, potentially skipping the extensive processing of fitting masks in subsequent frames to reshape edges into alignment with photographic elements, see Figure 37, step C 3710.
1. a combination of edge detection algorithms, such as a standard Laplacian filter, and pattern recognition routines
2. automatic or assisted closing of regions
3. automatic seed fill of selected regions
4. bimodal luminance detection of light or dark regions
5. operator-assisted sliders and other tools for creating a "best fit" distribution corresponding to the dynamic range, pattern, and weighting variables of the underlying luminance values and underlying pixel data
6. creation of a unique determination/identification set, called a detector file, from subsequent analysis of the immediately underlying gray levels, luminance, area, pattern, and multiple weighted characteristics relative to neighboring regions.
The early key frame creation stage---the composited single-frame design motion database described above is presented together with the moving objects of the selected key frames. All motion composites can be viewed in motion within the background, or each successive motion composite can be sequentially turned on and off within the background.
Key frame moving object creation: the operator successively windows all masked regions of interest on the design frame and, with various pointing tools and routines, directs the computer to the corresponding location (region of interest) on the selected key frame moving object in the visible database, thereby reducing the region over which the computer must operate (that is, the operator creates a vector from the design frame moving object to each subsequent key frame moving object, approximately following the centers of the regions of interest represented in the visible database of key frame moving objects. This operator-assisted method limits the detection operations that must be performed by the computer in applying masks to the corresponding regions of interest in the original frames).
In the production stage---the composited key frame moving object database described above is presented together with all subsequent motion of the selected, fully masked key frame moving objects. As noted above, all motion composites can be turned on and off within the background, or turned on and off sequentially to mimic the actual motion. In addition, all masked regions (regions of interest) can be presented in the absence of their corresponding moving objects, in which case the 1-bit color masks are displayed as translucent or opaque arbitrary colors.
During the production process, and under the operator's visual supervision, each region of interest in the subsequent moving object frames between two key moving object frames undergoes a computer masking operation. This masking operation involves a comparison of the masks in the preceding moving object frame with the new or continuing detector file operations in subsequent frames and their underlying parameters (i.e., the mask dimensions, grayscale values, and multiple weighting factors lying along the parameter vector toward the subsequent key frame moving object). The process is assisted by the vectors applied within the visible database and by windowing or pointing (using various pointing tools). If the values in an operator-assisted detected region of the subsequent moving object fall within the ranges of the corresponding region of the preceding moving object relative to the surrounding values, and if those values fall along the value (vector) trajectory expected by comparing the first key frame with the second key frame, the computer will determine a match and will attempt a best fit.
The uncompressed high-resolution images all reside at the server level; all subsequent masking operations on regions of interest are displayed either on a compressed composite frame in display memory or on tiled compressed frames in display memory, so that the operator can verify correct tracking and matching of regions. A region-of-interest window showing a zoomed view of the uncompressed region is displayed on screen for visually determining the best-fit region of interest. This high-resolution window can also be played in full motion so that the operator can determine whether the masking operations are accurate throughout the motion.
In a first embodiment, shown in Figure 1, a plurality of feature film or telecine frames 14a-n represent a scene or cut in which there is a single instance or perception of a background 16 (Fig. 3). In the scene 10 shown, several actors or moving elements 18', 18'' and 18''' are moving on outdoor steps, and the camera is executing a pan to the left. Figure 1 shows a selected sample of the total of 120 frames 14 making up the 5-second pan.
In Fig. 2, the scene is processed to separate the background 16 from the multiple frames 14a-n represented in Fig. 1, with all moving elements 18 removed using various subtraction and differencing techniques. The individual frames making up the pan are combined into a visible database, the single composite background image 12 shown in Fig. 3, in which the unique and common pixels from each of the 120 frames 14 making up the original pan are represented. The single background image 12 is then used to create a background mask overlay 20 representing designer-selected color look-up tables, in which dynamic pixel colors automatically compensate or adjust for moving shadows and other changes in luminance. For depth projects, any object in the background can be assigned any depth. Various tools can be used to assign depth information to any portion of the background, including paint tools, geometric-shape-based tools that allow contour depths to be applied to objects, and text field inputs allowing numerical entry. The composite background shown in Fig. 2 can also, for example, be given an assigned ramp function so that nearer depths are assigned to the left portion of the scene, with a linearly increasing depth automatically assigned toward the right of the image. See also Figures 42-70.
In an illustrative embodiment of the present invention, operator-assisted and automated operations are used to detect obvious anchor points, represented by the intersections detected by sharp edges, and other adjacent edges in each frame 14 making up the single composite image 12 and its overlay mask 20. These anchor points are also represented in the composite image 12 and are used to help assign masks correctly to each frame 14 represented by the single composite image 12.
Regions and objects delineated by anchor points and by closed or nearly closed sharp edges are designated as single mask regions and given a single look-up table. Within these well-defined regions, polygons are created whose anchor points are dominant points. Where there are no detected sharp edges that close ideally to create a region, the edges of the applied mask are used to generate the polygons.
The resulting polygon mesh comprises the interiors of the anchor-point-dominated regions plus all the perimeters between those regions.
The mode parameters created by the luminance distribution within each polygon are registered in the database for reference when applying the corresponding polygon addresses of the overlay mask to the proper addresses of the frames used to create the composite single image 12.
In Fig. 3, a representative sample of each moving object (M-object) 18 in the scene 10 receives a mask overlay representing designer-selected color look-up tables/depth assignments, in which dynamic pixel colors automatically compensate or adjust for moving shadows and other changes in luminance as the M-object 18 moves within the scene 10. Each of these representative samples is considered a key M-object 18, used to define the underlying patterns, edges, grouped luminance characteristics, etc., within the masked M-object 18. These characteristics are used to move the design masks from one key M-object 18a to subsequent M-objects 18b along defined parameter vectors leading to key M-object 18c, with each successive M-object in turn becoming a new key M-object as the masks are applied. As shown, key M-object 18a can be assigned a depth of 32 feet from the camera capture point, while key M-object 18c can be assigned a depth of 28 feet from the camera capture point. The various depths of an object can be "tweened" between the various depth points, so that lifelike three-dimensional motion can appear within the cut without requiring wire-frame models for all objects, such as the objects in the frame.
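The depth "tweening" between keyed depth points can be sketched with a simple linear model (the linearity is an assumption for illustration; the values mirror the 32-foot and 28-foot assignments given for key M-objects 18a and 18c):

```python
# Small sketch (linear model assumed): "tween" an object's depth between
# two keyed depth points - e.g. 32 feet at key M-object 18a and 28 feet
# at key M-object 18c - across the intervening frames.

def tween_depth(depth_a, depth_c, num_frames):
    """Linearly interpolate depth across num_frames frames, inclusive of
    both keyed endpoints."""
    step = (depth_c - depth_a) / (num_frames - 1)
    return [depth_a + step * i for i in range(num_frames)]

if __name__ == "__main__":
    print(tween_depth(32.0, 28.0, 5))  # [32.0, 31.0, 30.0, 29.0, 28.0]
```

Only the key M-objects need explicit depth assignments; every in-between frame receives an interpolated value, which is what lets the cut exhibit smooth three-dimensional motion without per-frame wire-frame modeling.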
As with the background operations above, operator-assisted and automated operations are used to detect the obvious anchor points, represented by intersections detected from sharp edges, and the other adjacent edges created within each moving object of the key frames.
Particular regions of interest within each moving object, defined by anchor points and by closed or nearly closed sharp edges, are designated as single mask areas and assigned a single look-up table. Within these clearly defined regions, polygons are created whose anchor points serve as dominant points. Where no detected sharp edges exist to create an ideally closed region, the edges of the applied mask are used to generate the polygon.
The resulting polygon mesh comprises the interior of the anchor-point-dominated regions plus all perimeters between those regions.
Pattern parameters created from the luminance value distribution within each polygon are registered in a database so that the corresponding polygon addresses of the applied masks can be referenced to the proper addresses of the frames used in creating the composite single frame 12.
The larger the polygon sample, the more detailed the underlying luminance values and their evaluation, and the more accurate the fit of the overlying mask.
Subsequent or intermediate motion key frame objects 18 are processed sequentially. The set of masks comprising a motion key frame object retains its correct address position in the subsequent frame 14 or in the next instance of the moving object 18. The masks are displayed as either opaque or transparent colors. Using a mouse or other pointing device, the operator successively indicates each mask together with its corresponding position in the subsequent frames and/or instances of the moving object. The computer then creates a best fit to the subsequent instance of the moving object using both the underlying luminance texture and the mask edges, via the corresponding polygons and existing anchor points.
The next instance of the moving object 18 is operated on in the same manner, until all moving objects 18 between key motion objects in the clip 10 and/or scene are completed.
In figure 4, all masked elements of the scene 10 are then reassembled to create fully colorized and/or depth-enhanced frames, wherein the M object 18 masks are applied to each appropriate frame in the scene, followed by the background mask 20, which is applied in a Boolean fashion only where no mask pre-exists. The foreground elements are then applied to each frame 14 according to a pre-programmed priority arrangement. Assisting the accurate application of the background mask 20 are vector points applied to the visible database by the designer during masking wherever there are obviously well-defined reference points such as edges and/or luminance points. These vectors create a matrix of reference points assuring the accuracy of rendering the individual frame masks that make up each scene. Those skilled in the art will recognize that the depths applied to the various objects determine the amount of horizontal translation applied when generating the left and right viewpoints utilized in three-dimensional viewing. In one or more embodiments of the invention, objects can be displayed dynamically as they are moved, so that the operator sets and observes realistic depth. In other embodiments of the invention, the depth value of an object determines the horizontal shift to apply, as will be recognized by those skilled in the art and as taught at least in USPN 6,031,564 to Ma et al., the specification of which is hereby specifically incorporated by reference.
The operator employs several tools to apply masks to sequential movie frames.
Display: a fully masked key frame containing all the moving objects of that frame is loaded into a display buffer together with multiple subsequent frames in thumbnail format; typically 2 seconds or 48 frames.
Figs. 5A and 5B show a series of sequential frames 14a-n loaded into display memory, wherein one frame 14 is fully masked with the background (key frame) mask and is ready for mask propagation to the subsequent frames 14 via the automatic mask fitting method.
All frames 14 can also be displayed sequentially in real time (24 frames/second) using a second (sub) window to determine whether the automatic masking operations, together with the associated mask color transforms and/or applied depth enhancements, are working correctly. For depth projects, polarized or red/blue anaglyph glasses can be used to view the two viewpoints corresponding to each eye. Depth-enhanced images can be viewed with any type of depth viewing technology, including video displays that require no glasses but instead utilize more than two image pairs, which embodiments of the invention can be used to create.
Figs. 6A and 6B show a sub-window displaying an enlarged and scalable single image from the series of sequential images in display memory. This sub-window enables the operator to interactively manipulate masks on a single frame, or across multiple frames during real-time or slowed motion.
Mask modification: masks can be copied to all or selected frames and modified automatically in the thumbnail view or in a preview window. In the preview window, mask modification takes place on a single frame in the display, or across multiple frames during real-time motion.
Mask propagation to multiple sequential frames in display memory: various copy functions are used to apply the key frame masks of foreground moving objects to all frames in the display buffer:
Copy all masks in one frame to all frames;
Copy all masks in one frame to selected frames;
Copy one or more selected masks in one frame to all frames;
Copy one or more selected masks in one frame to selected frames; and
Create a mask generated in one frame by direct copy to the same addresses in all other frames.
Referring now to Figs. 7A and 7B, a single mask (the figure) is propagated automatically to all frames 14 in display memory. The operator can specify selected frames to which selected masks are applied, or indicate that they be applied to all frames 14. The masks are copies of the initial masks in the first fully masked frame. Modification of these masks occurs only after they have been propagated.
As shown in Fig. 8, all masks associated with a moving object are propagated to all sequential frames in display memory. These images show the displacement of the underlying image data relative to the mask information.
None of the propagation methods listed above actively fits the masks to the objects in the frames 14. They merely apply the same mask shape, with the associated color transform information, from one frame (typically a key frame) to all other frames or to selected frames.
The masks are adjusted to compensate for object motion in the subsequent frames using a variety of tools based on image luminance, patterns, and local edges.
Automatic mask fitting: sequential frames of a feature film or television episode exhibit movement of actors and other objects. In the present example, these objects are designed in a single representative frame; features or regions selected by the operator have unique color transforms identified by unique masks covering the entire feature. The purpose of the mask fitting tool is to provide an automated means of correctly placing and reshaping each mask region of interest (ROI) in sequential frames, such that the masks accurately conform to the correct spatial position and two-dimensional geometry of the ROI as it displaces from its original position in the single representative frame. The method is intended to allow mask regions to be propagated from an original reference or design frame to sequential frames, and to automatically adjust their shape and position according to the image displacement of the associated underlying image features. For computer-generated elements, the associated masks are created numerically and can be assumed accurate throughout the scene, so the contours and depths of computer-generated regions can be ignored for automatic mask fitting or reshaping operations. Elements adjacent to these objects can therefore be reshaped more accurately, since the contours of the computer-generated elements are taken as correct. Thus, even for computer-generated elements having the same underlying grey scale as adjacent moving or background elements, the shape of the masks at the junctions can be considered accurate even where no visual difference exists at the junctions. Hence, whenever the automatic mask fitting of a mask takes on the shape of the border of a computer-generated element mask, the computer-generated element mask can be utilized to limit the border of the operator-defined mask, per step 3710 of Fig. 37C. This saves processing time, since automatic mask fitting is minimized in scenes utilizing many computer-generated element masks.
The method for automatically repositioning and correctly fitting all masks in an image, so as to compensate for the motion of the corresponding image data between frames, involves the following:
Setting the reference frame masks and corresponding image data:
1. A reference frame (frame 1) is masked by the operator using various means such as paint and polygon tools, so that all regions of interest (i.e. features) are tightly covered.
2. The minimum and maximum x, y coordinate values of each mask region are calculated to create a rectangular bounding box around each mask region, containing all the underlying image pixels of that mask region.
3. A subset of pixels is identified for each region of interest within its bounding rectangle (i.e. every 10th pixel).
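Steps 2 and 3 above (bounding boxes and the sparse pixel subset) can be sketched with NumPy as follows. The function names are illustrative only; the every-10-pixel spacing matches the grid interval stated later for the fit grid:

```python
import numpy as np

def bounding_box(mask):
    """Return (x_min, x_max, y_min, y_max) of the True pixels in a 2-D mask."""
    ys, xs = np.nonzero(mask)
    return xs.min(), xs.max(), ys.min(), ys.max()

def fit_grid(mask, spacing=10):
    """Sparse subset of mask pixels on a regular grid (every `spacing` pixels)."""
    x0, x1, y0, y1 = bounding_box(mask)
    return [(x, y)
            for y in range(y0, y1 + 1, spacing)
            for x in range(x0, x1 + 1, spacing)
            if mask[y, x]]

mask = np.zeros((40, 40), dtype=bool)
mask[5:26, 10:31] = True           # a 21x21 mask region
print(bounding_box(mask))           # (10, 30, 5, 25)
print(len(fit_grid(mask)))          # 9 grid points (3x3 at spacing 10)
```

Only these grid points are fitted directly; the remaining pixels are handled by interpolation, as described below for the fit grid.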
Copying the reference frame masks and corresponding image data to all subsequent frames: the masks, bounding boxes, and corresponding pixel-position subsets from the reference frame are copied by the operator to all subsequent frames.
Approximating the region offsets between the reference frame and the next subsequent frame:
1. A fast Fourier transform (FFT) is calculated to approximate the image data displacement between frame 1 and frame 2.
2. The FFT calculation is used to move each mask, with its accompanying bounding box, in frame 2 to compensate for the displacement of the corresponding image data from frame 1.
3. The bounding boxes are enlarged by an additional margin around each region to accommodate other motion and shape-morphing effects.
Fitting the masks to their new locations:
1. Using the offset vectors determined by the FFT, a minimum-error gradient descent is calculated over the image data beneath each mask, as follows:
2. A fit box is created around each pixel in the bounding box subset.
3. A weighted index of all pixels in the fit box is calculated using bilinear interpolation.
4. The offset and best fit for each subsequent frame are determined, using a gradient descent algorithm to fit the masks to the desired regions.
Mask fit initialization: the operator selects image features in a single selected frame (the reference frame) of a scene and creates, for each feature, a mask containing all the color transforms (color look-up table) for the underlying image data. The selected image features identified by the operator have well-defined geometric extents, identified by scanning the features beneath each mask for minimum and maximum x, y coordinate values, thereby defining the rectangular bounding box around each mask.
Fit grid for fit grid interpolation: for purposes of optimization, the method fits only a sparse subset of the relevant mask extent pixels within each bounding box; this pixel subset defines a regular grid in the image, marked by the bright pixels as in Fig. 9A.
The "small dark" pixels shown in Fig. 9B are assigned weighted indices calculated using bilinear interpolation. The grid interval is currently set to 10 pixels, so that generally no more than 1 pixel in 50 is currently fitted with the gradient descent search. This grid interval could be a user-controllable parameter.
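The bilinear weighting between grid points can be sketched as follows, under our assumption that each intermediate pixel takes a value interpolated from the four surrounding fitted grid points (the 10-pixel cell size follows the stated grid interval; the function name is ours):

```python
def bilerp(v00, v10, v01, v11, fx, fy):
    """Bilinear interpolation of four corner values.

    v00..v11 are values at the four surrounding grid points
    (upper-left, upper-right, lower-left, lower-right);
    fx, fy are the fractional positions (0..1) within the grid cell.
    """
    top = v00 * (1 - fx) + v10 * fx
    bot = v01 * (1 - fx) + v11 * fx
    return top * (1 - fy) + bot * fy

# Values fitted at four grid points 10 pixels apart; interpolate a pixel
# 3 pixels right and 7 pixels down from the upper-left grid point:
dx = bilerp(2.0, 4.0, 2.0, 4.0, 3 / 10, 7 / 10)
# ~ 2.6 (the value varies only horizontally in this example)
```

Fitting only the grid points and interpolating the rest is what keeps the per-frame fit cost low relative to a dense per-pixel search.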
Fast Fourier transform (FFT) shift value estimation: the masks, together with their corresponding rectangular bounding boxes and fit grids, are copied to the subsequent frame. Forward and inverse FFTs are calculated between the reference frame and the next subsequent frame to determine the x, y shift values of the image features corresponding to each mask's bounding box. The method generates a correlation surface whose maximum provides a "best fit" position for the corresponding feature location in the search image. Each mask and bounding box is then adjusted to the appropriate x, y position in the second frame.
Fit value calculation (gradient descent search): the FFT provides a shift vector, which guides the search for the ideal mask fit using a gradient descent search method. The gradient descent search requires that the translation or offset be smaller than the radius of the basin surrounding the minimum of the fit error surface. A successful FFT correlation for each mask region and bounding box will establish this minimum requirement.
Searching the error surface for a best fit: the error surface calculation in the gradient descent search method involves computing the mean squared difference of the pixels in square fit boxes between a reference image frame pixel (x0, y0) and the corresponding (offset) position (x, y) on the search image frame, as shown in Figs. 10A-D.
The corresponding pixel values in the two (reference and search) fit boxes are differenced, squared, summed/accumulated, and the resulting sum is finally divided by the number of pixels in the box (pixel count = height x width = height^2) to produce the mean squared misfit ("error") value at the selected fit search position:
Error(x0, y0; x, y) = { Σi Σj (reference box(x0, y0) pixel[i, j] − search box(x, y) pixel[i, j])^2 } / height^2
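The error calculation above can be sketched directly in NumPy (function and argument names are ours; handling of fit boxes that overlap the image border is omitted for brevity):

```python
import numpy as np

def fit_error(ref, search, x0, y0, x, y, radius):
    """Mean squared difference between two square fit boxes.

    ref, search: 2-D grey-scale images (reference and search frames)
    (x0, y0): box centre in the reference frame
    (x, y):   candidate (offset) box centre in the search frame
    radius:   fit box "box radius"; box side = 2*radius + 1
    """
    r = radius
    ref_box = ref[y0 - r:y0 + r + 1, x0 - r:x0 + r + 1].astype(float)
    srch_box = search[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    height = 2 * r + 1
    return np.sum((ref_box - srch_box) ** 2) / (height * height)
```

For an exact translation of the underlying image data, the error is zero when the search box centre sits at the true offset and grows as the candidate position moves away from it, which is the surface the gradient descent walks down.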
Fit value gradient: the shift vector data derived from the FFT establishes the search fit position, and the error surface calculation starts from this offset position, proceeding down the error surface gradient (against the gradient) to a local minimum of the surface, which is assumed to be the best fit. The method finds the best fit for each pixel or pixel group in the next frame based on the previous frame, using a normalized squared difference in, e.g., a 10x10 box and finding the minimum along the mean squared difference gradient. The technique is similar to cross-correlation, but with a limited sample box for the calculation. In this manner, the corresponding best-fit pixel in the previous frame can be checked for its mask index, and the resulting assignment completed.
Figs. 11A-C show a second search box derived by descending along the (individually evaluated) error surface gradient, for which the evaluated error function is reduced relative to the original reference box, and possibly minimized (as is evident from visual comparison of these boxes with the reference boxes in Figs. 10A-D).
The error gradient is calculated according to the definition of the surface gradient. Vertical and horizontal error deviations are evaluated at four positions near the centre position of the search box and combined to provide an estimate of the error gradient at that position. The gradient component evaluation is explained with the aid of Fig. 12.
The gradient of a surface S at coordinates (x, y) is given by the directional derivatives of the surface:
gradient(x, y) = [dS(x, y)/dx, dS(x, y)/dy],
which for the discrete case of a digital image is given by:
gradient(x, y) = [(error(x+dx, y) − error(x−dx, y))/(2·dx), (error(x, y+dy) − error(x, y−dy))/(2·dy)]
where dx, dy are half the box width or box height, also defined as the fit box "box radius": box width = box height = 2 x box radius + 1.
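The discrete gradient formula can be written as a small helper, shown below with an illustrative descent loop on a synthetic quadratic error surface (the step size, iteration count, and function names are our own choices; a real run would use the fit error of the image data as error_fn):

```python
def error_gradient(error_fn, x, y, radius):
    """Central-difference estimate of the error surface gradient at (x, y).

    error_fn(x, y) returns the fit error at a candidate offset;
    dx = dy = radius, per: box width = box height = 2*radius + 1.
    """
    dx = dy = radius
    gx = (error_fn(x + dx, y) - error_fn(x - dx, y)) / (2 * dx)
    gy = (error_fn(x, y + dy) - error_fn(x, y - dy)) / (2 * dy)
    return gx, gy

# Illustrative descent on a paraboloid error surface with its minimum at (3, -1):
err = lambda x, y: (x - 3) ** 2 + (y + 1) ** 2
x, y, step = 0.0, 0.0, 0.5
for _ in range(40):
    gx, gy = error_gradient(err, x, y, radius=1)
    x, y = x - step * gx, y - step * gy
# (x, y) converges to the minimum at (3, -1)
```

On the real (noisy) error surface the descent stops at the nearest local minimum, which is why the FFT shift estimate must first place the start point inside the basin of the correct minimum.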
It should be noted that as the box radius increases, the fit box dimensions increase, and thus the size and detail of the image features contained therein also increase; hence, the larger the box, the better the calculated fit accuracy and the more data there is to process, but the computation time of each fit (error) calculation increases with the square of the radius.
If a computer-generated element mask region pixel is found at a particular x, y pixel position, that position is treated as the edge of the overlapping operator-defined mask, and the mask fit continues at the other pixel locations until all pixels of the mask have been checked.
Previous versus propagated reference images: the reference image for mask fitting is generally the adjacent frame in the sequence of movie image frames. However, it is sometimes preferable to use a mask's pristine fit as the reference image (e.g., a key frame mask, or the source frame from which the mask regions were propagated/copied). The present example provides a switch to disable the "adjacent" reference frame, using the propagated mask's reference image when that frame is defined by the most recent propagation event.
Mask fitting procedure: in the present example, the operator loads n frames into the display buffer. One frame contains the masks that are to be propagated and fitted to all the other frames. All or some of those masks are then propagated to all frames in the display buffer. Since the mask fitting algorithm references either the first frame in the series or the immediately preceding frame when fitting masks to a subsequent frame, the first frame masks and/or key frame masks must be tightly applied to the objects and/or regions of interest. If this is not done, mask errors will accumulate and the mask fitting will fail. The operator displays a subsequent frame, adjusts the sampling radius of the fit, and executes a command to calculate the mask fit for the entire frame. The command can be a keystroke or a mouse hot-button command.
As shown in Figure 13, the propagated masks in this first-order example show little discrepancy between the underlying image data and the mask data. It can clearly be seen that the dress mask and hand mask fit the image data closely.
Figure 14 shows the mask data adjusted to the image data by referencing the underlying image data of the preceding image using the automatic mask fitting routine.
In Figure 15, the mask data later in the image sequence show significant discrepancies relative to the underlying image data. The eye makeup, lipstick, blush, hair, face, dress, and hand image data are all displaced relative to the mask data.
As shown in Figure 16, the mask data are automatically adjusted to the underlying image data based on the previous masks and the underlying image data. In this figure the mask data are displayed in arbitrary colors to show the regions that were automatically adjusted based on underlying patterns and luminance data. The blush and eye makeup are adjusted automatically on the basis of luminance and grey-scale patterns, without reference to edge data.
In Figure 17, the mask data from Figure 16 are shown with the appropriate color transforms after whole-frame automatic mask fitting. The mask data have been adjusted to fit the underlying luminance patterns based on data from the previous frame or from the initial key frame.
Mask propagation using bezier and polygon animation with edge snapping: masks for moving objects can be animated using bezier curves or polygons that enclose a region of interest. Multiple frames are loaded into display memory, and bezier points and curves or polygon points are applied to a region of interest, with the points automatically snapping to the nearest edges detected in the image data. Once the object in frame 1 is enclosed by a polygon or bezier curve, the operator adjusts the polygon or bezier in the last of the frames loaded in display memory. The operator then executes a fitting routine, which snaps the polygon or bezier points, plus control curves, to all intermediate frames, animating the mask over all frames in display memory. The polygon and bezier algorithms include rotation, scaling, and movement of the entire set of control points to handle camera zooms, pans, and complex camera motion.
In figure 18, a polygon is used to outline a region of interest for masking in frame 1. The square polygon points snap to the edges of the object of interest. Where bezier curves are used, the bezier points snap to the object of interest and the control points/curves conform to the edges.
As disclosed in Figure 19, the entire polygon or bezier curve is carried to the last frame selected in display memory, where the operator adjusts the polygon points or bezier points and curves, aided by an automatic snapping function that snaps the points and curves to the edges of the object of interest.
As shown in Figure 20, where operator interactive adjustment is used, if there are significant discrepancies between the points and curves in the frames intermediate between the two adjusted frames, the operator further adjusts whichever of those intermediate frames shows the greatest fitting error.
As shown in Figure 21, once it is determined that the polygons or bezier curves between the two adjusted frames animate correctly, the appropriate masks are applied to all frames. In these figures, the polygons or bezier curves are seen filled with an arbitrary mask color.
Figure 22 shows the masks resulting from the polygon or bezier animation with automatic snapping of points and curves to the edges. The brown masks are color transforms and the green masks are arbitrary see-through masks. For depth projects, regions that have had depths assigned may, for example, be one color, and regions awaiting depth assignment another color.
Background colorization/depth enhancement in feature films and television episodes: processes for applying mask information to sequential frames of a feature film or television episode are known, but are laborious for several reasons. In all cases, these processes involve correcting the mask information from frame to frame to compensate for movement of the underlying image data. Correction of the mask information includes not only re-masking the actors and other moving objects within a scene or clip, but also correcting for the background and foreground information that is occluded or exposed during the movement of the moving objects. This is especially difficult in scene cuts where the camera pans with the action, following it left, right, up, or down. In such cases the operator must correct not only the movement of the moving objects; the operator must also correct the occlusion and exposure of the background information, and add corrections for newly exposed background and foreground information as the camera motion exposes new portions of the background. Typically, these instances greatly increase the time and difficulty of colorizing a scene cut because of the enormous amount of manual labor involved. Embodiments of the invention include methods and processes for the automatic colorization/depth enhancement of multiple frames in scene cuts that contain complex camera motion with accompanying moving objects, and in scene cuts where the camera shakes or drifts erratically.
Camera pans: for a panning camera sequence, the background associated with the non-moving objects in the scene forms a large part of the sequence. To colorize/depth-enhance the large number of background objects in a panning sequence, a mosaic is created containing the background objects of the entire panning sequence with the moving objects removed. This task is accomplished with the pan background stitcher tool. Once the background mosaic of the panning sequence has been generated, it can be colorized/depth-enhanced once and automatically applied to the individual frames, without having to colorize/assign depth to the background objects in every frame of the sequence manually.
The pan background stitcher tool generates the background image of a panning sequence using two general operations. First, the motion of the camera is estimated by calculating the transform needed to align each frame in the sequence with the previous frame. Since moving objects may constitute a major portion of a film sequence, techniques are used that minimize the impact of moving objects on the frame registration. Second, the frames are blended into a final background mosaic using interactively selected two-pass blend regions that effectively remove the moving objects from the final mosaic.
The background synthesis output data comprise: a grey-scale (possibly color, for depth projects) image file in a standard digital format such as a TIFF image file (bkg.*.GIF), consisting of the background image of the entire pan shot with the desired moving objects removed, ready for color design/depth assignment using the masking operations described; and an associated background text data file needed for background mask extraction after the associated background mask/colorization/depth data components (bkg.*.msk, bkg.*.lut, ...) have been established. The background text data file provides, for each constituent (input) frame associated with the background, the filename, the frame position within the mosaic, and other frame dimensioning information, with the following (per frame) contents on each line: frame filename, frame x position, frame y position, frame width, frame height, frame left margin x maximum, frame right margin x minimum. Except for the first field (frame filename), which is a character string, each data field is an integer.
Generating the transforms: to generate the background image for a pan camera sequence, the motion of the camera is first calculated. The motion of the camera is determined by examining the transform needed to bring one frame into alignment with the previous frame. By calculating the motion of each pair of consecutive frames in the sequence, a transform map giving the relative position of every frame in the sequence can be generated.
Translation between an image pair: most image registration techniques use some form of intensity correlation. Unfortunately, methods based on image intensities will be biased by any moving objects in the scene, making it difficult to estimate the motion due to camera movement. Feature-based methods are also used for image registration. These methods are limited by the fact that most features occur on the boundaries of moving objects, again giving inaccurate results for the pure camera motion. Manually selecting feature points for a large number of frames is too costly.
The registration method used in the pan stitcher uses properties of the Fourier transform to avoid bias toward moving objects in the scene. Automatic registration of frame pairs is calculated and used for the final background image assembly.
Fourier transform of an image pair: the first step in the image registration process consists of taking the Fourier transform of each image. The camera motion can be estimated as a translation; the second image is translated by a given amount according to:
I2(x, y) = I1(x − x0, y − y0)   (1)
Taking the Fourier transform of each image in the pair yields the relation:
F2(α, β) = e^(−j·2π·(α·x0 + β·y0)) · F1(α, β)   (2)
Phase shift calculation: the next step involves computing the phase shift between the images. This yields an expression for the phase shift in terms of the Fourier transforms of the first and second images:
e^(−j·2π·(α·x0 + β·y0)) = (F1* · F2) / |F1* · F2|   (3)
Inverse Fourier transform: by taking the inverse Fourier transform of the phase shift expression given in (3), a delta function is obtained whose peak is located at the translation of the second image:
δ(x − x0, y − y0) = F^(−1)[e^(−j·2π·(α·x0 + β·y0))] = F^(−1)[(F1* · F2) / |F1* · F2|]   (4)
Peak location: the two-dimensional surface obtained from (4) will have its maximum peak at the translation point from the first image to the second. By searching for the maximum value on the surface, finding the transform representing the camera motion in the scene is straightforward. Although spikes will be present because of the moving objects, the dominant motion of the camera should be represented by the largest peak. This calculation is performed for each consecutive frame pair in the entire panning sequence.
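Equations (1) through (4) can be exercised with NumPy's FFT routines. The sketch below is a minimal phase-correlation implementation (the small epsilon guard and the wrap-around handling for circular shifts are our additions, not part of the equations):

```python
import numpy as np

def phase_correlation_shift(img1, img2):
    """Estimate the (x0, y0) translation of img2 relative to img1, per eqs (1)-(4)."""
    F1 = np.fft.fft2(img1)
    F2 = np.fft.fft2(img2)
    cross = np.conj(F1) * F2                    # F1* . F2
    cross /= np.abs(cross) + 1e-12              # keep only the phase term, eq (3)
    surface = np.real(np.fft.ifft2(cross))      # delta(x - x0, y - y0), eq (4)
    y0, x0 = np.unravel_index(np.argmax(surface), surface.shape)
    h, w = img1.shape
    if x0 > w // 2:                             # FFT shifts are circular; wrap
        x0 -= w                                 # large indices to negative shifts
    if y0 > h // 2:
        y0 -= h
    return int(x0), int(y0)

rng = np.random.default_rng(0)
frame1 = rng.random((64, 64))
frame2 = np.roll(frame1, (5, 3), axis=(0, 1))   # shift 5 rows down, 3 cols right
print(phase_correlation_shift(frame1, frame2))  # (3, 5)
```

For an exact circular shift the surface is a true delta function; on real footage the peak spreads, and the moving-object spikes discussed above appear as secondary peaks.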
Handling image noise: unfortunately, spurious results can occur because of image noise, which can drastically alter the result of the transform calculation. The pan background stitcher handles these outliers using two methods of detection and error correction: nearest peak matching and position interpolation. If these corrections fail for a particular image pair, the stitching application provides the option of manually correcting the position of any frame pair in the sequence.
Nearest matching peak: after the transform for an image pair is calculated, the percentage difference between this transform and the previous one is determined. If the difference is above a set threshold, a search for neighboring peaks is performed. If a peak is found that matches more closely and falls below the difference threshold, that value is used instead of the maximum peak.
This assumes that for a pan shot the motion is relatively stable and that the difference in motion between successive frame pairs will be small. It corrects the cases where image noise causes a slightly higher peak than the true peak corresponding to the camera transform.
Interpolated position: if the nearest matching peak calculation fails to produce a legitimate result, as determined by the percentage difference threshold, the position is estimated based on the result from the previous image pair. Again, this generally gives good results for stable panning sequences, since the differences between successive camera motions should be roughly the same. The peak correlation and interpolation results are displayed in the stitching application, so manual corrections can be made if necessary.
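The two outlier corrections (nearest matching peak, then interpolated position) can be sketched together as a single decision function. The percentage-difference measure and threshold below are our own illustrative choices, since the text does not specify the exact formula; treat this as a sketch of the control flow only:

```python
def correct_transform(current, previous, threshold=0.25, neighbor_peaks=()):
    """Outlier handling for a frame-pair shift estimate (illustrative only).

    current, previous: (dx, dy) shifts for this and the prior frame pair
    neighbor_peaks:    candidate (dx, dy) positions of nearby correlation peaks
    Preference order: the current peak, then the nearest acceptable
    neighboring peak, then interpolation from the previous pair's motion.
    """
    def pct_diff(a, b):
        mag = max(abs(b[0]) + abs(b[1]), 1e-9)
        return (abs(a[0] - b[0]) + abs(a[1] - b[1])) / mag

    if pct_diff(current, previous) <= threshold:
        return current                       # peak agrees with recent motion
    acceptable = [p for p in neighbor_peaks
                  if pct_diff(p, previous) <= threshold]
    if acceptable:                           # nearest matching peak
        return min(acceptable, key=lambda p: pct_diff(p, previous))
    return previous                          # interpolated position

# Noise pushed the main peak to (14, 0) while the pan has been ~(7, 0):
print(correct_transform((14, 0), (7, 0), neighbor_peaks=[(13, 0), (8, 0)]))
# -> (8, 0)
```

Both fallbacks rely on the same stated assumption: a pan's motion changes little between consecutive frame pairs, so the previous pair's shift is a reasonable prior.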
Generating the background: once the camera motion of each consecutive frame pair has been calculated, the frames can be composited into a mosaic representing the entire background of the sequence. Since the moving objects in the scene need to be removed, different image blending options are used to effectively remove the dominant moving objects from the sequence.
Assembling the background mosaic: first, a background frame buffer is generated, large enough to span the entire sequence. The backgrounds can be blended in a single pass or, if moving objects need to be removed, using the two-pass blend detailed below. The blend positions and widths can be edited in the stitching application and can be set globally or individually for each frame pair. Each blend is added to the final mosaic, which is then written out as a single image file.
Two-pass blending: the purpose of the two-pass blend is to eliminate moving objects from the final blended mosaic. This is done by first blending the frames so that the moving object is removed entirely from the left side of the background mosaic. An example is shown in Figure 23, where the figure has been removed from the scene but can still be seen on the right side of the background mosaic. In the first-pass blend shown in Figure 23, the moving figure appears on the steps at the right.
Next, a second background mosaic is generated using blend positions and widths that remove the moving object from the right side of the final background mosaic. An example is shown in Figure 24, where the figure has been removed from the scene but can still be seen on the left side of the background mosaic. In the second-pass blend shown in Figure 24, the moving figure appears at the left.
Finally, the two blends are combined to generate the final blended background mosaic with the moving object removed from the scene. The final background corresponding to Figures 23 and 24 is shown in Figure 25; as Figure 25 shows, the moving figure has been removed from the final blended background.
To facilitate effective removal of moving objects that may occupy different regions of the frame during a panning sequence, the stitcher application provides the option of setting the blend width and position interactively, either globally or per frame pass. A sample screenshot of the blend-editing tool showing the first- and second-pass blend positions can be seen in Figure 26, which is a screen capture of the blend-editing tool.
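The combination of the two passes can be illustrated with a one-dimensional sketch. This is a hypothetical simplification: real blends use weighted ramps over two-dimensional mosaics, and the single seam position here stands in for the editable blend positions and widths.

```python
# Minimal 1-D sketch of combining the two blend passes: pass 1 leaves the
# moving object's residue only on the right side of the mosaic, pass 2 only
# on the left, so taking the left part of pass 1 and the right part of
# pass 2 yields a mosaic with the object removed entirely.
def combine_passes(pass1, pass2, seam):
    return pass1[:seam] + pass2[seam:]

background = [5] * 8
pass1 = background[:]; pass1[6] = 99   # object residue on the right (Figure 23 case)
pass2 = background[:]; pass2[1] = 99   # object residue on the left (Figure 24 case)
final = combine_passes(pass1, pass2, seam=4)
```

The seam only needs to fall anywhere between the two residue regions, which is why an interactively editable blend position per frame pairing is sufficient in practice.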
Background text data save: an output text data file containing the parameter values relevant to background-mask extraction is generated from the initialization phase described above. As noted above, each text data record contains: frame filename, frame x position, frame y position, frame width, frame height, frame left-margin x maximum, and frame right-margin x minimum.
The output text data file name is formed from the root name of the first composited input frame by prepending the prefix "bkg." and appending the extension ".txt".
Example: representative lines of an output text data file named "bkgA.00233.txt" for a composite blend built from 300 or more frames might contain:
4.00233.GIF 0 0 1436 1080 0 1435
4.00234.GIF 7 0 1436 1080 0 1435
4.00235.GIF 20 0 1436 1080 0 1435
4.00236.GIF 37 0 1436 1080 0 1435
4.00237.GIF 58 0 1436 1080 0 1435
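A minimal parser for these records can be sketched as below, assuming whitespace-delimited fields in the order given above (filename followed by six integers). The dictionary keys are illustrative names chosen for the example, not identifiers from the patent.

```python
# Parse one background text-data record: frame filename, x offset, y offset,
# width, height, left-margin x maximum, right-margin x minimum.
def parse_bkg_record(line):
    parts = line.split()
    name = parts[0]
    x, y, w, h, lmarg, rmarg = map(int, parts[1:7])
    return {"file": name, "x": x, "y": y, "width": w, "height": h,
            "left_margin_max": lmarg, "right_margin_min": rmarg}

rec = parse_bkg_record("4.00234.GIF 7 0 1436 1080 0 1435")
```

All fields except the filename are integers, matching the record format described for the static-background text data as well.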
The image-displacement information used to create the composite representation of a frame series is contained in the text file associated with the composite image, and is used to apply a single composite mask to all of the frames that were used to create the composite image.
In Figure 27, successive frames representing a camera pan are loaded into memory. The moving object (a butler walking left toward the door) is masked with a series of color-transform information, leaving a black-and-white background to which no mask or color-transform information has been applied. Alternatively, for a depth project, depth and/or depth shapes may be assigned to the moving object. See Figures 42-70.
In Figure 28, six representative successive frames of the pan above are shown for clarity.
Figure 29 shows the composite or montage image of the entire camera pan, built using phase-correlation techniques. The moving object (the butler) is retained as a transparent positive for reference by averaging the phase correlations in both directions, from the first frame and from the last frame. The single montage representation of the pan is then color designed using the same color-transform masking techniques used for foreground objects.
Figure 30 shows the frame sequence in the camera pan after the background mask color transforms of the montage have been applied to every frame used to create the montage. The mask is applied wherever a mask does not already exist, so the moving-object masks and color-transform information are retained while the appropriately offset background information is applied. Alternatively, for a depth project, the left-eye and right-eye views of each frame can be shown as a pair, or each eye shown in a separate window, for example. The images may also be displayed on a stereoscopic viewing display.
In Figure 31, the automatically colorized / depth-enhanced background mask is applied, for clarity, to a selection of frames in the pan, wherever a mask did not already exist in the frame.
Static and drifting shots: in contrast to moving "foreground" objects, objects that neither move nor change within a film scene cut can be considered "background" objects. If the camera does not move during the entire frame sequence, the associated background objects appear static for the duration of the sequence and can be masked and colorized just once for the related frames. This is the "static camera" (or "static background") case, as opposed to the moving-camera (e.g. pan) case above, which requires the stitching tool described earlier to generate a background composite.
Clips or frame sequences involving little or no camera motion provide the simplest case for generating a background frame-image "composite" useful for colorizing the clip's background. However, because even a "static" camera experiences slight vibration for various reasons, a static-background compositing tool cannot assume perfect pixel alignment from frame to frame; inter-frame shifts accurate to one pixel must be assessed so that each contributing frame's pixel data can be optimally registered before being added into the composite (average). The static-background compositing tool provides this capability, along with all of the data needed, after the composite is generated, to extract background colorization information for each contributing frame.
Moving foreground objects such as actors are masked, leaving the unmasked background and static foreground objects. Wherever the masked moving objects expose background or foreground, the previously occluded background and foreground are copied into the single image, with priority and appropriate offsets to compensate for camera motion. The offset information is included in a text file associated with the single background representation, so that the resulting mask information can be applied to every frame in the scene cut with the proper mask offset.
The background compositing output data consist of a grayscale TIFF image file (bkg.*.GIF) containing the averaged input background pixel values, suitable for colorization/depth enhancement, and an associated background text data file (bkg.*.msk, bkg.*.lut, ...) created after the background-mask / coloring-data / depth-enhancement components required for background-mask extraction have been established. The background text data provide the filename, mask offsets, and other frame-dimension information for each contributing (input) frame associated with the composite, with the following per-frame record format per line: frame filename, frame x offset, frame y offset, frame width, frame height, frame left-margin x maximum, frame right-margin x minimum. Except for the first field (the frame filename, a character string), each of these data fields is an integer.
Initialization: initialization of the static-background compositing process involves creating the composite background image buffer and gathering the data required to initialize it. This requires a loop over all contributing input image frames. Before any composite-data initialization can occur, the contributing input frames must be identified and loaded, and all foreground objects must be identified/colorized (i.e. tagged with mask labels so they are excluded from the composite). These steps are not part of the static-background compositing process itself; they occur while browsing the database or directory tree, selecting and loading the relevant input frames, and painting / depth-assigning the foreground objects, before the compositing tool is invoked.
Get frame shifts: neighboring image background data in a static-camera cut can exhibit small mutual vertical and horizontal offsets. Taking the first frame of the sequence as the baseline, the background image of each subsequent frame is compared with the first frame, row by row and column by column, to generate two "measurement" histograms of horizontal and vertical offsets over all measurable image rows and columns. The modes of these histograms provide the most frequent (and most probable) shift assessments, which are recorded and stored in the per-frame [iframe] arrays DVx[iframe] and DVy[iframe]. These offset arrays are generated in the loop over all input frames.
Get maximum frame shift: while the DVx[], DVy[] offset-array data are being generated in the initialization loop over input frames, the DVx[], DVy[] values are scanned to find the absolute maximum values DVxMax, DVyMax. These values are needed to size the resulting background composite image appropriately, so that it accommodates the pixels of all contributing frames without clipping.
Get frame margins: an additional procedure is called during the initialization loop over input frames to find the right edge of the left image margin and the left edge of the right image margin. Because pixels in the margins have values of zero or near zero, the column indices of these edges are found by evaluating the average image-column pixel values and their variation. The edge column indices are stored in the per-frame [iframe] arrays lMarg[iframe] and rMarg[iframe], respectively.
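The histogram-mode shift measurement might be sketched as follows. This is an assumed implementation: the per-row matching metric (mean absolute difference) and the search range are choices made for the example, and only the horizontal case is shown; the vertical case over columns is symmetric.

```python
from collections import Counter

# Each image row votes for its best horizontal offset against the baseline
# frame; the histogram mode (most frequent vote) across all rows gives the
# frame's horizontal shift DVx.
def row_best_offset(row, base_row, max_shift=3):
    best, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(row[i], base_row[i - s]) for i in range(len(row))
                 if 0 <= i - s < len(base_row)]
        err = sum(abs(a - b) for a, b in pairs) / len(pairs)
        if err < best_err:
            best, best_err = s, err
    return best

def estimate_dx(frame, base):
    votes = Counter(row_best_offset(r, b) for r, b in zip(frame, base))
    return votes.most_common(1)[0][0]   # histogram mode = most frequent offset

base = [[0, 1, 2, 3, 4, 5, 6, 7]] * 4
shifted = [[r[(i - 2) % 8] for i in range(8)] for r in base]  # shifted right by 2
```

Taking the mode rather than the mean makes the measurement robust to individual rows that match poorly, e.g. rows crossing a masked foreground object.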
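A sketch of the margin search follows, under the assumption that a margin column is one whose mean pixel value is at or near zero. lMarg is returned as the right edge of the left margin and rMarg as the left edge of the right margin, matching the arrays described above; the epsilon tolerance is an assumption for the example.

```python
# Scan inward from each side: a column belongs to the blank film margin when
# its mean pixel value is (near) zero.
def find_margins(frame, eps=1.0):
    width = len(frame[0])
    col_mean = [sum(row[c] for row in frame) / len(frame) for c in range(width)]
    lmarg = -1
    while lmarg + 1 < width and col_mean[lmarg + 1] <= eps:
        lmarg += 1          # right edge of the left margin
    rmarg = width
    while rmarg - 1 >= 0 and col_mean[rmarg - 1] <= eps:
        rmarg -= 1          # left edge of the right margin
    return lmarg, rmarg

frame = [[0, 0, 50, 60, 70, 0]] * 3   # two blank columns left, one right
```

With no blank columns the sketch returns (-1, width), i.e. margins of zero extent on both sides.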
Expanding frame shifts by the maxima: the frame shifts found by the GetFrameShift() procedure are relative to the "baseline" first frame of the contributing frame sequence, whereas the required frame shift values are shifts/offsets relative to the resulting background composite frame. The dimensions of the background composite frame equal the dimensions of the first contributing frame expanded on all sides by vertical and horizontal margins of width DVxMax, DVyMax pixels. The shift offsets must therefore include the margin widths relative to the resulting background frame, so the offsets computed against the first frame need the following added for every iframe:
DVx[iframe]=DVx[iframe]+DVxMax
DVy[iframe]=DVy[iframe]+DVyMax
Initialize the composite image: a frame-buffer class object instance is created for the resulting background composite. The resulting background composite has the dimensions of the first input frame increased by 2*DVxMax pixels horizontally and 2*DVyMax pixels vertically. The first input frame's background image pixels (the unmasked, non-foreground pixels) are copied into the background image buffer with the appropriate shift offsets. The associated pixel composite-count buffer values are initialized to one (1) for pixels that received initialization, and to zero (0) otherwise. For the processing flow for extracting a background, e.g. as occurs when generating frame masks for all frames of a scene, see Figure 38A. Figure 38B illustrates the determination of the frame shifts caused by, e.g., a camera pan, and of the margin amounts. After the shifted images have been determined and overlaid, e.g. per desired frame, the composite image is saved.
Figure 39A shows the determination of edgeDetection (edge detection) and snap points (1.1 and 1.2, respectively), which are detailed in Figures 39B and 39C respectively and which enable one of ordinary skill in the art to implement an image edge-detection routine via an averaging filter, a gradient filter, filling the gradient image, and comparison against a threshold. In addition, the GetSnapPoint (get snap point) routine of Figure 39C shows the determination of NewPoint (new point) based on the BestSnapPoint (best snap point) determined from the RangeImage being less than the illustrated MinDistance (minimum distance).
Figures 40A-C show how a bimodal threshold tool may be implemented in one or more embodiments of the invention. Creation of the light/dark cursor-shape image is performed with the MakeLightShape (make light shape) routine, after which the light/dark values for the shape are applied with the corresponding routines shown at the bottom of Figure 40A; these routines are shown in Figures 40C and 40B. Figures 41A-B show the calculation of FitValue (fit value) and of the gradients used in one or more of the routines above.
Composite frame loop: the input frames are composited (added) into the resulting background via a loop over the frames in sequence. The input frame background pixels are added into the background image buffer using the associated per-frame offsets (DVx[iframe], DVy[iframe]), and for each pixel receiving a composite addition, the associated pixel composite-count buffer value is incremented by one (1) (a separate composite-count array/buffer is provided for this purpose). Only background pixels, i.e. those pixels with no associated input mask index, are composited (added) into the resulting background; pixels with nonzero (tagged) mask values are treated as foreground pixels, are not subject to background compositing, and are therefore ignored. A status bar in the GUI is advanced on each pass through the input frame loop.
Composite completion: the final step in generating the output composite image buffer is evaluating the pixel averages that form the composite image. When the composite frame loop completes, the background image pixel values represent the sums of all contributing, registered input frame pixels. Because the resulting output pixels must be the averages of these values, each must be divided by the count of contributing input pixels. As mentioned, the per-pixel counts are provided by the associated pixel composite-count buffer. All pixels with nonzero composite counts are averaged; the remaining pixels stay zero.
Composite image save: a TIFF-format grayscale output image with 16 bits per pixel is generated from the composited, averaged background frame buffer. The output file name is formed from the first contributing input frame filename by prepending the prefix "bkg." (and, if required, appending the usual ".GIF" image extension), and the file is written to the associated background path "../BckgrndFrm" if available, or otherwise to the default path (the same as the input frames).
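The composite loop and the final averaging step can be condensed into a small sketch. Assumptions for the example: one-dimensional "frames", a mask value of zero meaning background, and a single integer per-frame offset standing in for DVx[iframe]/DVy[iframe].

```python
# Unmasked pixels are summed into the composite buffer with their per-frame
# offset, a parallel count buffer tracks contributions, and each output pixel
# is its sum divided by its count; pixels that never received data stay zero.
def composite(frames, masks, offsets, out_len):
    acc = [0.0] * out_len
    cnt = [0] * out_len
    for frame, mask, dv in zip(frames, masks, offsets):
        for i, (v, m) in enumerate(zip(frame, mask)):
            if m == 0:              # skip tagged foreground pixels
                acc[i + dv] += v
                cnt[i + dv] += 1
    return [a / c if c else 0.0 for a, c in zip(acc, cnt)]

bg = composite(
    frames=[[10, 20, 30], [20, 30, 40]],
    masks=[[0, 0, 1], [0, 0, 0]],   # one foreground pixel in frame 0
    offsets=[0, 1],                 # frame 1 registered one pixel to the right
    out_len=4,
)
```

Note how the foreground pixel in frame 0 contributes nothing, yet its location is still filled by the registered pixel from frame 1, which is the whole point of the composite.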
Background text data save: an output text data file containing the parameter values relevant to background-mask extraction is generated from the initialization phase described above (Figures 40A-C). As mentioned in the overview (see Figure 39A), each text data record contains: frame filename, frame x offset, frame y offset, frame width, frame height, frame left-margin x maximum, and frame right-margin x minimum.
The output text data file name is formed from the root name of the first contributing input frame by prepending the prefix "bkg." and appending the extension ".txt", and the file is written to the associated background path "../BckgrndFrm" if available, or otherwise to the default path (the same as the input frames).
Example: the complete output text data file named "bkg.02.00.06.02.txt":
C:\New_Folder\Static_Backgrounding_Test\02.00.06.02.GIF 1 4 1920 1080 0 1919
C:\New_Folder\Static_Backgrounding_Test\02.00.06.03.GIF 1 4 1920 1080 0 1919
C:\New_Folder\Static_Backgrounding_Test\02.00.06.04.GIF 1 3 1920 1080 0 1919
C:\New_Folder\Static_Backgrounding_Test\02.00.06.05.GIF 2 3 1920 1080 0 1919
C:\New_Folder\Static_Backgrounding_Test\02.00.06.06.GIF 1 3 1920 1080 0 1919
Data cleanup: the memory allocated for the data objects used by the static-background compositing process is released. These include the background compositing GUI session object and its member arrays DVx[], DVy[], lMarg[], rMarg[], as well as the background composite image buffer object, whose contents were previously saved to disk and are no longer needed.
Composite background colorization / depth assignment
Once the background has been extracted as described above, the single frame can be masked by an operator.
The mask data applied to the background are transferred, using the background compositing offset data, so that the masks are placed appropriately for each of the successive frames used to create the composite.
Wherever a mask does not already exist (e.g. over foreground actors), the background mask data are applied to each successive frame.
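The per-frame application of the single background mask might look like the following sketch, one-dimensional for brevity. Here dx stands in for the per-frame offset from the background text data, and a nonzero frame mask value marks a pre-existing foreground mask that must be preserved; both conventions are assumptions for the example.

```python
# The composite-space background mask is sampled at the frame's registered
# offset, and written only where the frame has no pre-existing mask.
def apply_background_mask(frame_mask, bkg_mask, dx):
    out = frame_mask[:]
    for i in range(len(out)):
        if out[i] == 0:             # only where no mask pre-exists
            out[i] = bkg_mask[i + dx]
    return out

bkg_mask = [7, 7, 8, 8, 9]          # mask indices in composite space
frame_mask = [0, 3, 0]              # 3 = existing foreground mask
result = apply_background_mask(frame_mask, bkg_mask, dx=1)
```

The foreground mask value survives untouched while the surrounding pixels pick up the appropriately offset background mask indices.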
Figure 32 shows a frame sequence in which all moving objects (actors) are masked with separate color transforms / depth enhancements.
Figure 33 shows, for clarity, a selection of frames from the sequence prior to the application of background mask information. All moving elements have been fully masked using the automatic mask-fitting algorithm.
Figure 34 shows the static background and foreground information minus the previously masked moving objects. In this case, the single representation of the complete background is masked with color transforms in a manner similar to the moving objects. Note that the outlines of the removed foreground objects appear truncated and unrecognizable because they cross the input frame sequence interval; i.e., the black objects in the frame represent regions where the moving objects (in this case actors) never expose the background and foreground, i.e. regions of missing background image data 3401. For colorization-only projects, the black objects are ignored during the masking operation, because the resulting background masks are later applied, wherever a mask does not already exist, to all frames used for the single representation of the background. For depth-related projects, the black objects where missing background image data 3401 exist may be rendered artistically or realistically, for example to fill in information utilized when converting a two-dimensional image into a three-dimensional image. Because these regions are regions in which pixels cannot be borrowed from other frames, since they are never exposed in the scene, painting or otherwise creating believable imagery there allows all background information to exist for an artifact-free two-dimensional-to-three-dimensional conversion. For example, to accompany the artifact-free three-dimensional images created from two-dimensional images having never-exposed regions in the scene, backgrounds can be generated that have all, or enough, of the required background data for the background areas that are always occluded. The missing background image data 3401 can be painted, drawn, created, computer generated, or otherwise obtained, e.g. from the studio, so that enough information exists in the background, including the black regions, to translate foreground objects horizontally and to use the generated background data for the edges of the translated occlusion areas. This allows artifact-free three-dimensional images to be generated, because horizontal translation of foreground objects that would otherwise expose always-occluded regions of the scene results in the use of the newly created background data, rather than stretching objects or warping pixels, which creates artifacts detectable as errors by human observers. Thus, for depth-enhanced frames, the occluded areas are filled with enough horizontally translated photorealistic image data, or all occluded areas are rendered with backgrounds that appear realistic enough and are filled in sufficiently (i.e. drawn and colorized and/or depth assigned), yielding artifact-free edges. See also Figure 70 and Figures 71-76, respectively, and the associated descriptions. The generation of missing background data can also be exploited to create artifact-free edges along computer-generated elements.
Figure 35 shows the successive frames of the static-camera scene cut after the background mask information has been applied to every frame with the appropriate offsets and wherever mask information did not already exist.
Figure 36 shows a representative sample of frames from the static-camera scene cut after the background information has been applied with the appropriate offsets and wherever mask information did not already exist.
Color rendering: after color processing has been completed for each scene, the subsequent or successive multicolor motion masks and their associated look-up tables are combined in 24-bit or 48-bit RGB color space and rendered as TIF or TGA files. These uncompressed high-resolution images are then rendered to various media such as HDTV or 35mm negative film (via a digital film scanner), or to various other standard and nonstandard video and film formats, for viewing and exhibition.
Processing flow:
Digitization, stabilization, and noise reduction:
1. The 35mm film is digitized to 1920x1080x10 in any one of several digital formats.
2. Each frame undergoes standard stabilization techniques to minimize the natural weave inherent in film as it traverses the camera sprockets, along with any other suitable digital motion-picture stabilization technology employed. Frame-differencing techniques are also employed to further stabilize the image stream.
3. Each frame then undergoes noise reduction to minimize random film grain and any electronic noise that may have entered during the acquisition process.
Pre-production scene analysis into camera elements and creation of the visual database:
1. Each scene of the film is broken down into background and foreground elements and moving objects using various subtraction, phase-correlation, and focal-length estimation algorithms. Background and foreground elements can include computer-generated elements or elements present in, for example, the original film footage.
2. The backgrounds and foregrounds of pans are combined into a single frame using uncompensated (lens) stitching routines.
3. The foreground is defined as any object and/or region that moves in the same direction as the background but can exhibit a faster vector because of its proximity to the camera lens. In this method, a pan is reduced to a single representative image containing all of the background and foreground information taken from the multiple frames.
4. Zooms are sometimes treated as a tiled database, in which a matrix is applied to key frames, with reference vector points corresponding to feature points in the image and to corresponding feature points on the mask applied over the composite mask, accommodating any distortion.
5. A database is created from the frames that make up the single representative or composite frame (derived from the common and novel pixels assigned to them during each pan, or from the multiple frames from which they are derived).
6. In this manner, masks representing the underlying look-up tables are applied over the representations of the novel and common pixels of the background and foreground that are properly assigned to the respective frames.
Pre-production design, background design:
1. The entire background is colorized / depth assigned as a single frame from which all moving objects have been removed. Background masking is accomplished using routines that employ standard paint, fill, digital airbrush, transparency, texture, and similar tools. Color selection is accomplished using 24-bit color look-up tables that automatically adjust to match the density and luminance of the underlying gray scale. Depth assignment is accomplished by assigning depths within the single composite frame, assigning geometric shapes, entering numerical values for objects, or in any other manner. In this way, creatively selected colors/depths are applied that are appropriate for the range of gray scale/depth underlying each mask. The standard color wheel used to select color ranges detects the underlying grayscale dynamic range and determines the range of colors from which the designer can select (i.e. from those color saturations that match the gray-scale luminance underlying the mask).
2. Each look-up table allows a multitude of colors to be applied to the range of gray levels underlying the mask. The assigned colors adjust automatically according to luminance and/or according to pre-selected color vectors, compensating for changes in the underlying gray-scale density and luminance.
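One way to realize the automatic luminance adjustment described above is to hold the designer's chosen hue and saturation fixed while letting the underlying gray level drive the value channel. The sketch below uses Python's standard colorsys module to illustrate the principle; it is not the patent's look-up-table implementation, and the parameter choices are assumptions for the example.

```python
import colorsys

# Luminance-preserving colorization: the applied color automatically tracks
# the brightness of the gray scale beneath the mask because value = gray.
def colorize(gray, hue, sat):
    """gray in [0,1] -> (r, g, b) whose brightest channel equals gray."""
    return colorsys.hsv_to_rgb(hue, sat, gray)

dark = colorize(0.2, 0.6, 0.5)     # shadow region under the mask
bright = colorize(0.9, 0.6, 0.5)   # highlight region under the same mask
```

Both pixels receive the same designer-selected hue; only their brightness differs, which is the behavior the automatically adjusting look-up tables provide.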
Pre-production design, moving-element design:
1. Design moving-object frames are created that include the entire scene background together with a single representative moment of motion in the scene in which all characters and elements within the scene are present. These moving non-background elements are called design frame objects (DFOs).
2. Each DFO is broken down into design regions of interest, with special attention focused on elements within the DFO that exhibit distinct contrast and that can readily be isolated using various gray-scale and luminance analyses (such as pattern-recognition and/or edge-detection routines). Because an existing color film may be used for depth enhancement, the regions of interest may be selected with color taken into account.
3. The underlying gray-scale and luminance distribution of each masked region is displayed graphically, together with a graphical representation of the region's shape with area, perimeter, and various weighting parameters, and other gray-scale analyses including pattern analysis.
4. Color selections are determined for each region of interest comprising each object, based on appropriate research into the film's genre, period, creative intent, etc., and using the 24-bit color look-up tables that automatically adjust to match the density and luminance of the underlying gray scale, so that appropriate and creatively selected colors are applied. The standard color wheel detects the underlying gray-scale range and limits the designer to selecting only from those color saturations that match the gray-scale luminance underlying the mask. For depth projects, the depth assignment can be made or adjusted until, for example, a realistic depth is obtained.
5. This process continues until reference design masks have been created for all moving objects in the scene.
Pre-production design, keyframe-object design assistance:
1. Once all color selection / depth assignment has been substantially completed for a particular scene, the design moving-object frame is used as a reference for creating the larger number of keyframe objects in the scene.
2. Keyframe objects (all moving elements in the scene that are not background elements, such as people, cars, etc.) are selected for masking.
3. The determining factor for each successive keyframe object is the amount of new information between one keyframe and the next keyframe object.
Method for colorizing / depth enhancing moving elements in successive frames:
1. The production colorist (operator) loads multiple frames into the display buffer.
2. One of the frames in the display buffer will include the keyframe from which the operator obtains all masking information. The operator makes no creative or color/depth decisions, since all color-transform information is already encoded in the keyframe masks.
3. The operator can, however, switch from the colorized or applied look-up table to semi-transparent masks differentiated by arbitrary colors with distinct contrast.
4. The operator can view the motion of all frames in the display buffer, observing the motion that occurs in successive frames, or can step through the motion from one keyframe toward the next.
5. The operator propagates (copies) the keyframe mask information to all frames in the display buffer.
6. The operator then executes the mask-fitting routine successively on each frame. Figure 37A shows the general processing flowchart for mask fitting, which breaks down into the subsequent detailed flowcharts of Figures 37B and 37C. This program performs a best fit based on gray scale/luminance and edge parameters, and performs pattern recognition based on the gray-scale and luminance patterns of the keyframe or previous frame in the display. For computer-generated elements, the mask-fitting routine is skipped, because the mask or alpha defines digitally created (and thus not operator-defined) edges that precisely delineate the boundaries of the computer-generated element. The mask-fit operation takes the computer-generated element's mask or alpha into account and stops when it hits the edge of a computer-generated element's mask, since those boundaries are considered accurate per step 3710 of Figure 37C, regardless of gray level. This enhances the accuracy of mask edges where, for example, computer-generated elements and operator-defined masks are reshaped over the same underlying luminance. As shown in Figure 37A, mask fitting initializes the region and the fit-grid parameters, then calls the compute-fit-grid routine, and then interpolates the mask in the fit-grid routine; this executes on any computer as described herein, with these routines specifically configured to compute the fit grid as defined in Figures 37B and 37C. The processing flow of Figure 37B runs from the initialize-region routine, through initialization of the image rows and columns and the reference image, into the CalculateFitValue routine, which in turn calls the fit-gradient routine, which computes xx and yy as the differences between xfit, yfit and the gradients for x and y. If xx and yy yield a FitValue greater than the fit for x, y, then the xfit and yfit values are stored in the FitGrid. Otherwise, processing resumes at the fit-gradient routine with new values of xfit and yfit. When processing over the grid dimensions in x and y completes, the mask is interpolated per Figure 37C. After initialization, the FitGridCell indices i and j are determined, and bilinear interpolation is performed over the fitGridA-D positions, wherein the mask is snapped at 3710 to any boundary found for any CG element (i.e. to the known alpha boundary, or to a depth-value boundary considered to be the verified, error-free, correct mask boundary, defining, for example, the edge of a digitally rendered element). Mask fitting continues up to the mask dimensions defined by xend and yend.
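The core best-fit idea of the mask-fitting step can be illustrated with a one-dimensional sketch that uses exhaustive search over a small window in place of the gradient walk and fit grid of Figures 37B-C. The reference gray levels, the squared-difference fit value, and the search radius are all assumptions for the example.

```python
# The gray-level pattern under the key-frame mask is matched against the new
# frame; the offset with the best (lowest-error) fit value tells how far the
# mask must be moved in the new frame.
def best_fit_offset(reference, frame, start, search=4):
    n = len(reference)
    best, best_err = 0, float("inf")
    for d in range(-search, search + 1):
        s = start + d
        if s < 0 or s + n > len(frame):
            continue
        err = sum((reference[k] - frame[s + k]) ** 2 for k in range(n))
        if err < best_err:
            best, best_err = d, err
    return best

reference = [90, 120, 200, 120]            # gray levels under the key-frame mask
frame = [10, 10, 90, 120, 200, 120, 10]    # object has moved two pixels right
shift = best_fit_offset(reference, frame, start=0)
```

A real implementation fits a grid of such offsets over the region and bilinearly interpolates between grid cells, as Figure 37C describes, rather than applying a single rigid offset.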
7. If the motion creates a large displacement of a region from one frame to the next, the operator can select individual regions for mask fitting. The displaced region is moved to the approximate position of the region of interest, where the program attempts to create a best fit. This routine continues in turn for each region of interest until all masked regions have been applied to the moving objects in all successive frames in display memory.
A. The operator clicks, on the corresponding region in each successive frame, the single mask belonging to frame 2. The computer performs a best fit based on gray scale/luminance, edge parameters, gray-scale patterns, and other analyses.
B. This routine continues in turn for each region until all regions of interest in frame 2 have been repositioned.
C. The operator then indicates completion with a mouse click, and the gray-scale parameters of the masks in frame 2 are compared with those in frame 3.
D. This operation continues until all frames between two or more keyframes are completely masked.
8., when existence is blocked, use the best fit parameters of amendment.Once block over, the frame before blocking is used as the reference of the frame after blocking by operator.
9. after all motions complete, successively by background/arrange mask to be applied to every frame.Be applied as: there is no the place application mask of mask.
10. the mask that the Bezier surrounding area-of-interest or polygon also can be used to be used in moving target forms animation.
A. multiframe is loaded in display-memory, and near area-of-interest application Bezier point and Polygonal Curves point, wherein these points automatically snap to the edge detected in view data.
B. once polygon or Bezier surround the target in frame 1, operator regulates polygon in the last frame in the frame loaded in display-memory or Bezier.
C. The operator then executes a fitting routine that snaps the polygon or Bezier points, together with their control curves, to all intermediate frames, animating the mask across all frames in display memory.
D. The polygon and Bezier algorithms include control points for rotation, scaling, and translation, to handle zooms, pans, and complex camera motion where necessary.
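The keyframe-to-keyframe animation of steps A-D amounts to interpolating the control points across the intermediate frames. A linear sketch is shown below; a real fitting routine would additionally snap the interpolated points to detected edges, and the function name is an assumption.

```python
# Minimal sketch: linearly tween polygon/Bezier control points between
# two keyframes so the mask animates across n_frames in-between frames.

def tween_points(key_a, key_b, n_frames):
    """key_a, key_b: lists of (x, y) control points in the first and
    last keyframes.  Returns one point list per intermediate frame."""
    out = []
    for f in range(1, n_frames + 1):
        t = f / (n_frames + 1)           # fractional position in time
        out.append([((1 - t) * ax + t * bx, (1 - t) * ay + t * by)
                    for (ax, ay), (bx, by) in zip(key_a, key_b)])
    return out
```

A single point moving from (0, 0) to (4, 8) over three intermediate frames yields evenly spaced positions.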
Figure 42 shows two image frames, separated in time by several frames, of a person with a floating crystal ball, in which each of the various objects in the frames is to be converted from a two-dimensional object to a three-dimensional object. As shown, the crystal ball has moved relative to the first frame (shown at top) by the time of the second frame (shown at bottom). Because the frames are related, most of the masking information can be used for both frames even though they are separated in time, reshaped as previously described using embodiments of the invention. For example, using the mask reshaping techniques described above for colorization, in which the underlying gray levels are used to track and reshape masks, most of the labor involved in converting a two-dimensional movie into a three-dimensional movie is eliminated. This is because once a key frame has color or depth information applied to it, the mask information can be propagated automatically through the entire frame sequence, eliminating the need, for example, to adjust wireframe models. Although only two images are shown for clarity, these images are separated in time by several other images in which the crystal ball moves slowly to the right.
Figure 43 shows the masking of a first object in the first image frame to be converted from a two-dimensional image into a three-dimensional image. In this figure, the first object masked is the crystal ball. Objects need not be masked in any particular order. In this case, a roughly circular mask is applied to the crystal ball with a simple free-form drawing tool. Alternatively, a circular mask could be dropped onto the image, resized, and moved into position to correspond to the circular crystal ball. However, since most objects to be masked are not simple geometric shapes, the more general method is illustrated here. The gray-scale values of the masked object are thereafter used to reshape the mask in subsequent frames.
Figure 44 shows the masking of a second object in the first image frame. In this figure, the hair and face of the person behind the crystal ball are masked as the second object using the free-form drawing tool. Edge detection or gray-level thresholding can be used to set the mask edges precisely, as described above for colorization. The object is not required to be a single simple item; that is, the person's hair and face may be masked as a single item or separately, and depth may accordingly be assigned to both together or individually, as desired.
Figure 45 shows the two masks displayed as see-through masks, allowing the portions of the first image frame associated with the masks to be inspected. In this figure the masks are shown as colored transparent masks so that they can be adjusted if desired.
Figure 46 shows the masking of a third object in the first image frame. In this figure, another article of the person's clothing is selected as the third object. The free-form tool is used to define the shape of the mask.
Figure 47 shows the three masks displayed as see-through masks, allowing the portions of the first image frame associated with the masks to be inspected. Again, the masks can be adjusted based on the transparent display if desired.
Figure 48 shows the masking of a fourth object in the first image frame. As shown, the person's jacket forms the fourth object.
Figure 49 shows the masking of a fifth object in the first image frame. As shown, the person's sleeve forms the fifth object.
Figure 50 shows the control panel for creating a three-dimensional image, including the association of layers and three-dimensional objects with the masks in an image frame, and specifically shows the creation of a plane layer for the sleeve of the person in the image. On the right side of the screen dump, the "Rotate" button is enabled and a "Translate Z" amount is shown, indicating, as shown in the next figure, that the sleeve rotates forward.
Figure 51 shows a three-dimensional view of the various masks of Figures 43-49, in which the mask associated with the person's sleeve is shown at the right of the page as a plane layer rotated toward the left and right viewpoints. Likewise, as shown, the masks associated with the jacket and face are assigned Z dimensions, or depths, in front of the background.
Figure 52 shows a slightly rotated view of Figure 51. This shows the rotated plane layer of the sleeve tilting toward the viewpoint. The crystal ball appears as a flat object still in two dimensions, since it has not yet been assigned a three-dimensional object type.
Figure 53 shows a slightly rotated view of Figure 51 (and Figure 52) in which the sleeve is shown tilted forward, again without ever defining a wireframe model for the sleeve. Alternatively, a cylindrical object type could be applied to the sleeve to form an even more realistic three-dimensional shape; a plane type is shown here for clarity.
Figure 54 shows the control panel, specifically showing the creation of a sphere object for the crystal ball held in front of the person in the image. In this figure, a spherical three-dimensional object has been created by clicking the "create and select" button in the middle of the panel and dropped into the three-dimensional view; the sphere is then shown (after being translated and resized onto the crystal ball) in the next figure.
Figure 55 shows the sphere object applied to the flat mask of the crystal ball, illustrating the depth assigned to the crystal ball by projecting it onto the front and back of the sphere. The sphere object can be translated, i.e., moved along three axes, and resized to fit the object with which it is associated. As shown by the projection onto the sphere, the sphere object is slightly larger than the crystal ball; this ensures that every pixel of the crystal ball is assigned a depth. For finer work, the sphere object can also be resized to the actual size of the sphere, as desired.
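The per-pixel depth assignment of such a sphere primitive can be illustrated as projecting each mask pixel onto the front surface of the sphere. This is a geometric sketch under assumed names and conventions (smaller z = nearer the viewer), not the product's code.

```python
import math

# Minimal sketch: depth assigned to pixel (x, y) by a spherical depth
# primitive centered at (cx, cy) on screen and at z_center in depth.
# Pixels outside the sphere's silhouette get no depth from it.

def sphere_depth(cx, cy, radius, x, y, z_center):
    dx, dy = x - cx, y - cy
    d2 = dx * dx + dy * dy
    if d2 > radius * radius:
        return None                      # outside the sphere primitive
    # front surface of the sphere faces the viewer (nearer = smaller z)
    return z_center - math.sqrt(radius * radius - d2)
```

The center of the silhouette receives the nearest depth (z_center - radius); pixels at the rim receive z_center itself, which is why a primitive slightly larger than the crystal ball still assigns every crystal-ball pixel a depth.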
Figure 56 shows a top view of a three-dimensional representation of the first image frame, showing the Z dimension assigned to the crystal ball and showing that the crystal ball lies in front of the person in the scene.
Figure 57 shows the sleeve plane rotated about the X axis so that the sleeve appears to come further out of the image. The rotation plane of a defined three-dimensional object is shown by a circle with a line (the X axis) projecting through it, here the plane associated with the sleeve mask.
Figure 58 shows the control panel, specifically showing the creation of a head object to be applied to the face in the image, i.e., without the need to give the face realistic depth via, for example, a wireframe model. The head object is created using the "create and select" button in the middle of the screen and is shown in the next figure.
Figure 59 shows the head object in the three-dimensional view; it is too large and is not aligned with the actual human head. After the head object is created per Figure 58, it appears in the three-dimensional view as a generic depth primitive generally suitable for a head. This exploits the fact that highly accurate depth information is not necessary for the human eye; generic depth primitives can therefore be used for depth assignment, eliminating the need for three-dimensional wireframes. As described below, the head object is translated, rotated, and resized in the subsequent figures.
Figure 60 shows the head object in the three-dimensional view after adjustment to fit the face, e.g., resized and moved to the position of the actual head.
Figure 61 shows the head object in the three-dimensional view with its Y-axis rotation indicated by a circle; the Y axis has its origin at the person's head, allowing the head object to be rotated to correspond correctly to the orientation of the face.
Figure 62 shows the head object also rotated slightly clockwise about the Z axis to correspond to the slight tilt of the person's head. The mask display need not align perfectly with the face for the resulting three-dimensional image to be believable to the human eye. Tighter rotation and resizing can be employed when desired.
Figure 63 shows the mask propagated to the second and final image frame. All of the methods previously disclosed above for moving masks and reshaping them apply not only to colorization but also to depth enhancement. Once the masks have been propagated to another frame, all frames between the two frames can then be tweened. Tweening the frames applies the depth information (and, for non-color films, the color information) to the non-key frames.
Figure 64 shows the original position of the mask corresponding to the person's hand.
Figure 65 shows the reshaping of a key-frame mask, which may be performed automatically and adjusted manually if desired; any intermediate frame obtains tweened depth information between the first image frame mask and the second image frame mask. Motion tracking and mask reshaping provide great labor savings, while manual refinement of the masks is permitted for precise work when desired.
Figure 66 shows the missing information for the left viewpoint, highlighted in a solid color on the left side of the masked object over the underlying image, which arises when the foreground object (here the crystal ball) is moved to the right. In generating the left viewpoint of the three-dimensional image, occlusion data must be generated to fill the information missing from that viewpoint.
Figure 67 shows the missing information for the right viewpoint, highlighted in a solid color on the right side of the masked object over the underlying image, which arises when the foreground object (here the crystal ball) is moved to the left. In generating the right viewpoint of the three-dimensional image, occlusion data must be generated to fill the information missing from that viewpoint. Alternatively, a single camera viewpoint can be offset from the original camera viewpoint, but the amount of missing data is then larger for the new viewpoint. This approach can be used when many frames exist and some of the missing information can be found in adjacent frames, for example.
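Borrowing missing background from adjacent frames, as suggested above, can be sketched as a search across the scene's frames for one in which the pixel is not occluded by the foreground; the names and the list-of-rows data layout are illustrative assumptions.

```python
# Minimal sketch: find a frame of the scene in which pixel (x, y) is
# not covered by a foreground mask, and borrow its background value.

def borrow_pixel(frames, occluded, x, y):
    """frames: per-frame 2D pixel grids; occluded: per-frame 2D Boolean
    grids marking where the foreground covers the background."""
    for frame, occ in zip(frames, occluded):
        if not occ[y][x]:
            return frame[y][x]
    return None   # occluded in every frame: the data must be generated
```

A pixel covered in every frame returns None, which is exactly the "always occluded" case where a generated background (or, less desirably, smearing) is required.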
Figure 68 shows an anaglyph of the final depth-enhanced first image frame, viewable with red/blue 3-D glasses. The original two-dimensional image is now displayed in three dimensions.
Figure 69 shows an anaglyph of the final depth-enhanced second and last image frame, viewable with red/blue 3-D glasses; note the rotation of the person's head, the motion of the person's hand, and the motion of the crystal ball. The original two-dimensional image is now displayed in three dimensions because the masks have been moved/reshaped in this subsequent frame of the image sequence using the mask tracking/reshaping described above, and the depth information has been applied to the masks. As described above, the operations for applying depth parameters to subsequent frames are performed using a general-purpose computer having a central processing unit (CPU), memory, and a bus between the CPU and memory, specifically programmed to do so; the figures of computer screen displays shown here are intended to represent such a computer.
Figure 70 shows the right side of the crystal ball "smeared" with a fill pattern where information is missing for the left viewpoint; that is, the pixels at the right edge of the missing image area are taken and "smeared" horizontally to cover the missing information. Any other method for introducing data into the occluded areas is in keeping with the spirit of the invention. Stretching or smearing pixels where information is missing creates artifacts that a human viewer can recognize as errors. By obtaining or otherwise creating realistic data for the missing information, i.e., a generated background with which the missing information is filled, such fill methods, and therefore the artifacts, can be avoided. For example, providing an artist with a synthesized background or frame in which all missing information is designated, so that the artist can draw or paint the missing regions in a seemingly plausible manner, is one method of obtaining the missing information used in a two-dimension-to-three-dimension conversion project.
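The "smearing" fill itself can be sketched in a few lines: each missing pixel in a scanline is filled by repeating its nearest already-known neighbor. This is a minimal sketch of the artifact-prone fallback the text contrasts with a generated background; names are illustrative.

```python
# Minimal sketch of horizontal pixel smearing: walk a scanline and
# replace each missing pixel with the value of the pixel beside it,
# stretching the gap's edge color across the occluded region.

def smear_fill(row, missing):
    """row: scanline pixel values; missing: parallel Boolean list
    marking occluded pixels.  Returns the filled scanline."""
    row = list(row)
    for i, gap in enumerate(missing):
        if gap:
            row[i] = row[i - 1] if i > 0 else 0
    return row
```

The repeated edge value is exactly the streak a viewer can recognize as an error, which is why generated background data is preferred.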
Figure 71 shows a mask, or alpha plane, for a given frame of a scene, covering the upper torso and head 7101 of an actor and transparent wings 7102. The mask may include opaque areas shown as black and transparent areas shown as gray. The alpha plane can, for example, be generated as an 8-bit gray-scale "OR" of all foreground masks. Any other method of generating a foreground mask associated with a moving object, or with a foreground object to be delimited, is in keeping with the spirit of the invention.
Figure 72 shows the occlusion area, i.e., the obscured background image data 7201 in the colored sub-region corresponding to the actor of Figure 71, which is never revealed from the underlying background; this is where the missing information in the background of the scene or frame occurs. This region is the area of the background that is never exposed in any frame of the scene and therefore cannot be borrowed from another frame. When generating a synthesized background, for example, any background pixel covered by a moving-object mask or foreground mask can be given a simple Boolean true value, and pixels so marked in every frame are the occluded pixels, as shown in Figure 34.
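Computing the always-occluded region from the per-frame foreground masks can be sketched as follows: a background pixel is occluded for the scene only if a foreground mask covers it in every frame. The Boolean-grid representation is an assumption for illustration.

```python
# Minimal sketch: combine per-frame foreground masks (1 = foreground
# covers the background pixel) into the scene's always-occluded map,
# i.e., a logical AND across all frames.

def always_occluded(masks):
    h, w = len(masks[0]), len(masks[0][0])
    return [[all(m[y][x] for m in masks) for x in range(w)]
            for y in range(h)]
```

Pixels marked True here are the ones that can never be borrowed from another frame and must be generated (e.g., painted by an artist).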
Figure 73 shows the occlusion area with generated data 7201a for the obscured background image data, drawn artistically or otherwise rendered to produce a complete and realistic background for use in an artifact-free two-dimension-to-three-dimension conversion. See also Figure 34 and its description. As shown, Figure 73 also has masks drawn for the background objects, shown in colors differing from the source image. This permits colorization or colorization adjustments, for example, as desired.
Figure 73A shows the occlusion area with obscured background image data 7201b partially drawn or otherwise rendered to produce a background realistic enough for use in an artifact-free two-dimension-to-three-dimension conversion. In this example, the artist may paint a narrower version of the occlusion area so that when the second view is projected, i.e., when the foreground object is translated horizontally to expose the occlusion area, the offset applied to the foreground object still has enough realistic background to work with. In other words, the edges of the obscured background image data area can be painted far enough inward horizontally to allow some, or all, of the generated data to be used in generating the second viewpoint of the three-dimensional image pair.
In one or more embodiments of the invention, scenes from portions of a movie can be generated by an artist, for example with computer graphics, or sent to an artist for background completion. In one or more embodiments, a website can be created on which artists bid for background completion projects, the website being hosted, for example, on a computer system coupled to the Internet. Any other method of obtaining a background with enough information to render a two-dimensional frame as a pair of three-dimensional viewpoints is in keeping with the spirit of the invention, including rendering a complete background with realistic data for the entire occlusion area of Figure 72 (as shown in Figure 73), or for only a portion at the edges of the occlusion area of Figure 72 (as shown in Figure 73A). By estimating the background depth and the foreground object depth, and knowing the offset distance desired for the two viewpoints, it is therefore possible to obtain less than the entire occlusion area for use in an artifact-free two-dimension-to-three-dimension conversion. In one or more embodiments, a constant offset (e.g., 100 pixels at each edge of each occlusion area) or a certain percentage of the foreground object's size (e.g., 5%) can be marked to be created; if more data is needed, the frame is flagged for update, or smearing or pixel stretching can be employed to minimize the artifacts of the missing data.
Figure 74 shows a bright area at the shoulder on the right side of Figure 71 where obscured background image data 7201 occurs when generating the right viewpoint for the right image of a three-dimensional image pair. Obscured background image data 7201 represents the gap where stretching (also shown in Figure 70) or other artifact-generating techniques would be used when the foreground object is moved to the left to create the right viewpoint. The dark portion of the figure is taken from background data available in at least one frame of the scene.
Figure 75 shows an example of the stretched or "smeared" pixels 7201c corresponding to the bright area in Figure 74 (i.e., obscured background image data 7201); these pixels are created when a generated background is not used, i.e., when no background data is available for the region occluded in all frames of the scene.
Figure 76 shows the result of using generated data 7201a (or 7201b) for the obscured background image data 7201 in the region that is always occluded for the scene: the right viewpoint has no artifacts at the edge of the person's shoulder.
Figure 77 shows an example of a computer-generated element, here robot 7701, modeled in three dimensions and projected as a two-dimensional image. The background is gray to show the invisible areas. As shown in the following figures, metadata such as alpha, masks, depth, or any combination thereof is used to accelerate the conversion from a two-dimensional image to a pair of two-dimensional images, one for the left eye and one for the right eye, for three-dimensional viewing. Masking this character by hand, or even in a computer-assisted manner by an operator, would be extremely time-consuming, because hundreds if not thousands of sub-masks would be needed to render the depth (and/or color) of this complex object correctly.
Figure 78 shows the imported color and depth of the computer-generated element (i.e., robot 7803 with depth set automatically via the imported depth metadata), together with the original image separated into background 7801 and foreground elements 7802 and 7803 (the mountains and sky in the background and the soldier at lower left; see also Figure 79). Although the soldier exists in the original image, his depth is set by the operator, and shapes or masks, generally with varying depth, are applied at those depths relative to the original objects to obtain a stereoscopic image pair for left- and right-eye viewing (see Figure 79). As shown in the background, any area occluded throughout the scene, such as outline 7804 (the soldier's head projected onto the background), can be rendered, for example artistically, to provide believable missing data, as shown in Figure 73 relative to the missing data of Figure 73A, which yields artifact-free edges such as those shown in Figure 76. Importing the data for a computer-generated element can include reading per-pixel depth information for the computer-generated element 7701 and displaying that information in perspective on a computer display as the imported element, e.g., robot 7803. This import process saves a great deal of operator time and makes the conversion of two-dimensional movies to three-dimensional movies economically feasible. One or more embodiments of the invention store the masks and the imported data in computer memory and/or on computer disk drives for use by one or more computers in the conversion process.
Figure 79 shows a mask 7901 (forming part of the helmet of the rightmost soldier) associated with the photographic soldier 7802 in the foreground. Mask 7901, together with all of the other operator-defined masks shown in various artificial colors on the soldier, is applied to the different parts of the soldier that appear in the original image in front of the depth of the computer-generated element (i.e., robot 7803). The dashed lines extending horizontally from mask areas 7902 and 7903 show that a horizontal translation of the foreground object has occurred, and illustrate that where metadata exists for other elements of the movie, the imported metadata can be used to automatically and precisely correct the depth, or any over-painting of color, on the masked objects. For example, where alpha exists for an object appearing in front of a computer-generated element, the edge can be determined precisely. A file type that can be used to obtain such mask edge data is a file with an alpha channel and/or mask data, such as an RGBA file (see Figure 80). In addition, using generated background data for the missing regions at the horizontally translated mask areas 7902 and 7903 allows an artifact-free two-dimension-to-three-dimension conversion.
Figure 80 shows the imported alpha layer, shown as a dark halo, which can also be used as a mask layer to delimit the edges of the three soldiers 7802, designated soldiers A, B, and C, whose operator-defined masks may be coarser. In addition, optional computer-generated elements such as dust can be inserted into the scene along the line labeled "dust" to increase the realism of the scene if desired. Any of the background, foreground, or computer-generated elements can be used to fill the final left and right image pair as needed.
Figure 81 shows the result when a moving element such as the soldier is masked over a computer-generated element such as the robot without adjusting the operator-defined mask. Without using the metadata associated with the original image objects, such as matte or alpha 8001, artifacts appear where the operator-defined mask does not align exactly with the edges of the masked object. In the uppermost picture, a light border 8101 appears at the soldier's lip, while the lower picture shows an artifact-free edge because the alpha of Figure 80 is used to delimit the edges of any operator-defined mask. Applying the alpha metadata of Figure 80 to the operator-defined mask edges of Figure 79 thus yields artifact-free edges in the overlap regions. Those skilled in the art will appreciate that successively nearer elements, combined with their alphas, are layered from back to front, each deposited at its respective depth, to create the final image pair for left-eye and right-eye viewing.
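The back-to-front layering with alpha described above is the standard "over" compositing operation. Below is a per-pixel sketch on single-channel values (a real pipeline applies this per RGB channel, per pixel, with the imported alpha); names are illustrative.

```python
# Minimal sketch of back-to-front alpha compositing ("over"):
# each nearer layer covers the accumulated result in proportion
# to its alpha.

def over(fg, alpha, bg):
    """Composite one foreground value over a background value."""
    return fg * alpha + bg * (1.0 - alpha)

def composite(layers, background):
    """layers: (value, alpha) pairs ordered from farthest to nearest."""
    acc = background
    for value, alpha in layers:
        acc = over(value, alpha, acc)
    return acc
```

A fully opaque near layer hides everything behind it, while a half-transparent layer blends equally with whatever has been composited so far.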
Embodiments of the invention can minimize or eliminate iterative workflow paths back to different work groups by generating portable translation files, editable on a per-pixel basis, that allow real-time editing of the 3D image without re-rendering, for example to alter layers/colors/masks and/or to remove artifacts. For example, a masking group takes the source images and creates masks for the items, regions, or human-recognizable objects in each frame of the image sequence making up the movie. A depth augmentation group applies depth, and for example shape, to the masks created by the masking group. When the image pair is rendered, one or more embodiments of the invention can generate left and right viewpoint images along with left and right translation files. The left and right viewpoint images allow 3D viewing of the original 2D image. A translation file specifies the pixel offset of each source pixel in the original 2D image, for example in the form of a UV or U map. These files are generally associated with an alpha mask for each layer, e.g., a layer for an actress, a layer for a door, a layer for the background, and so on.
These translation files or maps are passed from the depth augmentation group, which renders the 3D images, to a quality assurance group. This allows the quality assurance group (or other work groups, such as the depth augmentation group) to perform real-time editing of the 3D image without re-rendering, for example to alter layers/colors/masks and/or to remove artifacts such as masking errors, without the processing time/re-rendering and/or delays associated with an iterative workflow that would otherwise require the masks to be sent back to the masking group for rework; the masking group may be in a third-world country with unskilled labor on the other side of the world. In addition, when the left and right images, i.e., the 3D images, are rendered, the Z depths of regions in the image, such as an actor for example, can also be passed to the quality assurance group along with the alpha masks, and the quality assurance group can then adjust depth without re-rendering with the original rendering software. This can be performed, for example, using the generated obscured background data for any layer, allowing "downstream" real-time editing without, for example, re-rendering or ray-tracing. Quality assurance can give feedback to individual members of the masking or depth augmentation groups, who can be notified of errors without anything being reworked by the upstream groups, and without waiting for or requiring such rework, for the current project to produce the desired work product. This permits feedback while eliminating the delays associated with sending work product back for rework and waiting for the reworked result. Eliminating such iterations provides enormous savings in the end-to-end, or wall-clock, time and cost of a conversion project, increasing profits and minimizing the labor required to realize the workflow.
Figure 82 shows a source image to be depth-enhanced and provided along with left and right translation files (see Figures 85A-D and 86A-D for embodiments of translation files) and alpha masks (such as those shown in Figure 79), so as to allow, for example, a downstream work group to perform real-time editing of the 3D image without re-rendering or ray-tracing the entire image sequence of the scene, for example to alter layers/colors/masks, to remove artifacts, and/or to adjust depths or otherwise alter the 3D image, without the iterative workflow path back to the original work group (per Figure 96 as compared with Figure 95).
Figure 83 shows the masks generated by the masking work group, to which the depth augmentation group applies depth; the masks are associated with objects, such as the human-recognizable objects in the source image of Figure 82 for example. Typically, unskilled labor is used to mask the human-recognizable objects in the key frames of a scene or image sequence. Unskilled labor is inexpensive and generally located offshore, and hundreds of workers can be employed cheaply to perform the laborious work associated with masking. Any existing colorization masks can be used as a starting point for the 3D masks, which can be combined to form a 3D mask outline that is broken into sub-masks delimiting the different depths within a human-recognizable object. Any other method of obtaining masks for image regions is in keeping with the spirit of the invention.
Figure 84 shows the regions to which depth is applied, typically rendered darker for nearer objects and lighter for farther objects. This view provides a quick overview of the relative depths of the objects in the frame.
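Converting such a depth view into a per-eye horizontal offset can be sketched as a linear mapping from the 8-bit gray value to a pixel shift, following the convention above (darker = nearer = larger offset). The linear mapping and the maximum-offset parameter are assumptions for illustration, not the patent's formula.

```python
# Minimal sketch: map an 8-bit depth-view gray value (0 = nearest,
# darkest; 255 = farthest, lightest) to a horizontal pixel offset
# for one eye of the stereo pair.

def depth_to_offset(gray, max_offset=10):
    return round(max_offset * (255 - gray) / 255)
```

The nearest pixels receive the full offset and the farthest receive none, which is why foreground regions show the largest left/right displacement in the translation maps discussed next.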
Figure 85A shows a left UV map containing a horizontal translation, or offset, for each source pixel. When the scene is reconstructed with the applied depths, translation maps can be used that graphically map the horizontal movement offsets of individual pixels. Figure 85B shows a right UV map containing a horizontal translation, or offset, for each source pixel. Because these two images look identical to the eye, the subtle differences between the two files are more easily observed by shifting the black level of the colors to highlight particular regions of Figures 85A and 85B. Figure 85C shows a portion of the left UV map of Figure 85A with the black level shifted to reveal small elements; this region corresponds to the branches shown in the upper right corner of Figures 82, 83, and 84, directly above the cement mixer and to the left of the lamp post. Figure 85D shows the corresponding portion of the right UV map of Figure 85B with the black level shifted. The slight color differences of the branches show the positions to which those pixels will be moved; in a pure UV map, red maps horizontal position from darkest to brightest, and green maps vertical position from darkest to brightest. In other words, the translation map in the UV embodiment is a graphical depiction of the movement that occurs when the left and right viewpoints are generated relative to the original source image. UV maps can be used; however, any other file type that contains per-pixel (or finer-grained) horizontal offsets from the source image can be used, including compressed formats that are not readily viewable as images. Some editing software packages come with pre-built UV widgets, so UV translation files or maps can be used directly with them if desired. For example, some compositing programs have pre-built objects that make UV maps easy to use and to manipulate; for such implementations, a viewable map file can be used, but it is not required.
Since the left and right viewpoints are created from the 2D image using horizontal translations only, a monochrome translation file may be used. For example, because each row of the translation file is indexed by its vertical position in memory, a single ramp color, e.g., red, may be used to show the original horizontal position of each pixel. Any pixel movement in the translation map is thus shown as a change in pixel value from one horizontal offset to another, which results in subtle color changes when the movement is small, e.g., in the background. Figure 86A shows a left U map containing the horizontal translation or offset for each source pixel. Figure 86B shows a right U map containing the horizontal translation or offset for each source pixel. Figure 86C shows a black-value movement portion of the left U map of Figure 86A in which the movement of small elements is visible. Figure 86D shows a black-value movement portion of the right U map of Figure 86B in which the movement of small elements is visible. Again, a human-viewable file format is not required, and any format that stores horizontal offsets on a per-pixel basis relative to the source image may be utilized. Since memory and storage are so inexpensive, any format, whether compressed or not, may be utilized without any significant increase in cost. Generally, creating the right-eye image makes foreground portions of the U map (or UV map) appear darker, because they are moved to the left, and vice versa. This is easy to observe by gazing at something in the foreground with only the right eye open: the object appears shifted slightly to the right (while the foreground object has actually been moved to the left). Since a U map (or UV map) in its unaltered state is a simple color ramp from dark to bright, anything moved to the left, i.e., for the right viewpoint, maps to a darker region of the U map (or UV map). Hence, relative to unmoved pixels, the same branch in the same area of each U map (or UV map) is darker for the right eye and brighter for the left eye. Again, viewable maps are not required, but they illustrate the concept of the movement that occurs for a given viewpoint.
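To make the per-row offset encoding concrete, the following sketch applies a horizontal translation map to one image row to produce one eye's view. This is an illustration only: the list representation, the relative-offset convention (0 means no movement, negative means move left), and the `apply_u_row` name are assumptions, not the patent's implementation.

```python
# Illustrative sketch: a U translation map stores one horizontal offset per
# source pixel; applying it row by row produces one eye's view. Offsets are
# relative here; moving foreground pixels left yields the right-eye image.

def apply_u_row(source_row, offsets):
    """Scatter each source pixel to its translated horizontal position."""
    width = len(source_row)
    out = [None] * width  # None marks holes exposed by the translation
    for x, value in enumerate(source_row):
        nx = x + offsets[x]
        if 0 <= nx < width:
            out[nx] = value
    return out

source = [10, 20, 30, 40, 50, 60]
# Foreground pixels at columns 2-3 shift left by 2 for the right-eye view.
right_offsets = [0, 0, -2, -2, 0, 0]
right_row = apply_u_row(source, right_offsets)
# Values 30 and 40 land at columns 0 and 1; columns 2-3 become holes that
# must be filled from a deeper (e.g., generated background) layer.
```

Because each row is processed independently, the whole map is one-dimensional in the sense described above: no pixel ever leaves its row.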
Figure 87 illustrates a known application of UV maps, in which a three-dimensional model is unwrapped so that an image in UV space can be painted onto the 3D model using the UV map. The figure shows how a UV map is traditionally utilized to apply a texture map to a 3D shape. Here, for example, a painting or texture from a flattened set of images obtained for the Earth is mapped to a U and V coordinate system, which is converted to X, Y and Z coordinates on the 3D model. Traditional animation is performed in this manner, wherein a wireframe model is unwrapped and flattened, which defines the U and V coordinate system in which the texture map is applied.
Embodiments of the invention described herein utilize UV and U maps in a new way, in which a pair of maps is utilized to define the horizontal offsets for two images (left and right) by which each source pixel is translated, as opposed to a single map that is utilized to define the coordinates at which a texture map is placed on a 3D model or wireframe. That is, embodiments of the invention utilize UV maps and U maps (or any other horizontal translation file format) to allow offsets to be adjusted without re-rendering the entire scene. Again, in contrast to the known use of UV maps, e.g., mapping two normal coordinates onto a three-dimensional object, the embodiments enabled here make use of two maps, namely one map for the left eye and one map for the right eye, which map the horizontal translations for the left and right viewpoints. In other words, since pixels are translated only horizontally (for the left and right eyes), embodiments of the invention map line by line on a horizontal basis in one dimension. That is, the prior art maps 2 dimensions onto 3 dimensions, while embodiments of the invention utilize 2 translation maps in 1 dimension (hence a viewable embodiment of a translation map utilizes a single color). For example, if one row of a translation file contains 0, 1, 2, 3...1918, 1919, and the 2nd and 3rd pixels are translated 4 pixels to the right, then that row of the file would read 0, 5, 6, 3...1918, 1919. Other formats that represent relative offsets cannot be viewed as ramp-colored regions, but can provide a large degree of compression; for example, a row of a file using relative offsets might read 0, 0, 0, 0...0, 0, and a 4-pixel rightward move of the 2nd and 3rd pixels would make that row read 0, 4, 4, 0...0, 0. If there are large background portions with zero horizontal offset in the left and right viewpoints, such a file compresses to a great extent. Such a file can still be viewed as a standard U file by converting it back to absolute form, i.e., as opposed to viewing it as a relative, color-coded translation file. In embodiments of the invention, any other format that can store the horizontal-movement offsets for the left and right viewpoints may be utilized. UV-like files also have a ramp function on the Y, or vertical, axis; the values in such a file are arranged per pixel, e.g., (0,0), (0,1), (0,2)...(0,1918), (0,1919) for the bottom row of the image, and (1,0), (1,1), etc., for, e.g., the second horizontal line or row. Such an offset file allows pixels to move out of their horizontal rows; embodiments of the invention, however, simply move data horizontally for the left and right viewpoints, and there is therefore no need to track which vertical row a source pixel has moved to, since horizontal movement stays in the same row by definition.
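The absolute-versus-relative distinction above can be sketched with a toy 8-pixel row. The function name and list representation are assumptions for illustration only.

```python
# Illustrative sketch of the two row encodings discussed above: an absolute
# row stores each pixel's destination column (an identity ramp when nothing
# moves), while a relative row stores only the offset (all zeros when
# nothing moves, which compresses extremely well for static backgrounds).

def relative_to_absolute(rel_row):
    """Recover the viewable, ramp-style absolute row from relative offsets."""
    return [x + off for x, off in enumerate(rel_row)]

WIDTH = 8
identity = list(range(WIDTH))            # absolute row, nothing moved
relative = [0] * WIDTH                   # same content, relative form

# Move the 2nd and 3rd pixels (indices 1 and 2) right by 4:
relative[1] = 4
relative[2] = 4
absolute = relative_to_absolute(relative)
# absolute == [0, 5, 6, 3, 4, 5, 6, 7]: only two entries leave the ramp,
# while the relative form [0, 4, 4, 0, 0, 0, 0, 0] is mostly zeros and
# run-length-compresses well.
```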
Figure 88 shows a disparity map, illustrating the regions where the difference between the left and right translation maps is greatest. This shows that the pixels nearest the viewer move the most between the two UV (or U) maps shown in Figures 85A-B (or Figures 86A-B).
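A disparity map such as Figure 88 can be sketched as the per-pixel difference of the two translation maps. The list-of-lists map representation and function name are assumptions for illustration; real maps would be image-sized arrays.

```python
# Illustrative sketch: disparity as the per-pixel absolute difference
# between the left and right translation maps. Pixels nearest the viewer
# are offset most, in opposite directions, so their disparity is largest.

def disparity_map(left_map, right_map):
    return [[abs(l - r) for l, r in zip(lrow, rrow)]
            for lrow, rrow in zip(left_map, right_map)]

left  = [[0, 0, +3, +3, 0],   # foreground pixels pushed right for the left eye
         [0, 0,  0,  0, 0]]
right = [[0, 0, -3, -3, 0],   # and pushed left for the right eye
         [0, 0,  0,  0, 0]]
disp = disparity_map(left, right)
# disp[0] == [0, 0, 6, 6, 0]: the near elements dominate the map.
```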
Figure 89 shows a left-eye rendering of the source image of Figure 82. Figure 90 shows a right-eye rendering of the source image of Figure 82. Figure 91 shows an anaglyph of the images of Figures 89 and 90, for use with red/blue glasses.
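An anaglyph such as Figure 91 is conventionally formed by taking the red channel from the left-eye image and the green and blue channels from the right-eye image. A minimal sketch follows; the (R, G, B) tuple representation and function name are assumptions for illustration.

```python
# Illustrative sketch: red/cyan anaglyph composition from a left/right
# rendered pair, for viewing with red/blue glasses. Pixels are (R, G, B)
# tuples; the channel assignment shown is the conventional one.

def anaglyph(left_img, right_img):
    return [[(l[0], r[1], r[2]) for l, r in zip(lrow, rrow)]
            for lrow, rrow in zip(left_img, right_img)]

left  = [[(200, 10, 10), (50, 50, 50)]]
right = [[(190, 20, 30), (50, 50, 50)]]
out = anaglyph(left, right)
# out[0][0] == (200, 20, 30): red from the left eye, green/blue from the right.
```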
Figure 92 shows a masked image in the process of depth enhancement for each of the different layers, the layers comprising an actress layer, a door layer, and a background layer (shown with deleted background information that may be filled by generating the missing information; see, e.g., Figures 34, 73 and 76). That is, the portion of the sky in the background behind the actress in Figure 92 may be filled with generated image data (see the outline of the actress's head on the background wall). By using generated image data for each layer, a compositing program may be utilized, for example, as opposed to re-rendering or ray tracing all of the images in the scene in order to edit in real time. For example, if the hair mask of the actress in Figure 92 is changed to cover the hair more correctly, any pixels no longer covered by the new mask are obtained from the background and are available for viewing almost immediately (as opposed to the standard re-rendering or ray tracing of all images in the scene, which may take hours of processing to rebuild the scene whenever anything in the scene is edited). This may include obtaining generated data for any layer, including the background, to generate artifact-free 3D images.
Figure 93 shows a UV map overlaid on the alpha mask associated with the actress shown in Figure 92, whose translation offsets in the resulting left and right UV maps are set based on the depth settings of the different pixels in the alpha mask. This UV layer may be utilized together with other UV layers to provide a quality assurance group (or other workgroup) with the ability to edit the 3D image in real time without re-rendering the entire image, for example to correct artifacts or to correct masking errors. An iterative workflow, by contrast, may require sending frames back, e.g., to a third-world country for mask rework, after which the masks are sent back to a different workgroup, e.g., in the United States, to re-render the images, which are then reviewed again by the quality assurance group. For small artifacts such an iterative workflow is completely eliminated, because the quality assurance group can simply reshape the alpha mask and regenerate the pixel offsets from the original source image to edit the 3D image in real time, avoiding, for example, the involvement of other workgroups. The unmodified UV map and the amount of movement the generated UV maps undergo are determined by setting the depth of the actress according to, e.g., Figures 42-70, or in any other manner; one map is manipulated for the left eye and one for the right-eye image according to Figures 85A-D (or the U maps of Figures 86A-D). These maps, together with the alpha mask for each layer, may be supplied to, e.g., any compositing program, where a change to a mask, for example, allows the compositing program to obtain pixels from other layers to "build up" the image in real time. This may include using generated image data for any layer (or gap-fill data for deeper layers where no generated data exists). Those skilled in the art will appreciate that a set of layers with masks is combined in the compositing program to form the output image, by arbitrating or otherwise determining which layers and respective images are placed over one another to form the output image. Any method that combines pairs of horizontal translation maps with source image pixels to form output pixels without re-rendering or ray tracing after depth is adjusted is in keeping with the spirit of the invention.
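The layer-combination step described above (overlaying masked layers from deepest to nearest, with no ray tracing) can be sketched as a simple painter's-algorithm composite. The layer ordering, mask representation, and names below are assumptions for illustration.

```python
# Illustrative sketch: form an output row by overlaying layers back to
# front. Each layer supplies pixel values plus an alpha mask (1 = opaque);
# reshaping a mask and re-running this composite is nearly instantaneous,
# unlike re-rendering or ray tracing the whole scene.

def composite(layers):
    """layers: deepest first; each is (pixels, mask) of equal width."""
    width = len(layers[0][0])
    out = [None] * width
    for pixels, mask in layers:          # back to front
        for x in range(width):
            if mask[x]:                  # opaque pixel covers deeper data
                out[x] = pixels[x]
    return out

background = ([1, 1, 1, 1, 1], [1, 1, 1, 1, 1])   # fully generated backing layer
actress    = ([9, 9, 9, 9, 9], [0, 1, 1, 0, 0])   # alpha mask defines coverage
frame = composite([background, actress])
# frame == [1, 9, 9, 1, 1]; widening the actress mask to cover column 3
# simply reveals that layer's pixel there on the next composite.
```

Because the deleted-background information has already been generated, shrinking a mask exposes real background pixels rather than holes, which is what makes the real-time edit possible.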
Figure 94 shows a workspace generated for a second depth enhancement program based on the different layers shown in Figure 92, i.e., the left and right UV translation maps for each alpha, wherein the workspace allows quality assurance personnel (or other workgroups) to adjust masks in real time, and thereby change the 3D image pair (or anaglyph), without re-rendering or ray tracing and/or without iteratively sending fixes to any other workgroup. One or more embodiments of the invention may loop through the source files for the number of layers and create a script that generates a workspace as shown in Figure 94. For example, once the masking workgroup has created the masks for the different layers and generated the mask files, the rendering group may programmatically read in the mask files and generate script code that comprises a source icon for the output generated based on the rendering group's renders, an alpha copy icon for each layer, left and right UV maps for each layer, and other icons that combine the different layers into the left and right viewpoint images. This allows the quality assurance group to utilize presentation tools they are familiar with, which may be faster and less complicated than the tools the rendering workgroup utilizes. Any method of generating a graphical user interface that allows a worker to edit a 3D image in real time, including a method that creates for each frame a source icon connected to the alpha mask icon for each layer and to the generated left and right viewpoint translation maps, looping over each layer until the output viewpoints are combined for 3D viewing, is in keeping with the spirit of the invention. Alternatively, any other method that allows real-time editing of images without re-rendering, by using translation map pairs, is in keeping with the spirit of the invention, even if the translation maps are not visible to, or are not shown to, the user.
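The script-generation loop described above can be sketched as follows. The node names and the plain-text output are assumptions for illustration; a real implementation would emit a compositing package's native script format.

```python
# Illustrative sketch: loop over the per-layer mask/translation files and
# emit a simple node list for a compositing workspace: one source node per
# frame, plus alpha-mask and left/right translation-map nodes per layer,
# merged back to front into the two output viewpoints.

def build_workspace(frame, layers):
    nodes = [f"Read source frame_{frame}"]
    for layer in layers:                       # deepest layer first
        nodes.append(f"Read alpha {layer}_alpha frame_{frame}")
        nodes.append(f"Read uv_left {layer}_L frame_{frame}")
        nodes.append(f"Read uv_right {layer}_R frame_{frame}")
        nodes.append(f"Merge {layer} over output_left")
        nodes.append(f"Merge {layer} over output_right")
    nodes.append("View stereo output_left output_right")
    return nodes

script = build_workspace(42, ["background", "door", "actress"])
# 1 source node + 5 nodes per layer + 1 viewer node = 17 lines for 3 layers.
```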
Figure 95 shows a workflow for an iterative correction workflow. At 9501, a masking workgroup generates masks for targets such as human-recognizable objects, or any other shapes, in, e.g., an image sequence. This may include generating sets of sub-masks and generating layers that define regions of different depth. This step is usually performed by relatively unskilled and/or manual labor, typically in a country with very low labor costs. The masked targets are reviewed by highly skilled employees (usually artists) who, at 9502, apply depth and/or color to the masked regions in the scene. The artists are usually located in industrialized countries with higher labor costs. The resulting images are then reviewed at 9503 by another workgroup (typically a quality assurance group), which determines, based on the requirements of the particular project, whether there are any artifacts or errors that need to be fixed. If so, the erroneous masks, or the locations in the images where errors were found, are sent back to the masking workgroup for rework, i.e., from 9504 to 9501. Once there are no more errors, the process completes at 9505. Even in smaller workgroups, errors may be corrected by reworking the masks and then re-rendering or otherwise ray tracing all of the images in the scene again, which may take hours of processing time for, e.g., a simple change. Errors in depth judgment generally occur less frequently, since the more highly skilled workers apply depth at a higher skill level; returns to the rendering group therefore generally occur less often, and this loop is not shown in the figure for clarity, although this iterative path may occur. Masking "returns" may take a great deal of time to work back through the system, since the work product must be re-masked by other workgroups and then re-rendered.
Figure 96 shows an embodiment of the workflow enabled by one or more embodiments of the system, wherein each workgroup can edit a 3D image in real time without re-rendering, for example to change layers/colors/masks and/or remove artifacts and otherwise correct the work product of another workgroup, without the iterative delays associated with re-rendering/ray tracing or with sending the work product back through the workflow for correction. Masks are generated at 9501 as in Figure 95, and depth is applied at 9502 as in Figure 95. In addition, at 9601, the rendering group generates translation maps that travel with the rendered images to the quality assurance group. The quality assurance group reviews the work product at 9503 as in Figure 95, and checks for artifacts at 9504, also as in Figure 95. However, those skilled in the art will appreciate that since the quality assurance group (or other workgroups) has the translation maps, along with the accompanying layers and alpha masks, they may, for example, use a commercially available compositing program at 9602 to edit the 3D image, or otherwise locally correct the image, in real time. For example, as shown in Figure 94, the quality assurance group can open a graphics program they are familiar with (as opposed to the complicated rendering programs the artists use) and adjust, e.g., an alpha mask, whereupon the offsets in each of the left and right translation maps are reshaped as required by the quality assurance group and the output image is formed layer by layer (using any generated deleted background information per Figures 34, 73 and 76, and using any computer-generated element layers per Figure 79). Those skilled in the art will recognize that generating the two output images, from the deepest backing layer to the foreground layer, can be done without ray tracing by simply overlaying the pixels from each layer onto the final output image, almost immediately. This allows the quality assurance group to perform local, pixel-by-pixel image manipulation efficiently, instead of utilizing 3D modeling and ray tracing, etc., as the rendering workgroup does. This can save hours of processing time and/or the delays associated with waiting for other workers to re-render the image sequence that makes up the scene.
While the invention herein disclosed has been described by means of specific embodiments and applications thereof, numerous modifications and variations could be made thereto by those skilled in the art without departing from the scope of the invention set forth in the claims.

Claims (20)

1. A motion picture project management system, comprising:
a computer;
a database coupled with said computer, wherein said database comprises
a project table comprising a project identifier and a description of a project associated with a motion picture;
a shot table comprising a shot identifier and referencing a plurality of images via a start frame value and an end frame value, wherein said plurality of images is associated with said motion picture associated with said project, and comprising
at least one shot having a status related to the progress of work performed on said shot;
a task table that references said project identifier in said project table and comprises
at least one task comprising a task identifier and an assigned worker, and further comprising a context setting associated with a task type related to motion picture work, wherein said tasks at least comprise defining a region within said plurality of images, working on said region, and compositing work on said region, and wherein said at least one task comprises a time allocated for said at least one task;
a timecard entry table that references said project identifier in said project table and said task identifier in said task table, and comprises
at least one timecard entry comprising a start time and an end time;
said computer configured to
present a first display comprising a search display configured to be viewed by a production worker, the search display comprising context, project, shot, status and artist, the first display further comprising a list of a plurality of artists and an actuals comparison, per corresponding status, of the time spent based on said at least one timecard entry versus said time allocated for said at least one task associated with said at least one shot;
present a second display configured to be viewed by an artist, the second display comprising at least one daily assignment having context, project, shot, and
a status input configured to update said status in said task table, and
a timer input configured to update said start time and said end time in said timecard entry table;
present a third display configured to be viewed by an editor, the third display comprising an annotation frame configured to accept a comment or a drawing, or both a comment and a drawing, regarding at least one image of said plurality of images associated with said at least one shot.
2. The motion picture project management system of claim 1, further comprising:
a snapshot table comprising a snapshot identifier and a search type, and comprising
a snapshot of said at least one shot comprising at least one location of an asset associated with said at least one shot.
3. The motion picture project management system of claim 1, wherein said context setting further comprises region-definition subtypes comprising masking and outsourced masking, and work subtypes on said region comprising depth, key framing and motion.
4. The motion picture project management system of claim 1, further comprising
an asset request table comprising an asset request identifier and a shot identifier.
5. The motion picture project management system of claim 1, further comprising
a mask request table comprising a mask request identifier and a shot identifier.
6. The motion picture project management system of claim 1, further comprising
a notes table comprising a notes identifier, referencing said project identifier, and comprising at least one note related to at least one of said plurality of images from said motion picture.
7. The motion picture project management system of claim 1, further comprising
a delivery table comprising a delivery identifier, referencing said project identifier, and comprising information related to delivery of said motion picture.
8. The system of claim 1, wherein said computer is further configured to
present, on said third display configured to be viewed by said editor, an annotation overlaid on at least one of said plurality of images.
9. The system of claim 1, wherein said computer is configured to
accept a rating input from said production worker or said editor based on the work performed by said artist.
10. The system of claim 1, wherein said computer is configured to
accept a rating of the work performed by said artist from said production worker or said editor, wherein said computer does not display said artist's identity to said production worker or said editor before said computer accepts said rating from said production worker or said editor.
11. The system of claim 1, wherein said computer is configured to
accept a difficulty of said at least one shot and, based on the work performed by said artist, calculate a rating based on said difficulty of said shot and the time spent on said shot.
12. The system of claim 1, wherein said computer is configured to
accept a rating input from said production worker or said editor based on the work performed by said artist; or
accept a difficulty of said at least one shot and, based on the work performed by said artist, calculate a rating based on said difficulty of said shot and the time spent on said shot; and
display an incentive for said artist based on said rating accepted by said computer or calculated by said computer.
13. The system of claim 1, wherein said computer is configured to
estimate a remaining cost based on said actuals, wherein said actuals are based on the total time spent on all tasks, including said at least one task, associated with all shots, including said at least one shot, in said project, relative to the time allocated for all tasks, including said at least one task, associated with all shots, including said at least one shot, in said project.
14. The system of claim 1, wherein said computer is configured to
compare said actuals associated with a first project with actuals associated with a second project; and
indicate, based on at least one rating assigned to a first worker of said first project, that at least one worker is to be assigned from said first project to said second project.
15. The system of claim 1, wherein said computer is configured to
analyze a prospective project having a number of shots and estimate a difficulty per shot, and calculate a predicted cost for said prospective project based on said actuals associated with said project.
16. The system of claim 1, wherein said computer is configured to
analyze a prospective project having a number of shots and estimate a difficulty per shot, and, based on actuals associated with a previously performed first project and with a previously performed second project completed after said previously performed first project, calculate a derivative of said actuals, and calculate a predicted cost for said prospective project based on said derivative of said actuals.
17. The system of claim 1, wherein said computer is configured to
analyze said actuals associated with said project, and divide the shots completed by the total shots associated with said project; and
provide a completion time for said project.
18. The system of claim 1, wherein said computer is configured to
analyze said actuals associated with said project, and divide the shots completed by the total shots associated with said project;
provide a completion time for said project;
accept an input of at least one additional artist having a rating;
accept a number of shots on which said additional artist is to be utilized;
calculate a time savings based on said at least one additional artist and said number of shots;
subtract said time savings from said completion time of said project; and
provide an updated completion time for said project.
19. The system of claim 1, wherein said computer is configured to
calculate an amount of disk space that may be utilized to archive said project, and indicate that at least one asset can be rebuilt from other assets to avoid archiving said at least one asset.
20. The system of claim 1, wherein said computer is configured to
display an error message to said artist when said artist is not working on a current frame of said at least one shot.
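As an illustration only (the claims do not recite SQL, and all table and column names below are assumptions), the database of claim 1 can be sketched as a small relational schema, together with the "actuals" comparison of time booked versus time allocated per task:

```python
# Illustrative sketch of the claim-1 schema: project, shot, task, and
# timecard-entry tables, plus an actuals query (time spent vs. time
# allocated per task). Names and types are assumptions, not the patent's.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE project  (project_id INTEGER PRIMARY KEY, description TEXT);
CREATE TABLE shot     (shot_id INTEGER PRIMARY KEY, project_id INTEGER,
                       start_frame INTEGER, end_frame INTEGER, status TEXT);
CREATE TABLE task     (task_id INTEGER PRIMARY KEY, project_id INTEGER,
                       shot_id INTEGER, artist TEXT, context TEXT,
                       allocated_hours REAL);
CREATE TABLE timecard (entry_id INTEGER PRIMARY KEY, project_id INTEGER,
                       task_id INTEGER, start_time REAL, end_time REAL);
""")
con.execute("INSERT INTO project VALUES (1, 'feature conversion')")
con.execute("INSERT INTO shot VALUES (1, 1, 100, 250, 'in progress')")
con.execute("INSERT INTO task VALUES (1, 1, 1, 'artist A', 'depth', 8.0)")
con.execute("INSERT INTO timecard VALUES (1, 1, 1, 0.0, 5.0)")
con.execute("INSERT INTO timecard VALUES (2, 1, 1, 9.0, 13.0)")

# Actuals: total booked hours vs. allocated hours for each task.
row = con.execute("""
    SELECT t.task_id,
           SUM(tc.end_time - tc.start_time) AS spent,
           t.allocated_hours
    FROM task t JOIN timecard tc ON tc.task_id = t.task_id
    GROUP BY t.task_id
""").fetchone()
# row == (1, 9.0, 8.0): 9 hours booked against an 8-hour allocation.
```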
CN201380018690.7A 2012-02-06 2013-04-05 Motion picture project management system Expired - Fee Related CN104272377B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/366,899 US9031383B2 (en) 2001-05-04 2012-02-06 Motion picture project management system
PCT/US2013/035506 WO2013120115A2 (en) 2012-02-06 2013-04-05 Motion picture project management system

Publications (2)

Publication Number Publication Date
CN104272377A true CN104272377A (en) 2015-01-07
CN104272377B CN104272377B (en) 2017-03-01

Family

ID=48948173

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201380018690.7A Expired - Fee Related CN104272377B (en) Motion picture project management system

Country Status (5)

Country Link
EP (1) EP2812894A4 (en)
CN (1) CN104272377B (en)
AU (1) AU2013216732B2 (en)
CA (1) CA2866672A1 (en)
WO (1) WO2013120115A2 (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107330410A (en) * 2017-07-03 2017-11-07 Nanjing Institute of Technology Anomaly detection method based on deep learning in complex environments
CN108027433A (en) * 2015-08-03 2018-05-11 Commonwealth Scientific and Industrial Research Organisation Monitoring system and method
CN109074658A (en) * 2016-03-09 2018-12-21 Sony Corporation Method of 3D multi-view reconstruction by feature tracking and model registration
CN109076172A (en) * 2016-04-06 2018-12-21 Facebook, Inc. Efficient canvas view generation from intermediate views
CN110060213A (en) * 2019-04-09 2019-07-26 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image processing method, device, storage medium and electronic equipment
CN111526422A (en) * 2019-02-01 2020-08-11 Wangsu Science & Technology Co., Ltd. Method, system and device for fitting a target object in a video frame
CN112183629A (en) * 2020-09-28 2021-01-05 Haier Uplus Intelligent Technology (Beijing) Co., Ltd. Image recognition method and device, storage medium and electronic equipment
CN115131409A (en) * 2022-08-26 2022-09-30 Shenzhen Shenzhi Future Intelligence Co., Ltd. Affinity matrix viewpoint synthesis method, application and system based on deep learning
CN115188091A (en) * 2022-07-13 2022-10-14 State Grid Jiangsu Electric Power Co., Ltd. Taizhou Power Supply Branch Unmanned aerial vehicle grid inspection system and method integrating power transmission and transformation equipment
TWI783718B (en) * 2021-10-07 2022-11-11 Realtek Semiconductor Corp. Display control integrated circuit applicable to performing real-time video content text detection and speech automatic generation in display device

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN109218750B (en) * 2018-10-30 2022-01-04 Baidu Online Network Technology (Beijing) Co., Ltd. Video content retrieval method, device, storage medium and terminal equipment

Citations (4)

Publication number Priority date Publication date Assignee Title
US20040151471A1 (en) * 2002-11-15 2004-08-05 Junichi Ogikubo Method and apparatus for controlling editing image display
CN1650622A (en) * 2002-03-13 2005-08-03 图象公司 Systems and methods for digitally re-mastering or otherwise modifying motion pictures or other image sequences data
CN101142585A (en) * 2005-02-04 2008-03-12 DTS (BVI) AZ Research Limited Digital intermediate (DI) processing and distribution with scalable compression in the post-production of motion pictures
US20110164109A1 (en) * 2001-05-04 2011-07-07 Baldridge Tony System and method for rapid image sequence depth enhancement with augmented computer-generated elements

Family Cites Families (15)

Publication number Priority date Publication date Assignee Title
DE3736790A1 (en) * 1987-10-30 1989-05-11 Broadcast Television Syst METHOD FOR AUTOMATICALLY CORRECTING IMAGE ERRORS IN FILM SCANNING
US5328073A (en) * 1992-06-24 1994-07-12 Eastman Kodak Company Film registration and ironing gate assembly
US5835163A (en) * 1995-12-21 1998-11-10 Siemens Corporate Research, Inc. Apparatus for detecting a cut in a video
US5841512A (en) * 1996-02-27 1998-11-24 Goodhill; Dean Kenneth Methods of previewing and editing motion pictures
US5920360A (en) * 1996-06-07 1999-07-06 Electronic Data Systems Corporation Method and system for detecting fade transitions in a video signal
US5767923A (en) * 1996-06-07 1998-06-16 Electronic Data Systems Corporation Method and system for detecting cuts in a video signal
US5778108A (en) * 1996-06-07 1998-07-07 Electronic Data Systems Corporation Method and system for detecting transitional markers such as uniform fields in a video signal
US5959697A (en) * 1996-06-07 1999-09-28 Electronic Data Systems Corporation Method and system for detecting dissolve transitions in a video signal
US6067125A (en) * 1997-05-15 2000-05-23 Minerva Systems Structure and method for film grain noise reduction
US6031564A (en) * 1997-07-07 2000-02-29 Reveo, Inc. Method and apparatus for monoscopic to stereoscopic image conversion
US8156532B2 (en) * 2002-07-15 2012-04-10 Sony Corporation Video program creation system, table providing device, terminal device, terminal processing method, program, recording medium
US9047915B2 (en) * 2004-04-09 2015-06-02 Sony Corporation Asset revision management in media production
US20090196570A1 (en) * 2006-01-05 2009-08-06 Eyesopt Corporation System and methods for online collaborative video creation
US8443284B2 (en) * 2007-07-19 2013-05-14 Apple Inc. Script-integrated storyboards
US8225228B2 (en) * 2008-07-10 2012-07-17 Apple Inc. Collaborative media production

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
US20110164109A1 (en) * 2001-05-04 2011-07-07 Baldridge Tony System and method for rapid image sequence depth enhancement with augmented computer-generated elements
US20120242790A1 (en) * 2001-05-04 2012-09-27 Jared Sandrew Rapid workflow system and method for image sequence depth enhancement
CN1650622A (en) * 2002-03-13 2005-08-03 图象公司 Systems and methods for digitally re-mastering or otherwise modifying motion pictures or other image sequences data
US20040151471A1 (en) * 2002-11-15 2004-08-05 Junichi Ogikubo Method and apparatus for controlling editing image display
CN101142585A (en) * 2005-02-04 2008-03-12 DTS (BVI) AZ Research Limited Digital intermediate (DI) processing and distribution with scalable compression in the post-production of motion pictures

Cited By (16)

Publication number Priority date Publication date Assignee Title
CN108027433B (en) * 2015-08-03 2022-09-06 联邦科学和工业研究组织 Monitoring system and method
CN108027433A (en) * 2015-08-03 2018-05-11 联邦科学和工业研究组织 Monitoring system and method
CN109074658A (en) * 2016-03-09 2018-12-21 索尼公司 The method for carrying out the reconstruction of 3D multiple view by signature tracking and Model registration
CN109076172A (en) * 2016-04-06 2018-12-21 脸谱公司 From the effective painting canvas view of intermediate view generation
CN109076172B (en) * 2016-04-06 2020-03-03 脸谱公司 Method and system for generating an efficient canvas view from an intermediate view
CN107330410B (en) * 2017-07-03 2020-06-30 南京工程学院 Anomaly detection method based on deep learning in complex environment
CN107330410A (en) * 2017-07-03 2017-11-07 南京工程学院 Method for detecting abnormality based on deep learning under complex environment
CN111526422A (en) * 2019-02-01 2020-08-11 网宿科技股份有限公司 Method, system and equipment for fitting target object in video frame
CN110060213A (en) * 2019-04-09 2019-07-26 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN110060213B (en) * 2019-04-09 2021-06-15 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN112183629A (en) * 2020-09-28 2021-01-05 海尔优家智能科技(北京)有限公司 Image identification method and device, storage medium and electronic equipment
TWI783718B (en) * 2021-10-07 2022-11-11 瑞昱半導體股份有限公司 Display control integrated circuit applicable to performing real-time video content text detection and speech automatic generation in display device
CN115188091A (en) * 2022-07-13 2022-10-14 国网江苏省电力有限公司泰州供电分公司 Unmanned aerial vehicle grid inspection system and method integrating power transmission and transformation equipment
CN115188091B (en) * 2022-07-13 2023-10-13 国网江苏省电力有限公司泰州供电分公司 Unmanned aerial vehicle gridding inspection system and method integrating power transmission and transformation equipment
CN115131409A (en) * 2022-08-26 2022-09-30 深圳深知未来智能有限公司 Affinity matrix viewpoint synthesis method, application and system based on deep learning
CN115131409B (en) * 2022-08-26 2023-01-24 深圳深知未来智能有限公司 Affinity matrix viewpoint synthesis method, application and system based on deep learning

Also Published As

Publication number Publication date
CA2866672A1 (en) 2013-08-15
EP2812894A2 (en) 2014-12-17
AU2013216732B2 (en) 2014-10-02
AU2013216732A1 (en) 2014-09-25
CN104272377B (en) 2017-03-01
WO2013120115A2 (en) 2013-08-15
EP2812894A4 (en) 2016-04-06
WO2013120115A3 (en) 2013-10-24

Similar Documents

Publication Publication Date Title
CN104272377B (en) Motion picture project management system
US9595296B2 (en) Multi-stage production pipeline system
US9615082B2 (en) Image sequence enhancement and motion picture project management system and method
CN101479765B (en) Methods and systems for converting 2d motion pictures for stereoscopic 3d exhibition
US9031383B2 (en) Motion picture project management system
US8385684B2 (en) System and method for minimal iteration workflow for image sequence depth enhancement
Wu et al. Content‐based colour transfer
US8897596B1 (en) System and method for rapid image sequence depth enhancement with translucent elements
US8078006B1 (en) Minimal artifact image sequence depth enhancement system and method
US8396328B2 (en) Minimal artifact image sequence depth enhancement system and method
CN101375315B (en) Methods and systems for digitally re-mastering of 2D and 3D motion pictures for exhibition with enhanced visual quality
AU2015213286B2 (en) System and method for minimal iteration workflow for image sequence depth enhancement
Wang et al. People as scene probes
Wang et al. Repopulating street scenes
Willment et al. What is virtual production? An explainer and research agenda
Hillman et al. Issues in adapting research algorithms to stereoscopic visual effects
Sylwan The application of vision algorithms to visual effects production
Schneider et al. Semi-automatic digital landform mapping
Chandra et al. Aerial image relighting: simulating time of day variations

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170301

Termination date: 20190405
