WO1987007108A1 - Video production system and method


Info

Publication number: WO1987007108A1
Authority: WO (WIPO, PCT)
Prior art keywords: video, visual, sequence, camera, audio
Application number: PCT/US1987/001100
Other languages: French (fr)
Inventor: Glenn G. Gauer
Original Assignee: Gauer Glenn G
Priority claimed from US06/862,130 (US4704796A) and US07/046,721 (US4827620A)
Application filed by Gauer Glenn G
Publication of WO1987007108A1

Classifications

    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B27/00Photographic printing apparatus
    • G03B27/32Projection printing apparatus, e.g. enlarger, copying camera
    • G03B27/52Details
    • G03B27/58Baseboards, masking frames, or other holders for the sensitive material
    • G03B27/582Baseboards, easels

Definitions

  • the present invention relates to video production systems. More particularly, the invention relates to an automated video production system in which (a) video editing instructions are prepared off-line, as by a layman; (b) these instructions are converted into an edit control program off-line; and (c) the resulting edit control program is used on-line to control complex electronic equipment and produce a professional quality video presentation.
  • a photograph or other print is placed on an animation table which is movable horizontally in X and Y directions.
  • a camera is mounted at a fixed location and a fixed distance above the table.
  • the focal length of the camera lens is varied to capture particular sections of the photograph.
  • a video technician typically stores the initial coordinates of the table and initial focal length of the camera lens.
  • the table is then shifted to a new position and/or the lens focal length is adjusted.
  • the new position data and/or camera focal length data is then stored.
  • the producer and technician eventually develop and store a sequence of data corresponding to the moves of the animation table and the adjustments of the lens focal length.
  • the stored sequence is run on a real time basis to provide a clip of film or video from the photograph on the animation table. Separate photographs are processed in this time consuming and labor intensive manner to provide a video clip from each of the photographs. Alternately, cameras have been mounted on complex articulated arm assemblies for movement in X, Y and Z directions to focus on a print. Again, an animation sequence is developed on a step by step basis, stored, and then run in real time to produce a clip of video as explained above. These clips of video are delivered to a separate editing station where, under the control of an edit controller, they are assembled into a video presentation with transitions being inserted between the clips. Frequently, audio is added to the video presentation as it is assembled.
  • a producer normally oversees a technician who develops an edit control program in a step by step manner at a computer terminal which forms a part of the edit processing equipment.
  • the completed edit control program is run to control this equipment and produce a video presentation.
  • the video staff is kept busy switching between even two audio/video input sources (i.e. two video sources and two audio sources) and can be overwhelmed if faced with developing an edit control program for more than three such sources. Consequently, edit controllers have heretofore only been used to control a limited number of sources.
  • the known video production approach described above requires highly trained video technicians, editors and producers to combine information into a final video tape presentation.
  • a two-step process is utilized in which the animated clips are first created at an animation station. This first step is followed by the assembly of the completed video presentation at a separate edit control station.
  • This approach is labor intensive and time consuming. For example, it may take as much as thirty hours or more for a video production staff to produce a ten minute high quality video tape. In addition, the costs of such productions can be prohibitive.
  • Still other prior art video systems rely on digitization of still images before they are incorporated into a video presentation. Such systems are unable to produce satisfactory closeups from digitized still images due to image distortion and mosaic-like graininess resulting from the digitization process.
  • the resulting video presentation consists of a video tape of individual photographs which have not been edited to any significant extent. For example, selection of designated portions of photographs for inclusion in the finished video tape is typically not provided. If provided, it is left to the skill of a video technician who focuses a video camera on a photograph in hopes of capturing the desired part of the photograph.
  • significant editing, such as adding audio or action (i.e. transitions between photographs), either cannot be accomplished in these systems or is extremely limited. Also, to the limited extent such editing is available, substantial amounts of labor, time and money are required to accomplish the editing.
  • consumers or other individuals requesting video tapes of their photographs typically have little or no control over the editing process.
  • a means is provided by which a consumer or other individual desiring a video presentation can record basic edit instructions for the finished video presentation.
  • the basic edit instructions are generated off-line and typically at locations which are remote from the editing station at which the video presentation is produced. For example, an individual consumer may prepare these edit instructions at home, without the need for a technician or a computer terminal.
  • the system is versatile and offers great flexibility to the user. That is, the system allows a user to customize the final video presentation by allowing the user control over a substantial number of editing variables. These variables include combinations of one or more of the following:
  • a unique and mechanically simple apparatus for precisely defining a portion of a larger photograph or still image which is to be included in the video presentation.
  • This apparatus is also used to define relative positions of multiple portions of a single work which are to be included in the video presentation.
  • this apparatus comprises a novel framer for selecting and defining the position of a portion of a visual work. This is done in a manner that can be readily communicated to enable subsequent relocation of the selected portion in a very precise manner during production of the video presentation.
  • a framer comprises an overlay having a visual aperture of a preselected size and proportion. This aperture is capable of being placed over the visual work and moved relative thereto to position the aperture over a desired portion of the work. Communication, or notation, indicia is positioned on the overlay for denoting the position of the overlay and its included aperture relative to boundaries, or reference portions, of the work. This indicia is then recorded on the data tickets by the user for communicating the same during subsequent stages of production of the video presentation. That is, the indicia permits subsequent precise location of the selected portion of the work at a later time.
  • the framer has an aperture which is variable in size, yet which retains a consistent proportionality throughout its intended range of size variability.
  • the system includes an edit control program generating apparatus which converts the basic instructions from the user into a program for controlling the production of the video presentation.
  • the edit program development portion of the system electronically reads, and in the case of optically encoded tickets optically scans, the basic edit instructions developed by the user. In addition, this portion of the system converts these instructions into the edit control program.
  • the edit control program is developed off-line from editing equipment. Consequently, the relatively expensive editing equipment is not tied up during the edit program development stage. This enhances the volume and rate at which video presentations may be produced by the editing apparatus. In addition, it permits the editing apparatus to control a large number of sources.
  • the basic video editing apparatus of the system includes an edit controller, a switching circuit, an animation controller and at least one animation stand to which an animation camera is mounted.
  • the animation stand supports the camera at a fixed distance from a still image source for movement in X and Y directions to desired positions over the image.
  • the focal length of the camera lens is adjusted to capture the desired section of the image.
  • at least two of such animation stands and cameras are provided to enable the cameras to capture successive photographs or other still visual image sources which are to be included in the final video presentation.
  • the positions to which the animation stands are to be shifted, as well as the focal length of the camera lenses, are generated from the user's basic instruction list during the assembly of the edit control program.
  • the position instructions and lens adjustment data are loaded in proper sequence into the animation controller.
  • the animation controller shifts the first animation stand through its desired positions and controls the first camera lens to obtain animated video from a first still image source.
  • the edit controller controls the transfer of this information through the switching circuit to a recorder which records the information on a destination video tape or other media as the first animation stand and first lens are shifted.
  • the second animation stand is operated to transfer desired video information from a second still image source to the destination tape while the first still image source is replaced with a third still image source.
  • the edit controller, in response to the edit control program, controls the video switcher and animation controller to provide the designated transitions on the destination tape between images from the first, second and other still image sources.
  • the final destination tape is produced.
  • One or more audio sources may also be controlled by the edit controller, together with an audio mixer circuit within the switching circuit, to simultaneously include audio on the destination tape as video is recorded thereon.
  • one or more motion video sources may also be controlled by the edit controller to mix video generated from still image sources and motion picture sources onto the destination tape.
  • the edit controller may also control a still frame source apparatus and the switching circuit to incorporate captions, background frames and other video information, by way of the switching circuit, on the destination tape.
  • An optional character generator means, controlled by the edit controller, may also be included for adding electronically generated captions to the destination tape.
  • a fully automated video editing system, responsive to an edit control program generated from a user's basic instruction set, is used to produce a finished video presentation in one continuous operation. It is therefore an overall object of the present invention to provide an improved video production system and method.
  • Another object of the present invention is to provide a means by which a consumer or layman may have control over the editing of a video presentation.
  • a further object of the present invention is to provide a system and method by which a layman may determine basic editing instructions for a computerized system which produces a video presentation, without the need for a computer terminal or any computer programming experience.
  • a still further object of the present invention is to provide a system and method for producing customized video presentations on a volume basis.
  • Fig. 1 is a schematic diagram of the edit control program assembly portion of the system, which is utilized in producing an edit control program for editing a video presentation;
  • Fig. 2 is a block diagram of a video editing portion of the system for controlling the production of a video presentation in response to the program generated by the apparatus of Fig. 1;
  • Fig. 3 is a detailed block diagram of the editing apparatus of Fig. 2;
  • Fig. 4 is an isometric view of one form of table for supporting photographs or other still images which are to be included in the video presentation;
  • Fig. 5 is a side elevational view of the table of Fig. 4, in which the table top comprises a screen, and shown with a slide projected onto the underside of the table top;
  • Fig. 6 is a view of one face of a form of overlay framer in accordance with an embodiment of the invention;
  • Fig. 7 is a view of the opposite face of the overlay of Fig. 6;
  • Figs. 8a and 8b illustrate two forms of a series of recording means used to record and communicate information determined by use of the framer of Figs. 6 and 7;
  • Figs. 9 and 10 illustrate a use of, and a product resulting from the use of, the system shown in the prior figures;
  • Figs. 11-16 illustrate further uses of, and results from the use of, the system shown in the prior figures;
  • Figs. 17 and 18 illustrate a modified embodiment of the overlay of Figs. 6 and 7;
  • Fig. 19 is an enlarged cross-sectional view taken along line 19-19 of Fig. 18;
  • Fig. 20 is a view taken along line 20-20 of Fig. 19;
  • Fig. 21 is a plan view of a photograph illustrating one form of transition between designated sections of the photograph;
  • Fig. 22 illustrates a further embodiment of the overlay of Figs. 6 and 7;
  • Fig. 23 illustrates one form of a series of recording means used to record and communicate information concerning transitions between images from different visual sources, as well as audio information, for inclusion in the video presentation;
  • Fig. 24 illustrates one form of a series of recording means used to record and communicate information concerning the inclusion of sound effects in the video presentation;
  • Fig. 25 illustrates one form of a series of recording means used to record and communicate information concerning segments of video or motion picture information to be included in the video presentation;
  • Figs. 26a, 26b and 26c illustrate portions of pages from a catalogue listing music, sound effects and still art which a user may select for inclusion in the video presentation;
  • Fig. 27 illustrates one form of a projection sheet or screen which may be used in generating information to locate sections of a projected slide for inclusion in the video presentation.
  • the system and method of the present invention comprises a three-stage process for producing video presentations.
  • the first stage comprises a data generation and recording, or preassembly, stage.
  • users, who are frequently untrained in the art of preparing video presentations, record basic instructions for controlling the finished video presentation.
  • although data may be recorded in any media, such as on floppy disks or using touch panels or other data input devices, a simplified data recording approach is preferred.
  • data sheets, tickets, or slips, identified as frame tickets, and topic zone tickets are encoded by the user with the desired instructions.
  • the data is recorded in a manner that permits optical mark scanning, as by marking the desired information in a dark color or reflective material (i.e. with a graphite pencil) on the data tickets.
  • the user does not require expensive computers and other equipment to generate data.
  • consumers can easily complete the data recording tickets in their own home, and thereby have creative control over the finished product.
  • frame size is typically defined by a simple number on the user's frame ticket.
  • position, relative to a corner of the flat art, is described by coordinate identifying data recorded by the user on the frame tickets.
  • the data tickets further define information concerning desired transitions from one image within a piece of flat art to another image in the same piece of art, as well as transitions between images from successive pieces of art.
  • Movies or other motion picture sources may be included in the finished video presentation by identifying these sources on the data recording tickets, and also identifying the segment of the motion picture source to be included. More specifically, clip tickets may be used to identify the segment which is to be included in the video presentation.
  • the user may select images from a library of still images for inclusion in the video presentation.
  • the user may also record captions on the data tickets for inclusion in the video presentation.
  • the user may designate that the captions be electronically inserted at the appropriate places in the presentation.
  • Audio information can also be selected by the user for inclusion in the finished video presentation. The user has the flexibility of designating audio from a library of music or other audio, from the user's own personally supplied audio, as well as sound effects from a library of sound effects.
  • the user generates and records a sequence of basic instructions for the finished video presentation.
  • This raw data is generated at the user's convenience and does not tie up expensive video processing equipment.
  • the second stage of the system and method comprises a data conversion stage during which an edit control program is developed or assembled.
  • the assembly of the edit control program is accomplished off-line from the video editing equipment so as to further avoid unnecessarily tying up expensive editing equipment.
  • the data recorded by the user on the data tickets is in a form which does not resemble edit decision lists normally used in the video production industry.
  • this basic data, comprising the decisions indicated on a user's data tickets, is converted into a set of software instructions for distribution to various pieces of video processing equipment.
  • This edit control program assembly stage is described below in connection with Fig. 1. Of course, this second stage would be eliminated if data is recorded directly in a form which is usable by the editing equipment.
  • the final or third stage of the system and method comprises the video presentation completion stage.
  • video editing equipment as well as video animation equipment, operate under the control of an edit controller programmed with the edit control program to complete the finished video tape.
  • the video editing process is accomplished in essentially one continuous pass, as all video and audio components of the finished video tape, as well as animation of video information, are assembled under the control of a single edit controller.
  • This third stage of the system is described below in connection with Figs. 2-5.
  • the system of the present invention includes a mechanism for selecting and designating portions of still visual images for inclusion in a finished video presentation.
  • visual images typically comprise flat art work such as still photographs, slides, text and printed art from magazines and the like, as well as from other sources.
  • Preferred embodiments of one type of such a mechanism are illustrated in the figures.
  • the system is not limited to the specifically illustrated mechanisms.
  • the illustrated forms facilitate the development of basic editing instructions by consumers and others who have not been trained in the art of preparing video presentations.
  • video screens have generally defined height-to-width proportionalities. Therefore, the proportions of the selected parts of the work discussed herein are illustrated and described in height-to-width ratios corresponding to those found on video display screens. It should be recognized, however, that this invention has applications far beyond those described herein.
  • the term "frame" when used in this document is to be understood to include both frames and individual fields of video.
  • Figs. 6 and 7 illustrate opposite sides of a transparent unitary framing overlay sheet 10.
  • the sheet includes four transparent visual apertures 12, 14, 16 and 18 defined by borders 12a, 14a, 16a, and 18a.
  • Adjacent the boundaries of each of these apertures, on one side of the sheet, is a notation of the frame size number (i.e. the number 1 for aperture 12); on the other side of the sheet is the notation "Wrong Side." These notations indicate to the user which side of the sheet should be facing up (toward the user) during use. Thus, the user will see the frame size number of the aperture being used when the overlay is properly positioned.
  • Each of the apertures is of a different size and, in the illustrated example, is proportioned in the same height-to-width ratio as a common video display screen.
  • Remainder portions of transparent sheet 10, intermediate apertures 12, 14, 16 and 18, are, for convenience, imprinted with a plurality of perpendicularly disposed horizontal and vertical gridlines 22, 24, respectively.
  • aperture 12 has indicia 12b, 12c associated therewith.
  • Apertures 14, 16 and 18 have respective indicia 14b, 14c, 16b, 16c and 18b, 18c.
  • although alphabetic indicia are illustrated, numeric or other symbolic indicia by which information can be indicated may also be used.
  • adjacent the aperture 18 are alphabetic indicia AE-CH, extending along the horizontal axis, and alphabetic notations KL-QR, extending along the vertical axis.
  • a plurality of recording, or communicating, means 32, 34, 36 and 37 are illustrated. These include elongate cards, or slips, having boxes defined thereon into which information can be entered. Typically, a number of these cards are included on a single sheet of paper.
  • a picture frame notation 38 is used to denote the picture to which the particular recording card refers.
  • Numerals 1-6 are positioned in the central region of the card in markable boxes at 42. These numerals correspond to the frame size notations, such as previously described for apertures 12, 14, 16 and 18 (i.e. aperture 18 is frame size 12).
  • To the left of numerals 42 are alphabetic notations A-J in markable boxes 46. Above the numeral designations 42 are alphabetic notations K-T in markable boxes 50. As described below, the boxes 46 and 50 are used to designate the coordinates of the border or other selected reference on a photo which places the desired portion of the photo within the designated aperture.
  • the boxes 42, 46 and 50 relate to a "custom frame” designation in which the user is selecting a particular portion of a visual image for inclusion in the finished video presentation.
  • the data recording cards also include an "auto frame” markable box 52. This box is marked when the user desires to include the largest possible portion of the visual image, within the screen aspect ratio, in the finished video tape.
  • a markable box 54 is also included in each of the cards for designating whether the particular visual image associated with a particular frame number is a slide. If so, this box 54 is marked.
  • vertical and horizontal markable boxes 55 may be provided for a user to indicate whether a slide is vertical (its long dimension being vertical) or horizontal (its long dimension being horizontal). In this case, a user is typically instructed to project a slide on a screen so that the projected image has a long dimension which is a particular size (i.e. eight inches).
  • the framer 10 (Fig. 9) is positioned over the projected image with the appropriate aperture bounding the desired portion of the projected slide.
  • Boxes 42, 46 and 50 are then marked to indicate the size and coordinates of the selected portion of the slide, relative to a reference point on the projected slide, such as the upper left- hand corner of the projected image.
  • during subsequent production, the slide is projected in the same manner. The frame size and coordinates are then used to identify the selected portion of the slide for inclusion in the video presentation.
  • Data for use in processing slides may also be generated as explained below in connection with the screen of Fig. 27.
  • the Fig. 8a data recording cards 34, 36 and 37 (i.e. those data cards after the first) are provided with markable boxes 56, 58.
  • if a further section of the same photo is desired, box 56 is marked and the coordinates of the new section of the same photo are indicated in boxes 42, 46 and 50.
  • the data recording cards also include a means by which timing information can be indicated, that is, the length of time for which a particular image is to appear in the finished video presentation.
  • timing information for each of the data cards 32, 34, 36 and 37 is recorded by marking respective boxes 60, 62 on each of these cards. Box 60 is designated a "double play" box, while box 62 is designated a "flash" box.
  • the time periods may of course be varied to fit a particular application. Typically, however, if neither of these boxes 60, 62 is marked, the selected image is played for a first time period, for example six seconds. In contrast, if the "double play" box 60 is marked, the first time period is doubled to, for example, twelve seconds.
  • if the "flash" box 62 is marked, the initial time period is halved to, for example, three seconds.
  • the minimum time period is halved again to, for example, one and one-half seconds.
  • the user can designate the length of time for which a particular image is to appear in the finished video presentation, because the video processing system uses the time information from the recorded data cards in controlling the video editing.
  • to extend a display in this manner, a user need only mark two frame cards identically, except that the "same photo" box 56 in the second card is marked.
  • the video processing system causes the display of the selected image for the sum of the time designated on the initial frame card, the time designated on the following frame card, and the time designated on any immediately succeeding frame cards that are marked in the same manner.
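  • The timing rules above reduce to a small lookup. The sketch below is a minimal Python illustration, assuming the example periods given here (six, twelve, three and one and one-half seconds) and assuming the final halving applies when both boxes are marked (a condition the text leaves implicit); the ticket field names are hypothetical.

```python
def frame_duration(double_play: bool, flash: bool) -> float:
    """Seconds a selected image is shown, per the timing boxes 60 and 62.
    The 1.5 s case for both boxes marked is an assumption; the text only
    says the minimum period is "halved again"."""
    if double_play and flash:
        return 1.5
    if double_play:
        return 12.0
    if flash:
        return 3.0
    return 6.0

def total_display_time(cards) -> float:
    # A run of identically marked "same photo" cards accumulates time.
    return sum(frame_duration(c["double_play"], c["flash"]) for c in cards)

# One frame card plus one identically marked "same photo" continuation card.
cards = [{"double_play": False, "flash": False},
         {"double_play": False, "flash": False}]
print(total_display_time(cards))  # 12.0 seconds
```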
  • Transition information may also be recorded by a user on the frame cards. The term transition means the manner that the visual image shifts from one designated image to another in the video presentation.
  • transition information is recorded by the user on topic zone cards as shown in Fig. 23. These topic zone cards are described below.
  • the topic zone cards are typically utilized to designate transitions from one photo or slide to another. However, they may also be used to designate transitions from one portion of a particular photo or slide to another portion of the same photo or slide.
  • the topic zone cards also indicate transitions from one motion picture source to another such source, or between motion picture sources and slides, photos or other still art.
  • the typical transition between one designated section of a photograph or slide and another designated section of the same photograph or slide involves the creation of a panning or animation path between the sections. This path is described below in connection with Fig. 21. Animation in this manner creates the impression of motion in the resulting video presentation.
  • the user may designate a "slow-zoom" option by marking a box 64 on the cards 32, 34, 36 and 37.
  • the video processing system responds to such designations by zooming a camera slowly toward the center of the selected frame area.
  • the frame cards 32, 34, 36 and 37 also include a caption recording block or box 66 within which a user may manually write, type or otherwise record a caption.
  • Such captions are included in the finished video presentation along with the video frame associated with the caption.
  • These captions are processed by the video editing system, as explained below, so as to overlie the video and typically appear on the lower one-third of the video image on a display screen.
  • to designate electronic insertion of a caption, a user may mark a markable "TS" box 68. In this case, during processing, the caption in the associated caption box 66 is processed for electronic insertion into the finished video presentation at the appropriate place.
  • the video processing system may include a library of still frames for inclusion in the finished video presentation.
  • the still frames may comprise background scenery, title lines, credit lines (for example indicating who assisted in producing the video presentation) and the like.
  • Virtually any type of desired information may be provided in a still frame library.
  • the recording cards 32, 34, 36 and 37 include a section 70 for designating that a particular frame is from the still frame library.
  • a user is provided with a catalog of possible still frame images, each image being identified by a particular reference number. An example of such a catalogue is described below in connection with Fig. 26c. Although other identifying indicia may of course be used, in Fig. 8a a combination of markable boxes 72 and numeral markable boxes 74 are provided for use in designating still frames from the catalog. Boxes 72 and 74 are merely marked to select a particular still frame image. Caption blocks or boxes 76 are provided in which the user can write, type or otherwise record desired captions for inclusion in the still frame. These captions are processed in the same manner as captions included in caption boxes 66. In addition, a "TS" box 78 may be marked by a user to designate the electronic generation and insertion of the caption from box 76 in the video presentation.
  • a user has substantial creative control over the sequencing and selection of visual images for inclusion in the finished video presentation.
  • the user also has substantial control over transitions between video images, as further described below in connection with the topic zone cards of Fig. 23.
  • the duration of the appearance of particular video images in the video presentation is also under the user's control.
  • the user may intersperse still frames and captions to further customize the finished video presentation.
  • the user can control audio to be included in the finished video presentation.
  • An alternate form of frame tickets or cards is shown in Fig. 8b. These cards are typically included on a single sheet of paper and are respectively designated as 80, 82 and 84. As the Fig. 8b form of cards has many elements in common with the Fig. 8a form, similar elements are numbered with the same numbers and therefore will not be discussed in detail. Also, for purposes of clarity, only the card 80 is numbered in Fig. 8b.
  • the Fig. 8b form of card does not have a preprinted frame number such as indicated in box 38 in Fig. 8a. Instead, the Fig. 8b form of card includes a series of markable boxes indicated at 86. These boxes are filled in with the number of a particular photograph or slide. If the same photo is used in a succeeding portion of the video presentation, the user merely marks the same number in boxes 86 of another of the frame cards. Consequently, the same photo and new photo boxes 56, 58 of the Fig. 8a form of card are eliminated.
  • a markable "still frame” box 88 is included in the cards 80, 82 and 84.
  • when the still frame box 88 is marked, the video processing system of the present invention abruptly changes the prior video image to the designated video image in the video presentation. In other words, panning or zooming between the two images is eliminated.
  • the data recording cards of Fig. 8b also may include the "TS" markable boxes 68 and 78 in the event the video processing system includes an electronic caption generator and inserter. Alternately, separate data cards may be used to indicate that, for example, all captions are to be electronically inserted into the video presentation.
  • the data recording cards of Fig. 8b include a markable "clip" box 90.
  • This box is marked to indicate that the data card refers to a clip of video, motion film, or other motion picture source material provided by the user.
  • Box 86 is filled in to indicate the number of the particular motion clip. The remaining information on the data recording card is then typically left blank, as not being applicable.
  • a clip ticket such as shown in Fig. 25 and described below, may be marked by a user to indicate the beginning and end portions of video material which is to be included in the video presentation.
  • Each of the slides, photographs, motion picture clips and other source material provided by the user is identified with a number which is the same as the frame number in box 38 of the corresponding Fig. 8a data recording card or cards or the same as the number indicated in box 86 of the corresponding Fig. 8b card or cards.
  • a sticker bearing the appropriate number may be affixed to the visual material for identification purposes.
  • as shown in Figs. 9 and 10, a user may have a photograph such as noted at 91, here a photo of a skier in mountainous terrain. However, the user may wish to use only a small portion of this overall photograph.
  • the present invention allows the user to select a specific portion of the photograph, denote the precise location of the selected portion, and to record and/or communicate that information.
  • the user places overlay sheet 10 on, or in front of, the photograph.
  • the amount of photograph 91 to be used is determined by which of apertures 12, 14, 16 and 18 are positioned over the selected portion of the photograph, recognizing that the apertures vary in size.
  • in this example, aperture 18 (frame size 12) is used.
  • the location is easily determined by noting at 92, 94 the position of the border or other selected reference on the photo relative to the alphabetic notations along the borders of overlay sheet 10.
  • the user merely reads the alphabetic notations on the horizontal and vertical borders as illustrated and then records the same on the data card associated with that photo.
  • the frame size has been marked as 1-2 (to denote frame size 12).
  • the lower, horizontal, border notation is marked on this card as AH.
  • the right side, vertical, border is marked on the card as LM. This conforms to the locations denoted by indicia 18b, 18£.
  • the card may be machine readable by an optical mark scanner to control a computer operated animation camera as explained below.
  • with this recording mechanism, one can precisely locate the selected portion of the photograph to place it on video tape for projection on a video screen 98, as illustrated in Fig. 10.
  • a user can easily take a photograph 91 and, with the use of an aperture in the transparent overlay sheet, select a portion of the photograph to be used. The position of the selected portion is noted by the notation indicia at the edges of the overlay sheet. This can be recorded and communicated easily by use of the described data recording cards. The recorded information can then be subsequently used by others to relocate that exact portion of the photograph.
  • the user has selected the portion of photo 91 which shows only the upper portion of the skier.
  • the precise location of this portion of the photo has been communicated by the border notation indicia and recorded. From this information, the same user, or another, can precisely relocate the selected portion for capture on video tape or some other media, if desired.
  • the selected portion has been put on video tape and is shown in replay.
  • the Fig. 9 caption block 66 of data recording card 32 has been filled in by the user with the words "Super Mom”. During preparation of the video presentation, this caption is processed so as to appear at 100 (Fig. 10) in the displayed video presentation. A shadow has been added to the caption during processing so that the displayed caption 100 has greater depth than the caption written in box 66. This is accomplished as explained below.
  • Figs. 11-16 illustrate additional uses of this system.
  • a photograph 110 including two subjects 112, 114 is shown.
  • sheet 10 is illustrated in a position overlaying photo 110 with aperture 16 positioned over subject 112.
  • this information can be communicated for subsequent video processing.
  • This enables the production of a video presentation with the segregated image 112 on video tape.
  • Image 112 can then be displayed on a video screen 115, as shown in Fig. 14.
  • when the framer is positioned over subject 114, as in Fig. 12, the video presentation then includes the video image shown in Fig. 15.
  • the notations developed in the Fig. 11 and Fig. 12 applications also serve to denote the relative positions of selected portions of the single work. This information on relative positions of multiple selected portions within a single work can be used to develop an animation or panning path for a camera to follow in scanning from the image in Fig. 11 to the image in Fig. 12.
  • a photograph 120 is depicted of a banana 122, an apple 124 and an orange 126.
  • a first section 128 of photograph 120 may be designated.
  • a second section 130 of photograph 120 may also be designated.
  • assume that photograph sections 128 and 130 are to be positioned in succession on the finished video presentation, and further assume that the still frame and slow-zoom boxes 88, 64 (Fig. 8b) have not been marked.
  • the overlay 10 defines the boundaries of camera motion, controlled during the video editing process, over a single photo. That is, the size and position coordinates of the begin frame section 128 and end frame section 130 define the beginning and end of areas from photograph 120 which are to be captured by the camera. Starting with section 128, the camera is controlled to follow an animation path and pan, or capture visual information, from section 128 to section 130. That is, the camera first captures frame section 128, the banana, on the video tape.
  • the intermediate image of the apple 124 is captured on the video tape.
  • Section 130 is captured.
  • the relationship of the selected frames 128 and 130 define the boundaries of the animation path and thus the intermediate portions of photograph 120 which are included in the finished video presentation.
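  • The animation path between the begin and end frame sections can be pictured as an interpolation of table position and lens setting. The following sketch assumes linear interpolation and hypothetical coordinate units; the patent does not prescribe a particular path computation.

```python
from dataclasses import dataclass

@dataclass
class FrameSection:
    x: float      # table X coordinate of the section (from the A-J marks)
    y: float      # table Y coordinate (from the K-T marks)
    zoom: float   # lens setting for the frame size

def animation_path(begin: FrameSection, end: FrameSection, steps: int):
    """Linearly interpolate camera position and zoom from the begin frame
    to the end frame, yielding one setpoint per step."""
    for i in range(steps + 1):
        t = i / steps
        yield FrameSection(
            x=begin.x + t * (end.x - begin.x),
            y=begin.y + t * (end.y - begin.y),
            zoom=begin.zoom + t * (end.zoom - begin.zoom),
        )

# Pan from section 128 (the banana) to section 130 (the orange); the
# intermediate setpoints sweep the camera across the apple.
for p in animation_path(FrameSection(2.0, 5.0, 1.5), FrameSection(9.0, 5.0, 1.5), 4):
    print(round(p.x, 2), round(p.y, 2), p.zoom)
```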
  • Fig. 13 illustrates a use of another overlay sheet 140 with larger transparent apertures 142, 144.
  • This overlay also includes a gridwork pattern imprinted thereon and alphabetic notations along its borders for use with these larger apertures. Apertures 142, 144 are sized so as to accommodate both subjects 112, 114. Information recorded using overlay 140 is communicated during video processing to produce a video presentation which displays the visual image illustrated in Fig. 16.
  • the data recording cards of Fig. 8b are used in the same manner as described above in connection with the cards of Fig. 8a.
  • slides may be processed as well, in which case the slide box 54 is marked.
  • the slide is then projected in the place of photograph 91 of Fig. 9.
  • the boundaries of the border of the projected slide are then recorded along with the frame size to designate the portion of the slide which is to be included in the video presentation.
  • a small sheet or screen 146 (Fig. 27) may be used in conjunction with framer 10 (Figs. 6 and 7) for this purpose.
  • Sheet 146 is square, may be eight inches by eight inches or other convenient size, and has perpendicularly disposed side and top reference edges 147, 148 which intersect at a reference point 149.
  • parallel spaced apart vertical lines 150 are provided on sheet 146. These lines, together with the upper and lower edges of sheet 146, enclose a vertically oriented region corresponding to the aspect ratio of a standard thirty-five millimeter slide projector.
  • similarly, parallel spaced apart horizontal lines 151 are provided on sheet 146 to enclose a corresponding horizontally oriented region for horizontal slides.
  • Specialty overlays, such as overlay 154 shown in Fig. 22, may also be utilized.
  • a user may want to position the woman's face from photograph 91 (Fig. 9) in the gorilla's head 156 of the overlay 154.
  • the frame size box 42 of the Figs. 8a and 8b data recording cards is marked with the designation 1, 2, 3.
  • the coordinate indicating boxes, 46 and 50 on the data cards are marked with the coordinates from overlay 154 which center the woman's face in the gorilla head.
  • the gorilla figure is obtained from a still frame store and superimposed over the designated portion of photograph 91.
  • Figs. 17 and 18 illustrate still another embodiment of the invention.
  • the aperture size is variable, while the height-to-width proportionality of the aperture is maintained in a preselected ratio, or proportionality.
  • a substantially transparent overlay sheet 160 is provided, having an essentially straight-line slot 162 formed therein. Arrayed along regions disposed substantially perpendicularly to each other and along the borders of sheet 160 are boundary, or border, symbolic indicia 166, 168 similar to those previously described along the borders of aperture sheet 10. A pair of elongate slits 170, 172 extend through portions of the overlay sheet 160 adjacent to boundary notations 166, 168 respectively. A second, or L-shaped, member 180 is mounted on overlay 160. Element 180 has perpendicularly disposed legs 182, 184. As is shown, these extend through slits 170, 172.
  • at the vertex or intersection of legs 182, 184 is means to aid in mounting element 180 on sheet 160 while allowing movement therebetween. This is better illustrated in Figs. 19 and 20. In Fig. 19, it can be seen that an enlarged portion 186 projects upwardly from the juncture of legs 182, 184. A holding portion 188, which is wider than slot 162, underlies the marginal edges of sheet 160 which bound slot 162. An elongate neck portion 190 joins portions 186, 188 and fits slidably in slot 162.
  • This construction allows the L-shaped element 180 to slide relative to sheet 160 along slot 162 in the direction indicated by dual-headed arrow 194. Movement of element 180 relative to sheet 160 varies the size of the aperture defined between slits 170, 172 and the inner edges of legs 182, 184, while maintaining a preselected height-to-width ratio, or proportionality, of the aperture.
  • Arrayed along leg 184 of element 180 are a series of numerical indicia which indicate the size of the aperture at any given position of element 180 relative to overlay sheet 160. For example, in Fig. 17 the aperture size would be noted as 24.
  • in Fig. 18, the aperture size has been reduced and would be indicated as an aperture size of 14.
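  • Because the sliding element preserves the height-to-width ratio, the aperture's dimensions follow directly from its size number. A minimal sketch, assuming a linear scale for the size indicia and a 4:3 video aspect ratio (both assumptions):

```python
def aperture_dimensions(size_number: float, units_per_size: float = 0.25,
                        aspect_w: float = 4.0, aspect_h: float = 3.0):
    """Width and height of the variable aperture for a given size number.
    units_per_size (the scale of the indicia) is hypothetical; the 4:3
    video aspect ratio is assumed."""
    width = size_number * units_per_size
    height = width * (aspect_h / aspect_w)
    return width, height

print(aperture_dimensions(24))  # (6.0, 4.5), e.g. inches, as in Fig. 17
print(aperture_dimensions(14))  # (3.5, 2.625), as in Fig. 18
```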
  • Use of the variable aperture size framer illustrated in Figs. 17-19 is similar to that previously described for the other forms of framer.
  • the framer apparatus and method set forth herein is useful, not only in the video production application described herein, but also in a variety of other areas. Such could include, but need not be limited to, commercial art, business graphics, and any and all other areas in which one wishes to select and/or segregate a portion of a work, locate its position, and record or communicate these for subsequent use.
  • a mechanism is provided by which a user can control additional parameters of the final video editing process.
  • Topic zone cards or tickets such as illustrated at 200 and 202, may be marked to record data concerning these parameters.
  • Each of these cards includes a block 204 which contains plural markable boxes for indicating the beginning of each topic zone.
  • a topic zone is comprised of a series of photographs or other art work related to a common topic, such as concerning a user's birthday.
  • a topic zone may consist of various images from a single photograph. For convenience, the end of a topic zone automatically occurs at the beginning of the next topic zone. Therefore, the end of a topic zone need not be separately indicated.
  • topic zone one begins at the user's first photograph. This is designated by marking the zero box in the left hand column of block 204 of card 200 and the one box in the right hand column of this block.
  • topic zone two begins at the user's fifth photograph. This is indicated by marking the zero box in the left hand column of block 204 of card 202 and the five box in the right hand column of this block.
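  • The two marked columns of block 204 thus encode a two-digit art-work number. A minimal decoding sketch (argument names are illustrative):

```python
def zone_begin(tens_mark: int, ones_mark: int) -> int:
    """Decode the two marked columns of block 204 into the number of the
    art work at which a topic zone begins."""
    return tens_mark * 10 + ones_mark

print(zone_begin(0, 1))  # topic zone one starts at photograph 1
print(zone_begin(0, 5))  # topic zone two starts at photograph 5
```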
  • Each of the topic zone cards also includes a series of markable "transition" boxes 206 labelled as "medley", "wipes", "dissolves" and "cut". These boxes are marked by the user to designate and control the manner in which the video processing equipment will shift from one photograph or piece of art work to the next within a topic zone.
  • transition effects are well known in the video editing art and are more particularly described as follows.
  • the "wipes” transition involves revealing a second video image through a moving geometric pattern or curtain as the prior image is in effect revealed on the video screen by the pattern.
  • the “dissolves” transition involves fading out the first image as the next image is faded in. This provides a soft transition effect.
  • the "cuts" transition involves replacing the first image by a second, image in a rapid blinking motion. This accentuates action videos, such as those of sporting events and is the transition most often used by professionals in broadcast television and motion pictures.
  • the "medley" box is marked to designate that a collage of the various other transition effects is to be used in shifting from image to image. This transition approach is automatically selected by the video edit control program if none of the other boxes are marked. Other transitions (i.e., digital video effects) may also be used by including appropriate markable boxes and corresponding editing equipment in the video processing system.
  • the topic zone cards 200, 202 also include a sound library section 208 which includes markable boxes that are filled in by a user to designate audio for inclusion as a background in the associated topic zones.
  • a user is provided with a catalog of audio selections ranging from medleys to individual songs. By marking the appropriate boxes corresponding to the identity of songs in the catalog, the user selects audio for the topic zone. For example, referring to the portion of an exemplary music library catalogue page shown in Fig. 26a, selection T00 comprises the song "Happy Birthday".
  • the topic zone cards 200, 202 additionally include personal sound source markable boxes indicated generally at 210. These boxes are marked when a user desires the inclusion of his or her own audio in the topic zone. Typically, this audio information is provided on audio cassette tape and is labelled with the same designation that is marked in block 210 for the associated topic zone. During video processing, the appropriate personal audio material is matched to the corresponding topic zone. As explained below, both a sound library and personal sound source selection may be made for a given topic zone. In this case, the two audio sources would overlay one another and be included in the video presentation as the video information for the topic zone is recorded.
  • Sound effect cards 212, 214 may also be included for use in recording the positions in the video presentation at which sound effects are desired. The cards 212, 214 are similar; therefore, only card 212 will be described in detail.
  • Card 212 includes a markable begin sound effect block 216 and an end sound block 218.
  • the boxes of block 216 are used to mark the number of the motion picture material, photograph, or slide at which a particular sound effect is to start in the video presentation.
  • the boxes of block 218 are marked to designate the number of the motion picture material, photograph, slide or other art at which the sound effect is to end.
  • for example, to begin a sound effect at the user's first photograph, the zero box in the left hand column of block 216 is marked and the one box in the right hand column of this block is marked.
  • to end the sound effect at the sixth item of art, the zero markable box in the lefthand column of block 218 is marked and the six markable box in the righthand column of block 218 is marked.
  • the user is provided with a catalog identifying possible sound effects for inclusion in the video presentation.
  • Markable boxes in a block 220 are provided with indicia corresponding to the indicia in the catalog for the sound effects.
  • markable sound effect box A0 is marked to select the sound effect of discharging fireworks.
  • by using topic zone and sound effect cards as shown in Figs. 23 and 24, users may designate combinations of audio from a sound library, personal sound source material, and sound effects for inclusion in a video presentation.
  • transitions between images of the video presentation may also be selected and recorded on the topic zone cards.
  • Clip tickets 221, 222 may also be provided for use in recording information concerning motion picture and video material provided by the user, assuming such material is to be included in the video presentation.
  • a set of such cards, identified by a serial number, is provided for each video tape.
  • a sticker or label bearing the serial number is affixed to the tape.
  • clip card 221 includes a markable begin block 223 for identifying the location, in the motion picture or video material supplied by a user, at which the segment to be included in the video presentation commences.
  • a markable end block 224 is provided for identifying the end of this segment. To generate this information, the user may simply monitor the counter on a conventional video player and note the count at which the desired segment commences. This count is recorded in markable block 223. Similarly, the count at the end of the segment may be recorded in block 224. This information is then used during video processing to locate the segment of video to be included in the video presentation. Motion picture material may be transferred by the user to video and, when played as described above, the start and stop counts for the desired segment may be located for marking on clip ticket 221. A second segment of video from the tape can be noted on clip ticket 222, and so on with additional clip tickets.
  • a user may first send motion picture or video material to the processing station described below. This material is then duplicated on a video tape which is marked in a conventional manner with timing information which is displayed on a television or other screen when the duplicate tape is played. The duplicate tape is then returned to the user, who then notes the precise starting and stopping points of the desired segment from the displayed timing information. These points are recorded on clip ticket 221. Thereafter, the video material and clip tickets are returned to the processing station. During processing, a video player runs the tape to the designated starting location in a conventional manner. At the appropriate time, the designated segment of video is then included in the video presentation. Clip ticket 221 also includes a "slow-motion" markable box 225.
  • when this box is marked by a user, the identified motion picture or video segment is played at a slow speed at the time it is incorporated into the finished video presentation.
  • a "freeze frame" markable box 226 is also included on the clip ticket. If this box is marked, a single frame of video, identified in block 223 of the clip ticket, is designated for inclusion in the finished video presentation. To precisely identify the selected frame, the above described precision timing approach is used as tape counters are typically not accurate enough to precisely identify a desired frame.
  • the clip tickets also include a still art designation section 227, usable like the section 70 of the Figs. 8a and 8b frame tickets to designate still art for inclusion in the video presentation.
  • Caption blocks 228, 229 are also provided on the clip tickets for use in recording captions for inclusion in the finished video presentation along with the identified clip. These captions are processed in the same manner as the captions recorded in blocks 66, 76 of the Figs. 8a and 8b frame tickets.
  • a user is provided with a kit containing instructions and a number of framers of various sizes, such as shown in Figs. 11 through 13.
  • the kit may also include specialty framers, such as shown in Fig. 22.
  • Frame tickets, cards or slips, such as shown in Figs. 8a and 8b, are also included in the kit.
  • the kit includes a number of topic zone cards and sound effect cards, such as shown in Figs. 23 and 24. If inclusion of user supplied motion picture or video into a video presentation is an available option, clip tickets are typically also included in the kit.
  • the instructions in the kit typically include a catalog or listing which identifies a number of still frame images, credit slide images and other art work.
  • These still frame images are selected by the user for inclusion into the video presentation through use of the still frame selection portion 70 of the Figs. 8a and 8b frame tickets.
  • the still frame selection boxes are marked with the designation U12 to designate the inclusion of a picture of a birthday cake in the video presentation.
  • the catalog also includes a list of selections available in the sound library and a list of sound effects as explained above. This audio material is selected and designated on the Figs. 23 and 24 topic zone and sound effect tickets.
  • using the illustrated framer and the data recording means, the user generates the basic instructions for controlling the editing of a video presentation.
  • the data recorded by users on the frame tickets, topic zone tickets, sound effect tickets and clip tickets is of a form which does not resemble edit decision lists normally used in the video production industry.
  • the user generated decisions are assembled into a set of software instructions in a proper format for controlling the editing of the final video presentation from the user's instructions.
  • the sound effect tickets, frame tickets, topic zone tickets and clip tickets are optically readable to facilitate the conversion process.
  • the group of frame tickets 230, topic zone tickets 232, sound effect tickets 234 and clip tickets 235 from a particular user are optically scanned by an optical mark reader 236, such as from Scan-tron Corporation of Los Angeles, California.
  • the output on line 238 from the scanner is fed to a personal computer 240, with conversion software which converts the raw data from the optical scanner into an edit control program.
  • This conversion software is identified as APES (Automated Photo Editing Software) conversion software in Fig. 1.
  • the resulting control program is identified as APES operating software 252 in Fig. 1, with components noted in this figure.
  • the position coordinates from the frame tickets 230 are converted by the APES conversion software in personal computer 240 into a format which can be used by the animation controller to move the animation stands and cameras mounted thereto. That is, the position on the art work is described by character strings of data set forth in blocks 46 and 50 in the Figs. 8a and 8b frame tickets. This data is converted to X and Y position coordinates of the animation stand.
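  • The conversion from ticket marks to stand setpoints can be sketched as a lookup plus scaling. In the sketch below, the grid pitch, the zoom table, and the treatment of two-letter border notations such as "AH" are all assumptions; the frame-size-to-zoom step is the one described next.

```python
# Border letters index grid positions on the overlay. How a two-letter
# notation such as "AH" indexes the grid is not specified, so this sketch
# treats each pair as a two-digit index (an assumption).
H_LETTERS = "ABCDEFGHIJ"   # boxes 46, horizontal axis
V_LETTERS = "KLMNOPQRST"   # boxes 50, vertical axis
GRID_PITCH = 1.0           # table units per grid step (hypothetical)
ZOOM_BY_FRAME_SIZE = {1: 4.0, 2: 3.0, 12: 1.5}  # illustrative lens settings

def pair_index(pair: str, alphabet: str) -> int:
    return alphabet.index(pair[0]) * len(alphabet) + alphabet.index(pair[1])

def ticket_to_setpoint(h_mark: str, v_mark: str, frame_size: int):
    """Convert one frame ticket's marks into an animation-stand X/Y
    position and a zoom-lens setting."""
    x = pair_index(h_mark, H_LETTERS) * GRID_PITCH
    y = pair_index(v_mark, V_LETTERS) * GRID_PITCH
    return x, y, ZOOM_BY_FRAME_SIZE[frame_size]

# The Figs. 9-10 example: border notations "AH" and "LM", frame size 12.
print(ticket_to_setpoint("AH", "LM", 12))  # (7.0, 12.0, 1.5)
```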
  • frame size, as defined by the simple number in block 42 on the frame tickets, is converted to a particular position of the zoom lens of the camera. Successive pieces of flat art work or slides are displayed in alternating order on the animation tables.
• Successive pieces of flat art work or slides are placed in alternating order on the animation tables. For example, a first photograph may be placed on the first table underneath the first camera while a second photograph is placed on the second table under the second camera.
• A third photograph is then placed on the first table following the recording of images from the first photograph.
• From this alternating arrangement, the APES conversion software assembles an ordered sequence of instructions for the respective animation stands and camera lenses, along the lines of the sketch below.
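A minimal sketch of that assembly step, assuming the even/odd alternation described above (the function and field names are hypothetical):

    # Hypothetical sketch: distribute successive pieces of flat art between
    # the two animation stands so one table can be reloaded while the other
    # is being filmed.
    def build_animation_sequences(photos):
        """photos: one list of stand/lens commands per piece of art, in order."""
        sequences = {"stand_a": [], "stand_b": []}
        for i, frame_commands in enumerate(photos):
            stand = "stand_a" if i % 2 == 0 else "stand_b"
            sequences[stand].extend(frame_commands)
        return sequences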
• This animation sequence is stored as the animation control program portion of the APES operating software, as indicated at 252 in Fig. 1. Under the control of an edit controller, as set forth below, the animation stands and lenses are stepped through the animation sequence during the production of the finished video presentation.
• The APES conversion software also assembles a still image or frame control program, which comprises a sequential list of still images from the library of stills (i.e. background images, credit slides, etc.) designated by the user on the various data tickets, such as frame tickets 230.
• This sequence of still images is also stored as a part of the APES operating software. Once organized into sequential form, the still frames are called up by the edit controller individually, as explained below, at the appropriate times for inclusion in the finished video presentation.
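A minimal sketch of the still frame list assembly; the ticket field name is an assumption for illustration:

    # Hypothetical sketch: gather library-still designations (e.g. "U12")
    # from the scanned tickets, in presentation order, for loading into the
    # still frame store.
    def build_still_frame_list(tickets):
        return [t["library_still"] for t in tickets if t.get("library_still")]

    print(build_still_frame_list([{"library_still": "U12"}, {}, {"library_still": "C3"}]))
    # ['U12', 'C3']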
• For captions which are to be electronically inserted, a character generator 242, such as a VP-1 character generator from Chyron Corporation of Melville, New York, may be used to generate the stored character data. As this list of captions is generated, the operator can verify its accuracy on a screen 246, which forms a part of the character generator 242. The character generator is interfaced, as indicated by line 248, with the personal computer 240.
• The transition and other edit information from the user's data tickets is also converted by the APES conversion software into a software edit control program compatible with the video processing equipment.
• Audio identifying information from the topic zone tickets 232 and sound effect tickets 234 is converted into an audio decision list, which is used to assemble a continuous audio bed for the video presentation. This audio decision list is typically integrated into the edit control program. If caption boxes 66 and 76 of the frame tickets (see Figs. 8a and 8b) are marked and electronic insertion of captions is not specified, the APES conversion software recognizes that a still frame containing the graphics or caption will subsequently be generated during video processing. The existence of this still frame is recognized by the optical scanner, and the frame is added to the still frame list in the correct order with the other still frames on the list.
• These caption frames are later retrieved from the still frame store apparatus and overlaid on video sources during production of the video presentation.
• Time information from the frame tickets is also monitored by the APES conversion software. From the above information, an edit decision list, or edit control program, is developed with actual running times. This decision list is used by an edit controller to control the various components of the video processing system to produce the finished video presentation.
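The running-time bookkeeping can be pictured as a simple accumulation. The sketch below is hypothetical; the event field names are assumptions, and times are plain seconds rather than SMPTE timecode for brevity:

    # Hypothetical sketch: stamp each edit event with record-in/record-out
    # times so the list carries actual running time.
    def build_edl(events):
        edl, clock = [], 0.0
        for ev in events:                     # events in presentation order
            entry = dict(ev, record_in=clock, record_out=clock + ev["duration"])
            edl.append(entry)
            clock = entry["record_out"]
        return edl

    edl = build_edl([{"source": "camera_a", "duration": 6.0, "transition": "dissolve"},
                     {"source": "still", "duration": 3.0, "transition": "cut"}])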
• The resulting software comprises the automated photo editing system (APES) operating software indicated at 252 in Fig. 1.
• This operating software contains the edit decision list, or edit control program, including the timing, transition and audio control information.
• The APES operating software also includes the stored animation control sequence, the stored still frame sequence and the stored character data.
• The APES operating software is thus generated off-line from the video processing equipment, so that generation of this software does not tie up the processing equipment. Because of this off-line preparation of the APES operating software, time delays associated with generating this program are eliminated from the actual video editing process. Consequently, video presentations may be prepared on a volume basis.
Video Presentation Completion Stage
• With reference to Figs. 2 through 5, the video processing portion of the system will next be described. More specifically, and with reference to the block diagram of Fig. 2, the video processing system 260 includes a personal computer 262 into which the APES operating software 252 is loaded.
• The animation control program, or sequence, is loaded from personal computer 262 to an animation controller 264, and the still frame sequence, or program list, is delivered from this personal computer to a still frame store apparatus 266.
• The edit decision list, or edit control program, is loaded from computer 262 into an edit controller 268.
• The edit controller is coupled to the animation controller, the still frame store and also to a destination tape recorder 277, which records the finished video presentation on a video tape or other media.
• The animation controller 264 is coupled to an animation camera stand 270, and also to a lens 273 of a camera 272 mounted on the stand.
• The animation controller controls the positioning of the stand so as to place the camera over the desired portion of a photograph or other art work.
• The animation controller also controls the zooming of the lens 273 to select the size of the image captured by the camera.
• The camera is connected to a video switcher circuit 274 of a switching circuit 276, the switcher 274 also being connected to the destination tape recorder 277.
• The edit controller 268 and the still frame store apparatus 266 are also coupled to video switcher 274.
• Edit controller 268 sends a "next sequence" instruction to animation controller 264 at times when camera 272 is to be shifted to capture a section or sections of a particular photograph, or a section of a different photograph.
• The captured video is delivered to the video switcher.
• Similarly, when the next still frame in the list is required, a "next sequence" instruction is delivered from the edit controller to still frame store 266, causing that still frame to be delivered to the video switcher.
• The video switcher 274 is commercially available and, in response to instructions from edit controller 268, performs the desired transitions between the successive images in the sequence.
• The resulting video material is delivered from video switcher 274 to the destination tape recorder 277 for inclusion in the finished video presentation.
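The on-line sequencing just described can be pictured as a dispatch loop. In this hypothetical sketch, the device objects are stand-ins for the serial interfaces named later in the text, and the method names are assumptions:

    # Hypothetical sketch: each edit decision triggers the appropriate
    # device, then the switcher makes the transition and the recorder lays
    # the result onto the destination tape.
    def run_edit(edl, animation, still_store, switcher, recorder):
        for entry in edl:
            if entry["source"].startswith("camera"):
                animation.next_sequence()     # shift stand and/or zoom lens
            elif entry["source"] == "still":
                still_store.next_sequence()   # advance to next stored frame
            switcher.transition(entry["transition"])
            recorder.record(entry["duration"])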
• One or more audio sources 278 may also be included in the processing system 260. These sources receive control signals from edit controller 268, and their output is coupled via an audio mixer 280 of switching circuit 276 to the destination tape recorder 277.
• The edit controller 268 is connected to the audio mixer as well, for controlling the audio mixer to mix audio from the various audio sources which are triggered to operate by the edit controller. As a result, audio information may be included in the destination tape simultaneously with video information from the video switcher. In the event the video processing system 260 is also to handle motion picture material, this material is typically first transferred to video. This transferred video source material, as well as other video source material, is played by one or more video sources 282 which are included in the system.
• Video sources 282 are coupled to and controlled by the edit controller.
• The video output of these sources is delivered by a line 284 to the video switcher 274 for incorporation into the destination tape.
• Audio from sources 282 is coupled via a line 286 to the audio mixer, and thus to the destination tape, in the event audio from the user supplied video material is to be included in the finished video presentation.
• The detailed operation of the video processing system 260 will become apparent with reference to the description of Figs. 3, 4 and 5, which proceeds as follows.
• Computer 262 is coupled to edit controller 268 for delivering the edit control program to the edit controller.
• Computer 262 delivers the animation sequence program to animation controller 264 by way of a data bus or link 290.
• The still frame store control program is also delivered via this bus to the still frame store apparatus 266.
• The stored character data 254 (Fig. 1) is delivered over bus 290 to a commercially available character generator inserter 292, such as one from Chyron Corporation.
• Frame tickets containing captions in caption boxes 66, 76 are then placed on animation tables 294a and 294b for filming by the respective cameras 272a and 272b.
• Other caption blocks, for example from clip tickets 235, are filmed as well.
• Each video frame containing a caption filmed by camera 272a is delivered via a line 296a to video switcher 274 and then via a line 298 to the still frame store apparatus 266. These caption frames are subsequently recalled at the appropriate times for inclusion in the finished video presentation.
• Similarly, each frame containing a caption filmed by camera 272b is delivered, via line 296b and the video switcher, to line 298 and then to the still frame store apparatus.
• The operator of the video processing system places the caption bearing data tickets on the respective animation tables 294a and 294b in the sequence they are to appear in the video presentation, so that they are stored in the still frame store apparatus in the correct order. Filming of the captions may be manually controlled. Alternately, control signals, delivered via a line 300 to the animation controller 264 from the edit controller 268, may control this caption filming subprocess.
• With the captions stored, video processing is ready to commence.
• The operator of the system cues the first two flat art images on the respective animation tables 294a and 294b and manually confirms that the rest of the system has been properly cued for start of the edit operation. If any device in the video processing system is not available when the edit controller calls for its operation, as explained below, editing is stopped at that point. The system is then re-cued for a clean edit after the equipment is repaired or otherwise made available.
• Each of the animation tables 294a and 294b is similar, so only table 294a will be described in detail.
• The animation tables are stationary.
• The animation tables define a fixed reference point 302 against which a corner of a photograph 304 or other flat art is placed.
• Right angularly disposed legs, or stops, 306, 308 project upwardly from the marginal edges of table 294a. The intersection of these legs and the table top defines the reference point 302.
• A photograph 304 may thus be quickly placed at the desired position on the table, against the stops 306, 308, so as to establish the reference point 302 at the corner of the photograph.
• The coordinates recorded in blocks 46 and 50 on the frame tickets correspond to coordinates relative to this reference point 302. Consequently, the desired portion of the photograph is readily determined once the flat art is placed on the animation table.
• This construction of the animation tables facilitates the automatic editing process performed by system 260. That is, the position of each piece of still art on the animation table 294a is automatically known by the video processing system and does not have to be individually programmed in. In addition, by positioning all flat art work at corner 302, substantially constant lighting of the art work is provided. That is, lighting typically varies somewhat over the surface of an animation table. Consequently, by always positioning art relative to a fixed reference, greater constancy of lighting is achieved.
• The animation table top 301 may comprise a projection screen.
• In this case, slides provided by the user are projected by a slide projector 310, off a mirror 312, onto the underside of screen 301.
• Alternatively, the slides can be reversed and projected directly onto the screen, without mirror 312.
• As a further alternative, the slides could be projected directly to the cameras. In each case, the slides are projected for capture by the associated video camera 272a in the same manner as photographs or other flat art work.
• The slide is projected at the same size as the image projected by the user at the time the coordinate data was generated, as previously explained in connection with Fig. 27. Consequently, the data pertaining to the slide on the user's frame tickets 230 (Fig. 1) correlates to the projected slide.
• This data is used, via the APES operating software, to capture the designated portion of the slide for inclusion in the finished video presentation.
• Alternatively, the user's frame tickets could include a place for recording the size of the projected slide and the corresponding coordinates of the designated portion of the image relative to a corner of the projected slide.
• In that case, projector 310 would be adjusted to project an image of the same size as when the user generated the coordinate data.
• Alternatively, the coordinate data could be adjusted by computer 240 during generation of the APES operating software to compensate for differences between the size of the image from which the user obtained coordinate data and the size of the image generated by projector 310.
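That compensation is a simple proportional rescaling. A minimal sketch, assuming both image sizes are measured along the same (long) dimension:

    # Hypothetical sketch: rescale the user's coordinates when the image
    # projected during processing differs in size from the image the user
    # measured against.
    def rescale_coords(x, y, user_size, processing_size):
        scale = processing_size / user_size   # ratio of the two image sizes
        return x * scale, y * scale

    print(rescale_coords(2.0, 3.0, user_size=8.0, processing_size=12.0))  # (3.0, 4.5)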
• Edit controller 268, which may comprise a 205T Edit Control System from Convergence, Inc., sends a command via line 300 and a general purpose, commercially available interface to the animation controller 264.
• The animation controller 264 comprises a separate animation controller for each of the animation stands 270a and 270b.
• Each of the animation controllers may comprise a model 2000 Animation Control System, with software, available from Cinetron Computer System Company of Norcross, Georgia.
• The animation stands 270a, 270b may comprise model AC20 animation control tables from Motion Tech Company of Reseda, California.
• The animation stands 270a, 270b are positioned above the respective tables 294a, 294b for supporting cameras 272a, 272b.
• Animation stand 270a is moved to position camera 272a at the desired X-Y position over the photograph 316.
• Lens 273a is controlled to capture the desired size frame from photograph 316, as determined from the user generated basic data.
• The captured first image is delivered via line 296a to the video switcher 274, which operates in response to control signals from edit controller 268 transmitted to the video switcher on a line 302.
• Video switcher 274 may comprise a model 100N video switcher available from Grass Valley Group of Grass Valley, California.
• This switcher is equipped with a serial interface, also available from this company, which couples the switcher to the edit controller.
• The video output from switcher 274 is delivered on a line 304 through the character generator inserter 292 and to a pair of outputs 306, 308 of the character generator.
• The character generator output 306 is connected to a VHS recorder 310, such as a JVC Model BR-8600 VHS play/record apparatus.
• Output 308 is connected to a three-quarter inch video recorder 312, such as a JVC model CR-850LU three-quarter inch play/record apparatus.
  • Edit controller 268 is connected via interfaces and lines 314, 316 to the respective recorders 310, 312 so that the edit controller controls the starting and stopping of these recorders during the preparation of video presentations.
• In some cases, recorder 312 is eliminated.
• Alternatively, a three-quarter inch destination tape is recorded by recorder 312 and then transferred to Beta, eight millimeter or other formats.
• Character generator 292 also includes a conventional shadow generator for automatically adding a shadow, or depth, to the caption to make it more distinctive when displayed on a video screen. Although shown separately in Fig. 3, character generator 292 may also be incorporated into the video switcher 274. Transitions between selected portions of the same photograph 316 are controlled by edit controller 268, which sends transition control signals on line 300 to the animation controller 264. These signals cause camera 272a to slow-zoom, still frame and otherwise make transitions between these images. For still frame transitions, recorders 310 and 312 are stopped as the camera 272a is shifted from focusing on one section of photograph 316 to another section of this photograph.
• Edit controller 268 controls the time during which camera 272a captures a particular section of photograph 316, as determined by the edit decision list obtained from the user timing information taken from the frame tickets 230.
• The video switcher 274 is controlled to accomplish the desired transition between two successive selections.
• Switcher 274 is designed to operate, in response to signals from edit controller 268 via line 302, to gradually fade one video input (i.e. the first selection) out while the other video input (i.e. the second selection) is faded in. This accomplishes the previously described "dissolve" transition.
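In signal terms, a dissolve is a timed crossfade of the two inputs. A minimal sketch, modeling frames as 2-D lists of luminance values purely for illustration:

    # Hypothetical sketch of a dissolve: over N steps the outgoing image
    # fades down while the incoming image fades up.
    def dissolve(frame_a, frame_b, steps=30):
        for i in range(steps + 1):
            t = i / steps                     # mix position, 0.0 -> 1.0
            yield [[(1 - t) * a + t * b for a, b in zip(ra, rb)]
                   for ra, rb in zip(frame_a, frame_b)]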
• Once the transition is complete, the first video selection is stopped.
• The operator of the video processing system then removes the first photograph 316 and places a third image on animation table 294a for subsequent filming.
• A tally light (not shown) alerts the operator to the status of the system, so that a particular photograph or image is not removed while it is still being filmed. This process continues until all of the images have been captured on the destination tape.
• For a "cut" transition, the input from line 296a is abruptly cut off while the input from line 296b is immediately started. This in essence replaces the first selection with the second selection in a blink.
• The "wipe" transition is also accomplished by the video switcher 274, which in effect gradually blocks the video input from line 296a while gradually revealing the input from line 296b. A geometric pattern, or curtain, reveals the image from line 296b on the screen as the image from line 296a is removed from the screen.
  • the "medley” transition involves a mixture of the other transition effects.
  • additional modes of transition between images may be provided.
  • the next image to be included in the video presentation comprises an image from the still frame library stored in the still frame store apparatus.
  • This apparatus may comprise an 842 dual channel frame store device from Abekas Company of Foster City, California, together with a model WD350 350 FRM/700 FLD disk drive also from this source.
• Edit controller 268 is connected via an interface and a line 324 to the still frame store apparatus 266. When a still frame of video is required, edit controller 268 triggers the still frame store 266 to advance to the next image in the still frame list stored in apparatus 266. This causes transmission of the image via line 298 to the switcher, which then delivers the image via line 304 to the recorder devices 310, 312.
• When a caption is to be included, edit controller 268 calls the next frame from the still frame store apparatus 266. This next frame comprises the desired caption, because the still frame list in the still frame apparatus is organized in the correct order.
• The frame containing the caption is transmitted via a line 326 to the video switcher 274.
• Video switcher 274 in turn overlays the video inputs from channels 298 and 326 to position the caption over the video.
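One common way to picture such an overlay is a simple luminance key, sketched below under that assumption (the disclosure does not specify the keying method):

    # Hypothetical sketch of a caption overlay: wherever the caption frame
    # is lit, its pixels replace the underlying video.
    def overlay_caption(video, caption, threshold=0.05):
        return [[c if c > threshold else v
                 for v, c in zip(rv, rc)]
                for rv, rc in zip(video, caption)]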
• Video switcher 274 may also include a commercially available borderline generator, for example from Grass Valley Group, for adding a shadow to the captions in a known manner.
• By providing two video input lines 298, 326 from the still frame apparatus to the video switcher, the video switcher has the capability, under control of edit controller 268, to cross fade and otherwise make transitions between two frames of video from apparatus 266.
• Overlays, such as the gorilla depicted in the Fig. 22 framer, may also be called from the still frame store for combining with video on one of the input lines 296a or 296b.
• Video switcher 274 is operable, in response to edit controller 268, to provide the same modes of transitions between information from the still frame store apparatus and other video sources as between video on lines 296a and 296b.
• Switching circuit 276 includes audio mixer 280, which may comprise a model AMX-100 stereo audio mixer circuit from Grass Valley Group. This audio mixer delivers audio from a variety of audio sources, via lines 330 and 332, to the respective recorders 310, 312. Consequently, audio may be recorded simultaneously with the video on the destination tape. Edit controller 268 is coupled to audio mixer 280 via an interface and a control line.
• In the illustrated system, three audio sources are included in the video processing system 260. These include first and second compact disk audio players 278a, 278b, such as model NCD-600 multi-CD players from Nikko Company of Japan. Outputs 336a and 336b from the respective audio players 278a, 278b are connected to audio mixer circuit 280.
• Edit controller 268 is connected to players 278a and 278b by interfaces and respective lines 338a and 338b.
• An audio cassette tape player 278c comprises the third audio source.
• Cassette player 278c may comprise a computer controllable audio cassette deck from Revox Company of Arlington, Tennessee. The output of cassette player 278c is connected via an interface and a line 336c to the audio mixer.
• Edit controller 268 is connected by an interface and a control line 338c to the cassette player.
• Audio players 278a and 278b each contain a full library of audio selections and sound effects corresponding to the audio selections identified in the topic zone and sound effect cards of Figs. 23 and 24.
• The edit controller monitors the audio which is to be included in the video presentation. For example, assume a first audio selection is to be used in connection with a particular series of images from photographs 316 and 318. Edit controller 268 cues audio player 278a to the next audio selection and then triggers this audio player to play the selection at the appropriate time. The audio material is then delivered via audio mixer 280 to the recorders for inclusion in the video presentation. In addition, audio player 278b may be cued to the next audio selection, whether it be additional background music or another series of sound effects, while audio player 278a is operated. Audio mixer 280 may mix background audio from audio player 278a or 278b with sound effects from the other audio player for inclusion in the destination tape.
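A minimal sketch of this two-player cue-ahead scheme; the player and mixer objects and their method names are hypothetical stand-ins for the CD players and mixer named above:

    # Hypothetical sketch: while one player plays the current selection,
    # the other is cued to the next, and the mixer combines the result for
    # the destination tape.
    def play_audio_bed(decision_list, player_a, player_b, mixer):
        players = [player_a, player_b]
        if decision_list:
            players[0].cue(decision_list[0]["selection"])
        for i, entry in enumerate(decision_list):
            current, standby = players[i % 2], players[(i + 1) % 2]
            if i + 1 < len(decision_list):
                standby.cue(decision_list[i + 1]["selection"])  # cue ahead
            current.play()                    # triggered at the EDL time
            mixer.mix(current)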
• When the user supplies personal audio material, cassette tape player 278c is used for this purpose. Specifically, edit controller 268 triggers this cassette player to play the personal audio information at the appropriate time, as found in the edit decision list generated from the basic user supplied data. Audio mixer 280 then includes the personal audio information with any other audio information from the other audio sources. This audio is combined with the video being recorded at recorders 310, 312.
• The illustrated system 260 also includes video sources 282 for processing video tapes, or motion pictures transferred to video, supplied by the user.
• A VHS video player 282a and a three-quarter inch video player 282b are provided.
  • Source 282a may be the same type of equipment as recorder 310.
• Source 282b comprises a model BVU-820 three-quarter inch play/recorder (slow-mo) from Sony Company. This source has the capability of playing video tape in slow motion as desired (i.e. if clip ticket box 225 of Fig. 25 is marked).
• A Beta format video player source 282c is also provided, such as a model SLO-1800 Betamax play/recorder from Sony.
• The time base correction circuits 339 may comprise model BVT-820 time base correctors from Sony Company. Such circuits compensate for stretching of video tape and other non-uniformities in the video output from the video sources. Therefore, when delivered to video switcher 274, the video from these sources arrives at the appropriate time for inclusion in the destination tape.
  • Edit controller 268 is coupled to the video sources by interfaces and respective lines 340a, 340b and 340c for controlling the playing of video from these sources.
• The audio outputs from these video sources are delivered by respective lines 286a, 286b and 286c to audio mixer 280. Therefore, if audio from this material is to be included in the destination tape, this information is passed to audio mixer 280 and then to the tape.
• These video sources enable a user to include segments of motion picture or video clips in the finished video presentation.
  • Clip tickets (Fig. 25) are used as explained above to designate the beginning and end of the segment to be transferred to the video presentation. Therefore, by selecting segments for inclusion into the video presentation, the user is not forced to dump undesired material from a motion picture or video tape onto the destination tape.
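A minimal sketch of segment playback from the clip ticket's in and out points; the source-deck method names are hypothetical stand-ins for the edit controller's interface to the video players:

    # Hypothetical sketch: play only the clip-ticket-designated segment of
    # a user supplied tape.
    def play_clip(source, start_timecode, end_timecode):
        source.cue(start_timecode)            # shuttle to the clip's in-point
        source.play()
        source.stop_at(end_timecode)          # out-point from the clip ticket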
• A number of audio and video monitors 350 are provided in the system 260 for monitoring the performance of the various components of the system.
• The system also includes a synchronization generator 352, such as a model 1410R Option 4 sync generator from Tektronix Company of Beaverton, Oregon.
  • Sync generator 352 provides a synchronization ("S") output at 354 for synchronizing video processing components of the video processing system.
• Output 354 is coupled to the edit controller 268, the video switcher 274, the time base circuits 339, the still frame store apparatus 266, the cameras 272a, 272b and the character generator 292, as shown, for synchronization purposes.
• The above described video processing system provides a unique way of producing video presentations from user generated basic editing instructions.
• All of the illustrated components are operated from the same data base, or APES operating software, generated off-line from system 260.
• In its basic form, system 260 involves the control of an animation stand, a still frame store source and various audio sources from a single edit controller.
• Simultaneous animation and editing of video presentations is accomplished by system 260. That is, during the editing process, the video output from cameras 272a, 272b is transferred on line, under the control of the edit controller, to a destination tape.
• The combination of audio, video, captions and animation is thus accomplished in one pass, on an essentially real time basis.

Abstract

A video production system and method in which users generate off-line a set of video-animation and editing instructions. These instructions designate parameters such as sections of photographs and sections of motion video and film sources for inclusion in a video presentation, transitions between video images, timing information, captions and audio. The user generated editing and animation instructions are converted into a form suitable for controlling a video processing system. The video processing system operates in response to the editing program to control the production of a finished video presentation. This system includes a single edit controller (268) which simultaneously controls the animation of still images, including portions of photographs and slides.

Description

VIDEO PRODUCTION SYSTEM AND METHOD
Technical Field
The present invention relates to video production systems. More particularly, the invention relates to an automated video production system in which (a) video editing instructions are prepared off-line, as by a layman, (b) these instructions are converted into an edit control program off-line; and (c) in which the resulting edit control program is used on-line to control complex electronic equipment and produce a professional quality video presentation.
In a typical prior art video production system, a photograph or other print is placed on an animation table which is movable horizontally in X and Y directions. A camera is mounted at a fixed location and a fixed distance above the table. The focal length of the camera lens is varied to capture particular sections of the photograph. With such a system, under the direction of a producer, a video technician typically stores the initial coordinates of the table and initial focal length of the camera lens. The table is then shifted to a new position and/or the lens focal length is adjusted. The new position data and/or camera focal length data is then stored. Working closely together, the producer and technician eventually develop and store a sequence of data corresponding to the moves of the animation table and the adjustments of the lens focal length. The stored sequence is run on a real time basis to provide a clip of film or video from the photograph on the animation table. Separate photographs are processed in this time consuming and labor intensive manner to provide a video clip from each of the photographs. Alternately, cameras have been mounted on complex articulated arm assemblies for movement in X, Y and Z directions to focus on a print. Again, an animation sequence is developed on a step by step basis, stored, and then run in real time to produce a clip of video as explained above. These clips of video are delivered to a separate editing station where, under the control of an edit controller, they are assembled into a video presentation with transitions being inserted between the clips. Frequently, audio is added to the video presentation as it is assembled. During the editing process, a producer normally oversees a technician who develops an edit control program in a step by step manner at a computer terminal which forms a part of the edit processing equipment. The completed edit control program is run to control this equipment and produce a video presentation. Typically, the video staff is kept busy switching between even two audio/video input sources (i.e. two video sources and two audio sources) and can be overwhelmed if faced with developing an edit control program for more than three such sources. Consequently, edit controllers have heretofore only been used to control a limited number of sources.
The known video production approach described above requires highly trained video technicians, editors and producers to combine information into a final video tape presentation. In addition, a two-step process is utilized in which the animated clips are first created at an animation station. This first step is followed by the assembly of the completed video presentation at a separate edit control station. This approach is labor intensive and time consuming. For example, it may take as much as thirty hours or more for a video production staff to produce a ten minute high quality video tape. In addition, the costs of such productions can be prohibitive.
Still other prior art video systems rely on digitization of still images before they are incorporated into a video presentation. Such systems are unable to produce satisfactory closeups from digitized still images due to image distortion and mosaic-like graininess resulting from the digitization process.
Recently, video transfer businesses have arisen for producing video tapes of photographs and movies. Although in video format, the resulting video presentation consists of a video tape of individual photographs which have not been edited to any significant extent. For example, selection of designated portions of photographs for inclusion in the finished video tape is typically not provided. If provided, it is left to the skill of a video technician who focuses a video camera on a photograph in hopes of capturing the desired part of the photograph. In addition, significant editing, such as adding audio or action (i.e. transitions between photographs), either cannot be accomplished in these systems or is extremely limited. Also, to the limited extent such editing is available, substantial amounts of labor, time and money are required to accomplish the editing. In addition, consumers or other individuals requesting video tapes of their photographs typically have little or no control over the editing process.
Therefore, a need exists for an improved video production system which overcomes these and other disadvantages of prior art systems.
Summary of the Invention
In accordance with the invention, a means is provided by which a consumer or other individual desiring a video presentation can record basic edit instructions for the finished video presentation. Although any data recording media may be used, the data is typically recorded on optically encoded sheets or tickets. The basic edit instructions are generated off-line, and typically at locations which are remote from the editing station at which the video presentation is produced. For example, an individual consumer may prepare these edit instructions at home, without the need for a technician or a computer terminal. The system is versatile and offers great flexibility to the user. That is, the system allows a user to customize the final video presentation by allowing the user control over a substantial number of editing variables. These variables include combinations of one or more of the following:
(a) The size and location of particular video images from slides, photographs or other still image sources which are to be included in the video presentation;
(b) The size and location of multiple video images from a single photograph or other still image source which are to be included in the video presentation;
(c) The motion or transition in the video presentation between selected images from a single image source and between selected images from different image sources;
(d) The inclusion and location of designated portions of movies, video tapes or other motion picture sources in the finished video presentation;
(e) The manner in which a transition is made in the video presentation between images from motion picture sources and from still image sources;
(f) The length of time that a particular image appears in the video presentation;
(g) The selection of image dressings in the video presentation, such as the superimposition of one image over another and the inclusion of specialty borders surrounding a particular image;
(h) The inclusion of identifying information in the video presentation, such as by way of captions and the like;
(i) The designation of the sequence of images;
(j) The selection of visual images for inclusion in the video presentation from a library of visual images;
(k) The selection of audio for inclusion in the video presentation from combinations of personally supplied audio material, a library of audio material, and sound effects.
To assist the user in developing the basic edit instructions, a unique and mechanically simple apparatus is provided for precisely defining a portion of a larger photograph or still image which is to be included in the video presentation. This apparatus is also used to define relative positions of multiple portions of a single work which are to be included in the video presentation. In the illustrated form, this apparatus comprises a novel framer for selecting and defining the position of a portion of a visual work. This is done in a manner that can be readily communicated to enable subsequent relocation of the selected portion in a very precise manner during production of the video presentation.
One illustrated form of a framer comprises an overlay having a visual aperture of a preselected size and proportion. This aperture is capable of being placed over the visual work and moved relative thereto to position the aperture over a desired portion of the work. Communication, or notation, indicia is positioned on the overlay for denoting the position of the overlay and its included aperture relative to boundaries, or reference portions, of the work. This indicia is then recorded on the data tickets by the user for communicating the same during subsequent stages of production of the video presentation. That is, the indicia permits precise relocation of the selected portion of the work at a later time. In another form, the framer has an aperture which is variable in size, yet which retains a consistent proportionality throughout its intended range of size variability.
The system includes an edit control program generating apparatus which converts the basic instructions from the user into a program for controlling the production of the video presentation. The edit program development portion of the system electronically reads, and in the case of optically encoded tickets optically scans, the basic edit instructions developed by the user. In addition, this portion of the system converts these instructions into the edit control program. The edit control program is developed off-line from the editing equipment. Consequently, the relatively expensive editing equipment is not tied up during the edit program development stage. This enhances the volume and rate at which video presentations may be produced by the editing apparatus. In addition, it permits the editing apparatus to control a large number of sources.
The basic video editing apparatus of the system includes an edit controller, a switching circuit, an animation controller and at least one animation stand to which an animation camera is mounted.
The animation stand supports the camera at a fixed distance from a still image source for movement in X and Y directions to desired positions over the image. The focal length of the camera lens is adjusted to capture the desired section of the image. Typically, at least two of such animation stands and cameras are provided to enable the cameras to capture successive photographs or other still visual image sources which are to be included in the final video presentation. The positions to which the animation stands are to be shifted, as well as the focal lengths of the camera lenses, are generated from the user's basic instruction list during the assembly of the edit control program. The position instructions and lens adjustment data are loaded in proper sequence into the animation controller. In response to the edit controller, the animation controller shifts the first animation stand through its desired positions and controls the first camera lens to obtain animated video from a first still image source. The edit controller controls the transfer of this information through the switching circuit to a recorder, which records the information on a destination video tape or other media as the first animation stand and first lens are shifted. After the desired information from the first still image source is obtained and transferred to the destination video tape, the second animation stand is operated to transfer desired video information from a second still image source to the destination tape while the first still image source is replaced with a third still image source. The edit controller, in response to the edit control program, controls the video switcher and animation controller to provide the designated transitions on the destination tape between images from the first, second and other still image sources. Thus, on a substantially real time basis, the final destination tape is produced. One or more audio sources may also be controlled by the edit controller, together with an audio mixer circuit within the switching circuit, to simultaneously include audio on the destination tape as video is recorded thereon. In addition, one or more motion video sources may also be controlled by the edit controller to mix video generated from still image sources and motion picture sources onto the destination tape. The edit controller may also control a still frame source apparatus and the switching circuit to incorporate captions, background frames and other video information, by way of the switching circuit, on the destination tape. An optional character generator means, controlled by the edit controller, may also be included for adding electronically generated captions to the destination tape.
Thus, a fully automated video editing system, responsive to an edit control program generated from a user's basic instruction set, is used to produce a finished video presentation in one continuous operation. It is therefore an overall object of the present invention to provide an improved video production system and method.
It is another object of the invention to provide such a system and method in which remotely generated editing instructions are used to assemble an edit control program off-line from a video editing apparatus, and in which the edit control program controls the operation of the editing apparatus to produce a video presentation in a single continuous procedure.
Another object of the present invention is to provide a means by which a consumer or layman may have control over the editing of a video presentation. A further object of the present invention is to provide a system and method by which a layman may determine basic editing instructions for a computerized system which produces a video presentation, without the need for a computer terminal or any computer programming experience.
A further object of the present invention is to provide a system and method for producing a video presentation in which a wide variety of editing decisions can be made to develop the presentation. Still another object of the present invention is to provide a cost effective, labor and time saving system and method for producing video presentations.
A still further object of the present invention is to provide a system and method for producing customized video presentations on a volume basis.
These and other objects, features and advantages of the present invention will become apparent with reference to the following detailed description and accompanying drawings.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of the edit control program assembly portion of the system, which is utilized in producing an edit control program for editing a video presentation;
Fig. 2 is a block diagram of a video editing portion of the system for controlling the production of a video presentation in response to the program generated by the apparatus of Fig. 1;
Fig. 3 is a detailed block diagram of the editing apparatus of Fig. 2;
Fig. 4 is an isometric view of one form of table for supporting photographs or other still images which are to be included in the video presentation;
Fig. 5 is a side elevational view of the table of Fig. 4, in which the table top comprises a screen, and shown with a slide projected onto the underside of the table top;
Fig. 6 is a view of one face of a form of overlay framer in accordance with an embodiment of the invention;
Fig. 7 is a view of the opposite face of the overlay of Fig. 6;
Figs. 8a and 8b illustrate two forms of a series of recording means used to record and communicate information determined by use of the framer of Figs. 6 and 7;
Figs. 9 and 10 illustrate a use of, and a product resulting from the use of, the system shown in the prior figures;
Figs. 11-16 illustrate further uses of, and results from the use of, the system shown in the prior figures;
Figs. 17 and 18 illustrate a modified embodiment of the overlay of Figs. 6 and 7;
Fig. 19 is an enlarged cross-sectional view taken along line 19-19 of Fig. 18;
Fig. 20 is a view taken along line 20-20 of Fig. 19;
Fig. 21 is a plan view of a photograph illustrating one form of transition between designated sections of the photograph;
Fig. 22 illustrates a further embodiment of the overlay of Figs. 6 and 7;
Fig. 23 illustrates one form of a series of recording means used to record and communicate information concerning transitions between images from different visual sources, as well as audio information, for inclusion in the video presentation;
Fig. 24 illustrates one form of a series of recording means used to record and communicate information concerning the inclusion of sound effects in the video presentation;
Fig. 25 illustrates one form of a series of recording means used to record and communicate information concerning segments of video or motion picture information to be included in the video presentation;
Figs. 26a, b and c illustrate portions of pages from a catalogue listing music, sound effects and still art which a user may select for inclusion in the video presentation; and
Fig. 27 illustrates one form of a projection sheet or screen which may be used in generating information to locate sections of a projected slide for inclusion in the video presentation.
Detailed Description of Preferred Embodiments
Overview
The system and method of the present invention comprises a three-stage process for producing video presentations. The first stage comprises a data generation and recording, or preassembly, stage. During this stage, users, who are frequently untrained in the art of preparing video presentations, record basic instructions for controlling the finished video presentation. Although data may be recorded in any media, such as on floppy disks or using touch panels or other data input devices, a simplified data recording approach is preferred. In the illustrated embodiment, data sheets, tickets, or slips, identified as frame tickets and topic zone tickets, are encoded by the user with the desired instructions. To facilitate downstream processing, the data is recorded in a manner that permits optical mark scanning, as by marking the desired information in a dark color or reflective material (i.e. with a graphite pencil) on the data tickets. With this approach, the user does not require expensive computers or other equipment to generate data. Moreover, consumers can easily complete the data recording tickets in their own homes, and thereby have creative control over the finished product.
Typically, user generated data for flat art, such as slides and photographs, is described by a frame size and set of characters which identify the position of a desired portion of the flat art to be captured in the finished video presentation. Frame size is typically defined by a simple number on the user's frame ticket. In addition, position, relative to a corner of the flat art, is described by coordinate identifying data recorded by the user on the frame tickets.
The data tickets further define information concerning desired transitions from one image within a piece of flat art to another image in the same piece of art, as well as transitions between images from successive pieces of art. Movies or other motion picture sources may be included in the finished video presentation by identifying these sources on the data recording tickets, and also identifying the segment of the motion picture source to be included. More specifically, clip tickets may be used to identify the segment which is to be included in the video presentation. As an added option, the user may select images from a library of still images for inclusion in the video presentation. In the illustrated system, the user may also record captions on the data tickets for inclusion in the video presentation. In addition, as an added option, the user may designate that the captions be electronically inserted at the appropriate places in the presentation. Audio information can also be selected by the user for inclusion in the finished video presentation. The user has the flexibility of designating audio from a library of music or other audio, from the user's own personally supplied audio, as well as sound effects from a library of sound effects.
In short, the user generates and records a sequence of basic instructions for the finished video presentation. This raw data is generated at the user's convenience and does not tie up expensive video processing equipment.
The second stage of the system and method comprises a data conversion stage during which an edit control program is developed or assembled. The assembly of the edit control program is accomplished off-line from the video editing equipment so as to further avoid unnecessarily tying up expensive editing equipment. In general, the data recorded by the user on the data tickets is in a form which does not resemble edit decision lists normally used in the video production industry. During the intermediate edit control program development stage, this basic data, comprising the decisions indicated on a user's data tickets, is converted into a set of software instructions for distribution to various pieces of video processing equipment. This edit control program assembly stage is described below in connection with Fig. 1. Of course, this second stage would be eliminated if data is recorded directly in a form which is usable by the editing equipment.
The final or third stage of the system and method comprises the video presentation completion stage. During this last stage, video editing equipment, as well as video animation equipment, operate under the control of an edit controller programmed with the edit control program to complete the finished video tape. The video editing process is accomplished in essentially one continuous pass as all video and audio components of the finished video tape, as well as animation of video information, is accomplished under the control of a single edit controller. This third stage of the system is described below in connection with Figs. 2-5.
Data Generation and Recording (Pre-Assembly Stage)
As mentioned above, the system of the present invention includes a mechanism for selecting and designating portions of still visual images for inclusion in a finished video presentation. The still visual images typically comprise flat art work such as still photographs, slides, text and printed art from magazines and the like, as well as from other sources. Preferred embodiments of one type of such a mechanism are illustrated in the figures. However, the system is not limited to the specifically illustrated mechanisms. Nevertheless, the illustrated forms facilitate the development of basic editing instructions by consumers and others who have not been trained in the art of preparing video presentations. In addition, video screens have generally defined height-to-width proportionalities. Therefore, the proportions of the selected parts of the work discussed herein are illustrated and described in height-to-width ratios corresponding to those found on video display screens. It should be recognized, however, that this invention has applications far beyond those described herein. Finally, for convenience, the term "frame" when used in this document is to be understood to include both frames and individual fields of video.
Referring first to Figs. 6 and 7, these figures illustrate opposite sides of a transparent unitary framing overlay sheet 10. The sheet includes four transparent visual apertures 12, 14, 16 and 18 defined by borders 12a, 14a, 16a, and 18a.
Adjacent the boundaries of each of these apertures is a notation of the frame size number (i.e. the number 1 for aperture 12) on one side of the sheet; on the other side of the sheet is the notation "Wrong Side." These notations indicate to the user which side of the sheet should be facing up (toward the user) during use. Thus, the user will see the frame size number of the aperture being used when the overlay is properly positioned. Each of the apertures is of a different size and, in the illustrated example, is proportioned in the same height-to-width ratio as a common video display screen.
Remainder portions of transparent sheet 10, intermediate apertures 12, 14, 16 and 18, are, for convenience, imprinted with a plurality of perpendicularly disposed horizontal and vertical gridlines 22, 24, respectively.
Arrayed around the border of sheet 10 are a series of letters in ascending order, each of which is adjacent the end of one of the gridlines. These constitute one form of communication, or notation, indicia. Each aperture has its own discrete set of such notations. For example, aperture 12 has indicia 12b, 12c associated therewith. Apertures 14, 16 and 18 have respective indicia 14b, 14c, 16b, 16c and 18b, 18c. Although alphabetic indicia are illustrated, numeric or other symbolic indicia by which information can be indicated may be used as well. As a more specific example, in Fig. 7, it should be noted that adjacent the aperture 18 (frame size 12) are alphabetic indicia AE-CH, extending along the horizontal axis, and alphabetic notations KL-QR, extending along the vertical axis.
Referring to Fig. 8a, a plurality of recording, or communicating, means 32, 34, 36 and 37 are illustrated. These include elongate cards, or slips, having boxes defined thereon into which information can be entered. Typically, a number of these cards are included on a single sheet of paper. In the upper left-hand corner of card 32 is a picture frame notation 38, used to denote the picture to which the particular recording card refers. Numerals 1-6 are positioned in the central region of the card in markable boxes at 42. These numerals correspond to the frame size notations, such as previously described for apertures 12, 14, 16 and 18 (i.e. aperture 16 is frame size 12).
To the left of numerals 42 are alphabetic notations A-J in markable boxes 46. Above the numeral designations 42 are alphabetic notations K-T in markable boxes 50. As described below, the boxes 46 and 50 are used to designate the coordinates of the border, or other selected reference, on a photo which place the desired portion of the photo within the designated aperture. The boxes 42, 46 and 50 relate to a "custom frame" designation, in which the user is selecting a particular portion of a visual image for inclusion in the finished video presentation. The data recording cards also include an "auto frame" markable box 52. This box is marked when the user desires to include the largest possible portion of the visual image, within the screen aspect ratio, in the finished video tape.
A markable box 54 is also included in each of the cards for designating whether the particular visual image associated with a particular frame number is a slide. If so, this box 54 is marked. In addition, vertical and horizontal markable boxes 55 (Fig. 8b) may be provided for a user to indicate whether a slide is vertical (its long dimension being vertical) or horizontal (its long dimension being horizontal). In this case, a user is typically instructed to project a slide on a screen so that the projected image has a long dimension which is a particular size (i.e. eight inches). To designate a particular portion of the projected slide for inclusion in the finished video tape, the framer 10 (Fig. 9) is positioned over the projected image with the appropriate aperture bounding the desired portion of the projected slide. Boxes 42, 46 and 50 are then marked to indicate the size and coordinates of the selected portion of the slide, relative to a reference point on the projected slide, such as the upper left-hand corner of the projected image. During subsequent processing of the video presentation, the slide is projected in the same manner. Then, the frame size and coordinates are used to identify the selected portion of the slide for inclusion in the video presentation. Data for use in processing slides may also be generated as explained below in connection with the screen of Fig. 27. The Fig. 8a data recording cards 34, 36 and 37 (i.e. those data cards after the first) are provided with markable boxes 56, 58. These boxes are used to indicate whether the particular card refers to the same photo as the prior card (in which case box 56 is marked) or a new photo (in which case box 58 is marked). To designate a different section of the same photo or slide for inclusion in the finished video presentation, box 56 is marked and the coordinates of the new section of the same photo are indicated in boxes 42, 46 and 50.
The data recording cards also include a means by which timing information can be indicated, that is, the length of time which a particular image is to appear in the finished video presentation. In the illustrated embodiment, timing information for each of the data cards 32, 34, 36 and 37 is recorded by marking respective boxes 60, 62 on each of these cards. Box 60 is designated a "double play" box, while box 62 is designated a "flash" box. The time periods may of course be varied to fit a particular application. Typically, however, if neither of these boxes 60, 62 is marked, the selected image is played for a first time period, for example six seconds. In contrast, if the "double play" box 60 is marked, the first time period is doubled to, for example, twelve seconds. Similarly, if the flash box 62 is marked, the initial time period is halved to, for example, three seconds. On the other hand, if both boxes 60 and 62 are marked, the minimum time period is halved again to, for example, one and one-half seconds.
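These timing rules reduce to a small lookup. A minimal sketch, using the example six-second base period from the description:

    # Sketch of the timing rules: unmarked -> 6 s, "double play" -> 12 s,
    # "flash" -> 3 s, both marked -> 1.5 s (the example values above).
    def ticket_duration(double_play, flash, base=6.0):
        if double_play and flash:
            return base / 4                   # 1.5 s in the example
        if double_play:
            return base * 2                   # 12 s
        if flash:
            return base / 2                   # 3 s
        return base                           # 6 s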
Thus, the user can designate the length of time which a particular image is to appear in the finished video presentation, because the video editing or processing system uses the time information from the recorded data cards in controlling the video editing. To increase the time duration still further, a user need only mark two frame cards identically, except that the "same photo" box 56 in the second card is marked. In this case, the video processing system causes the display of the selected image for the sum of the time designated on the initial frame card, the time designated on the following frame card, and the time designated on any immediately succeeding frame cards that are marked in the same manner. Transition information may also be recorded by a user on the frame cards. The term transition refers to the manner in which the visual image shifts from one designated image to another in the video presentation. Additional transition designations are recorded by the user on topic zone cards as shown in Fig. 23. These topic zone cards are described below. The topic zone cards are typically utilized to designate transitions from one photo or slide to another. However, they may also be used to designate transitions from one portion of a particular photo or slide to another portion of the same photo or slide. Likewise, the topic zone cards also indicate transitions from one motion picture source to another such source, or between motion picture sources and slides, photos or other still art. In the absence of another transition designation, the typical transition between one designated section of a photograph or slide and another designated section of the same photograph or slide involves the creation of a panning or animation path between the sections. This path is described below in connection with Fig. 21. Animation in this manner creates the impression of motion in the resulting video presentation. As another form of transition, the user may designate a "slow-zoom" option by marking a box 64 on the cards 32, 34, 36 and 37. The video processing system responds to such designations by zooming a camera slowly toward the center of the selected frame area. The frame cards 32, 34, 36 and 37 also include a caption recording block or box 66 within which a user may manually write, type or otherwise record a caption. Such captions are included in the finished video presentation along with the video frame associated with the caption. These captions are processed by the video editing system, as explained below, so as to overlie the video and typically appear on the lower one-third of the video image on a display screen. As another option, a user may mark a markable "TS" box 68. In this case, during processing, the caption in the associated caption box 66 is processed for electronic insertion into the finished video presentation at the appropriate place.
As an additional feature, the video processing system may include a library of still frames for inclusion in the finished video presentation. The still frames may comprise background scenery, title lines, credit lines (for example indicating who assisted in producing the video presentation) and the like. Virtually any type of desired information may be provided in a still frame library. The recording cards 32, 34, 36 and 37 include a section 70 for designating that a particular frame is from the still frame library. Typically, a user is provided with a catalog of possible still frame images, each image being identified by a particular reference number. An example of such a catalog is described below in connection with Fig. 26c. Although other identifying indicia may of course be used, in Fig. 8a a combination of markable boxes 72 and numeral markable boxes 74 are provided for use in designating still frames from the catalog. Boxes 72 and 74 are merely marked to select a particular still frame image. Caption blocks or boxes 76 are provided in which the user can write, type or otherwise record desired captions for inclusion in the still frame. These captions are processed in the same manner as captions included in caption boxes 66. In addition, a "TS" box 78 may be marked by a user to designate the electronic generation and insertion of the caption from box 76 in the video presentation.
Therefore, by using the data recording tickets 32, 34, 36 and 37, a user has substantial creative control over the sequencing and selection of visual images for inclusion in the finished video presentation. The user also has substantial control over transitions between video images, as further described below in connection with the topic zone cards of Fig. 23. In addition, the duration of the appearance of particular video images in the video presentation is also under the user's control. Moreover, the user may intersperse still frames and captions to further customize the finished video presentation. As also explained below in connection with Fig. 23, the user can control audio to be included in the finished video presentation. An alternate form of frame tickets or cards is shown in Fig. 8b. These cards are typically included on a single sheet of paper and are respectively designated as 80, 82 and 84. As the Fig. 8b form of cards has many elements in common with the Fig. 8a form, similar elements are numbered with the same numbers and therefore will not be discussed in detail. Also, for purposes of clarity, only the card 80 is numbered in Fig. 8b. The Fig. 8b form of card does not have a preprinted frame number such as indicated in box 38 in Fig. 8a. Instead, the Fig. 8b form of card includes a series of markable boxes indicated at 86. These boxes are filled in with the number of a particular photograph or slide. If the same photo is used in a succeeding portion of the video presentation, the user merely marks the same number in boxes 86 of another of the frame cards. Consequently, the same photo and new photo boxes 56, 58 of the Fig. 8a form of card are eliminated.
To provide additional transition information, a markable "still frame" box 88 is included in the cards 80, 82 and 84. When the still frame box is marked by a user, as explained below, the video processing system of the present invention abruptly changes the prior video image to the designated video image in the video presentation. In other words, panning or zooming between the two images is eliminated. Although not shown, the data recording cards of Fig. 8b also may include the "TS" markable boxes 68 and 78 in the event the video processing system includes an electronic caption generator and inserter. Alternately, separate data cards may be used to indicate that, for example, all captions are to be electronically inserted into the video presentation. The data recording cards of Fig. 8b include a markable "clip" box 90. This box is marked to indicate that the data card refers to a clip of video, motion film, or other motion picture source material provided by the user. Box 86 is filled in to indicate the number of the particular motion clip. The remaining information on the data recording card is then typically left blank, as not being applicable. In addition, a clip ticket, such as shown in Fig. 25 and described below, may be marked by a user to indicate the beginning and end portions of video material which is to be included in the video presentation. Each of the slides, photographs, motion picture clips and other source material provided by the user is identified with a number which is the same as the frame number in box 38 of the corresponding Fig. 8a data recording card or cards or the same as the number indicated in box 86 of the corresponding Fig. 8b card or cards. A sticker bearing the appropriate number may be affixed to the visual material for identification purposes.
In describing a method of operation using the apparatus, reference is made to Figs. 9 and 10. A user may have a photograph such as noted at 91, here a photo of a skier in mountainous terrain. However, the user may wish to use only a small portion of this overall photograph. The present invention allows the user to select a specific portion of the photograph, denote the precise location of the selected portion, and record and/or communicate that information.
Referring to Fig. 9, the user places overlay sheet 10 on, or in front of, the photograph. The amount of photograph 91 to be used is determined by which of apertures 12, 14, 16 and 18 is positioned over the selected portion of the photograph, recognizing that the apertures vary in size. In this illustration, aperture 18 (frame size 12) is used. When the user has positioned the aperture to set off the desired section of the photograph, the location is easily determined by noting at 92, 94 the position of the border or other selected reference on the photo relative to the alphabetic notations along the borders of overlay sheet 10.
The user merely reads the alphabetic notations on the horizontal and vertical borders as illustrated and then records the same on the data card associated with that photo. In this instance, it will be noted that on card 32, in Fig. 9, the frame size has been marked as 1-2 (to denote frame size 12). Also, the lower, horizontal, border notation is marked on this card as AH. In addition, the right side, vertical, border is marked on the card as LM. This conforms to the locations denoted by indicia 18b, 18c. By using a manually marked communicating and recording medium as noted herein, the information as to the size and position of the selected portion of the photograph can be transmitted easily and precisely for further use. For example, the card may be machine readable by an optical mark scanner to control a computer operated animation camera as explained below. By this recording mechanism, one can precisely locate the selected portion of the photograph to place it on video tape for projection on a video screen 98, as illustrated in Fig. 10.
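Conceptually, the marks on one card reduce to a small fixed record. The sketch below is a hypothetical rendering of such a record (the actual cards are optically scanned rather than typed); the field names are illustrative only, and the values mirror the example just given.

```python
from dataclasses import dataclass

@dataclass
class FrameTicket:
    """One frame card, as a hypothetical scanned record."""
    frame_number: int   # matches the sticker on the photo or slide
    frame_size: int     # e.g. 12, from the numeral boxes 42
    horizontal: str     # e.g. "AH", lower-border notation
    vertical: str       # e.g. "LM", right-border notation

# The card 32 example of Fig. 9:
ticket = FrameTicket(frame_number=1, frame_size=12,
                     horizontal="AH", vertical="LM")
```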
Explaining further, a user can easily take a photograph 91 and, with the use of an aperture in the transparent overlay sheet, select a portion of the photograph to be used. The position of the selected portion is noted by the notation indicia at the edges of the overlay sheet. This can be recorded and communicated easily by use of the described data recording cards. The recorded information can then be subsequently used by others to relocate that exact portion of the photograph.
In the example set out above, the user has selected the portion of photo 91 which shows only the upper portion of the skier. The precise location of this portion of the photo has been communicated by the border notation indicia and recorded. From this information, the same user, or another, can precisely relocate the selected portion for capture on video tape or some other media, if desired. In Fig. 10, the selected portion has been put on video tape and is shown in replay.
The Fig. 9 caption block 66 of data recording card 32 has been filled in by the user with the words "Super Mom". During preparation of the video presentation, this caption is processed so as to appear at 100 (Fig. 10) in the displayed video presentation. A shadow has been added to the caption during processing so that the displayed caption 100 has greater depth than the caption written in box 66. This is accomplished as explained below.
Figs. 11-16 illustrate additional uses of this system. In Fig. 11, a photograph 110 including two subjects 112, 114 is shown. In this figure, sheet 10 is illustrated in a position overlaying photo 110 with aperture 16 positioned over subject 112. In this manner, the user can select a portion of one subject in the photograph. By noting the horizontal and vertical alphabetic notations along the border of framer 10, as previously described, this information can be communicated for subsequent video processing. This enables the production of a video presentation with the segregated image 112 on video tape. Image 112 can then be displayed on a video screen 115, as shown in Fig. 14. Similarly, by moving overlay sheet 10 to a position shown in Fig. 12, where aperture 18 overlies subject 114, and noting the position of the overall work in relation to the boundary notations, subject 114 can be segregated on video from the rest of photograph 110. When displayed, the video presentation then includes the video image shown in Fig. 15.
The notations developed in the Fig. 11 and Fig. 12 applications also serve to denote the relative positions of selected portions of the single work. This information on relative positions of multiple selected portions within a single work can be used to develop an animation or panning path for a camera to follow in scanning from the image in Fig. 11 to the image in Fig. 12.
This is more clearly shown and described with reference to Fig. 21. In this figure, a photograph 120 is depicted of a banana 122, an apple 124 and an orange 126. Using the frame tickets as previously described, a first section 128 of photograph 120 may be designated. Similarly, a second section 130 of photograph 120 may also be designated. Assume that photograph sections 128 and 130 are to be positioned in succession on the finished video presentation and further assume that the still frame and slow-zoom boxes 88, 64 (Fig. 8b) have not been marked.
In this case, the overlay 10 defines the boundaries of camera motion, controlled during the video editing process, over a single photo. That is, the size and position coordinates of the begin frame section 128 and end frame section 130 define the beginning and end of areas from photograph 120 which are to be captured by the camera. Starting with section 128, the camera is controlled to follow an animation path and pan or capture visual information
from section 128 to section 130. That is, the camera first captures frame section 128, the banana, on the video tape. As the video camera of the processing system is moved toward the orange or end frame 130, as indicated by arrows 132, the intermediate image of the apple 124 is captured on the video tape. Finally, section 130 is captured. The relationship of the selected frames 128 and 130 defines the boundaries of the animation path and thus the intermediate portions of photograph 120 which are included in the finished video presentation.
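The animation path amounts to an interpolation between the begin and end frame coordinates. The following sketch assumes a straight-line path and invented coordinates; it merely illustrates how intermediate positions, such as the apple at 124, come to be captured.

```python
def pan_path(begin, end, steps):
    """Yield interpolated (x, y, zoom) camera positions from the begin
    frame section to the end frame section (linear path assumed)."""
    (x0, y0, z0), (x1, y1, z1) = begin, end
    for i in range(steps + 1):
        t = i / steps
        yield (x0 + t * (x1 - x0),
               y0 + t * (y1 - y0),
               z0 + t * (z1 - z0))

# e.g. pan from section 128 (the banana) to section 130 (the orange);
# every intermediate step would drive the animation stand and zoom lens.
for x, y, zoom in pan_path((2.0, 3.0, 12), (9.0, 3.5, 12), steps=120):
    pass
```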
Fig. 13 illustrates a use of another overlay sheet 140 with larger transparent apertures 142, 144. This overlay also includes a gridwork pattern imprinted thereon and alphabetic notations along its borders for use with these larger apertures. Apertures 142, 144 are sized so as to accommodate both subjects 112, 114. Information recorded using overlay 140 is communicated during video processing to produce a video presentation which displays the visual image illustrated in Fig. 16.
With further reference to Fig. 9, the data recording cards of Fig. 8b are used in the same manner as described above in connection with the cards of Fig. 8a. In addition, as explained above, slides may be processed as well, in which case the slide box 54 is marked. The slide is then projected in the place of photograph 91 of Fig. 9. The boundaries of the border of the projected slide are then recorded along with the frame size to designate the portion of the slide which is to be included in the video presentation.
In addition to the approach for designating portions of slides described previously, a small sheet or screen 146 (Fig. 27) may be used in conjunction with framer 10 (i.e. Figs. 6 and 7) for this purpose. Sheet 146 is square, may be eight inches by eight inches or other convenient size, and has perpendicularly disposed side and top reference edges 147, 148 which intersect at a reference point 149. In addition, parallel spaced apart vertical lines 150 are provided on sheet 146. These lines, together with the upper and lower edges of sheet 146, enclose a vertically oriented region corresponding to the aspect ratio of a standard thirty-five millimeter slide projector. Similarly, parallel spaced apart lines 151 are provided on sheet 146. These lines, together with the side edges of the sheet 146, enclose a horizontally oriented region, which also corresponds to the aspect ratio of the projector. A user projects a slide on sheet 146 until the slide fills the enclosure defined by lines 150 (for vertical slides) or by lines 151 (for horizontal slides). A framer, such as shown in Fig. 6, is then placed over the screen to place the desired portion of the slide within the aperture of the framer. The coordinates on the framer which intersect the edges 147 and 148 are then recorded along with the frame size. Then, during video processing, the slides are projected to the same size and the designated portions of the projected slide are included in the video presentation.
Specialty overlays, such as overlay 154 as shown in Fig. 22, may also be utilized. For example, assume that a user is preparing a "Halloween" or other novelty video presentation. In this case, the user may want to position the woman's face from photograph 91 (Fig. 9) in the gorilla's head 156 of the overlay 154. To accomplish this, the frame size box 42 of the Figs. 8a and 8b data recording cards is marked with the designation 1, 2, 3. In addition, the coordinate indicating boxes 46 and 50 on the data cards are marked with the coordinates from overlay 154 which center the woman's face in the gorilla head. During video processing, as explained below, the gorilla figure is obtained from a still frame store and superimposed over the designated portion of photograph 91. As a result, the woman's face appears in the gorilla head in the finished video presentation. Other borders or accents to photographs and slides can be generated in the same manner by using other specialty overlays. Thus, with these overlay devices and methods, a user can precisely select and locate a portion of a visual work and record or communicate the same for subsequent use. Also, other information for editing a finished video presentation can likewise be recorded. Figs. 17 and 18 illustrate still another embodiment of the invention. In this instance, the aperture size is variable, while the height-to-width proportionality of the aperture is maintained in a preselected ratio, or proportionality.
Explaining this embodiment in greater detail, a substantially transparent overlay sheet 160 is provided, having an essentially straight-line slot 162 formed therein. Arrayed along regions disposed substantially perpendicularly to each other and along the borders of sheet 160 are boundary, or border, symbolic indicia 166, 168 similar to that previously described along the borders of aperture sheet 10. A pair of elongate slits 170, 172 extend through portions of the overlay sheet 160 adjacent to boundary notations 166, 168 respectively. A second, or L-shaped, member 180 is mounted on overlay 160. Element 180 has perpendicularly disposed legs 182, 184. As is shown, these extend through slits 170, 172. At the vertex or intersection of legs 182, 184 is means to aid in mounting element 180 on sheet 160 while allowing movement therebetween. This is better illustrated in Figs. 19 and 20. In Fig. 19, it can be seen that an enlarged portion 186 projects upwardly from the juncture of legs 182, 184. A holding portion 188, which is wider than slot 162, underlies the marginal edges of sheet 160 which bound slot 162. An elongate neck portion 190 joins portions 186, 188 and fits slidably in slot 162.
This construction allows the L-shaped element 180 to slide relative to sheet 160 along slot 162 in the direction indicated by dual-headed arrow 194. Movement of element 180 relative to element 160 varies the size of the aperture defined between slits 170, 172 and the inner edges of legs 182, 184, while maintaining a preselected height-to-width ratio, or proportionality, of the aperture.
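Because the slits and legs are coupled through slot 162, only one degree of freedom exists, so the height always tracks the width. A one-line sketch of this relationship, assuming a 3:4 height-to-width ratio such as that of a standard video screen:

```python
ASPECT = 3 / 4   # assumed height-to-width ratio (standard video screen)

def aperture_dimensions(width: float) -> tuple[float, float]:
    """Sliding element 180 sets the width; the height follows."""
    return width, width * ASPECT
```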
Arrayed along leg 184 of element 180 are a series of numerical indicia which indicate the size of the aperture at any given position of element 180 relative to overlay sheet 160. For example, in
Fig. 17, the aperture size would be noted as 24. In contrast, in Fig. 18 the aperture size has been reduced and would be indicated as an aperture size 14. Use of the variable aperture size framer illustrated in Figs. 17-19 is similar to that previously described for the other forms of framer.
As has been mentioned previously, the framer apparatus and method set forth herein is useful, not only in the video production application described herein, but also in a variety of other areas. Such could include, but need not be limited to, commercial art, business graphics, and any and all other areas in which one wishes to select and/or segregate a portion of a work, locate its position, and record or communicate these for subsequent use. In addition to communicating size and position information concerning selected portions of art work, such as done by the frame tickets or cards of Figs. 8a and 8b, a mechanism is provided by which a user can control additional parameters of the final video editing process.
Transition effects in addition to those mentioned above may be designated by a user. For example, as shown in Fig. 23, one or more topic zone cards or tickets, such as illustrated at 200 and 202, may be marked to record data concerning these parameters. Each of these cards includes a block 204 which contains plural markable boxes for indicating the beginning of each topic zone. Typically, a topic zone is comprised of a series of photographs or other art work related to a common topic, such as concerning a user's birthday. However, a topic zone may consist of various images from a single photograph. For convenience, the end of a topic zone automatically occurs at the beginning of the next topic zone. Therefore, the end of a topic zone need not be separately indicated. Also, the "clip" box of a topic zone card is marked when a topic zone begins with a particular motion or video clip, as opposed to beginning with a photograph, slide or other still art. For purposes of discussion, assume that topic zone one begins at the user's first photograph. This is designated by marking the zero box in the left-hand column of block 204 of card 200 and the one box in the right-hand column of this block. Also, assume that topic zone two begins at the user's fifth photograph. This is indicated by marking the zero box in the left-hand column of block 204 of card 202 and the five box in the right-hand column of this block. Each of the topic zone cards also includes a series of markable "transition" boxes 206 labelled as "medley", "wipes", "dissolves" and "cut". These boxes are marked by the user to designate and control the manner in which the video processing equipment will shift from one photograph or piece of art work to the next within a topic zone. Each of these transition effects is well known in the video editing art and is more particularly described as follows. The "wipes" transition involves revealing a second video image through a moving geometric pattern or curtain as the prior image is in effect replaced on the video screen by the pattern. The "dissolves" transition involves fading out the first image as the next image is faded in. This provides a soft transition effect. The "cuts" transition involves replacing the first image by a second image in a rapid blinking motion. This accentuates action videos, such as those of sporting events, and is the transition most often used by professionals in broadcast television and motion pictures. Finally, the "medley" box is marked to designate that a collage of the various other transition effects is to be used in shifting from image to image. This transition approach is automatically selected by the video edit control program if none of the other boxes are marked. Other transitions (i.e., digital video effects) may also be used by including appropriate markable boxes and corresponding editing equipment in the video processing system.
In the example under discussion, assume that the "wipes" box of card 200 is marked and the
"dissolves" box of card 202 is marked. In this case, transitions between images one through four of the first topic zone are accomplished by the "wipes" approach. In comparison, transitions beginning with the fifth image and topic zone two are accomplished by the "dissolves" approach. The topic zone cards 200, 202 also include a sound library section 208 which includes markable boxes that are filled in by a user to designate audio for inclusion as a background in the associated topic zones. A user is provided with a catalog of audio selections ranging from medleys to individual songs. By marking the appropriate boxes corresponding to the identity of songs in the catalog, the user selects audio for the topic zone. For example, referring to the portion of an exemplary music library catalogue page shown in Fig. 26a, selection TOO comprise the song "Happy Birthday".
The topic zone cards 200, 202 additionally include personal sound source markable boxes indicated generally at 210. These boxes are marked when a user desires the inclusion of his or her own audio in the topic zone. Typically, this audio information is provided on audio cassette tape and is labelled with the same designation that is marked in block 210 for the associated topic zone. During video processing, the appropriate personal audio material is matched to the corresponding topic zone. As explained below, both a sound library and personal sound source selection may be made for a given topic zone. In this case, the two audio sources would overlay one another and be included in the video presentation as the video information for the topic zone is recorded.
Sound effect cards 212, 214, such as shown in Fig. 24, may also be included for use in recording the positions in the video presentation at which sound effects are desired. Each of these cards 212, 214 is similar. Therefore, only card 212 will be described in detail. Card 212 includes a markable begin sound effect block 216 and an end sound block 218. The boxes of block 216 are used to mark the number of the motion picture material, photograph, or slide at which a particular sound effect is to start in the video presentation. Similarly, the boxes of block 218 are marked to designate the number of the motion picture material, photograph, slide or other art at which the sound effect is to end. For example, to start a sound effect at the first photograph, the zero box in the left-hand column of block 216 is marked and the one box in the right-hand column of this block is marked. To end the sound effect at the sixth photograph, the zero markable box in the left-hand column of block 218 is marked and the six markable box in the right-hand column of block 218 is marked. The user is provided with a catalog identifying possible sound effects for inclusion in the video presentation. Markable boxes in a block 220 are provided with indicia corresponding to the indicia in the catalog for the sound effects. With reference to the exemplary portion of a sound effects catalogue page shown in Fig. 26b, markable sound effect box A0 is marked to select the sound effect of discharging fireworks.
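Each sound effect card thus defines a span over the numbered source material. A minimal sketch, which assumes the begin and end photograph numbers are both inclusive (the text does not spell this out):

```python
# Sound effect card 212 from the example: effect A0 (fireworks),
# beginning at the first photograph and ending at the sixth.
effect = {"code": "A0", "begin": 1, "end": 6}

def effect_active(photo: int, fx=effect) -> bool:
    """Whether the effect should be playing at a given photograph
    (inclusive endpoints are an assumption of this sketch)."""
    return fx["begin"] <= photo <= fx["end"]
```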
With topic zone and sound effect cards as shown in Figs. 23 and 24, users may designate combinations of audio from a sound library, personal sound source material, and sound effects for inclusion in a video presentation. In addition, transitions between images of the video presentation may also be selected and recorded on the topic zone cards.
Clip tickets 221, 222, such as shown in Fig. 25, may also be provided for use in recording information concerning motion picture and video material provided by the user, assuming such material is to be included in the video presentation. Typically, a set of such cards, identified by a serial number, is provided for each video tape. To match the tape to the corresponding clip tickets, a sticker or label bearing the serial number is affixed to the tape. Each of the clip cards 221, 222 is identical; therefore, only clip card 221 will be described in detail. Referring to Fig. 25, clip card 221 includes a markable begin block 223 for identifying the location, in the motion picture or video material supplied by a user, at which the segment to be included in the video presentation commences. A markable end block 224 is provided for identifying the end of this segment. To generate this information, the user may simply monitor the counter on a conventional video player and note the count at which the desired segment commences. This count is recorded in markable block 223. Similarly, the count at the end of the segment may be recorded in block 224. This information is then used during video processing to locate the segment of video to be included in the video presentation. Motion picture material may be transferred by the user to video and, when played as described above, the start and stop counts for the desired segment may be located for marking on clip ticket 221. A second segment of video from the tape can be noted on clip ticket 222, and so on with additional clip tickets.
Alternately, a user may first send motion picture or video material to the processing station described below. This material is then duplicated on a video tape which is marked in a conventional manner with timing information which is displayed on a television or other screen when the duplicate tape is played. The duplicate tape is then returned to the user, who then notes the precise starting and stopping points of the desired segment from the displayed timing information. These points are recorded on clip ticket 221. Thereafter, the video material and clip tickets are returned to the processing station. During processing, a video player runs the tape to the designated starting location in a conventional manner. At the appropriate time, the designated segment of video is then included in the video presentation. Clip ticket 221 also includes a "slow-motion" markable box 225. When this box is marked by a user, the identified motion picture or video segment is played at a slow speed at the time it is incorporated into the finished video presentation. In addition, a "freeze frame" markable box 226 is also included on the clip ticket. If this box is marked, a single frame of video, identified in block 223 of the clip ticket, is designated for inclusion in the finished video presentation. To precisely identify the selected frame, the above described precision timing approach is used, as tape counters are typically not accurate enough to precisely identify a desired frame.
The clip tickets also include a still art designation section 227, usable like the section 70 of the Figs. 8a and 8b frame tickets to designate still art for inclusion in the video presentation. Caption blocks 228, 229 are also provided on the clip tickets for use in recording captions for inclusion in the finished video presentation along with the identified clip. These captions are processed in the same manner as the captions recorded in blocks 66, 76 of the Figs. 8a and 8b frame tickets.
Typically, a user is provided with a kit containing instructions and a number of framers of various sizes, such as shown in Figs. 11 through 13. In addition, the kit may also include specialty framers, such as shown in Fig. 22. Frame tickets, cards or slips, such as shown in Figs. 8a and 8b, are also included in the kit. Moreover, the kit includes a number of topic zone cards and sound effect cards, such as shown in Figs. 23 and 24. If inclusion of user supplied motion picture or video into a video presentation is an available option, clip tickets are typically also included in the kit. The instructions in the kit typically include a catalog or listing which identifies a number of still frame images, credit slide images and other art work. These still frame images are selected by the user for inclusion into the video presentation through use of the still frame selection portion 70 of the Figs. 8a and 8b frame tickets. For example, with reference to the portion of an exemplary still art catalog page shown in Fig. 26c, the still frame selection boxes are marked with the designation U12 to designate the inclusion of a picture of a birthday cake in the video presentation. The catalog also includes a list of selections available in the sound library and a list of sound effects as explained above. This audio material is selected and designated on the Figs. 23 and 24 topic zone and sound effect tickets. Using the illustrated framer and the data recording means, the user generates the basic instructions for controlling the editing of a video presentation.
Data Conversion - Edit Program Development or Assembly Stage
The data recorded by users on the frame tickets, topic zone tickets, sound effect tickets and clip tickets is of a form which does not resemble edit decision lists normally used in the video production industry. In the data conversion stage, the user generated decisions are assembled into a set of software instructions in a proper format for controlling the editing of the final video presentation from the user's instructions. Although other input formats may be used, the sound effect tickets, frame tickets, topic zone tickets and clip tickets are optically readable to facilitate the conversion process.
With reference to Fig. 1, the group of frame tickets 230, topic zone tickets 232, sound effect tickets 234 and clip tickets 235 from a particular user are optically scanned by an optical mark reader 236, such as from Scan-tron Corporation of Los Angeles, California. The output on line 238 from the scanner is fed to a personal computer 240, with conversion software which converts the raw data from the optical scanner into an edit control program. This conversion software is identified as APES (Automated Photo Editing Software) conversion software in Fig. 1. The resulting control program is identified as APES operating software 252 in Fig. 1, with components noted in this figure.
As pointed out above, user decisions for flat art are described by frame size and a set of characters which describe the position of the desired frame on the art. By defining additional characters, the user specifies the manner of transition from a first frame to new coordinates or a new frame size. In addition, the user specifies the sequence in which frames are to be included in the finished video presentation. As explained below, a pair of cameras are typically employed with each camera being mounted to a respective animation stand for movement in an X and Y direction. The cameras are spaced a fixed distance from tables which support the art work so that focus is not a factor. Instead, the cameras have respective lenses which are zoomed to change the size of the image selected from the art work. The position coordinates from the frame tickets 230 are converted by the APES conversion software in personal computer 240 into a format which can be used by the animation controller to move the animation stands and cameras mounted thereto. That is, the position on the art work is described by character strings of data set forth in blocks 46 and 50 in the Figs. 8a and 8b frame tickets. This data is converted to X and Y position coordinates of the animation stand. In addition, frame size, as defined by the simple number in block 42 on the frame tickets, is converted to a particular position of the zoom lens of the camera. Specifically, successive pieces of flat art work or slides are displayed in alternating order on the animation tables. For example, a first photograph may be placed on the first table underneath the first camera while a second photograph is placed on the second table under the second camera. A third photograph is then placed on the first table following the recording of images from the first photograph. By alternating back and forth between the two cameras, succeeding slides, photographs and the like are incorporated at the appropriate places into the video presentation. This is explained in greater detail below in connection with Fig. 3.
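A hypothetical rendering of this conversion step follows. The letter-to-inch scale, the treatment of a two-letter notation as a midpoint, and the zoom table are all invented for illustration; the actual APES conversion tables are not set out here.

```python
STEP = 0.5                           # assumed inches of travel per letter
ZOOM = {12: 1.0, 13: 1.4, 14: 2.0}   # assumed lens positions by frame size

def letters_to_offset(pair: str, origin: str) -> float:
    """Map a two-letter border notation (e.g. "AH") to a table offset,
    here taken as the midpoint of the two letter positions."""
    a, b = (ord(c) - ord(origin) for c in pair)
    return (a + b) / 2 * STEP

def stand_target(horizontal: str, vertical: str, frame_size: int):
    """X-Y stand position and lens setting for one frame ticket."""
    x = letters_to_offset(horizontal, "A")   # A-J along the lower border
    y = letters_to_offset(vertical, "K")     # K-T along the side border
    return x, y, ZOOM[frame_size]
```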
The APES conversion software assembles a sequence of instructions in order for the respective animation stands and camera lenses. This animation sequence is stored as an animation control program part of the APES operating software, as indicated at 252 in Fig. 1. Under the control of an edit controller as set forth below, the animation stands and lenses are stepped through the animation sequence during the production of the finished video presentation.
The APES conversion software also assembles a still image or frame control program which comprises a sequential list of still images from the library of stills (i.e. background images, credit slides, etc.) designated by the user on the various data tickets, such as frame tickets 230. This sequence of still images is also stored as a part of the APES operating software. Once organized into sequential form, the still frames are called up by the edit controller individually, as explained below, at the appropriate times for inclusion in the finished video presentation.
Assume that the user has designated, such as by marking frame ticket boxes 68 or 78, that certain captions on the frame tickets 230 are to be electronically generated for inclusion in the finished video presentation. In this case, the operator of the video processing system takes all of the frame tickets containing captions to be processed in this manner. Captions from these tickets are then entered in order into the APES operating software as stored character data. A character generator 242, such as a VP-1 character generator from Chyron Corporation of Melville, New York, may be used for this purpose. As this list of captions is generated, the operator can verify its accuracy on a screen 246, which forms a part of the character generator 242. The character generator is interfaced, as indicated by line 248, with the personal computer 240. The transition and other edit information from the user's data tickets is also converted by the APES conversion software into a software edit control program compatible with the video processing equipment. Audio identifying information from the topic zone tickets 232 and sound effect tickets 234 is converted into an audio decision list, which is used to assemble a continuous audio bed for the video presentation. This audio decision list is typically integrated into the edit control program. If caption boxes 66 and 76 of the frame tickets (see Figs. 8a and 8b) are marked and electronic insertion of captions is not specified, the APES conversion software recognizes that a still frame containing the graphics or caption will be subsequently generated during video processing. The existence of this still frame is recognized by the optical scanner and is added to the still frame list in the correct order with other still frames on the list. As explained below, these captions are retrieved from the still frame store apparatus and overlaid on video sources during production of the video presentation. Time information from the frame tickets is also monitored by the APES conversion software. From the above information, an edit decision list or edit control program is developed with actual running time. This decision list is used by an edit controller to control the various components of the video processing system to produce the finished video presentation.
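For illustration, one entry of such an edit decision list might carry the fields sketched below; real EDL formats differ, and these field names and values are hypothetical (the caption and audio selection echo earlier examples).

```python
# Hypothetical edit decision list entries with actual running time.
edl = [
    {"start": 0.0, "length": 6.0, "source": "camera_a",
     "transition": "wipes", "caption": "Super Mom", "audio": "T00"},
    {"start": 6.0, "length": 3.0, "source": "still_store",
     "transition": "dissolves", "caption": None, "audio": "T00"},
]

running_time = sum(entry["length"] for entry in edl)   # 9.0 seconds
```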
The resulting software comprises the automated photo editing system (APES) operating software indicated at 252 in Fig. 1. This operating software contains the edit decision list or edit control program, including the timing, transition and audio control information. The APES operating software also includes the stored animation control sequence, the stored still frame sequence and the stored character data. The APES operating software is thus generated off-line from video processing equipment. Therefore, generation of this software does not tie up the processing equipment. Because of this off-line preparation of the APES operating software, time delays associated with generating this program are eliminated from the actual video editing process. Therefore, video presentations may be prepared on a volume basis.
Video Presentation Completion Stage
With reference to Figs. 2 through 5, the video processing portion of the system will next be described. More specifically and with reference to the block diagram of Fig. 2, the video processing system 260 includes a personal computer 262 into which the APES operating software 252 is loaded. The animation control program or sequence is loaded from personal computer 262 to an animation controller 264 and the still frame sequence or program list is delivered from this personal computer to a still frame store apparatus 266. In addition, the edit decision list or edit control program is loaded from computer 262 into an edit controller 268. The edit controller is coupled to the animation controller, the still frame store and also to a destination tape recorder 277, which records the finished video presentation on a video tape or other media.
The animation controller 264 is coupled to an animation camera stand 270, and also to a lens 273 of a camera 272 mounted on the stand. The animation controller controls the positioning of the stand so as to place the camera over the desired portion of a photograph or other art work. The animation controller also controls the zooming of the lens 273 to select the size of the image captured by the camera. The camera is connected to a video switcher circuit 274 of a switching circuit 276, the switcher 274 also being connected to a destination tape recorder 277. In addition, the edit controller 268 and still frame store apparatus 266 are coupled to video switcher 274.
In general, edit controller 268 sends a "next" sequence instruction to animation controller 264 at times when camera 272 is to be shifted to capture a section or sections of a particular photograph or to a section of a different photograph. The captured video is delivered to the video switcher. In addition, if a particular still frame is required, a "next sequence" instruction is delivered from the edit controller to still frame store 266, causing the next still frame in the list to be delivered to the video switcher. The video switcher 274 is commercially available and, in response to instructions from edit controller 268, performs the desired transitions between the successive images in the sequence. The resulting video material is delivered from video switcher 274 to the destination tape recorder 277 for inclusion in the finished video presentation.
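The control flow just described can be summarized, very loosely, as a dispatch loop. The device objects and method names below are assumptions made for the sketch; the actual edit controller is a hardware device operating from the edit decision list.

```python
def run_edit(edl, animation, still_store, switcher, recorder):
    """Conceptual sketch of the edit controller's dispatch loop."""
    recorder.start()
    for entry in edl:
        if entry["source"] == "camera_a":
            animation.next_sequence()     # reposition stand and zoom lens
        elif entry["source"] == "still_store":
            still_store.next_frame()      # recall the next library still
        switcher.transition(entry["transition"])
        recorder.record_for(entry["length"])
    recorder.stop()
```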
One or more audio sources 278 may also be included in the processing system 260. These sources receive control signals from edit controller 268. In addition, the output of these sources is coupled via an audio mixer 280 of switching circuit 276 to the destination tape recorder 277. The edit controller 268 is connected to the audio mixer as well for controlling the audio mixer to mix audio from the various audio sources which are triggered to operate by the edit controller. As a result, audio information may be simultaneously included in the destination tape, along with video information from the video switcher. In the event the video processing system 260 is also to handle motion picture material, this material is typically first transferred to video. This transferred video source material, as well as other video source material, is played by one or more video sources 282 which are included in the system. Video sources 282 are coupled to and controlled by the edit controller. In addition, the video output of such sources is delivered by a line 284 to the video switcher 274 for incorporation into the destination tape. At the same time, audio from sources 282 is coupled via a line 286 to the audio mixer and thus to the destination tape. This is done in the event audio from the user supplied video material is to be included in the finished video presentation. The detailed operation of the video processing system 260 will become apparent with reference to the description of Figs. 3, 4 and 5, which proceeds as follows.
As previously explained, computer 262 is coupled to edit controller 268 for delivering the edit control program to the edit controller. In addition, computer 262 delivers the animation sequence program to animation controller 264 by way of a data bus or link 290. The still frame store control program is also delivered via this bus to the still frame store apparatus 266. Also, in the event electronically generated character data is to be processed, the stored character data 254 (Fig. 1) is delivered over bus 290 to a commercially available character generator inserter 292, such as from Chyron Corporation.
Frame tickets containing captions in caption boxes 66, 76 are then placed on animation tables 294a and 294b for filming by respective cameras 272a and 272b. Other caption blocks, for example from clip tickets 235, are also filmed as well. Each video frame containing a caption filmed by camera 272a is delivered via a line 296a to video switcher 274 and then via a line 298 to the still frame store apparatus 266. These caption frames are subsequently recalled at the appropriate time for inclusion in the finished video presentation. Similarly, each frame containing a caption filmed by camera 272b is delivered, via line 296b and the video switcher, to line 298 and then to the still frame store apparatus. The operator of the video processing system places the caption bearing data tickets on the respective animation tables 294a and 294b in the sequence they are to appear in the video presentation so that they are stored in the still frame store apparatus in the correct order. Filming of the captions may be manually controlled. Alternately, control signals, delivered via a line 300 to the animation controller 264 from the edit controller 268, may control this caption filming subprocess.
After all of these lists have been entered into the video processing control equipment, and the captions which are not to be electronically inserted have been filmed, video processing is ready to commence. The operator of the system cues the first two flat art images on the respective animation tables 294a and 294b and manually confirms that the rest of the system has been properly cued for start of the edit operation. If any device in the video processing system is not available when the edit controller calls for its operation as explained below, editing is stopped at that point. The system is then re-cued for a clean edit after the equipment is repaired or otherwise made available.
With reference to Figs. 4 and 5, each of the animation tables 294a and 294b is similar, so that only table 294a will be described in detail. In general, the animation tables are stationary. In addition, the animation tables define a fixed reference point 302 against which a corner of a photograph 304 or other flat art is placed. To facilitate positioning of the corner of photograph 304 against the reference point 302, right angularly disposed legs or stops 306, 308 project upwardly from the marginal edges of table 294a. The intersection of these legs and the table top defines the reference point 302. With this construction, photograph 304, as well as succeeding photographs, may be quickly placed at the desired position on the table against the stops 306, 308 so as to establish the reference point 302 at the corner of the photograph. The coordinates recorded in the blocks 46 and 50 on frame tickets (Figs. 8a and 8b) correspond to the coordinates relative to this reference point 302. Consequently, the desired portion of the photograph is readily determined once the flat art is placed on the animation table.
This construction of the animation tables facilitates the automatic editing process performed by system 260. That is, the position of each piece of still art on the animation table 294a is automatically known by the video processing system and does not have to be individually programmed in. In addition, by positioning all flat art work at corner 302, substantially constant lighting of the art work is provided. That is, lighting typically varies somewhat over the surface of an animation table. Consequently, by always positioning art relative to a fixed reference, greater constancy of lighting is achieved.
The animation table top 301 may comprise a projection screen. In this case, slides provided by the user are projected by a slide projector 310 and off a mirror 312 to the underside of screen 301. Of course, the slides can be reversed and projected directly onto the screen, without mirror 312. Also, the slides could be projected directly to the cameras. These slides are thus projected for capture by the associated video camera 272a in the same manner as photographs or other flat art work. Typically, the slide is projected to be the same size as the image projected by the user at the time coordinate data is generated as previously explained in connection with Fig. 27. Consequently, data pertaining to the slide on the user's frame tickets 230 (Fig. 1) correlates to the projected slide. This data is used, via the APES operating software, to capture the designated portion of the slide for inclusion in the finished video presentation. Although it would slow down the processing, the user's frame tickets could include a place for recording the size of the projected slide and the corresponding coordinates of the designated portion of the image relative to a corner of the projected slide. In this case, projector 310 would be adjusted to project an image of the same size as in the case when the user generated the coordinate data. Alternately, the coordinate data could be adjusted by computer 240 during generation of the APES operating software to compensate for differences between the size of the image from which the user obtained coordinate data and the image generated by projector 310.
Assume that the first two items of video material to be included in a video presentation comprise information from photographs 316 and 318 (Fig. 3). Edit controller 268, which may comprise a 205T Edit Control System from Convergence, Inc., sends a command via line 300 and a general purpose commercially available interface to the animation controller 264. Typically, animation controller 264 comprises a separate animation controller for each of the animation stands 270a and 270b. As a specific example, each of the animation controllers may comprise a model 2000 Animation Control System, with software, available from Cinetron Computer System Company of Norcross, Georgia. In addition, the animation stands 270a, 270b may comprise model AC20 animation control tables from Motion Tech Company of Reseda, California. The animation stands 270a, 270b are positioned above respective tables 294a, 294b for supporting cameras 272a, 272b. In response to the command from the edit controller, animation stand 270a is moved to position camera 272a at the desired X-Y position over the photograph 316. In addition, lens 273a is controlled to capture the desired size frame from photograph 316, as determined from the user generated basic data. The captured first image is delivered via line 296a to the video switcher 274, operating in response to control signals from edit controller 268 transmitted to the video switcher on a line 302. The video switcher may comprise a model 100N video switcher available from Grass Valley Group, of Grass Valley, California. This switcher is equipped with a serial interface which is also available from this company and couples the switcher to the edit controller. The video output from switcher 274 is delivered on a line 304 through the character generator inserter 292 and to a pair of outputs 306, 308 of the character generator.
The character generator output 306 is connected to a VHS recorder 310, such as a JVC Model BR-8600 VHS play/record apparatus. In addition, output 308 is connected to a three-quarter inch video recorder 312, such as a JVC model CR-850 LU three-quarter inch play/record apparatus. Edit controller 268 is connected via interfaces and lines 314, 316 to the respective recorders 310, 312 so that the edit controller controls the starting and stopping of these recorders during the preparation of video presentations. Of course, if only VHS tapes are desired, recorder 312 is eliminated. However, typically if other formats are desired, a three-quarter inch destination tape is recorded by recorder 312 and then transferred to beta, eight millimeter or other formats. Other recorder devices may be utilized as they become technically available. Assume that, as the first video image from photograph 316 is being transmitted from the video switcher to the destination recorders 310, 312, an electronically inserted caption is desired. The edit control decision list in edit controller 268 sends a next caption signal via a line 320 to the character generator 292. In response to this signal, the next caption stored in the character generator inserter 292 is overlaid on the video which enters the character generator inserter on line 304 from the video switcher. In other words, the caption is placed over the video information obtained from photograph 316. Thus, the character generator 292 adds the next stored caption to the video when the edit controller calls for a caption from this device. Character generator 292 also includes a conventional shadow generator for automatically adding a shadow or depth to the caption to make it more distinctive when displayed on a video screen. Although shown separately in Fig. 3, character generator 292 may also be incorporated into the video switcher 274. Transitions between selected portions of the same photograph 316 are controlled by edit controller 268, which sends transition control signals on line 300 to the animation controller 264. These signals cause camera 272a to slow-zoom, still frame and otherwise make transitions between these images. For still frame transitions, recorders 310 and 312 are stopped as the camera 272a is shifted from focusing on one section of photograph 316 to another section of this photograph. More often than not, recorders 310 and 312 continue operating as the photograph is panned from one location to the next (described above in connection with Fig. 21) or a slow-zoom transition (also described above) is made between the images. Edit controller 268 controls the time during which camera 272a captures a particular section of photograph 316, as determined by the edit decision list obtained from user timing information taken from the frame tickets 230.
Eventually, all of the user designated information from photograph 316 is included in the video presentation. Assume that the next user designated information is to be taken from photograph 318. In this case, under the control of edit controller 268, animation controller 264 shifts animation stand 270b and controls lens 273b to capture the desired information from photograph 318. This information is transferred via line 296b to the switcher and then to the destination recorders 310, 312 as previously described. Edit controller 268 controls video switcher 274 to achieve the desired transition between photographs 316 and 318 on the finished video presentation. In general, to accomplish a desired transition from a first selection of video from camera 272a to a second selection of video from camera 272b, several seconds before the scheduled end of the first selection, the second selection starts. At the desired end of the first selection of video, the video switcher 274 is controlled to accomplish the desired transition between the two selections. For example, switcher 274 is designed to operate, in response to signals from edit controller 268 via line 302, to gradually fade one video input (i.e. the first selection) out while the other video input (i.e. the second selection) is faded in. This accomplishes the previously described "dissolve" transition. At the end of the transition, the first video selection is stopped. The operator of the video processing system then removes the first photograph 316 and places a third image on animation table 294a for subsequent filming. A tally light (not shown) alerts the operator to the status of the system so that a particular photograph or image is not removed while it is still being filmed. This process continues until all of the images have been captured on the destination tape.
To accomplish a "cut" transition, the input from line 296a is abruptly cut off while the input from 296b is immediately started. This in essence replaces the first selection with the second selection in a blink. The "wipe" transition is also accomplished by the video switcher 274 which in effect gradually blocks the video input from line 296a while gradually receiving the input pattern or from line 296. This in effect reveals, by a geometric pattern or curtain, the image from line 296b on the screen while the image from line 296a is removed from the screen. Finally, the "medley" transition involves a mixture of the other transition effects. Of course, by altering the video switcher 274 and adding additional transition generating components, additional modes of transition between images may be provided.
Assume that the next image to be included in the video presentation comprises an image from the still frame library stored in the still frame store apparatus. This apparatus may comprise an 842 dual channel frame store device from Abekas Company of Foster City, California, together with a model WD350 350 FRM/700 FLD disk drive, also from this source. Edit controller 268 is connected via an interface and a line 324 to the still frame store apparatus 266. When a still frame of video is required, edit controller 268 triggers the still frame store apparatus 266 to advance to the next image in the still frame list stored in apparatus 266. This causes a transmission of this image via line 298 to the switcher, which then delivers the image via line 304 to the recorder devices 310, 312. Assume also that it is desired to include a caption, other than from character generator inserter 292, in the final video presentation. Edit controller 268 calls the next frame from the still frame store apparatus 266. This next frame comprises the desired caption because the still frame list in the still frame apparatus is organized in the correct order. The frame containing the caption is transmitted via a line 326 to the video switcher 274. Video switcher 274 in turn overlays the video inputs from channels 298 and 326 to position the caption over the video. To provide depth to captions obtained from the still frame store apparatus, video switcher 274 may include a commercially available borderline generator, for example from Grass Valley Group, for adding a shadow to the captions in a known manner. By providing two video input lines 298, 326 from the still frame apparatus to the video switcher, the video switcher has the capability, under control of edit controller 268, to cross fade and otherwise make transitions between two frames of video from apparatus 266. As another example of a use of the still frame store apparatus 266, overlays such as the gorilla depicted in the Fig. 22 framer may be called from the still frame store for combining with video on one of the input lines 296a or 296b. Video switcher 274 is operable, in response to edit controller 268, to provide the same modes of transitions between information from the still frame store apparatus and other video sources as between video on lines 296a and 296b.
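The ordered still frame list with its two output channels might be modelled as below. The class name, method names and placeholder frame contents are assumptions for illustration, not the interface of the Abekas apparatus:

```python
class StillFrameStore:
    """Sketch of an ordered still-frame list advanced by the edit controller.

    Two output channels mirror the two lines (298 and 326) into the
    switcher, so captions can be overlaid on video and two frames can be
    cross-faded. Frame contents are placeholder strings here.
    """
    def __init__(self, frames):
        self._frames = list(frames)   # pre-sorted into presentation order
        self._next = 0
        self.channel_a = None         # feeds switcher input 298
        self.channel_b = None         # feeds switcher input 326

    def advance(self, channel="a"):
        # Edit controller trigger: put the next listed frame on a channel.
        frame = self._frames[self._next]
        self._next += 1
        if channel == "a":
            self.channel_a = frame
        else:
            self.channel_b = frame
        return frame

store = StillFrameStore(["title card", "caption: OUR TRIP", "gorilla overlay"])
store.advance("a")            # video frame to the switcher
store.advance("b")            # caption frame, overlaid by the switcher
print(store.channel_a, "|", store.channel_b)
```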
Switching circuit 276 includes audio mixer 280, which may comprise a model AMX-100 stereo audio mixer circuit from Grass Valley Group. This audio mixer delivers audio from a variety of audio sources, via lines 330 and 332, to the respective recorders 310, 312. Consequently, audio may be recorded simultaneously with the video on the destination tape. Edit controller 268 is coupled via an interface and line 334 to the audio mixer for controlling the delivery of audio information to the recorders. In the illustrated embodiment, three audio sources are included in the video processing system 260. These include first and second compact disk audio players 278a, 278b, such as model NCD-600 multi-CD players from Nikko Company of Japan. Outputs 336a and 336b from the respective audio players 278a, 278b are connected to audio mixer circuit 280. In addition, edit controller 268 is connected to players 278a and 278b by interfaces and respective lines 338a and 338b. An audio cassette tape player 278c also comprises one of the audio sources. Cassette player 278c may comprise a computer controllable audio cassette deck from Revox Company of Nashville, Tennessee. The output of cassette player 278c is connected via an interface and a line 336c to the audio mixer. Edit controller 268 is connected by an interface and a control line 338c to the cassette player.
Audio players 278a and 278b each contain a full library of audio selections and sound effects corresponding to the audio selections identified in the topic zone and sound effect cards of Figs. 23 and 24.
From the edit decision list stored in edit controller 268, the edit controller monitors the audio which is to be included in the video presentation. For example, assume a first audio selection is to be used in connection with a particular series of images from photographs 316 and 318. Edit controller 268 cues audio player 278a to the next audio selection and then triggers this audio player to play the selection at the appropriate time. The audio material is then delivered via audio mixer 280 to the recorder for inclusion in the video presentation. In addition, audio player 278b may be cued to the next audio selection, whether it be additional background music or another series of sound effects, as audio player 278a is operated. Audio mixer 280 may mix background audio from audio player 278a or 278b and sound effects from the other audio player for inclusion in the destination tape. Assuming personal audio information supplied by a user is to be included in the tape, cassette tape player 278c is used for this purpose. Specifically, edit controller 268 triggers this cassette player to play the personal audio information at the appropriate time, as found in the edit decision list generated from the basic user supplied data. Audio mixer 280 then includes the personal audio information with any other audio information from other audio sources. This audio is combined with the video being recorded at recorders 310, 312.
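A sketch of this cue-then-trigger audio handling, under the assumption of hypothetical class and method names (the actual players are cued over their interface lines and combined in the hardware mixer):

```python
class AudioPlayer:
    """Cue-then-trigger model of the audio players described above."""
    def __init__(self, name, selections):
        self.name = name
        self._selections = list(selections)   # library order = edit list order
        self._cued = None

    def cue_next(self):
        self._cued = self._selections.pop(0)  # advance to the next selection

    def play(self):
        return f"{self.name}: {self._cued}"

def mix(*tracks):
    # The mixer combines background music, sound effects and any personal
    # audio into the signal recorded alongside the video.
    return " + ".join(t for t in tracks if t)

background = AudioPlayer("player 278a", ["background theme"])
effects = AudioPlayer("player 278b", ["crowd applause"])
background.cue_next()
effects.cue_next()
print("to recorders:", mix(background.play(), effects.play()))
```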
The illustrated system 260 also includes video sources 282 for processing video tapes or motion pictures transferred to video and supplied by the user. Although other computer controllable sources may be utilized, a VHS video player 282a and a three-quarter inch video player 282b are provided. Source 282a may be the same type of equipment as recorder 310. Typically, source 282b comprises a model BVU-820 three-quarter inch play/recorder (slow-mo) from Sony Company. This source has the capability of playing video tape in slow motion as desired (i.e., if clip ticket box 225 of Fig. 25 is marked). In addition, a beta format video player source 282c is also provided, such as a model SLO-1800 Betamax play/recorder from Sony. The video outputs 284a, 284b and 284c from these video sources are fed to a time base correction circuit 339 and then to respective inputs of video switcher 274. Time base correction circuit 339 may comprise model BVT-820 time base correctors from Sony Company. Such circuits compensate for stretching of video tape and other non-uniformities in video output from the video sources. Therefore, when delivered to video switcher 274, the video from these sources arrives at the appropriate time for inclusion in the destination tape. Edit controller 268 is coupled to the video sources by interfaces and respective lines 340a, 340b and 340c for controlling the playing of video from these sources. In addition, the audio outputs from these video sources are delivered by respective lines 286a, 286b and 286c to audio mixer 280. Therefore, if audio from this material is to be included in the destination tape, this information is passed to audio mixer 280 and then to the tape.
The use of video sources enables a user to include segments of motion picture or video clips in the finished video presentation. Clip tickets (Fig. 25) are used as explained above to designate the beginning and end of the segment to be transferred to the video presentation. Therefore, by selecting segments for inclusion into the video presentation, the user is not forced to dump undesired material from a motion picture or video tape onto the destination tape.
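In effect, segment selection from a clip ticket reduces to keeping only the frames between the marked beginning and ending locations; a minimal sketch with placeholder frame data:

```python
def select_segment(source_frames, start: int, end: int):
    """Keep only the user-designated segment of a source tape.

    start and end stand in for the beginning and ending locations marked
    on a clip ticket, so undesired material is never dumped onto the
    destination tape. Frames are placeholder strings here.
    """
    return source_frames[start:end]

tape = [f"frame {i}" for i in range(100)]
segment = select_segment(tape, start=40, end=55)
print(len(segment), "frames transferred:", segment[0], "...", segment[-1])
```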
A number of audio and video monitors 350 are provided in the system 260 for monitoring the performance of various components of the system.
Also, the system includes a synchronization generator 352, such as a model 1410R Option 4 sync generator from Tektronix Company of Beaverton, Oregon. Sync generator 352 provides a synchronization ("S") output at 354 for synchronizing video processing components of the video processing system. Specifically, output 354 is coupled to the edit controller 268, the video switcher 274, the time base circuits 339, the still frame store apparatus 266, the cameras 272a, 272b and the character generator 292, as shown, for synchronization purposes. The above-described video processing system provides a unique way of producing video presentations from user generated basic editing instructions. In this system, the illustrated components are operated from the same data base or APES operating software generated off-line from system 260. In its basic form, the system involves the control of an animation stand, a still frame store source and various audio sources from a single edit controller. In addition, simultaneous animation and editing of video presentations is accomplished by system 260. That is, during the editing process, the video output from cameras 272a, 272b is transferred on-line, under the control of the edit controller, to a destination tape. In addition, audio, video, captions and animation are accomplished in one pass on essentially a real time basis.
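How a common synchronization output permits such one-pass assembly can be sketched with a frame counter standing in for the sync pulses; the event times, descriptions and printed output are illustrative assumptions:

```python
def one_pass_assembly(edl_events, frame_rate: int = 30):
    """Single-pass assembly paced by a common synchronization signal.

    Because every component is slaved to the same sync output, a frame
    counter suffices to decide which scheduled event fires on which
    frame. Event times and descriptions are illustrative only.
    """
    pending = sorted(edl_events)            # (time_s, description) pairs
    frame = 0
    while pending:
        tape_time = frame / frame_rate      # tape time at this sync pulse
        while pending and pending[0][0] <= tape_time:
            print(f"frame {frame:4d}: {pending.pop(0)[1]}")
        frame += 1

one_pass_assembly([(0.0, "roll recorders 310, 312"),
                   (0.1, "caption in"),
                   (0.5, "begin dissolve to second camera")])
```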
Having illustrated and described the principles of my invention with reference to several preferred embodiments, it will be apparent to those persons skilled in the art that the illustrated embodiment may be modified in numerous respects without departing from the principles of my invention. For example, in its most basic form, the system is used to provide video presentations comprised of information taken from flat art such as photographs and slides, including transitions therebetween. In this case, the video and audio sources, the still frame apparatus and the character generator may be eliminated. Various components are then added to enhance the capability of the system, with the system depicted in Fig. 3 providing substantial versatility and video processing capabilities. Therefore, my invention encompasses all modifications of my invention, and equivalents, which fall within the scope of the following claims.

Claims

The embodiments of the invention in which an exclusive property or privilege is claimed are defined as follows:
1. A system for recording a visual sequence from at least one still visual image comprising: data storage means for storing, off-line from the other components of the system, visual image selection data which identifies the position and size of sections of one or more visual images to be included in the visual sequence and transition data which designates the mode of transition between the identified sections; visual sequence production means responsive to data stored by the data storage means for recording a visual sequence comprised of the identified sections of the one or more visual images and the transitions therebetween.
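The data recited in claim 1 (section position, section size and transition mode, stored off-line) might be represented as records of the following hypothetical shape; all field names are assumptions:

```python
from dataclasses import dataclass

@dataclass
class SectionSelection:
    image_id: str     # which still visual image
    x: float          # position of the section within the image
    y: float
    width: float      # size of the section
    height: float
    transition: str   # designated mode of transition to the next section

# Records of this shape would be stored off-line, then drive recording.
stored_data = [
    SectionSelection("photograph A", 0.20, 0.30, 0.40, 0.30, "dissolve"),
    SectionSelection("photograph B", 0.00, 0.00, 1.00, 1.00, "cut"),
]
print(stored_data[0])
```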
2. A system according to claim 1 in which the visual sequence production means comprises means for recording the visual sequence during a period of time which is substantially no longer than the duration of the resulting recorded visual sequence.
3. A system according to claim 1 in which the visual sequence production means includes means for converting the data from the data storage means into a visual sequence production control program off-line from other components of the system; the visual sequence production means including means responsive to the visual sequence production control program, and thus to the stored data, for recording the visual sequence, such last named means comprising: camera means for capturing the identified sections of the one or more visual images; animation stand means for supporting the camera means such that the camera means is focused on the one or more visual images; animation controller means for controlling the animation stand means to position the camera means to capture the identified sections of the one or more visual images; video switching means coupled to the camera means for receiving video captured by the camera means which corresponds to the identified sections of the one or more visual images; video recorder means coupled to the video switching means for receiving and recording video from the video switching means; and edit controller means for controlling the camera means, the animation controller means, the video switching means and the video recorder means to control the production of the video sequence.
4. A system according to claim 3 in which the camera means comprises first and second video cameras, the animation stand means comprises first and second animation stands each movable in two mutually perpendicular X and Y directions, the first animation stand means comprising means for supporting the first camera for movement in the X and Y directions at a fixed distance from still images to be captured by the first camera, the second animation stand means comprising means for supporting the second camera for movement in the X and Y directions at a fixed distance from still images to be captured by the second camera, each of the cameras having a lens with an adjustable focal length, the animation controller means comprising means for moving the respective animation stands in the X and Y directions to position the cameras to capture the identified sections of the one or more visual images, the edit controller means comprising means for controlling the focal length of the lenses of the first and second cameras to capture the identified size of the identified section of the one or more visual images, the edit controller means comprising means for independently controlling the first and second camera means, whereby the first camera means can capture one or more identified sections from a first visual image while the second camera means is positioned to capture one or more identified sections from a second visual image, and the second camera means can capture one or more identified sections from the second visual image while the first camera means is positioned to capture one or more identified sections from a third visual image, thereby increasing the speed at which visual information can be recorded from successive visual images.
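The alternation recited in claim 4 can be pictured as a simple interleaving of successive images across the two camera and stand pairs; a minimal sketch with placeholder names:

```python
def interleave_cameras(images):
    """Alternate successive images between the two stands so one camera
    films while the other is being repositioned, as claim 4 describes.
    Camera labels and image names are placeholders."""
    plan = {"first camera": [], "second camera": []}
    for index, image in enumerate(images):
        key = "first camera" if index % 2 == 0 else "second camera"
        plan[key].append(image)
    return plan

print(interleave_cameras(["image 1", "image 2", "image 3", "image 4"]))
```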
5. A system according to claim 4 in which the data storage means comprises means for storing transition data which designates the mode of transition from plural possible modes of transition, the edit controller means comprising means for controlling the camera means and video switching means to produce a visual sequence with the designated modes of transition between the identified sections of the one or more visual images.
6. A system according to claim 5 in which the plural possible modes of transition include still frame transitions between identified sections from the same visual image, slow-zoom transitions between identified sections, cut transitions between identified sections, dissolve transitions between identified sections and wipe transitions between identified sections.
7. A system according to claim 1 in which the data storage means comprises means for storing timing data which designates the duration of time that an identified visual image is to appear in the visual sequence.
8. A system according to claim 5 in which the data storage means comprises means for storing timing data which designates the duration of time that an identified visual image is to appear in the visual sequence.
9. A system according to claim 3 in which the data storage means comprises means for storing other visual information such as captions to be included in the visual sequence and for identifying the position in the visual sequence at which the visual information is to appear, the visual sequence production means including means for including such visual information at the identified position in the recorded visual sequence as the visual sequence is recorded.
10. A system according to claim 9 in which the means for including the visual information comprises still frame store means for storing frames of video comprising the visual information, the still frame store means being coupled to the video switching means and to the edit controller means for delivery of stored frames of visual information to the video switching means for inclusion in the recorded visual sequence under the control of the edit controller.
11. A system according to claim 9 including character generator means for entering the stored visual information into the visual sequence production control program, and in which the means for including the visual information comprises character generator inserter means coupled to the edit controller and to the video switching means for inserting the entered stored visual information into the recorded visual sequence under the control of the edit controller means.
12. A system according to claim 3 in which the visual sequence production means includes still frame store means containing a still frame library of stored video frames, the still frame store means having an output coupled to the video switching means for delivery of stored video frames to the video switching means for inclusion in the recorded video sequence, the data storage means including means for designating still frames from the still frame library for inclusion in the recorded visual sequence and the position at which the designated still frames are to be included in the recorded visual sequence, the edit controller means being coupled to the still frame store means for controlling the delivery of designated still frames to the video switching means for inclusion in the recorded visual sequence.
13. A system according to claim 3 in which the data storage means includes means for identifying the position at which motion video or motion picture material transferred to video is to be included in the recorded visual sequence, the visual sequence production means including at least one video player means coupled to the edit controller means and having a video output coupled to the video switching means, the video player means comprising means controlled by the edit controller means for playing video material to produce a video output to the video switching means for inclusion in the recorded visual sequence at the identified position in the sequence.
14. A system according to claim 13 in which the data storage means includes means for identifying the starting and ending location of a segment of motion video or motion picture material transferred to video which is to be included in the recorded visual sequence, the edit controller comprising means for controlling the video player means and switching means to include the identified segment in the recorded visual sequence.
15. A system according to claim 14 in which the data storage means comprises means for storing transition data which designates the mode of transition between the identified video segments and the other visual images included in the recorded visual sequence.
16. A system according to claim 4 in which the data storage means comprises means for storing audio identification data which identifies audio material to be included in the recorded visual sequence and the position at which such audio material is to be included, the visual sequence production means including at least one audio player means and an audio mixer means coupled to the audio player means and to the video recorder means, the video recorder means also comprising audio recorder means, the edit controller means being coupled to the audio player means and audio mixer means for controlling the audio player means and audio mixer means to play the identified audio material and deliver such audio material to the recorder means for inclusion in the visual sequence as the visual sequence is recorded.
17. A system according to claim 16 in which the visual sequence production means includes an audio library of sound effects and other audio material, the data storage means comprising means for identifying audio material from the audio library.
18. A system according to claim 17 including plural audio player means, at least one such player means comprising means for playing audio material from a source other than the audio library.
19. A system according to claim 1 in which the still visual images comprise photographs and slides.
20. A system according to claim 12 in which the data storage means comprises optically scannable data recording cards.
21. A system for recording a visual sequence from plural still visual images comprising: data storage means for storing, off-line from the other components of the system, visual image selection data which identifies the position and size of sections of the visual images which are to be included in the visual sequence and for storing transition data which designates the mode of transition between the identified sections, the data storage means also comprising means for storing timing data which designates the duration of time that the identified visual images are to appear in the visual sequence, the data storage means further comprising means for storing other visual information such as captions to be included in the visual sequence and for identifying the position in the visual sequence at which the stored visual information is to appear, the system including a still frame library of still frames for inclusion in the recorded visual sequence and the data storage means further including means for designating still frames from the still frame library and the position at which the designated still frames are to be included in the recorded visual sequence; visual sequence production means responsive to data stored by the data storage means for recording a visual sequence comprised of the identified sections of the visual images and the transitions therebetween together with the stored other visual information such as captions and the designated still frames from the still frame library, the visual sequence production means including means for converting the data from the data storage means into a visual sequence production control program off-line from the other components of the system; the visual sequence production means also including means responsive to the visual sequence production control program, and thus to the stored data, for recording the visual sequence, such last named means comprising camera means for capturing the identified sections of the visual images; animation stand means for supporting the camera means such that the camera means is focused on the visual images; animation controller means for controlling the animation stand means to position the camera means to capture the identified sections of the visual images; video switching means coupled to the camera means for receiving video captured by the camera means which corresponds to the identified sections of the visual images; video recorder means coupled to the video switching means for receiving and recording video from the video switching means; and edit controller means for controlling the camera means, the animation controller means, the video switching means and the video recorder means to control the production of the video sequence; the camera means comprising first and second video cameras, the animation stand means comprising first and second animation stands each movable in two mutually perpendicular X and Y directions, the first animation stand comprising means for supporting the first camera for movement in the X and Y directions at a fixed distance from still images to be captured by the first camera, the second animation stand comprising means for supporting the second camera for movement in the X and Y directions at a fixed distance from still images to be captured by the second camera, each of the cameras having a lens with an adjustable focal length, the animation controller means comprising means for moving the respective animation stands in the X and Y directions to position the cameras to capture the identified sections of the visual images, the edit controller means comprising means for controlling the focal length of the lenses of the first and second cameras to capture the identified size of the identified sections of the visual images, the edit controller means comprising means for independently controlling the first and second camera means, whereby the first camera means can capture one or more identified sections from a first visual image while the second camera means is positioned to capture one or more identified sections from a second visual image and the second camera means can capture one or more identified sections from the second visual image while the first camera means is positioned to capture one or more identified sections from a third visual image, thereby increasing the speed at which visual information can be recorded from successive visual images; the visual sequence production means including means for including the stored visual information such as captions at the identified positions in the recorded visual sequence as the visual sequence is recorded, the means for including the stored visual information comprising still frame store means for storing frames of video comprising the stored visual information, the still frame store means being coupled to the video switching means and to the edit controller means for delivery of stored frames of visual information to the video switching means for inclusion in the recorded visual sequence under the control of the edit controller, the system including character generator means for selectively entering the stored visual information into the visual sequence production control program, the means for including the stored visual information also comprising character generator inserter means coupled to the edit controller and to the video switching means for inserting the entered stored visual information into the recorded visual sequence under the control of the edit controller means; and the edit controller means comprising means for controlling the delivery of designated still frames to the video switching means for inclusion in the recorded visual sequence.
22. A system according to claim 21 in which the data storage means includes means for identifying the position at which motion video or motion picture material transferred to video is to be included in the recorded visual sequence, the visual sequence production means including at least one video player means coupled to the edit controller means and having a video output coupled to the video switching means, the video player means comprising means controlled by the edit controller means for playing video material to produce a video output to the video switching means for inclusion in the recorded visual sequence at the identified position in the sequence, the data storage means also including means for identifying the starting and ending location of a segment of motion video or motion picture material transferred to video which is to be included in the recorded visual sequence, the edit controller means comprising means for controlling the video player means and switching means to include the identified segment in the recorded visual sequence.
23. A system according to claim 22 in which the data storage means comprises means for storing audio data which identifies audio material to be included in the recorded visual sequence and the position at which such audio material is to be included, the visual sequence production means including at least one audio player means and an audio mixer means coupled to the audio player means and to the video recorder means, the video recorder means also comprising audio recorder means, the edit controller means being coupled to the audio player means and audio mixer means for controlling the audio player means and audio mixer means to play the identified audio material and deliver such audio material to the recorder means for inclusion in the visual sequence as the visual sequence is recorded, and the visual sequence production means including an audio library of sound effects and other audio material, the data storage means comprising means for identifying audio material from the audio library.
24. In a system for producing a video presentation from still visual images, a data storing device for off-line storing of data comprising: means for storing visual image selection data identifying the position and size of sections of visual images to be included in the video presentation; means for storing transition data designating the mode of transition between the identified sections of the visual images; means for storing caption data for inclusion in the video presentation and for storing the position at which the caption data is to be included; and means for storing timing information designating the duration that visual images appear in the video presentation.
25. In a system according to claim 24 which includes a still frame library of plural video frames, the data storing device including means for designating video frames from the still frame library for inclusion in the video presentation and for designating the position at which the still frames are to be included.
26. In a system according to claim 25 which is capable of including motion video material into the video presentation, the data storing device including means for identifying motion video material to be included in the video presentation, the position in the video presentation at which the motion video material is to be included, and the starting and stopping positions of segments of motion video material which are to be included.
27. In a system according to claim 26 which is capable of including audio material into the video presentation, the data storing device including means for identifying audio material to be included in the video presentation.
28. An apparatus for producing a frame of video from a visual image, comprising: a fixed surface including a fixed reference point and defining a fixed coordinate system relating to the fixed reference point; means for positioning the visual image on the fixed surface at a location which is known with respect to the fixed reference point; camera means for producing a frame of video from an image; coordinate inputting means for inputting a pair of coordinate data points, said pair of coordinate data points indicating the location of a particular portion of the visual image to be produced into a frame of video, said location being indicated relative to the fixed coordinate system of the fixed surface on which the visual image is positioned; positioning signal generation means for generating a camera positioning signal from said input coordinate data points, said camera positioning signal to be used for positioning the camera means towards the portion of the visual image indicated by the pair of coordinate data points; and camera positioning means responsive to the camera positioning signal for moving and positioning the camera means relative to the fixed surface and the visual image positioned thereon, whereby the camera means is positioned to produce a video frame of the portion of the visual image indicated by the input coordinate data points.
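A minimal sketch of the mapping recited in claim 28, from a pair of coordinate data points to a camera positioning signal expressed as offsets from the fixed reference point; the units and the offset representation are assumptions:

```python
def positioning_signal(x: float, y: float,
                       ref_x: float = 0.0, ref_y: float = 0.0):
    """Map a pair of coordinate data points to a camera positioning
    signal, expressed as X/Y offsets from the fixed reference point of
    the surface on which the visual image is positioned."""
    return {"dx": x - ref_x, "dy": y - ref_y}

# e.g. a portion of the image 12.5 units right and 8.0 units up from the corner
print(positioning_signal(12.5, 8.0))
```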
29. The apparatus of claim 28 in which the coordinate inputting means comprises optoelectronic detector means for detecting coordinate data points optically encoded on a data recording medium.
30. An apparatus for recording a visual sequence from still visual images comprising: audio source means for controllably providing an audio signal; audio dubbing means coupled to the controllable audio source means for controllably recording audio signals onto a recording medium; visual image source means for providing a visual image; camera means for selectively recording portions of the visual image on the recording medium; camera positioning means for controllably moving and positioning the camera means to record different portions of the visual image so as to enable the production of an animated video sequence; and system control means for producing an animated audio-visual sequence, said system control means including means for simultaneously: controlling the camera means and the camera positioning means so that the camera means records an animated video sequence on the recording medium; controlling the audio source means to provide the audio signal; and controlling the audio dubbing means to record the audio signal onto said recording medium; and in which the camera means includes means for producing and recording a sequence of video frames; the coordinate inputting means includes means for inputting a sequence of paired coordinate data points, each of said pairs of coordinate data points indicating the location of a particular portion of the visual image to be recorded by the camera means relative to the fixed coordinate system; and the positioning signal generation means includes means for generating a camera positioning signal from said sequence of input coordinate data points, whereby the camera means is moved so as to record an animated video sequence incorporating the portions of the visual image indicated by the input pairs of coordinate data points.
31. The apparatus of claim 30 which further includes means for recording a sequence of video frames during the movement of the camera means between the sequential positions corresponding to the input pairs of coordinate data points.
32. An apparatus for producing a panned transition between portions of a video sequence from a still visual image, comprising: a fixed surface including a fixed reference point and defining a fixed coordinate system relating to the fixed reference point; still visual image positioning means for positioning the still visual image on the fixed surface at a location which is known with respect to the fixed reference point; camera means for filming a sequence of video frames; data input means for receiving a sequence of data sets, each data set including a pair of coordinate data points indicating the location of a portion of the still visual image to be filmed, said location being indicated relative to the fixed coordinate system; camera positioning signal generation means for generating from the sequence of paired coordinate data points a camera positioning signal for positioning the camera means sequentially towards portions of the still visual image indicated by the sequence of paired coordinate data points; and camera positioning means responsive to the camera positioning signal for moving and positioning the camera means relative to the fixed surface and the visual image positioned thereon, whereby the camera means films a panning sequence of video images that includes areas of the still visual image intermediate the portions of the image indicated by the sequence of paired coordinate data points.
33. The apparatus of claim 32 in which each data set included in the sequence of data sets received by the input means further includes a frame size datum indicating the size of the portion of the still visual image to be filmed, and in which the camera means includes focal length varying means for varying the camera focal length; the apparatus further comprising: focal length signal generation means for generating from the sequence of frame size data a focal length signal used by the focal length varying means for varying the focal length of the camera means during the filming of the panning sequence.
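The panning and focal length behaviour recited in claims 32 and 33 might be approximated by interpolating between successive data sets; linear interpolation and the direct use of frame size as a focal length proxy are assumptions chosen only for illustration:

```python
def pan_and_zoom_path(data_sets, steps_between: int = 30):
    """Interpolate camera position and frame size between successive
    data sets, so intermediate areas of the image are filmed during the
    pan. Each data set is (x, y, frame_size)."""
    path = []
    for (x0, y0, s0), (x1, y1, s1) in zip(data_sets, data_sets[1:]):
        for step in range(steps_between):
            t = step / steps_between
            path.append((x0 + t * (x1 - x0),     # X position
                         y0 + t * (y1 - y0),     # Y position
                         s0 + t * (s1 - s0)))    # frame size drives focal length
    path.append(data_sets[-1])
    return path

for x, y, size in pan_and_zoom_path([(0, 0, 1.0), (10, 5, 0.5)], steps_between=4):
    print(f"x={x:.1f}  y={y:.1f}  frame_size={size:.2f}")
```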
34. A system for providing a set of control instructions for an automated video production system comprising: manual recording means for recording manually encoded optical representations of video production instructions at a location remote from the automated video production system; optical decoder means for reading said optical representations from the manual recording means and converting said representations into an electronic signal; and control instruction generation means for generating from said electronic signal a set of control instructions for the automated video production system.
35. A system according to claim 34 in which the manual recording means includes: means for recording an optical representation of the duration of time during which a visual image is to be included in the video production; and means for recording an optical representation of the location within the visual image which is to be included in the video production.
36. The system of claim 35 in which the manual recording means further includes means for recording an optical representation relating to a source of audio signals that is to be included in the video production.
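Finally, the decoding recited in claims 34 to 36 might be sketched as a mapping from marked card fields to control instructions; every field name below is an assumption standing in for the optically encoded representations:

```python
def decode_card(marks: dict):
    """Turn manually marked optical fields into control instructions.

    The dict stands in for the electronic signal produced by the
    optical decoder; all field names here are assumptions.
    """
    instructions = []
    if "duration_s" in marks:                # claim 35: time on screen
        instructions.append(("hold_image", marks["duration_s"]))
    if "section" in marks:                   # claim 35: location within the image
        instructions.append(("frame_section", *marks["section"]))
    if "audio_selection" in marks:           # claim 36: audio source
        instructions.append(("cue_audio", marks["audio_selection"]))
    return instructions

print(decode_card({"duration_s": 6,
                   "section": (0.1, 0.2, 0.5, 0.4),
                   "audio_selection": "library selection 12"}))
```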
PCT/US1987/001100 1986-05-12 1987-05-12 Video production system and method WO1987007108A1 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US862,130 1986-05-12
US06/862,130 US4704796A (en) 1986-05-12 1986-05-12 Framer
US91405886A 1986-10-01 1986-10-01
US914,058 1986-10-01
US07/046,721 US4827620A (en) 1986-05-12 1987-05-07 Framer
US046,721 1987-05-07

Publications (1)

Publication Number Publication Date
WO1987007108A1 true WO1987007108A1 (en) 1987-11-19

Family

ID=27366962

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/US1987/001015 WO1987006887A1 (en) 1986-05-12 1987-05-08 Framer
PCT/US1987/001100 WO1987007108A1 (en) 1986-05-12 1987-05-12 Video production system and method

Country Status (4)

Country Link
EP (2) EP0267265A4 (en)
JP (1) JPH01500024A (en)
AU (2) AU7432787A (en)
WO (2) WO1987006887A1 (en)

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
FR2678725A1 (en) * 1991-07-01 1993-01-08 Herbin Michel Frame for establishing orthogonality relationships

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
US855121A (en) * 1902-12-22 1907-05-28 Lewis M Mccormick Lantern-slide and mat therefor.
US1583355A (en) * 1920-03-19 1926-05-04 Ernest E Kottenbach Micrometer scale, operator's card, and layout sheet for printing and other work
US1799818A (en) * 1927-06-24 1931-04-07 Eastman Kodak Co Photographic printing and vignetting apparatus
US1976278A (en) * 1928-06-12 1934-10-09 Robert C Cock Proportional mask
US2246920A (en) * 1940-03-15 1941-06-24 Louis L Kromholz Variable photographic mask for maintaining constant proportions of a visible area
US2560937A (en) * 1948-05-06 1951-07-17 Magnus Res Inc Adjustable cropping angle device
US2702944A (en) * 1953-12-30 1955-03-01 Charles C Lane Adjustable drafting template
US4171573A (en) * 1978-08-23 1979-10-23 Anthony Picciotto Cropping guide
FR2503421B1 (en) * 1981-04-03 1985-10-25 Baliozian Mardick DEVICE FOR CALCULATING GEOMETRIC CHARACTERISTICS OF RECTANGLES FOR FRAMING VARIOUS OBJECTS

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
US4467371A (en) * 1980-07-16 1984-08-21 Nippon Television Network Corp. Method of pre-editing an original video tape in combination of scene numbers with a card and systems for carrying out the method
US4459622A (en) * 1982-04-01 1984-07-10 Sony Corporation Dynamic motion programmable sequencer
US4538188A (en) * 1982-12-22 1985-08-27 Montage Computer Corporation Video composition method and apparatus

Cited By (12)

Publication number Priority date Publication date Assignee Title
EP0488684A1 (en) * 1990-11-27 1992-06-03 Pioneer Electronic Corporation Image reproducing apparatus for musical accompaniment
EP0599607A1 (en) * 1992-11-27 1994-06-01 Matsushita Electric Industrial Co., Ltd. Video editing system
US5526132A (en) * 1992-11-27 1996-06-11 Matsushita Electric Industrial Co., Ltd. Image editing device with special effects using a recording medium in which two-channel reproduction and single-channel recording are simultaneously possible
US6452612B1 (en) 1998-12-18 2002-09-17 Parkervision, Inc. Real time video production system and method
US6952221B1 (en) 1998-12-18 2005-10-04 Thomson Licensing S.A. System and method for real time video production and distribution
US9123380B2 (en) 1998-12-18 2015-09-01 Gvbb Holdings S.A.R.L. Systems, methods, and computer program products for automated real-time execution of live inserts of repurposed stored content distribution, and multiple aspect ratio automated simulcast production
US9558786B2 (en) 1998-12-18 2017-01-31 Gvbb Holdings S.A.R.L. Systems, methods, and computer program products for multiple aspect ratio automated simulcast production
US9711180B2 (en) 1998-12-18 2017-07-18 Gvbb Holdings S.A.R.L. Systems, methods, and computer program products for automated real-time execution of live inserts of repurposed stored content distribution
US10056111B2 (en) 1998-12-18 2018-08-21 Gvbb Holdings S.A.R.L. Systems, methods, and computer program products for multiple aspect ratio automated simulcast production
US11109114B2 (en) 2001-04-18 2021-08-31 Grass Valley Canada Advertisement management method, system, and computer program product
US10360944B2 (en) 2002-05-09 2019-07-23 Gvbb Holdings S.A.R.L. Systems, methods, and computer program products for multiple aspect ratio automated simulcast production
US10546612B2 (en) 2002-05-09 2020-01-28 Gvbb Holdings S.A.R.L. Systems, methods, and computer program products for automated real-time execution of live inserts of repurposed stored content distribution

Also Published As

Publication number Publication date
EP0267943A1 (en) 1988-05-25
AU7358887A (en) 1987-12-01
WO1987006887A1 (en) 1987-11-19
EP0267265A4 (en) 1988-09-28
EP0267265A1 (en) 1988-05-18
JPH01500024A (en) 1989-01-12
AU7432787A (en) 1987-12-01

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE FR GB IT LU NL SE