WO2001056277A1 - Verifying reacquired video - Google Patents

Verifying reacquired video

Info

Publication number
WO2001056277A1
WO2001056277A1 PCT/US2001/002675 US0102675W WO0156277A1 WO 2001056277 A1 WO2001056277 A1 WO 2001056277A1 US 0102675 W US0102675 W US 0102675W WO 0156277 A1 WO0156277 A1 WO 0156277A1
Authority
WO
WIPO (PCT)
Prior art keywords
characteristic
video
video data
reacquired
data
Prior art date
Application number
PCT/US2001/002675
Other languages
French (fr)
Inventor
Michael F. Mintz
Original Assignee
Media 100 Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Media 100 Inc. filed Critical Media 100 Inc.
Priority to AU2001231190A priority Critical patent/AU2001231190A1/en
Publication of WO2001056277A1 publication Critical patent/WO2001056277A1/en

Classifications

    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/19Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
    • G11B27/28Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/034Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/79Processing of colour television signals in connection with recording
    • H04N9/80Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/804Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components
    • H04N9/8042Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components involving data reduction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/79Processing of colour television signals in connection with recording
    • H04N9/80Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/804Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components
    • H04N9/8042Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components involving data reduction
    • H04N9/8047Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components involving data reduction using transform coding

Definitions

  • the invention relates to verifying that reacquired video is the same as video initially acquired.
  • source video is digitized (unless already digitized) and stored for random access in generating, editing and playing a video program made up of a plurality of sequences of frames for different scenes.
  • the video sequences also referred to as "clips," can be pieced together from different sources.
  • the editing can involve adding effects to the video.
  • it is desired to reenter the source video, which might, for example, be implemented by an instruction sent to a tape operator to load a particular tape and play particular segments from the tape. Mistakes can arise in reacquiring source video, where, e.g., an operator can load the wrong tape or access the wrong portion of a tape.
  • the invention features, in general, a method and apparatus for verifying that reacquired video is the same as originally acquired video.
  • a characteristic of the originally acquired video data is determined and stored during acquiring.
  • the same characteristic is determined at the same time for the reacquired video data.
  • the characteristic for the reacquired video data is then compared with the characteristic for the originally acquired video data.
  • the acquired and reacquired video data can include luminance values, and the characteristic can be based on the luminance values or, e.g., pixel- to-pixel differences in luminance values.
  • the acquired and reacquired video data can include RGB intensity values, and the characteristic can be based on one or more of the RGB intensity values, or on one or more luminance values derived from the RGB intensity values.
  • the comparison of characteristics of the acquired and reacquired video data can involve calculating differences of pixel-to-pixel differences of luminance values for the acquired and reacquired video data.
  • the characteristic used for comparison can be based on portions of one or more video frames.
  • the video frames can have time or key codes that can be monitored during reacquiring, and the comparison of characteristics can begin when a particular time or key code is detected.
  • the invention features, in general, a method or apparatus for locating a particular video frame in a series of video frames.
  • a fine comparison of the same or a different characteristic is carried out on the video frames to identify the particular frame.
  • the characteristic used in the coarse comparison can, e.g., be auxiliary data, an average luminance value for a plurality of pixels in a frame, a characteristic of JPEG encoded data, or an average luminance value for a plurality of pixels in a frame determined from MPEG encoded data.
  • the characteristic used in the fine comparison can be luminance or one or more RGB intensity values.
  • Wavelet filtering can also be used to generate the characteristics for coarse and/or fine comparison.
  • Embodiments of the invention may include one or more of the following advantages.
  • the verified data can thus be received in a single pass and without the need to store and then delete incorrect data.
  • Fig. 1 is a block diagram of a video editing system.
  • Fig. 2 is a block diagram of video editing circuitry of the Fig. 1 system.
  • Fig. 3 is a diagram of a stream of video frames processed by the Fig. 1 system.
  • Fig. 4 is a diagram of a frame of the Fig. 3 stream of frames.
  • Fig. 5 is an illustration of a sum of the differences of the differences comparison carried out by the Fig. 1 system.
  • video editing system 40 is implemented by computer 42, video editing software 44 running on computer 42, and video editing expansion card 46 plugged into computer 42.
  • VTR 48 is a source of video segments that can be stored on disk or other mass storage 50 and randomly accessed by computer 42. The video can also be provided by other sources 60, e.g., network storage or backup storage.
  • Keyboard 52 and mouse 54 are user input devices, and monitor 56 is used to provide a video editing interface including display of a program being created. An additional monitor (not shown) can also be used to play the video program.
  • U.S. Patent Nos. 5,909,250; 5,506,932; 5,488,695; and 5,471,577, which are hereby incorporated by reference, describe video editing systems implemented on a computer.
  • video editing card 46 includes video input/output (I/O) 62, analog to digital converter (A/D) 64, processor 66, and buffer 68.
  • Video I/O 62 receives analog video as an input and provides it to A/D 64, which outputs digital video in the Y, Cr, Cb format.
  • These values are temporarily stored in buffer 68 and sent on to the host buffer of computer 42 for storage on disk 50.
  • the values may be compressed by components that are not shown in Fig. 2 before being transmitted to the host bus.
  • the data for the video is contained in frames 70 that are sent one after the other.
  • the system inputs video clips or "segments" 72 which each include a sequence of video frames 70.
  • each video frame 70 includes lines 74 of pixel values.
  • processor 66 receives the luminance (Y) values and computes an identification characteristic of the video sequence being input based upon the luminance values. Differences in luminance values can be used as the identification characteristic.
  • One technique involves determining the pixel-to-pixel differences in luminance values, and then comparing the pixel-to-pixel differences of the reacquired video with those of the original video. One comparison could involve taking the differences of the differences, summing them up for a series of pixels, and seeing if the summation is less than a threshold value. The determination of characteristics and comparisons of characteristics could take place at processor 66.
  • Fig. 5 shows an example where four pixels of the same line 74 of a frame are compared for the original and reacquired video.
  • the original video has the luminance values 16, 16, 216, 216 for these pixels, and the reacquired video has 17, 18, 218, 219 for these pixels.
  • the pixel-to-pixel differences for the original video are 16, 0, 200, 0, and the pixel-to-pixel differences for the reacquired video are 17, 1, 200, 1.
  • the absolute differences of the differences are 1, 1, 0, 1, and their sum is 3.
  • the threshold could be done on a percentage basis. Here, the maximum luminance value is 219, and the sum value, 3, was obtained over 4 pixels; thus the percentage difference is (3/4)/219, or 0.3%. If the threshold were 1%, the resulting 0.3% value would meet the threshold and indicate that the reacquired video is from the same source.
  • an RGB intensity value can be used for the comparison, or luminance values could be derived on the fly from the RGB intensity values.
  • a rough cut comparison can first be done to see if the reacquired video is getting close to the actual frame of the original, and then the sum of the differences of the differences comparison described with reference to Fig. 5 can be done to see if there is a match.
  • the rough cut approximation might involve the Fig. 5 type of comparison with a larger threshold value, e.g., 10%.
  • further techniques for rough cut comparisons to determine frames for a fine comparison include comparing compression coefficients (e.g., JPEG coefficients, converted to the frequency domain), comparing a displayable character, comparing auxiliary (nonimage) data, and comparing average luminance values (either calculated or from MPEG data) for an entire frame.
  • Wavelet filtering can also be used to generate the characteristics for coarse and/or fine comparison.
  • the video data will be buffered as the reacquired video is being received and evaluated.
  • the rough cut comparison can thus be used to decide to discard video prior to a match. Once a match has been met (by rough cut comparison), the data could continue to be buffered in buffer 68 while a fine cut comparison is then carried out on the pixels of interest.
  • the data will be stored in mass storage 50. In this way, the rough cut and fine comparisons can be carried out on the fly and prior to storage at the computer. Thus there is no need to store data until it has been verified, and there is no need to reinput the data after verification.
  • the luminance or other values being used for comparison could be filtered, with the comparisons being carried out on the filtered data, to remove noise (high frequencies) and the effects of overall gain changes (low frequencies).
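The single-pass flow described in the bullets above — discard frames until a rough-cut match, buffer while the fine comparison runs, and pass only verified data on to storage — can be sketched as follows. The signature layout (`avg_luma`, `diffs`), the helper names, and the tolerance values are illustrative assumptions, not taken from the patent.

```python
def avg_luma(frame):
    # Average luminance over a frame (here a flat list of Y values).
    return sum(frame) / len(frame)

def pixel_diffs(line):
    # Pixel-to-pixel luminance differences; the first "difference" is
    # the first value itself, matching the Fig. 5 worked example.
    prev, out = 0, []
    for v in line:
        out.append(v - prev)
        prev = v
    return out

def fine_match(frame, ref_diffs, tol_pct):
    # Sum of |differences of differences|, normalized per pixel
    # against peak luminance and compared to a tight threshold.
    d = pixel_diffs(frame)
    total = sum(abs(a - b) for a, b in zip(d, ref_diffs))
    peak = max(frame) or 1
    return 100.0 * (total / len(frame)) / peak < tol_pct

def verify_stream(frames, signature, coarse_tol=10.0, fine_tol=1.0):
    # Discard frames until a rough-cut match on average luminance,
    # then buffer while the fine comparison runs; only verified data
    # would go on to mass storage 50.
    buffered = []
    for frame in frames:
        if not buffered and abs(avg_luma(frame) - signature["avg_luma"]) > coarse_tol:
            continue  # not yet near the clip of interest: discard
        buffered.append(frame)
        if fine_match(frame, signature["diffs"], fine_tol):
            return buffered  # verified: hand off for storage
    return None  # sequence never matched
```

Nothing is written to mass storage until the fine comparison succeeds, which is the single-pass advantage the text claims.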

Abstract

Method and apparatus (66) for verifying that reacquired video is the same as originally acquired video. A characteristic of the originally acquired video data is determined and stored during acquiring (66). When the same sequence of video frames is reacquired, the same characteristic is determined at the same time for the reacquired video data (66). The characteristic for the reacquired video data is then compared with the characteristic of the originally acquired video data (66). Apparatus and method for locating a particular video frame in a series of video frames by first doing a coarse comparison of a characteristic of the video frames to determine that a frame being subjected to the coarse comparison is likely to be in the same clip as the particular video frame to be located, and then doing a fine comparison of the same or a different characteristic to identify the particular frame.

Description

VERIFYING REACQUIRED VIDEO
Background of the Invention
The invention relates to verifying that reacquired video is the same as video initially acquired. In video editing systems source video is digitized (unless already digitized) and stored for random access in generating, editing and playing a video program made up of a plurality of sequences of frames for different scenes. The video sequences, also referred to as "clips," can be pieced together from different sources. The editing can involve adding effects to the video. In some instances it is desired to reenter the source video, which might, for example, be implemented by an instruction sent to a tape operator to load a particular tape and play particular segments from the tape. Mistakes can arise in reacquiring source video, where, e.g., an operator can load the wrong tape or access the wrong portion of a tape.
Summary of the Invention
In one aspect, the invention features, in general, a method and apparatus for verifying that reacquired video is the same as originally acquired video. A characteristic of the originally acquired video data is determined and stored during acquiring. When the same sequence of video frames is reacquired, the same characteristic is determined at the same time for the reacquired video data. The characteristic for the reacquired video data is then compared with the characteristic for the originally acquired video data.
In preferred embodiments, the acquired and reacquired video data can include luminance values, and the characteristic can be based on the luminance values or, e.g., pixel- to-pixel differences in luminance values. Alternatively the acquired and reacquired video data can include RGB intensity values, and the characteristic can be based on one or more of the RGB intensity values, or on one or more luminance values derived from the RGB intensity values. The comparison of characteristics of the acquired and reacquired video data can involve calculating differences of pixel-to-pixel differences of luminance values for the acquired and reacquired video data. The characteristic used for comparison can be based on portions of one or more video frames. The video frames can have time or key codes that can be monitored during reacquiring, and the comparison of characteristics can begin when a particular time or key code is detected.
In another aspect, the invention features, in general, a method or apparatus for locating a particular video frame in a series of video frames. First a coarse comparison of a characteristic of the video frames is carried out to determine that a frame being subjected to the coarse comparison is likely to be in the same clip as the particular video frame to be located. Then a fine comparison of the same or a different characteristic is carried out on the video frames to identify the particular frame. In preferred embodiments, the characteristic used in the coarse comparison can, e.g., be auxiliary data, an average luminance value for a plurality of pixels in a frame, a characteristic of JPEG encoded data, or an average luminance value for a plurality of pixels in a frame determined from MPEG encoded data. The characteristic used in the fine comparison can be luminance or one or more RGB intensity values. Wavelet filtering can also be used to generate the characteristics for coarse and/or fine comparison.
Embodiments of the invention may include one or more of the following advantages. One can verify that video data being reacquired is the correct data at the time of reacquiring, and possibly even before one begins storing the data, as the data is being received. The verified data can thus be received in a single pass and without the need to store and then delete incorrect data.
Other features and advantages of the invention will be apparent from the following description of a preferred embodiment and from the claims.
Brief Description of the Drawing
Fig. 1 is a block diagram of a video editing system.
Fig. 2 is a block diagram of video editing circuitry of the Fig. 1 system.
Fig. 3 is a diagram of a stream of video frames processed by the Fig. 1 system.
Fig. 4 is a diagram of a frame of the Fig. 3 stream of frames.
Fig. 5 is an illustration of a sum of the differences of the differences comparison carried out by the Fig. 1 system.
Description of the Preferred Embodiments
Referring to Fig. 1, video editing system 40 is implemented by computer 42, video editing software 44 running on computer 42, and video editing expansion card 46 plugged into computer 42. VTR 48 is a source of video segments that can be stored on disk or other mass storage 50 and randomly accessed by computer 42. The video can also be provided by other sources 60, e.g., network storage or backup storage. Keyboard 52 and mouse 54 are user input devices, and monitor 56 is used to provide a video editing interface including display of a program being created. An additional monitor (not shown) can also be used to play the video program. U.S. Patent Nos. 5,909,250; 5,506,932; 5,488,695; and 5,471,577, which are hereby incorporated by reference, describe video editing systems implemented on a computer.
Referring to Fig. 2, video editing card 46 includes video input/output (I/O) 62, analog to digital converter (A/D) 64, processor 66, and buffer 68. Video I/O 62 receives analog video as an input and provides it to A/D 64, which outputs digital video in the Y, Cr, Cb format. These values are temporarily stored in buffer 68 and sent on to the host buffer of computer 42 for storage on disk 50. The values may be compressed by components that are not shown in Fig. 2 before being transmitted to the host bus. As is shown in Fig. 3, the data for the video is contained in frames 70 that are sent one after the other. The system inputs video clips or "segments" 72, each of which includes a sequence of video frames 70. As is shown in Fig. 4, each video frame 70 includes lines 74 of pixel values.
Returning to Fig. 2, processor 66 receives the luminance (Y) values and computes an identification characteristic of the video sequence being input based upon the luminance values. Differences in luminance values can be used as the identification characteristic. One technique involves determining the pixel-to-pixel differences in luminance values, and then comparing the pixel-to-pixel differences of the reacquired video with those of the original video. One comparison could involve taking the differences of the differences, summing them up for a series of pixels, and seeing if the summation is less than a threshold value. The determination of characteristics and comparisons of characteristics could take place at processor 66.
Fig. 5 shows an example where four pixels of the same line 74 of a frame are compared for the original and reacquired video. Thus the original video has the luminance values 16, 16, 216, 216 for these pixels, and the reacquired video has 17, 18, 218, 219 for these pixels. The pixel-to-pixel differences for the original video are 16, 0, 200, 0, and the pixel-to-pixel differences for the reacquired video are 17, 1, 200, 1. The absolute differences of the differences are 1, 1, 0, 1, and their sum is 3. The threshold could be done on a percentage basis. Here, the maximum luminance value is 219, and the sum value, 3, was obtained over 4 pixels; thus the percentage difference is (3/4)/219, or 0.3%. If the threshold were 1%, the resulting 0.3% value would meet the threshold and indicate that the reacquired video is from the same source.
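The Fig. 5 arithmetic can be sketched in Python. This is an illustrative reconstruction of the comparison described above, not code from the patent; the percentage convention (sum, divided by pixel count, divided by peak luminance) follows the worked numbers.

```python
def pixel_diffs(line):
    # Pixel-to-pixel differences; the first difference is the value
    # itself (an implied zero predecessor), matching Fig. 5.
    prev, out = 0, []
    for v in line:
        out.append(v - prev)
        prev = v
    return out

def same_source(original, reacquired, threshold_pct=1.0):
    # Sum the absolute differences of the differences, normalize per
    # pixel against the peak luminance, and test the percentage.
    d_orig = pixel_diffs(original)
    d_reacq = pixel_diffs(reacquired)
    total = sum(abs(a - b) for a, b in zip(d_orig, d_reacq))
    peak = max(max(original), max(reacquired))
    pct = 100.0 * (total / len(original)) / peak
    return pct < threshold_pct

# Fig. 5: diffs 16, 0, 200, 0 vs 17, 1, 200, 1; the absolute
# differences of the differences sum to 3, and (3/4)/219 is about
# 0.3%, under the 1% threshold.
print(same_source([16, 16, 216, 216], [17, 18, 218, 219]))  # True
```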
If the video involves RGB video (instead of YUV), an RGB intensity value can be used for the comparison, or luminance values could be derived on the fly from the RGB intensity values.
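Deriving luminance on the fly from RGB could use a standard weighting. The ITU-R BT.601 coefficients shown here are one common choice and an assumption on our part, since the patent does not name a formula.

```python
def luma_from_rgb(r, g, b):
    # ITU-R BT.601 luma weighting; the weights sum to 1.0, so a
    # neutral gray maps to its own intensity.
    return 0.299 * r + 0.587 * g + 0.114 * b

print(round(luma_from_rgb(255, 255, 255)))  # 255
```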
One can determine the frames of the original video and the reacquired video that should be used in the comparison by using time codes or key codes (used for film to video transfer) to identify the same frame.
Alternatively, one could do a rough cut comparison to see if the reacquired video is getting close to the actual frame of the original, and then do the sum of the differences of the differences comparison as described with reference to Fig. 5 to see if there is a match. The rough cut approximation might involve the Fig. 5 type of comparison with a larger threshold value, e.g., 10%. Alternatively, one could use another, simpler comparison for the coarse comparison.
Further techniques for rough cut comparisons to determine frames for a fine comparison include comparing compression coefficients (e.g., JPEG coefficients, converted to the frequency domain), comparing a displayable character, comparing auxiliary
(nonimage) data, and comparing average luminance values (either calculated or from MPEG data) for an entire frame. In the rough cut comparisons, wide tolerances could be used.
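A rough cut based on whole-frame average luminance with a wide tolerance, one of the techniques listed above, might look like the following sketch; the 10-unit tolerance is an illustrative value, not one the patent specifies.

```python
def coarse_match(frame_a, frame_b, tolerance=10.0):
    # Compare average luminance over entire frames, using the wide
    # tolerance the text suggests for rough cut comparisons.
    avg_a = sum(frame_a) / len(frame_a)
    avg_b = sum(frame_b) / len(frame_b)
    return abs(avg_a - avg_b) <= tolerance

# Averages 116 and 118 differ by 2, well inside the wide tolerance.
print(coarse_match([16, 16, 216, 216], [17, 18, 218, 219]))  # True
```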
Wavelet filtering can also be used to generate the characteristics for coarse and/or fine comparison. The video data will be buffered as the reacquired video is being received and evaluated. The rough cut comparison can thus be used to decide to discard video prior to a match. Once a match has been found (by rough cut comparison), the data could continue to be buffered in buffer 68 while a fine cut comparison is then carried out on the pixels of interest.
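One level of a Haar transform is the simplest wavelet filtering that could generate such characteristics; treating Haar as the wavelet of choice is an assumption, since the patent does not specify one. The low-pass (approximation) outputs could feed the coarse comparison and the detail outputs the fine one.

```python
def haar_step(values):
    # One level of the Haar wavelet transform: pairwise averages
    # (approximation) and pairwise half-differences (detail).
    lo = [(values[i] + values[i + 1]) / 2 for i in range(0, len(values), 2)]
    hi = [(values[i] - values[i + 1]) / 2 for i in range(0, len(values), 2)]
    return lo, hi

# The Fig. 5 original line reduces to two approximation coefficients
# and zero detail, since each pair of pixels is identical.
print(haar_step([16, 16, 216, 216]))  # ([16.0, 216.0], [0.0, 0.0])
```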
Assuming that there is a match, the data will be stored in mass storage 50. In this way, the rough cut and fine comparisons can be carried out on the fly and prior to storage at the computer. Thus there is no need to store data until it has been verified, and there is no need to reinput the data after verification.
The luminance or other values being used for comparison could be filtered, with the comparisons being carried out on the filtered data, to remove noise (high frequencies) and the effects of overall gain changes (low frequencies).
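The band-pass filtering described above — suppress high-frequency noise and low-frequency gain drift — can be sketched with two moving averages; the window sizes are illustrative assumptions.

```python
def moving_average(values, window):
    # Centered moving average; the window shrinks at the boundaries.
    out = []
    for i in range(len(values)):
        lo = max(0, i - window // 2)
        hi = min(len(values), i + window // 2 + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

def band_pass(values, smooth_window=3, trend_window=15):
    # Smooth to remove noise (high frequencies), then subtract a long
    # moving average to remove overall gain changes (low frequencies),
    # leaving the mid-band structure used for comparison.
    smoothed = moving_average(values, smooth_window)
    trend = moving_average(smoothed, trend_window)
    return [s - t for s, t in zip(smoothed, trend)]
```

A flat luminance line, or one shifted by a uniform gain change, filters to (near) zero, so such comparisons would be insensitive to overall gain.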
Other embodiments of the invention are within the scope of the appended claims.
What is claimed is:

Claims

1. A method of verifying reacquired video comprising: acquiring a sequence of video frames from a source to obtain acquired video data, determining a characteristic of said acquired video data, storing said characteristic, reacquiring the same said sequence of video frames from said source or another source to obtain reacquired video data, determining said characteristic of said reacquired video data at the same time that the same said sequence of video frames is being reacquired, and comparing said characteristic for said reacquired video data with said characteristic for said acquired video data.
2. The method of claim 1 wherein said acquired video data and reacquired video data include luminance values, and wherein said characteristic is based on said luminance values.
3. The method of claim 1 wherein said acquired video data and reacquired video data include RGB intensity values, and wherein said characteristic is based on one or more of said RGB intensity values.
4. The method of claim 1 wherein said acquired video data and reacquired video data include RGB intensity values, and wherein said characteristic is based on one or more luminance values derived from said RGB intensity values.
5. The method of claim 2 wherein said characteristic is based on differences in luminance values.
6. The method of claim 5 wherein said characteristic is based on pixel-to- pixel differences in luminance values.
7. The method of claim 6 wherein said comparing involves calculating differences of said pixel-to-pixel differences for said acquired video data and said reacquired video data.
8. The method of claim 1 wherein each said characteristic is based on portions of video frames in the respective said sequence.
9. The method of claim 1 wherein each said characteristic is based on a plurality of video frames in the respective said sequence.
10. The method of claim 1, wherein said sequence of video frames has time codes associated with it, and further comprising monitoring said time codes during said reacquiring and beginning said comparing when a particular time code is detected.
11. The method of claim 1, wherein said sequence of video frames has key codes associated with it, and further comprising monitoring said key codes during said reacquiring and beginning said comparing when a particular key code is detected.
12. The method of claim 1, wherein said comparing includes carrying out a coarse comparison to verify that the same sequence is being reacquired and thereafter carrying out a fine comparison to identify a particular frame in said sequence.
13. The method of claim 5 wherein said determining steps include passing said differences through a low pass filter and a high pass filter.
14. A method of locating a particular video frame in a series of video frames comprising carrying out a coarse comparison of a first characteristic of said series of video frames to determine that a frame being subjected to the coarse comparison is likely to be in the same clip as said particular video frame, and thereafter carrying out a fine comparison on a second characteristic of said series of video frames to identify said particular frame.
15. The method of claim 14 wherein said first characteristic and said second characteristic are different characteristics.
16. The method of claim 14 wherein said first characteristic and said second characteristic are the same characteristic.
17. The method of claim 15 wherein said first characteristic is based on auxiliary data.
18. The method of claim 15 wherein said first characteristic is an average luminance value for a plurality of pixels in a frame.
19. The method of claim 15 wherein said first characteristic is a characteristic of JPEG encoded data.
20. The method of claim 15 wherein said first characteristic is an average luminance value for a plurality of pixels in a frame determined from MPEG encoded data.
21. The method of claim 14 wherein said second characteristic is luminance.
22. The method of claim 14 wherein said first characteristic is generated using wavelet filtering.
23. The method of claim 14 wherein said second characteristic is generated using wavelet filtering.
24. Apparatus for verifying reacquired video data comprising an input for receiving acquired video data and reacquired video data, a video characterizer to determine a characteristic of said acquired video data and said reacquired video data as said data are being received, a storage for storing said characteristic, and a comparator for comparing said characteristic for said reacquired video data with said characteristic for said acquired video data to determine that the reacquired video data is the same as the acquired video data.
25. Apparatus for locating a particular video frame in a series of video frames comprising a coarse comparator that carries out a coarse comparison of a first characteristic of said series of video frames to determine that a frame being subjected to the coarse comparison is likely to be in the same clip as said particular video frame, and a fine comparator that carries out a fine comparison on a second characteristic of said series of video frames to identify said particular frame.
26. The apparatus of claim 25 wherein said first characteristic is generated using wavelet filtering.
27. The apparatus of claim 25 wherein said second characteristic is generated using wavelet filtering.
PCT/US2001/002675 2000-01-26 2001-01-26 Verifying reacquired video WO2001056277A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2001231190A AU2001231190A1 (en) 2000-01-26 2001-01-26 Verifying reacquired video

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US49114000A 2000-01-26 2000-01-26
US09/491,140 2000-01-26

Publications (1)

Publication Number Publication Date
WO2001056277A1 true WO2001056277A1 (en) 2001-08-02

Family

ID=23950951

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/002675 WO2001056277A1 (en) 2000-01-26 2001-01-26 Verifying reacquired video

Country Status (2)

Country Link
AU (1) AU2001231190A1 (en)
WO (1) WO2001056277A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006136299A1 (en) * 2005-06-22 2006-12-28 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Device and method for determining a point in a film
US7948557B2 (en) 2005-06-22 2011-05-24 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for generating a control signal for a film event system
US8326112B2 (en) 2005-06-22 2012-12-04 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for performing a correlation between a test sound signal replayable at variable speed and a reference sound signal

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4230990A (en) * 1979-03-16 1980-10-28 Lert John G Jr Broadcast program identification method and system
US5668917A (en) * 1994-07-05 1997-09-16 Lewine; Donald A. Apparatus and method for detection of unwanted broadcast information
US5790236A (en) * 1994-05-12 1998-08-04 Elop Electronics Industries Ltd. Movie processing system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4230990A (en) * 1979-03-16 1980-10-28 Lert John G Jr Broadcast program identification method and system
US4230990C1 (en) * 1979-03-16 2002-04-09 John G Lert Jr Broadcast program identification method and system
US5790236A (en) * 1994-05-12 1998-08-04 Elop Electronics Industries Ltd. Movie processing system
US5668917A (en) * 1994-07-05 1997-09-16 Lewine; Donald A. Apparatus and method for detection of unwanted broadcast information

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006136299A1 (en) * 2005-06-22 2006-12-28 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Device and method for determining a point in a film
JP2008547144A (en) * 2005-06-22 2008-12-25 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for determining a position in a film
US7948557B2 (en) 2005-06-22 2011-05-24 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for generating a control signal for a film event system
US8326112B2 (en) 2005-06-22 2012-12-04 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for performing a correlation between a test sound signal replayable at variable speed and a reference sound signal

Also Published As

Publication number Publication date
AU2001231190A1 (en) 2001-08-07

Similar Documents

Publication Publication Date Title
US5774593A (en) Automatic scene decomposition and optimization of MPEG compressed video
US6298145B1 (en) Extracting image frames suitable for printing and visual presentation from the compressed image data
Fan et al. Maximum likelihood estimation of JPEG quantization table in the identification of bitmap compression history
US6842540B1 (en) Surveillance system
US20070030900A1 (en) Denoising video
US6823089B1 (en) Method of determining the extent of blocking and contouring artifacts in a digital image
US20080232765A1 (en) Automatic detection, removal, replacement and tagging of flash frames in a video
JP2000259832A (en) Image feature amount generator, image retrieval device and generation method and retrieval method therefor
US20030112874A1 (en) Apparatus and method for detection of scene changes in motion video
JP4359085B2 (en) Content feature extraction device
US8259172B2 (en) Picture searching apparatus
US6915000B1 (en) System and apparatus for inserting electronic watermark data
Li et al. Extraction of PRNU noise from partly decoded video
US20060053086A1 (en) Method and apparatus for removing visible artefacts in video images
GB2438689A (en) Method of detecting correction methods for feature images
WO2001056277A1 (en) Verifying reacquired video
US8238601B2 (en) System and method for removing digital watermarks from a watermarked image
US20190354798A1 (en) Pattern generation device, image processing device, pattern generation method, and storage medium on which program is stored
US20020051489A1 (en) Image matching method, and image processing apparatus and method using the same
US7747130B2 (en) Apparatus and method for extracting representative still images from MPEG video
US7386146B2 (en) Insertion of a message in a sequence of digital images
CN107071405B (en) 2017-01-06 2019-09-17 Video coding method and device
Manimurugan et al. A tailored anti-forensic technique for digital image applications
CN110517252B (en) Video detection method and device
Lim et al. Rate-distortion based image segmentation using recursive merging

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP