CN101601068B - System and method for embedding and detecting data - Google Patents


Info

Publication number
CN101601068B
CN101601068B CN200880003825.1A
Authority
CN
China
Prior art keywords
video
pigment
frame
coordinate
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN200880003825.1A
Other languages
Chinese (zh)
Other versions
CN101601068A (en)
Inventor
Z·盖泽尔
L·德伦多尔夫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Synamedia Ltd
Original Assignee
NDS Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from IL183841A external-priority patent/IL183841A0/en
Application filed by NDS Ltd filed Critical NDS Ltd
Priority claimed from PCT/IB2008/050104 external-priority patent/WO2008096281A1/en
Publication of CN101601068A publication Critical patent/CN101601068A/en
Application granted granted Critical
Publication of CN101601068B publication Critical patent/CN101601068B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

A method and system for embedding data in video frames is described, the method comprising: receiving marking information; representing the marking information as a 2-coordinate vector, denoted omega, where the 2 coordinates are denoted, respectively, alpha and beta, such that omega = (alpha, beta); providing a video frame to be marked, the video frame including a plurality of pixels, each pixel of the plurality of pixels being represented as p, where p = (x, y), x and y including coordinates of pixel p, the plurality of pixels being represented as a triad of color elements, denoted, respectively, as R, G, and B; and marking the video frame by transforming each pixel among the plurality of pixels as follows: R'(p) = R(p) + (p, omega R), G'(p) = G(p) + (p, omega G), and B'(p) = B(p) + (p, omega B), where (p, omega R) represents a dot product operation on p and omega R, (p, omega G) represents a dot product operation on p and omega G, and (p, omega B) represents a dot product operation on p and omega B.

Description

Method and system for embedding and detecting data
Field of the Invention
The present invention relates to data embedding systems and, in particular, to data embedding systems that take a unique identifier as input.
Background of the Invention
With recent advances in Internet content distribution, including peer-to-peer networks and real-time video streaming systems, embedding data in video in order to trace the point of distribution has become important for preventing unauthorized content distribution. The point of distribution is often an authorized viewer, for example a cinema in which a pirated copy is made with a camcorder, or a set-top box television decoder whose output is captured and re-encoded as a video file. Once the source has been traced, measures can be taken to prevent further unauthorized distribution.
Embedding signals in video is a field of wide interest both in academic research and in commercial invention. Hidden watermarking in the compressed (MPEG) domain is known in the art, as are visible watermarks appearing as bitmaps on top of the video, and steganographic watermarks.
The Digital Watermarking of Visual Data: State of the Art and New Trends, by M. Barni, F. Bartolini and A. Piva, Congres Signal Processing X: Theories and Applications (Tampere, 4-8 September 2000), EUSIPCO 2000: European Signal Processing Conference No. 10, Tampere, Finland (04/09/2000), briefly reviews the state of the art in digital watermarking of visual data. A communication perspective is adopted to identify the main issues in digital watermarking and to present the solutions commonly adopted by the research community. The authors first consider the various schemes used for watermark embedding and hiding. The communication channel is then taken into account, and the main research trends in attack modeling are summarized. Particular attention is paid to watermark recovery because of its impact on the final reliability of the whole watermarking system.
Multichannel Watermarking of Color Images, by M. Barni, F. Bartolini and A. Piva, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 12, No. 3, March 2002, notes that, in the field of image watermarking, research has mainly been concentrated on grey-level image watermarking, with the extension to the color case usually accomplished by marking the image luminance or by processing each color channel separately. In that paper, a DCT-domain watermarking technique expressly designed to exploit the characteristics of color images is proposed. The watermark is hidden in the data by modifying a subset of the full-frame DCT coefficients of each color channel. Detection is based on a global correlation measure, computed by taking into account the information conveyed by the three color channels as well as their interdependency. To determine whether the image contains the watermark, the correlation value is compared against a threshold. With respect to existing grey-scale algorithms, a new approach to threshold selection is proposed, which permits the probability of missed detection to be minimized while ensuring a given false-detection probability. Experimental results, as well as theoretical analysis, are presented to demonstrate the validity of the new approach with respect to algorithms operating on image luminance only.
Digital Watermarking for 3D Polygons using Multiresolution Wavelet Decomposition, by Satoshi Kanai, Hiroaki Date and Takeshi Kishinami (available at citeseer.ist.psu.edu/504450.html), notes that much interest has recently been taken in methods for protecting the copyright of digital data and preventing its illegal duplication. However, in the CAD/CAM and CG fields, there is no effective way to protect the copyright of 3D geometric models. As a first step toward solving this problem, a new digital watermarking method for 3D polygonal models is introduced in the paper. Watermarking is one of the copyright protection methods, in which an invisible watermark is secretly embedded into the original data. The proposed watermarking method is based on the wavelet transform (WT) and multiresolution representation (MRR) of the polygonal model. The watermark can be embedded in the large wavelet coefficient vectors at various resolution levels of the MRR. This makes the embedded watermark imperceptible and invariant to affine transformation, and also makes the control of the geometric error caused by the watermarking reliable. First, the requirements and features of the proposed watermarking method are discussed. Second, the mathematical formulations of the WT and MRR of the polygonal model are shown. Third, algorithms for embedding and extracting the watermark are proposed. Finally, the effectiveness of the proposed watermarking method is shown through several simulation results.
United States Patent 7,068,809 to Stach describes a method in which segmentation techniques are used in methods for embedding and detecting digital watermarks in multimedia signals, such as images, video and audio. A digital watermark embedder segments a media signal into arbitrarily shaped regions based on a signal characteristic, such as a similarity measure, texture measure, shape measure, or a measure of luminance or other color-value extrema. The attributes of these regions are then used to adapt an auxiliary signal such that it is more effectively hidden in the media signal. In one example implementation, the segmentation process takes advantage of a human perceptibility model to group samples of a media signal into contiguous regions based on their similarity. Attributes of the regions, such as their frequency characteristics, are then adapted to the frequency characteristics of a desired watermark signal. One embedding method adjusts a feature of the region to embed elements of an auxiliary signal, such as an error-correction-encoded message signal. The detecting method re-computes the segmentation, calculates the same features, and maps the feature values to symbols to reconstruct an estimate of the auxiliary signal. The auxiliary signal is then demodulated or decoded to recover the message using error-correction decoding/demodulation operations.
United States Patent 6,950,532 to Schumann et al. describes a visual copyright protection system comprising input content, a disruption processor and output content. The disruption processor inserts disruptive content into the input content, creating output content that hinders the ability of optical recording devices to make useful copies of the output content.
The abstract of Japanese Patent JP 11075055 describes a method in which secret information is embedded in a luminance signal, and position information for the secret information is embedded in the corresponding chrominance signal. The method used for embedding the secret information is an M-sequence, which is a pseudo-random number sequence (a PN sequence). The image signal is divided into blocks of N pixel values, and a pseudo-random sequence of length N is added. This operation is performed on each block of the input image signal, so that an image signal in which the secret information has been embedded is formed. Position information indicating where the pseudo-random sequence has been embedded in the luminance signal is superimposed at the corresponding position of the chrominance signal. Each scan line of the chrominance signal is divided into blocks of N picture elements, and a pseudo-random sequence of length N is superimposed. Correlation is calculated in order to decode.
United States Patent Application 2002/0027612 of Brill et al. describes a method for adding a watermark to a video signal representing an image, the method comprising the steps of applying a first watermark function to a first set of pixels of a first frame, and applying the complement of the first watermark function to a second set of pixels of the first frame.
United States Patent 5,832,119 to Rhoads describes a method whereby multi-bit signals that have been steganographically embedded in empirical data, such as image or audio data, are detected, and certain aspects of the operation of related systems are controlled accordingly. One application of the invention is a video playback or recording device that is controlled in accordance with the embedded multi-bit signal so as to restrict playback or recording operations. Another application is a photocopying kiosk that recognizes certain steganographic markings in an image being copied and interrupts the copying operation.
The following references are also believed to reflect the state of the art:
US 6,760,463 to Rhoads;
US 6,721,440 to Reed et al.;
US 5,636,292 to Rhoads;
US 5,768,426 to Rhoads;
US 5,745,604 to Rhoads;
US 6,404,898 to Rhoads;
US 7,058,697 to Rhoads;
US 5,832,119 to Rhoads;
US 5,710,834 to Rhoads;
US 7,020,304 to Alattar et al.;
US 7,068,809 to Stach;
US 6,381,341 to Rhoads;
US 6,950,532 to Schumann et al.;
US 7,035,427 to Rhoads; and
WO 02/07362 of Digimarc Corporation.
The disclosures of all references mentioned above and throughout the present specification, as well as the disclosures of all references mentioned in those references, are hereby incorporated herein by reference.
Summary of the Invention
The present invention seeks to provide an improved system and method for embedding data into a target, including, but not limited to, digital video. During data embedding, every pixel of each frame into which data is to be embedded undergoes a mathematical transformation of the triad of its three color elements (R, G, B), based on the position of the pixel on the screen and on the input information. By way of example, and without limiting the generality of the foregoing, the input information comprises a unique ID of the owner, encoded as a two-dimensional vector. To detect the embedded information, a color-mass value is determined for each frame comprising embedded data by summing the color element values of the pixels in the frame. The embedded information is extracted by comparing the determined values with expected results, using the equations described below.
There is thus provided in accordance with a preferred embodiment of the present invention a method comprising: receiving marking information; representing the marking information as a 2-coordinate vector, the 2-coordinate vector being denoted ω, wherein the 2 coordinates are denoted α and β respectively, such that ω = (α, β); providing a video frame to be marked, the video frame comprising a plurality of pixels, each pixel of the plurality of pixels being denoted p, wherein p = (x, y), x and y comprising coordinates of pixel p, each of the plurality of pixels being represented as a triad of color elements, the color elements being denoted R, G and B respectively; and marking the video frame by transforming each pixel of the plurality of pixels according to R'(p) = R(p) + <p, ω_R>, G'(p) = G(p) + <p, ω_G> and B'(p) = B(p) + <p, ω_B>, wherein <p, ω_R> denotes a dot product operation on p and ω_R, <p, ω_G> denotes a dot product operation on p and ω_G, and <p, ω_B> denotes a dot product operation on p and ω_B.
Further in accordance with a preferred embodiment of the present invention, the marking information comprises information identifying a rendering device.
Still further in accordance with a preferred embodiment of the present invention, the information identifying the rendering device comprises a unique device identifier.
Additionally in accordance with a preferred embodiment of the present invention, the marking information comprises a copyright mark.
Moreover in accordance with a preferred embodiment of the present invention, the marking information comprises access rights data.
Further in accordance with a preferred embodiment of the present invention, the access rights data comprise playback/copying permissions.
Still further in accordance with a preferred embodiment of the present invention, at least one of the color elements comprises a Red-Green-Blue color element.
Additionally in accordance with a preferred embodiment of the present invention, at least one of the color elements comprises a chrominance/luminance color element.
Moreover in accordance with a preferred embodiment of the present invention, the chrominance/luminance color element comprises a YCbCr chrominance/luminance color element.
Further in accordance with a preferred embodiment of the present invention, the chrominance/luminance color element comprises a YPbPr chrominance/luminance color element.
Still further in accordance with a preferred embodiment of the present invention, the chrominance/luminance color element comprises a YDbDr chrominance/luminance color element.
Additionally in accordance with a preferred embodiment of the present invention, the chrominance/luminance color element comprises an xvYCC chrominance/luminance color element.
Moreover in accordance with a preferred embodiment of the present invention, R'(p), G'(p) and B'(p) do not exceed a maximum value allowed for each color element in a color rendering system.
Further in accordance with a preferred embodiment of the present invention, any of R'(p), G'(p) and B'(p) is truncated to ensure that none of R'(p), G'(p) and B'(p) exceeds the maximum value.
Still further in accordance with a preferred embodiment of the present invention, the color rendering system comprises a Red-Green-Blue color rendering system.
Additionally in accordance with a preferred embodiment of the present invention, the color rendering system comprises a chrominance/luminance color rendering system.
Moreover in accordance with a preferred embodiment of the present invention, R'(p), G'(p) and B'(p) are not less than a minimum value allowed for each color element in a color rendering system.
Further in accordance with a preferred embodiment of the present invention, any of R'(p), G'(p) and B'(p) is truncated to ensure that none of R'(p), G'(p) and B'(p) is less than the minimum value.
Still further in accordance with a preferred embodiment of the present invention, the color rendering system comprises a Red-Green-Blue color rendering system.
Additionally in accordance with a preferred embodiment of the present invention, the color rendering system comprises a chrominance/luminance color rendering system.
Moreover in accordance with a preferred embodiment of the present invention, representing the marking information as a 2-coordinate vector comprises: representing the marking information as a bit string; subdividing the bit string into a plurality of bit sub-strings; and translating each bit sub-string of the plurality of bit sub-strings into a corresponding 2-coordinate vector.
Further in accordance with a preferred embodiment of the present invention, each bit sub-string of the plurality of bit sub-strings comprises a string of three bits.
Still further in accordance with a preferred embodiment of the present invention, each bit sub-string of the plurality of bit sub-strings comprises a string of two bits.
There is also provided in accordance with another preferred embodiment of the present invention a method comprising: capturing a video stream comprising embedded data; dividing the video stream into a plurality of video frames comprised therein; locating a color mass, denoted C', for each color element of each individual video frame comprised in the plurality of video frames, by summing color-value coordinates of the given color element over the individual video frame; locating a color mass, denoted C, for each color element of a corresponding individual video frame, the corresponding individual video frame corresponding to a video frame not comprising embedded data; subtracting C from C'; and deriving values of a first coordinate and a second coordinate from the result of the subtraction, the first coordinate and the second coordinate comprising coordinates of a vector, the vector corresponding to a bit string, the bit string comprising information embedded in the individual video frame.
Further in accordance with a preferred embodiment of the present invention, marking information is reconstructed as a result of the step of deriving the values of the first coordinate and the second coordinate.
Still further in accordance with a preferred embodiment of the present invention, a unique user ID is identified as a result of reconstructing the marking information.
There is also provided in accordance with yet another preferred embodiment of the present invention a system comprising: a marking information receiver; a 2-coordinate vector, denoted ω, wherein the 2 coordinates are denoted α and β respectively, such that ω = (α, β), the 2-coordinate vector representing the marking information; a video frame to be marked, the video frame comprising a plurality of pixels, each pixel of the plurality of pixels being denoted p, wherein p = (x, y), x and y comprising coordinates of pixel p, the plurality of pixels being represented as a triad of color elements, the color elements being denoted R, G and B respectively; and a video frame marker, which marks the video frame by transforming each pixel of the plurality of pixels according to: R'(p) = R(p) + <p, ω_R>, G'(p) = G(p) + <p, ω_G> and B'(p) = B(p) + <p, ω_B>, wherein: <p, ω_R> denotes a dot product operation on p and ω_R; <p, ω_G> denotes a dot product operation on p and ω_G; and <p, ω_B> denotes a dot product operation on p and ω_B.
There is also provided in accordance with still another preferred embodiment of the present invention a system comprising: a captured video stream comprising embedded data; a video stream divider, which divides the captured video stream into a plurality of video frames comprised therein; a first color mass locator, which locates a first color mass, denoted C', for each color element of each individual video frame comprised in the plurality of video frames, by summing color-value coordinates of the given color element over the individual video frame; a second color mass locator, which locates a second color mass, denoted C, for each color element of a corresponding individual video frame, the corresponding individual video frame corresponding to a video frame not comprising embedded data; a processor, which subtracts C from C'; and a second processor, which derives values of a first coordinate and a second coordinate from the result of the subtraction, the first coordinate and the second coordinate comprising coordinates of a vector, the vector corresponding to a bit string, the bit string comprising information embedded in the individual video frame.
There is also provided in accordance with another preferred embodiment of the present invention a signal comprising a video stream comprising a plurality of video frames, each video frame of the plurality of video frames comprising a plurality of pixels, each pixel of the plurality of pixels being denoted p, wherein p = (x, y), x and y comprising coordinates of pixel p, the plurality of pixels being represented as a triad of color elements, the color elements being denoted R, G and B respectively, wherein marking information, represented as a 2-coordinate vector denoted ω, wherein the 2 coordinates are denoted α and β respectively such that ω = (α, β), has been applied by transforming each pixel of the plurality of pixels according to: R'(p) = R(p) + <p, ω_R>, G'(p) = G(p) + <p, ω_G> and B'(p) = B(p) + <p, ω_B>, wherein: <p, ω_R> denotes a dot product operation on p and ω_R; <p, ω_G> denotes a dot product operation on p and ω_G; and <p, ω_B> denotes a dot product operation on p and ω_B.
There is also provided in accordance with yet another preferred embodiment of the present invention a storage medium comprising a video stream comprising a plurality of video frames, each video frame of the plurality of video frames comprising a plurality of pixels, each pixel of the plurality of pixels being denoted p, wherein p = (x, y), x and y comprising coordinates of pixel p, the plurality of pixels being represented as a triad of color elements, the color elements being denoted R, G and B respectively, wherein marking information, represented as a 2-coordinate vector denoted ω, wherein the 2 coordinates are denoted α and β respectively such that ω = (α, β), has been applied by transforming each pixel of the plurality of pixels according to: R'(p) = R(p) + <p, ω_R>, G'(p) = G(p) + <p, ω_G> and B'(p) = B(p) + <p, ω_B>, wherein: <p, ω_R> denotes a dot product operation on p and ω_R; <p, ω_G> denotes a dot product operation on p and ω_G; and <p, ω_B> denotes a dot product operation on p and ω_B.
Brief Description of the Drawings
The present invention will be understood and appreciated more fully from the following detailed description, taken in conjunction with the drawings in which:
Fig. 1 is a simplified block diagram illustration of a video data embedding system, constructed and operative in accordance with a preferred embodiment of the present invention;
Fig. 2 is an illustration of a typical frame into which data is to be embedded in the system of Fig. 1;
Fig. 3 is an illustration of one preferred embodiment of a method of injecting marking information into the typical frame of Fig. 2;
Fig. 4 is an illustration of the typical frame of Fig. 2 overlaid with eight vectors;
Fig. 5 is an illustration of a sample frame in the system of Fig. 1, depicted prior to data embedding and showing the color elements and pixel coordinates of a plurality of pixels comprised in the sample frame;
Fig. 6 is an illustration of a typical color gradient in a frame produced by a preferred implementation of the present invention, together with the 2-coordinate vector ω; and
Figs. 7 and 8 are simplified flowcharts of preferred methods of operation of the system of Fig. 1.
Detailed Description of Embodiments
Reference is now made to Fig. 1, which is a simplified block diagram illustration of a video data embedding system, constructed and operative in accordance with a preferred embodiment of the present invention. The system of Fig. 1 comprises a content rendering device 10. The content rendering device 10 preferably comprises marking information 15 and a data embedding system 20.
The marking information 15 preferably comprises any appropriate information, for example and without limiting the generality of the foregoing, information identifying the rendering device 10, preferably a unique device ID of the content rendering device 10. Alternatively and preferably, the marking information comprises a copyright mark or other access rights data, for example and without limiting the generality of the foregoing, playback/copying permissions to be obeyed by the content rendering device 10. For example and without limiting the generality of the foregoing, those skilled in the art will appreciate that copyright information can be a single bit representing copyrighted/not copyrighted. Alternatively, copyright may be represented by a plurality of bits, for example and without limiting the generality of the foregoing, a permission to copy but not to burn to a CD. Authorized playback devices are assumed to comply with such signals, while unauthorized playback devices are not assumed to comply with such signals. It is appreciated that combinations of appropriate types of identifying information may alternatively be used as the marking information 15.
The data embedding system 20 is preferably operative to inject embedded data, depicted in Fig. 1 as asterisks "*", into frames 30, 40, 50 of a video stream 60.
The operation of the system of Fig. 1 is now described. The video stream 60 is depicted as comprising three different types of video frames:
frames 30 into which data has not yet been embedded;
a frame 40 into which data is currently being embedded; and
frames 50 into which data has already been embedded.
The data embedding system 20 preferably receives the marking information 15 as input, produces embedded data, depicted as asterisks "*", and injects a watermark (referred to herein by the term "WM") into the frame 40 currently having data embedded therein.
Content comprising the video stream 60, now comprising a plurality of frames 50 with embedded data, may be uploaded to, or otherwise made available on, a content sharing network 70. The content sharing network 70 typically comprises either a streaming content sharing network or a peer-to-peer content sharing network. Alternatively, the content sharing network 70 may comprise any appropriate type of online and/or offline content distribution scheme, for example and without limiting the generality of the foregoing, retail sale of pirated DVDs. A second device 80 may subsequently acquire the video stream 60 from the content sharing network 70.
A broadcaster, a content owner, or another appropriately authorized agent may also acquire the video stream 60 from the content sharing network 70. After the video stream 60 has been acquired from the content sharing network 70 by the broadcaster, content owner or other interested stakeholder, the video stream 60 is preferably input into a detection device 90. The detection device 90 preferably extracts the embedded data, depicted as asterisks "*", from each of the frames 50 comprised in the video stream 60 into which data has been embedded. The extracted embedded data is then input into an embedded-data detection system 95. The embedded-data detection system 95 is preferably able to determine the injected marking information 15 from the input embedded data.
Reference is now made to Fig. 2, which is an illustration of a typical frame into which data is to be embedded in the system of Fig. 1. Those skilled in the art will appreciate that each frame into which data is embedded comprises a plurality of pixels. Each of the plurality of pixels can be represented as a tuple comprising the set of color elements of that pixel. For example and without limiting the generality of the foregoing, in the RGB color system (where R denotes red, G denotes green and B denotes blue, whether used jointly or separately), each color element of each of the plurality of pixels can be expressed as a value between 0 and 255.
Those skilled in the art will appreciate that pixel color may alternatively be expressed in any appropriate color space, such as any of the well-known chrominance/luminance systems (for example, YCbCr; YPbPr; YDbDr), or according to the xvYCC standard, IEC 61966-2-4. For simplicity of discussion, pixel color is expressed herein, in a non-limiting manner, as an RGB triad.
The term "inject", in all its grammatical forms, is used herein interchangeably with the term "embed", in all its grammatical forms.
The following notation is used in the discussion below and in the claims; portions of the notation are depicted, for purposes of illustration, in Fig. 2:
W — the frame width, in pixels
H — the frame height, in pixels
p = (x, y) — the position of a pixel relative to the frame center; for example, the top-left pixel is (-W/2, -H/2)
R(p), G(p), B(p) — the original red, green and blue components of pixel p
R'(p), G'(p), B'(p) — the red, green and blue components of pixel p after data embedding
R* = ∑R(p) — the sum of R(p) over every pixel p in the frame
Similarly for G, G* = ∑G(p), and for B, B* = ∑B(p). For simplicity of discussion, further examples are limited to the R component.
ω = (α, β) — the injected information, expressed as a 2-coordinate vector. As noted above with reference to Fig. 1, the injected information preferably depends on some appropriate information, preferably information identifying the rendering device 10 (Fig. 1), preferably a unique device ID of the content rendering device 10 (Fig. 1).
<A, B> = ∑(A_i · B_i) — the dot product of vectors A and B
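By way of illustration only, and not as part of the patented method, the notation above can be made concrete with a few small helper routines. The following Python sketch assumes a frame stored as a list of rows of (R, G, B) tuples and an index-based centre-origin mapping in which, for odd frame dimensions, the centre pixel is (0, 0); the exact mapping for even dimensions (which, as in Fig. 5, skips 0) is not spelled out here, so the mapping below is an assumption.

```python
# Illustrative helpers for the notation above (a sketch, not the patent's code).
# A frame is assumed to be a list of H rows, each a list of W (R, G, B) tuples.

def pixel_coords(frame):
    """Yield (p, (R, G, B)) pairs, with p = (x, y) measured from the frame centre.

    Assumed mapping: column/row index minus half the width/height, so the
    top-left pixel is approximately (-W/2, -H/2), as in the notation above.
    """
    H, W = len(frame), len(frame[0])
    for row in range(H):
        for col in range(W):
            yield (col - W // 2, row - H // 2), frame[row][col]

def dot(a, b):
    """<A, B> = sum(A_i * B_i): the dot product of two 2-coordinate vectors."""
    return a[0] * b[0] + a[1] * b[1]

def color_mass(frame, channel):
    """R* = sum of R(p) over all pixels p (channel 0 = R, 1 = G, 2 = B)."""
    return sum(rgb[channel] for _, rgb in pixel_coords(frame))
```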
Reference is now made to Figs. 3 and 4. Fig. 3 is an illustration of one preferred embodiment of a method of injecting the marking information 15 (Fig. 1) into the typical frame of Fig. 2. Fig. 4 is an illustration of the typical frame of Fig. 2 overlaid with eight vectors. As noted above with reference to Fig. 1, the marking information 15 (Fig. 1) preferably comprises any appropriate information, for example and without limiting the generality of the foregoing, information identifying the rendering device 10 (Fig. 1), preferably a unique device ID of the content rendering device 10 (Fig. 1). In Fig. 3, for this non-limiting example, the marking information 300 is depicted as an arbitrary 32-bit number.
The marking information 300 is depicted as divided into a plurality of 3-bit triples. Each 3-bit triple is depicted in association with a particular 2-coordinate vector ω, as follows:
each 3-bit triple is associated with one of the eight vectors a-h depicted in Fig. 4. A preferred scheme for associating each 3-bit triple with one of the vectors is depicted in the table 320 at the bottom of Fig. 3, namely:
Vector Bit value
a 000
b 001
c 010
d 011
e 100
f 101
g 110
h 111
It is appreciated that the method of dividing the identifying information into groups of 3 bits is arbitrary; any appropriate alternative method of division is equally valid.
It is appreciated that the choice of the vectors a-h is arbitrary; any alternative group of vectors, taking the center of the viewing screen as the origin, is an equally valid group of vectors for use in preferred embodiments of the present invention.
It is appreciated that the association of bit values with vectors is arbitrary, and any alternative scheme is equally valid. For example and without limiting the generality of the foregoing, the following table describes an alternative association of the 3-bit triples with the vectors:
Vector Bit value
a 111
b 110
c 101
d 100
e 011
f 010
g 001
h 000
It is appreciated that the marking information 300 in Fig. 3 is depicted as a 32-bit number; however, in order to have a complete set of vectors ω_R3, ω_G3, ω_B3, and possibly ω_R4, ω_G4, ω_B4, a 33-bit or a 36-bit number may be required. The missing bits are depicted in Fig. 3 by the empty box 330. In order to have a complete set of 33 or 36 bits, one or four padding bits must be added to the 32-bit marking information 300, using techniques well known in the art. For example and without limiting the generality of the foregoing, a 4-bit checksum may be added as the padding bits; the last 4 bits may be repeated as the padding bits; or an arbitrary 4-bit sequence (for example, any one of 0000, 0101, 1010 or 1111) may be added as the padding bits, thereby rounding the 32-bit marking information 300 up to 36 bits. Similar techniques can be used to round the 32-bit marking information 300 up to 33 bits.
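A minimal sketch of this encoding step is given below (Python, continuing the illustrative snippets above). It assumes the first vector-to-bit-value table; the eight vectors a-h themselves appear only in Fig. 4, so the eight evenly spaced placeholder directions used here, and the fixed 4-bit padding pattern, are assumptions chosen from the options mentioned above.

```python
import math

# Placeholder stand-ins for the eight vectors a-h of Fig. 4: eight evenly
# spaced directions about the frame centre, keyed by the 3-bit values of the
# first association table above. Per the text, any sufficiently separated
# group of vectors is equally valid; in practice the vectors would also be
# scaled to satisfy the visibility constraint discussed further below.
VECTORS = {format(i, '03b'): (math.cos(i * math.pi / 4), math.sin(i * math.pi / 4))
           for i in range(8)}

def encode_marking_info(bits32):
    """Pad a 32-bit marking-information string to 36 bits, split it into 3-bit
    triples, and map each triple to a 2-coordinate vector omega."""
    assert len(bits32) == 32 and set(bits32) <= {'0', '1'}
    padded = bits32 + '0000'          # one of the padding options named above
    triples = [padded[i:i + 3] for i in range(0, len(padded), 3)]
    omegas = [VECTORS[t] for t in triples]
    # Group the twelve vectors into (omega_Rn, omega_Gn, omega_Bn) triples,
    # one group per run of frames (e.g. group 2 for frames 1801-3600).
    return [tuple(omegas[i:i + 3]) for i in range(0, len(omegas), 3)]
```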
Each group of three vectors ω_Rn, ω_Gn, ω_Bn is preferably used to embed data into a limited number of frames, as described below. For example and without limiting the generality of the foregoing, in the example depicted in Figs. 3 and 4, ω_R2, ω_G2, ω_B2 are used to embed data into frames 1801-3600.
After all 33 or 36 bits have been used to embed data into a group of frames, the marking information 300 is repeated.
The marking information 15 (Fig. 1) is preferably encoded as three 2-dimensional vectors ω_R, ω_G, ω_B over the real numbers, subject to the restrictions described below.
In order to inject the data ω_R, ω_G, ω_B, each pixel p in a frame is transformed as follows:
R'(p) = R(p) + <p, ω_R>;
G'(p) = G(p) + <p, ω_G>; and
B'(p) = B(p) + <p, ω_B>.
It is appreciated that, whatever the computed values of R'(p), G'(p) and B'(p) are, the values actually used can never exceed the maximum value imposed by the video color rendering system. For example and without limiting the generality of the foregoing, in a system where RGB values lie between 0 and 255, R', G' and B' can never be higher than the maximum value of 255. Similarly, whatever the computed values of R'(p), G'(p) and B'(p) are, the values actually used can never be lower than the minimum value of 0. For example and without limiting the generality of the foregoing, if G'(p) = 258, then G'(p) is truncated to 255. Similarly, if B'(p) = -2, then B'(p) is raised to 0.
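A minimal sketch of this per-pixel transformation, including the truncation to the allowed range described above, might look as follows (Python, reusing the pixel_coords and dot helpers sketched earlier; the centre-origin index mapping remains an assumption). With the 3-by-3 example frame given below and ω_RGB = (2, 0) applied to all three channels, it should reproduce the marked values listed there, including the truncation of the blue component of P'1 to 0.

```python
def embed_omega(frame, omega_r, omega_g, omega_b, lo=0, hi=255):
    """Return a marked copy of `frame`:
    R'(p) = R(p) + <p, omega_R>, and likewise for G and B,
    with every component truncated to stay within [lo, hi]."""
    H, W = len(frame), len(frame[0])
    marked = [list(row) for row in frame]
    for p, (r, g, b) in pixel_coords(frame):
        x, y = p
        row, col = y + H // 2, x + W // 2     # invert the assumed centre mapping
        marked[row][col] = (
            min(hi, max(lo, round(r + dot(p, omega_r)))),
            min(hi, max(lo, round(g + dot(p, omega_g)))),
            min(hi, max(lo, round(b + dot(p, omega_b)))),
        )
    return marked
```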
Reference is now made to Fig. 5, which is an illustration of a sample frame in the system of Fig. 1, depicted prior to data embedding and showing the color elements and pixel coordinates of a plurality of pixels comprised in the sample frame. Figs. 3 and 4 were discussed as an example of one preferred embodiment of the present invention. It is appreciated that all values are given for purposes of illustration only and should in no way be construed as limiting. For ease of depiction, the sample frame depicted in Fig. 5 comprises only 16 pixels. The following table lists the sample values depicted for the sample frame of Fig. 5:
Pixel      R    G    B
p(-2,2)    112  27   19
p(-1,2)    113  26   25
p(1,2)     111  27   19
p(2,2)     110  29   19
p(-2,1)    110  26   21
p(-1,1)    114  24   18
p(1,1)     110  24   23
p(2,1)     108  23   25
p(-2,-1)   108  23   23
p(-1,-1)   108  22   25
p(1,-1)    100  20   27
p(2,-1)    98   20   30
p(-2,-2)   103  19   27
p(-1,-2)   100  17   29
p(1,-2)    96   13   32
p(2,-2)    94   11   35
           R* = ∑R(p) = 1695   G* = ∑G(p) = 351   B* = ∑B(p) = 397
Several examples of data embedding are now provided. For ease of description, assume a frame of 3 pixels by 3 pixels. Each pixel is labeled Pn, and the pixels are assigned the following coordinates:
P1(-1,-1)  P2(0,-1)  P3(1,-1)
P4(-1,0)   P5(0,0)   P6(1,0)
P7(-1,1)   P8(0,1)   P9(1,1)
As noted above, the top-left pixel is (-W/2, -H/2), so the upper half of the coordinate system has negative values of y.
As noted above, each of the pixels P1-P9 comprises an RGB value. The RGB values below are provided as an example:
P1(191,27,0)    P2(188,25,220)  P3(212,6,194)
P4(123,203,86)  P5(212,38,161)  P6(35,89,121)
P7(20,194,19)   P8(104,76,199)  P9(62,149,131)
Assume ω_RGB = (α, β) = (2, 0). Multiplying the coordinates (x, y) of each pixel by (α, β) gives (α·x) + (β·y) = (2·x) + (0·y) = 2x, which yields the correction to be added to each color element of each pixel:
P1(-2)  P2(0)  P3(2)
P4(-2)  P5(0)  P6(2)
P7(-2)  P8(0)  P9(2)
As described above, adding the correction to each color element of each pixel gives:
P'1(189,25,0)    P'2(188,25,220)  P'3(214,8,196)
P'4(121,201,84)  P'5(212,38,161)  P'6(37,91,123)
P'7(18,192,17)   P'8(104,76,199)  P'9(64,151,133)
As a second example, assume a frame of 5 pixels by 5 pixels:
P1(209,54,9)      P2(144,165,59)    P3(97,88,158)     P4(112,87,92)     P5(35,191,8)
P6(118,184,246)   P7(204,18,51)     P8(60,253,35)     P9(20,116,54)     P10(111,76,177)
P11(137,116,184)  P12(145,79,254)   P13(254,139,112)  P14(7,96,98)      P15(151,45,193)
P16(142,85,214)   P17(123,193,146)  P18(64,41,196)    P19(231,60,231)   P20(69,56,174)
P21(53,241,229)   P22(16,179,88)    P23(22,130,219)   P24(36,132,117)   P25(174,72,122)
Each pixel is labeled Pn, and the pixels are assigned the following coordinates:
P1(-2,-2)   P2(-1,-2)   P3(0,-2)   P4(1,-2)   P5(2,-2)
P6(-2,-1)   P7(-1,-1)   P8(0,-1)   P9(1,-1)   P10(2,-1)
P11(-2,0)   P12(-1,0)   P13(0,0)   P14(1,0)   P15(2,0)
P16(-2,1)   P17(-1,1)   P18(0,1)   P19(1,1)   P20(2,1)
P21(-2,2)   P22(-1,2)   P23(0,2)   P24(1,2)   P25(2,2)
Assume ω_RGB = (α, β) = (1, 1). Multiplying the coordinates (x, y) of each pixel by (α, β) gives (α·x) + (β·y) = (1·x) + (1·y) = x + y, which yields the correction to be added to each color element of each pixel:
P1 = 0    P2 = -1   P3 = -2   P4 = -3   P5 = -4
P6 = 1    P7 = 0    P8 = -1   P9 = -2   P10 = -3
P11 = 2   P12 = 1   P13 = 0   P14 = -1  P15 = -2
P16 = 3   P17 = 2   P18 = 1   P19 = 0   P20 = -1
P21 = 4   P22 = 3   P23 = 2   P24 = 1   P25 = 0
As described above, adding the correction to each color element of each pixel gives:
P'1(209,54,9)      P'2(143,164,58)    P'3(95,86,156)     P'4(109,84,89)     P'5(31,187,4)
P'6(119,185,247)   P'7(204,18,51)     P'8(59,252,34)     P'9(18,114,52)     P'10(108,73,174)
P'11(139,118,186)  P'12(146,80,255)   P'13(254,139,112)  P'14(6,95,97)      P'15(149,43,191)
P'16(145,88,217)   P'17(125,195,148)  P'18(65,42,197)    P'19(231,60,231)   P'20(68,55,173)
P'21(57,245,233)   P'22(19,182,91)    P'23(24,132,221)   P'24(37,133,118)   P'25(174,72,122)
Reference is now made to Fig. 6, which is an illustration of a typical color gradient in a frame 620 produced by a preferred implementation of the present invention, together with the 2-coordinate vector ω 610. As described below, p is greatest at the corners of the screen, and therefore the dot product <p, ω> is maximal for p of maximal length. Accordingly, pixel 630 is depicted as substantially less dark than pixel 640. It is appreciated that, in this example, the gradient depicts the effect of ω 610 on any one of the RGB components.
Those skilled in the art will appreciate that a video signal, or another appropriate signal, can comprise video comprising embedded data as described above with reference to Figs. 1-6. Those skilled in the art will further appreciate that video comprising embedded data as described above with reference to Figs. 1-6 can be stored on a compact disk (CD), a digital versatile disk (DVD), flash memory, or other appropriate storage media.
Detection of the embedded data is now described. For ease of description, the discussion below focuses on the red component only. It is appreciated that detection of the embedded data in the other color components is identical to detection in the red component. The detection device 90 (Fig. 1) typically receives the content 60 from the content sharing network 70.
In the summations below, all sums are taken over all of the pixels in the frame under examination, unless specified otherwise.
As noted above, before data is embedded into a given frame, the color mass for the component R is denoted R* = ∑R(p). The sum R* = ∑R(p) is the sum of the values of a single color element over every pixel in a single frame.
It is appreciated that after data has been embedded into a frame, the color mass remains unchanged:
∑R'(p) = ∑R(p) + <∑p, ω_R> = ∑R(p) + 0 = R*
Those skilled in the art will appreciate that <∑p, ω_R> = 0 because for each pixel p = (x, y) there exists a corresponding pixel -p = (-x, -y); therefore, every summand in ∑<p, ω> has an equal summand of opposite sign.
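The cancellation argument above can be checked numerically; a small sketch follows (reusing the earlier illustrative helpers). Note that the exact pairwise cancellation relies on a coordinate grid that is symmetric about the centre, which the assumed index-based mapping only guarantees for odd frame dimensions.

```python
def mass_shift(frame, omega):
    """Sum of <p, omega> over all pixels; zero for a centre-symmetric grid,
    confirming that sum R'(p) = sum R(p) + sum <p, omega_R> = R*."""
    return sum(dot(p, omega) for p, _ in pixel_coords(frame))
```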
Let C' denote the color mass center of a frame comprising embedded data. For the red component, the color mass center of the frame is defined as the normalized 2-vector:
C'(R) = ∑R'(p)·p / R*
The difference between the color mass center of the frame after data embedding and the color mass center of the original frame is determined by subtraction:
D(R) = ∑R'(p)·p / ∑R'(p) − ∑R(p)·p / ∑R(p) = (∑R(p)·p + ∑<p, ω_R>·p − ∑R(p)·p) / R* = ∑<p, ω_R>·p / R*
Since p = (x, y) and ω_R = (α, β),
∑<p, ω_R>·p = (∑x·(αx + βy), ∑y·(αx + βy))
Opening the brackets and cancelling the summands whose value is 0:
∑x·(αx + βy) = α∑x² + β∑xy, and
β∑xy = β·∑x·∑y = 0.
Therefore,
α∑x² = α · 2 · (1/3)·(W/2)³ · H = αHW³/12
The sums of powers are evaluated using the following formula:
∑_{k=0}^{n} k² = (2n³ + 3n² + n)/6
From the above, the following exact and approximate equalities can be derived:
α ∑_{x=0..W/2, y=0..H/2} x² = α ∑_{x=0}^{W/2} (H/2)·x² = α·(H/2)·(2(W/2)³ + 3(W/2)² + W/2)/6 ≈ αHW³/12
β ∑_{x=0..W/2, y=0..H/2} y² = β ∑_{y=0}^{H/2} (W/2)·y² = β·(W/2)·(2(H/2)³ + 3(H/2)² + H/2)/6 ≈ βWH³/12
Therefore:
D(R) ≈ (HW / (12·R*)) · (αW², βH²),
α and β are then obtained from the following approximate equations:
α ≈ <D(R), (1, 0)> · 12R* / (HW³)
β ≈ <D(R), (0, 1)> · 12R* / (H³W)
For clarity of presentation, approximate expressions have been used above. Those skilled in the art will appreciate that an actual process of extracting the embedded data should use the exact equalities rather than the approximations.
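A minimal detection sketch follows (Python, under the same assumptions and reusing the earlier helpers). Rather than the approximations, it uses the exact relation D(R)·R* = (α∑x², β∑y²), dividing the components of the colour-mass-centre difference by the exact coordinate sums; it assumes access to the original, unmarked frame and that no truncation occurred during embedding.

```python
def mass_centre(frame, channel):
    """C(R) = (sum of R(p)*p over the frame) / R*, the colour mass centre."""
    total = color_mass(frame, channel)
    sx = sum(rgb[channel] * p[0] for p, rgb in pixel_coords(frame))
    sy = sum(rgb[channel] * p[1] for p, rgb in pixel_coords(frame))
    return (sx / total, sy / total)

def recover_omega(marked, original, channel):
    """Recover (alpha, beta) for one channel from D = C'(marked) - C(original)."""
    cx_m, cy_m = mass_centre(marked, channel)
    cx_o, cy_o = mass_centre(original, channel)
    r_star = color_mass(original, channel)
    sum_x2 = sum(p[0] ** 2 for p, _ in pixel_coords(original))
    sum_y2 = sum(p[1] ** 2 for p, _ in pixel_coords(original))
    alpha = (cx_m - cx_o) * r_star / sum_x2
    beta = (cy_m - cy_o) * r_star / sum_y2
    return alpha, beta
```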
In order to ensure that the viewing experience is not impaired, ω is preferably chosen so that the color component correction does not exceed a certain threshold. The inventors of the present invention suggest a threshold of 2%, or approximately 4 on a 0-255 scale. Since the dot product <p, ω> is linear, it is maximal for p of maximal length; specifically, p is maximal at the corners of the screen. Therefore, the following constraint is preferably imposed as an upper bound, ensuring that on a 0-255 scale the threshold of 2% is not exceeded:
αW/2 + βH/2 < (2/100)·255
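A minimal sketch of this check follows, under the same assumptions as the earlier snippets; the 2% threshold and the 0-255 full scale are the suggested values stated above.

```python
def omega_within_threshold(omega, W, H, threshold=0.02, full_scale=255):
    """True if the largest per-pixel correction, which occurs at a screen
    corner, stays below `threshold` of the full colour scale."""
    alpha, beta = omega
    return abs(alpha) * W / 2 + abs(beta) * H / 2 < threshold * full_scale
```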
The inventors of the present invention believe that the color-mass data embedding technique described herein is highly resistant to known attacks. Specifically:
Filtering — the present invention, in its preferred embodiment, operates at extremely low frequencies in the frequency domain of the image and video, where filtering typically has very little effect, or none at all. Therefore, the proposed WM technique cannot be detected or removed with standard low-pass filters, video color balancing tools and the like, which target noise and signals lying at high frequencies.
Resizing (stretching), rotation and shearing — since, simply stated, α and β are the coordinates of a vector overlaid on the screen, stretching or rotating a video comprising embedded data is expected to cause a linear change in the values of α and β in the encoded data. Furthermore, an encoding method can be chosen, for example and without limiting the generality of the foregoing, by selecting the group of vectors ω such that the minimal angle between any two possible vectors is much larger than the maximal rotation an attack can cause, thereby defeating resizing (stretching), rotation and shearing attacks.
Collusion attacks — a collusion attack is typically carried out by averaging several video signals comprising a WM, or by selecting individual frames from several videos comprising a WM, thereby producing a WM that combines data from all of the examined signals. In particular, frequency analysis of the combined signal typically reveals all of the injected frequencies. If, as described above, the data embedding system 20 (Fig. 1) pauses between injections of the separate bytes, so that only one original WM is present at any one time, the resulting signal preferably comprises intervals that allow the signals to be separated. Standard error correction techniques, as are known in the art, are preferably used both at injection and at detection in order to help separate the plurality of WMs.
Cropping — cropping a video comprising embedded data causes a loss of color information, thereby changing the values of α and β in the encoded data. The change in the values of α and β is proportional to the decline in perceived video quality relative to the quality of the original video.
Collusion attack: averaging — a collusion attack by averaging several video signals comprising embedded data typically produces a WM that combines data from all of the averaged original signals. The resulting α and β are the averages of all of the original α and β values. For example and without limiting the generality of the foregoing, the present invention, in its preferred embodiment, preferably avoids the resulting loss of information by having each injector turn data embedding on only for random spans of time.
Collusion attack: selection — a collusion attack by selecting different frames from different video signals comprising embedded data produces a WM comprising data from all of the originally selected signals. The resulting α and β values separately identify the source of each participant. In other words, a selection attack is useless.
Reference is now made to Figs. 7-8, which are simplified flowcharts of preferred methods of operation of the system of Fig. 1. Figs. 7-8 are believed to be self-explanatory in light of the above discussion.
It is appreciated that software components of the present invention may, if so desired, be implemented in ROM (read-only memory) form. The software components may, generally, be implemented in hardware, if desired, using conventional techniques.
It is appreciated that various features of the invention which are, for clarity, described in the contexts of separate embodiments may also be provided in combination in a single embodiment. Conversely, various features of the invention which are, for brevity, described in the context of a single embodiment may also be provided separately or in any suitable sub-combination.
It will be appreciated by persons skilled in the art that the present invention is not limited by what has been particularly shown and described hereinabove. Rather, the scope of the invention is defined only by the claims which follow.

Claims (25)

1. A method for embedding data, the method comprising:
receiving marking information;
representing the marking information as a 2-coordinate vector, the 2-coordinate vector being denoted ω, wherein the 2 coordinates are denoted α and β respectively, such that ω = (α, β);
providing a video frame to be marked, the video frame comprising a plurality of pixels, each pixel of the plurality of pixels being denoted p, wherein p = (x, y), x and y comprising coordinates of pixel p relative to the center of the video frame, the plurality of pixels being represented as a triad of color elements, the color elements being denoted R, G and B respectively; and
marking the video frame by transforming each pixel of the plurality of pixels according to:
R'(p) = R(p) + <p, ω_R>;
G'(p) = G(p) + <p, ω_G>; and
B'(p) = B(p) + <p, ω_B>,
wherein:
<p, ω_R> denotes a dot product operation on p and ω_R;
<p, ω_G> denotes a dot product operation on p and ω_G; and
<p, ω_B> denotes a dot product operation on p and ω_B,
and wherein ω_R, ω_G and ω_B denote the marking information encoded as three 2-coordinate vectors over the triad of color elements R, G and B; R(p), G(p) and B(p) are respectively the original red, green and blue components of pixel p; and R'(p), G'(p) and B'(p) are respectively the red, green and blue components of pixel p after data embedding.
2. The method according to claim 1, wherein the marking information comprises information for identifying a rendering device.
3. The method according to claim 2, wherein the information for identifying the rendering device comprises a unique device identifier.
4. The method according to claim 1, wherein the marking information comprises a copyright mark.
5. The method according to claim 1, wherein the marking information comprises access rights data.
6. The method according to claim 5, wherein the access rights data comprise playback/copying permissions.
7. The method according to any one of claims 1-6, wherein the color elements comprise Red-Green-Blue color elements.
8. The method according to any one of claims 1-6, wherein the color elements comprise chrominance/luminance color elements.
9. The method according to claim 8, wherein the chrominance/luminance color elements comprise one of:
YCbCr chrominance/luminance color elements;
YPbPr chrominance/luminance color elements;
YDbDr chrominance/luminance color elements; and
xvYCC chrominance/luminance color elements.
The method of claim 1, wherein R ' (p), G ' (p) and B ' (p) all be no more than colour and present the maximal value that each said pigment allowed in the system.
11. method as claimed in claim 10, wherein, block any R ' (p), G ' (p) and B ' (p), with guarantee any R ' (p), G ' (p) and B ' (p) all be no more than said maximal value.
12. like claim 10 or 11 described methods, wherein, said colour presents system and comprises that the R-G-B colour presents system.
13. like claim 10 or 11 described methods, wherein, said colour presents system and comprises that the chrominance/luminance colour presents system.
14. the method for claim 1, wherein R ' (p), G ' (p) and B ' (p) all be not less than colour and present the minimum value that each said pigment allowed in the system.
15. method as claimed in claim 14, wherein, block any R ' (p), G ' (p) and B ' (p), with guarantee any R ' (p), G ' (p) and B ' (p) all be not less than said minimum value.
16. like claim 14 or 15 described methods, wherein, said colour presents system and comprises that the R-G-B colour presents system.
17. like claim 14 or 15 described methods, wherein, said colour presents system and comprises that the chrominance/luminance colour presents system.
18. The method according to claim 1, wherein representing the marking information as a 2-coordinate vector comprises:
representing the marking information as a bit string;
subdividing the bit string into a plurality of bit sub-strings; and
translating each bit sub-string of the plurality of bit sub-strings into a corresponding 2-coordinate vector.
19. The method according to claim 18, wherein each bit sub-string of the plurality of bit sub-strings comprises a string of three bits.
20. The method according to claim 18, wherein each bit sub-string of the plurality of bit sub-strings comprises a string of two bits.
21. A method for detecting embedded data, the embedded data having been embedded using the method of claim 1, the method comprising:
capturing a video stream comprising embedded data;
dividing the video stream into a plurality of video frames comprised therein;
determining a color mass center, denoted C', for each color element of each individual video frame comprised in the plurality of video frames, by summing color-value coordinates of the given color element over the individual video frame;
determining a color mass center, denoted C, for each color element of a corresponding individual video frame, the corresponding individual video frame corresponding to a video frame not comprising embedded data;
subtracting C from C'; and
deriving values of a first coordinate and a second coordinate from the result of the subtraction, the first coordinate and the second coordinate comprising coordinates of a vector, the vector corresponding to a bit string, the bit string comprising information embedded in the individual video frame.
22. The method according to claim 21, wherein marking information is reconstructed as a result of the step of deriving the values of the first coordinate and the second coordinate.
23. The method according to claim 21 or claim 22, wherein a unique user ID is identified as a result of reconstructing the marking information.
24. A system for embedding data, comprising:
a marking information receiver;
a 2-coordinate vector, denoted ω, wherein the 2 coordinates are denoted α and β respectively, such that ω = (α, β), the 2-coordinate vector representing the marking information;
a video frame to be marked, the video frame comprising a plurality of pixels, each pixel of the plurality of pixels being denoted p, wherein p = (x, y), x and y comprising coordinates of pixel p relative to the center of the video frame, the plurality of pixels being represented as a triad of color elements, the color elements being denoted R, G and B respectively; and
a video frame marker, which marks the video frame by transforming each pixel of the plurality of pixels according to:
R'(p) = R(p) + <p, ω_R>;
G'(p) = G(p) + <p, ω_G>; and
B'(p) = B(p) + <p, ω_B>,
wherein:
<p, ω_R> denotes a dot product operation on p and ω_R;
<p, ω_G> denotes a dot product operation on p and ω_G; and
<p, ω_B> denotes a dot product operation on p and ω_B,
and wherein ω_R, ω_G and ω_B denote the marking information encoded as three 2-coordinate vectors over the triad of color elements R, G and B; R(p), G(p) and B(p) are respectively the original red, green and blue components of pixel p; and R'(p), G'(p) and B'(p) are respectively the red, green and blue components of pixel p after data embedding.
25. A system for detecting embedded data, the embedded data having been embedded using the system of claim 24, comprising:
a captured video stream comprising embedded data;
a video stream divider, which divides the captured video stream into a plurality of video frames comprised therein;
a first color mass locator, which determines a first color mass center, denoted C', for each color element of each individual video frame comprised in the plurality of video frames, by summing color-value coordinates of the given color element over the individual video frame;
a second color mass locator, which determines a second color mass center, denoted C, for each color element of a corresponding individual video frame, the corresponding individual video frame corresponding to a video frame not comprising embedded data;
a processor, which subtracts C from C'; and
a second processor, which derives values of a first coordinate and a second coordinate from the result of the subtraction, the first coordinate and the second coordinate comprising coordinates of a vector, the vector corresponding to a bit string, the bit string comprising information embedded in the individual video frame.
CN200880003825.1A 2007-02-05 2008-01-13 System and method for embedding and detecting data Expired - Fee Related CN101601068B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
IL181167 2007-02-05
IL181167A IL181167A0 (en) 2007-02-05 2007-02-05 System for embedding data
IL183841A IL183841A0 (en) 2007-06-11 2007-06-11 System for embedding data
IL183841 2007-06-11
PCT/IB2008/050104 WO2008096281A1 (en) 2007-02-05 2008-01-13 System for embedding data

Publications (2)

Publication Number Publication Date
CN101601068A CN101601068A (en) 2009-12-09
CN101601068B true CN101601068B (en) 2012-12-19

Family

ID=41421582

Family Applications (1)

Application Number Title Priority Date Filing Date
CN200880003825.1A Expired - Fee Related CN101601068B (en) 2007-02-05 2008-01-13 System and method for embedding and detecting data

Country Status (2)

Country Link
CN (1) CN101601068B (en)
IL (1) IL181167A0 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103098453B (en) * 2010-09-13 2016-12-21 杜比实验室特许公司 Use the data transmission of the outer color coordinates of colour gamut

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1183693A (en) * 1996-10-28 1998-06-03 国际商业机器公司 Protecting images with image watermark
US5930369A (en) * 1995-09-28 1999-07-27 Nec Research Institute, Inc. Secure spread spectrum watermarking for multimedia data
US5960081A (en) * 1997-06-05 1999-09-28 Cray Research, Inc. Embedding a digital signature in a video sequence
CN1758282A (en) * 2004-10-10 2006-04-12 北京华旗数码影像技术研究院有限责任公司 Method of using vulnerable watermark technology for digital image fidelity

Also Published As

Publication number Publication date
IL181167A0 (en) 2008-01-06
CN101601068A (en) 2009-12-09

Similar Documents

Publication Publication Date Title
US6961444B2 (en) Time and object based masking for video watermarking
US9996891B2 (en) System and method for digital watermarking
Dittmann et al. Robust MPEG video watermarking technologies
CN101273367B (en) Covert and robust mark for media identification
EP2115694B1 (en) System for embedding data
JP2003134483A (en) Method and system for extracting watermark signal in digital image sequence
CN113179407B (en) Video watermark embedding and extracting method and system based on interframe DCT coefficient correlation
US20130287369A1 (en) Frequency-Modulated Watermarking
CN101601068B (en) System and method for embedding and detecting data
Biswas et al. MPEG-2 digital video watermarking technique
KR100971221B1 (en) The adaptive watermarking method for converting digital to analog and analog to digital
Qin et al. A new JPEG image watermarking method exploiting spatial JND model
Tang et al. Improved spread transform dither modulation using luminance-based JND model
Burdescu et al. A spatial watermarking algorithm for video images
JP4944966B2 (en) How to mark a digital image with a digital watermark
US7796778B2 (en) Digital watermarking system according to background image pixel brightness value and digital watermarking method
Sulaiman et al. Fractal based fragile watermark
Fu et al. RAWIW: RAW Image Watermarking robust to ISP pipeline
El Allali et al. Object based video watermarking sheme using feature points
Bae et al. A New Mobile Watermarking Scheme Based on Display-capture
Echizen et al. Use of human visual system to improve video watermarking for immunity to rotation, scale, translation, and random distortion
Biswas et al. Compressed video watermarking technique
CN103985079A (en) Tampering detection method for digital image with invisible watermark
Bahrushin et al. A video watermarking scheme resistant to synchronization attacks
Pandya et al. Digital Video Watermarking for Educational Video Broadcasting and Monitoring Application

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1138669

Country of ref document: HK

C14 Grant of patent or utility model
GR01 Patent grant
REG Reference to a national code

Ref country code: HK

Ref legal event code: GR

Ref document number: 1138669

Country of ref document: HK

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121219

CF01 Termination of patent right due to non-payment of annual fee