EP0753969A2 - Model-assisted video coding - Google Patents

Model-assisted video coding

Info

Publication number
EP0753969A2
Authority
EP
European Patent Office
Prior art keywords
region
coding
video signal
closed curve
ellipse
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP96304900A
Other languages
German (de)
French (fr)
Inventor
Alexandros Eleftheriadis
Arnaud Eric Jacquin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
AT&T Corp
Original Assignee
AT&T Corp
AT&T IPM Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by AT&T Corp and AT&T IPM Corp
Publication of EP0753969A2

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 - Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/012 - Comfort noise or silence coding
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 - Selection of coding mode or of prediction mode
    • H04N19/107 - Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124 - Quantisation
    • H04N19/126 - Details of normalisation or weighting functions, e.g. normalisation matrices or variable uniform quantisers
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/132 - Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136 - Incoming video signal characteristics or properties
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/154 - Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/63 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/115 - Selection of the code volume for a coding unit prior to coding
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146 - Data rate or code amount at the encoder output
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146 - Data rate or code amount at the encoder output
    • H04N19/149 - Data rate or code amount at the encoder output by estimating the code amount by means of a model, e.g. mathematical model or statistical model
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146 - Data rate or code amount at the encoder output
    • H04N19/15 - Data rate or code amount at the encoder output by monitoring actual compressed data size at the memory before deciding storage at the transmission buffer
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146 - Data rate or code amount at the encoder output
    • H04N19/152 - Data rate or code amount at the encoder output by measuring the fullness of the transmission buffer
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/20 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding

Definitions

  • This disclosure relates generally to the field of signal coding, and more particularly to the detection of facial features for use in the coding of video.
  • Video generally requires relatively high bit rates, such as in the Moving Picture Experts Group (MPEG) standards, to provide high quality images, but it may be acceptable to use relatively low bit rate video for teleconferencing situations.
  • In such low bit rate video teleconferencing, the video coding may produce artifacts which are systematically present throughout coded images; for example, due to the coding of both facial and non-facial image portions at the same bit rate.
  • Because viewers tend to focus on facial features, such as by maintaining eye contact with the "eyes" of the people in the image, a common coding bit rate for both facial and non-facial image portions fails to provide sufficient coding quality of the facial portions, thus hampering the viewing of the video image.
  • In some situations, a very good rendition of facial features is paramount to intelligibility, such as in the case of hearing-impaired viewers who may rely on lip reading.
  • An apparatus is disclosed which responds to a video signal representing a succession of frames, where at least one of the frames corresponds to an image of an object.
  • The apparatus includes a processor for processing the video signal to detect at least a region of the object characterized by at least a portion of a closed curve, and to generate a plurality of parameters associated with the closed curve for use in coding the video signal.
  • The present disclosure describes a facial feature detection system and method for detecting and locating facial features in video.
  • The codec 10 may include an object locator 12 having a processor, such as a microprocessor, and memory (not shown) for operating an object detection program implementing the disclosed facial feature detection system 14 and method.
  • The object detection program may be compiled source code written in the C programming language.
  • The codec 10 also includes a coding controller 16 having a processor, such as a microprocessor, and a memory (not shown) for operating a coding control program implementing a disclosed buffer rate modulator 18 and a disclosed buffer size modulator 20.
  • The coding control program may be compiled source code written in the C++ programming language.
  • The object locator 12, including the disclosed facial feature detection system 14 and method, as well as the disclosed coding controller 16, having the disclosed buffer rate modulator 18 and buffer size modulator 20, may be implemented in software using other programming languages and/or by hardware or firmware to perform the operations described hereinbelow.
  • The illustrative embodiment of the disclosed facial feature detection system 14 and method, as well as the disclosed coding controller 16 having the disclosed buffer rate modulator 18 and buffer size modulator 20, is presented as having individual functional blocks, which may include functional blocks labelled as "processors".
  • The functions represented by these blocks may be provided through the use of either shared or dedicated hardware, including, but not limited to, hardware capable of executing software.
  • The functions of the blocks presented herein may be provided by a single shared processor or by a plurality of individual processors.
  • The use of the functional blocks with accompanying labels herein is not to be construed to refer exclusively to hardware capable of executing software.
  • Illustrative embodiments may include digital signal processor (DSP) hardware, such as the AT&T DSP16 or DSP32C, read-only memory (ROM) for storing software performing the operations discussed below, and random access memory (RAM) for storing DSP results.
  • Very large scale integration (VLSI) hardware embodiments, as well as custom VLSI circuitry in combination with a general purpose DSP circuit, may also be provided.
  • The codec 10 further includes a video coder 22 and video decoder 24, with each of the video coder 22 and video decoder 24 controlled by the coding controller 16.
  • For coding operations, the codec 10 receives an input video signal 26 and an external control signal 28, and the object locator 12 and video coder 22 process the input video signal 26, under the control of the coding controller 16 using the external control signal 28, to generate the output coded bitstream 30.
  • In an exemplary embodiment, the video coder 22 may process the input video signal 26 using a source coder 32, a video multiplex coder 34, a transmission buffer 36, and a transmission coder 38 to generate the output coded bitstream 30.
  • For decoding operations, the video codec 10 receives an input coded bitstream 40 at the video decoder 24, which processes the input coded bitstream 40 using a receiving decoder 42, a receiving buffer 44, a video multiplex decoder 46, and a source decoder 48 for generating the output video signal 50.
  • An exemplary implementation of the model-assisted source coder 32 of the video coder 22 in FIG. 1 is shown in greater detail in FIG. 2, where the source coder 32 includes components 54-70 for generating output signals such as a quantized transform coefficient signal, labelled "q"; a motion vector signal, labelled "v"; and a switching signal, labelled "f", for indicating the switching on/off of the loop filter 68.
  • The coding controller 16 uses object location signals from the object locator 12 to generate control signals such as an INTRA/INTER flag signal, labelled "p"; a flag labelled "t" to indicate a transmission state; and a quantizer indicator signal, labelled "qz". Such control signals are provided to the video multiplex coder 34 for further processing.
  • The codec 10 shown in an exemplary embodiment in FIG. 1, as well as the video coder 22 having, in an exemplary embodiment, the model-assisted source coder 32 shown in FIG. 2, operate in accordance with the CCITT Rec. H.261 standard, such as described in "Line Transmission of Non-Telephone Signals - Video Codec For Audiovisual Services at p × 64 kbit/s - Recommendation H.261", CCITT, Geneva, 1990.
  • The disclosed facial feature detection system 14 and method may then be implemented as described in greater detail below, where the object location signals from the object locator 12 provided to the coding controller 16 include facial feature detection signals from the facial feature detector 14.
  • The facial feature detection system 14 and method automatically extracts facial area location information for the region-selective coding of video teleconferencing sequences. Low-complexity methods are then performed to detect a head outline, and to identify an "eyes-nose-mouth" region from downsampled binary thresholded edge images. There are no restrictions regarding the nature and content of the head-and-shoulders sequences to code.
  • The head outline detection and the "eyes-nose-mouth" detection may operate accurately and robustly in cases of significant head rotation and/or partial occlusion by moving objects, and in cases where the person in the image has facial hair and/or wears eyeglasses.
  • The disclosed invention also includes object-selective control of a quantizer in a standard coding system, such as quantizer 58 in the H.261 compatible source coder 32 shown in FIG. 2.
  • In the exemplary embodiment, the codec 10 is a Reference Model 8 (RM8) implementation of the H.261 compliant encoder, such as described in "Description of Reference Model 8 (RM8)", CCITT SGXV WG4, Specialists Group on Coding for Visual Telephony, Doc. 525, June 1989, which is incorporated herein by reference.
  • It is to be understood that the disclosed facial feature detection system 14 and method is not limited to H.261, and may be used with different coding techniques, such as CCITT Rec. H.263 and MPEG, and different facial feature detection devices, such as neural network-based facial feature classifiers.
  • The object locator 12 incorporating the disclosed facial feature detection system 14 and method operates to control the quantizer 58 by performing buffer rate modulation and buffer size modulation.
  • By forcing a rate controller of the coding controller 16, associated with the quantizer 58, to transfer a relatively small fraction of the total available bit rate (for example, about 10-15% of the average bit rate) from the coding of the non-facial image area to the coding of the facial image area, the disclosed facial feature system 14 and method produces images with better-rendered facial features. For example, block-like artifacts in the facial area are less pronounced and eye contact is preserved.
  • The disclosed facial feature system 14 and method may provide perceptually significant improvement of video sequences coded at the rates of 64 kbps and 32 kbps, with 56 kbps and 24 kbps respectively reserved for the input (color) video signal.
  • The codec 10 may operate using a total audio-video integrated services digital network (ISDN) rate of 64 kbps, with an input digital color video signal in the YUV format, and with a coding rate of 56 kbps for the video signal, where the video signal represents "head-and-shoulders" sequences.
  • In the disclosed facial feature detection system 14 and method, face location detection and detection of "eyes-nose-mouth" regions in images are performed.
  • In the exemplary embodiment, the facial feature detection system 14 determines contours of a face, including the side or oblique views of the head when tilted with respect to the image, as ellipses.
  • The detection of "eyes-nose-mouth" regions uses the symmetry of such regions with respect to a slanted facial axis, which is inherent to a human face appearing in a two-dimensional (2D) projection for a rotated head.
  • Hereinbelow, in the exemplary embodiment, the input video signals 26 represent video data of head-and-shoulders sequences. It is to be understood that the disclosed facial feature detection system 14 and method may be used with other video encoder systems and techniques operating at other rates, and such object tracking techniques may be adapted for different applications where objects other than faces are of interest.
  • The term "face location" is defined herein to include images representing people having heads turned to their left or right, thereby appearing in a profile, as well as images of people with their back to the camera, such that "face location" encompasses the location of a head outline.
  • The face location is represented by an ellipse 72 labelled E as shown in FIG. 3, having a center 74 with coordinates (x0, y0), with the semi-major length 76 labelled A and the semi-minor length 78 labelled B, along the ellipse's major and minor axes, respectively, and having an associated "tilt" angle 80 labelled θ0.
  • Areas of the ellipse 72 at opposing ends along the major axis are where the upper and lower areas of a face are positioned.
  • The upper and lower areas in actual face outlines may have quite different curvatures, but ellipses provide relative accuracy and parametric simplicity as a model of a face outline. If the face outline information is not used to regenerate the face outline, a relative lack of model-fitting accuracy of the disclosed facial feature detection system and method may not have a significant impact on the overall performance of the coding process.
  • The parameters of Equation (1) are provided to the codec 10 as described in further detail below.
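  • The five ellipse parameters above map directly onto a small data structure. The following C sketch (hypothetical names, not code from the patent) stores (x0, y0), A, B, and θ0, and tests whether a pixel lies inside or on the ellipse by rotating the point into the ellipse's own frame and applying the canonical quadratic form:

```c
#include <math.h>

/* Hypothetical container for the five ellipse parameters of FIG. 3:
 * center (x0, y0), semi-major/semi-minor lengths A and B, tilt theta0. */
typedef struct {
    double x0, y0;   /* center coordinates            */
    double A, B;     /* semi-major / semi-minor axes  */
    double theta0;   /* tilt angle, in radians        */
} Ellipse;

/* Canonical quadratic form of the point (x, y) in the ellipse's frame:
 * values <= 1.0 mean "inside or on the ellipse".                       */
static double ellipse_value(const Ellipse *e, double x, double y)
{
    double dx = x - e->x0, dy = y - e->y0;
    double c = cos(e->theta0), s = sin(e->theta0);
    double u =  c * dx + s * dy;   /* coordinate along the major axis */
    double v = -s * dx + c * dy;   /* coordinate along the minor axis */
    return (u * u) / (e->A * e->A) + (v * v) / (e->B * e->B);
}

static int inside_ellipse(const Ellipse *e, double x, double y)
{
    return ellipse_value(e, x, y) <= 1.0;
}
```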
  • An elliptical head outline may provide a rough estimate of the face location, but the use of elliptical head outlines is adapted by the disclosed facial feature detection system 14 and method to identify a rectangular region 82 having a common center at coordinates (x0, y0) with the ellipse 72, as shown in FIG. 4, and an axis of symmetry 84 of the face outline, labelled AS, which may be parallel to the major axis of the ellipse 72.
  • The rectangular region 82 includes the eyes, nose, and mouth of the person in the encoded image.
  • Such a rectangular region 82 has an extra degree of freedom relative to using only an ellipse, for example, ellipse 72 of FIG. 3.
  • Within the rectangular region 82, a trapezoidal region labelled R in FIG. 4 is defined, which contains the window 88 labelled Wu in FIG. 4.
  • The window 88 may include the eyes and eyebrows, which are generally the two most reliably symmetric features in a human face.
  • The window 88 may be characterized by a window center 90 having coordinates (xi, yi), window width Ww, height h, and angle 92 labelled θi, where the angle 92, determining a trapezoidal shape, generally indicates the relative positions of the eyes, nose, and mouth of a person's face.
  • The facial feature detection system 14 of FIGS. 1-2 includes at least one preprocessor circuit for pre-processing the input video signal for detection of facial features by a detector.
  • The facial feature detection system 14 may include a face location preprocessor 94 operatively connected to a face location detector 96, having a coarse scanner 98, a fine scanner 100, and an ellipse fitter 102 for generating a face location signal 104.
  • The facial feature detection system 14 may further include an eyes-nose-mouth (ENM) region preprocessor 106 operatively connected to an eyes-nose-mouth region detector 108 having a search region identifier 110 and a search region scanner 112 for generating an ENM region signal 114.
  • Each of the search region identifier 110 and the search region scanner 112 may be implemented as hardware and/or software as described above in conjunction with the facial feature detector 14.
  • Each preprocessor 94, 106 may be implemented as a preprocessing circuit 116, as illustrated in FIG. 6, which may employ a temporal downsampler 118, a low pass filter 120, a decimator 122, an edge detector 124, and a thresholding circuit 126.
  • The temporal downsampler 118 may be included if the input video signal 26 has not been downsampled to, for example, a desired input frame rate of the source coder 32 of FIG. 2.
  • The temporal downsampler 118 performs temporal downsampling of the input luminance video signal from, for example, about 30 frames per second (fps) to, for example, about 7.5 fps, which becomes the frame rate of the input video signal to the video codec 10.
  • The low pass filter 120 is a separable filter for performing spatial low-pass filtering of input video frames of size 360 × 240 pixels with a cut-off frequency at π/c, where c is a decimation factor.
  • Each of the face location preprocessor 94 and the ENM region preprocessor 106 shown in FIG. 5 may be implemented to employ a common temporal downsampler and a common low pass filter. After low pass filtering, the filtered input video signal is then processed by the decimator 122 to perform decimation by a predetermined decimation factor c in both horizontal and vertical dimensions to produce low-pass images of a predetermined size.
  • Each preprocessor 94, 106 then performs edge detection on the decimated images by edge detector 124 employing Sobel operator techniques, where the Sobel operator may be represented in matrix form by horizontal and vertical operators, for example, the conventional 3 × 3 kernels Gx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]] and Gy = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]], which are used to determine the components of an image gradient.
  • A gradient magnitude image is then obtained by generating the magnitude of the gradient at each pixel using the edge detector 124.
  • Binary edge data signals are then generated using a threshold circuit 126 for performing thresholding of the gradient magnitude images.
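  • As a minimal C sketch of the edge-detection and thresholding stages of FIG. 6 (the 3 × 3 Sobel kernels above and the threshold T are conventional choices, not values prescribed by the patent), the following routine turns an already filtered and decimated luminance image into binary edge data:

```c
#include <math.h>
#include <string.h>

/* Sobel gradient magnitude followed by binary thresholding. */
void sobel_binary_edges(const unsigned char *img, unsigned char *edges,
                        int w, int h, double T)
{
    static const int GX[3][3] = { { -1, 0, 1 }, { -2, 0, 2 }, { -1, 0, 1 } };
    static const int GY[3][3] = { { -1, -2, -1 }, { 0, 0, 0 }, { 1, 2, 1 } };

    memset(edges, 0, (size_t)w * h);        /* border pixels stay zero */
    for (int y = 1; y < h - 1; y++) {
        for (int x = 1; x < w - 1; x++) {
            int gx = 0, gy = 0;
            for (int j = -1; j <= 1; j++)
                for (int i = -1; i <= 1; i++) {
                    int p = img[(y + j) * w + (x + i)];
                    gx += GX[j + 1][i + 1] * p;
                    gy += GY[j + 1][i + 1] * p;
                }
            /* gradient magnitude image, then thresholding to binary */
            double mag = sqrt((double)gx * gx + (double)gy * gy);
            edges[y * w + x] = (mag > T) ? 1 : 0;
        }
    }
}
```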
  • Each of the face location detector 96 and the ENM region detector 108 shown in FIG. 5 uses the respective binary edge data signals from the respective preprocessors 94, 106 to detect the face location and the eyes-nose-mouth region, respectively, of the image represented by the input video signal 26.
  • The face location detector 96 detects and traces the outline of a face location geometrically modeled as an ellipse, using the preprocessed and thresholded gradient magnitude images of size 45 × 30 pixels, to locate both oval shapes (i.e., "filled" shapes) as well as oval contours partially occluded by data.
  • The face location detector 96 operates by a hierarchical three-step procedure: coarse scanning by a coarse scanner 98, fine scanning by a fine scanner 100, and ellipse fitting by an ellipse fitter 102, each of which may be implemented in hardware and/or software as described above for the facial feature detector 14.
  • The face location detector 96 selects a detected ellipse in an image as a most likely face outline among multiple candidates.
  • The decomposition of the recognition and detection tasks into these three steps, along with the small input image size, provides for a low computational complexity of the disclosed facial detection system 14, and exhaustive searches of large pools of candidates may thus be avoided.
  • The coarse scanner 98 segments the input binary edge data signal into blocks of size B × B pixels; for example, of size 5 × 5 pixels. Each block is marked by the coarse scanner 98 if at least one of the pixels in the block has a non-zero value.
  • The block array is then scanned in, for example, a left-to-right, top-to-bottom fashion, searching for contiguous runs of marked blocks. For each such run, fine scanning and ellipse fitting are performed.
  • The fine scanner 100 scans the pixels in the blocks of a run, for example, in a left-to-right, top-to-bottom fashion to detect the first line that has non-zero pixel values, as opposed to the contiguous runs of blocks sought by the coarse scanner.
  • The first and last non-zero pixels of the detected line, with coordinates (XSTART, Y) and (XEND, Y), define a horizontal scanning region.
  • The coarse and fine scanning together operate as a horizontal edge-merging filter.
  • The block size B relates to the maximum allowable distance between merged edges, and also has a direct effect on the speed of the face location detection, since larger block sizes leave fewer blocks to process.
  • The coarse and fine scanning identify candidate positions for the top of the head, where the edge data corresponding to the head outline may be characterized as being generally unencumbered by data corresponding to other objects.
  • In this manner, the face location detector 96 may identify a horizontal segment which may include the top of a head in the image.
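  • A C sketch of the coarse scanning step follows (helper and array names are hypothetical): every B × B block of the binary edge image that contains at least one non-zero pixel is marked, producing the block array that is then scanned for contiguous runs of marked blocks:

```c
#define BLK 5   /* block size B; 5 x 5 follows the example in the text */

/* Mark each BLK x BLK block containing at least one edge pixel.
 * The marked[] array has (w/BLK) x (h/BLK) entries.                */
void mark_blocks(const unsigned char *edges, unsigned char *marked,
                 int w, int h)
{
    int bw = w / BLK, bh = h / BLK;
    for (int by = 0; by < bh; by++) {
        for (int bx = 0; bx < bw; bx++) {
            unsigned char hit = 0;
            for (int y = by * BLK; y < (by + 1) * BLK && !hit; y++)
                for (int x = bx * BLK; x < (bx + 1) * BLK && !hit; x++)
                    if (edges[y * w + x])
                        hit = 1;
            marked[by * bw + bx] = hit;
        }
    }
}
```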
  • The ellipse fitter 102 scans the line segment determined by (XSTART, Y) and (XEND, Y). At each point of the segment, ellipses of various sizes and aspect ratios are tested for fitness, where the top-most point of the ellipse may be located on the horizontal scanning segment. Good matches are entered as entries in a list maintained in a memory of the facial feature detection system 14 (not shown in FIG. 5). After the search is performed on the segment by the ellipse fitter 102, the face location detector 96 continues processing input binary edge data signals using the coarse scanner 98.
  • The fitness of any given ellipse to the data is determined by computing normalized weighted average intensities Ii and Ie of the binary pixel data on the ellipse contour and the ellipse border, respectively.
  • Although an ellipse contour as well as the ellipse border may be well-defined by its non-parametric form, rasterization (spatial sampling) of image data may require the mapping of a continuous elliptical curve to actual image pixels.
  • Elliptical curves for ellipse fitting performed by the disclosed facial feature detection system 14 may be discrete curves determined as described below. Let IE(i,j) be an index function for the set of points that are inside or on the ellipse E, so that IE(i,j) = 1 if the point (i,j) lies inside or on E, and IE(i,j) = 0 otherwise.
  • The parameter L determines a desired thickness of the ellipse contour and border; for example, L may be set at 1 or 2 pixels.
  • The normalized weighted average intensities Ii and Ie may then be defined as averages of the binary image data p(m,n) taken over the contour and border pixel sets, respectively; for example, Ii = Σ(m,n)∈C p(m,n) / |C| and Ie = Σ(m,n)∈B p(m,n) / |B|, where C is the set of pixels forming the ellipse contour of thickness L and B is the set of pixels forming the ellipse border of thickness L.
  • A model-fitting ratio may then be formed as Rm = (1 + Ii) / (1 + Ie), which attains its maximum value 1 + IMAX when the contour intensity is maximal (Ii = IMAX) and the border is empty (Ie = 0).
  • The ellipse fitter 102 filters out false candidates by keeping only ellipses satisfying the conditions Ii > Iimin and Ie < Iemax, where Iimin and Iemax are predetermined parameters.
  • The model-fitting ratio Rm may be more sensitive to the relative values of the parameters Ii and Ie than to the absolute values of such parameters.
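  • Reusing the Ellipse helpers sketched earlier, the fitness test can be approximated in C as below. Forming the contour and border bands by shrinking and growing the semi-axes by L pixels is one plausible reading of the index-function construction, not the patent's exact definition:

```c
/* Average the binary edge data p over a contour band (just inside E)
 * and a border band (just outside E) of thickness L, then return the
 * model-fitting ratio R_m = (1 + I_i) / (1 + I_e).                   */
double model_fit_ratio(const Ellipse *e, const unsigned char *p,
                       int w, int h, int L)
{
    Ellipse inner = *e, outer = *e;
    inner.A -= L; inner.B -= L;
    outer.A += L; outer.B += L;

    long n_in = 0, s_in = 0, n_out = 0, s_out = 0;
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            int on_e   = inside_ellipse(e, x, y);
            int in_in  = inside_ellipse(&inner, x, y);
            int in_out = inside_ellipse(&outer, x, y);
            if (on_e && !in_in)  { n_in++;  s_in  += p[y * w + x]; } /* contour */
            if (in_out && !on_e) { n_out++; s_out += p[y * w + x]; } /* border  */
        }
    double Ii = n_in  ? (double)s_in  / (double)n_in  : 0.0;
    double Ie = n_out ? (double)s_out / (double)n_out : 0.0;
    return (1.0 + Ii) / (1.0 + Ie);
}
```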
  • In this manner, the face location detector 96 may "lock on" to partial elliptical arcs to locate severely occluded faces.
  • The face location detector 96 may detect more than one ellipse with a good fit, so an elimination process may be performed to select a final candidate using confidence thresholds ΔRmin and ΔIemin. If the value of Rm for a good-fitting ellipse is higher than the Rm value for a second good-fitting ellipse by more than ΔRmin, then the first ellipse is selected. Otherwise, if the border intensity difference between the two ellipses is higher than ΔIemin, then the ellipse with the smaller value of Ie is selected. If the border intensity difference is smaller than ΔIemin, then the ellipse with the greater value of Rm is selected.
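  • These elimination rules translate into a short tie-breaking routine. In this hedged C sketch, which reuses the Ellipse type from the earlier fragments, dRmin and dIemin stand in for the confidence thresholds ΔRmin and ΔIemin:

```c
#include <math.h>

typedef struct { Ellipse e; double Rm, Ie; } Candidate;

/* Choose between two well-fitting ellipse candidates a and b. */
Candidate pick_candidate(Candidate a, Candidate b,
                         double dRmin, double dIemin)
{
    if (fabs(a.Rm - b.Rm) > dRmin)     /* clear winner on the fit ratio      */
        return (a.Rm > b.Rm) ? a : b;
    if (fabs(a.Ie - b.Ie) > dIemin)    /* otherwise prefer the emptier border */
        return (a.Ie < b.Ie) ? a : b;
    return (a.Rm > b.Rm) ? a : b;      /* otherwise fall back to R_m          */
}
```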
  • Having determined a face outline by a well-fitted ellipse, the face location detector 96 generates a face location signal from the parameters of the well-fitted ellipse, and provides the face location signal to the coding controller 16.
  • The coding controller 16 uses the face location signal to refine the quantization of the area in the image corresponding to the face location.
  • Face location may also be performed using the ENM region detector 108 to segment the elliptical region shown in FIG. 3 into a rectangular window and its complement, i.e., the remainder of the ellipse, as shown in FIG. 4.
  • The ENM region detector 108 receives the ellipse parameters of the detected face outlines from the face location detector 96, and processes the ellipse parameters such that the rectangular window is positioned to capture the region of the face corresponding to the eyes and mouth.
  • The ENM region detector 108 identifies eyes/mouth regions using the basic procedure described in F. Lavagetto et al., "Object-Oriented Scene Modeling for Interpersonal Video Communication at Very Low Bit Rate," Signal Processing: Image Communication, Vol. 6, 1994, pp. 379-395.
  • The ENM region detector 108 also provides for detection of an eyes-nose-mouth region in an input video image where the subject does not directly face the camera, where the subject has facial hair and/or wears eyeglasses, and where the subject does not have a Caucasian skin pigmentation.
  • The ENM region detector 108 exploits the typical symmetry of facial features with respect to a longitudinal axis going through the nose and across the mouth, where the axis of symmetry may be slanted with respect to the vertical axis of the image, to provide robustness in the detection of an eyes-nose-mouth region. Detection of the eyes-nose-mouth region may also be effected when the subject does not look directly at the camera, which may occur in a video teleconferencing situation.
  • The ENM region detector 108 determines a search region using the search region identifier 110, where the center (x0, y0) of the elliptical face outline is used to obtain estimates for the positioning of the ENM window.
  • The ENM window is chosen to have a fixed size Ww × h relative to the minor and major axes of the face outline.
  • The ENM region detector 108 then processes the data associated with the search region using the search region scanner 112, in which, for each candidate position (xk, yk) of the window center in the search region, a symmetry value or functional is determined with respect to the facial axis.
  • The facial axis may be rotated by discrete angle values about the center of the window.
  • For example, the slant values θk may take any of the discrete values -10°, -5°, 0°, 5°, and 10°.
  • The symmetry value may be determined as S(xk, yk, θk) = (1/A(R)) Σ(m,n)∈R am,n, where A(R) is the cardinality, i.e. the area in pixels, of the trapezoidal region R illustrated in FIG. 4, R∖Wu is the set difference of R and Wu, am,n is determined by comparing the pixel (m,n) with its mirror image across the facial axis (taking the value w for a match inside Wu, 1 for a match inside R∖Wu, and 0 for a mismatch), and w is a weighting factor greater than one.
  • The value of w is determined so that the data in Wu significantly contributes to the symmetry value of Equation (12).
  • The segmentation of the rectangular window into the regions Wu and R provides that the data corresponding roughly to the eyes, nose, and mouth are applied in the positioning of the window, and that this positioning depends on the "eye data" as a substantially symmetric region.
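  • A C sketch of the symmetry scan for one candidate window position and slant (the membership helpers in_R() and in_Wu() and the exact weighting are assumptions, not patent code): each pixel of R is reflected across the slanted facial axis through (xk, yk), matches are counted, and matches inside Wu are weighted by w > 1:

```c
#include <math.h>

/* Assumed helpers: does (x, y) fall in the trapezoidal region R, or in
 * the eye window W_u, for a window centered at (xk, yk) with slant t?  */
int in_R(int x, int y, double xk, double yk, double t);
int in_Wu(int x, int y, double xk, double yk, double t);

double symmetry_value(const unsigned char *p, int w_img, int h_img,
                      double xk, double yk, double theta_k, double wgt)
{
    double sum = 0.0, area = 0.0;
    double c = cos(theta_k), s = sin(theta_k);
    for (int y = 0; y < h_img; y++)
        for (int x = 0; x < w_img; x++) {
            if (!in_R(x, y, xk, yk, theta_k))
                continue;
            /* reflect (x, y) across the facial axis through (xk, yk) */
            double dx = x - xk, dy = y - yk;
            double u =  c * dx + s * dy;   /* along the axis  */
            double v = -s * dx + c * dy;   /* across the axis */
            int xm = (int)lround(xk + u * c + v * s);
            int ym = (int)lround(yk + u * s - v * c);
            if (xm < 0 || xm >= w_img || ym < 0 || ym >= h_img)
                continue;
            int match = (p[y * w_img + x] == p[ym * w_img + xm]);
            sum  += (in_Wu(x, y, xk, yk, theta_k) ? wgt : 1.0) * match;
            area += 1.0;
        }
    return (area > 0.0) ? sum / area : 0.0;
}
```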
  • The ENM region detector 108 also eliminates false candidates, defined as windows having a density of data points below a minimum density Dmin.
  • The ENM region detector 108 then generates an ENM region signal corresponding to the parameters of the resulting trapezoidal region R, with the ENM region signal used by the coding controller 16 to refine the quantization of the image data in the trapezoidal region R corresponding to the eyes, nose, and mouth of a face in the image.
  • The face location signal and the ENM region signal from the facial feature detector 14 are provided to the coding controller 16, which implements the CCITT Rec. H.261 standard, Reference Model 8 (RM8).
  • The Rec. H.261 standard prescribes the quantization of DCT coefficients using identical uniform quantizers with dead zones for all AC coefficients, and 8-bit uniform quantization for the DC coefficients with a step size of 8, so there is no perceptual frequency weighting.
  • The AC coefficient quantizer step size is determined as twice the value of a parameter Qp, or MQUANT as the Qp parameter is referred to in the standard, which may be indicated up to the macroblock (MB) level.
  • A rectangular array of 11 × 3 MBs defines a group of blocks (GOB).
  • The video images received and processed in the exemplary embodiment have a resolution of 360 × 240 pixels, resulting in a total of 10 GOBs per picture (frame).
  • The length of the run-lengths in zig-zag scanned DCT coefficients is increased by a "variable thresholding" technique which eliminates series of DCT coefficients with small enough values.
  • Variable thresholding is applied prior to quantization and is generally effective in improving coding efficiency, particularly at relatively low bit rates.
  • The MC/no-MC decision is based on the values of the macroblock and displaced macroblock differences, according to a predetermined curve.
  • The intra/non-intra decision is based on a comparison of the variances of the original and motion-compensated macroblocks. Predicted macroblocks in P pictures are skipped if their motion vector is zero and all of their blocks have zero components after quantization. Macroblocks are also skipped in cases of output buffer overflow.
  • Rate control is performed starting with the first picture, an I-picture, which is coded with a constant Qp of 16.
  • An output buffer is set at 50% occupancy.
  • Qp is adapted at the start of each line of MBs within a GOB, so Qp may be adapted three times within each GOB.
  • The buffer occupancy is examined after the transmission of each MB and, if overflow occurs, the next MB is skipped, which may result in a small temporary buffer overflow, and the MB that caused the overflow is transmitted.
  • Qp is updated with the buffer occupancy according to a relation of the form Qpi = f(Bi/Bmax), where Qpi is the value of Qp selected for MB i, Bi is the output buffer occupancy prior to coding MB i, and Bmax is the output buffer size.
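  • As a rough C sketch of this buffer-feedback rule (the linear mapping below is an illustrative choice for f(.), not RM8's exact formula), the quantizer parameter is derived from the fractional buffer occupancy and clipped to H.261's legal range of 1 to 31:

```c
/* Map fractional buffer occupancy to a quantizer parameter in 1..31. */
int update_qp(long Bi, long Bmax)
{
    int qp = (int)(32.0 * (double)Bi / (double)Bmax) + 1;
    if (qp < 1)  qp = 1;
    if (qp > 31) qp = 31;
    return qp;
}
```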
  • A buffer size of 6,400 × q bits may be used for a given bit rate of q × 64 kbps for the video signal only. In an exemplary embodiment, a buffer size of 6,400 bits may be employed.
  • Model-assisted coding operates to assign different "quality levels" to different regions of an image, such as regions bearing perceptual significance to a viewer.
  • Macroblocks are coded in a regular left-to-right, top-to-bottom order within each GOB, and quantizer selection is based on a current buffer occupancy level.
  • The location of an MB is used for such macroblock coding in order to allocate more bits to regions of interest while staying within a prescribed bit budget for each video image and/or avoiding buffer overflow. Accordingly, the coding may be controlled so as to allocate fewer bits to the remaining image regions.
  • The regions are not required to be convex.
  • The rectangular region encompassing the whole image is denoted by RI, and its area by A.
  • The coding of each macroblock may use β bits on the average, when the target budget rate is Br and the buffer size is Bmax.
  • The parameters β1, β2, ..., βM represent the target average number of bits per macroblock for the coding of each of the regions of interest.
  • βi > β indicates an improved quality within the region of interest Ri.
  • The region of the image that belongs to none of the regions of interest is denoted by R0, with a corresponding area A0 and average number of bits per macroblock β0. To satisfy the given average bit budget, the per-region allocations must balance: A0β0 + Σi=1..M Aiβi = Aβ.
  • The function c(i), the number of bits spent coding MB i, depends on the input video signal, as well as the current value of Qp, which in turn depends on the selection of the function f(.).
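  • A toy C computation of the bit-budget balance above: given the areas Ai and target averages βi for the M regions of interest, it solves for the background average β0 that keeps the overall frame budget A·β intact (all names are illustrative):

```c
/* Solve A0*b0 + sum_i(Ai*bi) = A*b for the background average b0. */
double background_rate(double A, double b,
                       const double *Ai, const double *bi, int M)
{
    double A0 = A, spent = 0.0;
    for (int k = 0; k < M; k++) {
        A0    -= Ai[k];          /* background area = whole image minus ROIs */
        spent += Ai[k] * bi[k];  /* bits already committed to the ROIs       */
    }
    return (A * b - spent) / A0; /* assumes A0 > 0 */
}
```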
  • Equations (18) and (19) are converted as described below to provide location-dependent, model-assisted operation, where the disclosed coding controller 16 includes a buffer rate modulator 18 to modulate the target rate so that more bits are spent for MBs that are inside regions of interest, and fewer for MBs that are not.
  • Using the face location signal and the ENM region signal from the facial feature detector 14, the buffer rate modulator 18 generates a rate modulation factor ακ(i), associated with the region index function κ(i), that is greater than 1 in facial regions of the image.
  • The buffer rate modulator 18 then implements Equation (20) to increase the coding rate of regions in the image corresponding to detected face outlines and ENM features.
  • The virtual buffer occupancy is then updated as Bi = Bi-1 + cκ(i-1)(i-1) - ακ(i)t, Equation (21), where the number of bits spent cκ(i)(i) is region-dependent.
  • By Equation (15), the total average rate remains t.
  • Equations (19) and (21) may be tracked to avoid buffer overflow or underflow.
  • a modulated, "virtual" buffer which satisfies Equation (21) may be used to drive the generation of Q p via the function f(.) of Equation (18), while an actual buffer is monitored to force MB skipping in cases of overflow.
  • Q p is typically assigned a maximum value, depending on f(.).
  • The disclosed coding controller 16 implements a buffer size modulator 20 to perform buffer size modulation.
  • Equation (18) is modified to become Qpi = f(Bi/(λκ(i)Bmax)), Equation (23), where the λi are modulation factors for each region of the image.
  • The buffer size modulator 20 implements Equation (23) so that it operates in regions of low interest with λi < 1, to indicate that the buffer occupancy is higher than in actuality, and in regions of high interest, such as face outlines and ENM regions, with λi > 1, to indicate that the buffer occupancy is lower than in actuality. Accordingly, the Qpi values are "pushed" to higher or lower values, depending on whether the position of the MB within the image coincides with facial regions.
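  • Buffer size modulation then amounts to scaling the buffer size seen by f(.), as in this sketch, which reuses update_qp() from the earlier fragment and lambda_k per the reconstruction of Equation (23) above:

```c
/* Occupancy appears higher where lambda_k < 1 (low-interest regions)
 * and lower where lambda_k > 1 (facial regions), pushing Qp up or down. */
int modulated_qp(long Bi, long Bmax, double lambda_k)
{
    return update_qp(Bi, (long)(lambda_k * (double)Bmax));
}
```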
  • When operating to encode a high coding quality region from a lower coding quality region, the buffer occupancy is low; for example, the buffer occupancy is below the modulated buffer size λ0Bmax on the average for exterior regions.
  • Ample buffer space may then be available to absorb a rapid increase in the number of bits generated while coding blocks inside a high coding quality region.
  • In typical images, series of MBs of one region alternate with those of another, and hence "relief" intervals are present in which the output buffer is allowed to drain.
  • Equation (23) may then be applied to Equation (14) to obtain the corresponding modulated relation.
  • The disclosed buffer rate modulation may force the rate control operations to spend a specified number of additional bits in regions of interest, while buffer size modulation ensures that these bits are evenly distributed among the macroblocks of each region. It is to be understood that both the disclosed buffer rate modulation and buffer size modulation techniques may be applied in general to any rate control scheme, including schemes that take into account activity indicators, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Analysis (AREA)
  • Compression Of Band Width Or Redundancy In Fax (AREA)

Abstract

An apparatus responds to a video signal representing a succession of frames, where at least one of the frames corresponds to an image of an object, to detect at least a region of the object. The apparatus includes a processor for processing the video signal to detect at least the region of the object characterized by at least a portion of a closed curve and to generate a plurality of parameters associated with the closed curve for use in coding the video signal.

Description

    BACKGROUND OF THE INVENTION
    1. Field of the Invention
  • This disclosure relates generally to the field of signal coding, and more particularly to the detection of facial features for use in the coding of video.
  • 2. Description of the Related Art
  • With the increase in computational power in personal computers (PCs), including laptops, etc., multimedia applications integrating text, sound, and video capabilities have become more available to such PCs. Video generally requires relatively high bit rates, such as in the Moving Picture Experts Group (MPEG) standards, to provide high quality images, but it may be acceptable to use relatively low bit rate video for teleconferencing situations. In such low bit rate video teleconferencing, the video coding may produce artifacts which are systematically present throughout coded images; for example, due to the coding of both facial and non-facial image portions at the same bit rate. As viewers tend to focus on facial features such as by maintaining eye contact with the "eyes" of the people in the image, a common coding bit rate for both facial and non-facial image portions fails to provide sufficient coding quality of the facial portions, thus hampering the viewing of the video image. In some situations, a very good rendition of facial features is paramount to intelligibility, such as in the case of hearing-impaired viewers who may rely on lip reading.
  • SUMMARY
  • An apparatus is disclosed which responds to a video signal representing a succession of frames, where at least one of the frames corresponds to an image of an object. The apparatus includes a processor for processing the video signal to detect at least the region of the object characterized by at least a portion of a closed curve and to generate a plurality of parameters associated with the closed curve for use in coding the video signal.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The features of the disclosed facial feature detection system and method will become more readily apparent and may be better understood by referring to the following detailed description of an illustrative embodiment of the present invention, taken in conjunction with the accompanying drawings, where:
    • FIG. 1 illustrates a block diagram of a codec;
    • FIG. 2 illustrates a block diagram of a source coder;
    • FIG. 3 illustrates an ellipse and associated parameters;
    • FIG. 4 illustrates a rectangular region and associated parameters;
    • FIG. 5 illustrates a block diagram of the disclosed facial feature detector; and
    • FIG. 6 illustrates a block diagram of a preprocessor.
    DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • Referring now in specific detail to the drawings, with like reference numerals identifying similar or identical elements, as shown in FIG. 1, the present disclosure describes a facial feature detection system and method for detecting and locating facial features in video. The disclosed facial feature detection system and method is implemented, in an exemplary embodiment, in a low bit rate video codec 10, shown in FIG. 1, based on Motion-Compensated Discrete Cosine Transform (MC DCT) techniques, such as video coding methods complying with CCITT Recommendation H.261 for video coding at rates of p × 64 kilobits per second (kbps), where p = 1, 2, ..., 30.
  • In the exemplary embodiment shown in FIG. 1, the codec 10 may include an object locator 12 having a processor, such as a microprocessor, and memory (not shown) for operating an object detection program implementing the disclosed facial feature detection system 14 and method. In an exemplary embodiment, the object detection program may be compiled source code written in the C programming language. The codec 10 also includes a coding controller 16 having a processor, such as a microprocessor, and a memory (not shown) for operating a coding control program implementing a disclosed buffer rate modulator 18 and a disclosed buffer size modulator 20. In an exemplary embodiment, the coding control program may be compiled source code written in the C++ programming language. It is to be understood that the object locator 12 including the disclosed facial feature detection system 14 and method as well as the disclosed coding controller 16 having the disclosed buffer rate modulator 18 and buffer size modulator 20 may be implemented in software using other programming languages and/or by hardware or firmware to perform the operations described hereinbelow.
  • For clarity of explanation, the illustrative embodiment of the disclosed facial feature detection system 14 and method as well as the disclosed coding controller 16 having the disclosed buffer rate modulator 18 and buffer size modulator 20 is presented as having individual functional blocks, which may include functional blocks labelled as "processors". The functions represented by these blocks may be provided through the use of either shared or dedicated hardware, including, but not limited to, hardware capable of executing software. For example, the functions of the blocks presented herein may be provided by a single shared processor or by a plurality of individual processors. Moreover, the use of the functional blocks with accompanying labels herein is not to be construed to refer exclusively to hardware capable of executing software. Illustrative embodiments may include digital signal processor (DSP) hardware, such as the AT&T DSP16 or DSP32C, read-only memory (ROM) for storing software performing the operations discussed below, and random access memory (RAM) for storing DSP results. Very large scale integration (VLSI) hardware embodiments, as well as custom VLSI circuitry in combination with a general purpose DSP circuit, may also be provided. Any and all of these embodiments may be deemed to fall within the meaning of the labels for the functional blocks as used herein.
  • The codec 10 further includes a video coder 22 and video decoder 24, with each of the video coder 22 and video decoder 24 controlled by the coding controller 16. For coding operations, the codec 10 receives an input video signal 26 and an external control signal 28, and the object locator 12 and video coder 22 process the input video signal 26, under the control of the coding controller 16 using the external control signal 28, to generate the output coded bitstream 30. In an exemplary embodiment, the video coder 22 may process the input video signal 26 using a source coder 32, a video multiplex coder 34, a transmission buffer 36, and a transmission coder 38 to generate the output coded bitstream 30.
  • For decoding operations, the video codec 10 receives an input coded bitstream 40 by the video decoder 24, which processes the input coded bitstream 40 using a receiving decoder 42, a receiving buffer 44, a video multiplex decoder 46, and a source decoder 48 for generating the output video signal 50.
  • An example of a video encoder using model-assisted coding is described in greater detail in EP-A-0684736.
  • An exemplary implementation of the model-assisted source coder 32 of the video coder 22 in FIG. 1 is shown in greater detail in FIG. 2, where the source coder 32 includes components 54-70 for generating output signals such as a quantized transform coefficient signal, labelled "q"; a motion vector signal, labelled "v"; and a switching signal, labelled "f", for indicating the switching on/off of the loop filter 68. The coding controller 16 uses object location signals from the object locator 12 to generate control signals such as an INTRA/INTER flag signal, labelled "p"; a flag labelled "t" to indicate a transmission state; and a quantizer indicator signal, labelled "qz". Such control signals are provided to the video multiplex coder 34 for further processing.
  • The codec 10 shown in an exemplary embodiment in FIG. 1, as well as the video coder 22 having, in an exemplary embodiment, the model-assisted source coder 32 shown in FIG. 2, operate in accordance with the CCITT Rec. H.261 standard, such as described in "Line Transmission of Non-Telephone Signals - Video Codec For Audiovisual Services at p × 64 kbit/s - Recommendation H.261", CCITT, Geneva, 1990. The disclosed facial feature detection system 14 and method may then be implemented as described in greater detail below, where the object location signals from the object locator 12 provided to the coding controller 16 include facial feature detection signals from the facial feature detector 14.
  • The facial feature detection system 14 and method automatically extracts facial area location information for the region-selective coding of video teleconferencing sequences. Low-complexity methods are then performed to detect a head outline, and to identify an "eyes-nose-mouth" region from downsampled binary thresholded edge images. There are no restrictions regarding the nature and content of the head-and-shoulders sequences to code. The head outline detection and the "eyes-nose-mouth" detection may operate accurately and robustly in cases of significant head rotation and/or partial occlusion by moving objects, and in cases where the person in the image has facial hair and/or wears eyeglasses.
  • The disclosed invention also includes object-selective control of a quantizer in a standard coding system such as quantizer 58 in the H.261 compatible source coder 32 shown in FIG. 2. In the exemplary embodiment, the codec 10 is a Reference Model 8 (RM8) implementation of the H.261 compliant encoder, such as described in "Description of Reference Model 8 (RM8)", CCITT SGXV WG4, Specialists Group on Coding for Visual Telephony, Doc. 525, June 1989, which is incorporated herein by reference. It is to be understood that the disclosed facial feature system 14 and method is not limited to H.261, and may be used with different coding techniques, such as CCITT Rec. H.263 and MPEG, and different facial feature detection devices, such as neural network-based facial feature classifiers.
  • The object locator 12 incorporating the disclosed facial feature detection system 14 and method operates to control the quantizer 58 by performing buffer rate modulation and buffer size modulation. By forcing a rate controller of the coding controller 16, associated with the quantizer 58, to transfer a relatively small fraction of the total available bit rate (for example, about 10-15% of the average bit rate) from the coding of the non-facial image area to the coding of the facial image area, the disclosed facial feature system 14 and method produces images with better-rendered facial features. For example, block-like artifacts in the facial area are less pronounced and eye contact is preserved. The disclosed facial feature system 14 and method may provide perceptually significant improvement of video sequences coded at the rates of 64 kbps and 32 kbps, with 56 kbps and 24 kbps respectively reserved for the input (color) video signal.
  • The codec 10 may operate using a total audio-video integrated services digital network (ISDN) rate of 64 kbps, with an input digital color video signal in the YUV format, and with a coding rate of 56 kbps for the video signal, where the video signal represents "head-and-shoulders" sequences. In the disclosed facial feature detection system 14 and method, face location detection and detection of "eyes-nose-mouth" regions in images is performed.
  • In the exemplary embodiment, the facial feature detection system 14 determines contours of a face, including side or oblique views of a head tilted with respect to the image, as ellipses. The detection of "eyes-nose-mouth" regions uses the symmetry of such regions with respect to a slanted facial axis, a symmetry which is inherent to a human face and which persists in the two-dimensional (2D) projection of a rotated head.
  • Hereinbelow, in the exemplary embodiment, the input video signals 26 represent video data of head-and-shoulders sequences. It is to be understood that the disclosed facial feature detection system 14 and method may be used with other video encoder systems and techniques operating at other rates, and such object tracking techniques may be adapted for different applications where objects other than faces are of interest.
  • The term "face location" is defined herein to include images representing people having heads turned to their left or right, thereby appearing in a profile, as well as images of people with their back to the camera, such that "face location" encompasses the location of a head outline.
  • The face location is represented by an ellipse 72 labelled E as shown in FIG. 3, having a center 74 with coordinates (x0, y0), a semi-major length 76 labelled A and a semi-minor length 78 labelled B, along the ellipse's major and minor axes, respectively, and an associated "tilt" angle 80 labelled θ0. The areas of the ellipse 72 at opposing ends of the major axis are where the upper and lower areas of a face are positioned. The upper and lower areas in actual face outlines, such as the regions of a person's hair and chin, respectively, may have quite different curvatures, but ellipses provide relative accuracy and parametric simplicity as a model of a face outline. If the face outline information is not used to regenerate the face outline, a relative lack of model-fitting accuracy of the disclosed facial feature detection system and method may not have a significant impact on the overall performance of the coding process.
  • An ellipse of arbitrary size and tilt may be represented by a quadratic, non-parametric equation in implicit form, as in Equation (1) below:

    ax² + 2bxy + cy² + 2dx + 2ey + f = 0,    b² − ac < 0     (1)

    where a negative value of the discriminant D = b² − ac ensures that Equation (1) determines an ellipse. The parameters of Equation (1) are provided to the codec 10 as described in further detail below.
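  • For illustration only, a minimal sketch of the implicit-form tests follows, assuming the sign convention that the quadratic expression of Equation (1) is non-positive inside the ellipse; the helper names are hypothetical and not part of the disclosed codec:

      def is_ellipse(a, b, c):
          # Equation (1) determines an ellipse only when the
          # discriminant D = b^2 - a*c is negative
          return b * b - a * c < 0

      def inside_or_on(a, b, c, d, e, f, x, y):
          # evaluate the implicit form at (x, y); non-positive values
          # lie inside or on the ellipse under the assumed convention
          return a*x*x + 2*b*x*y + c*y*y + 2*d*x + 2*e*y + f <= 0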
  • In some situations, an elliptical head outline may provide only a rough estimate of the face location. The disclosed facial feature detection system 14 and method therefore uses the elliptical head outline to identify a rectangular region 82 having a common center at coordinates (x0, y0) with the ellipse 72, as shown in FIG. 4, and an axis of symmetry 84 of the face outline, labelled AS, which may be parallel to the major axis of the ellipse 72. The rectangular region 82 includes the eyes, nose, and mouth of the person in the encoded image. Such a rectangular region 82 has an extra degree of freedom relative to using only an ellipse, for example, ellipse 72 of FIG. 3, to model a face outline: by allowing its sides to slant with respect to an image vertical 86 parallel to the axis of symmetry 84, as shown in FIG. 4, the region becomes a trapezoid, labelled R in FIG. 4, which ensures that the detection is robust in the case of slight head motion. In an exemplary embodiment, about the upper third of the rectangular region 82, defined herein to be a window 88 labelled Wu in FIG. 4, may include the eyes and eyebrows, which are generally the two most reliably symmetric features in a human face. The window 88 may be characterized by a window center 90 having coordinates (xi, yi), a window width Ww, a height h, and an angle 92 labelled θi, where the angle 92 determining the trapezoidal shape generally indicates the relative positions of the eyes, nose, and mouth of a person's face.
  • As illustrated in FIG. 5, the facial feature detection system 14 of FIGS. 1-2 includes at least one preprocessor circuit for pre-processing the input video signal for detection of facial features by a detector. As shown in FIG. 5, the facial feature detection system 14 may include a face location preprocessor 94 operatively connected to a face location detector 96, having a coarse scanner 98, a fine scanner 100, and an ellipse fitter 102 for generating a face location signal 104. The facial feature detection system 14 may further include an eyes-nose-mouth (ENM) region preprocessor 106 operatively connected to an eyes-nose-mouth region detector 108 having a search region identifier 110 and a search region scanner 112 for generating an ENM region signal 114. Each of the search region identifier 110 and the search region scanner 112 may be implemented as hardware and/or software as described above in conjunction with the facial feature detector 14.
  • Each preprocessor 94, 106 may be implemented as a preprocessing circuit 116, as illustrated in FIG. 6, which may employ a temporal downsampler 118, a low pass filter 120, a decimator 122, an edge detector 124, and a thresholding circuit 126. The temporal downsampler 118 may be included if the input video signal 26 has not been downsampled to, for example, a desired input frame rate of the source coder 32 of FIG. 2.
  • The temporal downsampler 118 performs temporal downsampling of the input luminance video signal from, for example, about 30 frames per second (fps) to, for example, about 7.5 fps, which serves as the frame rate of the input video signal to the video codec 10. The low pass filter 120 is a separable filter for performing spatial low-pass filtering of input video frames of size 360 × 240 pixels with a cut-off frequency at π/c, where c is a decimation factor.
  • In the exemplary embodiment, each of the face location preprocessor 94 and the ENM region preprocessor 106 shown in FIG. 5 may be implemented to employ a common temporal downsampler and a common low pass filter. After low pass filtering, the filtered input video signal is then processed by the decimator 122 to perform decimation by a predetermined decimation factor c in both horizontal and vertical dimensions to produce low-pass images of a predetermined size. In the exemplary embodiment, the face location preprocessor 94 employs a decimator performing decimation by a factor of c = 8 to generate an image of size 45 × 30 pixels. The ENM region preprocessor 106 employs a decimator performing decimation by a factor of c = 2 to generate an image of size 180 × 120 pixels in order not to lose the features of interest in the downsampling; for example, the eye, nose, and mouth edge data.
  • Each preprocessor 94, 106 then performs edge detection on the decimated images by the edge detector 124 employing Sobel operator techniques, where the Sobel operator may be represented in matrix form by horizontal and vertical operators; for example, the standard 3 × 3 Sobel kernels:

        Sx = [ −1  0  1 ]        Sy = [ −1  −2  −1 ]
             [ −2  0  2 ]             [  0   0   0 ]     (2)
             [ −1  0  1 ]             [  1   2   1 ]

    which are used to determine the components of an image gradient. A gradient magnitude image is then obtained by generating the magnitude of the gradient at each pixel using the edge detector 124. Binary edge data signals are then generated using the thresholding circuit 126, which thresholds the gradient magnitude images.
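  • For illustration only, the preprocessing chain of FIG. 6 may be sketched as below, assuming a Gaussian stand-in for the unspecified separable low-pass filter and an arbitrary threshold choice; the function name and parameters are hypothetical:

      import numpy as np
      from scipy import ndimage

      def preprocess(luma, c):
          # spatial low-pass filtering with cut-off near pi/c
          # (Gaussian used here as an assumed stand-in)
          lp = ndimage.gaussian_filter(luma.astype(float), sigma=c / 2.0)
          small = lp[::c, ::c]               # decimation by c in both dimensions
          gx = ndimage.sobel(small, axis=1)  # horizontal gradient component
          gy = ndimage.sobel(small, axis=0)  # vertical gradient component
          mag = np.hypot(gx, gy)             # gradient magnitude image
          T = 0.25 * mag.max()               # illustrative threshold value
          return (mag > T).astype(np.uint8)  # binary edge data signal

      # face-location branch: a 360 x 240 frame decimated by c = 8
      # yields a 45 x 30 binary edge image, as in the text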
  • Each of the face location detector 96 and the ENM region detector 108 shown in FIG. 5 uses the respective binary edge data signals from the respective preprocessors 94, 106 to detect the face location and the eyes-nose-mouth region, respectively, of the image represented by the input video signal 26. The face location detector 96 detects and traces the outline of a face location geometrically modeled as an ellipse, using the preprocessed and thresholded gradient magnitude images of size 45 × 30 pixels, to locate both oval shapes (i.e. "filled" shapes) as well as oval contours partially occluded by data.
  • The face location detector 96 operates using a hierarchical three-step procedure: coarse scanning by a coarse scanner 98, fine scanning by a fine scanner 100, and ellipse fitting by an ellipse fitter 102, each of which may be implemented in hardware and/or software as described above for the facial feature detector 14. The face location detector 96 then selects a detected ellipse in an image as a most likely face outline among multiple candidates. The decomposition of the recognition and detection tasks into these three steps, along with the small input image size, provides for the low computational complexity of the disclosed facial feature detection system 14, so exhaustive searches of large pools of candidates may be avoided.
  • For coarse scanning, the coarse scanner 98 segments the input binary edge data signal into blocks of size B × B pixels; for example, of size 5 × 5 pixels. Each block is marked by the coarse scanner 98 if at least one of the pixels in the block has a non-zero value. The block array is then scanned in, for example, a left-to-right, top-to-bottom fashion, searching for contiguous runs of marked blocks. For each such run, fine scanning and ellipse fitting are performed.
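  • A minimal sketch of the coarse scanning step follows, under the assumption that the binary edge image is held in a NumPy array; the function names are hypothetical:

      import numpy as np

      def mark_blocks(edges, B=5):
          # mark each B x B block containing at least one non-zero pixel
          H, W = edges.shape
          marks = np.zeros((H // B, W // B), dtype=bool)
          for by in range(H // B):
              for bx in range(W // B):
                  marks[by, bx] = edges[by*B:(by+1)*B, bx*B:(bx+1)*B].any()
          return marks

      def runs_of_marked_blocks(marks):
          # scan left-to-right, top-to-bottom for contiguous runs of
          # marked blocks; each run feeds fine scanning and ellipse fitting
          for by, row in enumerate(marks):
              bx = 0
              while bx < len(row):
                  if row[bx]:
                      start = bx
                      while bx < len(row) and row[bx]:
                          bx += 1
                      yield by, start, bx - 1
                  else:
                      bx += 1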
  • The fine scanner 100 scans the pixels in the blocks of a run, for example, in a left-to-right, top-to-bottom fashion, to detect the first line that has non-zero pixel values (individual non-zero pixels, as opposed to contiguous runs of pixels). The first and last non-zero pixels of the detected line, with coordinates (XSTART, Y) and (XEND, Y), define a horizontal scanning segment.
  • The coarse and fine scanning together act as a horizontal edge-merging filter. The block size determines the maximum allowable distance between merged edges, and also has a direct effect on the speed of the face location detection, since larger block sizes are processed faster. The coarse and fine scanning identify candidate positions for the top of the head, where the edge data corresponding to the head outline may be characterized as being generally unencumbered by data corresponding to other objects. After the fine scanning, the face location detector 96 may identify a horizontal segment which may include the top of a head in the image.
  • The ellipse fitter 102 scans the line segment determined by (XSTART, Y) and (XEND, Y). At each point of the segment, ellipses of various sizes and aspect ratios are tested for fitness, with the top-most point of each ellipse located on the horizontal scanning segment. Good matches are entered in a list maintained in a memory of the facial feature detection system 14 (not shown in FIG. 5). After the search is performed on the segment by the ellipse fitter 102, the face location detector continues processing input binary edge data signals using the coarse scanner 98.
  • In an exemplary embodiment, ellipses with "zero tilt" (θ = 0) may be fitted to the input images for computational simplicity. The fitness of any given ellipse to the data is determined by computing normalized weighted average intensities Ii and Ie of the binary pixel data on the ellipse contour and the ellipse border, respectively. Although an ellipse contour as well as the ellipse border may be well-defined by its non-parametric form, rasterization (spatial sampling) of image data may require the mapping of a continuous elliptical curve to actual image pixels.
  • Elliptical curves for ellipse fitting performed by the disclosed facial feature detection system 14 may be discrete curves determined as described below. Let IE(i,j) be an index function for the set of points that are inside or on the ellipse E, so that:

    IE(i,j) = 1 if (i,j) is inside or on the ellipse E, and IE(i,j) = 0 otherwise.     (3)
  • A given pixel may be classified as being on the ellipse contour if the pixel is inside (or on) the ellipse, and at least one of the pixels in a neighborhood of size (2L + 1) × (2L + 1) pixels about the given pixel is not, i.e.:

    (i,j) ∈ Ci if IE(i,j) = 1     (4)

    and

    IE(m,n) = 0 for at least one (m,n) with |m − i| ≤ L, |n − j| ≤ L.     (5)
  • A given pixel is classified as being on the ellipse border if the given pixel is outside the ellipse, and at least one of the pixels in a neighborhood of size (2L + 1) × (2L + 1) pixels about the given pixel is inside the ellipse, i.e.:

    (i,j) ∈ Ce if IE(i,j) = 0     (6)

    and

    IE(m,n) = 1 for at least one (m,n) with |m − i| ≤ L, |n − j| ≤ L.     (7)
  • The parameter L determines a desired thickness of the ellipse contour and border; for example, L may be set at 1 or 2 pixels. For such contour and border pixels, the normalized weighted average intensities Ii and Ie may be defined as follows:

    Ii = (1/|Ci|) Σ(m,n)∈Ci wm,n p(m,n)     (8)

    and

    Ie = (1/|Ce|) Σ(m,n)∈Ce wm,n p(m,n)     (9)

    where p(m,n) represents the binary image data, |Ci| and |Ce| represent the cardinality of Ci and Ce, respectively, and wm,n are weighting factors for enhancing the contribution of the data in an upper quarter Qu of the ellipse, as shown in FIG. 3, which may be a more reliable region for fitting the ellipse, i.e.:

    wm,n = w if (m,n) ∈ Qu, and wm,n = 1 otherwise,     (10)

    where Qu is the upper quarter of the ellipse shown in FIG. 3.
  • In the exemplary embodiment, a weight w = 1.5 may be used. Normalization with respect to the "length" of the ellipse contour and border may also be performed to accommodate ellipses of different sizes. Generally, an ellipse may fit ellipse-shaped data when the value of Ii is high, such as being close to the maximum value IMAX = (3 + w)/4, and when the value of Ie is low, such as being close to zero in value. Such a joint maximization-minimization condition may be transformed to a maximization of a single quantity by defining a model-fitting ratio Rm as:

    Rm = (1 + Ii) / (1 + Ie)     (11)

    where higher values of Rm indicate a better fitting of the candidate ellipse to the head outline in an input image. For example, perfectly ellipse-shaped data may have the best-fitting ellipse aligned with the data, corresponding to Ii = IMAX, Ie = 0, and Rm = 1 + IMAX.
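  • For illustration, Equations (3)-(11) may be sketched as below, approximating the upper quarter Qu by the top quarter of the ellipse's rows (an assumption made here only for brevity); the helper names are hypothetical:

      import numpy as np
      from scipy.ndimage import minimum_filter, maximum_filter

      def model_fitting_ratio(edges, inside, L=1, w=1.5):
          # edges: binary edge image p(m,n); inside: boolean index
          # function I_E of Equation (3) for the candidate ellipse
          nb_min = minimum_filter(inside.astype(np.uint8), size=2*L + 1)
          nb_max = maximum_filter(inside.astype(np.uint8), size=2*L + 1)
          Ci = inside & (nb_min == 0)     # contour pixels, Eqs. (4)-(5)
          Ce = (~inside) & (nb_max == 1)  # border pixels, Eqs. (6)-(7)

          ys = np.indices(inside.shape)[0]
          y_in = ys[inside]
          y_top = y_in.min() + (y_in.max() - y_in.min()) // 4
          wts = np.where(ys <= y_top, w, 1.0)               # Eq. (10)

          Ii = (wts * edges)[Ci].sum() / max(Ci.sum(), 1)   # Eq. (8)
          Ie = (wts * edges)[Ce].sum() / max(Ce.sum(), 1)   # Eq. (9)
          return (1.0 + Ii) / (1.0 + Ie)                    # Eq. (11)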
  • The ellipse fitter 102 filters out false candidates by retaining only ellipses satisfying the conditions Ii > Iimin and Ie < Iemax, where Iimin and Iemax are predetermined parameters. The model-fitting ratio Rm may be more sensitive to the relative values of the parameters Ii and Ie than to their absolute values.
  • In some video images, only an arc of an ellipse may be distinguishable due to partial occlusion as well as motion in the area surrounding the face, including the shoulders. Using the above thresholds and the ratio Rm, the face location detector 96 may "lock on" to such arcs to locate severely occluded faces.
  • The face location detector 96 may detect more than one ellipse with a good fit, so an elimination process may be performed to select a final candidate using confidence thresholds ΔRmin and ΔIemin. If the value of Rm for a good-fitting ellipse is higher than the Rm value for a second good-fitting ellipse by more than ΔRmin, then the first ellipse is selected. Otherwise, if the border intensity difference between the two ellipses is higher than ΔIemin, then the ellipse with the smaller value of Ie is selected. If the border intensity difference is smaller than ΔIemin, then the ellipse with the greater value of Rm is selected.
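  • The two-candidate elimination rule may be sketched as follows, with each candidate represented as an (Rm, Ie) pair; the threshold names mirror ΔRmin and ΔIemin:

      def select_final(c1, c2, dRmin, dIemin):
          (Rm1, Ie1), (Rm2, Ie2) = c1, c2
          if abs(Rm1 - Rm2) > dRmin:        # clear winner on the fit ratio
              return c1 if Rm1 > Rm2 else c2
          if abs(Ie1 - Ie2) > dIemin:       # fall back on border intensity
              return c1 if Ie1 < Ie2 else c2
          return c1 if Rm1 >= Rm2 else c2   # otherwise the greater Rm wins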
  • Having determined a face outline by a well-fitted ellipse, the face location detector 96 generates a face location signal from the parameters of the well-fitted ellipse, and provides the face location signal to the coding controller 16. The coding controller 16 uses the face location signal to refine the quantization of the area in the image corresponding to the face location.
  • In addition to locating a face outline using the face location detector 96, face location may also be performed using the ENM region detector 108 to segment the elliptical region shown in FIG. 3 into a rectangular window and its complement, i.e. the remainder of the ellipse, as shown in FIG. 4. The ENM region detector 108 receives the ellipse parameters of the detected face outlines from the face location detector 96, and processes the ellipse parameters such that the rectangular window is positioned to capture the region of the face corresponding to the eyes and mouth. The ENM region detector 108 identifies eyes/mouth regions using the basic procedure described in F. Lavagetto et al., "Object-Oriented Scene Modeling for Interpersonal Video Communication at Very Low Bit Rate," SIGNAL PROCESSING: IMAGE COMMUNICATION, Vol. 6, 1994, pp. 379-395.
  • The ENM region detector 108 also provides for detection of an eyes-nose-mouth region in an input video image where the subject does not directly face the camera, where the subject has facial hair and/or wears eyeglasses, and where the subject does not have a Caucasian skin pigmentation. The ENM region detector 108 exploits the typical symmetry of facial features with respect to a longitudinal axis passing through the nose and across the mouth, where the axis of symmetry may be slanted with respect to the vertical axis of the image, to provide robustness in the detection of an eyes-nose-mouth region. Detection of the eyes-nose-mouth region may thus be effected even when the subject does not look directly at the camera, which may occur in a video teleconferencing situation.
  • The ENM region detector 108 determines a search region using the search region identifier 110, where the center (x0, y0) of the elliptical face outline is used to obtain estimates for the positioning of the ENM window. The search region for the center of the ENM window may be a square region of size S × S pixels, where S = 12 in the exemplary embodiments. As shown in FIG. 4, the ENM window is chosen to have a fixed size Ww × h relative to the minor and major axes of the face outline.
  • The ENM region detector 108 then processes the data associated with the search region using the search region scanner 112, in which, for each candidate position (xk, yk) of the window center in the search region, a symmetry value or functional is determined with respect to the facial axis. The facial axis may be rotated by discrete angle values about the center of the window. In an exemplary embodiment, the slant values θk may be any of the discrete values -10°, -5°, 0°, 5°, and 10°. With S(m,n) denoting the point which is symmetric to (m,n) with respect to an axis of symmetry Bs((xk,yk),θk), the symmetry value is determined as follows:

    C((xk,yk),θk) = (1/A(R)) Σ(m,n)∈R am,n p(m,n) p(S(m,n))     (12)

    where A(R) is the cardinality, i.e. the area in pixels, of the trapezoidal region R illustrated in FIG. 4, R\Wu is the set difference of R and Wu, and am,n is determined by:

    am,n = w if (m,n) ∈ Wu, and am,n = 1 if (m,n) ∈ R\Wu,     (13)

    and w is a weighting factor greater than one. The value of w is determined so that the data in Wu significantly contributes to the symmetry value of Equation (12). The segmentation of the rectangular window into the regions Wu and R\Wu provides that the data corresponding roughly to the eyes, nose, and mouth are applied in the positioning of the window, and that this positioning depends on the "eye data" as a substantially symmetric region.
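  • A hedged sketch of the symmetry scan of Equations (12)-(13) follows; the agreement form p(m,n)·p(S(m,n)) matches the reconstruction above, and the masks, rounding, and w value are illustrative assumptions:

      import numpy as np

      def symmetry_value(edges, center, theta_deg, region_mask, wu_mask, w=2.0):
          # candidate axis through `center`, slanted theta degrees from vertical
          x0, y0 = center
          th = np.radians(theta_deg)
          d = np.array([np.sin(th), np.cos(th)])   # unit axis direction (x, y)
          ys, xs = np.nonzero(region_mask)         # pixels of the region R
          total = 0.0
          for m, n in zip(ys, xs):
              v = np.array([n - x0, m - y0])       # offset from the center
              r = 2 * v.dot(d) * d - v             # reflect across the axis
              sn, sm = int(round(x0 + r[0])), int(round(y0 + r[1]))
              if 0 <= sm < edges.shape[0] and 0 <= sn < edges.shape[1]:
                  a = w if wu_mask[m, n] else 1.0  # Eq. (13) weighting
                  total += a * edges[m, n] * edges[sm, sn]
          return total / max(len(ys), 1)           # Eq. (12), normalized by A(R)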
  • The ENM region detector 108 also eliminates false candidates defined as windows having a density of data points below a minimum density Dmin. The ENM region detector 108 then generates an ENM region signal corresponding to the parameters of the resulting trapezoidal region R, with the ENM region signal used by the coding controller 16 to refine the quantization of the image data in the trapezoidal region R in the images corresponding to the eyes, nose, and mouth of a face in the image.
  • The face location signal and the ENM region signal from the facial feature detector 14 are provided to the coding controller 16, which implements the CCITT Rec. H.261 standard, Reference Model 8 (RM8). The Rec. H.261 standard prescribes the quantization of DCT coefficients using identical uniform quantizers with dead zones for all AC coefficients, and 8-bit uniform quantization with a step size of 8 for the DC coefficients, so there is no perceptual frequency weighting. The AC coefficient quantizer step size is determined as twice the value of a parameter Qp (or MQUANT, as the Qp parameter is referred to in the standard), which may be indicated down to the macroblock (MB) level. Throughout this disclosure, the term "MB" is an abbreviation for macroblock. A rectangular array of 11 × 3 MBs defines a group of blocks (GOB). The video images received and processed in the exemplary embodiment have a resolution of 360 × 240 pixels, resulting in a total of 10 GOBs per picture (frame).
  • Generally, under RM8, the run-lengths in zig-zag scanned DCT coefficients are increased by a "variable thresholding" technique which eliminates series of DCT coefficients with small enough values. Variable thresholding is applied prior to quantization and is generally effective in improving coding efficiency, particularly at relatively low bit rates. An MC/no-MC decision is based on the values of the macroblock and displaced macroblock differences, according to a predetermined curve. Similarly, the intra/non-intra decision is based on a comparison of the variances of the original and motion-compensated macroblocks. Predicted macroblocks in P pictures are skipped if their motion vector is zero and all of their blocks have zero coefficients after quantization. Macroblocks are also skipped in cases of output buffer overflow.
  • Rate control is performed starting with the first picture, an I-picture, which is coded with a constant Qp of 16. The output buffer is set at 50% occupancy. For the remaining pictures, Qp is adapted at the start of each line of MBs within a GOB, so Qp may be adapted three times within each GOB. The buffer occupancy is examined after the transmission of each MB and, if overflow occurs, the next MB is skipped, which may result in a small temporary buffer overflow, and the MB that caused the overflow is transmitted. Qp is updated with the buffer occupancy according to the relation:

    Qpi = min(⌊32 Bi / Bmax⌋ + 1, 31)     (14)

    where Qpi is the value of Qp selected for MB i, Bi is the output buffer occupancy prior to coding MB i, and Bmax is the output buffer size. A buffer size of 6,400 × q bits may be used for a given bit rate of q × 64 kbps for the video signal only. In an exemplary embodiment, a buffer size of 6400 bits may be employed.
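  • Under the Equation (14) form reconstructed above (the exact RM8 constants are an assumption), the Qp update may be sketched as:

      def qp_from_buffer(B_i, B_max):
          # Qp grows linearly with buffer occupancy and is clipped
          # to the valid H.261 range 1..31
          return min(32 * B_i // B_max + 1, 31)

      # e.g. at 64 kbps video-only (q = 1): B_max = 6400 bits, and a
      # half-full buffer gives qp_from_buffer(3200, 6400) == 17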
  • Model-assisted coding operates to assign different "quality levels" to different regions of an image, such as regions bearing perceptual significance to a viewer. For low bit rate coding, such as the coding used in person-to-person communication applications where the RM8 specification is used, macroblocks are coded in a regular left-to-right, top-to-bottom order within each GOB, and quantizer selection is based on the current buffer occupancy level. The location of an MB is used in such macroblock coding in order to allocate more bits to regions of interest while staying within a prescribed bit budget for each video image and/or avoiding buffer overflow. Accordingly, the coding may be controlled so as to allocate fewer bits to the remaining image regions. The coding operates on M regions of interest R1, R2, ..., RM in an image, with corresponding areas A1, A2, ..., AM, where the regions may be non-overlapping, i.e. Ri ∩ Rj = ∅ when i ≠ j. The regions are not required to be convex. The rectangular region encompassing the whole image is denoted by RI, and its area by A. In an exemplary embodiment, the coding of each macroblock may use β bits on average, when the target bit budget is Br and the buffer size is Bmax.
  • The parameters β1, β2, ..., βM represent the target average number of bits per macroblock for the coding of each of the regions of interest. Generally, βi > β indicates an improved quality within the region of interest Ri. The region of the image that belongs to none of the regions of interest is denoted by R0, with a corresponding area A0 and average number of bits per macroblock β0. To satisfy the given average bit budget, then:

    Σi=0..M βi Ai = β A     (15)

  • For given parameters β1, β2, ..., βM, Equation (15) yields:

    β0 = (β A − Σi=1..M βi Ai) / A0     (16)

    which determines an equivalent average quality for the image region that is exterior to all objects, as determined by the desired average coding quality of the regions and their sizes. Equation (16) may be expressed in terms of relative average qualities γi = βi/β for i = 0, ..., M, according to:

    γ0 = (A − Σi=1..M γi Ai) / A0     (17)

    where γ0 < 1 if γi > 1 for all i > 0.
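  • Equation (17) reduces to a one-line computation; for example, under assumed region data:

      def gamma_exterior(A, regions):
          # A: total image area in MBs; regions: (gamma_i, A_i) pairs
          A0 = A - sum(Ai for _, Ai in regions)
          return (A - sum(g * Ai for g, Ai in regions)) / A0

      # a face region covering 20% of the MBs coded at gamma = 1.6 leaves
      # gamma_exterior(100, [(1.6, 20)]) == 0.85 for the background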
  • For a generic rate control operation of an encoder operating according to:

    Qpi = f(Bi)     (18)

    which generalizes Equation (14) above, the function f(.) may depend on the input video signal. The output buffer operation may be described by:

    Bi = Bi−1 + c(i−1) − t     (19)

    where Bi is the buffer occupancy prior to coding MB i, t is the average target rate (in bits per MB), and c(i) is the number of bits spent to code the i-th MB and its immediately preceding overhead information; for example, headers. The function c(i) depends on the input video signal, as well as on the current value of Qp, which in turn depends on the selection of the function f(.).
  • Equations (18) and (19) are converted as described below to provide location-dependent, model-assisted operation, where the disclosed coding controller 16 includes a buffer rate modulator 18 to modulate the target rate so that more bits are spent for MBs that are inside regions of interest, and fewer for MBs that are not. The rate t in Equation (19) now becomes location-dependent, and is given by:

    ti = γζ(i) t     (20)

    where the region index function ζ(i) associates the position of MB i with the region to which MB i belongs, and a macroblock is considered to belong to a region if at least one of its pixels is inside that particular region. Accordingly, using the face location signal and the ENM region signal from the facial feature detector 14, the buffer rate modulator 18 generates the parameter γ to be greater than 1 in the facial regions of the image associated with the region index function. The buffer rate modulator 18 then implements Equation (20) to increase the coding rate of regions in the image corresponding to detected face outlines and ENM features.
  • The buffer operation may now be described by:

    Bi = Bi−1 + cζ(i−1)(i−1) − γζ(i) t     (21)

    where the number of bits spent, cζ(i)(i), is region-dependent. Assuming stationary behavior of the buffer in region k, and performing an expectation operation on both sides of Equation (21), the average rate for region k is:

    c̄k = γk t     (22)
  • If the values of γi satisfy the budget constraint given by Equation (15), then the total average rate is t. For a system operating with a regular, un-modulated output buffer emptied at the constant rate t, both Equations (19) and (21) may be tracked to avoid buffer overflow or underflow. A modulated, "virtual" buffer which satisfies Equation (21) may be used to drive the generation of Qp via the function f(.) of Equation (18), while an actual buffer is monitored to force MB skipping in cases of overflow. When the virtual buffer overflows, no action need be taken, and Qp is typically assigned a maximum value, depending on f(.).
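  • For illustration, the virtual-buffer scheme of Equations (18)-(21) may be sketched as below, with the Qp law of Equation (14) inlined; the argument names and the elided skipping logic are assumptions:

      def modulated_rate_control(costs, regions, gammas, t, B_max):
          # costs: bits spent per MB; regions: zeta(i) per MB;
          # gammas: per-region relative qualities; t: average rate per MB
          B_virtual = B_actual = B_max // 2       # start at 50% occupancy
          qps = []
          for c, k in zip(costs, regions):
              Bv = min(max(B_virtual, 0), B_max)  # clamp virtual occupancy
              qps.append(int(min(32 * Bv // B_max + 1, 31)))  # Eq. (18)
              B_virtual += c - gammas[k] * t      # Eq. (21): modulated drain
              B_actual += c - t                   # Eq. (19): actual drain
              # if B_actual exceeds B_max, the encoder skips the next MB
              # (the skipping itself is elided in this sketch)
          return qps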
  • For MB scanning such as the scanning techniques employed by Rec. H.261, continuous runs of MBs of any one region may contain as few as 1 or 2 MBs, resulting in an asymmetry in the overall bit distribution, where the left-most MBs of a region may have relatively high Qp values compared to their right-most counterparts.
  • The disclosed coding controller 16 implements a buffer size modulator 20 to perform buffer size modulation. Equation (18) is modified to be:

    Qpi = f(Bi / µζ(i))     (23)
    where µi are modulation factors for each region of the image, so that the effective buffer size for region i is scaled by µi. The buffer size modulator 20 implements Equation (23) so that, in regions of low interest, where γi < 1 and µi < 1, the buffer occupancy appears higher than it is in actuality, and in regions of high interest, such as face outlines and ENM regions, where γi > 1 and µi > 1, the buffer occupancy appears lower than in actuality. Accordingly, the Qpi values are "pushed" to higher or lower values, depending on whether the position of the MB within the image coincides with facial regions. In particular, when the coding moves from a lower coding quality region into a high coding quality region, the buffer occupancy is low; for example, the buffer occupancy is less than γ0Bmax on the average for exterior regions. Generally, ample buffer space may then be available to absorb a rapid increase in the number of bits generated while coding blocks inside a high coding quality region. Furthermore, due to the MB scanning pattern, series of MBs of one region alternate with those of another, and hence "relief" intervals are present in which the output buffer is allowed to drain.
  • Equation (23) may then be applied to Equation (14) to obtain:

    Qpi = min(⌊32 Bi / (µζ(i) Bmax)⌋ + 1, 31)     (24)
  • RM8 updates Qp at the start of each line of MBs in each GOB, whereas the buffer size modulator 20 has Qp updated for each macroblock that is inside a region with γi > 1. Accordingly, the disclosed buffer rate modulation may force the rate control operations to spend a specified number of additional bits in regions of interest, while buffer size modulation ensures that these bits are evenly distributed over the macroblocks of each region. It is to be understood that both the disclosed buffer rate modulation and buffer size modulation techniques may be applied in general to any rate control scheme, including schemes that take into account activity indicators, etc.
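  • Buffer size modulation then amounts to scaling the effective buffer size by the per-region factor µ, as sketched below using the Equation (24) form reconstructed above:

      def qp_size_modulated(B_i, B_max, mu_k):
          # mu_k > 1 in high-interest regions lowers the apparent
          # occupancy, yielding finer quantization there
          return min(int(32 * B_i / (mu_k * B_max)) + 1, 31)

      # with a half-full 6400-bit buffer: background (mu = 0.8) -> Qp 21,
      # facial region (mu = 1.25) -> Qp 13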
  • While the disclosed facial feature system and method have been particularly shown and described with reference to the preferred embodiments, it is understood by those skilled in the art that various modifications in form and detail may be made therein without departing from the scope of the invention. Accordingly, modifications such as those suggested above, but not limited thereto, are to be considered within the scope of the invention.

Claims (27)

  1. An apparatus for coding a video signal representing a succession of frames, at least one of the frames corresponding to an image of an object, the apparatus comprising:
     a processor for processing the video signal to detect at least the region of the object characterized by at least a portion of a closed curve and to generate a plurality of parameters associated with the closed curve for use in coding the video signal.
  2. The apparatus of claim 1 wherein the processor detects at least the region of the object characterized by at least a portion of an ellipse as the closed curve.
  3. The apparatus of claim 2 wherein the processor detects at least the region of a head outline as the object characterized by at least a portion of an ellipse as the closed curve substantially fitting the head outline.
  4. The apparatus of claim 1 wherein the processor detects at least the region of the object characterized by at least a portion of a rectangle as the closed curve.
  5. The apparatus of claim 4 wherein the processor detects at least the eye region of a head as the object characterized by at least the portion of the rectangle as the closed curve having an axis of symmetry substantially parallel to an axis of symmetry of the eye region.
  6. The apparatus of claim 1 wherein the processor detects at least the region of the object characterized by at least a portion of a trapezoid as the closed curve.
  7. The apparatus of claim 6 wherein the processor detects at least the region of the object of an eyes-nose-mouth region of a head as the object characterized by at least a portion of the trapezoid as the closed curve substantially fitting the eyes-nose-mouth region.
  8. The apparatus of claim 1 wherein the processor further includes:
    a preprocessor for preprocessing the video signal to generate an edge data signal corresponding to an edge of the region of the object; and
    an object detector for processing the edge data signal to generate the plurality of parameters.
  9. The apparatus of claim 1 further including:
       a coding controller, responsive to an object detection signal relating to the parameters, for performing buffer size modulation to adjust a quantizer step size used in the coding of the video signal to increase a buffer size for controlling the coding of the detected region of the object.
  10. The apparatus of claim 1 further including:
       a coding controller, responsive to an object detection signal relating to the parameters, for performing buffer rate modulation to adjust a quantizer step size used in the coding of the video signal to increase a rate of coding for controlling the coding of the detected region of the object.
  11. A coding controller responsive to an object detection signal for controlling the coding of a video signal representing a succession of frames, at least one of the frames corresponding to an image of an object, the object detection signal indicating a detected region of the object, the coding controller comprising:
       a processor for performing buffer size modulation and responsive to the object detection signal for adjusting a quantizer step size used in the coding of the video signal to increase a buffer size for coding of the detected region of the object.
  12. The coding controller of claim 11 wherein the processor performs the buffer size modulation to adjust the quantizer step size for coding in accordance with the CCITT Rec. H.261 standard.
  13. A coding controller responsive to an object detection signal for controlling the coding of a video signal representing a succession of frames, at least one of the frames corresponding to an image of an object, the object detection signal indicating a detected region of the object, the coding controller comprising:
       a processor for performing buffer rate modulation and responsive to the object detection signal for adjusting a quantizer step size used in the coding of the video signal to increase a rate of coding of the detected region of the object.
  14. The coding controller of claim 13 wherein the processor performs the rate modulation to adjust the quantizer step size for coding in accordance with the CCITT Rec. H.261 standard.
  15. A method, responsive to a video signal representing a succession of frames, at least one of the frames corresponding to an image of an object, for coding at least a region of an object, the method comprising the steps of:
    detecting at least the region of the object characterized by at least a portion of a closed curve;
    generating a plurality of parameters associated with the closed curve; and
    coding the video signal using the plurality of parameters.
  16. The method of claim 15 wherein the step of detecting includes the step of:
       detecting at least the region of the object characterized by at least a portion of an ellipse as the closed curve.
  17. The method of claim 16 wherein the step of detecting includes the steps of:
    detecting at least the region of a head outline as the object; and
    substantially fitting an ellipse to the head outline to characterize at least a portion of an ellipse as the closed curve.
  18. The method of claim 15 wherein the step of detecting includes the step of:
       detecting at least the region of the object characterized by at least a portion of a rectangle as the closed curve.
  19. The method of claim 18 wherein the step of detecting includes the steps of:
    detecting at least the eye region of a head as the object; and
    determining at least the portion of the rectangle as the closed curve having an axis of symmetry substantially parallel to an axis of symmetry of the eye region.
  20. The method of claim 15 wherein the step of detecting includes the step of:
       detecting at least the region of the object characterized by at least a portion of a trapezoid as the closed curve.
  21. The method of claim 20 wherein the step of detecting includes the steps of:
    detecting at least the eyes-nose-mouth region of a head as the object; and
    substantially fitting at least a portion of the trapezoid to characterize the eyes-nose-mouth region as the closed curve.
  22. The method of claim 15 wherein the step of generating the plurality of parameters includes the step of:
    preprocessing the video signal to generate an edge data signal corresponding to an edge of the region of the object; and
    wherein the step of detecting includes the step of processing the edge data signal to detect at least the region of the object characterized by at least a portion of a closed curve.
  23. The method of claim 15 further including the step of:
    adjusting a quantizer step size in response to the object detection signal; and
    coding the video signal using the adjusted quantizer step size.
  24. The method of claim 23 further including the step of:
       increasing a buffer size for coding of the detected region of the object.
  25. The method of claim 23 further including the step of:
       increasing a rate of coding of the detected region of the object.
  26. A method, responsive to an object detection signal, for controlling the coding of a video signal representing a succession of frames, at least one of the frames corresponding to an image of an object, the object detection signal indicating a detected region of the object, the method comprising the steps of:
    adjusting a quantizer step size in response to the object detection signal; and
    coding the video signal using the adjusted quantizer step size.
  27. The method of claim 26 further including the step of:
       increasing a buffer size for coding of the detected region of the object.
EP96304900A 1995-07-10 1996-07-03 Model-assisted video coding Withdrawn EP0753969A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/500,672 US5852669A (en) 1994-04-06 1995-07-10 Automatic face and facial feature location detection for low bit rate model-assisted H.261 compatible coding of video
US500672 1995-07-10

Publications (1)

Publication Number Publication Date
EP0753969A2 true EP0753969A2 (en) 1997-01-15

Family

ID=23990439

Family Applications (1)

Application Number Title Priority Date Filing Date
EP96304900A Withdrawn EP0753969A2 (en) 1995-07-10 1996-07-03 Model-assisted video coding

Country Status (4)

Country Link
US (1) US5852669A (en)
EP (1) EP0753969A2 (en)
JP (1) JPH0935069A (en)
CA (1) CA2177866A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0827345A1 (en) * 1995-03-17 1998-03-04 Mitsubishi Denki Kabushiki Kaisha Image encoding system
EP1453321A2 (en) * 2003-02-10 2004-09-01 Samsung Electronics Co., Ltd. Video encoder capable of differentially encoding image of speaker during visual call and method for compressing video signal
US6792144B1 (en) 2000-03-03 2004-09-14 Koninklijke Philips Electronics N.V. System and method for locating an object in an image using models
EP2990993A1 (en) * 2014-08-25 2016-03-02 Renesas Electronics Corporation Image communication apparatus, image transmission apparatus, and image reception apparatus

Families Citing this family (137)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3263807B2 (en) * 1996-09-09 2002-03-11 ソニー株式会社 Image encoding apparatus and image encoding method
ID21557A (en) * 1996-11-28 1999-06-24 Thomson Multimedia Sa PROCESS FOR CODING WITH REGIONAL INFORMATION
ES2266396T3 (en) 1997-02-14 2007-03-01 The Trustees Of Columbia University In The City Of New York AUDIO TERMINAL - VISUAL BASED ON OBJECTS AND FLOW STRUCTURE OF CORRESPONDING BITS.
US6047078A (en) * 1997-10-03 2000-04-04 Digital Equipment Corporation Method for extracting a three-dimensional model using appearance-based constrained structure from motion
SG116400A1 (en) 1997-10-24 2005-11-28 Matsushita Electric Ind Co Ltd A method for computational graceful degradation inan audiovisual compression system.
US6035055A (en) * 1997-11-03 2000-03-07 Hewlett-Packard Company Digital image management system in a distributed data access network system
US6108437A (en) * 1997-11-14 2000-08-22 Seiko Epson Corporation Face recognition apparatus, method, system and computer readable medium thereof
US6061400A (en) * 1997-11-20 2000-05-09 Hitachi America Ltd. Methods and apparatus for detecting scene conditions likely to cause prediction errors in reduced resolution video decoders and for using the detected information
US6173069B1 (en) * 1998-01-09 2001-01-09 Sharp Laboratories Of America, Inc. Method for adapting quantization in video coding using face detection and visual eccentricity weighting
KR100595924B1 (en) 1998-01-26 2006-07-05 웨인 웨스터만 Method and apparatus for integrating manual input
US7199836B1 (en) * 1998-02-13 2007-04-03 The Trustees Of Columbia University In The City Of New York Object-based audio-visual terminal and bitstream structure
US6236749B1 (en) 1998-03-23 2001-05-22 Matsushita Electronics Corporation Image recognition method
CA2273188A1 (en) * 1999-05-28 2000-11-28 Interquest Inc. Method and apparatus for encoding/decoding image data
US6593956B1 (en) * 1998-05-15 2003-07-15 Polycom, Inc. Locating an audio source
RU2154918C1 (en) * 1998-08-01 2000-08-20 Самсунг Электроникс Ко., Лтд. Method and device for loop filtration of image data
US6081554A (en) * 1998-10-02 2000-06-27 The Trustees Of Columbia University In The City Of New York Method to control the generated bit rate in MPEG-4 shape coding
US6526097B1 (en) 1999-02-03 2003-02-25 Sarnoff Corporation Frame-level rate control for plug-in video codecs
JP2000259814A (en) * 1999-03-11 2000-09-22 Toshiba Corp Image processor and method therefor
US20040028130A1 (en) * 1999-05-24 2004-02-12 May Anthony Richard Video encoder
US6792135B1 (en) * 1999-10-29 2004-09-14 Microsoft Corporation System and method for face detection through geometric distribution of a non-intensity image property
EP1968012A3 (en) * 1999-11-16 2008-12-03 FUJIFILM Corporation Image processing apparatus, image processing method and recording medium
JP2001285787A (en) 2000-03-31 2001-10-12 Nec Corp Video recording method, system therefor and recording medium therefor
US7035803B1 (en) 2000-11-03 2006-04-25 At&T Corp. Method for sending multi-media messages using customizable background images
US6990452B1 (en) 2000-11-03 2006-01-24 At&T Corp. Method for sending multi-media messages using emoticons
US7203648B1 (en) 2000-11-03 2007-04-10 At&T Corp. Method for sending multi-media messages with customized audio
US6976082B1 (en) 2000-11-03 2005-12-13 At&T Corp. System and method for receiving multi-media messages
US7091976B1 (en) 2000-11-03 2006-08-15 At&T Corp. System and method of customizing animated entities for use in a multi-media communication application
US20080040227A1 (en) 2000-11-03 2008-02-14 At&T Corp. System and method of marketing using a multi-media communication system
US6963839B1 (en) 2000-11-03 2005-11-08 At&T Corp. System and method of controlling sound in a multi-media communication application
US7155036B2 (en) * 2000-12-04 2006-12-26 Sony Corporation Face detection under varying rotation
WO2002073517A1 (en) * 2001-03-13 2002-09-19 Voxar Ag Image processing devices and methods
JP3773805B2 (en) * 2001-04-27 2006-05-10 Necエレクトロニクス株式会社 Data stream generation method and apparatus therefor
TW505892B (en) * 2001-05-25 2002-10-11 Ind Tech Res Inst System and method for promptly tracking multiple faces
EP2804325B1 (en) 2001-08-31 2017-10-04 Panasonic Intellectual Property Corporation of America Picture decoding method and decoding device
CA2359269A1 (en) * 2001-10-17 2003-04-17 Biodentity Systems Corporation Face imaging system for recordal and automated identity confirmation
US7671861B1 (en) * 2001-11-02 2010-03-02 At&T Intellectual Property Ii, L.P. Apparatus and method of customizing animated entities for use in a multi-media communication application
KR100643454B1 (en) * 2001-11-17 2006-11-10 엘지전자 주식회사 Method for video data transmission control
US7545949B2 (en) * 2004-06-09 2009-06-09 Cognex Technology And Investment Corporation Method for setting parameters of a vision detector using production line information
US9092841B2 (en) * 2004-06-09 2015-07-28 Cognex Technology And Investment Llc Method and apparatus for visual detection and inspection of objects
US20040052418A1 (en) * 2002-04-05 2004-03-18 Bruno Delean Method and apparatus for probabilistic image analysis
US7369685B2 (en) * 2002-04-05 2008-05-06 Identix Corporation Vision-based operating method and system
KR20020075960A (en) * 2002-05-20 2002-10-09 주식회사 코난테크놀로지 Method for detecting face region using neural network
JP2003346149A (en) * 2002-05-24 2003-12-05 Omron Corp Face collating device and bioinformation collating device
US7227976B1 (en) * 2002-07-08 2007-06-05 Videomining Corporation Method and system for real-time facial image enhancement
US6925122B2 (en) * 2002-07-25 2005-08-02 National Research Council Method for video-based nose location tracking and hands-free computer input devices based thereon
CN1282943C (en) * 2002-12-30 2006-11-01 佳能株式会社 Image processing method and device
US7565030B2 (en) 2003-06-26 2009-07-21 Fotonation Vision Limited Detecting orientation of digital images using face detection information
US7792970B2 (en) 2005-06-17 2010-09-07 Fotonation Vision Limited Method for establishing a paired connection between media devices
US8330831B2 (en) 2003-08-05 2012-12-11 DigitalOptics Corporation Europe Limited Method of gathering visual meta data using a reference image
US7616233B2 (en) 2003-06-26 2009-11-10 Fotonation Vision Limited Perfecting of digital image capture parameters within acquisition devices using face detection
US8896725B2 (en) 2007-06-21 2014-11-25 Fotonation Limited Image capture device with contemporaneous reference image capture mechanism
US8593542B2 (en) 2005-12-27 2013-11-26 DigitalOptics Corporation Europe Limited Foreground/background separation using reference images
US8494286B2 (en) 2008-02-05 2013-07-23 DigitalOptics Corporation Europe Limited Face detection in mid-shot digital images
US8989453B2 (en) 2003-06-26 2015-03-24 Fotonation Limited Digital image processing using face detection information
US7471846B2 (en) 2003-06-26 2008-12-30 Fotonation Vision Limited Perfecting the effect of flash within an image acquisition devices using face detection
US8498452B2 (en) 2003-06-26 2013-07-30 DigitalOptics Corporation Europe Limited Digital image processing using face detection information
US9692964B2 (en) 2003-06-26 2017-06-27 Fotonation Limited Modification of post-viewing parameters for digital images using image region or feature information
US8682097B2 (en) 2006-02-14 2014-03-25 DigitalOptics Corporation Europe Limited Digital image enhancement with reference images
US7574016B2 (en) 2003-06-26 2009-08-11 Fotonation Vision Limited Digital image processing using face detection information
US8948468B2 (en) 2003-06-26 2015-02-03 Fotonation Limited Modification of viewing parameters for digital images using face detection information
US7844076B2 (en) 2003-06-26 2010-11-30 Fotonation Vision Limited Digital image processing using face detection and skin tone information
US7440593B1 (en) 2003-06-26 2008-10-21 Fotonation Vision Limited Method of improving orientation and color balance of digital images using face detection information
US7269292B2 (en) * 2003-06-26 2007-09-11 Fotonation Vision Limited Digital image adjustable compression and resolution using face detection information
US8155397B2 (en) 2007-09-26 2012-04-10 DigitalOptics Corporation Europe Limited Face tracking in a camera processor
US7362368B2 (en) * 2003-06-26 2008-04-22 Fotonation Vision Limited Perfecting the optics within a digital image acquisition device using face detection
US7315630B2 (en) * 2003-06-26 2008-01-01 Fotonation Vision Limited Perfecting of digital image rendering parameters within rendering devices using face detection
US9129381B2 (en) 2003-06-26 2015-09-08 Fotonation Limited Modification of post-viewing parameters for digital images using image region or feature information
US7620218B2 (en) 2006-08-11 2009-11-17 Fotonation Ireland Limited Real-time face tracking with reference images
US7003140B2 (en) * 2003-11-13 2006-02-21 Iq Biometrix System and method of searching for image data in a storage medium
JP4085959B2 (en) * 2003-11-14 2008-05-14 コニカミノルタホールディングス株式会社 Object detection device, object detection method, and recording medium
JP2005196519A (en) * 2004-01-08 2005-07-21 Sony Corp Image processor and image processing method, recording medium, and program
US8243986B2 (en) * 2004-06-09 2012-08-14 Cognex Technology And Investment Corporation Method and apparatus for automatic visual event detection
US8127247B2 (en) 2004-06-09 2012-02-28 Cognex Corporation Human-machine-interface and method for manipulating data in a machine vision system
US8891852B2 (en) 2004-06-09 2014-11-18 Cognex Technology And Investment Corporation Method and apparatus for configuring and testing a machine vision detector
US20050276445A1 (en) * 2004-06-09 2005-12-15 Silver William M Method and apparatus for automatic visual detection, recording, and retrieval of events
US8265354B2 (en) * 2004-08-24 2012-09-11 Siemens Medical Solutions Usa, Inc. Feature-based composing for 3D MR angiography images
US8320641B2 (en) 2004-10-28 2012-11-27 DigitalOptics Corporation Europe Limited Method and apparatus for red-eye detection using preview or other reference images
US9292187B2 (en) 2004-11-12 2016-03-22 Cognex Corporation System, method and graphical user interface for displaying and controlling vision system operating parameters
US7636449B2 (en) 2004-11-12 2009-12-22 Cognex Technology And Investment Corporation System and method for assigning analysis parameters to vision detector using a graphical interface
US7720315B2 (en) 2004-11-12 2010-05-18 Cognex Technology And Investment Corporation System and method for displaying and using non-numeric graphic elements to control and monitor a vision system
US7315631B1 (en) 2006-08-11 2008-01-01 Fotonation Vision Limited Real-time face tracking in a digital image acquisition device
US8503800B2 (en) 2007-03-05 2013-08-06 DigitalOptics Corporation Europe Limited Illumination detection using classifier chains
WO2006087789A1 (en) * 2005-02-17 2006-08-24 Fujitsu Limited Image processing method, image processing system, image processing device, and computer program
US8948461B1 (en) * 2005-04-29 2015-02-03 Hewlett-Packard Development Company, L.P. Method and system for estimating the three dimensional position of an object in a three dimensional physical space
US8208758B2 (en) 2005-10-05 2012-06-26 Qualcomm Incorporated Video sensor-based automatic region-of-interest detection
US8019170B2 (en) * 2005-10-05 2011-09-13 Qualcomm, Incorporated Video frame motion-based automatic region-of-interest detection
US8150155B2 (en) 2006-02-07 2012-04-03 Qualcomm Incorporated Multi-mode region-of-interest video object segmentation
US8265392B2 (en) * 2006-02-07 2012-09-11 Qualcomm Incorporated Inter-mode region-of-interest video object segmentation
US8265349B2 (en) * 2006-02-07 2012-09-11 Qualcomm Incorporated Intra-mode region-of-interest video object segmentation
EP2033142B1 (en) 2006-06-12 2011-01-26 Tessera Technologies Ireland Limited Advances in extending the aam techniques from grayscale to color images
US7916897B2 (en) 2006-08-11 2011-03-29 Tessera Technologies Ireland Limited Face tracking for controlling imaging parameters
US7403643B2 (en) 2006-08-11 2008-07-22 Fotonation Vision Limited Real-time face tracking in a digital image acquisition device
JP4228320B2 (en) * 2006-09-11 2009-02-25 ソニー株式会社 Image processing apparatus and method, and program
AU2007221976B2 (en) * 2006-10-19 2009-12-24 Polycom, Inc. Ultrasonic camera tracking system and associated methods
US7855718B2 (en) 2007-01-03 2010-12-21 Apple Inc. Multi-touch input discrimination
US8130203B2 (en) 2007-01-03 2012-03-06 Apple Inc. Multi-touch input discrimination
US8269727B2 (en) 2007-01-03 2012-09-18 Apple Inc. Irregular input identification
US8055067B2 (en) 2007-01-18 2011-11-08 DigitalOptics Corporation Europe Limited Color segmentation
EP2115662B1 (en) 2007-02-28 2010-06-23 Fotonation Vision Limited Separating directional lighting variability in statistical face modelling based on texture space decomposition
JP4970557B2 (en) 2007-03-05 2012-07-11 デジタルオプティックス・コーポレイション・ヨーロッパ・リミテッド Face search and detection in digital image capture device
US8135181B2 (en) * 2007-03-26 2012-03-13 The Hong Kong Polytechnic University Method of multi-modal biometric recognition using hand-shape and palmprint
US7916971B2 (en) 2007-05-24 2011-03-29 Tessera Technologies Ireland Limited Image processing method and apparatus
US8237099B2 (en) * 2007-06-15 2012-08-07 Cognex Corporation Method and system for optoelectronic detection and location of objects
NO327899B1 (en) * 2007-07-13 2009-10-19 Tandberg Telecom As Procedure and system for automatic camera control
CN101375791A (en) * 2007-08-31 2009-03-04 佛山普立华科技有限公司 System and method for monitoring sleeping condition of baby
US8103085B1 (en) 2007-09-25 2012-01-24 Cognex Corporation System and method for detecting flaws in objects using machine vision
US8027521B1 (en) 2008-03-25 2011-09-27 Videomining Corporation Method and system for robust human gender recognition using facial feature localization
US7855737B2 (en) 2008-03-26 2010-12-21 Fotonation Ireland Limited Method of making a digital camera image of a scene including the camera user
US8538171B2 (en) * 2008-03-28 2013-09-17 Honeywell International Inc. Method and system for object detection in images utilizing adaptive scanning
US8462996B2 (en) * 2008-05-19 2013-06-11 Videomining Corporation Method and system for measuring human response to visual stimulus based on changes in facial expression
WO2010012448A2 (en) 2008-07-30 2010-02-04 Fotonation Ireland Limited Automatic face and skin beautification using face detection
US8570359B2 (en) * 2008-08-04 2013-10-29 Microsoft Corporation Video region of interest features
US8379917B2 (en) 2009-10-02 2013-02-19 DigitalOptics Corporation Europe Limited Face recognition performance using additional image features
TWI405143B (en) * 2009-11-27 2013-08-11 Altek Corp Object image correcting apparatus and method of identification
EP3582177A1 (en) * 2010-04-02 2019-12-18 Nokia Technologies Oy Methods and apparatuses for face detection
US8326001B2 (en) 2010-06-29 2012-12-04 Apple Inc. Low threshold face recognition
US8824747B2 (en) 2010-06-29 2014-09-02 Apple Inc. Skin-tone filtering
US8676574B2 (en) 2010-11-10 2014-03-18 Sony Computer Entertainment Inc. Method for tone/intonation recognition using auditory attention cues
JP5291735B2 (en) * 2011-02-24 2013-09-18 ソネットエンタテインメント株式会社 Caricature creation apparatus, arrangement information generation apparatus, arrangement information generation method, and program
US8756061B2 (en) 2011-04-01 2014-06-17 Sony Computer Entertainment Inc. Speech syllable/vowel/phone boundary detection using auditory attention cues
US20120259638A1 (en) * 2011-04-08 2012-10-11 Sony Computer Entertainment Inc. Apparatus and method for determining relevance of input speech
US20130159431A1 (en) * 2011-12-19 2013-06-20 Jeffrey B. Berry Logo message
US9651499B2 (en) 2011-12-20 2017-05-16 Cognex Corporation Configurable image trigger for a vision system and method for using the same
US10205953B2 (en) 2012-01-26 2019-02-12 Apple Inc. Object detection informed encoding
US9202138B2 (en) 2012-10-04 2015-12-01 Adobe Systems Incorporated Adjusting a contour by a shape model
US9158963B2 (en) 2012-10-04 2015-10-13 Adobe Systems Incorporated Fitting contours to features
US9020822B2 (en) 2012-10-19 2015-04-28 Sony Computer Entertainment Inc. Emotion recognition using auditory attention cues extracted from users voice
US9031293B2 (en) 2012-10-19 2015-05-12 Sony Computer Entertainment Inc. Multi-modal sensor based emotion recognition and emotional interface
US9672811B2 (en) 2012-11-29 2017-06-06 Sony Interactive Entertainment Inc. Combining auditory attention cues with phoneme posterior scores for phone/vowel/syllable boundary detection
US20140198838A1 (en) * 2013-01-15 2014-07-17 Nathan R. Andrysco Techniques for managing video streaming
US8957940B2 (en) 2013-03-11 2015-02-17 Cisco Technology, Inc. Utilizing a smart camera system for immersive telepresence
US9836118B2 (en) 2015-06-16 2017-12-05 Wilson Steele Method and system for analyzing a movement of a person
KR102543444B1 (en) 2017-08-29 2023-06-13 삼성전자주식회사 Video encoding apparatus
US11166080B2 (en) 2017-12-21 2021-11-02 Facebook, Inc. Systems and methods for presenting content
US11763595B2 (en) * 2020-08-27 2023-09-19 Sensormatic Electronics, LLC Method and system for identifying, tracking, and collecting data on a person of interest
US11803237B2 (en) * 2020-11-14 2023-10-31 Facense Ltd. Controlling an eye tracking camera according to eye movement velocity
US20220417533A1 (en) * 2021-06-23 2022-12-29 Synaptics Incorporated Image processing system

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR900003778B1 (en) * 1984-09-29 1990-05-31 Victor Company of Japan, Ltd. Video signal digital processing circuit and method
US4691233A (en) * 1986-09-30 1987-09-01 Rca Corporation Rate buffer control of difference signal decimation and interpolation for adaptive differential pulse code modulator
US4700226A (en) * 1986-10-17 1987-10-13 Rca Corporation Rate buffer control of predicted signal decimation and interpolation for adaptive differential pulse code modulator
AU612543B2 (en) * 1989-05-11 1991-07-11 Panasonic Corporation Moving image signal encoding apparatus and decoding apparatus
US5381183A (en) * 1992-07-03 1995-01-10 Mitsubishi Denki Kabushiki Kaisha Motion-adaptive scanning-line conversion circuit
US5327228A (en) * 1992-07-30 1994-07-05 North American Philips Corporation System for improving the quality of television pictures using rule based dynamic control
JPH0678320A (en) * 1992-08-25 1994-03-18 Matsushita Electric Ind Co Ltd Color adjustment device
US5367629A (en) * 1992-12-18 1994-11-22 Sharevision Technology, Inc. Digital video compression system utilizing vector adaptive transform
US5566208A (en) * 1994-03-17 1996-10-15 Philips Electronics North America Corp. Encoder buffer having an effective size which varies automatically with the channel bit-rate
US5512939A (en) * 1994-04-06 1996-04-30 At&T Corp. Low bit rate audio-visual communication system having integrated perceptual speech and video coding

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0827345A1 (en) * 1995-03-17 1998-03-04 Mitsubishi Denki Kabushiki Kaisha Image encoding system
US5926574A (en) * 1995-03-17 1999-07-20 Mitsubishi Denki Kabushiki Kaisha Image encoding system
US6792144B1 (en) 2000-03-03 2004-09-14 Koninklijke Philips Electronics N.V. System and method for locating an object in an image using models
EP1453321A2 (en) * 2003-02-10 2004-09-01 Samsung Electronics Co., Ltd. Video encoder capable of differentially encoding image of speaker during visual call and method for compressing video signal
EP1453321A3 (en) * 2003-02-10 2006-12-06 Samsung Electronics Co., Ltd. Video encoder capable of differentially encoding image of speaker during visual call and method for compressing video signal
EP2990993A1 (en) * 2014-08-25 2016-03-02 Renesas Electronics Corporation Image communication apparatus, image transmission apparatus, and image reception apparatus

Also Published As

Publication number Publication date
JPH0935069A (en) 1997-02-07
CA2177866A1 (en) 1997-01-11
US5852669A (en) 1998-12-22

Similar Documents

Publication Publication Date Title
US5852669A (en) Automatic face and facial feature location detection for low bit rate model-assisted H.261 compatible coding of video
EP0863671B1 (en) Object-oriented adaptive prefilter for low bit-rate video systems
US6343141B1 (en) Skin area detection for video image systems
US5832115A (en) Ternary image templates for improved semantic compression
US7181050B1 (en) Method for adapting quantization in video coding using face detection and visual eccentricity weighting
Eleftheriadis et al. Automatic face location detection and tracking for model-assisted coding of video teleconferencing sequences at low bit-rates
Eleftheriadis et al. Automatic face location detection for model-assisted rate control in H.261-compatible coding of video
EP0720385B1 (en) Video encoder with motion area extraction
US8295350B2 (en) Image coding apparatus with segment classification and segmentation-type motion prediction circuit
EP2405382B1 (en) Region-of-interest tracking method and device for wavelet-based video coding
US6983079B2 (en) Reducing blocking and ringing artifacts in low-bit-rate coding
JPH09214963A (en) Method for coding image signal and encoder
Hartung et al. Object-oriented H.263 compatible video coding platform for conferencing applications
Eleftheriadis et al. Model-assisted coding of video teleconferencing sequences at low bit rates
EP0684736A2 (en) Model-assisted coding of video sequences at low bit rates
Jacquin et al. Content-adaptive postfiltering for very low bit rate video
Lin et al. A low-complexity face-assisted coding scheme for low bit-rate video telephony
Eleftheriadis et al. in H.261-compatible coding of video
Ishikawa et al. Very low bit-rate video coding based on a method of facial area specification
Lee et al. Motion-adaptive model-assisted compatible coding with spatiotemporal scalability
KR100298184B1 (en) Eye image process method in video coder
EP1367833A2 (en) Method and apparatus for coding and decoding image data
Plompen et al. An Image Knowledge Based Video Codec For Low Bitrates.
KR100310863B1 (en) Moving image motion estimation method using eye-image tracking in video coder
JPH0998418A (en) Method and system for encoding and decoding picture

Legal Events

Date Code Title Description

PUAI Public reference made under Article 153(3) EPC to a published international application that has entered the European phase
Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states
Kind code of ref document: A2
Designated state(s): DE FR GB IT

STAA Information on the status of an EP patent application or granted EP patent
Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn
Withdrawal date: 1997-11-12