WO2010141767A1 - System and method for improving the quality of halftone video using an adaptive threshold - Google Patents

Info

Publication number
WO2010141767A1
Authority
WO
WIPO (PCT)
Prior art keywords
video frame
data
frame
halftone
pixel
Prior art date
Application number
PCT/US2010/037315
Other languages
French (fr)
Inventor
Hamood-Ur Rehman
Original Assignee
Qualcomm Mems Technologies, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qualcomm Mems Technologies, Inc. filed Critical Qualcomm Mems Technologies, Inc.
Priority to US12/794,648 priority Critical patent/US8330770B2/en
Publication of WO2010141767A1 publication Critical patent/WO2010141767A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Definitions

  • Figures 1A and 1B are diagrams illustrating one embodiment of a method of processing halftone video frames to reduce halftone video artifacts.
  • The halftone frames include a sequence of video frames such as a frame X and the next frame in sequence, X+1.
  • The halftone frames are generated by halftoning each frame of the continuous-tone video independently.
  • In one embodiment, the method receives a halftone video stream. In another embodiment, the method receives a continuous-tone video stream and generates the halftone video stream by halftoning each frame of the continuous-tone video independently.
  • Each frame includes a set of pixels. Each pixel is referred to by its spatial coordinates (x,y) within the frame, where x and y are the horizontal and vertical coordinates as shown. In the exemplary embodiment, the pixel 12 of the frame X may be referred to as the pixel (1,1). The pixel 14 is at the corresponding location in the frame X+1. In other words, the pixel 14 in the frame X+1 and the pixel 12 in the frame X have the same spatial coordinates in their respective frames.
  • Each pixel may take one of two values, representing a bright or dark state when rendered on a display.
  • The pixels are drawn as a dark box or a white box depending on their pixel values.
  • The pixel 12 has a value representing the dark state while the pixel 14 has a value representing the bright state.
  • FIG. 1A shows the output video frames X and X+1 before a trial change is made to the pixel 14 in the frame X+1.
  • Figure 1B shows the output video frames X and X+1 after a trial change is made to the pixel 14 in the frame X+1.
  • A trial change is made to the pixel 14 in the frame X+1 by copying the value of the pixel 12 in the frame X into the pixel 14 in the frame X+1.
  • Figure 1B shows that the pixel 14 of the frame X+1 has the same value as the pixel 12 of the frame X.
  • Such a trial change can reduce video artifacts such as flickering in the output video stream by improving consistency between neighboring frames.
  • A check is then run to ensure that the difference between a perceptual error of the originally generated halftone data for the output video frame X+1 and a perceptual error of the output video frame X+1 after the change, i.e., the frame X+1 shown in Figure 1B, satisfies a criterion. If the criterion is met, the change to the frame X+1 is kept. Otherwise, the pixel 14 of the frame X+1 is restored to its previous value.
  • The check is run by comparing the change in a perceptual error of the halftone video frame X+1 due to the trial change against an adaptive threshold.
  • The trial change is kept if the change in the perceptual error does not exceed the adaptive threshold.
  • All pixels in the frame X+1 are checked in a particular order to see if the value of the corresponding pixel in the frame X should be copied to that pixel, following the same process as described above with regard to the pixel 14. Also, all frames in the output video stream are subject to the same process as described here with regard to the frame X+1.
  • In one embodiment, data from a frame X of the output video stream is selectively copied into the next frame X+1 of the output video stream.
  • In another embodiment, data from a frame X of the output video stream may be selectively copied into the frame X-1 of the output video stream, i.e., the frame immediately before the frame X.
  • In other words, the value of a pixel (1,1) of a frame X of the output video stream is selectively copied into the pixel of the same spatial coordinates in a neighboring frame of the output video stream.
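The per-pixel trial-and-check procedure described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the function name is invented, and the perceptual-error callable is left abstract so any HVS-model error can be plugged in.

```python
import numpy as np

def try_copy_pixel(prev_out, cur_out, cur_cont, i, j, threshold, perceptual_error):
    """Trial-copy pixel (i, j) from the previous output frame into the
    current one; keep the change only if the perceptual error of the
    current frame grows by no more than `threshold`, otherwise revert."""
    if cur_out[i, j] == prev_out[i, j]:
        return False                       # values already agree; nothing to copy
    error_before = perceptual_error(cur_out, cur_cont)
    saved = cur_out[i, j]
    cur_out[i, j] = prev_out[i, j]         # the trial change
    delta = perceptual_error(cur_out, cur_cont) - error_before
    if delta <= threshold:
        return True                        # keep the copy
    cur_out[i, j] = saved                  # restore the previous value
    return False
```

For experimentation, a plain sum-of-squared-differences such as `lambda h, c: float(((c - h) ** 2).sum())` can stand in for the HVS-model perceptual error.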
  • Figures 2A-2B are diagrams illustrating a different embodiment of a method of processing halftone video frames to reduce halftone video artifacts. This embodiment is similar to the embodiment shown in Figure 1, except where noted otherwise below.
  • Figure 2A shows the output video frames X and X+1 before a trial change is made to the pixel 18 in the frame X+1.
  • Figure 2B shows the output video frames X and X+1 after a trial change is made to the pixel 18 in the frame X+1.
  • In this embodiment, the value of the pixel 18 of the frame X+1 of the output video stream is swapped with the value of a neighboring pixel, e.g., the pixel 16 in the same frame as shown.
  • The trial change thus changes the value of the pixel 18 as well as that of the pixel 16.
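The Figure 2 variant, which swaps two neighboring pixels within the same frame instead of copying across frames, can be sketched in the same accept-or-revert style. As before, the function name and abstract error callable are illustrative assumptions.

```python
import numpy as np

def try_swap_pixels(frame, cont, p, q, threshold, perceptual_error):
    """Trial-swap the values of two neighboring pixels p and q within the
    same output frame; keep the swap only if the frame's perceptual error
    grows by no more than `threshold`."""
    (i1, j1), (i2, j2) = p, q
    if frame[i1, j1] == frame[i2, j2]:
        return False                       # swapping equal values changes nothing
    error_before = perceptual_error(frame, cont)
    frame[i1, j1], frame[i2, j2] = frame[i2, j2], frame[i1, j1]
    if perceptual_error(frame, cont) - error_before <= threshold:
        return True                        # keep the swap
    frame[i1, j1], frame[i2, j2] = frame[i2, j2], frame[i1, j1]  # undo
    return False
```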
  • FIG. 3 is a flowchart illustrating one embodiment of a method of processing halftone video frames to reduce halftone video artifacts. Depending on the embodiment, certain steps of the method may be removed, merged together, or rearranged in order.
  • the method 30 starts at a block 32, wherein halftone data is generated for a first video frame of a continuous tone video stream.
  • the halftone data may be generated by halftoning the first video frame independently.
  • the method then moves to a block 34, wherein halftone data is generated for a second video frame of the continuous tone video stream.
  • the halftone data may be generated by halftoning the second video frame independently.
  • the first and the second video frames are next to each other in the continuous tone video stream.
  • Next, at a block 36, the method selectively, based on an adaptive threshold, includes the halftone data for the first video frame in the halftone data for the second video frame to reduce at least one visual artifact.
  • In one embodiment, the method selectively copies a pixel of the halftone data for the first video frame into the corresponding pixel of the halftone data for the second video frame if a criterion is met.
  • The selective copying is made if the change in a perceptual error of the second halftone video frame due to the selective copying does not exceed an adaptive threshold.
  • In other words, the method selectively copies the halftone data for the first video frame into the halftone data for the second video frame, wherein the selective copying is based upon a comparison between an adaptive threshold and the change, resulting due to this copying of the data, in the human visual system model based perceptual error.
  • In another embodiment, the method may receive the continuous tone video stream and the halftone video stream as input. In that case, the blocks 32 and 34, which generate the halftone data, may be removed.
  • FIG. 4 is a flowchart illustrating one embodiment of a method of processing halftone video frames to reduce halftone video artifacts. Depending on the embodiment, certain steps of the method may be removed, merged together, or rearranged in order. In the exemplary embodiment, the steps below may be performed by a processor which may be any suitable general purpose single- or multi-chip microprocessor, or any suitable special purpose microprocessor such as a digital signal processor, microcontroller, or a programmable gate array.
  • The method 40 starts at a block 42, wherein a continuous tone video stream "c" and a halftone video stream "h" are received.
  • The continuous tone video stream c includes a sequence of video frames.
  • The halftone video stream h is the halftone data of the continuous tone video stream c. In one embodiment, the halftone video stream h is produced by halftoning each frame in the continuous tone video stream c independently.
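Per-frame independent halftoning can be sketched as below. Plain thresholding is used purely as a stand-in for whatever per-frame halftoning method (e.g., error diffusion) an implementation would actually apply; the function name is illustrative.

```python
import numpy as np

def halftone_independently(cont_frames, threshold=0.5):
    """Produce the halftone stream h by halftoning each continuous-tone
    frame (values in [0, 1]) independently, here by simple thresholding."""
    return [(frame >= threshold).astype(np.uint8) for frame in cont_frames]
```

Because each frame is quantized with no knowledge of its neighbors, two nearly identical continuous-tone frames can yield different dot patterns, which is the source of the flicker the method then removes.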
  • The method then moves to a block 44, wherein an output video stream "o" is generated. It is noted that though the video stream o is called an output video stream, this is for convenience of description. The video stream o is not sent out for display or further processing until the completion of the method 40.
  • Initially, the output video stream is a duplicate of the halftone video stream h.
  • A variable "k" is originally set to 1. The variable k is used to indicate which frame in the output video stream is currently under process.
  • At a decision block 46, the variable k is compared with the number of frames in the halftone video stream h. If k is not less than the number of frames in the halftone video stream h, then all frames have been processed.
  • In that case, the method moves to a block 48, in which the method stops and the output video stream o is provided for further image processing or provided to a display for rendering.
  • If the variable k is less than the number of frames in the halftone video stream h, the method moves to a block 52, in which k is increased by 1.
  • A variable "m" is originally set to 1.
  • The variable m is used to indicate how many times a pixel was copied in one round in which each pixel of the frame k is checked, i.e., in blocks 56-92.
  • The variable m is used as an indicator of whether the current frame k has converged to a solution, such that the method may move to the next frame.
  • At a decision block 54, it is determined whether m equals 0. If m equals 0, it is determined that no pixel was copied in the round in which each pixel of the frame k was checked. Since no pixel was copied, the method moves to the block 46 to process the next frame in the output video stream.
  • If m does not equal 0, the method moves to a block 56.
  • A variable "i" is originally set to 0.
  • The variable i is used to indicate the row coordinate of the pixel currently under process.
  • The variable m is also reset to 0.
  • A new variable "l" is introduced with its value initially set to 0. The variable l is used to track the number of visits to, i.e., tests of, the pixels of the current frame k.
  • The variable i is compared with the number of rows in the halftone video frame h_k. If the variable i is not less than the number of rows in the halftone video frame h_k, then all rows in this frame have been processed, and the method moves to the decision block 54.
  • If it is determined that the variable i is less than the number of rows in the halftone video frame h_k, the method moves to a block 62, wherein i is increased by 1 so the method starts processing pixels in the next row.
  • A variable "j" is originally set to 0. The variable j is used to indicate the column coordinate of the pixel currently under process.
  • The variable j is compared with the number of columns in the halftone video frame h_k. If the variable j is not less than the number of columns in the halftone video frame h_k, then all pixels in row i have been processed, and the method moves to the decision block 58. If the variable j is less than the number of columns in the halftone video frame h_k, then the method moves to a block 66.
  • At the block 66, j is increased by 1 so that the next pixel in row i is under process.
  • The variable l is increased by 1 as the method moves to visit the next pixel.
  • Next, the pixel (i,j) of the kth video frame in the output video stream, which is referred to as o_k(i,j), is processed.
  • The value of the pixel o_k(i,j) is compared to the value of the pixel (i,j) of the (k-1)th frame in the output video stream o_{k-1}, which is referred to as o_{k-1}(i,j).
  • The (k-1)th frame is the frame immediately before the kth frame in the output video stream.
  • If the two pixel values are already equal, there is nothing to copy, and the method moves to a block 69, wherein the threshold value T_{l+1} used for the visit of the next pixel remains the same as the current threshold value T_l. Subsequently, the method moves to the decision block 64 to process the next pixel.
  • Otherwise, the method moves to a block 72, wherein the value of the pixel o_{k-1}(i,j) is copied into the pixel o_k(i,j) for a trial.
  • Next, the method evaluates the effect of the trial change made in the block 72 so as to decide whether the trial change should be accepted.
  • The method determines whether ΔE_{k,l} is within the adaptive threshold value T_l.
  • ΔE_{k,l} represents the change in the perceptual error of the halftone video frame due to the trial change.
  • If ΔE_{k,l} does not exceed T_l, the trial change is accepted.
  • In that case, the method moves to a block 88, wherein the threshold for the visit of the next pixel is updated as T_{l+1} = T_l - ΔE_{k,l}.
  • The variable m is increased by 1 to indicate that one more pixel copy has been made. The method then moves to the decision block 64 to process the next pixel.
  • If ΔE_{k,l} exceeds T_l, the method moves to a block 92, wherein the threshold value T_{l+1} used for the visit of the next pixel remains the same as the current threshold value T_l.
  • In addition, the value of the pixel o_k(i,j) is set back to the value of the pixel h_k(i,j), which is the pixel (i,j) of the kth frame in the halftone video stream. The method then moves to the decision block 64 to process the next pixel.
  • As discussed above, the method evaluates the effect of each trial change based on ΔE_{k,l}, which represents the change in the perceptual error of the halftone video frame due to the trial change.
  • The perceptual error indicates the difference between the halftone video frame and the continuous-tone video frame as perceived by human vision.
  • Such a perceptual error may be calculated based on a model of the human visual system (HVS). Any suitable human visual system model may be used.
  • For the kth frame, the corresponding error frame is denoted e_{hc,k}, with each of its pixels e_{hc,k}(i,j) defined by e_{hc,k}(i,j) = c_k(i,j) - h_k(i,j) (Equation 1), where c_k and h_k are the kth continuous-tone and halftone video frames, respectively.
  • The corresponding kth perceived error frame ẽ_{hc,k} is then defined as ẽ_{hc,k} = p * e_{hc,k} (Equation 2), where * indicates 2-dimensional convolution and p is a point spread function representing a model of the human visual system.
  • the point spread function is determined by a luminance spatial frequency response function as proposed by R. Nasanen, "Visibility of halftone dot textures", IEEE Trans. Syst. Man. Cyb., vol. 14, no. 6, pp. 920-924, 1984.
  • other human visual system models may also be used.
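Equations 1 and 2 can be realized directly as below. The scalar error is taken as the squared norm of the perceived error frame (the document's Equation 3 is not reproduced in this text, so that norm is an assumption), and the Gaussian kernel is only a placeholder for an HVS point spread function such as Nasanen's.

```python
import numpy as np

def conv_same(image, kernel):
    """2-D convolution with 'same' output size and zero padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    flipped = kernel[::-1, ::-1]
    out = np.empty(image.shape, dtype=float)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            out[r, c] = np.sum(padded[r:r + kh, c:c + kw] * flipped)
    return out

def gaussian_psf(size=7, sigma=1.5):
    """Placeholder low-pass point spread function; a real implementation
    would sample an HVS luminance spatial frequency response instead."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    p = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return p / p.sum()

def perceptual_error(halftone, cont, psf):
    """Scalar perceptual error of a halftone frame against its
    continuous-tone original, following Equations 1 and 2."""
    e = cont.astype(float) - halftone.astype(float)   # Equation 1
    e_tilde = conv_same(e, psf)                       # Equation 2
    return float(np.sum(e_tilde ** 2))                # squared-norm scalar error
```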
  • The change ΔE_{k,l} is computed for the current frame k as the difference between E_{oc,k,l} and E_{hc,k}, where E_{hc,k} is as defined in Equation 3 and E_{oc,k,l} represents the perceptual error of o_{k,l}.
  • o_{k,l} is the kth video frame in the output video stream after its pixels have been visited l times.
  • The adaptive threshold value T_l can be calculated as follows.
  • T_1 = T_0, where T_0 is the initial threshold value (Equation 9).
  • The method evaluates the effect of each trial change by calculating the change in a perceptual error of the halftone video frame due to the trial change.
  • The exemplary embodiment is more computationally efficient than an alternative approach which includes, each time a trial change is made to a halftone video frame, recalculating the difference between a perceptual error of the originally generated halftone video frame and a perceptual error of the halftone video frame after the trial change.
  • The alternative approach requires that, each time the method moves to the next pixel, each and every pixel in the output and the halftone frames be compared.
  • In the exemplary embodiment, by contrast, the metric E_{oc,k} - E_{hc,k} (which involves comparing each and every pixel in the output and halftone frames) only needs to be calculated once for each frame, when the method visits the first pixel of the frame.
  • When the method moves from a pixel X to the next pixel X+1, the method only needs to calculate the change in the perceptual error due to the change at that single pixel.
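The incremental bookkeeping can be illustrated as follows: when a single pixel of the error frame changes, the convolved (perceived) error frame changes only inside the point spread function's footprint, so the change in the squared-norm error can be computed from that small patch instead of re-filtering the whole frame. The kernel and the squared-norm error are illustrative assumptions, as before, and the closing check compares the local formula against a full recomputation.

```python
import numpy as np

def conv_same(image, kernel):
    """2-D convolution with 'same' output size and zero padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    flipped = kernel[::-1, ::-1]
    out = np.empty(image.shape, dtype=float)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            out[r, c] = np.sum(padded[r:r + kh, c:c + kw] * flipped)
    return out

def delta_error_local(e_tilde, psf, i, j, de):
    """Change in sum(e_tilde ** 2) when the error frame changes by `de` at
    an interior pixel (i, j): the filtered error frame gains de * psf over
    the PSF footprint only, so only that patch contributes."""
    ph, pw = psf.shape[0] // 2, psf.shape[1] // 2
    patch = e_tilde[i - ph:i + ph + 1, j - pw:j + pw + 1]
    return float(np.sum(2.0 * de * patch * psf + (de * psf) ** 2))

# Agreement check against a full recomputation (interior pixel only):
rng = np.random.default_rng(0)
psf = np.array([[1.0, 2.0, 1.0], [2.0, 4.0, 2.0], [1.0, 2.0, 1.0]])
psf /= psf.sum()
e = rng.standard_normal((8, 8))
e_tilde = conv_same(e, psf)
de, i, j = -1.0, 4, 4          # e.g. a halftone pixel flipped from 0 to 1
fast = delta_error_local(e_tilde, psf, i, j, de)
e2 = e.copy()
e2[i, j] += de
slow = float(np.sum(conv_same(e2, psf) ** 2) - np.sum(e_tilde ** 2))
```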
  • The output video stream is processed beginning from the second frame through the last frame of the output video stream.
  • When a frame X is under process, in each round all pixels within the frame are checked one by one to test whether a change to the pixel value may be made. If no pixel is changed within one round, the method moves to the next frame.
  • The pixels in the frame X are processed in a raster scanning order, i.e., from top to bottom and from left to right.
  • In other words, the pixels in a row Y are processed before the pixels in the row immediately below the row Y. Within the row Y, a pixel Z is processed before the pixel next to the pixel Z on its right side.
  • For each pixel of the frame X, it is determined whether a value of the pixel at the corresponding location in the frame immediately before the frame X may be copied, and whether the perceptual error between the output video frame X and the continuous-tone video frame X still remains within a threshold after such a change. The change is kept if the perceptual error remains within the threshold.
  • In the exemplary embodiment, the pixels within a frame are processed according to the raster scanning order. It should be noted that the pixels may be processed in any other order.
  • The method moves to the next frame if no pixel is changed within one round in which all pixels of a frame are checked once. It should be noted that in another embodiment, the method moves to the next frame if the number of changes made within one round satisfies a certain criterion, e.g., if fewer than three changes are made within one round.
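Putting the pieces together, the frame-by-frame flow described above can be sketched as below. A fixed threshold and an abstract error callable are illustrative simplifications of the adaptive threshold T_l and the HVS-model perceptual error; the function name is invented.

```python
import numpy as np

def reduce_flicker(halftone, cont, perceptual_error, threshold=0.0):
    """For each frame after the first, repeatedly raster-scan the pixels,
    trial-copying each pixel from the previous output frame and keeping
    the copy only while the frame's perceptual error stays within
    `threshold` of the originally generated halftone frame's error.
    Rounds repeat until a full pass makes no change."""
    out = [f.copy() for f in halftone]
    for k in range(1, len(out)):
        base_error = perceptual_error(halftone[k], cont[k])
        changed = True
        while changed:
            changed = False
            rows, cols = out[k].shape
            for i in range(rows):            # raster order: top to bottom,
                for j in range(cols):        # left to right within a row
                    if out[k][i, j] == out[k - 1][i, j]:
                        continue
                    saved = out[k][i, j]
                    out[k][i, j] = out[k - 1][i, j]   # trial copy
                    if perceptual_error(out[k], cont[k]) - base_error <= threshold:
                        changed = True                # keep the copy
                    else:
                        out[k][i, j] = saved          # revert
    return out
```

On a flat mid-gray input, for instance, two opposite checkerboard halftone frames carry the same perceptual error, so every copy is accepted and the frames converge to an identical pattern, removing the flicker.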
  • Figure 5 is a diagram illustrating the process of visiting pixels in a frame.
  • Frames 102, 112, 122, 132, and 142 are used to represent different versions of the frame K in the output video stream.
  • The method starts by visiting the first pixel, i.e., the pixel 104, of the frame 102 to check whether a trial change to the pixel 104 should be made.
  • At this point, the variable l is set to 1.
  • The frame 102, which is also referred to as K_1, represents the frame K before the trial change is made.
  • The method then moves to the next pixel, i.e., the pixel 114.
  • The frame 112, which is also referred to as K_2, represents the frame K after the processing of the pixel 104 is completed and before the trial change is made to the pixel 114.
  • The method then moves to the next pixel, i.e., the pixel 124.
  • The frame 122, which is also referred to as K_3, represents the frame K after the processing of the pixel 114 is completed and before the trial change is made to the pixel 124.
  • This process continues until the method tests the last pixel in the frame K, i.e., the pixel 134 in the frame 132, i.e., the frame K_9.
  • At that point, the variable l has increased to 9.
  • The method may then continue by visiting the first pixel of the frame K again, i.e., the pixel 144 in the frame 142.
  • The variable l is increased to 10.
  • The output frame K at this point, i.e., the frame 142, is referred to as K_10.
  • FIG. 6 is a block diagram illustrating one embodiment of an apparatus for processing video data.
  • The apparatus 160 receives a continuous tone video stream as video input and then provides a processed video stream to a display 166.
  • The processed video stream as provided by the apparatus 160 may be subject to further video processing before being provided to the display 166.
  • The apparatus 160 includes a memory device 164 which stores the continuous tone video stream and the corresponding halftone video stream as discussed above.
  • The memory device 164 may also store other data and any software modules to be executed.
  • The memory device 164 may be any type of storage media suitable for this purpose.
  • The apparatus 160 may further include a control unit 162 configured to communicate with the memory device and to perform the methods for processing video data as described above.
  • The control unit may be a processor, which may be any general purpose single- or multi-chip microprocessor such as an ARM, Pentium®, Pentium II®, Pentium III®, Pentium IV®, Pentium® Pro, an 8051, a MIPS®, a Power PC®, an ALPHA®, or any special purpose microprocessor such as a digital signal processor, microcontroller, or a programmable gate array.
  • The processor may be configured to execute one or more software modules. In addition to executing an operating system, the processor may be configured to execute one or more software applications.
  • In one embodiment, the apparatus 160 receives the continuous tone video stream and generates halftone data for the continuous tone video stream as described above. In another embodiment, the apparatus 160 may receive both the continuous tone video stream and the corresponding halftone video stream. In one embodiment, the halftone video stream is generated by halftoning each frame of the continuous tone video stream independently.
  • The display 166 may be any device that is configured to display an image, whether in motion (e.g., video) or stationary (e.g., still image), and whether textual or pictorial. More particularly, it is contemplated that the embodiments may be implemented in or associated with a variety of electronic devices such as, but not limited to, mobile telephones, wireless devices, personal data assistants (PDAs), hand-held or portable computers, GPS receivers/navigators, cameras, MP3 players, camcorders, game consoles, wrist watches, clocks, calculators, television monitors, flat panel displays, computer monitors, auto displays (e.g., odometer display, etc.), cockpit controls and/or displays, display of camera views (e.g., display of a rear view camera in a vehicle), electronic photographs, electronic billboards or signs, projectors, architectural structures, packaging, and aesthetic structures (e.g., display of images on a piece of jewelry).
  • The display may be any binary display.
  • The display may be an interferometric modulator display.
  • The pixels are in either a bright or dark state. In the bright ("on" or "open") state, the display element reflects a large portion of incident visible light to a user. When in the dark ("off" or "closed") state, the display element reflects little incident visible light to the user.
  • The light reflectance properties of the "on" and "off" states may be reversed.
  • These pixels can be configured to reflect predominantly at selected colors, allowing for a color display in addition to black and white.
  • Each pixel comprises a microelectromechanical systems (MEMS) interferometric modulator.
  • An interferometric modulator display comprises a row/column array of these interferometric modulators.
  • Each interferometric modulator includes a pair of reflective layers positioned at a variable and controllable distance from each other to form a resonant optical gap with at least one variable dimension.
  • One of the reflective layers may be moved between two positions. In the first position, referred to herein as the relaxed position, the movable reflective layer is positioned at a relatively large distance from a fixed partially reflective layer.
  • In the second position, referred to herein as the actuated position, the movable reflective layer is positioned more closely adjacent to the partially reflective layer. Incident light that reflects from the two layers interferes constructively or destructively depending on the position of the movable reflective layer, producing either an overall reflective or non-reflective state for each pixel.

Abstract

A system and method for processing video data are disclosed. In one aspect, a method includes generating halftone data for a first video frame and generating halftone data for a second video frame. The method further includes, to reduce at least one visual artifact, selectively copying the halftone data for the first video frame into the halftone data for the second video frame, the selectively copying being based upon a comparison between an adaptive threshold and the change resulting due to the copying of the data, in the human visual system model-based perceptual error of the halftone data for the second video frame.

Description

SYSTEM AND METHOD FOR IMPROVING THE QUALITY OF HALFTONE VIDEO USING AN ADAPTIVE THRESHOLD
BACKGROUND OF THE INVENTION Field of the Invention
[0001] The field of the invention relates to image processing.
Description of the Related Technology
[0002] Halftoning is a technique that transforms continuous-tone images into binary images. When a continuous-tone video stream needs to be shown on a binary display, a halftone video may be produced by halftoning each frame in the video stream independently. However, this process results in artifacts including flicker, i.e., an artifact between frames that occurs on the display at low refresh rates. Therefore, it is desirable to have a system and method for reducing artifacts in the halftone video, thus improving the quality of the video.
Summary of Certain Inventive Aspects
[0003] The system, method, and devices of the invention each have several aspects, no single one of which is solely responsible for its desirable attributes. Without limiting the scope of this invention, its more prominent features will now be discussed briefly. After considering this discussion, and particularly after reading the section entitled "Detailed Description of Certain Embodiments" one will understand how the features of this invention provide advantages over other display devices.
[0004] In one aspect, a method of processing video data is disclosed. The method comprises generating halftone data for a first video frame and generating halftone data for a second video frame. The method further comprises, to reduce at least one visual artifact, selectively copying the halftone data for the first video frame into the halftone data for the second video frame, the selectively copying being based upon a comparison between an adaptive threshold and the change, resulting due to the copying of the data, in the human visual system model based perceptual error of the halftone data for the second video frame. [0005] In another aspect, an apparatus for processing video data is disclosed. The apparatus comprises a memory device having stored therein at least halftone data for a first and second video frame. The apparatus further comprises a processor that is configured to communicate with said memory device and is configured to reduce at least one visual artifact by selectively copying the halftone data for the first video frame into the halftone data for the second video frame, the selectively copying being based upon a comparison between an adaptive threshold and the change, resulting due to this copying of the data, in the human visual system model based perceptual error of the halftone data for the second video frame.
[0006] In another aspect, an apparatus for processing video data is disclosed. The apparatus comprises means for generating halftone data for a first video frame and means for generating halftone data for a second video frame. The apparatus further comprises means for reducing at least one visual artifact by selectively copying the halftone data for the first video frame into the halftone data for the second video frame, the selectively copying being based upon a comparison between an adaptive threshold and the change, resulting due to the copying of the data, in the human visual system model based perceptual error of the halftone data for the second video frame.
Brief Description of the Drawings
[0007] Figures 1A and 1B are diagrams illustrating one embodiment of a method of processing halftone video frames to reduce halftone video artifacts.
[0008] Figures 2A-2B are diagrams illustrating a different embodiment of a method of processing halftone video frames to reduce halftone video artifacts.
[0009] Figure 3 is a flowchart illustrating one embodiment of a method of processing halftone video frames to reduce halftone video artifacts.
[0010] Figure 4 is a flowchart illustrating one embodiment of a method of processing halftone video frames to reduce halftone video artifacts.
[0011] Figure 5 is a diagram illustrating the process of visiting pixels in a frame.
[0012] Figure 6 is a block diagram illustrating one embodiment of an apparatus for processing video data.

Detailed Description of Certain Embodiments
[0013] The following detailed description is directed to certain specific embodiments of the invention. However, the invention can be embodied in a multitude of different ways. In this description, reference is made to the drawings wherein like parts are designated with like numerals throughout.
[0014] Certain embodiments as will be described below relate to a system and method of processing video data. In one embodiment, a halftone video stream including a sequence of frames is processed to reduce halftone video artifacts under the constraint that the perceptual error between each frame of halftone video and the corresponding frame of continuous-tone video satisfies a criterion. This ensures artifact reduction while preserving the quality of the video. The perceptual error may be estimated based on a human visual system model. Any human visual system model may be used. The perceptual error between a halftone video frame and the corresponding continuous-tone video frame may also be referred to as "the perceptual error of the halftone video frame."
[0015] In one embodiment, the method comprises generating halftone data for a first video frame and generating halftone data for a second video frame. The method further comprises, to reduce at least one visual artifact, selectively copying the halftone data for the first video frame into the halftone data for the second video frame, the selectively copying being based upon a comparison between an adaptive threshold and the change, resulting due to the copying of the data, in the human visual system model-based perceptual error of the halftone data for the second video frame.
[0016] Figures 1A and 1B are diagrams illustrating one embodiment of a method of processing halftone video frames to reduce halftone video artifacts. The halftone frames include a sequence of video frames, such as frame X and the next frame in sequence, X+1. In one embodiment, the halftone frames are generated by halftoning each frame of the continuous-tone video independently.
[0017] In the exemplary embodiment, the method receives a halftone video stream. In another embodiment, the method receives a continuous-tone video stream and generates the halftone video stream by halftoning each frame of the continuous-tone video independently.

[0018] Each frame includes a set of pixels. Each pixel is referred to by its spatial coordinates (x,y) within the frame, where x and y are the horizontal and vertical coordinates as shown. In the exemplary embodiment, the pixel 12 of the frame X may be referred to as the pixel (1,1). The pixel 14 is at the corresponding location in the frame X+1. In other words, the pixel 14 in the frame X+1 and the pixel 12 in the frame X have the same spatial coordinates in their respective frames. In one embodiment, each pixel may take one of two values representing a bright or dark state when rendered on a display. The pixels are drawn as dark or white boxes depending on their pixel values. In Figure 1A, the pixel 12 has a value representing the dark state while the pixel 14 has a value representing the bright state.
[0019] The halftone frames are duplicated into an output video stream, which is then processed to reduce the halftone artifacts. Figure 1A shows the output video frames X and X+1 before a trial change is made to the pixel 14 in the frame X+1. Figure 1B shows the output video frames X and X+1 after a trial change is made to the pixel 14 in the frame X+1.
[0020] As a part of the process of reducing the halftone artifacts, a trial change is made to the pixel 14 in the frame X+1 by copying the value of the pixel 12 in the frame X into the pixel 14 in the frame X+1. After the trial change, Figure 1B shows that the pixel 14 of the frame X+1 has the same value as the pixel 12 of the frame X. Such a trial change can reduce video artifacts such as flickering in the output video stream by improving consistency between neighboring frames.
[0021] In one embodiment, to preserve the quality of the output video, a check is then run to ensure that the difference between a perceptual error of the originally generated halftone data for the output video frame X+1 and a perceptual error of the output video frame X+1 after the change, i.e., the frame X+1 shown in Figure 1B, satisfies a criterion. If the criterion is met, the change to the frame X+1 is kept. Otherwise, the pixel 14 of the frame X+1 is restored to its previous value.
[0022] In one embodiment, as will be described with regard to Figure 4, the check is run by comparing the change in a perceptual error of the halftone video frame X+1 due to the trial change with an adaptive threshold. The trial change is kept if the change in the perceptual error does not exceed the adaptive threshold.

[0023] In one embodiment, all pixels in the frame X+1 are checked in a particular order to see whether the value of the corresponding pixel in the frame X should be copied to each pixel, following the same process as described above with regard to the pixel 14. Also, all frames in the output video stream are subject to the same process as described here with regard to the frame X+1.
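The per-pixel check described above can be sketched in Python. This is an illustrative sketch, not the patent's implementation: frames are lists of rows of 0/1 pixel values, and `delta_error` is a hypothetical callback standing in for the human-visual-system perceptual-error computation.

```python
def try_copy_pixel(prev_frame, cur_frame, x, y, delta_error, threshold):
    """Trial-copy pixel (x, y) from the previous output frame into the
    current output frame. The change is kept only if the resulting change
    in perceptual error is within the adaptive threshold; otherwise the
    pixel is restored to its previous value.

    `delta_error(frame)` is a hypothetical callback returning the change
    in the HVS-model perceptual error caused by the trial change.
    Returns (accepted, new_threshold).
    """
    if cur_frame[y][x] == prev_frame[y][x]:
        return False, threshold            # same value: nothing to copy
    saved = cur_frame[y][x]
    cur_frame[y][x] = prev_frame[y][x]     # make the trial change
    d = delta_error(cur_frame)
    if d <= threshold:
        return True, threshold - d         # accept; tighten the threshold
    cur_frame[y][x] = saved                # reject; restore previous value
    return False, threshold
```

Note that an accepted change reduces the threshold by the error it introduced, so later trial changes in the same frame face a stricter test.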
[0024] In the exemplary embodiment, data from a frame X of the output video stream is selectively copied into the next frame X+1 of the output video stream. In another embodiment, data from a frame X of the output video stream may be selectively copied into the frame X−1 of the output video stream, i.e., the frame immediately before the frame X.
[0025] In the exemplary embodiment, the value of a pixel (1,1) of a frame X of the output video stream is selectively copied into a pixel of the same spatial coordinates in a neighboring frame of the output video stream.
[0026] Figures 2A-2B are diagrams illustrating a different embodiment of a method of processing halftone video frames to reduce halftone video artifacts. This embodiment is similar to the embodiment shown in Figures 1A and 1B, except where noted otherwise below. Figure 2A shows the output video frames X and X+1 before a trial change is made to the pixel 18 in the frame X+1. Figure 2B shows the output video frames X and X+1 after a trial change is made to the pixel 18 in the frame X+1.
[0027] As a part of the process of reducing the halftone artifacts, the value of the pixel 18 of the frame X+1 of the output video stream is swapped with the value of a neighboring pixel, e.g., the pixel 16 in the same frame as shown. The trial change thus changes the value of the pixel 18 as well as that of the pixel 16.
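The swap-based trial change of Figures 2A-2B can be sketched similarly. Again an illustrative sketch, with a hypothetical `delta_error` callback standing in for the perceptual-error computation:

```python
def trial_swap(frame, x1, y1, x2, y2, delta_error, threshold):
    """Swap the values of two neighboring pixels in the same output
    frame, keeping the swap only if the change in the HVS-model
    perceptual error stays within the adaptive threshold.
    Returns (accepted, new_threshold).
    """
    frame[y1][x1], frame[y2][x2] = frame[y2][x2], frame[y1][x1]  # trial swap
    d = delta_error(frame)
    if d <= threshold:
        return True, threshold - d                               # keep swap
    frame[y1][x1], frame[y2][x2] = frame[y2][x2], frame[y1][x1]  # revert
    return False, threshold
```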
[0028] The above Figures 1A-2B describe the principle of various embodiments of a method of processing halftone video frames to reduce halftone video artifacts. Certain exemplary flowcharts are presented below to illustrate these methods. For purposes of illustration, these flowcharts use the embodiment shown in Figures 1A and 1B as an example, but the flowcharts are not limited to that method alone.
[0029] Figure 3 is a flowchart illustrating one embodiment of a method of processing halftone video frames to reduce halftone video artifacts. Depending on the embodiment, certain steps of the method may be removed, merged together, or rearranged in order. The method 30 starts at a block 32, wherein halftone data is generated for a first video frame of a continuous tone video stream. The halftone data may be generated by halftoning the first video frame independently.
[0030] The method then moves to a block 34, wherein halftone data is generated for a second video frame of the continuous tone video stream. The halftone data may be generated by halftoning the second video frame independently. The first and the second video frames are next to each other in the continuous tone video stream.
[0031] Next, at a block 36, the method selectively, based on an adaptive threshold, includes the halftone data for the first video frame in the halftone data for the second video frame to reduce at least one visual artifact. In one embodiment, the method selectively copies a pixel of the halftone data for the first video frame into the corresponding pixel of the halftone data for the second video frame, if a criterion is met.
[0032] As will be further described below in regard to Figure 4, the selective copying is made if the change in a perceptual error of the second halftone video frame due to the selective copying does not exceed an adaptive threshold. Particularly, the method selectively copies the halftone data for the first video frame into the halftone data for the second video frame wherein the selectively copying is based upon a comparison between an adaptive threshold and the change, resulting due to this copying of the data, in the human visual system model based perceptual error.
[0033] In one embodiment, the method may receive the continuous tone video stream and the halftone video stream as input. In that case, the blocks 32 and 34 may be removed.
[0034] Figure 4 is a flowchart illustrating one embodiment of a method of processing halftone video frames to reduce halftone video artifacts. Depending on the embodiment, certain steps of the method may be removed, merged together, or rearranged in order. In the exemplary embodiment, the steps below may be performed by a processor, which may be any suitable general purpose single- or multi-chip microprocessor, or any suitable special purpose microprocessor such as a digital signal processor, microcontroller, or programmable gate array.

[0035] The method 40 starts at a block 42, wherein a continuous tone video stream "c" and a halftone video stream "h" are received. The continuous tone video stream c includes a sequence of video frames. The halftone video stream h is the halftone data of the continuous tone video stream c. In one embodiment, the halftone video stream h is produced by halftoning each frame in the continuous tone video stream c independently.
[0036] The method then moves to a block 44, wherein an output video stream "o" is generated. It is noted that although the video stream o is called an output video stream, that name is used for convenience of description; the video stream o is not sent out for display or further processing until the completion of the method 40. The output video stream is a duplicate of the halftone video stream h. A variable "k" is initially set to 1. The variable k is used to indicate which frame in the output video stream is currently under process.
[0037] Next, at a decision block 46, the variable k is compared with the number of frames in the halftone video stream h. If k is not less than the number of frames in the halftone video stream h, then all frames have been processed. The method moves to a block 48, in which the method stops and the output video stream o is provided for further image processing or provided to a display for rendering.
[0038] Referring again to the decision block 46, if it is determined that the variable k is less than the number of frames in the halftone video stream h, the method moves to a block 52, in which k is increased by 1. At the block 52, a variable "m" is initially set to 1. The variable m is used to indicate how many pixel copies were made in one round in which each pixel of the frame k is checked, i.e., in blocks 56-92. In the exemplary embodiment, the variable m is used as an indicator of whether the current frame k has converged to a solution such that the method may move to the next frame.
[0039] Next, at a decision block 54, it is determined whether m equals 0. If m does equal 0, it is determined that no pixel was copied in the round in which each pixel of the frame k was checked. Since no pixel was copied, the method moves to block 46 to process the next frame in the output video stream.
[0040] If m does not equal 0, the method then moves to a block 56. At this block, a variable "i" is initially set to 0. The variable i is used to indicate the row coordinate of the pixel currently under process. The variable m is also reset to 0. Also, a new variable "l" is introduced with its value initially set to 0. The variable l is used to track the number of visits to, i.e., tests of, pixels of the current frame k.
[0041] Moving to a decision block 58, the variable i is compared with the number of rows in the halftone video frame h_k. If the variable i is not less than the number of rows in the halftone video frame h_k, then all rows in this frame have been processed, and the method moves to the decision block 54.
[0042] Returning again to the decision block 58, if it is determined that the variable i is less than the number of rows in the halftone video frame h_k, the method moves to a block 62, wherein i is increased by 1 so that the method starts processing pixels in the next row. At block 62, a variable "j" is initially set to 0. The variable j is used to indicate the column coordinate of the pixel currently under process.
[0043] Moving to a decision block 64, the variable j is compared with the number of columns in the halftone video frame h_k. If the variable j is not less than the number of columns in the halftone video frame h_k, then all pixels in row i have been processed, and the method moves to the decision block 58. If the variable j is less than the number of columns in the halftone video frame h_k, then the method moves to a block 66.
[0044] At block 66, j is increased by 1 so that the next pixel in row i is under process. The variable l is increased by 1 as the method moves to visit the next pixel.
[0045] Next, at a decision block 68, the pixel (i,j) of the kth video frame in the output video stream, which is referred to as o_k(i,j), is processed. The value of the pixel o_k(i,j) is compared to the value of the pixel (i,j) of the (k−1)th frame in the output video stream, which is referred to as o_{k−1}(i,j). The (k−1)th frame is the frame immediately before the kth frame in the output video stream.

[0046] If the pixel o_k(i,j) and the pixel o_{k−1}(i,j) have the same value, then the method moves to a block 69, wherein the threshold value T_{l+1} used for the visit of the next pixel remains the same as the current threshold value T_l. Subsequently, the method moves to the decision block 64 to process the next pixel.

[0047] If the pixel o_k(i,j) and the pixel o_{k−1}(i,j) do not have the same value, then the method moves to a block 72, wherein the value of the pixel o_{k−1}(i,j) is copied into the pixel o_k(i,j) for a trial.

[0048] Next, at a decision block 74, the method evaluates the effect of the trial change made in block 72 so as to decide whether the trial change should be accepted. In the exemplary embodiment, the method determines whether ΔE_k^l is within an adaptive threshold value T_l. As will be further explained below, ΔE_k^l represents the change in a perceptual error of the halftone video frame due to the trial change.

[0049] The method then moves to either a block 88 or a block 92, depending on the answer to the inquiry at the decision block 86, to generate the threshold value T_{l+1} used for the visit of the next pixel.

[0050] If the answer to the inquiry at block 86 is yes, then the trial change is accepted. The method moves to a block 88, wherein T_{l+1} = T_l − ΔE_k^l. Next, at a block 78, the variable m is increased by 1 to indicate that one more pixel copying has been made. The method then moves to the decision block 64 to process the next pixel.

[0051] If the answer to the inquiry at block 86 is no, then the trial change is not accepted. The method moves to a block 92, wherein the threshold value T_{l+1} used for the visit of the next pixel remains the same as the current threshold value T_l. Next, at a block 76, the value of the pixel o_k(i,j) is set back to the value of the pixel h_k(i,j), which is the pixel (i,j) of the kth frame in the halftone video stream. The method then moves to the decision block 64 to process the next pixel.
[0052] In the exemplary embodiment, the method evaluates the effect of each trial change based on ΔE_k^l, which represents the change in a perceptual error of the halftone video frame due to the trial change. The perceptual error indicates the difference between the halftone video frame and the continuous-tone video frame as perceived by human vision. Such perceptual error may be calculated based on a model of the human visual system (HVS). Any suitable human visual system model may be used.

[0053] For the kth halftone frame, h_k, the corresponding kth error frame is denoted e_{hc,k}, with each of its pixels e_{hc,k}(i,j) defined by

    e_{hc,k}(i,j) ≡ c_k(i,j) − h_k(i,j)    (Equation 1)

[0054] The corresponding kth perceived error frame ẽ_{hc,k} is then defined as

    ẽ_{hc,k} ≡ e_{hc,k} * p    (Equation 2)

[0055] Here * indicates 2-dimensional convolution, and p is a point spread function representing a model of the human visual system. In the exemplary embodiment, the point spread function is determined by a luminance spatial frequency response function as proposed by R. Nasanen, "Visibility of halftone dot textures", IEEE Trans. Syst. Man. Cyb., vol. 14, no. 6, pp. 920-924, 1984. However, other human visual system models may also be used.

[0056] The perceptual error between h_k and c_k is defined as

    E_{hc,k} = Σ_{i=1..M} Σ_{j=1..N} ẽ_{hc,k}²(i,j)    (Equation 3)

[0057] Similarly, for the kth output frame, o_k, the corresponding kth error frame is denoted e_{oc,k}, with each of its pixels e_{oc,k}(i,j) defined by

    e_{oc,k}(i,j) ≡ c_k(i,j) − o_k(i,j)    (Equation 4)

[0058] The corresponding kth perceived error frame ẽ_{oc,k} is then defined as

    ẽ_{oc,k} ≡ e_{oc,k} * p    (Equation 5)

[0059] The perceptual error between o_k and c_k is defined as

    E_{oc,k} = Σ_{i=1..M} Σ_{j=1..N} ẽ_{oc,k}²(i,j)    (Equation 6)

[0060] The change in a perceptual error of the halftone video frame due to the trial change, as represented by ΔE_k^l, may be determined by the following equations:

    ΔE_k^l = E_{oc,k}^l − E_{hc,k},    for l = 1    (Equation 7)

    ΔE_k^l = E_{oc,k}^l − E_{oc,k}^{l−1},    for 2 ≤ l ≤ MN    (Equation 8)

[0061] Here M is the number of rows and N is the number of columns in the current frame k. E_{hc,k} is as defined in Equation 3. E_{oc,k}^{l−1} represents the perceptual error between the output frame o_k^{l−1} and c_k, and E_{oc,k}^l represents the perceptual error between the output frame o_k^l and c_k, where o_k^l is the kth video frame in the output video stream after its pixels have been visited l times.

[0062] As shown above, the adaptive threshold value T_l can be calculated as follows:

    T_1 = T_0, where T_0 is the initial threshold value    (Equation 9)

    T_{l+1} = T_l − ΔE_k^l, if the trial change is accepted    (Equation 10)

    T_{l+1} = T_l, if the trial change is not accepted    (Equation 11)
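The quantities of Equations 1-6 can be sketched in Python as follows. This is an illustrative sketch: frames are small lists of rows, and a generic symmetric kernel stands in for the Nasanen point spread function, which a real implementation would precompute from the luminance spatial frequency response.

```python
def convolve2d_same(img, kernel):
    """Naive same-size 2-D convolution with zero padding. The kernel is
    assumed symmetric about its center (as a centered PSF typically is),
    so no kernel flip is needed."""
    H, W = len(img), len(img[0])
    kh, kw = len(kernel), len(kernel[0])
    oy, ox = kh // 2, kw // 2
    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            s = 0.0
            for dy in range(kh):
                for dx in range(kw):
                    iy, ix = y + dy - oy, x + dx - ox
                    if 0 <= iy < H and 0 <= ix < W:
                        s += img[iy][ix] * kernel[dy][dx]
            out[y][x] = s
    return out

def perceptual_error(cont_frame, half_frame, psf):
    """E (Equations 1-3/4-6): the pixel-wise error frame is convolved
    with the HVS point spread function p, and the squared pixels of the
    perceived error frame are summed."""
    err = [[c - h for c, h in zip(cr, hr)]
           for cr, hr in zip(cont_frame, half_frame)]
    perceived = convolve2d_same(err, psf)
    return sum(v * v for row in perceived for v in row)
```

With a trivial 1x1 PSF the measure reduces to the plain sum of squared differences; a realistic PSF spreads each error over its neighborhood before squaring.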
[0063] In the exemplary embodiment, the method evaluates the effect of each trial change by calculating the change in a perceptual error of the halftone video frame due to the trial change. The exemplary embodiment is more computationally efficient than an alternative approach in which, each time a trial change is made to a halftone video frame, the difference between a perceptual error of the originally generated halftone video frame and a perceptual error of the halftone video frame after the trial change is recalculated.

[0064] Particularly, the alternative approach requires comparing each and every pixel in the output and halftone frames each time the method moves to the next pixel. In the embodiment presented in Figure 4, the metric E_{oc,k} − E_{hc,k} (which involves comparing each and every pixel in the output and halftone frames) only needs to be calculated once for each frame, when the method visits the first pixel of the frame. When the method moves from a pixel X to the next pixel X+1, the method only needs to calculate ΔE_k^l = E_{oc,k}^l − E_{oc,k}^{l−1}, which is simple to compute since the frame o_k^{l−1} and the frame o_k^l differ only in the pixel X+1.
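The single-pixel update can be sketched as follows: changing the error frame at one location perturbs the perceived error frame (error convolved with the PSF) only within the PSF support around that location, so the change in the sum-of-squares error can be accumulated locally. This is an illustrative sketch under the assumption of a symmetric PSF, not the patent's exact implementation.

```python
def delta_error_after_flip(perceived, psf, x, y, delta):
    """Change in E = sum of perceived[i][j]**2 when the error frame
    changes by `delta` at pixel (x, y). Only the PSF-support
    neighborhood of (x, y) is touched, so the cost per pixel visit is
    O(|psf|) instead of O(M*N). A symmetric PSF is assumed."""
    H, W = len(perceived), len(perceived[0])
    kh, kw = len(psf), len(psf[0])
    oy, ox = kh // 2, kw // 2
    d_e = 0.0
    for dy in range(kh):
        for dx in range(kw):
            iy, ix = y + dy - oy, x + dx - ox
            if 0 <= iy < H and 0 <= ix < W:
                step = delta * psf[dy][dx]   # change in perceived error here
                # (v + step)^2 - v^2 = 2*v*step + step^2
                d_e += 2.0 * perceived[iy][ix] * step + step * step
    return d_e
```

A caller would also update the affected entries of `perceived` whenever a trial change is accepted, so the local state stays consistent for the next visit.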
[0065] In the above flowchart, the output video stream is processed from the second frame to the last frame of the output video stream. When a frame X is under process, during each round all pixels within the frame are checked one by one to test whether a change to the pixel value may be made. If no pixel is changed within a round, the method moves to the next frame. In the exemplary embodiment, the pixels in the frame X are processed in a raster scanning order, i.e., from top to bottom and from left to right. The pixels in a row Y are processed before the pixels in the row immediately below the row Y. Within the row Y, a pixel Z is processed before the pixel next to the pixel Z on the right side. For each pixel of the frame X, it is determined whether the value of the pixel at the corresponding location in the frame immediately before the frame X may be copied and whether the perceptual error between the output video frame X and the continuous-tone video frame X still remains within a threshold after such a change. The change is kept if the perceptual error remains within the threshold.
[0066] In the exemplary embodiment, the pixels within a frame are processed according to the raster scanning order. It should be noted that the pixels may be processed in any other order. In the exemplary embodiment, the method moves to the next frame if no pixel is changed within one round in which all pixels of a frame are checked once. It should be noted that in another embodiment, the method moves to the next frame if the number of changes made within one round satisfies a certain criterion, e.g., if fewer than three changes are made within one round.
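The overall loop described in paragraphs [0065]-[0066] can be sketched as follows. This is illustrative only: `delta_error(frame, x, y)` is a hypothetical callback returning the perceptual-error change if pixel (x, y) were copied from the previous output frame, and `t0` is the initial threshold T_0.

```python
def reduce_artifacts(halftone_frames, delta_error, t0):
    """Sketch of the Figure 4 loop: duplicate the halftone stream into
    an output stream, then for each frame after the first, raster-scan
    the pixels and trial-copy from the previous output frame until a
    full round makes no changes."""
    out = [[row[:] for row in f] for f in halftone_frames]  # o := h
    for k in range(1, len(out)):
        threshold = t0
        changed = True
        while changed:                  # repeat until a round copies nothing
            changed = False
            for y in range(len(out[k])):            # top to bottom
                for x in range(len(out[k][y])):     # left to right
                    if out[k][y][x] == out[k - 1][y][x]:
                        continue        # same value: threshold unchanged
                    d = delta_error(out[k], x, y)
                    if d <= threshold:
                        out[k][y][x] = out[k - 1][y][x]  # accept the copy
                        threshold -= d                   # Equation 10
                        changed = True
                    # else: reject; threshold unchanged (Equation 11)
    return out
```

Because the threshold shrinks by each accepted ΔE, the total accepted increase in perceptual error per frame is bounded by T_0, which is how quality is preserved while flicker is reduced.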
[0067] Figure 5 is a diagram illustrating the process of visiting pixels in a frame. Frames 102, 112, 122, 132, and 142 are used to represent different versions of the frame K in the output video stream.
[0068] In the example, the method starts by visiting the first pixel, i.e., the pixel 104, of the frame 102 to check whether a trial change to the pixel 104 should be made. The variable l is set to 1. The frame 102, which is also referred to as K_1, represents the frame K before the trial change is made.
[0069] The method then moves to the next pixel, i.e., the pixel 114. The frame 112, which is also referred to as K_2, represents the frame K after the processing of the pixel 104 is completed and before the trial change is made to the pixel 114.
[0070] The method then moves to the next pixel, i.e., the pixel 124. The frame 122, which is also referred to as K_3, represents the frame K after the processing of the pixel 114 is completed and before the trial change is made to the pixel 124.

[0071] This process continues until the method tests the last pixel in the frame K, i.e., the pixel 134 in the frame 132, i.e., the frame K_9. The variable l is increased to 9.
[0072] Depending on whether any trial changes are accepted in this round of visiting all pixels of the frame K, the method may continue by visiting the first pixel of the frame K again, i.e., the pixel 144 in the frame 142. The variable l is increased to 10, and the output frame 142 is referred to as K_10.
[0073] Figure 6 is a block diagram illustrating one embodiment of an apparatus for processing video data. In the exemplary embodiment, the apparatus 160 receives a continuous tone video stream as video input and then provides a processed video stream to a display 166. In another embodiment, the processed video stream as provided by the apparatus 160 may be subject to further video processing before being provided to the display 166.
[0074] In the exemplary embodiment, the apparatus 160 includes a memory device 164 which stores the continuous tone video stream and the corresponding halftone video stream as discussed above. The memory device 164 may also store other data and any software modules to be executed. The memory device 164 may be any type of storage media suitable for this purpose.
[0075] The apparatus 160 may further include a control unit 162 configured to communicate with the memory device and to perform the methods for processing video data as described above. In the exemplary embodiment, the control unit may be a processor, which may be any general purpose single- or multi-chip microprocessor such as an ARM, Pentium®, Pentium II®, Pentium III®, Pentium IV®, Pentium® Pro, an 8051, a MIPS®, a Power PC®, an ALPHA®, or any special purpose microprocessor such as a digital signal processor, microcontroller, or programmable gate array. As is conventional in the art, the processor may be configured to execute one or more software modules. In addition to executing an operating system, the processor may be configured to execute one or more software applications.
[0076] In the exemplary embodiment, the apparatus 160 receives the continuous tone video stream and generates halftone data for the continuous tone video stream as described above. In another embodiment, the apparatus 160 may receive both the continuous tone video stream and the corresponding halftone video stream. In one embodiment, the halftone video stream is generated by halftoning each frame of the continuous tone video stream independently.
[0077] The display 166 may be any device that is configured to display an image, whether in motion (e.g., video) or stationary (e.g., still image), and whether textual or pictorial. More particularly, it is contemplated that the embodiments may be implemented in or associated with a variety of electronic devices such as, but not limited to, mobile telephones, wireless devices, personal data assistants (PDAs), hand-held or portable computers, GPS receivers/navigators, cameras, MP3 players, camcorders, game consoles, wrist watches, clocks, calculators, television monitors, flat panel displays, computer monitors, auto displays (e.g., odometer display, etc.), cockpit controls and/or displays, display of camera views (e.g., display of a rear view camera in a vehicle), electronic photographs, electronic billboards or signs, projectors, architectural structures, packaging, and aesthetic structures (e.g., display of images on a piece of jewelry).
[0078] In one embodiment, the display may be any binary display. In another embodiment, the display may be an interferometric modulator display. In an interferometric modulator display, the pixels are in either a bright or dark state. In the bright ("on" or "open") state, the display element reflects a large portion of incident visible light to a user. When in the dark ("off" or "closed") state, the display element reflects little incident visible light to the user. Depending on the embodiment, the light reflectance properties of the "on" and "off" states may be reversed. These pixels can be configured to reflect predominantly at selected colors, allowing for a color display in addition to black and white.
[0079] In one embodiment of the interferometric modulator display, each pixel comprises a Microelectromechanical systems (MEMS) interferometric modulator. In some embodiments, an interferometric modulator display comprises a row/column array of these interferometric modulators. Each interferometric modulator includes a pair of reflective layers positioned at a variable and controllable distance from each other to form a resonant optical gap with at least one variable dimension. In one embodiment, one of the reflective layers may be moved between two positions. In the first position, referred to herein as the relaxed position, the movable reflective layer is positioned at a relatively large distance from a fixed partially reflective layer. In the second position, referred to herein as the actuated position, the movable reflective layer is positioned more closely adjacent to the partially reflective layer. Incident light that reflects from the two layers interferes constructively or destructively depending on the position of the movable reflective layer, producing either an overall reflective or non-reflective state for each pixel.
[0080] The foregoing description details certain embodiments of the invention. It will be appreciated, however, that no matter how detailed the foregoing appears in text, the invention can be practiced in many ways. It should be noted that the use of particular terminology when describing certain features or aspects of the invention should not be taken to imply that the terminology is being re-defined herein to be restricted to including any specific characteristics of the features or aspects of the invention with which that terminology is associated.

Claims

WHAT IS CLAIMED IS:
1. A method of processing video data, comprising: generating halftone data for a first video frame; generating halftone data for a second video frame; and to reduce at least one visual artifact, selectively copying the halftone data for the first video frame into the halftone data for the second video frame, the selectively copying being based upon a comparison between an adaptive threshold and the change, resulting due to the copying of the data, in the human visual system model-based perceptual error of the halftone data for the second video frame.
2. The method of Claim 1, wherein selectively copying the generated halftone data further comprises: generating output data for the first frame, the output data for the first frame equal to the halftone data for the first frame; generating output data for the second frame, the output data for the second frame equal to the halftone data for the second frame; and to reduce at least one visual artifact, selectively copying the output data for the first video frame into the output data for the second video frame, the selectively copying being based upon a comparison between an adaptive threshold and the change, resulting due to the copying of the data, in the human visual system model- based perceptual error of the output data for the second video frame.
3. The method of Claim 2, wherein selectively copying the output data comprises selectively copying a value of a pixel in the output data for the first video frame into the corresponding location in the output data for the second video frame, the selectively copying being based upon a comparison between an adaptive threshold and the change, resulting due to this copying of the data, in the human visual system model-based perceptual error of the output data for the second video frame.
4. The method of Claim 3, wherein the pixel in the output data for the second video frame is located at a location corresponding to the location of the pixel in the output data for the first video frame.
5. The method of Claim 3, wherein selectively copying a value of a pixel in the output data for the first video frame comprises scanning each pixel in the output data for the second video frame and selectively copying the pixel in the output data for the first video frame into a pixel in the output data for the second video frame, the selectively copying being based upon a comparison between an adaptive threshold and the change, resulting due to this copying of the data, in the human visual system model-based perceptual error of the output data for the second video frame.
6. The method of Claim 5, wherein scanning each pixel in the output data for the second video frame and selectively copying the pixel in the output data for the first video frame is repeated until the number of copying made during a previous round of scanning meets a predetermined criteria.
7. The method of Claim 6, wherein scanning each pixel in the output data for the second video frame and selectively copying the pixel in the output data for the first video frame is repeated until no copying is made during a previous round of scanning.
8. The method of Claim 5, wherein the pixels in the output data for the second video frame are processed in a raster scan order starting from top to bottom and from left to right.
9. The method of Claim 1, wherein the visual artifact comprises flicker.
10. The method of Claim 1, wherein the video data comprises a time sequence of video frames comprising the first and second video frame.
11. The method of Claim 1, wherein the first video frame immediately precedes the second video frame.
12. The method of Claim 1, wherein the first video frame is the next frame after the second video frame.
13. The method of Claim 1, wherein selectively copying the generated halftone data further comprises selectively copying the halftone data for the first video frame into the halftone data for the second video frame based upon a comparison between an adaptive threshold and the change, resulting due to the copying of the data, in the human visual system model-based perceptual error of the halftone data for the second video frame, the adaptive threshold varying in response to previous modifications made to the generated halftone data for the second video frame.
14. An apparatus for processing video data, comprising: a memory device having stored therein at least halftone data for a first and second video frame; and a processor that is configured to communicate with said memory device and is configured to reduce at least one visual artifact by selectively copying the halftone data for the first video frame into the halftone data for the second video frame, the selectively copying being based upon a comparison between an adaptive threshold and the change, resulting due to this copying of the data, in the human visual system model-based perceptual error of the halftone data for the second video frame.
15. An apparatus for processing video data, comprising: means for generating halftone data for a first video frame; means for generating halftone data for a second video frame; and means for reducing at least one visual artifact by selectively copying the halftone data for the first video frame into the halftone data for the second video frame, the selectively copying being based upon a comparison between an adaptive threshold and the change, resulting due to the copying of the data, in the human visual system model-based perceptual error of the halftone data for the second video frame.
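The selective-copy loop recited in Claims 2–8 and 13 can be sketched as follows. This is a minimal, hypothetical NumPy sketch under stated assumptions, not the patented implementation: a 3×3 box low-pass stands in for the human visual system model (a real system would use a contrast-sensitivity filter such as Näsänen's, cited below), and the adaptive threshold is assumed to relax by a fixed step after each accepted copy. All function names and parameters are illustrative.

```python
import numpy as np

def _hvs_blur(img, k=3):
    """Stand-in HVS model: k x k box low-pass filter with edge padding.
    (A real system would use a contrast-sensitivity-based filter.)"""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def perceptual_error(halftone, gray):
    """HVS-model-based perceptual error of a halftone frame against its
    continuous-tone original: squared error between the filtered images."""
    diff = _hvs_blur(halftone) - _hvs_blur(gray.astype(float))
    return float(np.sum(diff * diff))

def reduce_flicker(out_prev, half_curr, gray_curr,
                   base_threshold=0.0, threshold_step=1e-4):
    """Selectively copy pixels of the previous output frame into the current
    halftone, accepting a copy when the resulting change in perceptual error
    stays within an adaptive threshold (cf. Claims 2-8 and 13)."""
    out_curr = half_curr.copy()
    threshold = base_threshold
    err = perceptual_error(out_curr, gray_curr)
    while True:
        copies = 0
        for i in range(out_curr.shape[0]):        # raster scan: top to bottom
            for j in range(out_curr.shape[1]):    # and left to right (Claim 8)
                if out_curr[i, j] == out_prev[i, j]:
                    continue
                old = out_curr[i, j]
                out_curr[i, j] = out_prev[i, j]   # tentative copy
                new_err = perceptual_error(out_curr, gray_curr)
                if new_err - err <= threshold:    # accept: error change within threshold
                    err = new_err
                    copies += 1
                    threshold += threshold_step   # adapt after each modification (Claim 13)
                else:
                    out_curr[i, j] = old          # reject: undo the copy
        if copies == 0:                           # stop when a full scan copies nothing
            break                                 # (Claim 7's termination criterion)
    return out_curr
```

With `base_threshold=0.0`, a copy is accepted only when it does not increase the perceptual error of the current frame, so static regions converge to the previous frame's output (suppressing flicker) while regions where the copy would be visible are left untouched.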
PCT/US2010/037315 2009-06-05 2010-06-03 System and method for improving the quality of halftone video using an adaptive threshold WO2010141767A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/794,648 US8330770B2 (en) 2009-06-05 2010-06-04 System and method for improving the quality of halftone video using an adaptive threshold

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US18453709P 2009-06-05 2009-06-05
US61/184,537 2009-06-05

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/794,648 Continuation US8330770B2 (en) 2009-06-05 2010-06-04 System and method for improving the quality of halftone video using an adaptive threshold

Publications (1)

Publication Number Publication Date
WO2010141767A1 true WO2010141767A1 (en) 2010-12-09

Family

ID=42537889

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/US2010/037314 WO2010141766A1 (en) 2009-06-05 2010-06-03 System and method for improving the quality of halftone video using a fixed threshold
PCT/US2010/037315 WO2010141767A1 (en) 2009-06-05 2010-06-03 System and method for improving the quality of halftone video using an adaptive threshold

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/US2010/037314 WO2010141766A1 (en) 2009-06-05 2010-06-03 System and method for improving the quality of halftone video using a fixed threshold

Country Status (2)

Country Link
US (2) US8305394B2 (en)
WO (2) WO2010141766A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010141766A1 (en) 2009-06-05 2010-12-09 Qualcomm Mems Technologies, Inc. System and method for improving the quality of halftone video using a fixed threshold
US20140168040A1 (en) * 2012-12-17 2014-06-19 Qualcomm Mems Technologies, Inc. Motion compensated video halftoning
US10237523B2 (en) 2013-05-07 2019-03-19 Dolby Laboratories Licensing Corporation Digital point spread function (DPSF) and dual modulation projection (including lasers) using DPSF


Family Cites Families (41)

Publication number Priority date Publication date Assignee Title
US4709995A (en) 1984-08-18 1987-12-01 Canon Kabushiki Kaisha Ferroelectric display panel and driving method therefor to achieve gray scale
US5068649A (en) 1988-10-14 1991-11-26 Compaq Computer Corporation Method and apparatus for displaying different shades of gray on a liquid crystal display
US4982184A (en) 1989-01-03 1991-01-01 General Electric Company Electrocrystallochromic display and element
KR100202246B1 1989-02-27 1999-06-15 William B. Kempler Apparatus and method for digital video system
US4954789A (en) 1989-09-28 1990-09-04 Texas Instruments Incorporated Spatial light modulator
EP0467048B1 (en) 1990-06-29 1995-09-20 Texas Instruments Incorporated Field-updated deformable mirror device
US5233459A (en) 1991-03-06 1993-08-03 Massachusetts Institute Of Technology Electric display device
EP0610665B1 (en) 1993-01-11 1997-09-10 Texas Instruments Incorporated Pixel control circuitry for spatial light modulator
US6674562B1 (en) 1994-05-05 2004-01-06 Iridigm Display Corporation Interferometric modulation of radiation
US5475397A (en) 1993-07-12 1995-12-12 Motorola, Inc. Method and apparatus for reducing discontinuities in an active addressing display system
CA2137059C (en) 1993-12-03 2004-11-23 Texas Instruments Incorporated Dmd architecture to improve horizontal resolution
US6040937A (en) 1994-05-05 2000-03-21 Etalon, Inc. Interferometric modulation
US7123216B1 (en) 1994-05-05 2006-10-17 Idc, Llc Photonic MEMS and structures
US6680792B2 (en) 1994-05-05 2004-01-20 Iridigm Display Corporation Interferometric modulation of radiation
US6571016B1 (en) * 1997-05-05 2003-05-27 Microsoft Corporation Intra compression of pixel blocks using predicted mean
US5790548A (en) 1996-04-18 1998-08-04 Bell Atlantic Network Services, Inc. Universal access multimedia data network
US6480177B2 (en) 1997-06-04 2002-11-12 Texas Instruments Incorporated Blocked stepped address voltage for micromechanical devices
GB9803441D0 (en) 1998-02-18 1998-04-15 Cambridge Display Tech Ltd Electroluminescent devices
DE19811022A1 (en) 1998-03-13 1999-09-16 Siemens Ag Active matrix LCD
JP3403635B2 (en) 1998-03-26 2003-05-06 富士通株式会社 Display device and method of driving the display device
JP4392970B2 (en) 2000-08-21 2010-01-06 キヤノン株式会社 Display element using interferometric modulation element
US7116287B2 (en) 2001-05-09 2006-10-03 Eastman Kodak Company Drive for cholesteric liquid crystal displays
KR100816336B1 (en) 2001-10-11 2008-03-24 삼성전자주식회사 a thin film transistor array panel and a method of the same
WO2003044765A2 (en) 2001-11-20 2003-05-30 E Ink Corporation Methods for driving bistable electro-optic displays
US6574033B1 (en) 2002-02-27 2003-06-03 Iridigm Display Corporation Microelectromechanical systems device and method for fabricating same
US7256795B2 (en) 2002-07-31 2007-08-14 Ati Technologies Inc. Extended power management via frame modulation control
US6741384B1 (en) 2003-04-30 2004-05-25 Hewlett-Packard Development Company, L.P. Control of MEMS and light modulator arrays
US7474442B2 (en) * 2003-11-05 2009-01-06 Stmicroelectronics, Inc. High performance coprocessor for color error diffusion halftoning
US7161728B2 (en) 2003-12-09 2007-01-09 Idc, Llc Area array modulation and lead reduction in interferometric modulators
US7142346B2 (en) 2003-12-09 2006-11-28 Idc, Llc System and method for addressing a MEMS display
EP1571485A3 (en) 2004-02-24 2005-10-05 Barco N.V. Display element array with optimized pixel and sub-pixel layout for use in reflective displays
US7327510B2 (en) 2004-09-27 2008-02-05 Idc, Llc Process for modifying offset voltage characteristics of an interferometric modulator
US7054051B1 (en) 2004-11-26 2006-05-30 Alces Technology, Inc. Differential interferometric light modulator and image display device
US8947465B2 (en) 2004-12-02 2015-02-03 Sharp Laboratories Of America, Inc. Methods and systems for display-mode-dependent brightness preservation
US8310442B2 (en) 2005-02-23 2012-11-13 Pixtronix, Inc. Circuits for controlling display apparatus
US20070153025A1 (en) * 2005-12-29 2007-07-05 Mitchell Owen R Method, apparatus, and system for encoding and decoding a signal on a viewable portion of a video
US7952545B2 (en) * 2006-04-06 2011-05-31 Lockheed Martin Corporation Compensation for display device flicker
US7777715B2 (en) 2006-06-29 2010-08-17 Qualcomm Mems Technologies, Inc. Passive circuits for de-multiplexing display inputs
US7403180B1 (en) 2007-01-29 2008-07-22 Qualcomm Mems Technologies, Inc. Hybrid color synthesis for multistate reflective modulator displays
US8451298B2 (en) 2008-02-13 2013-05-28 Qualcomm Mems Technologies, Inc. Multi-level stochastic dithering with noise mitigation via sequential template averaging
WO2010141766A1 (en) 2009-06-05 2010-12-09 Qualcomm Mems Technologies, Inc. System and method for improving the quality of halftone video using a fixed threshold

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
US20050157791A1 (en) * 2004-01-20 2005-07-21 Eastman Kodak Company System and method for video tone scale reduction

Non-Patent Citations (8)

Title
Brown, M. S. et al., "Fast halftoning of MPEG video for bi-tonal displays," Proceedings of the 2003 International Conference on Image Processing (ICIP 2003), Barcelona, Spain, 14-17 September 2003, vol. 1, pp. 445-448, XP010670266, ISBN 978-0-7803-7750-9 *
Hsu, Chao-Yong et al., "Video Halftoning Preserving Temporal Consistency," 2007 IEEE International Conference on Multimedia and Expo (ICME 2007), 1 July 2007, pp. 1938-1941, XP031124031, ISBN 978-1-4244-1016-3 *
Hsu, Chao-Yung et al., "Compression of halftone video for electronic paper," 15th IEEE International Conference on Image Processing (ICIP 2008), 12 October 2008, pp. 1600-1603, XP031374323, ISBN 978-1-4244-1765-0 *
Gotsman, Craig, "Halftoning of image sequences," The Visual Computer, vol. 9, no. 5, 1993, pp. 255-266, XP002596289, ISSN 0178-2789, DOI 10.1007/BF01908448 *
Damera-Venkata, N. et al., "Adaptive Threshold Modulation for Error Diffusion Halftoning," IEEE Transactions on Image Processing, vol. 10, no. 1, January 2001, pp. 104-116, XP000998885, ISSN 1057-7149, DOI 10.1109/83.892447 *
Näsänen, R., "Visibility of halftone dot textures," IEEE Transactions on Systems, Man, and Cybernetics, vol. 14, no. 6, 1984, pp. 920-924, XP011464953, DOI 10.1109/TSMC.1984.6313320
Sun, Zhaohui, "A Method to Generate Halftone Video," 2005 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2005), vol. 2, 18 March 2005, pp. 565-568, XP010790702, ISBN 978-0-7803-8874-1, DOI 10.1109/ICASSP.2005.1415467 *
Sun, Zhaohui, "Video halftoning," IEEE Transactions on Image Processing, vol. 15, no. 3, March 2006, pp. 678-686, XP002596418, ISSN 1057-7149, DOI 10.1109/TIP.2005.863023 *

Also Published As

Publication number Publication date
US20110032417A1 (en) 2011-02-10
US20110032427A1 (en) 2011-02-10
WO2010141766A1 (en) 2010-12-09
US8305394B2 (en) 2012-11-06
US8330770B2 (en) 2012-12-11

Similar Documents

Publication Publication Date Title
CN111194458B (en) Image signal processor for processing images
US11216914B2 (en) Video blind denoising method based on deep learning, computer equipment and storage medium
US20200349680A1 (en) Image processing method and device, storage medium and electronic device
US8149229B2 (en) Image apparatus for processing 3D images and method of controlling the same
RU2511574C2 (en) Multilevel stochastic pseudo mixing with noise suppression by successive averaging with help of patterns
JP4886583B2 (en) Image enlargement apparatus and method
US11257189B2 (en) Electronic apparatus and image processing method thereof
JP2005341527A (en) Gradation correcting apparatus, gradation correcting program, mobile terminal device and personal computer
KR101295649B1 (en) Image processing apparatus, image processing method and storage medium
US20210327027A1 (en) Electronic apparatus and control method thereof
JP2009509418A (en) Classification filtering for temporal prediction
CN110062282A (en) A kind of super-resolution video method for reconstructing, device and electronic equipment
US8330770B2 (en) System and method for improving the quality of halftone video using an adaptive threshold
JP2005516315A5 (en)
WO2008020506A1 (en) Image display control device, image display method, and information storage medium
KR20070040393A (en) Video processor comprising a sharpness enhancer
CN111612721B (en) Image restoration model training method and device and satellite image restoration method and device
CN111161685B (en) Virtual reality display equipment and control method thereof
US7733533B2 (en) Generating threshold values in a dither matrix
US8456494B2 (en) Automated bit sequencing for digital light modulation
CN105224538A (en) The dithering process method and apparatus of image
WO2016071566A1 (en) Variable resolution image capture
EP3062288B1 (en) Method, apparatus and computer program product for reducing chromatic aberrations in deconvolved images
CN114339030A (en) Network live broadcast video image stabilization method based on self-adaptive separable convolution
JP2004304543A (en) Halftoning processing method and halftoning processing system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10730901

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10730901

Country of ref document: EP

Kind code of ref document: A1