US6621595B1 - System and method for enhancing scanned document images for color printing - Google Patents


Info

Publication number
US6621595B1
US6621595B1 (application US09/704,358)
Authority
US
United States
Prior art keywords: digital image, input digital, background, luminance, background threshold
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime, expires
Application number
US09/704,358
Inventor
Jian Fan
Daniel Tretter
Qian Lin
Neerja Raman
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hewlett Packard Development Co LP
Original Assignee
Hewlett Packard Development Co LP
Priority to US09/704,358 priority Critical patent/US6621595B1/en
Application filed by Hewlett Packard Development Co LP filed Critical Hewlett Packard Development Co LP
Priority to DE60137704T priority patent/DE60137704D1/en
Priority to PCT/US2001/045426 priority patent/WO2002037832A2/en
Priority to AU2002227115A priority patent/AU2002227115A1/en
Priority to JP2002540441A priority patent/JP4112362B2/en
Priority to EP01993115A priority patent/EP1330917B1/en
Assigned to HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P. reassignment HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HEWLETT-PACKARD COMPANY
Assigned to HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P. reassignment HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: RAMAN, NEERJA, LIN, QIAN, FAN, JIAN, TRETTER, DANIEL R.
Application granted granted Critical
Publication of US6621595B1 publication Critical patent/US6621595B1/en
Adjusted expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46Colour picture communication systems
    • H04N1/56Processing of colour picture signals
    • H04N1/58Edge or detail enhancement; Noise or error suppression, e.g. colour misregistration correction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/40Picture signal circuits
    • H04N1/409Edge or detail enhancement; Noise or error suppression
    • H04N1/4095Correction of errors due to scanning a two-sided document, i.e. show-through correction

Definitions

  • The edge detector 116 of the image enhancing system 102 operates to detect the preserved edges of text contained in the image scan line. From the Gaussian-smoothed luminance data, the edge detector calculates two metrics, D1 and D2.
  • The metric D1 is the sum of absolute values of adjacent luminance differences, and is used to determine whether a pixel of interest is part of a text edge by comparing it to a threshold value T_e. For a pixel of interest with index (3,3), D1 is computed from the adjacent differences of V, where V stands for the Gaussian-smoothed luminance Ȳ.
  • The D1 value is illustrated by FIGS. 2 and 3: FIG. 2 shows the vertical differences and FIG. 3 shows the horizontal differences.
  • The metric D2 is a second-order derivative that is used to determine whether a text edge pixel is on the dark side or the bright side of the edge. The second-order derivative is computed at the pixel of interest with index (3,3), and depending on its value the pixel is classified as being on the dark side or the bright side of the edge.
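The exact D1 and D2 equations are not reproduced in this excerpt; the following C sketch shows one plausible realization of the two metrics over a 5×5 window of smoothed luminance. The window size and neighborhood weights are assumptions, not the patent's exact definition.

```c
#include <stdlib.h>

#define W 5  /* assumed window size; the pixel of interest sits at the center */

/* Sum of absolute values of adjacent luminance differences (horizontal
 * and vertical) over a 5x5 window of Gaussian-smoothed luminance V.
 * A plausible realization of the metric D1. */
int edge_metric_d1(unsigned char V[W][W])
{
    int d1 = 0;
    for (int i = 0; i < W; i++)
        for (int j = 0; j + 1 < W; j++) {
            d1 += abs(V[i][j] - V[i][j + 1]);  /* horizontal differences */
            d1 += abs(V[j][i] - V[j + 1][i]);  /* vertical differences */
        }
    return d1;
}

/* Discrete second-order derivative (Laplacian) at the center pixel, usable
 * to classify an edge pixel as dark-side or bright-side by its sign. */
int edge_metric_d2(unsigned char V[W][W])
{
    int c = W / 2;
    return V[c - 1][c] + V[c + 1][c] + V[c][c - 1] + V[c][c + 1] - 4 * V[c][c];
}
```

With this convention, a flat window yields D1 = D2 = 0, and a bright pixel adjacent to a dark region yields a negative Laplacian.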
  • The threshold value T_e is initially set at a minimum value and is then allowed to float with the maximum value of the metric D1; the threshold value T_e can only be increased. The minimum value of T_e and the parameter k should be determined empirically.
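The adaptive threshold update described above can be sketched as follows; the constants T_MIN and K_SCALE are illustrative placeholders for the empirically determined minimum value of T_e and the parameter k.

```c
#define T_MIN   40    /* assumed empirical minimum for T_e */
#define K_SCALE 0.25  /* assumed scale factor k */

/* Return the updated edge threshold: T_e floats with the maximum observed
 * D1 value but is never decreased below its current value. */
int update_edge_threshold(int t_e, int max_d1)
{
    int candidate = (int)(K_SCALE * max_d1);
    return candidate > t_e ? candidate : t_e;
}
```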
  • The background luminance estimator 118 of the image enhancing system 102 operates to compute an estimated background luminance Y_b for each image scan line, which is used by the image enhancer 120 for see-through removal. Initially, the estimated background luminance Y_b is set to 255, and it is then updated once per image scan line as described in detail below. The estimate is computed using two histograms, both updated for each new pixel: H_y, a histogram of all pixel luminance values, and H_e, a histogram of the text edge pixels on the lighter side of edges.
  • The operation of the background luminance estimator 118 to update the estimated background luminance Y_b is described with reference to FIGS. 4 and 5. First, copies of the histograms H_e and H_y are made; the original histograms continue to be updated pixel by pixel, while the copies are used to compute the updated estimated background luminance Y_b.
  • Steps 406, 408, 410, 412 and 414 are then performed on the copy of the histogram H_e.
  • At step 406, the total edge pixel count is compared to a threshold C_e, which may be, for example, 100. If the count is greater than C_e, the process proceeds to step 408, at which a windowed averaging (for example, with a window size of 7) is applied to smooth out small local noise; otherwise, the process skips to step 410.
  • At step 410, an object/background threshold τ_e is computed: a luminance value on the histogram H_e that optimally separates the dominant values of object and background intensities.
  • The object/background threshold τ_e may be computed using the Kittler-Illingworth algorithm, as described in J. Kittler and J. Illingworth, “Minimum Error Thresholding”, Pattern Recognition, Vol. 19, No. 1, pp. 41-47, 1986, which is specifically incorporated herein by reference.
  • A histogram h(g) that summarizes the gray-level distribution of the image can be viewed as an estimate of the probability density function p(g) of the mixture population comprising the gray levels of object and background pixels. The Kittler-Illingworth algorithm thresholds the gray-level data at an arbitrary level T and models each of the two resulting pixel populations by a normal density h(g|i,T) with mean μ_i(T), standard deviation σ_i(T) and a priori probability P_i(T). J(T) is a criterion function that characterizes the average performance of replacing a gray level g by a correct binary value.
  • J(T) = 1 + 2·[P_1(T)·log σ_1(T) + P_2(T)·log σ_2(T)] − 2·[P_1(T)·log P_1(T) + P_2(T)·log P_2(T)]
  • The optimal threshold τ, or minimum error threshold, is the value of T that minimizes the above criterion function.
  • A background peak eWhite on the histogram H_e is then determined, at step 412; eWhite corresponds to the peak in the upper range between the object/background threshold τ_e and 255 on the histogram H_e.
  • At step 414, a spread index S_e is calculated from the values I_low and I_high, where the I_low value is located on the left side of the eWhite value and the I_high value is located on the right side of the eWhite value.
  • Steps 502, 504, 506 and 508 are performed on the copy of the histogram H_y.
  • At step 502, the total pixel count is compared to a threshold C_y, which may be, for example, 10,000. If the count is greater than C_y, the process proceeds to step 504, at which a windowed averaging (for example, with a window size of 7) is applied to smooth out small local noise; otherwise, the process skips to step 506.
  • At step 506, an object/background threshold τ_y is computed: a luminance value on the histogram H_y that optimally separates the dominant values of object and background intensities.
  • At step 508, a background peak aWhite on the histogram H_y is determined; aWhite corresponds to the peak in the upper range between the object/background threshold τ_y and 255 on the histogram H_y.
  • Although steps 406-414 and 502-508 have been described in a sequential order, these steps do not have to be performed in the described order; steps 406-414 may be interleaved with steps 502-508, and some of the steps 502-508 may be performed simultaneously with steps 406-414.
  • A limiting function f with a minimum acceptable value MIN_WHITE may then be applied:
  • f(v) = v, if v > MIN_WHITE; f(v) = (255.0 − MIN_WHITE)·exp(−(r·(MIN_WHITE/255 − v/255))²) + MIN_WHITE, otherwise
  • Other functions characterized by three distinct regions may also be used to achieve a similar effect. The three distinct regions are as follows: 1) a linear region from 255 down to MIN_WHITE for effective see-through removal; 2) a flat holding-up region from MIN_WHITE down to a set value; and 3) a gradual blackout region.
  • At step 512, weighted averaging is applied to the background peaks eWhite and aWhite to derive an initial background luminance white(n) for the current line n.
  • The estimated background luminance Y_b is then calculated using the following recursive formula:
  • Y_b(n+1) = α(n)·Y_b(n) + (1 − α(n))·white(n), where α(n) = α_max − (α_max − α_min)·exp(−g·n/H),
  • n is the current line number, H is the height (in pixels) of the image, and g is the adaptation boost.
  • The above formula introduces a spatially dependent (scan-line) smoothing of the see-through effect in the vertical direction.
  • The image enhancer 120 of the image enhancing system 102 operates to enhance a given document image by performing text edge sharpening, text edge darkening, color fringe removal, and see-through removal.
  • The enhancement operations performed by the image enhancer differ for edge and non-edge pixels of the image.
  • The parameter T_c is a “colorful” threshold for determining when color fringe removal is applied, and the parameter k is a positive value that determines the strength of the edge sharpening.
  • The function Q for see-through removal can take several forms; the simplest is the following two-segment linear function.
  • Q(x, Y_b) = 255, if x > Y_b; Q(x, Y_b) = x·255/Y_b, otherwise
  • A more complex form is the following continuous non-linear function, which is commonly used for tone modification:
  • Q(x, Y_b) = 255, if x > Y_b; Q(x, Y_b) = 255·[1.0 − 0.5·(2·(Y_b − x)/Y_b)^α]^(1/γ), else if x > Y_b/2; Q(x, Y_b) = 255·[0.5·(2·x/Y_b)^α]^(1/γ), otherwise
  • The parameters α and γ control the contrast and the gamma, respectively.
  • The function Q(x, Y_b) can be implemented as a look-up table and updated for each scan line.
  • The enhanced image may be output to a printer 124 or a display 126, as shown in FIG. 1.
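Building the per-scan-line look-up table for the two-segment linear form of Q is a small loop; the following C sketch shows it for an 8-bit luminance channel.

```c
/* See-through removal as a 256-entry look-up table built from the
 * two-segment linear Q: luminance above the background estimate yb is
 * forced to white, luminance below it is linearly rescaled.
 * yb must be in 1..255. */
void build_seethrough_lut(unsigned char lut[256], int yb)
{
    for (int x = 0; x < 256; x++) {
        if (x > yb)
            lut[x] = 255;                            /* background: white */
        else
            lut[x] = (unsigned char)(x * 255 / yb);  /* rescale upward */
    }
}
```

Applying the table is then a single indexed load per non-edge pixel, which is why the patent suggests the look-up-table implementation.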
  • Although the image enhancing system 102 has been described as a separate system, the system 102 may be a sub-system that is part of a larger system, such as a copying machine (not shown).
  • At step 702, a scan line of a digital image is received; the digital image may be a scanned image of a document that contains text and/or pictorial content.
  • The image scan line is converted from the original color space, e.g., an RGB space, to a luminance/chrominance space, e.g., a YCrCb space.
  • Low-pass filtering is then applied to the luminance and chrominance channels: a Gaussian filter is applied to the luminance channel, while an averaging filter is applied to both chrominance channels.
  • Next, text edges are detected by identifying edge pixels and then classifying the edge pixels on the basis of whether the pixels are on the dark side or the bright side of the edge. The edge detection involves calculating the D1 and D2 values: the D1 value is used to identify the edge pixels using the adaptive threshold T_e, and the D2 value is used to classify them.
  • The background luminance is then estimated to derive an estimated background luminance Y_b.
  • For the edge pixels, steps 712, 714 and 716 are performed: text enhancement, including edge sharpening and/or edge darkening, is performed at step 712; color fringe removal is performed at step 714; and the edge pixels are converted back to the original color space, e.g., an RGB space, at step 716.
  • For the non-edge pixels, steps 718 and 720 are performed: the non-edge pixels are converted to the original color space at step 718, and see-through removal is then performed at step 720.
  • At step 722, a determination is made whether the current image scan line is the last scan line of the image. If it is, the method ends; otherwise, the method proceeds back to step 702 to receive the next scan line of the image. Steps 702-722 are repeated until the last scan line of the image has been processed.
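The per-pixel dispatch at the heart of the method (steps 712-720) can be sketched in C as below. The helper functions and the Pixel layout are hypothetical stand-ins for the components described in the text, reduced to trivial bodies so the control flow is visible; they are not the patent's pseudo-code.

```c
typedef struct { unsigned char y, cr, cb; int is_edge; } Pixel;

/* Hypothetical per-pixel enhancers, reduced to trivial stand-ins. */
static void sharpen_and_darken(Pixel *p) { if (p->y > 30) p->y -= 30; }
static void remove_color_fringe(Pixel *p) { p->cr = p->cb = 128; }
static unsigned char see_through(unsigned char y, int yb)
{
    return y > yb ? 255 : (unsigned char)(y * 255 / yb);
}

/* Process one scan line: edge pixels get text enhancement and color
 * fringe removal (steps 712-714); non-edge pixels get see-through
 * removal against the background estimate yb (step 720). */
void enhance_scan_line(Pixel *line, int width, int yb)
{
    for (int i = 0; i < width; i++) {
        if (line[i].is_edge) {
            sharpen_and_darken(&line[i]);
            remove_color_fringe(&line[i]);
        } else {
            line[i].y = see_through(line[i].y, yb);
        }
    }
}
```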

Abstract

A system and method for enhancing scanned document images utilizes an estimated background luminance of a given digital image to remove or reduce visual “see-through” noise. The estimated background luminance is dependent on the luminance values of the edge pixels of the detected text edges of the image. In one embodiment, the estimated background luminance is generated using only the edge pixels that are on the lighter side of the detected edges. In addition to the see-through removal, the system and method may further enhance the scanned document images by removing color fringes and sharpening and/or darkening edges of text contained in the images.

Description

FIELD OF THE INVENTION
The invention relates generally to the field of image processing, and more particularly to a system and method for enhancing scanned document images.
BACKGROUND OF THE INVENTION
Two-sided documents, i.e., documents having text and/or pictorial content on both sides of the paper, present challenges for producing quality copies of the original documents. When two-sided documents are scanned in a copy machine or a scanner, visual noise may appear in the copies that was not present on the scanned surfaces of the original documents. The visual noise may be the result of digitally captured text and/or pictorial content printed on the opposite side of a scanned surface. Visual noise may also appear in copies when multiple documents are placed on a scanning device. In this situation, the visual noise may be the result of digitally captured text and/or pictorial content on a document that was adjacent to the scanned document. The appearance of visual noise caused by unwanted text and/or pictorial content will be referred to herein as the “see-through” effect.
The see-through effect is more prevalent for copies of documents having a white or very light color background. In addition, the thickness of the scanned documents may increase the intensity of the visual noise, since thinner paper is more transparent than thicker paper. In general, documents typically contain black characters that are printed on thin white paper. Thus, the quality of document copies can be significantly increased if the visual “see-through” noise can be effectively removed from the copies.
In addition to the see-through problem, another copy quality issue arises when documents are reproduced using color scanners and color printers. This copy quality issue is the appearance of color fringes around the reproduced text. These color fringes are primarily due to scanning and printing errors. The scanning and printing errors cause the edges of the text in the copies to appear to be something other than black. The appearance of color fringes is a significant factor that degrades the overall quality of document copies.
Conventional image enhancement approaches use variations of unsharp-masking to sharpen features in digitally captured document images. For example, according to one method, the sharpness is altered in the reproduction of an electronically encoded natural scene image by applying a filter function that increases maximum local contrast to a predetermined target value and increases all other contrast to an amount proportional thereto. According to another method, an adaptive spatial filter for digital image processing selectively performs spatial high pass filtering of an input digital signal to produce output pixel values having enhanced signal levels using a gain factor that effects a variable degree of enhancement of the input pixels. Thus, the pixels with activity values that are close to an iteratively adjustable activity threshold level will be emphasized significantly less than the pixels with associated activity values that are substantially above the threshold level. In still another method, a spatial filter performs an adaptive edge-enhancement process that enhances the sharpness of edges, which are parts of an image having steep tone gradients.
These conventional image enhancement approaches are effective for their intended purposes. However, these conventional methods do not address the problems of color fringe and see-through. In view of this deficiency, there is a need for a system and method to enhance scanned document images by eliminating or reducing color fringes and visual “see-through” noise.
SUMMARY OF THE INVENTION
A system and method for enhancing scanned document images utilizes an estimated background luminance of a given digital image to remove or reduce visual “see-through” noise. In addition to the see-through removal, the system and method may further enhance the scanned document images by removing color fringes and sharpening and/or darkening edges of text contained in the images. These image enhancements allow the system and method to significantly increase the overall quality of the scanned document images.
In an exemplary embodiment, the system includes an edge detector, a background luminance estimator, and an image enhancer. These components may be implemented in any combination of hardware, firmware and software. In one embodiment, these components are embodied in the system as software that is executable by a processor of the system.
The edge detector operates to detect edges of text contained in a given digital image containing visual noise. The background luminance estimator operates to generate a background threshold that is based on an estimation of the image background luminance. The background threshold is dependent on the luminance values of the edge pixels of the detected edges. In one embodiment, the background threshold is generated using only the edge pixels that are on the lighter side of the detected edges. The image enhancer operates to at least partially remove the visual noise by selectively modifying pixel values of the image using the background threshold. The image enhancer may also perform color fringe removal and text enhancements, such as edge sharpening and edge darkening.
In an exemplary embodiment, the method in accordance with the invention includes the steps of receiving a digital image that contains visual noise, detecting edges of text contained in the digital image, and generating a background threshold based on an estimation of the image background luminance. The background threshold is dependent on the luminance values of the edge pixels of the text edges contained in the image. In one embodiment, the background threshold is generated using only the edge pixels that are on the lighter side of the detected edges. The method further includes a step of modifying pixel values of the image using the background threshold to at least partially remove the visual noise from the image. The method may also include the step of selectively removing color data from the edge pixels to reduce color fringe or the step of enhancing the text contained in the image by sharpening or darkening the text edges.
Other aspects and advantages of the present invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, illustrated by way of example of the principles of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is an illustration of an image enhancing system in accordance with the present invention.
FIG. 2 is an illustration of vertical differences for the luminance channel that are used to detect text edges in accordance with the invention.
FIG. 3 is an illustration of horizontal differences for the luminance channel that are used to detect text edges in accordance with the invention.
FIGS. 4 and 5 are process flow diagrams that illustrate the operation of a background luminance estimator of the image enhancing system in accordance with the invention.
FIG. 6 shows an exemplary limiting function that may be used by the background luminance estimator.
FIG. 7 is a process flow diagram of a method for enhancing scanned document images in accordance with the invention.
DETAILED DESCRIPTION
With reference to FIG. 1, an image enhancing system 102 in accordance with the present invention is shown. The image enhancing system operates to perform a number of image enhancements for scanned document images. These image enhancements include text edge sharpening, text edge darkening, color fringe removal, and “see-through” removal. Color fringe is the appearance of color around the text of a scanned document image that may be due to an image capturing error. See-through is the appearance of visual noise that was captured during the scanning process. The visual noise may be the result of digitally captured text and/or pictorial content printed on the opposite side of the scanned surface. Alternatively, the visual noise may be the result of digitally captured text and/or pictorial content on an adjacent document.
The image enhancing system 102 includes an input interface 104, a volatile memory 106 and a non-volatile memory 108 that are connected to a processor 110. These components 104-110 are computer components that are readily found in a conventional personal computer. The image enhancing system further includes a color space converter 112, a low-pass filter 114, an edge detector 116, a background luminance estimator 118, and an image enhancer 120, which are also connected to the processor 110. Although the components 112-120 are illustrated in FIG. 1 as separate components of the image enhancing system 102, two or more of these components may be integrated, decreasing the number of components included in the image enhancing system. The components 112-120 may be implemented in any combination of hardware, firmware and software. Preferably, the components 112-120 are embodied in the image enhancing system 102 as a software program that performs the functions of the components 112-120 when executed by the processor 110.
The input interface 104 provides a means to receive digital images from an external image capturing device, such as a scanner 122. The input interface may be a USB port, a serial port or any other interface port that is designed to interface the image capturing device to the image enhancing system 102. Alternatively, the input interface may be a network interface to receive digital images from a local network (not shown) or from the Internet (not shown).
The color space converter 112 of the image enhancing system 102 operates to convert a scan line of an input digital image between an original color space, such as an RGB color space, and a luminance/chrominance space, such as a YCrCb space. Although the YCrCb space is preferred, other color spaces that include separate luminance components may be utilized. The color space conversion may be performed using a pre-calculated look-up table, which may be stored in the color space converter 112 or the non-volatile memory 108.
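The conversion described above can be sketched in Python. The patent does not specify the exact transform coefficients; this sketch assumes the full-range ITU-R BT.601 (JPEG-style) YCbCr matrix, and the function names are illustrative only.

```python
# Illustrative RGB <-> YCrCb conversion (full-range BT.601 coefficients,
# an assumption -- the patent does not give the exact matrix).

def rgb_to_ycrcb(r, g, b):
    """Convert one RGB pixel (0-255 per channel) to (Y, Cr, Cb), all 0-255."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    return y, cr, cb

def ycrcb_to_rgb(y, cr, cb):
    """Inverse transform back to RGB."""
    r = y + 1.402 * (cr - 128)
    g = y - 0.714136 * (cr - 128) - 0.344136 * (cb - 128)
    b = y + 1.772 * (cb - 128)
    return r, g, b
```

In practice, as the text notes, both directions would be realized as a pre-calculated look-up table rather than computed per pixel.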
The low-pass filter 114 of the image enhancing system 102 operates to smooth each of the YCrCb components of the image scan line. The low-pass filter applies a Gaussian filtering to the luminance channel Y. The Gaussian filter/mask may be designed according to the following equation.
G[i][j] = k·exp(−β²·((i − c)² + (j − c)²)/c²), 0 ≤ i < M, 0 ≤ j < M,

where M is an odd integer, c = (M − 1)/2 is the center of the mask, and k is a normalization factor such that Σ_{i=0}^{M−1} Σ_{j=0}^{M−1} G[i][j] = 1.0.
An example of such a mask (with M=5 and β=1.1) is shown below:

0.009 0.023 0.032 0.023 0.009
0.023 0.058 0.078 0.058 0.023
0.032 0.078 0.106 0.078 0.032
0.023 0.058 0.078 0.058 0.023
0.009 0.023 0.032 0.023 0.009
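The mask construction above can be sketched as follows (the function name is illustrative). With M=5 and β=1.1 it reproduces the example mask to three decimal places, and β=0 degenerates to the M×M averaging filter that the text applies to the chrominance channels.

```python
import math

def gaussian_mask(M, beta):
    """Build the normalized M x M Gaussian mask G[i][j] from the equation
    G[i][j] = k * exp(-beta^2 * ((i-c)^2 + (j-c)^2) / c^2), c = (M-1)/2."""
    c = (M - 1) / 2.0
    g = [[math.exp(-beta ** 2 * ((i - c) ** 2 + (j - c) ** 2) / c ** 2)
          for j in range(M)] for i in range(M)]
    total = sum(v for row in g for v in row)  # normalization: entries sum to 1
    return [[v / total for v in row] for row in g]
```

For example, `gaussian_mask(5, 1.1)[2][2]` is approximately 0.106, the center entry of the mask shown above, and every entry of `gaussian_mask(5, 0.0)` equals 1/25.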
For the chrominance channels Cr and Cb, the low-pass filter 114 applies an M×M averaging filter, which is the same size as the Gaussian filter applied to the luminance channel Y. The M×M averaging filter may be seen as a special case of the Gaussian filter with β=0. However, the averaging operation can be performed more efficiently than the Gaussian filtering. Gaussian smoothing, rather than averaging, is applied to the luminance channel Y because it better preserves the locations of text edges in the digital image.
The edge detector 116 of the image enhancing system 102 operates to detect the preserved edges of text contained in the image scan line. From the Gaussian smoothed luminance data, the edge detector calculates two metrics D1 and D2. The metric D1 is the sum of absolute values of adjacent differences. The metric D1 is used to determine whether a pixel of interest is a part of a text edge by comparing it to a threshold value Te. For a pixel of interest with index (3,3), the sum D1 of absolute values of adjacent luminance differences is defined by the following equation.

D1 = |V22 − V12| + |V32 − V22| + |V42 − V32| + |V52 − V42|
   + |V23 − V13| + |V33 − V23| + |V43 − V33| + |V53 − V43|
   + |V24 − V14| + |V34 − V24| + |V44 − V34| + |V54 − V44|
   + |V22 − V21| + |V23 − V22| + |V24 − V23| + |V25 − V24|
   + |V32 − V31| + |V33 − V32| + |V34 − V33| + |V35 − V34|
   + |V42 − V41| + |V43 − V42| + |V44 − V43| + |V45 − V44|,

where the variable V stands for the Gaussian smoothed luminance Ȳ. The D1 value is illustrated by FIGS. 2 and 3. FIG. 2 shows the vertical differences, while FIG. 3 shows the horizontal differences.
The metric D2 is a second-order derivative that is used to determine whether a text edge pixel is on the dark side or the bright side of the edge. The following second order derivative equation may be used to make the determination for a pixel of interest with index (3,3).
D2 = V13 + V53 + V31 + V35 − 4·V33
If D2>0, the pixel is determined to be on the dark side of the edge. Otherwise, the pixel is determined to be on the bright side of the edge.
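The two edge metrics can be sketched as follows (the function name and 0-based indexing are illustrative; the patent's pixel of interest at index (3,3) corresponds to `V[2][2]`).

```python
def edge_metrics(V):
    """Compute D1 and D2 for the center pixel of a 5x5 window V of
    Gaussian-smoothed luminance values."""
    # D1: 12 vertical + 12 horizontal absolute adjacent differences,
    # taken over the middle three columns and middle three rows.
    d1 = sum(abs(V[i + 1][j] - V[i][j]) for j in (1, 2, 3) for i in range(4))
    d1 += sum(abs(V[i][j + 1] - V[i][j]) for i in (1, 2, 3) for j in range(4))
    # D2: discrete second-order derivative at the center pixel.
    d2 = V[0][2] + V[4][2] + V[2][0] + V[2][4] - 4 * V[2][2]
    return d1, d2
```

For a vertical step edge (three dark columns followed by two bright columns), D1 is large and D2 is positive at a dark-side center pixel, consistent with the classification rule above.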
In operation, the threshold value Te is initially set at a minimum value. To make the threshold value Te adaptive to some extent, it is allowed to float with the maximum value of the metric D1. The maximum value D1max is updated for every pixel, and for each image scan line the threshold is updated to Te = k·D1max if Te < k·D1max. Thus, the threshold value Te can only increase. The minimum value of Te and the parameter k should be determined empirically.
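The per-line threshold update can be sketched as below. The patent states that the minimum threshold and the parameter k are determined empirically, so the default values here are placeholders, not values from the source.

```python
def update_threshold(te, d1_max, k=0.2, te_min=50):
    """Per-scan-line update of the edge threshold Te; Te can only increase.
    te_min and k are illustrative placeholders (empirical per the text)."""
    te = max(te, te_min)          # enforce the minimum value of Te
    if te < k * d1_max:           # float upward with the observed D1 maximum
        te = k * d1_max
    return te
```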
The background luminance estimator 118 of the image enhancing system 102 operates to compute an estimated background luminance Yb for each image scan line, which is used by the image enhancer 120 for see-through removal. Initially, the estimated background luminance Yb is set to 255. The estimated background luminance Yb is then updated once per image scan line as described in detail below. The estimated background luminance Yb is computed using two histograms. The first histogram is a histogram of all pixel luminance values, and is denoted Hy. The second histogram is a histogram of text edge pixels on the lighter side of edges, and is denoted He. Both histograms are updated for each new pixel.
The operation of the background luminance estimator 118 to update the estimated background luminance Yb is described with reference to FIGS. 4 and 5. At step 402, the histograms He and Hy are computed. Next, at step 404, copies of the histograms He and Hy are made. The original histograms He and Hy continue to be updated with new pixel data, while the copies are used to compute the updated estimated background luminance Yb.
For the histogram He, steps 406, 408, 410, 412 and 414 are performed. At step 406, a determination is made whether the total edge pixel count (pixels on the lighter side of detected edges) is greater than a given threshold Ce. As an example, the threshold Ce may be 100. If the total edge pixel count is greater than the threshold Ce, the process proceeds to step 408, at which a windowed averaging is applied to smooth out small local noise. As an example, a window size of 7 may be used for the averaging. However, if the total edge pixel count is not greater than the threshold Ce, the process skips to step 410. At step 410, an object/background threshold τe is computed. The object/background threshold τe is a luminance value on the histogram He that optimally separates the dominant values of object and background intensities.
The object/background threshold τe may be computed using the Kittler-Illingworth algorithm, as described in J. Kittler and J. Illingworth, "Minimum Error Thresholding", Pattern Recognition, Vol. 19, No. 1, pp. 41-47, 1986, which is specifically incorporated herein by reference. For a grayscale image with pixel value g in the range [0, n], a histogram h(g) that summarizes the gray level distribution of the image can be viewed as an estimate of the probability density function p(g) of the mixture population comprising gray levels of object and background pixels. Assuming that each of the two components (object and background) p(g/i) is a Gaussian distribution with mean μi, standard deviation σi and a priori probability Pi, the probability density function p(g) can be expressed by the following equation.

p(g) = Σ_{i=1}^{2} Pi·p(g/i),

where

p(g/i) = (1/(√(2π)·σi))·exp(−(g − μi)²/(2σi²)).
For a given p(g/i) and Pi, there exists a threshold τ that satisfies the following conditions.
For g≦τ, P1p(g/1)>P2p(g/2)
For g>τ, P1p(g/1)<P2p(g/2)
However, the parameters μi, σi and Pi are unknown.
The Kittler-Illingworth algorithm utilizes an arbitrary level T to threshold the gray level data and model each of the resulting pixel populations by a normal density h(g|i,T) with parameters μi(T), σi(T) and Pi(T), which are defined by the following equations.

Pi(T) = Σ_{g=a}^{b} h(g),

μi(T) = [Σ_{g=a}^{b} h(g)·g] / Pi(T), and

σi²(T) = [Σ_{g=a}^{b} (g − μi(T))²·h(g)] / Pi(T),

where a = 0 and b = T for i = 1, and a = T + 1 and b = n for i = 2.
These definitions are used to express a criterion function J(T) which characterizes the average performance of replacing a gray level g by a correct binary value. The criterion function J(T) is defined by the following equation.

J(T) = Σ_{g=0}^{T} h(g)·{[(g − μ1(T))/σ1(T)]² + 2·log σ1(T) − 2·log P1(T)}
     + Σ_{g=T+1}^{n} h(g)·{[(g − μ2(T))/σ2(T)]² + 2·log σ2(T) − 2·log P2(T)}
Substituting the equations for μi(T), σi(T) and Pi(T) into the criterion function J(T) yields the following equation.

J(T) = 1 + 2·[P1(T)·log σ1(T) + P2(T)·log σ2(T)] − 2·[P1(T)·log P1(T) + P2(T)·log P2(T)]
The optimal threshold τ, or a minimum error threshold, is determined by finding a value that minimizes the above equation.
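The minimization above can be sketched with a straightforward (unoptimized) scan over all candidate thresholds. The function name is illustrative; the histogram is normalized internally so that the Pi terms are probabilities.

```python
import math

def min_error_threshold(hist):
    """Kittler-Illingworth minimum-error threshold: return the T that
    minimizes the criterion J(T) for histogram hist[0..n]."""
    n = len(hist) - 1
    total = float(sum(hist))
    h = [v / total for v in hist]  # normalize counts to probabilities
    best_t, best_j = None, float("inf")
    for t in range(n):
        p1 = sum(h[: t + 1])
        p2 = 1.0 - p1
        if p1 <= 0.0 or p2 <= 0.0:
            continue
        mu1 = sum(g * h[g] for g in range(t + 1)) / p1
        mu2 = sum(g * h[g] for g in range(t + 1, n + 1)) / p2
        var1 = sum((g - mu1) ** 2 * h[g] for g in range(t + 1)) / p1
        var2 = sum((g - mu2) ** 2 * h[g] for g in range(t + 1, n + 1)) / p2
        if var1 <= 0.0 or var2 <= 0.0:
            continue  # a single-valued population has no valid Gaussian model
        s1, s2 = math.sqrt(var1), math.sqrt(var2)
        j = (1.0 + 2.0 * (p1 * math.log(s1) + p2 * math.log(s2))
                 - 2.0 * (p1 * math.log(p1) + p2 * math.log(p2)))
        if j < best_j:
            best_t, best_j = t, j
    return best_t
```

For a clearly bimodal histogram, the returned threshold falls in the valley between the two modes.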
After the object/background threshold τe is computed, a background peak eWhite on the histogram He is determined, at step 412. The background peak eWhite corresponds to the peak in the upper range between the object/background threshold τe and 255 on the histogram He. At step 414, a spread index Se is calculated. The spread index Se is defined as Se=(Ihigh−Ilow)/256, where Ihigh and Ilow are the high and low luminance values at which the distribution falls to half of the peak value. The Ilow value is located on the left side of the eWhite value, while the Ihigh value is located on the right side of the eWhite value.
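Steps 412 and 414 can be sketched as follows. This is a discrete approximation: the patent does not specify whether Ilow and Ihigh are interpolated, so this sketch simply walks outward from the peak until the count drops to half the peak height. The function name is illustrative.

```python
def peak_and_spread(hist, tau):
    """Locate the background peak of hist in the range (tau, 255] and the
    spread index Se measured at half of the peak height."""
    white = max(range(tau + 1, 256), key=lambda g: hist[g])  # background peak
    half = hist[white] / 2.0
    lo = white
    while lo > 0 and hist[lo] > half:
        lo -= 1  # walk left (Ilow side) until the count falls to half the peak
    hi = white
    while hi < 255 and hist[hi] > half:
        hi += 1  # walk right (Ihigh side) until the count falls to half the peak
    return white, (hi - lo) / 256.0
```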
For the histogram Hy, steps 502, 504, 506 and 508 are performed. At step 502, a determination is made whether the total pixel count is greater than a given threshold Cy. As an example, the threshold Cy may be 10,000. If the total edge pixel count is greater than the threshold Cy, the process proceeds to step 504, at which a windowed averaging is then applied to smooth out small local noise. As an example, a window size of 7 may be used for the averaging. However, if the total pixel count is not greater than the threshold Cy, the process skips to step 506. At step 506, an object/background threshold τy is computed. Similar to the object/background threshold τe, the object/background threshold τy is a luminance value on the histogram Hy that optimally separates the dominant values of object and background intensities. At step 508, a background peak aWhite on the histogram Hy is determined. The background peak aWhite corresponds to the peak in the upside range between the object/background threshold τy and 255 on the histogram Hy.
Although the steps 406-414 and 502-508 have been described in a sequential order, these steps do not have to be performed in the described order. The steps may be performed such that steps 406-414 are interleaved with steps 502-508. Alternatively, some of the steps 502-508 may be performed simultaneously with steps 406-414.
Next, at step 510, a minimum acceptable value MIN_WHITE is established for the background peaks eWhite and aWhite by embedding MIN_WHITE in a non-linear function ƒ. That is, eWhite′=ƒ(eWhite) and aWhite′=ƒ(aWhite). The simplest such function is the two-segment linear function defined below.

ƒ(v) = v,          if v > MIN_WHITE
ƒ(v) = MIN_WHITE,  otherwise
However, for images with mostly black figures and few unprinted regions, the background peaks eWhite and aWhite are usually far below the minimum acceptable value. In these cases, it may be desirable to reduce the effect of the see-through removal that will be performed by the image enhancer 120 of the system 102, or even not to apply the see-through removal at all. This can be effectively realized by functions such as the following limiting function.

ƒ(v) = v,                                                            if v > MIN_WHITE
ƒ(v) = (255.0 − MIN_WHITE)·exp(−(r·(MIN_WHITE/255)·(v/255))²) + MIN_WHITE,  otherwise
A limiting function ƒ(v) with MIN_WHITE=220 and r=3.5 is shown in FIG. 6. Other functions characterized by three distinct regions may also be used to achieve a similar effect. The three distinct regions are as follows: 1) a linear region from 255 down to MIN_WHITE for effective see-through removal; 2) a flat holding-up region from MIN_WHITE down to a set value; and 3) a gradual blackout region.
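The limiting function can be transcribed directly, with the FIG. 6 parameters as defaults (the function name is illustrative). Note how the three regions emerge: above MIN_WHITE the function is the identity; just below MIN_WHITE the exponential term is negligible, so the output holds near MIN_WHITE; and for very dark inputs the output ramps back toward 255, which drives the see-through removal toward a no-op for mostly-black pages, as described above.

```python
import math

def f_limit(v, min_white=220.0, r=3.5):
    """Limiting function f(v) with the three regions described in the text."""
    if v > min_white:
        return v  # linear region: effective see-through removal
    # holding / blackout regions: stay near MIN_WHITE, then approach 255
    # (disabling see-through removal) as v falls toward 0
    a = r * (min_white / 255.0) * (v / 255.0)
    return (255.0 - min_white) * math.exp(-a * a) + min_white
```

With the defaults, `f_limit(240)` returns 240 (linear region), `f_limit(210)` is approximately 220 (holding region), and `f_limit(0)` is exactly 255 (see-through removal disabled).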
Next, at step 512, weighted-averaging is applied to the background peaks eWhite and aWhite using the following formula to derive an initial background luminance white(n) for the current line n.
white(n) = p·eWhite′ + (1 − p)·aWhite′,

where

p = 0.8·(1 − Se) + 0.2, if the number of edge pixels in He > Ce
p = 0.2,               otherwise
At step 514, the estimated background luminance Yb is calculated using the following formula.
Yb(n+1) = β(n)·Yb(n) + (1 − β(n))·white(n);

Yb(0) = 255;

β(n) = βmax − (βmax − βmin)·exp(−g·n/H),

where n is the current line number, H is the height (in pixels) of the image and g is the adaptation boost. As an example, the parameters βmin, βmax, and g may be set as follows: βmin=0.9; βmax=0.995; and g=10. The above formula introduces a scan-line-dependent smoothing of the estimated background, so that the see-through removal varies gradually in the vertical direction.
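Steps 512 and 514 can be sketched together (function names are illustrative; parameter defaults follow the examples given in the text).

```python
import math

def line_white(e_white, a_white, se, edge_count, ce=100):
    """Step 512: weighted average of the two limited background peaks."""
    p = 0.8 * (1.0 - se) + 0.2 if edge_count > ce else 0.2
    return p * e_white + (1.0 - p) * a_white

def update_background(yb, white_n, n, height, b_min=0.9, b_max=0.995, g=10.0):
    """Step 514: recursive update Yb(n+1) = beta(n)*Yb(n) + (1-beta(n))*white(n)."""
    beta = b_max - (b_max - b_min) * math.exp(-g * n / height)
    return beta * yb + (1.0 - beta) * white_n
```

On the first scan line (n=0), β(0) = βmin = 0.9, so starting from Yb(0) = 255 the estimate moves only 10% of the way toward white(0); as n grows, β(n) rises toward βmax and the estimate becomes increasingly stable.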
Turning back to FIG. 1, the image enhancer 120 of the image enhancing system 102 operates to enhance a given document image by performing text edge sharpening, text edge darkening, color fringe removal, and see-through removal. The enhancement operations that are performed by the image enhancer differ for edge and non-edge pixels of the image, as described in the following C-style pseudo-code.

if (D1 > Te) {                          // edge pixel
    Yout = (k + 1)·Y − k·Ȳ;             // unsharp-masking
    if (D2 > 0)                         // pixel on dark side of edge
        Yout = (Yout·3)/4;              // darkening
    if ((C̄r + C̄b) < Tc) {               // low-chroma edge pixel, strip color
        Crout = 0;
        Cbout = 0;
    } else {                            // keep the original color
        Crout = Cr;
        Cbout = Cb;
    }
    (Yout, Crout, Cbout) → (Rout, Gout, Bout);
} else {                                // non-edge pixel
    (Y, Cr, Cb) → (R, G, B);
    // see-through removal / background control
    Rout = Q(R, Yb);
    Gout = Q(G, Yb);
    Bout = Q(B, Yb);
}
In the above pseudo-code, the parameter Tc is a “colorful” threshold for determining the application of color fringe removal, and parameter k is a positive value for determining the strength of the edge sharpening. As an example, Tc=15 and k=2.5 may be used.
The function Q for see-through removal can take several forms. The simplest form is the following two-segment linear function.

Q(x, Yb) = 255,        if x > Yb
Q(x, Yb) = x·255/Yb,   otherwise
A more complex form is the following continuous non-linear function, which is commonly used for tone modification.

Q(x, Yb) = 255,                                      if x > Yb
Q(x, Yb) = 255·[1.0 − 0.5·(2(Yb − x)/Yb)^α]^(1/γ),   else if x > Yb/2
Q(x, Yb) = 255·[0.5·(2x/Yb)^α]^(1/γ),                otherwise
In the above equation, the parameters α and γ control the contrast and the gamma, respectively. In order to increase computational speed, the function Q(x,Yb) can be implemented as a look-up table that is updated for each scan line. After the enhancement operations, the enhanced image may be output to a printer 124 or a display 126, as shown in FIG. 1. Although the image enhancing system 102 has been described as a separate system, the system 102 may be a sub-system that is part of a larger system, such as a copying machine (not shown).
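Both forms of Q, and the per-scan-line look-up table, can be sketched as follows (function names and defaults are illustrative). A useful sanity check: with α = γ = 1 the tone-modification form reduces exactly to the two-segment linear form.

```python
def q_linear(x, yb):
    """Two-segment linear see-through removal function."""
    return 255.0 if x > yb else x * 255.0 / yb

def q_tone(x, yb, alpha=1.0, gamma=1.0):
    """Continuous non-linear (tone-modification) form of Q."""
    if x > yb:
        return 255.0
    if x > yb / 2.0:
        return 255.0 * (1.0 - 0.5 * (2.0 * (yb - x) / yb) ** alpha) ** (1.0 / gamma)
    return 255.0 * (0.5 * (2.0 * x / yb) ** alpha) ** (1.0 / gamma)

def q_lut(yb, q=q_linear):
    """256-entry look-up table for Q, rebuilt once per scan line."""
    return [q(x, yb) for x in range(256)]
```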
A method of enhancing scanned document images in accordance with the invention is described with reference to FIG. 7. At step 702, a scan line of a digital image is received. The digital image may be a scanned image of a document that contains text and/or pictorial content. At step 704, the image scan line is converted from the original color space to a luminance/chrominance space. As an example, the original color space may be an RGB space, and the luminance/chrominance space may be a YCrCb space. At step 706, low-pass filtering is applied to the luminance/chrominance channels. A Gaussian filter is applied to the luminance channel, while an averaging filter is applied to both chrominance channels.
Next, at step 708, text edges are detected by identifying edge pixels and then classifying the edge pixels on the basis of whether the pixels are on the dark side or the bright side of the edge. The edge detection involves calculating D1 and D2 values. The D1 value is used to identify the edge pixels using an adaptive threshold Te. The D2 value is used to classify the edge pixels.
Next, at step 710, the background luminance is estimated to derive an estimated background luminance Yb. For edge pixels, steps 712, 714 and 716 are performed. At step 712, text enhancement is performed on the edge pixels. The text enhancement includes edge sharpening and/or edge darkening. At step 714, color fringe removal is performed for the edge pixels. The edge pixels are then converted to the original color space, e.g., an RGB space, at step 716. For non-edge pixels, steps 718 and 720 are performed. At step 718, the non-edge pixels are converted to the original color space. A see-through removal is then performed on the non-edge pixels, at step 720.
Next, at step 722, a determination is made whether the current image scan line is the last scan line of the image. If it is the last scan line, the method comes to an end. However, if the current image scan line is not the last scan line of the image, the method proceeds back to step 702 to receive the next scan line of the image. The steps 702-722 are repeated until the last scan line of the image has been processed.

Claims (25)

What is claimed is:
1. A method of enhancing digital images comprising:
receiving an input digital image containing visual noise;
detecting edges of a feature within said input digital image;
generating a background threshold based on an estimation of background luminance of said input digital image, said background threshold being dependent on luminance values of edge pixels of said edges; and
selectively modifying pixel values of said input digital image using said background threshold to at least partially remove said visual noise from said input digital image.
2. The method of claim 1 wherein said step of generating said background threshold includes using only said edge pixels that are on a lighter side of said edges of said feature to generate said background threshold.
3. The method of claim 1 wherein said step of generating said background threshold includes locating a first peak on a first histogram of selected luminance values of said edge pixels that likely corresponds to said background luminance.
4. The method of claim 3 wherein said step of locating said first peak on said first histogram includes computing a minimum error threshold of said first histogram.
5. The method of claim 3 wherein said step of generating said background threshold further includes locating a second peak on a second histogram of selected luminance values of pixels of said input digital image that likely corresponds to said background luminance.
6. The method of claim 5 wherein said step of generating said background threshold includes calculating the spreads of luminance distributions of said first and second histograms defined by said first and second peaks.
7. The method of claim 1 wherein said step of receiving, said step of detecting, said step of generating, and said step of selectively modifying are performed for individual scan lines of said input digital image, said background threshold being updated for each scan line of said input digital image.
8. The method of claim 7 wherein said background threshold is dependent on a previously computed background threshold to reduce differences in pixel modification between two adjacent scan lines of said input digital image.
9. The method of claim 1 wherein said step of selectively modifying said pixel values of said input digital image includes selectively removing color data from said edge pixels to reduce color fringe.
10. The method of claim 1 further comprising a step of enhancing said feature of said input digital image by sharpening or darkening said edges of said feature.
11. A system for enhancing an input digital image containing visual noise comprising:
means for detecting edges of a feature within said input digital image;
means for generating a background threshold based on an estimation of background luminance of said input digital image, said background threshold being dependent on luminance values of edge pixels of said edges; and
means for selectively modifying pixel values of said input digital image using said background threshold to at least partially remove said visual noise from said input digital image.
12. The system of claim 11 wherein said generating means is configured to generate said background threshold using only said edge pixels that are on a lighter side of said edges of said feature.
13. The system of claim 11 wherein said generating means is configured to locate a peak on a histogram of selected luminance values of said edge pixels that likely corresponds to said background luminance.
14. The system of claim 13 wherein said generating means is configured to compute a minimum error threshold of said histogram to locate said peak on said histogram.
15. The system of claim 14 wherein said generating means is further configured to calculate the spread of a luminance distribution of said histogram defined by said peak.
16. The system of claim 11 wherein said selectively modifying means is configured to selectively remove color data from said edge pixels to reduce color fringe.
17. The system of claim 11 wherein said detecting means, said generating means and said selectively modifying means are configured to operate on a line-by-line basis such that a single scan line of said input digital image is processed at a time.
18. The system of claim 17 wherein said generating means is configured to update said background threshold for each scan line of said input digital image such that modification of said pixel values of said input digital image for a current scan line of said input digital image is dependent on a previously processed scan line.
19. A program storage medium readable by a computer, tangibly embodying a program of instructions executable by said computer to perform method steps for enhancing an input digital image containing visual noise, said method steps comprising:
detecting edges of a feature within said input digital image;
generating a background threshold based on an estimation of background luminance of said input digital image, said background threshold being dependent on luminance values of edge pixels of said edges; and
selectively modifying pixel values of said input digital image using said background threshold to at least partially remove said visual noise from said input digital image.
20. The program storage medium of claim 19 wherein said step of generating said background threshold includes using only said edge pixels that are on a lighter side of said edges of said feature to generate said background threshold.
21. The program storage medium of claim 19 wherein said step of generating said background threshold includes locating a peak on a histogram of selected luminance values of said edge pixels that likely corresponds to said background luminance.
22. The program storage medium of claim 21 wherein said step of locating said peak on said histogram includes computing a minimum error threshold of said histogram.
23. The program storage medium of claim 21 wherein said step of locating said peak on said histogram includes calculating the spread of a luminance distribution of said histogram defined by said peak.
24. The program storage medium of claim 19 wherein said step of detecting, said step of generating, and said step of selectively modifying are performed for individual scan lines of said input digital image, said background threshold being updated for each scan line of said input digital image.
25. The program storage medium of claim 19 wherein said step of selectively modifying said pixel values of said input digital image includes selectively removing color data from said edge pixels to reduce color fringe.
US7433535B2 (en) 2003-09-30 2008-10-07 Hewlett-Packard Development Company, L.P. Enhancing text-like edges in digital images
US20080174797A1 (en) * 2007-01-18 2008-07-24 Samsung Electronics Co., Ltd. Image forming device and method thereof
WO2010091971A1 (en) 2009-02-13 2010-08-19 Oce-Technologies B.V. Image processing system for processing a digital image and image processing method of processing a digital image

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0188193A2 (en) 1985-01-15 1986-07-23 International Business Machines Corporation Method and apparatus for processing image data
US5280367A (en) * 1991-05-28 1994-01-18 Hewlett-Packard Company Automatic separation of text from background in scanned images of complex documents
US5363209A (en) 1993-11-05 1994-11-08 Xerox Corporation Image-dependent sharpness enhancement
US5481628A (en) 1991-09-10 1996-01-02 Eastman Kodak Company Method and apparatus for spatially variant filtering
US5539541A (en) * 1991-03-27 1996-07-23 Canon Kabushiki Kaisha Image processing apparatus including noise elimination circuits
US5583659A (en) * 1994-11-10 1996-12-10 Eastman Kodak Company Multi-windowing technique for thresholding an image using local image properties
US5699454A (en) 1994-05-09 1997-12-16 Sharp Kabushiki Kaisha Image processing apparatus
US5825937A (en) 1993-09-27 1998-10-20 Ricoh Company, Ltd. Spatial-filtering unit for performing adaptive edge-enhancement process
US5848181A (en) 1995-07-26 1998-12-08 Sony Corporation Image processing method, image processing apparatus, noise removing method, and noise removing apparatus
US5850298A (en) 1994-03-22 1998-12-15 Ricoh Company, Ltd. Image processing device eliminating background noise
US20010050778A1 (en) * 2000-05-08 2001-12-13 Hiroaki Fukuda Method and system for see-through image correction in image duplication


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
J. Kittler and J. Illingworth, "Minimum Error Thresholding", Pattern Recognition, vol. 19, No. 1, 1986, pp. 41-47.

Cited By (117)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020191857A1 (en) * 2001-05-14 2002-12-19 Macy William W. Inverse halftoning process
US7228002B2 (en) * 2001-05-14 2007-06-05 Intel Corporation Inverse halftoning process
US20030118232A1 (en) * 2001-12-20 2003-06-26 Xerox Corporation Automatic background detection of scanned documents
US7058222B2 (en) * 2001-12-20 2006-06-06 Xerox Corporation Automatic background detection of scanned documents
US20030161007A1 (en) * 2002-02-28 2003-08-28 Maurer Ron P. User selected background noise removal for scanned document images
US7050650B2 (en) * 2002-02-28 2006-05-23 Hewlett-Packard Development Company, L.P. User selected background noise removal for scanned document images
US8223392B2 (en) * 2002-03-07 2012-07-17 Brother Kogyo Kabushiki Kaisha Image processing device and image processing method
US8306357B2 (en) 2002-06-05 2012-11-06 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and computer program
US20030228067A1 (en) * 2002-06-05 2003-12-11 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and computer program
US8744209B2 (en) 2002-06-05 2014-06-03 Canon Kabushiki Kaisha Image processing apparatus and image processing method for visually reducing noise components contained in a low frequency range of image data
US8023764B2 (en) 2002-06-05 2011-09-20 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and a program, for removing low-frequency noise from image data
US20050031223A1 (en) * 2002-06-28 2005-02-10 Canon Kabushiki Kaisha Image processing apparatus and control method thereof
US7634153B2 (en) 2002-06-28 2009-12-15 Canon Kabushiki Kaisha Image processing apparatus and control method thereof
US7433538B2 (en) * 2002-06-28 2008-10-07 Canon Kabushiki Kaisha Image processing apparatus and control method thereof
US20080152254A1 (en) * 2002-06-28 2008-06-26 Canon Kabushiki Kaisha Image processing apparatus and control method thereof
US20040125411A1 (en) * 2002-09-19 2004-07-01 Kazunari Tonami Method of, apparatus for image processing, and computer product
US20050002046A1 (en) * 2002-12-24 2005-01-06 Technische Universiteit Delft Method for transforming a colour image
US20040170339A1 (en) * 2003-02-28 2004-09-02 Maurer Ron P. Selective thickening of dark features by biased sharpening filters
US7194142B2 (en) * 2003-02-28 2007-03-20 Hewlett-Packard Development Company, L.P. Selective thickening of dark features by biased sharpening filters
US8339673B2 (en) * 2003-03-07 2012-12-25 Minolta Co., Ltd. Method and apparatus for improving edge sharpness with error diffusion
US20040174566A1 (en) * 2003-03-07 2004-09-09 Minolta Co., Ltd. Method and apparatus for processing image
US20050001872A1 (en) * 2003-07-02 2005-01-06 Ahne Adam Jude Method for filtering objects to be separated from a media
US20050018903A1 (en) * 2003-07-24 2005-01-27 Noriko Miyagi Method and apparatus for image processing and computer product
FR2869749A1 (en) * 2004-04-30 2005-11-04 Sagem METHOD AND SYSTEM FOR PROCESSING IMAGE BY FILTERING BLOCKS OF IMAGE PIXELS
EP1591957A1 (en) * 2004-04-30 2005-11-02 Sagem SA Method and system of image processing by filtering blocks of pixels of an image
US20060115153A1 (en) * 2004-11-30 2006-06-01 Bhattacharjya Anoop K Page background estimation using color, texture and edge features
US7428331B2 (en) * 2004-11-30 2008-09-23 Seiko Epson Corporation Page background estimation using color, texture and edge features
US9769354B2 (en) 2005-03-24 2017-09-19 Kofax, Inc. Systems and methods of processing scanned data
US9137417B2 (en) 2005-03-24 2015-09-15 Kofax, Inc. Systems and methods for processing video data
US7586653B2 (en) 2005-04-22 2009-09-08 Lexmark International, Inc. Method and system for enhancing an image using luminance scaling
US20060239550A1 (en) * 2005-04-22 2006-10-26 Lexmark International Inc. Method and system for enhancing an image
US20060245665A1 (en) * 2005-04-29 2006-11-02 Je-Ho Lee Method to detect previous sharpening of an image to preclude oversharpening
US20060274376A1 (en) * 2005-06-06 2006-12-07 Lexmark International, Inc. Method for image background detection and removal
US7689055B2 (en) * 2005-06-27 2010-03-30 Nuctech Company Limited Method and apparatus for enhancing image acquired by radiographic system
US20060291742A1 (en) * 2005-06-27 2006-12-28 Nuctech Company Limited And Tsinghua University Method and apparatus for enhancing image acquired by radiographic system
US20070036435A1 (en) * 2005-08-12 2007-02-15 Bhattacharjya Anoop K Label aided copy enhancement
US7899258B2 (en) 2005-08-12 2011-03-01 Seiko Epson Corporation Systems and methods to convert images into high-quality compressed documents
US20070217701A1 (en) * 2005-08-12 2007-09-20 Che-Bin Liu Systems and Methods to Convert Images into High-Quality Compressed Documents
US20070189615A1 (en) * 2005-08-12 2007-08-16 Che-Bin Liu Systems and Methods for Generating Background and Foreground Images for Document Compression
US7557963B2 (en) 2005-08-12 2009-07-07 Seiko Epson Corporation Label aided copy enhancement
US7783117B2 (en) 2005-08-12 2010-08-24 Seiko Epson Corporation Systems and methods for generating background and foreground images for document compression
US20070053608A1 (en) * 2005-08-23 2007-03-08 Jun Zhang Method for reducing mosquito noise
US7734089B2 (en) * 2005-08-23 2010-06-08 Trident Microsystems (Far East) Ltd. Method for reducing mosquito noise
US8014574B2 (en) * 2006-09-04 2011-09-06 Nec Corporation Character noise eliminating apparatus, character noise eliminating method, and character noise eliminating program
US20080056546A1 (en) * 2006-09-04 2008-03-06 Nec Corporation Character noise eliminating apparatus, character noise eliminating method, and character noise eliminating program
US20080181497A1 (en) * 2007-01-29 2008-07-31 Ahmet Mufit Ferman Methods and Systems for Characterizing Regions of Substantially-Uniform Color in a Digital Image
US8134762B2 (en) * 2007-01-29 2012-03-13 Sharp Laboratories Of America, Inc. Methods and systems for characterizing regions of substantially-uniform color in a digital image
US7894689B2 (en) 2007-05-31 2011-02-22 Seiko Epson Corporation Image stitching
US20080298718A1 (en) * 2007-05-31 2008-12-04 Che-Bin Liu Image Stitching
US20090003700A1 (en) * 2007-06-27 2009-01-01 Jing Xiao Precise Identification of Text Pixels from Scanned Document Images
US7873215B2 (en) 2007-06-27 2011-01-18 Seiko Epson Corporation Precise identification of text pixels from scanned document images
US8437539B2 (en) * 2007-11-16 2013-05-07 Canon Kabushiki Kaisha Image processing apparatus and image processing method
US8805070B2 (en) * 2007-11-16 2014-08-12 Canon Kabushiki Kaisha Image processing apparatus and image processing method
US20090129696A1 (en) * 2007-11-16 2009-05-21 Canon Kabushiki Kaisha Image processing apparatus and image processing method
CN101577832B (en) * 2008-05-06 2012-03-21 联咏科技股份有限公司 Image processing circuit and image processing method for strengthening character display effect
US9177218B2 (en) * 2008-09-08 2015-11-03 Kofax, Inc. System and method, and computer program product for detecting an edge in scan data
US20100060910A1 (en) * 2008-09-08 2010-03-11 Fechter Joel S System and method, and computer program product for detecting an edge in scan data
US9635215B2 (en) * 2008-09-24 2017-04-25 Samsung Electronics Co., Ltd. Method of processing image and image forming apparatus using the same
US20140192372A1 (en) * 2008-09-24 2014-07-10 Samsung Electronics Co., Ltd Method of processing image and image forming apparatus using the same
US9767354B2 (en) 2009-02-10 2017-09-19 Kofax, Inc. Global geographic information retrieval, validation, and normalization
US8958605B2 (en) 2009-02-10 2015-02-17 Kofax, Inc. Systems, methods and computer program products for determining document validity
US9747269B2 (en) 2009-02-10 2017-08-29 Kofax, Inc. Smart optical input/output (I/O) extension for context-dependent workflows
US9576272B2 (en) 2009-02-10 2017-02-21 Kofax, Inc. Systems, methods and computer program products for determining document validity
US9396388B2 (en) 2009-02-10 2016-07-19 Kofax, Inc. Systems, methods and computer program products for determining document validity
US20120087596A1 (en) * 2010-10-06 2012-04-12 Kamat Pawankumar Jagannath Methods and systems for pipelined image processing
US9563938B2 (en) 2010-11-19 2017-02-07 Analog Devices Global System and method for removing image noise
US20120128244A1 (en) * 2010-11-19 2012-05-24 Raka Singh Divide-and-conquer filter for low-light noise reduction
US9558402B2 (en) 2011-09-21 2017-01-31 Roman Tsibulevskiy Data processing systems, devices, and methods for content analysis
US9430720B1 (en) 2011-09-21 2016-08-30 Roman Tsibulevskiy Data processing systems, devices, and methods for content analysis
US9953013B2 (en) 2011-09-21 2018-04-24 Roman Tsibulevskiy Data processing systems, devices, and methods for content analysis
US9508027B2 (en) 2011-09-21 2016-11-29 Roman Tsibulevskiy Data processing systems, devices, and methods for content analysis
US11830266B2 (en) 2011-09-21 2023-11-28 Roman Tsibulevskiy Data processing systems, devices, and methods for content analysis
US11232251B2 (en) 2011-09-21 2022-01-25 Roman Tsibulevskiy Data processing systems, devices, and methods for content analysis
US10311134B2 (en) 2011-09-21 2019-06-04 Roman Tsibulevskiy Data processing systems, devices, and methods for content analysis
US10325011B2 (en) 2011-09-21 2019-06-18 Roman Tsibulevskiy Data processing systems, devices, and methods for content analysis
US9223769B2 (en) 2011-09-21 2015-12-29 Roman Tsibulevskiy Data processing systems, devices, and methods for content analysis
US9514357B2 (en) 2012-01-12 2016-12-06 Kofax, Inc. Systems and methods for mobile image capture and processing
US9165188B2 (en) 2012-01-12 2015-10-20 Kofax, Inc. Systems and methods for mobile image capture and processing
US9342742B2 (en) 2012-01-12 2016-05-17 Kofax, Inc. Systems and methods for mobile image capture and processing
US9483794B2 (en) 2012-01-12 2016-11-01 Kofax, Inc. Systems and methods for identification document processing and business workflow integration
US10664919B2 (en) 2012-01-12 2020-05-26 Kofax, Inc. Systems and methods for mobile image capture and processing
US10146795B2 (en) 2012-01-12 2018-12-04 Kofax, Inc. Systems and methods for mobile image capture and processing
US8879120B2 (en) 2012-01-12 2014-11-04 Kofax, Inc. Systems and methods for mobile image capture and processing
US10657600B2 (en) 2012-01-12 2020-05-19 Kofax, Inc. Systems and methods for mobile image capture and processing
US8855375B2 (en) 2012-01-12 2014-10-07 Kofax, Inc. Systems and methods for mobile image capture and processing
US9165187B2 (en) 2012-01-12 2015-10-20 Kofax, Inc. Systems and methods for mobile image capture and processing
US9158967B2 (en) 2012-01-12 2015-10-13 Kofax, Inc. Systems and methods for mobile image capture and processing
US9058515B1 (en) 2012-01-12 2015-06-16 Kofax, Inc. Systems and methods for identification document processing and business workflow integration
US9058580B1 (en) 2012-01-12 2015-06-16 Kofax, Inc. Systems and methods for identification document processing and business workflow integration
US8971587B2 (en) 2012-01-12 2015-03-03 Kofax, Inc. Systems and methods for mobile image capture and processing
US8989515B2 (en) 2012-01-12 2015-03-24 Kofax, Inc. Systems and methods for mobile image capture and processing
US9996741B2 (en) 2013-03-13 2018-06-12 Kofax, Inc. Systems and methods for classifying objects in digital images captured using mobile devices
US10127441B2 (en) 2013-03-13 2018-11-13 Kofax, Inc. Systems and methods for classifying objects in digital images captured using mobile devices
US9311531B2 (en) 2013-03-13 2016-04-12 Kofax, Inc. Systems and methods for classifying objects in digital images captured using mobile devices
US9754164B2 (en) 2013-03-13 2017-09-05 Kofax, Inc. Systems and methods for classifying objects in digital images captured using mobile devices
US9355312B2 (en) 2013-03-13 2016-05-31 Kofax, Inc. Systems and methods for classifying objects in digital images captured using mobile devices
US10146803B2 (en) 2013-04-23 2018-12-04 Kofax, Inc Smart mobile application development platform
US9141926B2 (en) 2013-04-23 2015-09-22 Kofax, Inc. Smart mobile application development platform
US9134931B2 (en) 2013-04-30 2015-09-15 Hewlett-Packard Development Company, L.P. Printing content over a network
US8885229B1 (en) 2013-05-03 2014-11-11 Kofax, Inc. Systems and methods for detecting and classifying objects in video captured using mobile devices
US9584729B2 (en) 2013-05-03 2017-02-28 Kofax, Inc. Systems and methods for improving video captured using mobile devices
US9253349B2 (en) 2013-05-03 2016-02-02 Kofax, Inc. Systems and methods for detecting and classifying objects in video captured using mobile devices
US9208536B2 (en) 2013-09-27 2015-12-08 Kofax, Inc. Systems and methods for three dimensional geometric reconstruction of captured image data
US9946954B2 (en) 2013-09-27 2018-04-17 Kofax, Inc. Determining distance between an object and a capture device based on captured image data
US9386235B2 (en) 2013-11-15 2016-07-05 Kofax, Inc. Systems and methods for generating composite images of long documents using mobile video data
US9747504B2 (en) 2013-11-15 2017-08-29 Kofax, Inc. Systems and methods for generating composite images of long documents using mobile video data
US9565338B2 (en) * 2014-06-23 2017-02-07 Canon Kabushiki Kaisha Image processing apparatus, method, and medium to perform image smoothing and brightness correction to remove show through
US20150373227A1 (en) * 2014-06-23 2015-12-24 Canon Kabushiki Kaisha Image processing apparatus, method, and medium
US9204011B1 (en) * 2014-07-07 2015-12-01 Fujitsu Limited Apparatus and method for extracting a background luminance map of an image, de-shading apparatus and method
US9760788B2 (en) 2014-10-30 2017-09-12 Kofax, Inc. Mobile document detection and orientation based on reference object characteristics
US10242285B2 (en) 2015-07-20 2019-03-26 Kofax, Inc. Iterative recognition-guided thresholding and data extraction
US9779296B1 (en) 2016-04-01 2017-10-03 Kofax, Inc. Content-based detection and three dimensional geometric reconstruction of objects in image and video data
CN106057167B (en) * 2016-07-21 2019-04-05 京东方科技集团股份有限公司 Method and device for character edge darkening processing
CN106057167A (en) * 2016-07-21 2016-10-26 京东方科技集团股份有限公司 Method and device for character edge darkening processing
US11062176B2 (en) 2017-11-30 2021-07-13 Kofax, Inc. Object detection and image cropping using a multi-detector approach
US10803350B2 (en) 2017-11-30 2020-10-13 Kofax, Inc. Object detection and image cropping using a multi-detector approach
US11350007B2 (en) * 2018-12-07 2022-05-31 Canon Kabushiki Kaisha Image reading apparatus, image processing method, and storage medium for correcting show-through

Also Published As

Publication number Publication date
JP4112362B2 (en) 2008-07-02
DE60137704D1 (en) 2009-04-02
EP1330917A2 (en) 2003-07-30
EP1330917B1 (en) 2009-02-18
JP2004521529A (en) 2004-07-15
WO2002037832A3 (en) 2002-08-29
WO2002037832A2 (en) 2002-05-10
AU2002227115A1 (en) 2002-05-15

Similar Documents

Publication Publication Date Title
US6621595B1 (en) System and method for enhancing scanned document images for color printing
US7433535B2 (en) Enhancing text-like edges in digital images
US7068852B2 (en) Edge detection and sharpening process for an image
US7068328B1 (en) Method, apparatus and recording medium for image processing
US6628842B1 (en) Image processing method and apparatus
US6628833B1 (en) Image processing apparatus, image processing method, and recording medium with image processing program to process image according to input image
US7586653B2 (en) Method and system for enhancing an image using luminance scaling
US7050650B2 (en) User selected background noise removal for scanned document images
JP2006091980A (en) Image processor, image processing method and image processing program
US7586646B2 (en) System for processing and classifying image data using halftone noise energy distribution
US6771838B1 (en) System and method for enhancing document images
US20050286791A1 (en) Image processing method, image processing apparatus, image forming apparatus, computer program product and computer memory product
JP4093413B2 (en) Image processing apparatus, image processing program, and recording medium recording the program
US7057767B2 (en) Automatic background removal method and system
JP2003505893A (en) Method and apparatus for image classification and halftone detection
US7580158B2 (en) Image processing apparatus, method and program
JP4084537B2 (en) Image processing apparatus, image processing method, recording medium, and image forming apparatus
JP2002158872A (en) Image processing method, image processor and recording medium
US20030231324A1 (en) Image processing method and apparatus
JP2702133B2 (en) Image processing method
Kwon et al. Text-enhanced error diffusion using multiplicative parameters and error scaling factor
JP4093726B2 (en) Image processing method, image processing apparatus, image processing program, and recording medium
JP4005243B2 (en) Image processing device
JP2004056710A (en) Color image processing apparatus, color image processing method, program, and recording medium
JPH07212591A (en) Picture binarizing device

Legal Events

Date Code Title Description
AS Assignment

Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD COMPANY;REEL/FRAME:013780/0741

Effective date: 20030703

AS Assignment

Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:FAN, JIAN;TRETTER, DANIEL R.;LIN, QIAN;AND OTHERS;REEL/FRAME:013780/0243;SIGNING DATES FROM 20030605 TO 20030707

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12