US20070154078A1 - Processing images of media items before validation - Google Patents

Processing images of media items before validation

Info

Publication number
US20070154078A1
Authority
US
United States
Prior art keywords
image
images
media item
decision making
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/639,593
Inventor
Chao He
Gary Ross
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NCR Voyix Corp
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed. See https://patents.darts-ip.com/?family=37529297&utm_source=google_patent&utm_medium=platform_link&utm_campaign=public_patent_search&patent=US20070154078(A1). “Global patent litigation dataset” by Darts-ip is licensed under a Creative Commons Attribution 4.0 International License.
Application filed by Individual
Priority to US11/639,593
Assigned to NCR CORPORATION (assignors: ROSS, GARY A.; HE, CHAO)
Publication of US20070154078A1
Legal status: Abandoned


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition


Abstract

Automatic media item validation is typically problematic in the case of media items that are damaged or marked. A method of processing images of media items before automatic validation which addresses this problem is described. Aberrant image elements are identified, for example using a bandpass filter, and are replaced by neutral decision making data, that is, data which is neutral with respect to the decision making process of a specified automatic media item validation process. For example, for each aberrant image element an estimated distribution is accessed for that image position across all images in a training set of images of media items. A value is selected from the estimated distribution on the basis of a significance level which is related to a significance level used by the automatic media item validation process. In this way media items which have tears, holes, marks or soiling may be successfully processed by an automated media item validator.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation-in-part application of U.S. patent application Ser. No. 11/366,147, filed on Mar. 2, 2006, which is a continuation-in-part application of U.S. patent application Ser. No. 11/305,537, filed on Dec. 16, 2005. Application Ser. No. 11/366,147, filed on Mar. 2, 2006 and application Ser. No. 11/305,537, filed on Dec. 16, 2005 are hereby incorporated by reference.
  • TECHNICAL FIELD
  • The present invention relates to a method and apparatus for processing images of media items before validation. It is particularly related to, but in no way limited to, processing images of media items such as banknotes, passports, bonds, share certificates, checks and the like.
  • BACKGROUND
  • There is a growing need for automatic verification and validation of banknotes of different currencies and denominations in a simple, reliable, and cost effective manner. This is required, for example, in self-service apparatus which receives banknotes, such as self-service kiosks, ticket vending machines, automated teller machines arranged to take deposits, self-service currency exchange machines and the like.
  • Previously, manual methods of currency validation have involved image examination, transmission effects such as watermarks and thread registration marks, feel and even smell of banknotes. Other known methods have relied on semi-overt features requiring semi-manual interrogation, for example using magnetic means, ultraviolet sensors, fluorescence, infrared detectors, capacitance, metal strips, image patterns and similar. However, by their very nature these methods are manual or semi-manual and are not suitable for many applications where manual intervention is unavailable for long periods of time, for example in self-service apparatus.
  • There are significant problems to be overcome in order to create an automatic currency validator. For example, many different types of currency exist with different security features and even substrate types. Within those currencies, different denominations commonly exist with different levels of security features. There is therefore a need to provide a generic method of easily and simply performing currency validation for those different currencies and denominations.
  • Previous automatic validation methods typically require a relatively large number of examples of counterfeit banknotes to be known in order to train the classifier. In addition, those previous classifiers are trained to detect known counterfeits only. This is problematic because often little or no information is available about possible counterfeits. For example, this is particularly problematic for newly introduced denominations or newly introduced currency.
  • In an earlier paper entitled “Employing optimized combinations of one-class classifiers for automated currency validation”, published in Pattern Recognition 37 (2004), pages 1085-1096, by Chao He, Mark Girolami and Gary Ross (two of whom are inventors of the present application), an automated currency validation method is described (EP1484719, US2004247169). This involves segmenting an image of a whole banknote into regions using a grid structure. Individual “one-class” classifiers are built for each region and a small subset of the region-specific classifiers is combined to provide an overall decision. (The term “one-class” is explained in more detail below.) The segmentation, and the combination of region-specific classifiers that achieves good performance, are found by employing a genetic algorithm. This method requires a small number of counterfeit samples at the genetic algorithm stage and as such is not suitable when counterfeit data is unavailable.
  • There is also a need to perform automatic currency validation in a computationally inexpensive manner which can be performed in real time.
  • Automatic currency validation is typically problematic in the case of banknotes that are damaged or marked, for example banknotes with tears, holes, stains and/or folded corners. Aging of banknotes and the soiling that occurs during wear is also problematic for automatic currency validation systems.
  • Many of the issues mentioned above also apply to validation of other types of media such as passports, share certificates, bonds, checks and the like.
  • SUMMARY
  • Automatic media validation is typically problematic in the case of media items that are damaged or marked. A method of processing images of media items before automatic validation which addresses this problem is described. Aberrant image elements are identified, for example using a bandpass filter, and are replaced by neutral decision making data, that is, data which is neutral with respect to the decision making process of a specified automatic currency validation process. For example, for each aberrant image element an estimated distribution is accessed for that image position across all images in a training set of media item images. A value is selected from the estimated distribution on the basis of a significance level which is related to a significance level used by the automatic media validation process. In this way media items which have tears, holes, marks or soiling may be successfully processed by an automated media validator.
  • The methods described herein may be performed by software in machine readable form on a storage medium. The method steps may be carried out in any suitable order and/or in parallel as is apparent to the skilled person in the art.
  • This acknowledges that software can be a valuable, separately tradable commodity. It is intended to encompass software, which runs on or controls “dumb” or standard hardware, to carry out the desired functions, (and therefore the software essentially defines the functions of the media validator, and can therefore be termed a media validator, even before it is combined with its standard hardware). For similar reasons, it is also intended to encompass software which “describes” or defines the configuration of hardware, such as HDL (hardware description language) software, as is used for designing silicon chips, or for configuring universal programmable chips, to carry out desired functions.
  • The preferred features may be combined as appropriate, as would be apparent to a skilled person, and may be combined with any of the aspects of the invention.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Embodiments of the invention will be described, by way of example, with reference to the following drawings, in which:
  • FIG. 1 is a flow diagram of a method of identifying and replacing aberrant image elements in a banknote image;
  • FIG. 2 is a flow diagram of a method of creating a classifier for banknote validation;
  • FIG. 3 is a flow diagram of a method of replacing aberrant image elements in a banknote image;
  • FIG. 4 is a schematic diagram of an apparatus for creating a classifier for banknote validation;
  • FIG. 5 is a schematic diagram of a banknote validator;
  • FIG. 6 is a flow diagram of a method of validating a banknote;
  • FIG. 7 is a schematic diagram of a self-service apparatus with a banknote validator.
  • DETAILED DESCRIPTION
  • Embodiments of the present invention are described below by way of example only. These examples represent the best ways of putting the invention into practice that are currently known to the Applicant although they are not the only ways in which this could be achieved. Although the present examples are described and illustrated herein as being implemented for automatic currency validation, the systems described herein are described as examples and not limitations. As those skilled in the art will appreciate, the present examples are suitable for application in a variety of different types of media validation systems, including but not limited to passport validation systems, check validation systems and validation systems for bonds and share certificates.
  • The term “one class classifier” is used to refer to a classifier that is formed or built using information about examples from only a single class but which is used to allocate newly presented examples either to that single class or not. This differs from a conventional binary classifier, which is created using information about examples from two classes and which is used to allocate new examples to one or other of those two classes. A one-class classifier can be thought of as defining a boundary around a known class such that examples falling outside that boundary are deemed not to belong to the known class.
  • As mentioned above, automatic currency validation is typically problematic in the case of banknotes that are damaged or marked, for example banknotes with tears, holes, stains and/or folded corners. Aging of banknotes and the soiling that occurs during wear is also problematic for automatic currency validation systems.
  • For example, an automatic currency validation system may use a process whereby an image of a banknote to be validated is divided into segments. Those segments may be formed using a grid structure or other method using spatial position information alone. Alternatively, the segments may be formed using a segmentation map that uses information about relative values of image elements between corresponding image elements in each member of a set of training banknote images.
  • If a banknote to be validated is damaged or marked then this leads to problems in the automatic banknote validation process because some of the information is aberrant or corrupt. For example, holes in a banknote may result in pixels of abnormally high intensity in an image of that banknote. Also, soiling or marks on a banknote may result in pixels of abnormally low intensity in an image of that banknote.
  • In the case that an image of a banknote to be validated is divided into segments as part of the validation process, one option is to ignore those segments which contain aberrant data (for example, holes, marks, folds, tears, etc.). However, where only a low number of segments are used this means that a large proportion of data is ignored. Also, if the ignored segment happens to contain important banknote regions such as a security feature (e.g. hologram, thread mark, watermark, etc.) then the confidence level of the banknote validator will drop.
  • In order to address these problems we identify aberrant image elements in an image of a media item such as a banknote to be validated and replace those by decision-neutral data. By “decision-neutral data” or “neutral decision making data” we mean data which will not influence the outcome of a pre-specified media item validation process. That media item validation process may be of any suitable type, including but not limited to, the particular media item validation processes described herein.
  • FIG. 1 is a high level flow diagram of a method of processing an image of a banknote to be validated.
  • An image of a banknote to be validated is captured (see box 1) using any suitable technique as described in more detail below. The image is normalized and/or pre-processed (see box 2) for example to align it in a particular orientation and to scale it to a particular size. This enables variations in sensors and lighting conditions to be taken into account. An optional step (see box 3) then involves using a recognition algorithm to determine one or more of the currency, series, denomination and orientation of the banknote. If the recognition algorithm fails then it may be retried by referencing different edges or corners of the banknote image. If all four edges are attempted and failed then the note is rejected (see box 7). Otherwise the process continues and looks for aberrations in the image (see box 4).
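  • As a rough illustration of the capture and normalization steps (boxes 1 and 2), the Python sketch below rescales a captured note image to a fixed reference size and normalizes its intensity range. The reference size, the use of scipy for resampling and the min-max intensity rescaling are illustrative assumptions, not details taken from this description.

```python
# A minimal sketch of the normalization step (box 2), assuming a greyscale capture;
# the reference size and the min-max intensity rescaling are illustrative assumptions.
import numpy as np
from scipy.ndimage import zoom

REF_ROWS, REF_COLS = 78, 156  # hypothetical reference size for the note image

def normalize_note_image(img):
    """Scale a captured note image to roughly the reference size and rescale its
    intensities to the range [0, 1], so that sensor and lighting variations are reduced."""
    img = img.astype(float)
    scaled = zoom(img, (REF_ROWS / img.shape[0], REF_COLS / img.shape[1]), order=1)
    lo, hi = scaled.min(), scaled.max()
    return (scaled - lo) / (hi - lo) if hi > lo else np.zeros_like(scaled)
```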
  • Aberrations may be identified in any suitable manner. For example, missing areas or holes in a banknote typically give rise to image areas of abnormally high brightness. In this case, all image areas, elements or pixels with an intensity above a specified threshold may be identified as aberrations.
  • In some currencies, polymer notes are used with windows. Such windows also give rise to image areas of high brightness. In order that these windows are not identified as aberrations, knowledge about expected location, position and size of these windows can be taken into account when identifying aberrations.
  • Stains, marker pen marks, staples, folds and other such damage give rise to overly opaque areas in banknote images. In this case, all image areas, elements or pixels with an intensity below a specified threshold may be identified as aberrations. Optionally, information about the expected intensities of image elements for particular currencies and denominations may be taken into account when identifying the aberrations.
  • To quickly identify image elements with intensities either above or below specified thresholds a bandpass filter may be used.
  • Once the aberrations are identified, they are removed by being replaced by decision-neutral data (see box 5). Optionally, a check is made on the proportion of the banknote image identified as aberrant. If this proportion is above a specified threshold then the banknote is rejected if it has not already been rejected at the recognition algorithm stage (box 7). This ensures that counterfeit notes formed from parts of genuine notes joined to parts of obscured counterfeit notes are rejected. Also, in this way it is possible to place a limit on the amount of aberrant data that may be replaced. As the process tends towards 100% of the banknote image being replaced by decision-neutral data the ability to detect counterfeits is reduced.
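  • The threshold-based identification of aberrations (box 4) and the check on the aberrant proportion might be sketched as follows; the particular threshold values, the optional window mask for polymer notes and the rejection fraction are hypothetical choices for illustration only.

```python
# A minimal sketch of aberration detection (box 4) and the damage check, assuming
# a normalized greyscale image with intensities in [0, 1]; thresholds are hypothetical.
import numpy as np

def find_aberrations(img, low_thresh=0.05, high_thresh=0.95, window_mask=None):
    """Return a boolean mask of aberrant pixels: abnormally dark pixels (soiling,
    marks, staples) or abnormally bright pixels (holes, tears). Pixels inside a
    known polymer-window region, if supplied, are not treated as aberrations."""
    aberrant = (img < low_thresh) | (img > high_thresh)
    if window_mask is not None:
        aberrant &= ~window_mask
    return aberrant

def too_damaged(aberrant, max_fraction=0.25):
    """Reject the note outright if the aberrant proportion exceeds a specified limit,
    so that mostly-replaced notes cannot slip through as described above."""
    return aberrant.mean() > max_fraction
```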
  • The resulting modified image of the banknote is then passed to a banknote validation system (see box 6) to be validated.
  • The process of forming the decision neutral data is described in more detail below with reference to FIG. 3.
  • In a particular group of embodiments the pre-specified banknote validation process uses a classifier formed as now described.
  • FIG. 2 is a high level flow diagram of a method of creating a classifier for banknote validation.
  • First we obtain a training set of images of genuine banknotes (see box 10 of FIG. 2). These are images of the same type taken of banknotes of the same currency and denomination. The type of image relates to how the images are obtained, and this may be in any manner known in the art: for example, reflection images, transmission images, images on any of a red, blue or green channel, thermal images, infrared images, ultraviolet images, x-ray images or other image types. The images in the training set are in registration and are the same size. Pre-processing can be carried out to align the images and scale them to size if necessary, as known in the art.
  • We next create a segmentation map using information from the training set images (see box 12 of FIG. 2). The segmentation map comprises information about how to divide an image into a plurality of segments. The segments may be non-continuous, that is, a given segment can comprise more than one patch in different regions of the image. Preferably, but not essentially, the segmentation map also comprises a specified number of segments to be used.
  • Using the segmentation map we segment each of the images in the training set (see box 14 of FIG. 2). We then extract one or more features from each segment in each of the training set images (see box 16 of FIG. 2). By the term “feature” we mean any statistic or other characteristic of a segment. For example, the mean pixel intensity, median pixel intensity, mode of the pixel intensities, texture, histogram, Fourier transform descriptors, wavelet transform descriptors and/or any other statistics in a segment.
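  • Assuming the segmentation map is stored as an array of integer segment labels over the image plane, segmenting an image and extracting simple per-segment features (boxes 14 and 16) might look like the sketch below; the choice of mean and standard deviation is just one of the feature options listed above.

```python
# A sketch of per-segment feature extraction (boxes 14 and 16), assuming seg_map is an
# integer label array of the same shape as the image; the chosen features are illustrative.
import numpy as np

def extract_segment_features(img, seg_map):
    """Return one feature vector for the image: mean and standard deviation of the
    pixel intensities in each segment (segments may be non-continuous patches)."""
    features = []
    for label in range(int(seg_map.max()) + 1):
        pixels = img[seg_map == label]
        features.extend([pixels.mean(), pixels.std()])
    return np.asarray(features)
```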
  • A classifier is then formed using the feature information (see box 18 of FIG. 2). Any suitable type of classifier can be used as known in the art. In a particularly preferred embodiment of the invention the classifier is a one-class classifier and no information about counterfeit banknotes is needed. However, it is also possible to use a binary classifier or other type of classifier of any suitable type as known in the art.
  • The method in FIG. 2 enables a classifier for validation of banknotes of a particular currency and denomination to be formed simply, quickly and effectively. To create classifiers for other currencies or denominations the method is repeated with appropriate training set images.
  • Previously in EP1484719 and US2004247169, (as mentioned in the background section) we used a segmentation technique that involved using a grid structure over the image plane and a genetic algorithm method to form the segmentation map. This necessitated using some information about counterfeit notes, and incurring computational costs when performing the genetic algorithm search.
  • Embodiments described herein may use a different method of forming the segmentation map which removes the need for using a genetic algorithm or equivalent method to search for a good segmentation map within a large number of possible segmentation maps. This reduces computational cost and improves performance. In addition the need for information about counterfeit banknotes is removed.
  • We believe that generally it is difficult in the counterfeiting process to provide a uniform quality of imitation across the whole note, and therefore certain regions of a note are more difficult than others to copy successfully. We therefore recognized that rather than using a rigidly uniform grid segmentation we could improve banknote validation by using a more sophisticated segmentation. Empirical testing that we carried out indicated that this is indeed the case: segmentation based on morphological characteristics such as pattern, color and texture led to better performance in detecting counterfeits. However, traditional image segmentation methods, such as using edge detectors, were difficult to use when applied to each image in the training set. This is because varying results are obtained for each training set member and it is difficult to align corresponding features in different training set images. In order to avoid this problem of aligning segments we used, in one preferred embodiment, a so-called “spatio-temporal image decomposition”.
  • Details about the method of forming the segmentation map are now given. At a high level this method can be thought of as specifying how to divide the image plane into a plurality of segments, each comprising a plurality of specified pixels. The segments can be non-continuous as mentioned above. For example, this specification is made on the basis of information from all images in the training set. In contrast, segmentation using a rigid grid structure does not require information from images in the training set.
  • For example, each segmentation map comprises information about relationships of corresponding image elements between all images in the training set.
  • Consider the images in the training set as being stacked and in registration with one another in the same orientation. Taking a given pixel position in the note image plane, this pixel is thought of as having a “pixel intensity profile” comprising information about the pixel intensity at that particular pixel position in each of the training set images. Using any suitable clustering algorithm, pixel positions in the image plane are clustered into segments, where pixel positions in those segments have similar or correlated pixel intensity profiles.
  • In a preferred example we use these pixel intensity profiles. However, it is not essential to use pixel intensity profiles. It is also possible to use other information from all images in the training set. For example, intensity profiles for blocks of 4 neighboring pixels or mean values of pixel intensities for pixels at the same location in each of the training set images.
  • A particularly preferred embodiment of our method of forming the segmentation map is now described in detail. This is based on the method taught in the following publication “EigenSegments: A spatio-temporal decomposition of an ensemble of images” by Avidan, S. Lecture Notes in Computer Science, 2352: 747-758, 2002.
  • Given an ensemble of images $\{I_i\}_{i=1,2,\ldots,N}$ which have been registered and scaled to the same size r×c, each image $I_i$ can be represented by its pixels as $[a_{1i}, a_{2i}, \ldots, a_{Mi}]^T$ in vector form, where $a_{ji}\ (j = 1, 2, \ldots, M)$ is the intensity of the jth pixel in the ith image and M = r·c is the total number of pixels in the image. A design matrix $A \in \mathbb{R}^{M \times N}$ can then be generated by stacking the vectors $I_i$ (zeroed using the mean value) of all images in the ensemble, thus $A = [I_1, I_2, \ldots, I_N]$. A row vector $[a_{j1}, a_{j2}, \ldots, a_{jN}]$ in A can be seen as an intensity profile for a particular (jth) pixel across the N images. If two pixels come from the same pattern region of the image they are likely to have similar intensity values and hence a strong temporal correlation. Note that the term “temporal” here need not exactly correspond to the time axis but is borrowed to indicate the axis across different images in the ensemble. Our algorithm tries to find these correlations and segments the image plane spatially into regions of pixels that have similar temporal behavior. We measure this correlation by defining a metric between intensity profiles. A simple way is to use the Euclidean distance, i.e. the temporal correlation between two pixels j and k can be denoted as
    $$d(j,k) = \sqrt{\sum_{i=1}^{N} (a_{ji} - a_{ki})^2}.$$
    The smaller d(j,k), the stronger the correlation between the two pixels.
  • In order to decompose the image plane spatially using the temporal correlations between pixels, we run a clustering algorithm on the pixel intensity profiles (the rows of the design matrix A). It will produce clusters of temporally correlated pixels. The most straightforward choice is to employ the K-means algorithm, but it could be any other clustering algorithm. As a result the image plane is segmented into several segments of temporally correlated pixels. This can then be used as a map to segment all images in the training set; and a classifier can be built on features extracted from those segments of all images in the training set.
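  • A minimal sketch of this clustering step, assuming a stack of registered greyscale training images and using the K-means implementation from scikit-learn (any other clustering algorithm could be substituted, as noted above):

```python
# A sketch of forming the segmentation map by clustering pixel intensity profiles,
# i.e. the rows of the design matrix A; scikit-learn's KMeans is an assumed convenience.
import numpy as np
from sklearn.cluster import KMeans

def build_segmentation_map(training_images, n_segments):
    """training_images has shape (N, r, c). Each row of A is the intensity profile of
    one pixel position across the N images; clustering the rows groups temporally
    correlated pixels into segments of the image plane."""
    N, r, c = training_images.shape
    A = training_images.reshape(N, r * c).T.astype(float)   # design matrix, shape (M, N)
    A -= A.mean(axis=0)                                      # zero each image column by its mean
    labels = KMeans(n_clusters=n_segments, n_init=10, random_state=0).fit_predict(A)
    return labels.reshape(r, c)                              # segmentation map over the image plane
```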
  • In order to achieve the training without utilizing counterfeit notes, a one-class classifier is preferable. Any suitable type of one-class classifier can be used as known in the art, for example neural network based one-class classifiers and statistical based one-class classifiers.
  • Suitable statistical methods for one-class classification are in general based on maximization of the log-likelihood ratio under the null hypothesis that the observation under consideration is drawn from the target class. These include the D² test (described in Morrison, D F: Multivariate Statistical Methods (third edition), McGraw-Hill Publishing Company, New York, 1990), which assumes a multivariate Gaussian distribution for the target class (genuine currency). In the case of an arbitrary non-Gaussian distribution, the density of the target class can be estimated using for example a semi-parametric Mixture of Gaussians (described in Bishop, C M: Neural Networks for Pattern Recognition, Oxford University Press, New York, 1995) or a non-parametric Parzen window (described in Duda, R O, Hart, P E, Stork, D G: Pattern Classification (second edition), John Wiley & Sons, Inc., New York, 2001), and the distribution of the log-likelihood ratio under the null hypothesis can be obtained by sampling techniques such as the bootstrap (described in Wang, S, Woodward, W A, Gray, H L et al.: A new test for outlier detection from a multivariate mixture distribution, Journal of Computational and Graphical Statistics, 6 (3): 285-299, 1997).
  • Other methods which can be employed for one-class classification are Support Vector Data Domain Description (SVDD) (described in Tax, D M J, Duin, R P W: Support vector domain description, Pattern Recognition Letters, 20 (11-12): 1191-1199, 1999), also known as ‘support estimation’ (described in Hayton, P, Schölkopf, B, Tarrassenko, L, Anuzis, P: Support Vector Novelty Detection Applied to Jet Engine Vibration Spectra, Advances in Neural Information Processing Systems, 13, eds Leen, Todd K, Dietterich, Thomas G and Tresp, Volker, MIT Press, 946-952, 2001), and Extreme Value Theory (EVT) (described in Roberts, S J: Novelty detection using extreme value statistics, IEE Proceedings on Vision, Image & Signal Processing, 146 (3): 124-129, 1999). In SVDD the support of the data distribution is estimated, whilst EVT estimates the distribution of extreme values. For this particular application large numbers of examples of genuine notes are available, so in this case it is possible to obtain reliable estimates of the target class distribution. We therefore choose one-class classification methods that can estimate the density distribution explicitly in a preferred embodiment, although this is not essential. In a preferred embodiment we use one-class classification methods based on the parametric D² test.
  • In a preferred embodiment, the statistical hypothesis tests used for our one-class classifier are detailed as follows:
  • Consider N independent and identically distributed p-dimensional vector samples (the feature set for each banknote) $x_1, \ldots, x_N \in C$ with an underlying density function with parameters $\theta$ given as $p(x\mid\theta)$. The following hypothesis test is given for a new point $x_{N+1}$: $H_0\colon x_{N+1} \in C$ vs. $H_1\colon x_{N+1} \notin C$, where C denotes the region where the null hypothesis is true and is defined by $p(x\mid\theta)$. Assuming that the distribution under the alternate hypothesis is uniform, the standard log-likelihood ratio for the null and alternate hypotheses
    $$\lambda = \frac{\sup_{\theta \in \Theta} L_0(\theta)}{\sup_{\theta \in \Theta} L_1(\theta)} = \frac{\sup_{\theta} \prod_{n=1}^{N+1} p(x_n \mid \theta)}{\sup_{\theta} \prod_{n=1}^{N} p(x_n \mid \theta)} \qquad (1)$$
    can be employed as a test statistic for the null hypothesis. In this preferred embodiment we can use the log-likelihood ratio as the test statistic for the validation of a newly presented note.
  • 1) Feature vectors with multivariate Gaussian density: Under the assumption that the feature vectors describing individual points in a sample are multivariate Gaussian, a test that emerges from the above likelihood ratio (1), to assess whether each point in a sample shares a common mean, is described in (Morrison, D F: Multivariate Statistical Methods (third edition), McGraw-Hill Publishing Company, New York, 1990). Consider N independent and identically distributed p-dimensional vector samples $x_1, \ldots, x_N$ from a multivariate normal distribution with mean $\mu$ and covariance $C$, whose sample estimates are $\hat{\mu}_N$ and $\hat{C}_N$. From the sample consider a random selection denoted as $x_0$; the associated squared Mahalanobis distance
    $$D^2 = (x_0 - \hat{\mu}_N)^T \hat{C}_N^{-1} (x_0 - \hat{\mu}_N) \qquad (2)$$
    can be shown to be distributed as a central F-distribution with p and N−p−1 degrees of freedom via
    $$F = \frac{(N-p-1)\,N D^2}{p(N-1)^2 - N p D^2}. \qquad (3)$$
  • Then, the null hypothesis of a common population mean vector for $x_0$ and the remaining $x_i$ will be rejected if
    $$F > F_{\alpha;\,p,\,N-p-1}, \qquad (4)$$
    where $F_{\alpha;\,p,\,N-p-1}$ is the upper $\alpha \cdot 100\%$ point of the F-distribution with (p, N−p−1) degrees of freedom.
    Now suppose that $x_0$ was chosen as the observation vector with the maximum $D^2$ statistic. The distribution of the maximum $D^2$ from a random sample of size N is complicated. However, a conservative approximation to the $100\alpha$ percent upper critical value can be obtained by the Bonferroni inequality. Therefore we might conclude that $x_0$ is an outlier if
    $$F > F_{\alpha/N;\,p,\,N-p-1}. \qquad (5)$$
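  • A sketch of this test for a newly presented feature vector, computing D² and the F statistic of equations (2)-(5) and comparing against the F critical value from scipy; the significance level α and the choice between (4) and (5) are parameters of the validator, and the guard on the denominator is a practical detail added for the sketch, not part of the description.

```python
# A sketch of the D^2 test of equations (2)-(5), assuming X_train holds the genuine-note
# feature vectors (shape (N, p)); alpha and the Bonferroni option are free parameters.
import numpy as np
from scipy.stats import f as f_dist

def d2_test(x_new, X_train, alpha=0.05, bonferroni=False):
    """Return True if x_new is accepted as belonging to the target (genuine) class."""
    N, p = X_train.shape
    mu = X_train.mean(axis=0)
    C = np.cov(X_train, rowvar=False)
    diff = x_new - mu
    D2 = diff @ np.linalg.solve(C, diff)            # squared Mahalanobis distance, equation (2)
    denom = p * (N - 1) ** 2 - N * p * D2
    if denom <= 0:                                  # far outside the reference sample: reject
        return False
    F = (N - p - 1) * N * D2 / denom                # equation (3)
    level = alpha / N if bonferroni else alpha      # equation (5) vs equation (4)
    return F <= f_dist.ppf(1 - level, p, N - p - 1)
```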
  • In practice, either equation (4) or (5) can be used for outlier detection.
    We can make use of the following incremental estimates of the mean and covariance in devising a test for new examples which do not form part of the original sample, when an additional datum $x_{N+1}$ is made available: the mean
    $$\hat{\mu}_{N+1} = \frac{1}{N+1}\left\{ N \hat{\mu}_N + x_{N+1} \right\} \qquad (6)$$
    and the covariance
    $$\hat{C}_{N+1} = \frac{N}{N+1}\hat{C}_N + \frac{N}{(N+1)^2}\,(x_{N+1} - \hat{\mu}_N)(x_{N+1} - \hat{\mu}_N)^T. \qquad (7)$$
  • By using expressions (6), (7) and the matrix inversion lemma, equation (2) for an N-sample reference set and an (N+1)th test point becomes
    $$D^2 = \sigma_{N+1}^T\, \hat{C}_{N+1}^{-1}\, \sigma_{N+1}, \qquad (8)$$
    where
    $$\sigma_{N+1} = (x_{N+1} - \hat{\mu}_{N+1}) = \frac{N}{N+1}(x_{N+1} - \hat{\mu}_N) \qquad (9)$$
    and
    $$\hat{C}_{N+1}^{-1} = \frac{N+1}{N}\left( \hat{C}_N^{-1} - \frac{\hat{C}_N^{-1}(x_{N+1} - \hat{\mu}_N)(x_{N+1} - \hat{\mu}_N)^T \hat{C}_N^{-1}}{N + 1 + (x_{N+1} - \hat{\mu}_N)^T \hat{C}_N^{-1}(x_{N+1} - \hat{\mu}_N)} \right). \qquad (10)$$
  • Denoting $(x_{N+1} - \hat{\mu}_N)^T \hat{C}_N^{-1}(x_{N+1} - \hat{\mu}_N)$ by $D^2_{N+1,N}$, then
    $$D^2 = \frac{N\, D^2_{N+1,N}}{N + 1 + D^2_{N+1,N}}. \qquad (11)$$
  • So a new point $x_{N+1}$ can be tested against an estimated and assumed normal distribution with common estimated mean $\hat{\mu}_N$ and covariance $\hat{C}_N$. Though the assumption of multivariate Gaussian feature vectors often does not hold in practice, it has been found to be an appropriate pragmatic choice for many applications. We relax this assumption and consider arbitrary densities in the following section.
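  • The incremental form of the test can be computed directly from equation (11), without re-estimating the mean and covariance with the new point included; a sketch under the same assumptions as the previous one:

```python
# A sketch of equation (11): D^2 for an (N+1)th test point from the N-sample estimates only.
import numpy as np

def d2_incremental(x_new, X_train):
    N = X_train.shape[0]
    mu_N = X_train.mean(axis=0)
    C_N = np.cov(X_train, rowvar=False)
    diff = x_new - mu_N
    D2_N1_N = diff @ np.linalg.solve(C_N, diff)   # (x_{N+1} - mu_N)^T C_N^{-1} (x_{N+1} - mu_N)
    return N * D2_N1_N / (N + 1 + D2_N1_N)        # equation (11)
```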
  • 2) Feature vectors with arbitrary density: A probability density estimate $\hat{p}(x;\theta)$ can be obtained from the finite data sample $S = \{x_1, \ldots, x_N\} \in \mathbb{R}^d$ drawn from an arbitrary density p(x), by using any suitable semi-parametric (e.g. Gaussian Mixture Model) or non-parametric (e.g. Parzen window method) density estimation method as known in the art. This density can then be employed in computing the log-likelihood ratio (1). Unlike the case of the multivariate Gaussian distribution, there is no analytic distribution for the test statistic $\lambda$ under the null hypothesis. To obtain this distribution, numerical bootstrap methods can be employed to obtain the otherwise non-analytic null distribution under the estimated density, and the various critical values $\lambda_{\mathrm{crit}}$ can be established from the empirical distribution obtained. It can be shown that in the limit as $N \to \infty$ the likelihood ratio can be estimated by
    $$\lambda = \frac{\sup_{\theta \in \Theta} L_0(\theta)}{\sup_{\theta \in \Theta} L_1(\theta)} \approx \hat{p}(x_{N+1};\hat{\theta}_N), \qquad (12)$$
    where $\hat{p}(x_{N+1};\hat{\theta}_N)$ denotes the probability density of $x_{N+1}$ under the model estimated from the original N samples.
  • After generating B bootstrap sets of N samples from the reference data set and using each of these to estimate the parameters of the density distribution $\hat{\theta}_N^i$, B bootstrap replicates of the test statistic $\lambda_{\mathrm{crit}}^i,\ i = 1, \ldots, B$ can be obtained by randomly selecting an (N+1)th sample and computing $\hat{p}(x_{N+1};\hat{\theta}_N^i) \approx \lambda_{\mathrm{crit}}^i$. By ordering the $\lambda_{\mathrm{crit}}^i$ in ascending order, a critical value can be defined so as to reject the null hypothesis at the desired significance level if $\lambda \le \lambda_\alpha$, where $\lambda_\alpha$ is the jth smallest value of $\lambda_{\mathrm{crit}}^i$ and $\alpha = j/(B+1)$.
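  • The bootstrap procedure just described might be sketched as follows, with a Parzen-window (Gaussian kernel) density estimate standing in for whichever density estimator is chosen; scipy's gaussian_kde and the sampling details are assumptions made for illustration.

```python
# A sketch of estimating lambda_crit by bootstrap for an arbitrary density (equation (12)),
# assuming X_train has shape (N, p) with N much larger than p; gaussian_kde is an
# assumed stand-in for any semi-parametric or non-parametric density estimator.
import numpy as np
from scipy.stats import gaussian_kde

def bootstrap_critical_value(X_train, alpha=0.05, B=500, seed=0):
    """Return lambda_alpha such that a new point x is rejected when p_hat(x) <= lambda_alpha."""
    rng = np.random.default_rng(seed)
    N = X_train.shape[0]
    stats = []
    for _ in range(B):
        replicate = X_train[rng.integers(0, N, size=N)]   # bootstrap replicate of the reference set
        kde = gaussian_kde(replicate.T)                   # density estimated from the replicate
        x_test = X_train[rng.integers(0, N)]              # a randomly selected (N+1)th sample
        stats.append(float(kde(x_test)[0]))               # bootstrap replicate of the test statistic
    stats.sort()
    j = max(1, int(alpha * (B + 1)))                      # alpha = j / (B + 1)
    return stats[j - 1]                                   # the jth smallest value of lambda_crit
```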
  • Preferably the method of forming the classifier is repeated for different numbers of segments and tested using images of banknotes known to be either counterfeit or not. The number of segments giving the best performance is then selected and the classifier using that number of segments is used. We found the best number of segments to be from about 2 to 15, although any suitable number of segments can be used.
  • As mentioned above, a particular problem involves identifying and replacing aberrant image elements in an image of a banknote to be validated. FIG. 3 is a flow diagram of the process of replacing the aberrant image elements with decision-neutral data. For each aberrant image element (box 300), for example a pixel or group of pixels, a distribution is accessed (box 301) for that image position. The distribution is an estimated distribution for that image position across all images in a training set of images. The training set of images may be a plurality of images of genuine banknotes as described above. For example, the distribution may be a pixel intensity profile or an intensity profile for a block of four pixel positions, or similar, as described above. Preferably, the distribution is the same as that used during the process of forming a segmentation map for the banknote validator as described above. This reduces computation costs and saves time as those distributions are already estimated.
  • A value is then selected (box 302) from the accessed distribution on the basis of a significance level (also referred to as a confidence level). That significance level is related to that of a classifier used in the banknote validator. For example, the significance level is the same as that used by the classifier. By selecting the value in this way decision-neutral data is obtained because the significance level is related to that of the classifier. The value at the aberrant image element is then replaced by the selected value (see box 303). By using decision-neutral data in this way we ensure that the remainder of the note dictates the classification results of the banknote validator. This is an advantage over conventional approaches where missing or corrupt data on a genuine note means that to avoid many false rejects, the false accept rate would suffer. In this way we are able to successfully deal with damaged, worn, torn or partially faded notes without the need to modify the core banknote validation process. Pre-processing of the banknote images is all that is required. In addition, this is achieved without compromising the false accept rate.
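  • One way the replacement of FIG. 3 might be sketched is shown below, with the per-pixel distribution summarized by the empirical values across the training set and the decision-neutral value taken as a quantile corresponding to the validator's significance level. How the significance level maps onto a quantile is an interpretation made for this sketch, not something stated in this description.

```python
# A sketch of boxes 300-303: replace aberrant pixels with values drawn from the per-pixel
# distribution over the training set; the quantile mapping of the significance level is assumed.
import numpy as np

def replace_aberrations(img, aberrant, training_images, alpha=0.05):
    """training_images has shape (N, r, c); aberrant is a boolean mask over (r, c)."""
    neutral = np.quantile(training_images, 1.0 - alpha, axis=0)  # per-pixel replacement values
    out = img.copy()
    out[aberrant] = neutral[aberrant]                            # box 303: substitute the value
    return out
```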
  • FIG. 4 is a schematic diagram of an apparatus 20 for creating a classifier 22 for banknote validation. It comprises:
      • an input 21 arranged to access a training set of banknote images;
      • a processor 23 arranged to create a segmentation map using the training set images;
      • a segmentor 24 arranged to segment each of the training set images using the segmentation map;
      • a feature extractor 25 arranged to extract one or more features from each segment in each of the training set images; and
      • classification forming means 26 arranged to form the classifier using the feature information;
        wherein the processor is arranged to create the segmentation map on the basis of information from all images in the training set. For example, by using spatio-temporal image decomposition described above.
  • FIG. 5 is a schematic diagram of a banknote validator 31. It comprises:
      • an input arranged to receive at least one image 30 of a banknote to be validated;
      • a segmentation map 32;
      • a processor 36 arranged to identify aberrations in the image;
      • an image modifier 37 arranged to form a modified image by replacing the identified aberrations by neutral decision making data, that data being neutral with respect to the classifier 35;
      • another processor 33 (which may be integral with processor 36) arranged to segment the image of the banknote using the segmentation map;
      • a feature extractor 34 arranged to extract one or more features from each segment of the banknote image;
      • a classifier 35 arranged to classify the banknote as being either valid or not on the basis of the extracted features;
        wherein the segmentation map comprises information about relationships of corresponding image elements between all images in a training set of images of banknotes. It is noted that it is not essential for the components of FIG. 5 to be independent of one another; they may be integral.
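  • The sketch below is a similarly hedged outline of the FIG. 5 processing chain, assuming the same kind of placeholder callables: find_aberrations stands in for the aberration detector (for example the bandpass filter), make_decision_neutral for the replacement step of FIG. 3, and extract_segment_features for the per-segment feature extractor; the classifier is assumed to expose a predict method. None of these names are prescribed by this description.

```python
import numpy as np

def validate_note(image, segmentation_map, classifier,
                  find_aberrations, make_decision_neutral,
                  extract_segment_features):
    """Sketch of the FIG. 5 validator: pre-process, segment, extract, classify."""
    # Pre-processing: identify aberrant elements and replace them with
    # decision-neutral data before the core validation steps run.
    aberrant_mask = find_aberrations(image)
    cleaned = make_decision_neutral(image, aberrant_mask)

    # Segment the cleaned image, extract features per segment and classify
    # the note as valid or not on the basis of those features.
    features = np.concatenate(
        [extract_segment_features(cleaned, segment) for segment in segmentation_map])
    return classifier.predict(features.reshape(1, -1))
```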
  • FIG. 6 is a flow diagram of a method of validating a banknote. The method comprises:
      • accessing at least one image of a banknote to be validated (box 40);
      • identifying aberrant image elements (box 41);
      • replacing aberrant image elements with decision-neutral data (box 42);
      • accessing a segmentation map (box 43);
      • segmenting the image of the banknote using the segmentation map (box 44);
      • extracting features from each segment of the banknote image (box 45);
      • classifying the banknote as being either valid or not on the basis of the extracted features using a classifier (box 46);
        wherein the segmentation map is formed on the basis of information about each of a set of training images of banknotes. These method steps can be carried out in any suitable order or in combination, as is known in the art. The segmentation map can be said to implicitly comprise information about each of the images in the training set because it has been formed on the basis of that information. However, the explicit information in the segmentation map can be as simple as a file listing the pixel addresses to be included in each segment.
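  • As a small, purely illustrative example of that explicit form, the segmentation map could be stored as nothing more than a mapping from segment label to the pixel addresses belonging to that segment. The JSON layout, file name and toy pixel addresses below are assumptions made for the sake of the sketch.

```python
import json

# Hypothetical on-disk form of a segmentation map: each segment is just a
# list of (row, column) pixel addresses to be included in that segment.
segmentation_map = {
    "segment_0": [[0, 0], [0, 1], [1, 0]],
    "segment_1": [[0, 2], [1, 1], [1, 2]],
}

with open("segmentation_map.json", "w") as f:
    json.dump(segmentation_map, f)

# At validation time the map is simply reloaded and used to group pixels
# into segments before feature extraction.
with open("segmentation_map.json") as f:
    loaded_map = json.load(f)
```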
  • FIG. 7 is a schematic diagram of a self-service apparatus 51 with a banknote validator 53. It comprises:
      • a means for accepting banknotes 50;
      • imaging means for obtaining digital images of the banknotes 52;
      • a processor for replacing aberrant image elements with decision-neutral data 54; and
      • a banknote validator 53 as described above.
  • The methods described herein are performed on images or other representations of banknotes, those images or representations being of any suitable type; for example, images in any of the red, blue and green channels, or other images as mentioned above.
  • The segmentation map may be formed on the basis of images of only one type, say the red channel. Alternatively, the segmentation map may be formed on the basis of images of all types, say the red, blue and green channels. It is also possible to form a plurality of segmentation maps, one for each type of image or combination of image types. For example, there may be three segmentation maps: one for the red channel images, one for the blue channel images and one for the green channel images. In that case, during validation of an individual note, the appropriate segmentation map and classifier are used depending on the type of image selected. Thus each of the methods described above may be modified by using images of different types and corresponding segmentation maps and classifiers.
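  • A minimal sketch of such a per-type arrangement, assuming a separate segmentation map and classifier have already been built for each image type, is a simple lookup keyed by the type of image captured; the function and variable names below are illustrative only.

```python
def select_validator(channel, validators_by_channel):
    """Return the (segmentation_map, classifier) pair for the given image type.

    validators_by_channel maps an image type, e.g. 'red', 'green' or 'blue',
    to the segmentation map and classifier built from images of that type.
    """
    try:
        return validators_by_channel[channel]
    except KeyError:
        raise ValueError("no segmentation map/classifier for image type %r" % channel)

# Example with placeholder objects standing in for real maps and classifiers:
validators = {
    "red": ("map_red", "clf_red"),
    "green": ("map_green", "clf_green"),
    "blue": ("map_blue", "clf_blue"),
}
segmentation_map, classifier = select_validator("red", validators)
```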
  • The means for accepting banknotes is of any suitable type known in the art, as is the imaging means. Any feature selection algorithm known in the art may be used to select one or more types of feature to use in the step of extracting features. Also, the classifier can be formed on the basis of specified information about a particular denomination or currency of banknotes in addition to the feature information discussed herein, for example information about regions that are particularly data rich in terms of color, spatial frequency or shape for a given currency and denomination.
  • Any range or device value given herein may be extended or altered without losing the effect sought, as will be apparent to the skilled person.
  • It will be understood that the above description of a preferred embodiment is given by way of example only and that various modifications may be made by those skilled in the art.

Claims (21)

1. A method of processing an image of a media item comprising:
(i) identifying aberrations in the image;
(ii) forming a modified image by replacing the identified aberrations by neutral decision making data, that data being neutral decision making data with respect to a decision making process being a pre-specified media item validation process.
2. A method as claimed in claim 1 wherein the step of identifying aberrations in the image comprises applying a bandpass filter.
3. A method as claimed in claim 1 wherein the method comprises obtaining the neutral decision making data by, for each aberrant image element, accessing an estimated distribution for that image position across all images in a training set of images of media items and selecting a value from that estimated distribution.
4. A method as claimed in claim 3 wherein the value is selected from the estimated distribution on the basis of a significance level, being a significance level of the pre-specified media item validation process.
5. A method as claimed in claim 3 wherein the training set of images of media items comprises only images of genuine media items.
6. A method as claimed in claim 3 wherein the distribution is estimated on the basis of a pixel intensity profile.
7. A method as claimed in claim 1 wherein said pre-specified media item validation process comprises using a one-class classifier.
8. A method as claimed in claim 1 which further comprises providing the modified image as input to the pre-specified media item validation process.
9. An apparatus for processing an image of a media item the apparatus comprising:
(i) a processor arranged to identify aberrations in the image;
(ii) an image modifier arranged to form a modified image by replacing the identified aberrations by neutral decision making data, that data being neutral decision making data with respect to a decision making process being a pre-specified media item validation process.
10. An apparatus as claimed in claim 9 wherein the processor comprises a bandpass filter for identifying aberrations in the image.
11. An apparatus as claimed in claim 9 wherein the image modifier is arranged to obtain the neutral decision making data by, for each aberrant image element, accessing an estimated distribution for that image position across all images in a training set of images of media items and selecting a value from that estimated distribution.
12. An apparatus as claimed in claim 11 wherein the image modifier is arranged to select the value from the estimated distribution on the basis of a significance level, being a significance level of the pre-specified media item validation process.
13. An apparatus as claimed in claim 11 wherein the image modifier is arranged to estimate the distribution on the basis of a pixel intensity profile.
14. An apparatus as claimed in claim 11 wherein the image modifier is arranged to estimate the distribution from a training set of images comprising only images of genuine media items.
15. An apparatus as claimed in claim 9 comprising a media item validator and wherein the image modifier is arranged to input the modified image to the media item validator.
16. An apparatus as claimed in claim 15 wherein the media item validator comprises a one-class classifier.
17. A media item validator comprising:
(i) an input arranged to receive at least one image of a media item to be validated;
(ii) a processor arranged to identify aberrations in the image;
(iii) an image modifier arranged to form a modified image by replacing the identified aberrations by neutral decision making data, that data being neutral decision making data with respect to a classifier of the media item validator;
(iv) a segmentation map;
(v) a processor arranged to segment the image of the media item using the segmentation map;
(vi) a feature extractor arranged to extract one or more features from each segment of the image of the media item;
(vii) a classifier arranged to classify the media item on the basis of the extracted features;
wherein the segmentation map comprises information about relationships of corresponding image elements between all images in a set of training images of media items.
18. A media item validator as claimed in claim 17 wherein the image modifier is arranged to obtain the neutral decision making data by, for each aberrant image element, accessing an estimated distribution for that image position across all images in a training set of images of media items and selecting a value from that estimated distribution.
19. A computer program comprising computer program code means adapted to perform all the steps of a method of processing an image of a banknote comprising:
(i) identifying aberrations in the image;
(ii) forming a modified image by replacing the identified aberrations by neutral decision making data, that data being neutral decision making data with respect to a decision making process being a pre-specified banknote validation process,
when said program is run on a computer.
20. A computer program as claimed in claim 19 embodied on a computer readable medium.
21. A self-service apparatus comprising:
(i) a means for accepting media items,
(ii) imaging means for obtaining digital images of the media items; and
(iii) a media item validator comprising:
(i) an input arranged to receive at least one image of a media item to be validated;
(ii) a processor arranged to identify aberrations in the image;
(iii) an image modifier arranged to form a modified image by replacing the identified aberrations by neutral decision making data, that data being neutral decision making data with respect to a classifier of the media item validator;
(iv) a segmentation map;
(v) a processor arranged to segment the image of the media item using the segmentation map;
(vi) a feature extractor arranged to extract one or more features from each segment of the image of the media item;
(vii) a classifier arranged to classify the media item on the basis of the extracted features;
wherein the segmentation map comprises information about relationships of corresponding image elements between all images in a set of training images of media items.
US11/639,593 2005-12-16 2006-12-15 Processing images of media items before validation Abandoned US20070154078A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/639,593 US20070154078A1 (en) 2005-12-16 2006-12-15 Processing images of media items before validation

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US30553705A 2005-12-16 2005-12-16
US11/366,147 US20070140551A1 (en) 2005-12-16 2006-03-02 Banknote validation
US11/639,593 US20070154078A1 (en) 2005-12-16 2006-12-15 Processing images of media items before validation

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/366,147 Continuation-In-Part US20070140551A1 (en) 2005-12-16 2006-03-02 Banknote validation

Publications (1)

Publication Number Publication Date
US20070154078A1 true US20070154078A1 (en) 2007-07-05

Family

ID=37529297

Family Applications (4)

Application Number Title Priority Date Filing Date
US11/366,147 Abandoned US20070140551A1 (en) 2005-12-16 2006-03-02 Banknote validation
US11/639,597 Abandoned US20070154079A1 (en) 2005-12-16 2006-12-15 Media validation
US11/639,576 Active 2028-12-14 US8086017B2 (en) 2005-12-16 2006-12-15 Detecting improved quality counterfeit media
US11/639,593 Abandoned US20070154078A1 (en) 2005-12-16 2006-12-15 Processing images of media items before validation

Family Applications Before (3)

Application Number Title Priority Date Filing Date
US11/366,147 Abandoned US20070140551A1 (en) 2005-12-16 2006-03-02 Banknote validation
US11/639,597 Abandoned US20070154079A1 (en) 2005-12-16 2006-12-15 Media validation
US11/639,576 Active 2028-12-14 US8086017B2 (en) 2005-12-16 2006-12-15 Detecting improved quality counterfeit media

Country Status (5)

Country Link
US (4) US20070140551A1 (en)
EP (4) EP1964073A1 (en)
JP (4) JP5219211B2 (en)
BR (4) BRPI0619845A2 (en)
WO (4) WO2007068867A1 (en)

Families Citing this family (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070140551A1 (en) * 2005-12-16 2007-06-21 Chao He Banknote validation
US8540142B1 (en) * 2005-12-20 2013-09-24 Diebold Self-Service Systems Banking machine controlled responsive to data read from data bearing records
JP4999163B2 (en) * 2006-04-17 2012-08-15 富士フイルム株式会社 Image processing method, apparatus, and program
JP2009545049A (en) * 2006-07-28 2009-12-17 エムイーアイ インコーポレーテッド Classification using support vector machines and variable selection
US8503796B2 (en) 2006-12-29 2013-08-06 Ncr Corporation Method of validating a media item
US8094917B2 (en) * 2008-04-14 2012-01-10 Primax Electronics Ltd. Method for detecting monetary banknote and performing currency type analysis operation
US20090260947A1 (en) * 2008-04-18 2009-10-22 Xu-Hua Liu Method for performing currency value analysis operation
US8682056B2 (en) * 2008-06-30 2014-03-25 Ncr Corporation Media identification
US8085972B2 (en) * 2008-07-03 2011-12-27 Primax Electronics Ltd. Protection method for preventing hard copy of document from being released or reproduced
US7844098B2 (en) * 2008-07-21 2010-11-30 Primax Electronics Ltd. Method for performing color analysis operation on image corresponding to monetary banknote
ES2403458T3 (en) * 2008-07-29 2013-05-20 Mei, Inc. Cash discrimination
CN102165454B (en) * 2008-09-29 2015-08-05 皇家飞利浦电子股份有限公司 For improving the method for computer-aided diagnosis to the probabilistic robustness of image procossing
CN101853389A (en) 2009-04-01 2010-10-06 索尼株式会社 Detection device and method for multi-class targets
RU2421818C1 (en) 2010-04-08 2011-06-20 Общество С Ограниченной Ответственностью "Конструкторское Бюро "Дорс" (Ооо "Кб "Дорс") Method for classification of banknotes (versions)
RU2438182C1 (en) 2010-04-08 2011-12-27 Общество С Ограниченной Ответственностью "Конструкторское Бюро "Дорс" (Ооо "Кб "Дорс") Method of processing banknotes (versions)
CN101908241B (en) * 2010-08-03 2012-05-16 广州广电运通金融电子股份有限公司 Method and system for identifying valued documents
DE102010055974A1 (en) * 2010-12-23 2012-06-28 Giesecke & Devrient Gmbh Method and device for determining a class reference data set for the classification of value documents
NL2006990C2 (en) * 2011-06-01 2012-12-04 Nl Bank Nv Method and device for classifying security documents such as banknotes.
CN102324134A (en) * 2011-09-19 2012-01-18 广州广电运通金融电子股份有限公司 Valuable document identification method and device
CN102592352B (en) * 2012-02-28 2014-02-12 广州广电运通金融电子股份有限公司 Recognition device and recognition method of papery medium
US9734648B2 (en) 2012-12-11 2017-08-15 Ncr Corporation Method of categorising defects in a media item
PL2951791T3 (en) * 2013-02-04 2019-10-31 Kba Notasys Sa Authentication of security documents and mobile device to carry out the authentication
US20140241618A1 (en) * 2013-02-28 2014-08-28 Hewlett-Packard Development Company, L.P. Combining Region Based Image Classifiers
US9727821B2 (en) * 2013-08-16 2017-08-08 International Business Machines Corporation Sequential anomaly detection
US10650232B2 (en) 2013-08-26 2020-05-12 Ncr Corporation Produce and non-produce verification using hybrid scanner
CN103729645A (en) * 2013-12-20 2014-04-16 湖北微模式科技发展有限公司 Second-generation ID card area positioning and extraction method and device based on monocular camera
ES2549461B1 * 2014-02-21 2016-10-07 Banco De España Method and device for characterizing the state of use of banknotes, and classifying them as fit or unfit for circulation
US9336638B2 (en) * 2014-03-25 2016-05-10 Ncr Corporation Media item validation
US9824268B2 (en) * 2014-04-29 2017-11-21 Ncr Corporation Media item validation
US10762736B2 (en) 2014-05-29 2020-09-01 Ncr Corporation Currency validation
CN104299313B (en) * 2014-11-04 2017-08-08 浙江大学 A kind of banknote discriminating method, apparatus and system
DE102015012148A1 (en) * 2015-09-16 2017-03-16 Giesecke & Devrient Gmbh Apparatus and method for counting value document bundles, in particular banknote bundles
CN106056752B (en) * 2016-05-25 2018-08-21 武汉大学 A kind of banknote false distinguishing method based on random forest
US10452908B1 (en) 2016-12-23 2019-10-22 Wells Fargo Bank, N.A. Document fraud detection
CN108460649A (en) * 2017-02-22 2018-08-28 阿里巴巴集团控股有限公司 A kind of image-recognizing method and device
US10475846B2 (en) * 2017-05-30 2019-11-12 Ncr Corporation Media security validation
EP3829152B1 (en) * 2019-11-26 2023-12-20 European Central Bank Computer-implemented method for copy protection, data processing device and computer program product
US20210342797A1 (en) * 2020-05-04 2021-11-04 Bank Of America Corporation Dynamic Unauthorized Activity Detection and Control System
CN113240643A (en) * 2021-05-14 2021-08-10 广州广电运通金融电子股份有限公司 Banknote quality detection method, system and medium based on multispectral image

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5048095A (en) * 1990-03-30 1991-09-10 Honeywell Inc. Adaptive image segmentation system
US5729623A (en) * 1993-10-18 1998-03-17 Glory Kogyo Kabushiki Kaisha Pattern recognition apparatus and method of optimizing mask for pattern recognition according to genetic algorithm
US6163618A (en) * 1997-11-21 2000-12-19 Fujitsu Limited Paper discriminating apparatus
US20030021459A1 (en) * 2000-05-24 2003-01-30 Armando Neri Controlling banknotes
US20030042438A1 (en) * 2001-08-31 2003-03-06 Lawandy Nabil M. Methods and apparatus for sensing degree of soiling of currency, and the presence of foreign material
US20030128874A1 (en) * 2002-01-07 2003-07-10 Xerox Corporation Image type classification using color discreteness features
US20030217906A1 (en) * 2002-05-22 2003-11-27 Gaston Baudat Currency validator
US20040183923A1 (en) * 2003-03-17 2004-09-23 Sharp Laboratories Of America, Inc. System and method for attenuating color-cast correction in image highlight areas
US20040247169A1 (en) * 2003-06-06 2004-12-09 Ncr Corporation Currency validation

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2949823B2 (en) * 1990-10-12 1999-09-20 株式会社村田製作所 Method for manufacturing flat type electrochemical device
CN1095079C (en) * 1993-05-28 2002-11-27 千年风险集团公司 An automatic inspection apparatus
JP3611006B2 (en) * 1997-06-19 2005-01-19 富士ゼロックス株式会社 Image area dividing method and image area dividing apparatus
JP2000215314A (en) * 1999-01-25 2000-08-04 Matsushita Electric Ind Co Ltd Image identifying device
JP2000341512A (en) * 1999-05-27 2000-12-08 Matsushita Electric Ind Co Ltd Image reader
JP2001331839A (en) * 2000-05-22 2001-11-30 Glory Ltd Method and device for discriminating paper money
EP1217589B1 (en) 2000-12-15 2007-02-21 MEI, Inc. Currency validator
US20030099379A1 (en) * 2001-11-26 2003-05-29 Monk Bruce C. Validation and verification apparatus and method
JP4102647B2 (en) * 2002-11-05 2008-06-18 日立オムロンターミナルソリューションズ株式会社 Banknote transaction equipment
JP4252294B2 (en) * 2002-12-04 2009-04-08 株式会社高見沢サイバネティックス Bill recognition device and bill processing device
JP4332414B2 (en) * 2003-03-14 2009-09-16 日立オムロンターミナルソリューションズ株式会社 Paper sheet handling equipment
FR2857481A1 (en) * 2003-07-08 2005-01-14 Thomson Licensing Sa METHOD AND DEVICE FOR DETECTING FACES IN A COLOR IMAGE
JP4532915B2 (en) * 2004-01-29 2010-08-25 キヤノン株式会社 Pattern recognition learning method, pattern recognition learning device, image input device, computer program, and computer-readable recording medium
JP3978614B2 (en) * 2004-09-06 2007-09-19 富士ゼロックス株式会社 Image region dividing method and image region dividing device
JP2006338548A (en) * 2005-06-03 2006-12-14 Sony Corp Printing paper sheet management system, printing paper sheet registration device, method, and program, printing paper sheet discrimination device, method and program
US7961937B2 (en) * 2005-10-26 2011-06-14 Hewlett-Packard Development Company, L.P. Pre-normalization data classification
US20070140551A1 (en) * 2005-12-16 2007-06-21 Chao He Banknote validation
US8611665B2 (en) * 2006-12-29 2013-12-17 Ncr Corporation Method of recognizing a media item
US8503796B2 (en) * 2006-12-29 2013-08-06 Ncr Corporation Method of validating a media item

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7550736B2 (en) * 2001-12-10 2009-06-23 Giesecke & Devrient Gmbh Methods and apparatuses for checking the authenticity of sheet material
US20080185530A1 (en) * 2001-12-10 2008-08-07 Giesecke & Devrient Gmbh Methods and apparatuses for checking the authenticity of sheet material
US20100163466A1 (en) * 2007-06-06 2010-07-01 De La Rue International Limited Apparatus for analysing a security document
US8464875B2 (en) * 2007-06-06 2013-06-18 De La Rue International Limited Apparatus for analysing a security document
US8472676B2 (en) 2007-06-06 2013-06-25 De La Rue International Limited Apparatus and method for analysing a security document
US20100303332A1 (en) * 2007-12-10 2010-12-02 Glory Ltd. Banknote handling machine and banknote handling method
US8630475B2 (en) 2007-12-10 2014-01-14 Glory Ltd. Banknote handling machine and banknote handling method
US9547949B2 (en) * 2010-12-21 2017-01-17 Giesecke & Devrient Gmbh Method and device for examining the optical state of value documents
US20130272598A1 (en) * 2010-12-21 2013-10-17 Giesecke & Devrient Gmbh Method and device for examining the optical state of value documents
US9036890B2 (en) 2012-06-05 2015-05-19 Outerwall Inc. Optical coin discrimination systems and methods for use with consumer-operated kiosks and the like
US9594982B2 (en) 2012-06-05 2017-03-14 Coinstar, Llc Optical coin discrimination systems and methods for use with consumer-operated kiosks and the like
US8739955B1 (en) * 2013-03-11 2014-06-03 Outerwall Inc. Discriminant verification systems and methods for use in coin discrimination
US20160121370A1 (en) * 2013-07-11 2016-05-05 Grg Banking Equipment Co., Ltd. Banknote recognition and classification method and system
US9827599B2 (en) * 2013-07-11 2017-11-28 Grg Banking Equipment Co., Ltd. Banknote recognition and classification method and system
US9443367B2 (en) 2014-01-17 2016-09-13 Outerwall Inc. Digital image coin discrimination for use with consumer-operated kiosks and the like
US10542961B2 (en) 2015-06-15 2020-01-28 The Research Foundation For The State University Of New York System and method for infrasonic cardiac monitoring
US11478215B2 2022-10-25 The Research Foundation for the State University of New York System and method for infrasonic cardiac monitoring
US20170309109A1 (en) * 2016-04-22 2017-10-26 Ncr Corporation Image correction
US10275971B2 (en) * 2016-04-22 2019-04-30 Ncr Corporation Image correction
US20210035293A1 (en) * 2018-04-09 2021-02-04 Toshiba Energy Systems & Solutions Corporation Medical image processing device, medical image processing method, and storage medium
US11830184B2 (en) * 2018-04-09 2023-11-28 Toshiba Energy Systems & Solutions Corporation Medical image processing device, medical image processing method, and storage medium

Also Published As

Publication number Publication date
BRPI0620625A2 (en) 2011-11-16
WO2007068867A1 (en) 2007-06-21
JP2009527028A (en) 2009-07-23
US20070140551A1 (en) 2007-06-21
BRPI0619926A2 (en) 2011-10-25
JP5177817B2 (en) 2013-04-10
EP1964075A1 (en) 2008-09-03
EP1964076A1 (en) 2008-09-03
US8086017B2 (en) 2011-12-27
WO2007068923A1 (en) 2007-06-21
WO2007068930A1 (en) 2007-06-21
US20070154079A1 (en) 2007-07-05
EP1964074A1 (en) 2008-09-03
JP2009527027A (en) 2009-07-23
WO2007068928A1 (en) 2007-06-21
BRPI0619845A2 (en) 2011-10-18
JP2009527029A (en) 2009-07-23
JP5175210B2 (en) 2013-04-03
JP5044567B2 (en) 2012-10-10
US20070154099A1 (en) 2007-07-05
EP1964073A1 (en) 2008-09-03
JP5219211B2 (en) 2013-06-26
JP2009519532A (en) 2009-05-14
BRPI0620308A2 (en) 2011-11-08

Similar Documents

Publication Publication Date Title
US20070154078A1 (en) Processing images of media items before validation
US7639858B2 (en) Currency validation
CN101331527B (en) Processing images of media items before validation
JP5344668B2 (en) Method for automatically confirming securities media item and method for generating template for automatically confirming securities media item
Zeggeye et al. Automatic recognition and counterfeit detection of Ethiopian paper currency
Youn et al. Efficient multi-currency classification of CIS banknotes
Dhar et al. Paper currency detection system based on combined SURF and LBP features
Baek et al. Banknote simulator for aging and soiling banknotes using Gaussian models and Perlin noise
Patgar et al. An unsupervised intelligent system to detect fabrication in photocopy document using geometric moments and gray level co-occurrence matrix
KR20120084946A (en) Method for detecting counterfeits of banknotes using bayesian approach
Alsandi Image Splicing Detection Scheme Using Surf and Mean-LBP Based Morphological Operations
Andrushia et al. An Intelligent Method for Indian Counterfeit Paper Currency Detection
Vishnu et al. Currency detection using similarity indices method
Al-Frajat Selection of Robust Features for Coin Recognition and Counterfeit Coin Detection
Kavinila et al. Detection and Implementation of Indian Currencies Based on Computer Vision Approach
Ganjave et al. Currency Detector for Visually Impaired (Study of The System Which Identifies Indian Currency for Blind People)

Legal Events

Date Code Title Description
AS Assignment

Owner name: NCR CORPORATION, OHIO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HE, CHAO;ROSS, GARY A.;REEL/FRAME:019050/0086;SIGNING DATES FROM 20070124 TO 20070125

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE