CN101366061A - Detecting improved quality counterfeit media items - Google Patents

Detecting improved quality counterfeit media items

Info

Publication number
CN101366061A
Authority
CN
China
Prior art keywords
segmentation
classifier
image
training set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2006800473687A
Other languages
Chinese (zh)
Other versions
CN101366061B (en)
Inventor
Chao He
Gary Ross
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NCR Voyix Corp
Original Assignee
NCR Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US11/366,147 external-priority patent/US20070140551A1/en
Application filed by NCR Corp filed Critical NCR Corp
Publication of CN101366061A publication Critical patent/CN101366061A/en
Application granted granted Critical
Publication of CN101366061B publication Critical patent/CN101366061B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Inspection Of Paper Currency And Valuable Securities (AREA)

Abstract

A method of creating a classifier for media validation is described. Information from all of a set of training images of genuine media items is used to form a segmentation map, which is then used to segment each of the training set images. Features are extracted from the segments and used to form a classifier, which is preferably a one-class statistical classifier. In this way classifiers can be formed quickly and simply, for example for banknotes of different currencies and denominations, and without the need for examples of counterfeit banknotes. A media validator using such a classifier is described, as well as a method of validating a banknote using such a classifier. In a preferred embodiment a plurality of segmentation maps are formed, having different numbers of segments. If higher quality counterfeit media items come into the population of media items, the media validator is able to react immediately, without the need for re-training, by switching to a segmentation map having a higher number of segments.

Description

Detecting counterfeit media of improved quality
Cross Reference to Related Applications
This application is a continuation-in-part of U.S. patent application No. 11/366,147, filed on March 2, 2006, which in turn is a continuation-in-part of U.S. patent application No. 11/305,537, filed on December 16, 2005. Both applications are hereby incorporated by reference.
Technical Field
The present description relates to methods and apparatus for media validation, and in particular to such methods and apparatus that can cope with improved quality counterfeit media items, such as passports, checks, banknotes, bonds and stock certificates.
Background
There is an increasing need to check and validate banknotes of different currencies and denominations automatically, in a simple, reliable and cost-effective manner. This is necessary, for example, in self-service devices that accept banknotes, such as self-service kiosks, ticket vending machines, automated teller machines arranged to take deposits, and self-service currency exchanges. Automatic validation of other types of valuable media, such as passports and checks, may also be necessary.
Previously, manual methods of media validation have involved visual inspection of banknotes, passports, checks and the like, examination of see-through features such as watermarks and alignment marks, and even feel and smell. Other known methods rely on semi-covert features that require semi-manual interrogation, for example using magnetic means, ultraviolet sensors, fluorescence, infrared detectors, capacitance, metal strips and image patterns. However, these methods are by themselves manual or semi-manual and are unsuitable for the many applications, such as self-service devices, where manual intervention is not available for long periods.
There are significant problems to be overcome in order to create an automatic media validator. For example, there are many different types of currency, with different security features and even different substrate types. It is also common for different denominations to carry different levels of security features. It is therefore desirable to provide a generic method of performing currency validation that is easy and convenient to apply across those different currencies and denominations.
In short, the task of a banknote validator is to determine whether a given banknote is genuine or counterfeit. Previous automatic validation methods typically require a relatively large number of samples of known counterfeit banknotes to train the classifier. In addition, those previous classifiers were trained to detect only the known counterfeits. This is problematic because little or no information is usually available about possible counterfeits, particularly for newly introduced denominations or currencies.
In an earlier paper by Chao He, Mark Girolami and Gary Ross (two of whom are inventors of the present application), entitled "Employing optimized combinations of one-class classifiers for automated currency validation", published in Pattern Recognition 37 (2004), pages 1085-1096, an automated currency validation method is described (patent numbers EP1484719, US20042447169). This involves using a grid structure to segment the image of the whole note into regions. A separate "one-class" classifier is built for each region, and a small subset of the region-specific classifiers is combined to provide an overall decision (the term "one-class" is explained in more detail below). The segmentation, and the combination of region-specific classifiers needed to achieve good performance, are found by employing a genetic algorithm. This method requires a small number of counterfeit samples at the genetic algorithm stage, and as such it is not applicable when no counterfeit data can be obtained.
There is also a need to perform automatic currency validation in a manner that is computationally inexpensive to perform in real time.
A further problem relates to the situation where an automated currency validation system is operating relatively successfully in place in a given environment. For example, the environment includes a population of genuine and counterfeit banknotes with a given range and distribution. Such systems are often difficult to adapt if the environment changes abruptly. Suppose, for example, that new, higher quality counterfeit notes suddenly begin to enter the banknote population; police intelligence, manual validation and other sources of information may indicate their presence. In such a case, if a bank or other provider finds that counterfeit notes are being accepted by the automatic currency validator, a business decision is typically made to stop using those machines. However, this is expensive, because manual validation is required as a replacement, and it inconveniences consumers. Significant time and money are also required to upgrade the automatic currency validation system to cope with the higher quality counterfeit notes.
Many of the problems mentioned above also apply to the validation of other types of valuable media (e.g., passports, checks, etc.).
Disclosure of Invention
A method of creating a classifier for media validation is described. A segmentation map is formed using only information from all images in a training set of images of genuine media items, and each training set image is then segmented using the segmentation map. Features are extracted from the segments and used to form a classifier, which is preferably a one-class statistical classifier. In this way classifiers can be formed quickly and simply for different currencies and denominations, and without the need for samples of counterfeit media items. Media validators using such classifiers, and methods of validating banknotes using such classifiers, are described. In a preferred embodiment a plurality of segmentation maps are formed, the segmentation maps having different numbers of segments. If a higher quality counterfeit media item enters the population of media items, the media validator can react immediately, without retraining, by switching to a segmentation map with a higher number of segments.
The method may be performed by software in machine-readable form on a storage medium. The steps of the method may be performed in a suitable order and/or in parallel as will be clear to a person skilled in the art.
This means that the software can be a valuable, separately tradable commodity. It is intended to encompass software that runs on or controls "dumb" or standard hardware to carry out the desired functions (and therefore the software essentially defines the functions of the register, and can be termed a register, even before it is combined with its standard hardware). For similar reasons, software that "describes" or defines the configuration of hardware, such as HDL (hardware description language) software used for designing silicon chips or for configuring universal programmable chips to carry out desired functions, is also included.
As will be clear to the skilled person, the preferred features may be combined as appropriate and with any of the various aspects of the invention.
Drawings
Embodiments of the invention will be described, by way of example, with reference to the following drawings, in which:
FIG. 1 is a flow chart of a method of creating a classifier for banknote validation;
FIG. 2 is a schematic diagram of an apparatus for creating a classifier for banknote validation;
FIG. 3 is a schematic view of a bill validator;
FIG. 4 is a flow chart of a method of validating a banknote;
FIG. 5 is a flow chart of a method of dynamically reacting to the occurrence of counterfeit banknotes of improved quality;
FIG. 6 is a schematic diagram of a segmentation map for two segments;
FIG. 7 is a graph of false acceptance rate/false rejection rate versus the number of segments in the segmentation map for three different currencies;
FIG. 8 is a graph similar to FIG. 7 indicating the selection of the number of segments;
FIG. 9 is a graph similar to FIG. 8 for the case where counterfeit notes of improved quality enter the population;
FIG. 10 is a graph similar to FIG. 8 with the false rejection rate increased;
FIG. 11 is a graph for the case of FIG. 9 but using a segmentation map with a higher number of segments;
FIG. 12 is a schematic diagram of a self-service device with a bill validator.
Detailed Description
Embodiments of the present invention are described below, by way of example only. These examples represent the best modes of practicing the invention presently known to the applicant, and are not the only modes in which the invention can be practiced. Although the present examples are described and illustrated herein as being implemented in a banknote validation system, the described system is provided as an example and not a limitation. Those skilled in the art will appreciate that the present examples are suitable for application in a variety of different types of media validation systems, including but not limited to passport validation systems, check validation systems, bond validation systems, and stock validation systems.
The term "one-class classifier" (or "single class classifier") is used to denote a classifier that is formed or built using information about samples from only a single class, but which is used to decide whether or not newly presented samples belong to that class. This differs from a conventional binary classifier, which is created using information about samples from two classes and is used to assign new samples to one or other of those classes. A one-class classifier can be thought of as defining a boundary around a known class, such that samples falling outside the boundary are deemed not to belong to the known class.
FIG. 1 is a schematic flow chart of a method of creating a classifier for banknote validation.
First, we obtain a training set of images of genuine banknotes (see block 10 of fig. 1). These are images of the same type, taken of banknotes of the same currency and denomination. The type of image refers to how the image is obtained, which may be in any manner known in the art: for example, a reflectance image, a transmission image, an image in any of the red, blue or green channels, a thermal image, an infrared image, an ultraviolet image, an x-ray image, or another image type. The images in the training set are aligned and of the same size. As is known in the art, preprocessing may be performed to align and scale the images if necessary.
Next, we create a segmentation map using information from the training set images (see block 12 of FIG. 1). The segmentation map comprises information about how to divide the image into a plurality of segments. The segments may be non-contiguous, i.e. a given segment may comprise more than one patch in different regions of the image. The segmentation map may be formed in any suitable manner, and examples of some methods are given in detail below. For example, the segments are formed on the basis of the distribution of intensities at each pixel position across the images in the training set, and the relationship of those intensities to the intensities of the other pixels making up the images. Preferably, but not essentially, the segmentation map also specifies the number of segments to be used. For example, fig. 6 is a schematic illustration of a segmentation map 60 having two segments, numbered 1 and 2 in the figure. The segmentation map corresponds to the surface of a banknote, segment 1 comprising those areas labeled 1 and segment 2 comprising those areas labeled 2. A single segmentation map covers the entire surface of the note. When the segmentation is based on pixel information, the maximum number of segments is the total number of pixels in the image of the banknote.
Using the segmentation map, we segment each image in the training set (see block 14 of fig. 1). Then we extract one or more features from each segment of each training set image (see block 15 of fig. 1). By the term "feature" we mean any statistic or other characteristic of the segment: for example, the mean pixel intensity, median pixel intensity, pattern of pixel intensities, texture, histogram, Fourier transform descriptors, wavelet transform descriptors, and/or any other statistics of the segment.
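The segmentation and feature extraction of blocks 14 and 15 can be sketched as follows. This is purely illustrative (the patent does not prescribe an implementation): it assumes NumPy, a segmentation map stored as an integer label array of the same shape as the image, and mean plus standard deviation as the per-segment features.

```python
import numpy as np

def extract_features(image, seg_map, n_segments):
    """Extract simple per-segment statistics from an aligned banknote image.

    image      : 2-D array of pixel intensities
    seg_map    : 2-D integer array of the same shape; seg_map[r, c] is the
                 segment label (0 .. n_segments-1) of pixel (r, c)
    n_segments : number of segments in the segmentation map

    Returns a feature vector of [mean, std] per segment.
    """
    feats = []
    for s in range(n_segments):
        pixels = image[seg_map == s]          # all pixels belonging to segment s
        feats.extend([pixels.mean(), pixels.std()])
    return np.array(feats)
```

Stacking these vectors over the whole training set yields the data on which the classifier of block 16 is trained.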
Then a classifier is formed using the feature information (see block 16 of fig. 1). Any suitable type of classifier known in the art may be used. In a particularly preferred embodiment of the invention, the classifier is a one-class classifier, which requires no information about counterfeit banknotes. However, a binary classifier, or any other suitable type of classifier known in the art, may also be used.
The method of figure 1 enables a classifier for validation of banknotes of a particular currency and denomination to be formed simply, quickly and efficiently. To create classifiers for other currencies or denominations, the method is repeated with the appropriate training set images.
Segmentation maps with different numbers of segments yield different results. In addition, as the number of segments increases, the processing required for each note also increases. Thus, in a preferred embodiment, we perform checks during training and validation (where information about counterfeit banknotes is available) to select the optimal number of segments for the segmentation map.
This is illustrated in figure 1. The classifier is checked (see block 17) to assess its performance in terms of the false acceptance rate and/or the false rejection rate. The false acceptance rate is an indication of how often the classifier judges a counterfeit note to be genuine. The false rejection rate is an indication of how often the classifier judges a genuine banknote to be counterfeit. The validation uses known counterfeits, or "dummy" counterfeits created for validation purposes.
The method of fig. 1 is then repeated for different numbers of segments in the segmentation map, and the optimal number of segments is selected (see block 18). This is done, for example, by forming curves similar to those of figs. 7 and 8. If no counterfeits are available for validation, the number of segments can be set to a number that works well for most currencies. Our experimental results show that currencies with a good security design require only 2 to 5 segments to achieve good false acceptance and false rejection performance, while a currency with a poor security design may require about 15 segments.
The best segmentation map and one or more other optional segmentation maps are then stored (see block 19 of fig. 1). For each of these segmentation maps, an associated set of classification parameters may be computed and stored.
FIG. 7 is a graph of the false accept/false reject rate versus the number of segments in a segmentation map for three currencies using the banknote validation method described herein. The false acceptance rates for the three currencies are shown by the curves a, b, c. The false reject rate is similar for each currency and is represented by line 70.
It can be seen that as the number of segments in the segmentation map increases, the chances of false acceptances decrease. However, there is a small increase in the risk of rejecting genuine banknotes.
In a preferred embodiment, we select the minimum number of segments so that the false acceptance rate is almost zero. For example, FIG. 8, which is similar to FIG. 7, shows the number of segments X selected using this criterion.
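The selection criterion of FIG. 8, the minimum number of segments for which the false acceptance rate is almost zero, can be sketched as follows. The function name, the dictionary input, and the fallback rule are illustrative assumptions, not taken from the patent.

```python
def select_num_segments(far_by_k, threshold=0.0):
    """Pick the smallest candidate number of segments whose measured false
    acceptance rate (FAR) does not exceed the threshold.

    far_by_k : dict mapping candidate segment counts to measured FARs
    Falls back to the count with the lowest FAR if none meets the threshold.
    """
    eligible = [k for k, far in far_by_k.items() if far <= threshold]
    if eligible:
        return min(eligible)
    return min(far_by_k, key=far_by_k.get)
```

For example, with FARs of 0.05, 0.0 and 0.0 measured at 2, 5 and 15 segments, the rule picks 5, the knee of curves like those in FIG. 7.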
However, there may be times during the lifetime of a currency when the quality of counterfeit banknotes improves. For example, the currency may become the target of a more organized counterfeit ring, or more advanced reproduction technology may become available. In this case, counterfeit banknotes may be accepted as genuine by the automated system. This results in an increased false acceptance rate, shown at 90 in fig. 9. If the automatic currency validation system has a segmentation map with only a small number of segments X (see FIGS. 9 and 10), all that can be done is to raise the false rejection rate very high (see 100 of FIG. 10). Counterfeit banknotes are then no longer accepted, but at the cost of rejecting a large proportion of genuine banknotes (100% in the extreme case, i.e. the currency/denomination is temporarily not supported, as is common in current practice). To solve this problem without cutting off service or retraining the classifier, we simply replace the original segmentation map with a predefined alternative segmentation map having a higher number of segments. The first set of classification parameters, associated with the original segmentation map, is replaced by the set of classification parameters associated with the predefined alternative segmentation map.
This is shown in fig. 11. The number of segments in the segmentation map is now Y, which is larger than X. It can be seen that the false reject rate at Y remains low as the false accept rate.
By replacing the set of classification parameters in this way, no retraining is necessary. Thus, the system for automatic currency validation can be quickly and simply adjusted in response to the introduction of higher quality counterfeit notes. This will be described in more detail later in this document with reference to fig. 5.
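As a rough illustration of this parameter-swapping idea, the following sketch stores several pre-trained (segmentation map, classifier parameters) pairs and switches between them at run time. All names and the escalation policy are hypothetical, not taken from the patent.

```python
class MediaValidator:
    """Sketch of a validator holding several pre-trained segmentation maps
    (keyed by their number of segments) together with the classification
    parameters computed for each, so it can switch without retraining."""

    def __init__(self, configurations):
        # configurations: dict mapping n_segments -> (seg_map, classifier_params)
        self.configurations = configurations
        self.active = min(configurations)          # start with the fewest segments

    def escalate(self):
        """React to higher-quality counterfeits: switch to the stored map with
        the next higher number of segments. No retraining is performed."""
        higher = sorted(k for k in self.configurations if k > self.active)
        if higher:
            self.active = higher[0]
        return self.active
```

Because both maps and their classification parameters are computed in advance (block 19 of FIG. 1), the switch is immediate.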
More details regarding examples of segmentation techniques are now given.
Previously, in EP1484719 and US20042447169 (mentioned in the background section above), we used a segmentation technique and genetic algorithm method involving a grid structure over the image plane to form the segmentation map. This necessarily uses information about counterfeit banknotes, and incurs increased computational cost in performing the genetic algorithm search.
The present invention uses a different method of forming segmentation maps that does not require the use of genetic algorithms or equivalent methods to search for good segmentation maps among a large number of possible segmentation maps. This reduces computational cost and improves performance. In addition, no information about counterfeit banknotes is required.
It is believed that in a counterfeiting process it is often difficult to reproduce the entire note with consistent quality; certain regions of a note are therefore harder to replicate successfully than others. We have recognised that, instead of using a strictly uniform grid segmentation, we can improve banknote validation by using more sophisticated segmentations, and empirical tests we have performed confirm this. Segmentation based on morphological characteristics such as pattern, color and texture gives better performance in detecting counterfeit banknotes. However, conventional image segmentation methods, such as those using edge detectors, are difficult to apply to each image in the training set: different results are obtained for each training set item, and it is then difficult to align the corresponding segments across the different training set images. To avoid this alignment problem, in a preferred embodiment we use what is known as "spatio-temporal image decomposition".
Details of the method of forming the segmentation map are now given. In summary, the segmentation map can be thought of as a specification of how to divide the image plane into a plurality of segments, each segment comprising a plurality of specified pixels. As mentioned above, the segments may be non-contiguous. In the present invention, this specification is formed using information from all images in the training set. In contrast, segmentation using a strict grid structure requires no information from the images in the training set.
For example, each segmentation map includes information about the relationship of corresponding image elements between all images in the training set.
Consider the images in the training set stacked and aligned with one another in the same orientation. A given pixel position in the banknote image plane then has a "pixel intensity profile": the intensities of the pixel at that position in each training set image. The pixel positions in the image plane are clustered into segments using any suitable clustering algorithm, such that positions within a segment have similar or related intensity profiles.
In a preferred example we use these pixel intensity profiles; however, this is not essential. Other information drawn from all images in the training set may be used instead, for example an intensity profile of a block of four neighbouring pixels, or the average intensity of the pixels at the same position in each training set image.
A specific preferred embodiment of our method of forming a segmentation map will now be described in detail. It is based on the approach taught in: Avidan, S.: EigenSegments: a spatio-temporal decomposition of an ensemble of images. Lecture Notes in Computer Science, 2352: 747-758, 2002.
Given an image ensemble that has been aligned and scaled to the same size $r \times c$, $I_i$, $i = 1, 2, \ldots, N$, each image $I_i$ can be represented by its pixels in vector form as $[\alpha_{1i}, \alpha_{2i}, \ldots, \alpha_{Mi}]^T$, where $\alpha_{ji}$ ($j = 1, 2, \ldots, M$) is the intensity of the $j$th pixel in the $i$th image, and $M = r \cdot c$ is the total number of pixels in the image. The vectors $I_i$ for all images in the ensemble (each reduced to zero mean) may then be stacked to generate the design matrix

$$A = [I_1, I_2, \ldots, I_N].$$

A row vector of $A$, $[\alpha_{j1}, \alpha_{j2}, \ldots, \alpha_{jN}]$, can be seen as the intensity profile of a particular ($j$th) pixel across the $N$ images. If two pixels come from the same pattern region of the image, they are likely to have similar intensity values and therefore a strong temporal correlation. Note that the term "time" here need not correspond exactly to a time axis; it is used to indicate the axis in the ensemble that passes through the different images. Our algorithm attempts to find these correlations and spatially divides the image plane into regions of pixels with similar temporal behaviour. We measure this correlation by defining a metric between the intensity profiles. A simple choice is the Euclidean distance, i.e. the temporal correlation between two pixels $j$ and $k$ can be expressed as

$$d(j, k) = \sqrt{\sum_{i=1}^{N} \left( \alpha_{ji} - \alpha_{ki} \right)^2}.$$

The smaller $d(j, k)$, the stronger the correlation between the two pixels.
To decompose the image plane spatially using the temporal correlation between pixels, we apply a clustering algorithm to the pixel intensity profiles (the rows of the design matrix A). This results in clusters of temporally correlated pixels. The most straightforward choice is the K-means algorithm, but any other clustering algorithm may be used. As a result, the image plane is divided into segments of temporally correlated pixels. These segments can then be used as a template to segment all the images in the training set, and a classifier can be built on features extracted from those segments of all images in the training set.
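The decomposition described above, clustering the rows of the design matrix A to produce a segmentation map, might be sketched as below. A minimal K-means is written out in NumPy for self-containment; a library implementation, or any other clustering algorithm, could be substituted, and the deterministic centre initialization is an illustrative simplification.

```python
import numpy as np

def segmentation_map(images, n_segments, n_iter=20):
    """Cluster per-pixel intensity profiles across a training set of aligned,
    equally sized images into a segmentation map.

    images : array of shape (N, r, c) -- N training images
    Returns an (r, c) integer array of segment labels.
    """
    N, r, c = images.shape
    # Design matrix A: one row per pixel position, one column per image;
    # each row is that pixel's intensity profile through the ensemble.
    A = images.reshape(N, r * c).T.astype(float)
    A -= A.mean(axis=0)                       # zero-mean each image, as in the stacking step
    # Simple deterministic initialization: evenly spaced profiles as centres.
    idx = np.linspace(0, A.shape[0] - 1, n_segments, dtype=int)
    centres = A[idx].copy()
    for _ in range(n_iter):
        # Euclidean distance d(j, k) between each profile and each centre
        d = np.linalg.norm(A[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for s in range(n_segments):
            if np.any(labels == s):
                centres[s] = A[labels == s].mean(axis=0)
    return labels.reshape(r, c)
```

Pixels whose intensities move together across the ensemble end up in the same (possibly non-contiguous) segment, which can then be used as the template for segmenting every training image.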
In order to achieve training without using counterfeit banknotes, a one-class classifier is preferred. Any suitable type of one-class classifier known in the art may be used, for example neural-network-based or statistically based one-class classifiers.
Suitable statistical methods for one-class classification are typically based on maximizing the log-likelihood ratio under the null hypothesis that the observation under consideration is drawn from the target class. These include the $D^2$ test, which assumes the target class has a multivariate Gaussian distribution (described in Morrison, DF: Multivariate Statistical Methods (third edition), McGraw-Hill, New York, 1990). For arbitrary non-Gaussian distributions, the density of the target class can be estimated using, for example, a semi-parametric Gaussian mixture (Bishop, CM: Neural Networks for Pattern Recognition, Oxford University Press, New York, 1995) or a non-parametric Parzen window (Duda, RO, Hart, PE, Stork, DG: Pattern Classification (second edition), John Wiley and Sons, New York, 2001), and the distribution of the log-likelihood ratio under the null hypothesis can be obtained by sampling techniques such as the bootstrap (Wang, S, Woodward, WA, Gray, HL et al.: A new test for outlier detection from a multivariate mixture distribution, Journal of Computational and Graphical Statistics, 6(3): 285-299, 1997).
Other methods that may be employed for one-class classification are Support Vector Domain Description (SVDD) (Tax, DMJ, Duin, RPW: Support vector domain description, Pattern Recognition Letters, 20(11-12): 1191-1199, 1999), also known as "support estimation" (described in Hayton, P, Scholkopf, B, Tarassenko, L, Anuzis, P: Support Vector Novelty Detection Applied to Jet Engine Vibration Spectra, Advances in Neural Information Processing Systems, 13, eds. Leen, Todd K, Dietterich, Thomas G and Tresp, Volker, MIT Press, 946-952, 2001), and Extreme Value Theory (EVT) (Roberts, SJ: IEE Proceedings on Vision, Image & Signal Processing, 146(3): 124-129, 1999). In SVDD the support of the data distribution is estimated, while EVT estimates the distribution of extreme values. For this particular application a large number of genuine banknote samples is available, so a reliable estimate of the distribution of the target class can be obtained. Thus, in a preferred embodiment, we choose a one-class classification method that explicitly estimates the density distribution, although this is not essential. In a preferred embodiment we use a parametric one-class classification method based on the $D^2$ test.
For example, the statistical hypothesis test underlying our one-class classifier is as follows:
Assume $N$ independent, identically distributed $p$-dimensional vector samples (one feature set per note) $x_1, \ldots, x_N \in C$, with underlying density function $p(x|\theta)$ parameterized by $\theta$. For a new point $x_{N+1}$, the hypothesis test is $H_0: x_{N+1} \in C$ versus $H_1: x_{N+1} \notin C$, where $C$ denotes the region in which the null hypothesis is true, defined by $p(x|\theta)$. The normalized log-likelihood ratio of the null and alternative hypotheses, assuming a uniform distribution under the alternative hypothesis,

$$\lambda = \frac{\sup_{\theta \in \Theta} L_0(\theta)}{\sup_{\theta \in \Theta} L_1(\theta)} = \frac{\sup_{\theta} \prod_{n=1}^{N+1} p(x_n|\theta)}{\sup_{\theta} \prod_{n=1}^{N} p(x_n|\theta)} \qquad (1)$$

may be used as the test statistic for the null hypothesis. In the preferred embodiment, we use this log-likelihood ratio as the test statistic for the validation of a newly presented note.
Feature vector with multivariate gaussian density: assuming that the feature vectors describing individual points in the sample are Multivariate gaussians, the test presented from the above likelihood ratio (1) evaluates whether each point in the sample shares a common mean value (described in (Morrison, DF: Multivariate statistical methods (third edition): McGraw-Hill publishing company, new york, 1990)). Assuming N independent, uniformly distributed p-dimensional vector samples x1, Λ, xNFrom multivariate Normal distribution with mean μ and covariance C, the sample estimate isAndthe random selection of samples is denoted x0Distance of related squared Mahalanobis (Mahalanobis)
$$D^2 = (x_0 - \hat{\mu}_N)^T \, \hat{C}_N^{-1} \, (x_0 - \hat{\mu}_N) \tag{2}$$
which can be transformed into a statistic following a central F distribution with p and N−p−1 degrees of freedom:
$$F = \frac{(N-p-1)\,N D^2}{p\left[(N-1)^2 - N D^2\right]} \tag{3}$$
Then, if
$$F > F_{\alpha;\,p,\,N-p-1} \tag{4}$$
the hypothesis that x_0 and the remaining x_i share a common population mean vector is rejected, where F_{α; p, N−p−1} is the upper α·100% point of the F distribution with (p, N−p−1) degrees of freedom.
Now suppose x_0 is chosen as the observation vector with the largest D² statistic. The distribution of the maximum D² from a random sample of size N is complicated; however, a conservative approximation to its upper 100α percent critical value can be obtained from the Bonferroni inequality. Therefore, if
$$F > F_{\frac{\alpha}{N};\,p,\,N-p-1} \tag{5}$$
then we can conclude that x_0 is an outlier.
In fact, both equation (4) and equation (5) may be used for outlier detection.
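As one concrete illustration (not the patent's own implementation), the outlier test of equations (2)-(5) can be sketched in a few lines of Python, with scipy supplying the F critical point. The function name and the use of scipy are my assumptions; as in the text, x0 is taken to be one of the N sampled points.

```python
import numpy as np
from scipy.stats import f as f_dist

def mahalanobis_outlier_test(X, x0, alpha=0.05, bonferroni=False):
    """F-test of equations (2)-(5): is x0 an outlier with respect to
    the N p-dimensional samples in the rows of X?  x0 is assumed to
    be one of the sampled points."""
    N, p = X.shape
    mu_hat = X.mean(axis=0)                     # sample mean, mu_hat_N
    C_hat = np.cov(X, rowvar=False)             # sample covariance, C_hat_N
    d = x0 - mu_hat
    D2 = d @ np.linalg.inv(C_hat) @ d           # equation (2)
    F = (N - p - 1) * N * D2 / (p * ((N - 1) ** 2 - N * D2))  # equation (3)
    a = alpha / N if bonferroni else alpha      # (5) with Bonferroni, else (4)
    F_crit = f_dist.ppf(1 - a, p, N - p - 1)
    return F > F_crit, F, F_crit
```

With `bonferroni=True` the sketch applies the conservative correction of equation (5), which is appropriate when x0 was picked as the most extreme point of the sample.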
When additional data x_{N+1} become available, in an experimental design where the new sample does not form part of the original sample, we can use the following incremental estimates of the mean and covariance, namely the mean
$$\hat{\mu}_{N+1} = \frac{1}{N+1}\left\{N\hat{\mu}_N + x_{N+1}\right\} \tag{6}$$
and covariance
$$\hat{C}_{N+1} = \frac{N}{N+1}\hat{C}_N + \frac{N}{(N+1)^2}(x_{N+1}-\hat{\mu}_N)(x_{N+1}-\hat{\mu}_N)^T. \tag{7}$$
Using expressions (6) and (7) together with the matrix inversion lemma, equation (2) for the reference set of N samples and the (N+1)th test point becomes
$$D^2 = \sigma_{N+1}^T \, \hat{C}_{N+1}^{-1} \, \sigma_{N+1} \tag{8}$$
where
$$\sigma_{N+1} = (x_{N+1} - \hat{\mu}_{N+1}) = \frac{N}{N+1}\left(x_{N+1} - \hat{\mu}_N\right) \tag{9}$$
$$\hat{C}_{N+1}^{-1} = \frac{N+1}{N}\left(\hat{C}_N^{-1} - \frac{\hat{C}_N^{-1}(x_{N+1}-\hat{\mu}_N)(x_{N+1}-\hat{\mu}_N)^T \hat{C}_N^{-1}}{N+1+(x_{N+1}-\hat{\mu}_N)^T \hat{C}_N^{-1}(x_{N+1}-\hat{\mu}_N)}\right) \tag{10}$$
Denoting
$$D^2_{N+1,N} = (x_{N+1}-\hat{\mu}_N)^T \, \hat{C}_N^{-1} \, (x_{N+1}-\hat{\mu}_N),$$
we then have
$$D^2 = \frac{N\,D^2_{N+1,N}}{N+1+D^2_{N+1,N}}. \tag{11}$$
Therefore, a new point x_{N+1} can be tested against the pooled estimates of the mean μ̂_{N+1} and covariance Ĉ_{N+1}. Although the multivariate Gaussian assumption on the feature vectors has been found to be a suitable choice for many applications, it is often not true in practice. In the following section we abandon this assumption and consider arbitrary densities.
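The incremental estimates (6)-(7) and the shortcut (11) can be checked numerically. The sketch below assumes the maximum-likelihood normalisation of the covariance (division by N), which is the normalisation under which the recursion (7) is exact; function names are illustrative.

```python
import numpy as np

def incremental_update(mu_N, C_N, x_new, N):
    """Incremental mean and covariance updates of equations (6) and (7).
    C_N is the maximum-likelihood covariance (divide-by-N) estimate."""
    mu_next = (N * mu_N + x_new) / (N + 1)                              # (6)
    d = x_new - mu_N
    C_next = (N / (N + 1)) * C_N + (N / (N + 1) ** 2) * np.outer(d, d)  # (7)
    return mu_next, C_next

def D2_shortcut(mu_N, C_N_inv, x_new, N):
    """Equation (11): the squared Mahalanobis distance of x_new against
    the UPDATED estimates, computed purely from the N-sample statistics."""
    d = x_new - mu_N
    D2_ref = d @ C_N_inv @ d          # D^2_{N+1,N}
    return N * D2_ref / (N + 1 + D2_ref)
```

The shortcut avoids forming and inverting Ĉ_{N+1} for every presented note, which is the practical point of equations (9)-(11).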
Feature vectors with arbitrary density: from a finite data sample drawn from an arbitrary density p(x), a probability density estimate p̂(x; θ̂) can be obtained using any suitable semi-parametric (e.g., Gaussian mixture model) or non-parametric (e.g., Parzen window) density estimation method known in the art. This density can then be used in calculating the log-likelihood ratio (1). Unlike the multivariate Gaussian case, under the null hypothesis there is no analytic distribution for the test statistic λ. To obtain such a distribution, a numerical bootstrap method may therefore be employed to approximate the otherwise non-analytic null distribution under the estimated density, and threshold values λ_crit may then be established from the resulting empirical distribution. It can be seen that in the limit N → ∞ the likelihood ratio can be approximated by
$$\lambda = \frac{\sup_{\theta\in\Theta} L_0(\theta)}{\sup_{\theta\in\Theta} L_1(\theta)} \rightarrow \hat{p}\left(x_{N+1};\hat{\theta}_N\right) \tag{12}$$
where p̂(x_{N+1}; θ̂_N) denotes the probability density of x_{N+1} under the model estimated from the original N samples.
After B bootstrap sets of N samples are generated from the reference data set and used to estimate the density parameters θ̂_N^i, B bootstrap replicates of the test statistic, λ_crit^i = p̂(x_{N+1}; θ̂_N^i) for i = 1, …, B, can be obtained by randomly selecting the (N+1)th sample each time. By sorting the λ_crit^i in ascending order, a threshold λ_α can be defined such that the null hypothesis is rejected at the desired significance level if λ ≤ λ_α, where λ_α is the j-th of the sorted values and α = j/(B+1).
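The bootstrap thresholding just described can be sketched as follows, with a Parzen-window (Gaussian KDE) estimate standing in for p̂. The use of scipy's gaussian_kde and all function names are my choices, not the patent's.

```python
import numpy as np
from scipy.stats import gaussian_kde

def bootstrap_threshold(X, B=199, alpha=0.05, seed=0):
    """Empirical null threshold for the density test statistic (12):
    generate B bootstrap resamples of the reference set X (rows),
    re-estimate the density on each, evaluate it at a randomly chosen
    (N+1)th point, and take the j-th smallest value with alpha = j/(B+1)."""
    rng = np.random.default_rng(seed)
    N = len(X)
    lam = np.empty(B)
    for i in range(B):
        idx = rng.integers(0, N, size=N)     # bootstrap resample
        kde_i = gaussian_kde(X[idx].T)       # re-estimated density p_hat^i
        x_test = X[rng.integers(0, N)]       # random (N+1)th sample
        lam[i] = kde_i(x_test)[0]            # replicate lambda_crit^i
    lam.sort()                               # ascending order
    j = max(int(np.floor(alpha * (B + 1))), 1)
    return lam[j - 1]                        # lambda_alpha

def is_genuine(X, x_new, lam_alpha):
    """Accept x_new when its estimated density exceeds the threshold."""
    return gaussian_kde(X.T)(x_new)[0] > lam_alpha
```

A low density p̂(x_new) relative to λ_α corresponds to rejecting the null hypothesis that the note comes from the genuine population.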
Preferably, the method of forming the classifier is repeated for different numbers of segments and verified using images of banknotes known to be genuine. The number of segments giving the best performance, together with the corresponding set of classification parameters, is then selected. We have found that the optimum number of segments is from about 2 to 15 for most currencies, although any suitable number of segments may be used.
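As an illustrative sketch of this selection loop, the code below builds a candidate segmentation map for each segment count and keeps the count whose one-class classifier accepts the most known-genuine validation images. Simple quantile binning of the per-pixel mean intensity stands in for the clustering-based map creation, and a Mahalanobis test with a chi-squared threshold stands in for the classifier; all names are assumptions.

```python
import numpy as np
from scipy.stats import chi2

def build_segmentation_map(train_imgs, k):
    """Stand-in for the clustering step: group pixels into k segments by
    quantile-binning their mean intensity across ALL training images,
    so that every image influences the map."""
    profile = train_imgs.mean(axis=0).ravel()              # per-pixel mean
    edges = np.quantile(profile, np.linspace(0, 1, k + 1)[1:-1])
    return np.digitize(profile, edges)                      # segment id per pixel

def features(img, seg_map, k):
    """One feature per segment: mean intensity over that segment."""
    flat = img.ravel()
    return np.array([flat[seg_map == s].mean() for s in range(k)])

def select_segment_count(train_imgs, val_imgs, candidates, q=0.999):
    """Repeat map building for each candidate k; keep the k whose
    classifier accepts the most known-genuine validation images."""
    best_k, best_score = None, -1.0
    for k in candidates:
        seg = build_segmentation_map(train_imgs, k)
        F = np.array([features(im, seg, k) for im in train_imgs])
        mu = F.mean(axis=0)
        Ci = np.linalg.inv(np.cov(F, rowvar=False) + 1e-9 * np.eye(k))
        thr = chi2.ppf(q, df=k)                    # one-class threshold
        d2 = []
        for im in val_imgs:
            v = features(im, seg, k) - mu
            d2.append(v @ Ci @ v)
        acc = float(np.mean(np.array(d2) <= thr))  # genuine acceptance rate
        if acc > best_score:
            best_k, best_score = k, acc
    return best_k, best_score
```

Because only genuine validation images are needed to score each candidate, the loop matches the one-class setting of the patent: no counterfeit examples are required.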
FIG. 2 is a schematic diagram of an apparatus 20 for creating a classifier 22 for banknote validation. It includes:
an input 21 configured to access a training set of banknote images;
a processor 23 configured to create a plurality of segmentation maps using the training set images, each segmentation map having a different number of segments;
a segmenter 24 configured to segment each training set image using a selected one of the segmentation maps;
a feature extractor 25 configured to extract one or more features from each segment in each training set image;
the processor 23 further configured to calculate a set of classification parameters for each segmentation map using the results of the segmenter 24 and the feature extractor 25;
a classifier forming means 26 configured to form the classifier using a first selected set of classification parameters; and
an adapter 27 configured to replace the first selected set of classification parameters with one of the other sets of classification parameters,
wherein the processor is configured to create each segmentation map based on information from all images in the training set, for example by using the spatio-temporal image decomposition described above.
Optionally, the apparatus for creating the classifier further comprises a selector for selecting, by evaluating the classification performance of each segmentation map, the best segmentation map and/or its associated set of classification parameters, together with one or more alternative segmentation maps and/or associated sets of classification parameters.
Fig. 3 is a schematic diagram of the banknote validator 31. It includes:
an input configured to receive at least one image 30 of a banknote to be validated;
a plurality of segmentation maps 32, each having a different number of segments, comprising one optimal segmentation map determined during the training phase and one or more alternative segmentation maps;
a processor 33 configured to segment the image of the banknote using a first segmentation map;
a feature extractor 34 configured to extract one or more features from each segment of the banknote image;
a classifier 35 configured to classify the banknote as valid or invalid based on the extracted features; and
an adapter 36 configured to replace the first segmentation map with one of the other segmentation maps and to replace the classifier with the classifier associated with that other segmentation map; wherein each segmentation map is formed based on information about each image in the training set of banknote images. Note that the components of fig. 3 are not necessarily independent of one another; they may be integrated.
FIG. 4 is a flow chart of a method of validating a banknote. The method comprises the following steps:
accessing at least one image of the banknote to be validated (block 40);
access the segmentation map (block 41);
segmenting the image of the banknote using the segmentation map (block 42);
extracting features from each segment of the banknote image (block 43);
classifying the banknote as valid or invalid based on the extracted features using a classifier (block 44);
wherein the segmentation map is formed based on information relating to each of a set of training images of the banknote. The steps of the method may be performed in any suitable order or combination, as known in the art. The segmentation map implicitly comprises information about each image in the training set, because it is formed from that information; in practice, however, the segmentation map may simply be a file listing the pixel addresses that belong to each segment.
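The flow of blocks 40-44 can be sketched end-to-end, storing the segmentation map in the simple pixel-address-list form just mentioned and using a Mahalanobis-distance one-class classifier as one plausible choice; all concrete names are illustrative, not the patent's.

```python
import numpy as np

def segment_features(image, seg_map):
    """Blocks 42-43: one feature (mean intensity) per segment, where
    seg_map maps each segment id to its list of flat pixel addresses,
    i.e. the simple file format suggested in the text."""
    flat = image.ravel()
    return np.array([flat[np.asarray(px)].mean() for px in seg_map.values()])

class OneClassGaussian:
    """Minimal one-class classifier trained on genuine notes only:
    accept when the squared Mahalanobis distance to the genuine feature
    distribution is below a threshold."""
    def __init__(self, genuine_feats, threshold):
        self.mu = genuine_feats.mean(axis=0)
        C = np.cov(genuine_feats, rowvar=False)
        self.Ci = np.linalg.inv(C + 1e-9 * np.eye(C.shape[0]))  # jitter for stability
        self.threshold = threshold

    def is_valid(self, feat):
        d = feat - self.mu
        return float(d @ self.Ci @ d) <= self.threshold

def validate(image, seg_map, classifier):
    """Blocks 40-44: segment the image, extract features, classify."""
    return classifier.is_valid(segment_features(image, seg_map))
```

Only genuine training examples are needed to fit the classifier, matching the one-class design described in the abstract.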
FIG. 5 is a flow chart of a method of dynamically adjusting a banknote validator. Information relating to the occurrence of counterfeit banknotes that may be being accepted by the system is received (see block 50). The information is received at the banknote validator, or at a central management location which then communicates it to one or more banknote validators; for example, the central management node issues instructions to the banknote validators over a communications network or in any other suitable manner.
The received information or instructions trigger activation of an alternative stored segmentation map (see block 51). This segmentation map has a different number of segments (typically a higher number) than the previously used map. The alternative segmentation map may be stored locally in the self-service device in advance, or held on a server and distributed remotely over the network to the affected devices when needed. Once activated, the alternative segmentation map is used in place of the previous one, as described with reference to fig. 4: the image is segmented using the alternative segmentation map (block 52), features are extracted from each segment (see block 53), and the note is classified based on the extracted features (see block 54). Each stored segmentation map may also be associated with a pre-computed, stored set of classification parameters; in that case, the received information (block 50) may trigger an alternative set of classification parameters to be used in the classifier for classifying media items as described herein.
While the alternative segmentation map is in use, developers can create a new segmentation map, using a smaller number of segments than the alternative map, to counter the counterfeit attack. The alternative segmentation map thus keeps automatic currency validation operating while any retraining, template development and distribution of the resulting material takes place.
In the method described above, only one alternative segmentation map is created and stored. However, a plurality of alternative segmentation maps with different numbers of segments may be created and stored. The alternative segmentation map to use may then be selected by trial and error, or on the basis of prior experience and/or detailed information about the particular counterfeit attack being experienced.
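The switching behaviour of blocks 50-51 can be sketched as a small adapter class holding several pre-computed (segmentation map, classifier) pairs keyed by segment count; an alert moves to a finer map, and the validator can later drop back. This is an illustrative design, not the patent's implementation.

```python
class AdaptiveValidator:
    """Holds pre-computed (segmentation map, classifier) pairs, keyed by
    their number of segments, and swaps the active pair without any
    retraining."""
    def __init__(self, maps_and_classifiers, initial_segments):
        self._bank = dict(maps_and_classifiers)   # {n_segments: (map, clf)}
        self.active = initial_segments

    def on_counterfeit_alert(self):
        """Block 51: react to an alert by activating the stored map with
        the next higher number of segments, if one exists."""
        higher = sorted(k for k in self._bank if k > self.active)
        if higher:
            self.active = higher[0]

    def relax(self):
        """Drop back to the next lower segment count once the counterfeit
        source is stopped (see the 15-segment example in the text)."""
        lower = sorted((k for k in self._bank if k < self.active), reverse=True)
        if lower:
            self.active = lower[0]

    def validate(self, image):
        seg_map, clf = self._bank[self.active]
        return clf(image, seg_map)     # segment + extract + classify
```

Because every map and classifier is pre-computed and stored, the reaction to a higher-quality counterfeit attack is immediate, which is the point the text makes about avoiding re-training.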
In addition, the methods described herein focus on the case where the number of segments increases; however, the number of segments may also be reduced. For example, suppose the alternative map in use has 15 segments, incurring a relatively high processing cost and burden. If the source of the counterfeit banknotes is later shut down, the validator can revert to a segmentation map with fewer segments.
Previously, segmentation was based solely on spatial location. We improve on this by segmenting on the basis of feature values, such as the pixel intensity profiles of the pixels across the training set. In this way each training set image influences the segmentation, which was not the case previously when, for example, grid-based segmentation was used.
Figure 12 is a schematic diagram of a self-service device 121 with a banknote validator 123. It includes:
means 120 for accepting banknotes;
an imaging device 122 for obtaining a digital image of the banknote; and
the banknote validator 123 as described above.
As with the imaging device, the means for accepting banknotes may be of any suitable type known in the art. A feature selection algorithm may be used to select one or more features for use in the feature extraction step. In addition to the feature information discussed herein, the classifier may also be formed based on specific information relating to a particular denomination or currency of banknote, for example information associated with regions that are particularly rich in data in terms of colour, shape or spatial frequency for a given currency and denomination.
The methods described herein may be performed on an image or other representation of a banknote, of any suitable type: for example, images on the red, blue and green channels, or other images as described above.
A segmentation map may be formed based on only one type of image, such as the red channel. Alternatively, the segmentation map may be formed based on images of all types (e.g., red, blue, and green channels). Multiple segmentation maps may also be formed, one for each image type or for a combination of image types. For example, there may be three segmentation maps, one for the red channel image, one for the blue channel image, and one for the green channel image. In this case, during individual banknote validation, an appropriate segmentation map/classifier may be used depending on the type of image selected. Thus, each of the above methods can be modified by using different types of images and corresponding segmentation maps/classifiers.
As will be apparent to the skilled person: any range or device value given herein may be extended or altered without loss of effect.
It should be understood that the above description of the preferred embodiments is given by way of example only and that various modifications may be made by those skilled in the art.

Claims (20)

1. A method of creating a classifier for media validation, the method comprising the steps of:
(i) accessing a training set of images of media items;
(ii) creating a plurality of segmentation maps using the training set images, each segmentation map comprising information about relationships of corresponding image elements between all images in the training set, and each segmentation map having a different number of segments;
(iii) for each segmentation map, computing a set of classification parameters by segmenting each training set image using the segmentation map, and extracting one or more features from each segment of each of the training set images;
(iv) forming the classifier using a first selected one of a set of classification parameters; and
(v) replacing the first set of classification parameters with one of the other sets of classification parameters.
2. The method of claim 1, wherein the first selected set of classification parameters is selected based on a verification of a classifier using information about known counterfeits.
3. The method of claim 1, wherein the first selected set of classification parameters is selected based on information about classification performance of a segmentation map having a plurality of different numbers of segments.
4. The method of claim 1, wherein the step of replacing the first selected set of classification parameters is performed based on information relating to a change in the population of media items.
5. The method of claim 1, wherein the segmentation map is created by using a clustering algorithm to cluster pixels in the image plane based on their values across all images in the training set.
6. The method of claim 1, further comprising selecting one or more types of features for use in the step (iii) of extracting features using a feature selection algorithm.
7. The method of claim 1, wherein the classifier is for banknote validation, and the method further comprises: forming the classifier based on specific information relating to a particular denomination and currency of the banknote.
8. The method of claim 1, further comprising: (vi) combining classifiers as necessary in the step of forming said classifier.
9. An apparatus for creating a banknote classifier comprising:
(i) an input configured to access a training set of images of media items;
(ii) a processor configured to create a plurality of segmentation maps using a training set of images, each segmentation map comprising information about the relationship of corresponding image elements between all images in the training set and each segmentation map having a different number of segments;
(iii) a segmenter configured to segment each of the training set images using a first one of the segmentation maps;
(iv) a feature extractor configured to extract one or more features from each segment in each of the training set images;
(v) a classification formation device configured to form the classifier using the feature information;
(vi) a selector configured to select the best segmentation map and one or more alternative segmentation maps by evaluating the classification performance corresponding to each of the segmentation maps created in step (ii).
10. A media validator comprising:
(i) an input configured to receive at least one image of a media item to be authenticated;
(ii) a plurality of segmentation maps, each segmentation map having a different number of segments and each segmentation map including information about the relationship of corresponding image elements between all images in a training set of media items;
(iii) a processor configured to segment the image of the media item using a first one of the segmentation maps;
(iv) a feature extractor configured to extract one or more features from each segment of the image of the media item;
(v) a classifier configured to classify the media item as valid or invalid based on the extracted features; and
(vi) an adapter configured to replace the first segmentation map with one of the other segmentation maps and to modify the classifier accordingly.
11. A media validator as claimed in claim 10 wherein the segmentation maps comprise morphological information.
12. A media validator as claimed in claim 10 wherein the segmentation maps comprise information about pixels at the same location in each of the training set images.
13. A media validator as claimed in claim 10 wherein the segmentation maps comprise pixel intensity profiles.
14. A media validator as claimed in claim 10 wherein the classifier is a one-class classifier.
15. A method of authenticating a media item, comprising:
(i) accessing at least one image of a media item to be authenticated;
(ii) accessing a plurality of segmentation maps, the plurality of segmentation maps comprising information about relationships of corresponding image elements between all images in the training set, and each segmentation map having a different number of segments;
(iii) selecting one of a plurality of segmentation maps;
(iv) segmenting the image of the media item using the selected segmentation map;
(v) extracting features from each segment of the image of the media item;
(vi) classifying the media item based on the extracted features using a classifier.
16. The method of claim 15, wherein the segmentation map in step (iii) is selected based on information about changes in the population of media items.
17. The method of claim 16, wherein the information comprises information relating to the quality of counterfeit media items.
18. A computer program comprising computer program code means adapted to perform all the steps of a method of creating a classifier for media validation, said method comprising the steps of:
(i) accessing a training set of images of media items;
(ii) creating a plurality of segmentation maps using the training set images, each segmentation map comprising information about relationships of corresponding image elements between all images in the training set, and each segmentation map having a different number of segments;
(iii) for each segmentation map, computing a set of classification parameters by segmenting each training set image using the segmentation map, and extracting one or more features from each segment of each of the training set images;
(iv) forming the classifier using a first selected one of a set of classification parameters; and
(v) replacing the first set of classification parameters with one of the other sets of classification parameters.
19. The computer program of claim 18 embodied on a computer readable medium.
20. A self-service device comprising:
(i) means for accepting a media item;
(ii) an imaging device for obtaining a digital image of a media item;
(iii) a media validator comprising
(i) An input configured to receive at least one image of a media item to be authenticated;
(ii) a plurality of segmentation maps, each segmentation map having a different number of segments and each segmentation map including information about the relationship of corresponding image elements between all images in a training set of media items;
(iii) a processor configured to segment the image of the media item using a first one of the segmentation maps;
(iv) a feature extractor configured to extract one or more features from each segment of the image of the media item;
(v) a classifier configured to classify the media item as valid or invalid based on the extracted features; and
(vi) an adapter configured to replace the first segmentation map with one of the other segmentation maps and to modify the classifier accordingly.
CN2006800473687A 2005-12-16 2006-12-14 Detecting improved quality counterfeit media items Active CN101366061B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US30553705A 2005-12-16 2005-12-16
US11/305,537 2005-12-16
US11/366,147 US20070140551A1 (en) 2005-12-16 2006-03-02 Banknote validation
US11/366,147 2006-03-02
PCT/GB2006/004670 WO2007068928A1 (en) 2005-12-16 2006-12-14 Detecting improved quality counterfeit media

Publications (2)

Publication Number Publication Date
CN101366061A true CN101366061A (en) 2009-02-11
CN101366061B CN101366061B (en) 2010-12-08

Family

ID=40206435

Family Applications (4)

Application Number Title Priority Date Filing Date
CN2006800473583A Expired - Fee Related CN101331526B (en) 2005-12-16 2006-09-26 Banknote validation
CN2006800472788A Expired - Fee Related CN101366060B (en) 2005-12-16 2006-12-14 Media validation
CN2006800473687A Active CN101366061B (en) 2005-12-16 2006-12-14 Detecting improved quality counterfeit media items
CN2006800475165A Expired - Fee Related CN101331527B (en) 2005-12-16 2006-12-14 Processing images of media items before validation

Family Applications Before (2)

Application Number Title Priority Date Filing Date
CN2006800473583A Expired - Fee Related CN101331526B (en) 2005-12-16 2006-09-26 Banknote validation
CN2006800472788A Expired - Fee Related CN101366060B (en) 2005-12-16 2006-12-14 Media validation

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN2006800475165A Expired - Fee Related CN101331527B (en) 2005-12-16 2006-12-14 Processing images of media items before validation

Country Status (1)

Country Link
CN (4) CN101331526B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102110323A (en) * 2011-01-14 2011-06-29 深圳市怡化电脑有限公司 Method and device for examining money
WO2012145909A1 (en) * 2011-04-28 2012-11-01 中国科学院自动化研究所 Method for detecting tampering with color digital image based on chroma of image

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102010055974A1 (en) 2010-12-23 2012-06-28 Giesecke & Devrient Gmbh Method and device for determining a class reference data set for the classification of value documents
CN102306415B (en) * 2011-08-01 2013-06-26 广州广电运通金融电子股份有限公司 Portable valuable file identification device
CN102565074B (en) * 2012-01-09 2014-02-05 西安印钞有限公司 System and method for rechecking images of suspected defective products by small sheet sorter
US8983168B2 (en) * 2012-04-30 2015-03-17 Ncr Corporation System and method of categorising defects in a media item
US9299225B2 (en) * 2014-06-23 2016-03-29 Ncr Corporation Value media dispenser recognition systems
CN105184954B (en) * 2015-08-14 2018-04-06 深圳怡化电脑股份有限公司 A kind of method and banknote tester for detecting bank note
DE102015016716A1 (en) * 2015-12-22 2017-06-22 Giesecke & Devrient Gmbh Method for transmitting transmission data from a transmitting device to a receiving device for processing the transmission data and means for carrying out the method
CN108074320A (en) * 2016-11-10 2018-05-25 深圳怡化电脑股份有限公司 A kind of image-recognizing method and device
CN108806058A (en) * 2017-05-05 2018-11-13 深圳怡化电脑股份有限公司 A kind of paper currency detecting method and device
CN107705417A (en) * 2017-10-10 2018-02-16 深圳怡化电脑股份有限公司 Recognition methods, device, finance device and the storage medium of bank note version
CN111480167A (en) * 2017-12-20 2020-07-31 艾普维真股份有限公司 Authenticated machine learning with multi-digit representation
CN110910561B (en) * 2018-09-18 2021-11-16 深圳怡化电脑股份有限公司 Banknote contamination identification method and device, storage medium and financial equipment
TWI709188B (en) * 2018-09-27 2020-11-01 財團法人工業技術研究院 Fusion-based classifier, classification method, and classification system
CN111599081A (en) * 2020-05-15 2020-08-28 上海应用技术大学 Method and system for collecting and dividing RMB paper money
CN113538809B (en) * 2021-06-11 2023-08-04 深圳怡化电脑科技有限公司 Data processing method and device based on self-service equipment

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5729623A (en) * 1993-10-18 1998-03-17 Glory Kogyo Kabushiki Kaisha Pattern recognition apparatus and method of optimizing mask for pattern recognition according to genetic algorithm
JP3369088B2 (en) * 1997-11-21 2003-01-20 富士通株式会社 Paper discrimination device
CN100414473C (en) * 2001-10-30 2008-08-27 松下电器产业株式会社 Method, system, device and computer program for mutual authentication and content protection
JP4105694B2 (en) * 2002-08-30 2008-06-25 富士通株式会社 Paper sheet discrimination apparatus, paper sheet discrimination method, and program
US7194105B2 (en) * 2002-10-16 2007-03-20 Hersch Roger D Authentication of documents and articles by moiré patterns
GB0313002D0 (en) * 2003-06-06 2003-07-09 Ncr Int Inc Currency validation
JP2005018688A (en) * 2003-06-30 2005-01-20 Asahi Seiko Kk Banknote validator with reflective optical sensor

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102110323A (en) * 2011-01-14 2011-06-29 深圳市怡化电脑有限公司 Method and device for banknote inspection
WO2012145909A1 (en) * 2011-04-28 2012-11-01 中国科学院自动化研究所 Method for detecting tampering with color digital images based on image chroma

Also Published As

Publication number Publication date
CN101331527B (en) 2011-07-06
CN101331527A (en) 2008-12-24
CN101366061B (en) 2010-12-08
CN101366060B (en) 2012-08-29
CN101331526B (en) 2010-10-13
CN101331526A (en) 2008-12-24
CN101366060A (en) 2009-02-11

Similar Documents

Publication Publication Date Title
CN101366061B (en) Detecting improved quality counterfeit media items
JP5219211B2 (en) Banknote validation method and apparatus
US7639858B2 (en) Currency validation
JP5344668B2 (en) Method for automatically validating a security media item and method for generating a template for automatically validating a security media item
CN104298989B (en) Counterfeit detection method and system based on zebra-stripe infrared image features
Zeggeye et al. Automatic recognition and counterfeit detection of Ethiopian paper currency
Shahani et al. Analysis of banknote authentication system using machine learning techniques
KR101232684B1 (en) Method for detecting counterfeits of banknotes using Bayesian approach
Sodhi et al. A Robust Invariant Image-Based Paper-Currency Recognition Based on F-kNN
US10438436B2 (en) Method and system for detecting staining
WoldeHana et al. An Explainable Counterfeit and Genuine Ethiopian Banknote Classification Using Deep Learning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant