WO2015193152A1 - Method for detecting a viewing-angle-dependent feature of a document - Google Patents


Info

Publication number
WO2015193152A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
document
camera
document image
spatial position
Application number
PCT/EP2015/062943
Other languages
German (de)
English (en)
Inventor
Andreas Hartl
Dieter Schmalstieg
Olaf Dressel
Original Assignee
Bundesdruckerei Gmbh
Application filed by Bundesdruckerei Gmbh filed Critical Bundesdruckerei Gmbh
Priority to EP15728835.8A priority Critical patent/EP3158543B1/fr
Publication of WO2015193152A1 publication Critical patent/WO2015193152A1/fr


Classifications

    • G - PHYSICS
    • G07 - CHECKING-DEVICES
    • G07D - HANDLING OF COINS OR VALUABLE PAPERS, e.g. TESTING, SORTING BY DENOMINATIONS, COUNTING, DISPENSING, CHANGING OR DEPOSITING
    • G07D7/00 - Testing specially adapted to determine the identity or genuineness of valuable papers or for segregating those which are unacceptable, e.g. banknotes that are alien to a currency
    • G07D7/20 - Testing patterns thereon
    • G07D7/2008 - Testing patterns thereon using pre-processing, e.g. de-blurring, averaging, normalisation or rotation
    • G - PHYSICS
    • G07 - CHECKING-DEVICES
    • G07D - HANDLING OF COINS OR VALUABLE PAPERS, e.g. TESTING, SORTING BY DENOMINATIONS, COUNTING, DISPENSING, CHANGING OR DEPOSITING
    • G07D7/00 - Testing specially adapted to determine the identity or genuineness of valuable papers or for segregating those which are unacceptable, e.g. banknotes that are alien to a currency
    • G07D7/003 - Testing specially adapted to determine the identity or genuineness of valuable papers or for segregating those which are unacceptable, e.g. banknotes that are alien to a currency using security elements
    • G07D7/0032 - Testing specially adapted to determine the identity or genuineness of valuable papers or for segregating those which are unacceptable, e.g. banknotes that are alien to a currency using security elements using holograms

Definitions

  • the present invention relates to the field of detection of viewing-angle-dependent features, in particular of holograms, on documents.
  • viewing-angle-dependent features such as holograms are applied to identification documents or banknotes in order to make it difficult to copy the documents.
  • viewing-angle-dependent features can have viewing-angle-dependent representations.
  • a verification of a viewing-angle-dependent feature for checking the authenticity or genuineness of a document can be performed manually by a person.
  • the detection of the viewing angle-dependent feature of the document is done visually by the person.
  • the viewing angle-dependent feature can be visually verified by the person, for example by a visual comparison of the representations of the viewing angle-dependent feature with previously known reference representations. Detection and verification of a viewing-angle-dependent feature by a person is usually very time-consuming.
  • the use of electronic assistance systems is of particular interest for the verification of a viewing-angle-dependent feature of a document.
  • the invention is based on the finding that the above object can be achieved by capturing images of the document in different spatial positions relative to the document and by determining an image difference between the captured images.
  • the viewing angle-dependent feature in the images of the document has different viewing angle-dependent representations.
  • by determining image differences between the captured images, image areas with strong optical changes can be efficiently assigned to the viewing-angle-dependent feature.
  • the invention relates to a method for detecting a viewing-angle-dependent feature of a document using an image camera, wherein the viewing-angle-dependent feature comprises viewing-angle-dependent representations, the method comprising: capturing a first image of the document by the image camera in a first spatial position of the document relative to the image camera to obtain a first document image; capturing a second image of the document by the image camera in a second spatial position of the document relative to the image camera to obtain a second document image; and determining an image difference between the first document image and the second document image in order to detect the viewing-angle-dependent feature of the document.
  • the advantage is achieved that an efficient concept for detecting a viewing angle-dependent feature of a document can be realized.
  • the viewing angle-dependent feature may have viewing angle-dependent representations and / or lighting angle-dependent representations.
  • the document may be one of the following documents: an identity document such as an identity card, a passport, an access control card, an authorization card, a business card, a tax stamp, a ticket, a birth certificate, a driver's license, a vehicle registration document, or a means of payment such as a bank card or a credit card.
  • the document may further comprise an electronically readable circuit, such as an RFID chip.
  • the document may be single-layer or multi-layered as well as paper and / or plastic-based.
  • the document may be constructed of plastic-based films which are joined together to form a card body by means of gluing and / or lamination, the films preferably having similar material properties.
  • the first spatial position of the document relative to the image camera may include an arrangement and / or inclination of the document relative to the image camera.
  • the first spatial position may comprise a six-degree-of-freedom pose, where three degrees of freedom may be associated with the arrangement, for example a translation, and three degrees of freedom may be associated with the inclination, for example a rotation.
  • the second spatial position of the document relative to the image camera may include an arrangement and / or inclination of the document, for example comprising a translation and a rotation, relative to the image camera.
  • the second spatial position may comprise a six-degree-of-freedom pose, where three degrees of freedom may be associated with the arrangement, for example a translation, and three degrees of freedom may be associated with the inclination, for example a rotation.
  • the first document image may be a color image or a grayscale image.
  • the first document image may include a plurality of pixels.
  • the second document image may be a color image or a grayscale image.
  • the second document image may include a plurality of pixels.
  • the first document image and the second document image can form an image stack.
  • the image difference between the first document image and the second document image may be detected based on the plurality of pixels of the first document image and the plurality of pixels of the second document image.
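The pixel-based determination of the image difference can be sketched as follows (an illustrative Python/NumPy sketch; the function name and the choice of an absolute per-pixel difference are assumptions, since the embodiments also allow quadratic deviations):

```python
import numpy as np

def image_difference(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Per-pixel absolute difference between two equally sized
    grayscale document images."""
    if img_a.shape != img_b.shape:
        raise ValueError("document images must have equal dimensions")
    # Widen the type so the subtraction cannot wrap around.
    a = img_a.astype(np.int16)
    b = img_b.astype(np.int16)
    return np.abs(a - b).astype(np.uint8)
```

Image areas where a hologram changes its appearance between the two viewing angles show up as large values in the resulting difference image.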
  • the method comprises capturing a plurality of images of the document by the image camera in different spatial positions of the document relative to the image camera, wherein capturing the first image of the document comprises selecting the first image from the plurality of images in the first spatial position, and wherein capturing the second image of the document comprises selecting the second image from the plurality of images in the second spatial position.
  • the capturing of the plurality of images of the document may include determining a respective spatial location based on a respective image.
  • the respective spatial locations may be compared to the first spatial location to select the first image from the plurality of images.
  • the respective spatial locations may be compared to the second spatial location to select the second image from the plurality of images.
  • the first spatial position and the second spatial position can be predetermined.
  • capturing the first image of the document further comprises a perspective equalization of the first document image based on the first spatial position.
  • capturing the second image of the document further comprises a perspective equalization of the second document image based on the second spatial position.
  • By the perspective equalization of the first document image, a rectangular first document image can be provided.
  • By the perspective equalization of the second document image, a rectangular second document image can be provided.
  • the perspective equalization of the first document image may include scaling the first document image.
  • the perspective equalization of the second document image may include scaling the second document image.
  • the method further comprises determining the first spatial position of the document relative to the image camera on the basis of the first document image and / or determining the second spatial position of the document relative to the image camera on the basis of the second document image.
  • Determining the first spatial position and determining the second spatial position may include determining a respective homography.
  • the respective spatial position is determined by means of edge detection.
  • the edge detection may comprise a detection of lines, rectangles, parallelograms or trapezoids in the first document image and / or in the second document image.
  • the edge detection can be performed using a Hough transform.
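The line detection by means of a Hough transform can be sketched as follows (a minimal NumPy implementation of the standard (rho, theta) accumulator; the angular resolution and vote threshold are assumptions):

```python
import numpy as np

def hough_lines(edge_img: np.ndarray, n_theta: int = 180, threshold: int = 40):
    """Minimal Hough transform for line detection.

    edge_img: binary image, nonzero pixels are edge points.
    Returns a list of (rho, theta) pairs whose accumulator
    count exceeds the threshold."""
    h, w = edge_img.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edge_img)
    for theta_idx, theta in enumerate(thetas):
        # Each edge pixel votes for the line rho = x*cos(theta) + y*sin(theta).
        rhos = (xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + diag
        np.add.at(acc, (rhos, theta_idx), 1)
    peaks = np.argwhere(acc > threshold)
    return [(int(r) - diag, float(thetas[t])) for r, t in peaks]
```

Detected lines can then be grouped by their coarse direction, as described below, to form rectangle hypotheses.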
  • the respective document image is low-pass filtered for noise reduction. This achieves the advantage that the image difference can be detected efficiently.
  • the low-pass filtering can be performed by means of a windowed average filter or a windowed Gaussian filter.
  • the low-pass filtering may further comprise determining a respective integral image of the respective document image, wherein the low-pass filtering may be performed using the respective integral image.
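The windowed mean filtering on the basis of an integral image can be sketched as follows (illustrative Python/NumPy; the edge-replicating border handling is an assumption). The four-lookup window sum makes the cost per pixel independent of the window size:

```python
import numpy as np

def box_filter_integral(img: np.ndarray, win: int = 3) -> np.ndarray:
    """Windowed mean (low-pass) filter computed from an integral image."""
    pad = win // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    # Integral image with a zero row/column prepended.
    ii = np.zeros((padded.shape[0] + 1, padded.shape[1] + 1))
    ii[1:, 1:] = padded.cumsum(0).cumsum(1)
    h, w = img.shape
    # Sum over each win x win window via four integral-image lookups.
    s = (ii[win:win + h, win:win + w] - ii[:h, win:win + w]
         - ii[win:win + h, :w] + ii[:h, :w])
    return s / (win * win)
```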
  • the first document image is compared to the second document image to determine an orientation of the first document image relative to the second document image, wherein the first document image and the second document image are aligned with respect to each other based on the determined orientation. This provides the advantage that the image difference can be efficiently determined.
  • the comparing the first document image with the second document image may include extracting and comparing image features of the first document image and the second document image.
  • the image features may be, for example, BRISK image features or SURF image features.
  • determining the image difference between the first document image and the second document image comprises determining a difference image based on the first document image and the second document image, the difference image indicating an image difference between the first document image and the second document image. This provides the advantage that the image difference can be displayed efficiently based on the difference image.
  • the difference image can be a grayscale image.
  • the difference image may include a plurality of pixels.
  • the difference image can also be assigned to an image stack.
  • an average is determined from a first pixel value of a pixel of the first document image and a second pixel value of a pixel of the second document image, wherein a first deviation of the first pixel value from the average value is determined, wherein a second deviation of the second pixel value from the average value is determined, and wherein the image difference is detected based on the first deviation and the second deviation.
  • the first pixel value and / or the second pixel value may be gray level values.
  • the mean can be an arithmetic mean or a median.
  • the deviation may be a quadratic deviation or an absolute deviation.
  • a first document image mask is determined based on the first document image, wherein a second document image mask is determined based on the second document image, and wherein the image difference is detected based on the first document image mask and the second document image mask.
  • the first document image mask may include pixels having binary-valued pixel values to indicate valid and invalid pixels of the first document image.
  • the second document image mask may include pixels having binary-valued pixel values to indicate valid and invalid pixels of the second document image.
  • the respective document image mask indicates pixels of the respective document image which are usable for detecting the image difference. As a result, the advantage is achieved that only valid pixels of the respective document image are used to detect the image difference.
  • a pixel of a respective document image may be invalid if the pixel is associated with a portion of the document that has been incompletely captured.
  • the image difference is segmented into a plurality of image segments, wherein the viewing angle dependent feature of the document is detected based on at least one image segment of the plurality of image segments.
  • image segments can be used to detect the viewing angle-dependent feature.
  • the image difference may be displayed by a difference image, with the difference image being segmented into the plurality of image segments.
  • the segmentation may be performed by means of a pixel-oriented image segmentation method, an edge-oriented image segmentation method, a region-oriented image segmentation method, a model-oriented image segmentation method, or a texture-oriented image segmentation method.
  • the image segmentation method may include, for example, a maximally stable extremal region (MSER) method or a mean-shift method.
  • the image segments can be contiguous image segments.
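The segmentation of the difference image into contiguous image segments can be illustrated with a simple threshold-plus-connected-components sketch (a simplified stand-in for the MSER and mean-shift methods named above; the threshold value and 4-connectivity are assumptions):

```python
import numpy as np
from collections import deque

def segment_difference_image(diff: np.ndarray, thresh: int = 50):
    """Split a difference image into contiguous image segments by
    thresholding and 4-connected component labeling.
    Returns a list of segments, each a list of (y, x) pixels."""
    mask = diff > thresh
    labels = np.zeros(diff.shape, dtype=np.int32)
    segments = []
    next_label = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue  # pixel already belongs to a segment
        next_label += 1
        pixels = []
        queue = deque([start])
        labels[start] = next_label
        while queue:  # breadth-first flood fill of one segment
            y, x = queue.popleft()
            pixels.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < diff.shape[0] and 0 <= nx < diff.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = next_label
                    queue.append((ny, nx))
        segments.append(pixels)
    return segments
```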
  • an image segment measure is determined for an image segment of the plurality of image segments, wherein the determined image segment measure is compared to a predetermined image segment measure to qualify the image segment for the detection of the viewing-angle-dependent feature. This achieves the advantage that an image segment which satisfies the predetermined image segment measure can be used for the detection of the viewing-angle-dependent feature.
  • the image segment measure may be an area of the image segment, an aspect ratio of the image segment, a compactness of the image segment, a pixel value of a pixel of the image segment, or a homogeneity measure of the image segment.
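The qualification of an image segment against such measures can be sketched as follows (all thresholds are assumptions; compactness is simplified here to the fill ratio of the bounding box, not a perimeter-based measure):

```python
def qualifies(pixels, min_area=4, max_aspect=5.0, min_compactness=0.3):
    """Check an image segment (list of (y, x) pixels) against simple
    measures: area, bounding-box aspect ratio, and compactness."""
    ys = [p[0] for p in pixels]
    xs = [p[1] for p in pixels]
    h = max(ys) - min(ys) + 1
    w = max(xs) - min(xs) + 1
    area = len(pixels)
    aspect = max(h, w) / min(h, w)
    # Compactness here: fraction of the bounding box covered by the segment.
    compactness = area / float(h * w)
    return (area >= min_area and aspect <= max_aspect
            and compactness >= min_compactness)
```

A thin elongated segment (e.g. a leftover document edge) fails the aspect-ratio criterion, while a compact hologram-sized blob passes.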
  • an image segment of the plurality of image segments is associated with a first document image segment of the first document image and a second document image segment of the second document image, wherein the first document image segment is compared with the second document image segment to qualify the image segment for the detection of the viewing angle dependent feature.
  • the comparison of the first document image segment with the second document image segment can be carried out by means of a normalized cross-correlation.
  • the image segment can be qualified, for example, for the detection of the viewing angle-dependent feature if the first document image segment and the second document image segment are different.
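The comparison of the two document image segments by means of a normalized cross-correlation can be sketched as follows (illustrative Python/NumPy; values near 1 mean the patches look alike, so a low value suggests the segment changes with the viewing angle):

```python
import numpy as np

def ncc(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
    """Normalized cross-correlation between two equally sized patches."""
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a -= a.mean()  # remove mean so the result is brightness-invariant
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 0.0  # a flat patch carries no structure to correlate
    return float(np.dot(a, b) / denom)
```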
  • the viewing-angle-dependent feature comprises a hologram or a printing ink with viewing angle-dependent reflection properties or absorption properties.
  • the invention relates to a mobile device for detecting a viewing-angle-dependent feature of a document, wherein the viewing-angle-dependent feature has viewing-angle-dependent representations, with an image camera which is designed to capture a first image of the document in a first spatial position of the document relative to the image camera to obtain a first document image, and to capture a second image of the document in a second spatial position of the document relative to the image camera to obtain a second document image, and a processor which is designed to determine an image difference between the first document image and the second document image in order to detect the viewing-angle-dependent feature of the document.
  • the mobile device may be a mobile phone or a smartphone.
  • the image camera can be a digital image camera.
  • the processor can execute a computer program.
  • the mobile device may further comprise a lighting device for illuminating the document.
  • the illumination device may be an LED illumination device.
  • the method can be carried out by means of the mobile device. Other features of the mobile device result directly from the functionality of the method.
  • the invention relates to a computer program with a program code for carrying out the method when the computer program is executed on a computer. This provides the advantage that the process can be automated and repeatable.
  • the computer program may be in machine readable form.
  • the program code may comprise a sequence of instructions for a processor.
  • the computer program can be executed by the processor of the mobile device.
  • the invention can be implemented in hardware and / or software.
  • FIG. 1 is a diagram of a method for detecting a viewing-angle-dependent feature of a document according to an embodiment;
  • FIG. 2 is a diagram of a mobile device for detecting a viewing-angle-dependent feature of a document according to an embodiment;
  • FIG. 3 is a diagram of a method for detecting a viewing-angle-dependent feature of a document according to an embodiment;
  • FIG. 4 is a diagram of a detection scenario for detecting a viewing-angle-dependent feature of a document according to an embodiment;
  • FIG. 5 is a diagram of a plurality of captured images of the document according to an embodiment;
  • FIG. 6 is a surface diagram of a difference image according to an embodiment;
  • FIG. 7 shows a diagram of a difference image and a contour diagram of a segmented difference image according to an embodiment;
  • FIG. 8 shows contour diagrams with image segments for a plurality of captured images of a document according to an embodiment;
  • FIG. 9 is a diagram of a plurality of spatial positions for capturing a plurality of images of the document according to an embodiment.
  • FIG. 1 shows a diagram of a method 100 for detecting a viewing-angle-dependent feature of a document according to an embodiment.
  • the method 100 is performed using an image camera.
  • the viewing angle-dependent feature has viewing angle-dependent representations.
  • the method 100 comprises capturing 101 a first image of the document by the image camera in a first spatial position of the document relative to the image camera to obtain a first document image, capturing 103 a second image of the document by the image camera in a second spatial position of the document relative to the image camera to obtain a second document image, and detecting 105 an image difference between the first document image and the second document image to detect the viewing-angle dependent feature of the document.
  • the viewing angle-dependent feature may have viewing angle-dependent representations and / or lighting angle-dependent representations.
  • the document may be one of the following: an identity document such as an identity card, a passport, an access control card, an authorization card, a business card, a tax stamp, a ticket, a birth certificate, a driver's license, a vehicle registration document, or a means of payment such as a bank card or a credit card.
  • the document may further comprise an electronically readable circuit, such as an RFID chip.
  • the document may be single-layer or multi-layered as well as paper and / or plastic-based.
  • the document can be constructed of plastic-based films which are joined together to form a card body by means of gluing and / or lamination, wherein the films preferably have similar material properties.
  • the first spatial position of the document relative to the image camera may include an arrangement and / or inclination of the document, for example comprising a translation and a rotation, relative to the image camera.
  • the first spatial position may comprise a six-degree-of-freedom pose, where three degrees of freedom may be associated with the arrangement, and where three degrees of freedom may be associated with the inclination.
  • the second spatial position of the document relative to the image camera may include an arrangement and / or inclination of the document relative to the image camera.
  • the second spatial position may comprise a six-degree-of-freedom pose, where three degrees of freedom may be associated with the arrangement, and where three degrees of freedom may be associated with the inclination.
  • the first document image may be a color image or a grayscale image.
  • the first document image may include a plurality of pixels.
  • the second document image may be a color image or a grayscale image.
  • the second document image may include a plurality of pixels.
  • the first document image and the second document image can form an image stack.
  • FIG. 2 shows a diagram of a mobile device 200 for detecting a viewing-angle-dependent feature of a document according to one embodiment.
  • the viewing angle-dependent feature has viewing angle-dependent representations.
  • the mobile device 200 includes an image camera 201 that is configured to capture a first image of the document in a first spatial position of the document relative to the image camera to obtain a first document image, and to capture a second image of the document in a second spatial position of the document relative to the image camera to obtain a second document image, and a processor 203 which is configured to determine an image difference between the first document image and the second document image in order to detect the viewing-angle-dependent feature of the document.
  • the mobile device 200 may be a mobile phone or a smartphone.
  • the image camera 201 may be a digital image camera.
  • the processor 203 may execute a computer program.
  • the image camera 201 may be connected to the processor 203.
  • the mobile device 200 may further include a lighting device for illuminating the document.
  • the illumination device may be an LED illumination device.
  • FIG. 3 shows a diagram of a method 100 for detecting a viewing-angle-dependent feature of a document according to an embodiment.
  • the method 100 includes a step sequence 301 and a step sequence 303.
  • the step sequence 301 is performed for each captured image.
  • the step sequence 303 is performed once per document.
  • the step sequence 301 includes a step 305 of image selection, a step 307 of registering an image, and a step 309 of spatially filtering the image.
  • a plurality of captured images and a plurality of determined spatial positions are processed by the step sequence 301 to provide an image stack.
  • the step sequence 303 comprises a step 311 of a difference image generation and a step 313 of a segmentation and filtering.
  • the image stack is processed by the step sequence 303 to provide the location of the features.
  • the diagram therefore shows step sequences 301, 303 which can be carried out for a detection of a viewing angle-dependent feature, for example a hologram, per image and per document, as well as an evaluation of the image stack.
  • the detection can be performed using a mobile device, such as a standard smartphone.
  • an image stack of images of the document can be constructed and evaluated to automatically determine the location and size of viewing-angle-dependent features of the document.
  • Automatic detection of both the existence and location of viewing-angle dependent features on a document can be accomplished using a mobile augmented reality (AR) arrangement.
  • Documents are usually made of paper or cardboard and have a rectangular shape. For reasons of robustness and efficiency, the focus is on flat areas of documents. Detecting such documents with a mobile device can be a challenging task due to varying personal data on the document, due to changes in the viewing angle, due to lighting, due to unexpected user behavior, and / or due to limitations of the image camera. Consequently, multiple captured images should be evaluated for robustness, which can be achieved using a mobile augmented reality (AR) device.
  • a suitable document template can be generated which can be used for frame-to-frame tracking or for a dedicated registration step. This can be based on an algorithm for the detection of perspectively distorted rectangles, can be executed in real time on a mobile device, and can thus serve as a basic building block.
  • the user may be asked to place an image camera of the mobile device in front of a document or object and to trigger the detection.
  • an edge image may be calculated using, for example, a Canny edge detector with automatic threshold selection.
  • Image areas with textual structures can be filtered to remove noise, followed by detection of lines, for example, using a Hough transform.
  • the detected lines can be grouped according to their coarse direction.
  • An initial hypothesis for a rectangular area may be formed by considering pairs of line bundles, which may comprise a total of four lines, for example.
  • a final ordered list of rectangular hypotheses can be generated by computing a support function on an extended edge image.
  • the top candidate of the list can be selected and a homography can be computed to produce an equalized representation.
  • the dimensions of the rectified image may be determined by averaging the pixel width and / or height of the chosen hypothesis.
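The determination of the rectified image dimensions by averaging opposite side lengths of the chosen rectangle hypothesis can be sketched as follows (the corner ordering top-left, top-right, bottom-right, bottom-left is an assumption):

```python
import numpy as np

def rectified_size(corners: np.ndarray) -> tuple:
    """Target size of the equalized (rectified) document image,
    obtained by averaging opposite side lengths of the detected
    quadrilateral. corners: 4x2 array ordered tl, tr, br, bl."""
    tl, tr, br, bl = corners.astype(np.float64)
    width = (np.linalg.norm(tr - tl) + np.linalg.norm(br - bl)) / 2.0
    height = (np.linalg.norm(bl - tl) + np.linalg.norm(br - tr)) / 2.0
    return int(round(width)), int(round(height))
```

The computed homography then maps the quadrilateral onto a rectangle of this size.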
  • the equalized image may be used to generate a planar tracking template, which may be displayed as an image pyramid at run time, and which may be tracked using natural image features.
  • a Harris corner detector and a normalized cross correlation (NCC) may be used to align image areas over subsequent images and to provide homography between the current image and the rectified image or tracking template.
  • a motion model can be used to estimate and predict the motion of the image camera, thus saving computational resources.
  • the algorithm can be executed in real time on mobile devices, such as standard smartphones, and can provide a full six-degree-of-freedom (6DOF) attitude or pose for each captured image.
  • the arrangement has the advantage of allowing interaction with previously unknown documents with any personal data.
  • the algorithm comprises three main parts to generate an image stack: the image selection step 305, the equalization and / or the registration step 307, and the spatial filtering step 309.
  • the image selection in step 305 may be performed as follows.
  • the image stack should comprise a plurality of images with spatial locations that best utilize the variability of the viewing-angle dependent feature. For inexperienced users, this task can be challenging. Therefore, the task of image selection, in favor of repeatability and reduced cognitive burden, should not be performed by the user.
  • the determined poses can be used to automatically select images based on a 2D orientation map.
  • the visibility and similarity to the template can be taken into account to select suitable images.
  • the equalization or registration in step 307 may be performed as follows. For each image passing the selection step, an estimated homography from the tracked spatial position may be used to produce an equalized image. A complete set of images can thus form a stack of images of equal size. Basically, the document tracking algorithm can be robust and can successfully track the document over a wide range of viewing angles. However, portions of the document may move out of the current camera image and the images may have perspective distortions. The rectified images may therefore be incomplete and / or not ideally aligned.
  • alignment adjustment may be performed using image feature extraction, windowed matching and / or homography estimation. However, this can reduce the frame rate, which may not be desirable. Since images are continuously captured and provided by the image camera, inappropriate equalized or registered images can be discarded using NCC rating, which can be computationally more efficient. Due to real-time tracking, this can be an efficient way to automatically select images.
  • Spatial filtering in step 309 may be performed as follows. Any new layer that is dropped onto the stack of equalized images can be spatially filtered to better deal with noise and remaining registration inaccuracies.
  • a windowed mean value filter can be used, which can be based on an integral image calculation. Incomplete image information, such as undefined and / or black areas on equalization, may be taken into account by detecting valid image areas used in filtering using a second mask.
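The windowed mean value filter with a validity mask can be sketched as follows (illustrative Python/NumPy; invalid pixels are excluded from both the window sum and the window count via two integral images, so only valid image information contributes to the filtered result):

```python
import numpy as np

def masked_mean_filter(img, valid, win=3):
    """Windowed mean that ignores invalid pixels (e.g. undefined black
    areas introduced by the perspective equalization).
    Returns the filtered image and a mask of positions where at least
    one valid pixel fell inside the window."""
    def window_sums(a, win):
        pad = win // 2
        p = np.pad(a.astype(np.float64), pad)  # zero padding counts as invalid
        ii = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
        ii[1:, 1:] = p.cumsum(0).cumsum(1)
        h, w = a.shape
        return (ii[win:win + h, win:win + w] - ii[:h, win:win + w]
                - ii[win:win + h, :w] + ii[:h, :w])

    v = valid.astype(np.float64)
    sums = window_sums(img * v, win)    # sum of valid pixel values per window
    counts = window_sums(v, win)        # number of valid pixels per window
    out = np.divide(sums, counts, out=np.zeros_like(sums), where=counts > 0)
    return out, counts > 0
```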
  • Spatial filtering in step 309 may be performed using a predetermined window size, for example 3x3 pixels.
  • the algorithm for processing the image stack comprises two main parts: the step 311 of generating a difference image by a statistically based evaluation and the step 313 of segmentation and filtering for producing a final detection result.
  • an optional verification step may be performed which may use NCC calculations on the estimated location of the viewing-angle dependent feature between equalized or registered images of the image stack to discard false-positive detections.
  • the generation of the difference image in step 311 can be performed as follows.
  • the image stack can be understood as a temporal sequence of pixel values for each position (x, y).
  • the degree of change may be evaluated by calculating an appropriate measure of deviation from a model m at the position (x, y) over the entire image stack, with respect to the document image masks that may be determined in the previous step.
  • an intermediate representation may be provided for signs of viewing angle dependency, which may also be referred to as a difference image.
  • the viewing angle dependent feature is a hologram and the difference image is a hologram map.
  • the deviation at a position (x, y) can be calculated, for example, as a mean quadratic deviation from the model m:

    d(x, y) = (1 / L(x, y)) * Σ_l ( v_l(x, y) - m(x, y) )²    (2)

  • L(x, y) may denote the number of image stack layers comprising valid pixel values at the position (x, y) according to the document image masks, and v_l(x, y) may denote the pixel value at (x, y) in layer l.
  • the model generation and deviation calculation can be done directly, and require only a small amount of computational resources.
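A minimal NumPy sketch of this stack evaluation (the function name and the choice of an arithmetic-mean model are assumptions; the embodiments also allow a median model and absolute instead of quadratic deviations):

```python
import numpy as np

def difference_image(stack, masks):
    """Difference image over an image stack: for each position (x, y),
    the mean quadratic deviation of the valid pixel values v_l(x, y)
    from the model m(x, y), here the arithmetic mean over valid layers.
    stack and masks: arrays of shape (layers, h, w)."""
    v = stack.astype(np.float64)
    w = masks.astype(np.float64)
    L = w.sum(axis=0)                      # valid layers per position
    safe_L = np.where(L > 0, L, 1.0)       # avoid division by zero
    m = (v * w).sum(axis=0) / safe_L       # model: mean of valid values
    dev = (w * (v - m) ** 2).sum(axis=0) / safe_L
    return np.where(L > 0, dev, 0.0)
```

Positions belonging to a hologram vary strongly across the stack and therefore receive large deviation values, while static document content stays near zero.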
  • the segmentation and filtering in step 313 may be performed as follows. Dominant spatial peaks within the image difference or the difference image and adjacent image areas with large changes of comparable value or amount are to be localized. Consequently, this constitutes an image segmentation task, wherein the choice of the image segmentation method can affect both the quality and the runtime.
  • a global threshold may not be sufficient in some cases. Then locally calculated thresholds can be used, which can be additionally adjusted using global information. To save runtime, integral images can be used for filtering.
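The locally calculated thresholds adjusted with global information can be sketched as follows (the window size, bias factor, and the use of the global mean as a lower bound are assumptions; a direct per-window loop is used here for clarity instead of the integral-image variant mentioned above):

```python
import numpy as np

def adaptive_binarize(diff, win=5, bias=1.2):
    """Locally adaptive thresholding of the difference image: each
    pixel is compared with the mean of its surrounding window scaled
    by a bias factor and clamped from below by the global mean."""
    h, w = diff.shape
    pad = win // 2
    p = np.pad(diff.astype(np.float64), pad, mode="edge")
    out = np.zeros((h, w), dtype=bool)
    global_floor = diff.mean()  # global information adjusting local thresholds
    for y in range(h):
        for x in range(w):
            local_mean = p[y:y + win, x:x + win].mean()
            out[y, x] = diff[y, x] > max(local_mean * bias, global_floor)
    return out
```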
  • the computed regions can then be filtered to reduce the number of false-positive detections. Criteria regarding a minimum area, an aspect ratio, and a compactness, together with a minimum pixel value and / or a homogeneity of the obtained region, may be used.
  • the process of detecting a viewing-angle-dependent feature may include detecting the document, moving a mobile device with a camera or moving the document, and capturing images of the document together with the associated spatial positions. These data can then be processed and analyzed by the algorithm. In this case, an illumination device of the mobile device can be switched on or off. The illumination device can be advantageous in order to capture all relevant representations of the viewing-angle-dependent feature.
  • the generation and updating of the image stack can be performed per image.
  • the generation and evaluation of the difference image with an optional validation step can then be carried out.
  • the viewing angle dependent feature in an image may be highlighted using a surrounding frame or box at the appropriate location.
  • Real-time tracking of the document can be used to obtain registered images from a plurality of viewing angles.
  • only the difference image can be segmented in order to obtain possible image areas, which can then be validated.
  • a method can be realized which can be easily integrated into existing document verification applications.
  • FIG. 4 shows a diagram of a detection scenario for detecting a viewing-angle-dependent feature 402 of a document 401 according to an embodiment.
  • the diagram shows the detection of a plurality of images of the document 401 from different spatial positions.
  • a first document image 403 in a first spatial position, a second document image 405 in a second spatial position, a third document image 407 in a third spatial position, and an Nth document image 409 in an Nth spatial position are detected.
  • FIG. 5 shows a diagram of a plurality of captured images of the document according to an embodiment.
  • a first document image 403 from a first spatial position, a second document image 405 from a second spatial position, a third document image 407 from a third spatial position, and an Nth document image 409 from an Nth spatial position are shown one above the other in the form of an image stack.
  • the first document image 403, the second document image 405, the third document image 407, and the Nth document image 409 are shown together with respective document image masks.
  • the document can be tracked, whereby a plurality of images of the document can be captured from different spatial positions. Based on an estimated spatial position or homography, each document image can be equalized and placed on the image stack.
  • the captured document images may be equalized and may have a predetermined resolution.
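The equalization of each captured image into the common reference frame of the image stack can be sketched as follows. For brevity this uses nearest-neighbour inverse mapping in plain NumPy; a real implementation might instead use a library warp such as OpenCV's `warpPerspective`. The function name and the convention that `H` maps reference coordinates to image coordinates (the inverse of the tracker's estimated homography) are assumptions.

```python
import numpy as np

def equalize_to_stack(image, H, out_shape):
    """Rectify a captured document image into the stack's reference frame.

    image:     (h, w) grayscale capture.
    H:         3x3 homography mapping reference (stack) pixels to image pixels.
    out_shape: (h_out, w_out) predetermined stack resolution.
    Returns the equalized layer and its document image mask.
    """
    h_out, w_out = out_shape
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    mapped = H @ pts
    u = mapped[0] / mapped[2]              # projective division
    v = mapped[1] / mapped[2]
    ui = np.round(u).astype(int)
    vi = np.round(v).astype(int)
    # Pixels mapping outside the capture are masked out of the layer.
    valid = (0 <= ui) & (ui < image.shape[1]) & (0 <= vi) & (vi < image.shape[0])
    layer = np.zeros(out_shape, dtype=image.dtype)
    layer.ravel()[valid] = image[vi[valid], ui[valid]]
    mask = valid.reshape(out_shape)
    return layer, mask
```

Each returned layer and mask pair would then be appended to the image stack together with the associated spatial position.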
  • the surface diagram 600 shows pixel values of the difference image as a function of a position (x, y) for a document.
  • in the region of the viewing-angle-dependent feature, the surface plot 600 shows high pixel values.
  • an image segment may be determined in the difference image associated with that region.
  • FIG. 7 shows a diagram 701 of a difference image and a contour diagram 703 of a segmented difference image according to an embodiment.
  • the diagram 701 shows pixel values of the difference image as a function of a position (x, y) for a document.
  • the diagram 701 corresponds to a scaled intensity image.
  • the plurality of captured images include a first document image 403, a second document image 405, a third document image 407, and an N-th document image 409.
  • the first document image 403 is assigned the contour diagrams 801.
  • the second document image 405 is assigned the contour diagrams 803.
  • the third document image 407 is assigned the contour diagrams 805.
  • the Nth document image 409 is assigned the contour charts 807.
  • Various segmentation techniques can be used, such as a Maximally Stable Extremal Regions (MSER) method or a mean-shift method.
  • a highlight detector can be used, and further, inpainting can be performed.
  • the plurality of captured images or the image stacks can be further analyzed.
  • the contour diagrams 801, 803, 805, 807 show a segmentation of an image stack, for example per layer.
  • the document images 403, 405, 407, 409 are shown in the upper row.
  • the middle row shows image segments which are determined, for example, by means of an MSER method.
  • the lower row shows image segments which are determined, for example, by the MSER method, using modified images of the document, for example using highlight detection and / or inpainting.
  • FIG. 9 shows a diagram 900 of a plurality of spatial positions for capturing a plurality of images of the document according to one embodiment.
  • the diagram 900 includes a 2D orientation map for capturing the images of the document and/or monitoring the capture of the images of the document from various angles.
  • Predetermined spatial positions for capturing images of the document are highlighted by dots.
  • the predetermined spatial positions may correspond to an azimuth and elevation of the document relative to an image camera.
  • the predetermined spatial positions can be defined in quantized and / or discretized form.
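The quantized orientation map described above can be sketched as a small bookkeeping structure: azimuth and elevation of the current pose are discretized into bins, and each captured image marks its bin as covered. The bin counts and elevation range are illustrative assumptions, not values from the patent.

```python
import numpy as np

class OrientationMap:
    """2D orientation map tracking from which quantized viewing directions
    (azimuth/elevation of the document relative to the camera) images have
    already been captured."""

    def __init__(self, az_bins=8, el_bins=4, el_max=60.0):
        self.az_bins, self.el_bins, self.el_max = az_bins, el_bins, el_max
        self.visited = np.zeros((el_bins, az_bins), dtype=bool)

    def update(self, azimuth_deg, elevation_deg):
        """Mark the bin of the current estimated pose as covered."""
        az = int((azimuth_deg % 360.0) / 360.0 * self.az_bins)
        el = int(min(max(elevation_deg, 0.0), self.el_max - 1e-9)
                 / self.el_max * self.el_bins)
        self.visited[el, az] = True

    def coverage(self):
        """Fraction of predetermined spatial positions already captured."""
        return self.visited.mean()
```

Acquisition can continue, for example, until `coverage()` exceeds a chosen fraction, ensuring the document is viewed from sufficiently many angles.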
  • Viewing-angle-dependent features, such as holograms, can change their representations depending on the viewing direction and on the direction of illumination of light sources present in the environment.
  • Viewpoint-dependent features can be delimited from the environment in the document and / or have a limited extent in the document.
  • a local change in the appearance with respect to the viewing angle can be used.
  • the document should be captured from different angles. For this purpose, a mobile Augmented Reality (AR) device can, for example, be used for image acquisition.
  • the area of the document should first be detected. Thereafter, a document image or an equalized document image may be passed to a tracking algorithm.
  • information about the spatial position can be available for every single document image. Disregarding rotation around the line of sight, the acquisition of the images may be controlled with an orientation map indicating the angles to the x-axis and the y-axis. The map can be filled according to the current spatial position or pose and ensures that the document is viewed from different angles.
  • the extraction of the document can then be carried out by an equalization by means of the determined spatial position of the tracker.
  • an image stack with equalized and / or registered images can be formed.
  • an additional check can be carried out by means of a normalized cross-correlation.
  • a model (m0, m1) can be formed from the image stack.
  • the deviations can be fused by processing each layer of the image stack with a deviation measure (e0, e1) to form a difference image, for example in the form of a hologram map.
  • This difference image characterizes the document with regard to the position and extent of viewing angle-dependent features.
  • it can be segmented to obtain a set of image segments.
  • the filtered and validated image segments can represent the result of the detection.
  • the verification and / or validation of the image segments may reduce the number of false-positive detected viewing-angle dependent features.
  • a respective image segment can be extracted from each layer of the image stack.
  • Each image segment or patch can then be compared to the remaining image segments or patches by a normalized cross-correlation (NCC) and classified as a match or a deviation using a threshold th_ncc. If the relative proportion of deviations is above a threshold th_validation, it can be assumed that the current image segment exhibits sufficient visual change as the viewing angle changes.
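The NCC-based validation of a candidate image segment can be sketched as follows. The segment's patch is extracted from each stack layer, all patch pairs are compared by normalized cross-correlation, and the segment is accepted when the relative proportion of deviating pairs exceeds a threshold. The function names and the default values of th_ncc and th_validation are illustrative assumptions.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    # A flat (zero-variance) patch is treated as a match here, by choice.
    return float((a * b).sum() / denom) if denom > 0 else 1.0

def validate_segment(patches, th_ncc=0.8, th_validation=0.5):
    """Accept a segment if enough patch pairs across the stack layers
    deviate from each other, i.e. the appearance actually changes."""
    pairs = deviations = 0
    for i in range(len(patches)):
        for j in range(i + 1, len(patches)):
            pairs += 1
            if ncc(patches[i], patches[j]) < th_ncc:
                deviations += 1
    return pairs > 0 and deviations / pairs > th_validation
```

Static print regions thus get rejected as false positives, while a hologram region, whose patches differ across viewing angles, passes.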
  • the illustrated approach can be extended as follows. A more detailed analysis of the registered image stack can be performed. First, highlights caused, for example, by an illumination device or an LED light can be detected and removed.
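One simple way to sketch this highlight removal: treat near-saturated pixels as highlights and inpaint each of them with the per-pixel median of the remaining layers. The saturation threshold and this particular inpainting strategy are assumptions; the patent only states that highlights are detected and removed.

```python
import numpy as np

def remove_highlights(stack, saturation=0.95):
    """Detect specular highlights (e.g. from the device's LED light) as
    near-saturated pixels and inpaint them from the other stack layers.

    stack: float array of shape (L, H, W) with values in [0, 1].
    Returns the cleaned stack and the boolean highlight mask.
    """
    stack = stack.astype(np.float64).copy()
    highlight = stack >= saturation
    # Per-pixel median over the non-highlighted layers only.
    masked = np.where(highlight, np.nan, stack)
    median = np.nanmedian(masked, axis=0)
    fill = np.broadcast_to(median, stack.shape)
    stack[highlight] = fill[highlight]
    return stack, highlight
```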
  • each layer of the image stack can be segmented individually using, for example, the Maximally Stable Extremal Regions (MSER) method. From the obtained image segments, sequences of image segments that are approximately locally constant can be extracted. Each sequence can then be regarded as a single difference image, for example a hologram map, and be segmented, filtered, and validated accordingly.
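The extraction of approximately locally constant sequences can be sketched as follows: per-layer segments (e.g. MSER regions, here represented by their bounding boxes) are greedily chained across layers when their boxes overlap strongly. The IoU criterion, the thresholds, and the greedy matching are illustrative assumptions standing in for whatever matching the patent's implementation uses.

```python
def iou(b1, b2):
    """Intersection-over-union of two (x0, y0, x1, y1) boxes."""
    x0, y0 = max(b1[0], b2[0]), max(b1[1], b2[1])
    x1, y1 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0, x1 - x0) * max(0, y1 - y0)
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (a1 + a2 - inter) if inter else 0.0

def constant_sequences(layer_boxes, min_iou=0.6, min_len=3):
    """Chain per-layer segments (e.g. MSER boxes) whose positions stay
    approximately locally constant across the stack layers.

    layer_boxes: one list of (x0, y0, x1, y1) boxes per stack layer.
    """
    sequences = []
    for seed in layer_boxes[0]:
        seq = [seed]
        for boxes in layer_boxes[1:]:
            # Best-overlapping box in the next layer, if any.
            match = max(boxes, key=lambda b: iou(seq[-1], b), default=None)
            if match is None or iou(seq[-1], match) < min_iou:
                break
            seq.append(match)
        if len(seq) >= min_len:
            sequences.append(seq)
    return sequences
```

Each returned sequence could then be fused and validated like a single difference image, as described above.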
  • segmentation of the difference image using local adaptive thresholding with automatic selection of a suitable window size can be used to improve the scaling invariance.
  • the determined image segment may be used in the filtering instead of a respective bounding rectangle.
  • a characterization of the peaks detected in the previous step can be realized by a comparison with the immediate environment in the difference image. Depending on the application, this can eliminate the verification or validation step using normalized cross-correlation (NCC).
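This comparison with the immediate environment can be sketched as a simple contrast measure: the mean inside a peak's bounding box minus the mean of the surrounding ring in the difference image. The margin width and the use of plain means are assumptions; the patent does not specify the exact comparison.

```python
import numpy as np

def peak_contrast(diff, box, margin=4):
    """Characterize a detected peak by comparing the mean inside its
    bounding box (x0, y0, x1, y1) with the mean of the ring of pixels
    immediately surrounding it in the difference image."""
    x0, y0, x1, y1 = box
    inner = diff[y0:y1, x0:x1]
    # Expand the box by `margin`, clipped to the image bounds.
    Y0, X0 = max(y0 - margin, 0), max(x0 - margin, 0)
    Y1, X1 = min(y1 + margin, diff.shape[0]), min(x1 + margin, diff.shape[1])
    outer = diff[Y0:Y1, X0:X1].copy()
    # Blank the inner region so only the surrounding ring contributes.
    outer[y0 - Y0:y1 - Y0, x0 - X0:x1 - X0] = np.nan
    return float(inner.mean() - np.nanmean(outer))
```

A genuine peak stands out strongly against its surroundings and yields a high contrast value; flat regions yield values near zero.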
  • a detection of viewing-angle-dependent features, for example holograms, on unknown documents without existing reference information can be carried out by means of a mobile device. A viewing-angle-dependent feature can thus be detected even without knowledge of the document type or the document layout.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method (100) for detecting a viewing-angle-dependent feature of a document using a camera, wherein the viewing-angle-dependent feature comprises viewing-angle-dependent representations, the method comprising capturing (101) a first image of the document with the camera in a first spatial position of the document relative to the camera in order to obtain a first document image, capturing (103) a second image of the document with the camera in a second spatial position of the document relative to the camera in order to obtain a second document image, and detecting (105) an image difference between the first document image and the second document image in order to detect the viewing-angle-dependent feature of the document.
PCT/EP2015/062943 2014-06-17 2015-06-10 Procédé de détection d'une caractéristique d'un document, dépendant de l'angle de vue WO2015193152A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP15728835.8A EP3158543B1 (fr) 2014-06-17 2015-06-10 Procédé pour la détection d'une caractéristique dépendant de l'angle d'observation

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102014108492.6 2014-06-17
DE102014108492.6A DE102014108492A1 (de) 2014-06-17 2014-06-17 Verfahren zum Detektieren eines blickwinkelabhängigen Merkmals eines Dokumentes

Publications (1)

Publication Number Publication Date
WO2015193152A1 true WO2015193152A1 (fr) 2015-12-23

Family

ID=53396483

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2015/062943 WO2015193152A1 (fr) 2014-06-17 2015-06-10 Procédé de détection d'une caractéristique d'un document, dépendant de l'angle de vue

Country Status (3)

Country Link
EP (1) EP3158543B1 (fr)
DE (1) DE102014108492A1 (fr)
WO (1) WO2015193152A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020173693A1 (fr) 2019-02-28 2020-09-03 Sicpa Holding Sa Procédé d'authentification d'une marque induite magnétiquement avec un dispositif portable
WO2022049025A1 (fr) 2020-09-02 2022-03-10 Sicpa Holding Sa Marquage de sécurité, procédé et dispositif de lecture du marquage de sécurité, document de sécurité marqué avec le marquage de sécurité, et procédé et système de vérification dudit document de sécurité

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB201602198D0 (en) * 2016-02-08 2016-03-23 Idscan Biometrics Ltd Method computer program and system for hologram extraction
RU2644513C1 (ru) 2017-02-27 2018-02-12 Общество с ограниченной ответственностью "СМАРТ ЭНДЖИНС СЕРВИС" Способ детектирования голографических элементов в видеопотоке

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1992001975A1 (fr) * 1990-07-16 1992-02-06 D.A.H.T. Foundation Procede et dispositif d'identification d'un hologramme
US20050129282A1 (en) * 2003-12-11 2005-06-16 O'doherty Phelim A. Method and apparatus for verifying a hologram and a credit card
US20090154813A1 (en) * 2007-12-12 2009-06-18 Xerox Corporation Method and apparatus for validating holograms
WO2010116279A1 (fr) * 2009-04-07 2010-10-14 Latent Image Technology Ltd. Dispositif et procédé de vérification automatique d'images à polarisation variable
US20120163666A1 (en) * 2006-01-23 2012-06-28 Rhoads Geoffrey B Object Processing Employing Movement

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6473165B1 (en) * 2000-01-21 2002-10-29 Flex Products, Inc. Automated verification systems and methods for use with optical interference devices
DE202005018964U1 (de) * 2005-12-02 2006-03-16 Basler Ag Vorrichtung zum Prüfen der Echtheit von Dokumenten
US8953037B2 (en) * 2011-10-14 2015-02-10 Microsoft Corporation Obtaining spatially varying bidirectional reflectance distribution function
DE102013101587A1 (de) * 2013-02-18 2014-08-21 Bundesdruckerei Gmbh Verfahren zum überprüfen der echtheit eines identifikationsdokumentes

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1992001975A1 (fr) * 1990-07-16 1992-02-06 D.A.H.T. Foundation Procede et dispositif d'identification d'un hologramme
US20050129282A1 (en) * 2003-12-11 2005-06-16 O'doherty Phelim A. Method and apparatus for verifying a hologram and a credit card
US20120163666A1 (en) * 2006-01-23 2012-06-28 Rhoads Geoffrey B Object Processing Employing Movement
US20090154813A1 (en) * 2007-12-12 2009-06-18 Xerox Corporation Method and apparatus for validating holograms
WO2010116279A1 (fr) * 2009-04-07 2010-10-14 Latent Image Technology Ltd. Dispositif et procédé de vérification automatique d'images à polarisation variable

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BROWN L G: "A SURVEY OF IMAGE REGISTRATION TECHNIQUES", ACM COMPUTING SURVEYS, ACM, NEW YORK, NY, US, US, vol. 24, no. 4, 1 December 1992 (1992-12-01), pages 325 - 376, XP000561460, ISSN: 0360-0300, DOI: 10.1145/146370.146374 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020173693A1 (fr) 2019-02-28 2020-09-03 Sicpa Holding Sa Procédé d'authentification d'une marque induite magnétiquement avec un dispositif portable
US11823003B2 (en) 2019-02-28 2023-11-21 Sicpa Holding Sa Method for authenticating a magnetically induced mark with a portable device
WO2022049025A1 (fr) 2020-09-02 2022-03-10 Sicpa Holding Sa Marquage de sécurité, procédé et dispositif de lecture du marquage de sécurité, document de sécurité marqué avec le marquage de sécurité, et procédé et système de vérification dudit document de sécurité

Also Published As

Publication number Publication date
DE102014108492A1 (de) 2015-12-17
EP3158543A1 (fr) 2017-04-26
EP3158543B1 (fr) 2021-10-13

Similar Documents

Publication Publication Date Title
DE102011106050B4 (de) Schattenentfernung in einem durch eine fahrzeugbasierte Kamera erfassten Bild zur Detektion eines freien Pfads
Hassen et al. Image sharpness assessment based on local phase coherence
DE102017220307B4 (de) Vorrichtung und Verfahren zum Erkennen von Verkehrszeichen
Bu et al. Crack detection using a texture analysis-based technique for visual bridge inspection
DE60038158T2 (de) Zielerfassungsvorrichtung und verfahren zur schätzung des azimuts von radarzielen mittels radon transformation
Papari et al. A biologically motivated multiresolution approach to contour detection
DE102016120775A1 (de) System und Verfahren zum Erkennen von Linien in einem Bild mit einem Sichtsystem
DE102015209822A1 (de) Erfassungseinrichtung, Erfassungsprogramm, Erfassungsverfahren, mit Erfassungseinrichtung ausgerüstetes Fahrzeug, Parameterberechnungseinrichtung, Parameter berechnende Parameter, Parameterberechnungsprogramm, und Verfahren zum Berechnen von Parametern
DE102014117102B4 (de) Spurwechselwarnsystem und Verfahren zum Steuern des Spurwechselwarnsystems
DE112020005932T5 (de) Systeme und verfahren zur segmentierung von transparenten objekten mittels polarisationsmerkmalen
EP3158543B1 (fr) Procédé pour la détection d'une caractéristique dépendant de l'angle d'observation
DE102011106072A1 (de) Schattenentfernung in einem durch eine fahrzeugbasierte kamera erfassten bild unter verwendung einer optimierten ausgerichteten linearen achse
DE102017220752A1 (de) Bildverarbeitungsvorrichtung, Bildbverarbeitungsverfahren und Bildverarbeitungsprogramm
DE102015207903A1 (de) Vorrichtung und Verfahren zum Erfassen eines Verkehrszeichens vom Balkentyp in einem Verkehrszeichen-Erkennungssystem
DE112020005864T5 (de) Verfahren und Einrichtung zur Verifizierung der Authentizität eines Produkts
EP3053098A1 (fr) Procédé de positionnement d'un appareil mobile par rapport à une caractéristique de sécurité d'un document
Haselhoff et al. On visual crosswalk detection for driver assistance systems
Kanter Color Crack: Identifying Cracks in Glass
DE102015122116A1 (de) System und Verfahren zur Ermittlung von Clutter in einem aufgenommenen Bild
DE102019105293A1 (de) Schätzung der Bewegung einer Bildposition
CN111275687B (zh) 一种基于连通区域标记的细粒度图像拼接检测方法
EP3259703B1 (fr) Appareil mobile pour détecter une zone de texte sur un document d'identification
Vaishnav et al. An integrated automatic number plate recognition for recognizing multi language fonts
Anagnostopoulos et al. Using sliding concentric windows for license plate segmentation and processing
DE102015200434A1 (de) Verfahren und Vorrichtung zur Verbesserung der Objekterkennung bei unterschiedlichenBeleuchtungssituationen

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15728835

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

REEP Request for entry into the european phase

Ref document number: 2015728835

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2015728835

Country of ref document: EP