WO2020143165A1 - A Recognition Method, System, and Terminal Device for Remake Images - Google Patents

A Recognition Method, System, and Terminal Device for Remake Images

Info

Publication number
WO2020143165A1
WO2020143165A1 (PCT/CN2019/091504, CN2019091504W)
Authority
WO
WIPO (PCT)
Prior art keywords
image, classified, channel, remake, value
Prior art date
Application number
PCT/CN2019/091504
Other languages
English (en)
French (fr)
Inventor
钱根双
Original Assignee
平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Publication of WO2020143165A1

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features

Definitions

  • the present application belongs to the field of computer technology, and particularly relates to a method, system and terminal device for recognizing remake images.
  • Image authentication technology, as an important part of the information security field, is used to verify the authenticity of images.
  • A remake image is a secondarily acquired image, that is, a new image obtained after an image has undergone two or more digital imaging processes. For example, after a picture is displayed on an LCD screen or laser-printed, a digital camera then photographs it; the result is an image of the picture.
  • In retail scenarios, retailers arrange inspection staff to visit stores for regular inspections, and the inspectors must take pictures on site and upload them to a verification system as proof.
  • However, inspectors often commit fraud: the image uploaded to the verification system is a remake image rather than a picture actually taken on the spot. Because the verification system cannot accurately distinguish remake images from actually captured images, it cannot accurately determine whether the inspector actually visited the store.
  • the embodiments of the present application provide a method, system and terminal device for identifying a remake image, so as to solve the problem that the current verification system cannot accurately distinguish the remake image from the actually taken image.
  • The first aspect of the present application provides a method for identifying a remake image, including:
  • constructing a remake image classifier based on multiple training samples, where each training sample includes a training image and a corresponding classification result, and the classification result is a real image or a remake image;
  • extracting feature values of the image to be classified, the feature values including the Y channel brightness conversion rate of the image to be classified and the surface gradient feature value of the image to be classified;
  • classifying the feature values of the image to be classified with the remake image classifier, and identifying whether the image to be classified is a remake image based on the classification result.
  • the second aspect of the present application provides a recognition system for remake images, including:
  • a classifier construction module configured to construct a remake image classifier based on multiple training samples, the training sample includes a training image and a corresponding classification result, and the classification result is a real image or a remake image;
  • the feature extraction module is used to extract the feature value of the image to be classified, the feature value includes the Y channel luminance conversion rate of the image to be classified and the surface gradient feature value of the image to be classified;
  • the identification module is used for classifying and discriminating the feature value of the image to be classified by the remake image classifier, and determining whether the image to be classified is a remake image based on the classification result.
  • A third aspect of the present application provides a terminal device, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, where the processor implements the following steps when executing the computer-readable instructions:
  • constructing a remake image classifier based on multiple training samples, where each training sample includes a training image and a corresponding classification result, and the classification result is a real image or a remake image; extracting feature values of the image to be classified, the feature values including the Y channel brightness conversion rate of the image to be classified and the surface gradient feature value of the image to be classified;
  • the feature value of the image to be classified is classified and judged by the remake image classifier, and whether the image to be classified is a remake image is identified based on the classification result.
  • A fourth aspect of the present application provides a computer-readable storage medium storing computer-readable instructions which, when executed by a processor, implement the following steps:
  • constructing a remake image classifier based on multiple training samples, where each training sample includes a training image and a corresponding classification result, and the classification result is a real image or a remake image; extracting feature values of the image to be classified, the feature values including the Y channel brightness conversion rate of the image to be classified and the surface gradient feature value of the image to be classified; and classifying the feature values of the image to be classified with the remake image classifier, and identifying whether the image to be classified is a remake image based on the classification result.
  • The method, system, and terminal device for remake image recognition provided by the present application use a trained remake image classifier to identify whether an image to be classified is a remake image according to its Y channel brightness conversion rate and surface gradient feature value. This efficiently and intelligently determines whether an image is a real image or a remake image, effectively prevents fraudulent behavior, and solves the current problem of being unable to accurately distinguish remake images from actually captured images.
  • FIG. 1 is a schematic flowchart of an implementation of a method for identifying a remake image provided in Embodiment 1 of the present application;
  • FIG. 2 is a schematic flowchart of an implementation process corresponding to step S101 in Embodiment 1 provided in Embodiment 2 of the present application;
  • FIG. 3 is a schematic flowchart of an implementation process corresponding to step S102 of Embodiment 1 provided in Embodiment 3 of the present application;
  • FIG. 4 is a schematic flowchart of an implementation process corresponding to step S102 in Embodiment 1 provided in Embodiment 4 of the present application;
  • FIG. 5 is a schematic structural diagram of a remake image recognition system provided in Embodiment 5 of the present application.
  • FIG. 6 is a schematic structural diagram of a classifier construction module 101 corresponding to Embodiment 5 provided in Embodiment 6 of the present application;
  • FIG. 7 is a schematic structural diagram of the feature extraction module 102 in the fifth embodiment corresponding to the seventh embodiment of the present application;
  • FIG. 8 is a schematic structural diagram of the feature extraction module 102 in Embodiment 5 corresponding to Embodiment 8 of the present application;
  • FIG. 9 is a schematic diagram of a terminal device provided in Embodiment 9 of the present application.
  • this embodiment provides a method for identifying a remake image, which specifically includes:
  • Step S101 Construct a remake image classifier based on multiple training samples, where the training sample includes a training image and a corresponding classification result, and the classification result is a real image or a remake image.
  • Since a remake image is a secondary imaging of an image, the Y channel brightness conversion rate of a remake image differs from that of a real image (an image taken on site), and the surface gradient feature value of a remake image differs from that of a real image. The Y channel brightness conversion rate and the surface gradient feature value of an image are therefore used as judgment factors for determining whether the image is a remake image: by integrating these two values, it can be recognized whether the image is a remake image.
  • By training on the multiple training samples, the trained remake image classifier is obtained.
  • Step S102 Extract feature values of the image to be classified, the feature values include a Y channel luminance conversion rate of the image to be classified and a surface gradient feature value of the image to be classified.
  • The feature values of the image to be classified are first extracted, namely the Y channel brightness conversion rate and the surface gradient feature value of the image to be classified.
  • The surface gradient of the G channel is represented quantitatively by a histogram, and the characteristic value of this histogram is taken as the surface gradient feature value of the image to be classified.
  • A deep neural network can also be constructed to extract the feature values: the image to be classified uploaded to the system is input into the deep neural network model, which automatically outputs the Y channel brightness conversion rate and surface gradient feature value of the image to be classified.
  • The above-mentioned deep neural network may be a VGG19 neural network model; since VGG19 is an existing technology, its specific structure and training method are not described here.
  • Step S103 classify and judge the feature value of the image to be classified by the remake image classifier, and identify whether the image to be classified is a remake image based on the classification result.
  • The extracted Y channel brightness conversion rate and surface gradient feature value of the image to be classified are input into the remake image classifier, which classifies the image based on these feature values to obtain a classification result; whether the image to be classified is a remake image can then be identified according to the classification result.
  • The remake image classifier combines the input Y channel brightness conversion rate and surface gradient feature value. If these values meet the parameter conditions of a real image, the classifier identifies the classification result of the image to be classified as a real image; if they meet the parameter conditions of a remake image, the classifier identifies the classification result as a remake image.
  • The remake image classifier may judge first based on the Y channel brightness conversion rate, first based on the surface gradient feature value, or based on both at once. In the first case, the classifier first determines whether the Y channel brightness conversion rate of the image to be classified meets the Y channel brightness conversion rate requirement of a remake image; if so, it identifies the classification result as a remake image. Otherwise, it determines whether the surface gradient feature value of the image to be classified meets the surface gradient feature value requirement of a remake image; if so, it identifies the classification result as a remake image, and otherwise identifies it as a real image.
  • In the second case, the remake image classifier first judges whether the surface gradient feature value of the image to be classified meets the surface gradient feature value requirement of a remake image; if so, it identifies the classification result as a remake image. Otherwise, it judges whether the Y channel brightness conversion rate of the image to be classified meets the Y channel brightness conversion rate requirement of a remake image; if so, it identifies the classification result as a remake image, and otherwise identifies it as a real image.
  • In the third case, the remake image classifier determines simultaneously whether the Y channel brightness conversion rate of the image to be classified meets the Y channel brightness conversion rate requirement of a remake image and whether its surface gradient feature value meets the surface gradient feature value requirement of a remake image. If both requirements are met, the classifier identifies the classification result as a remake image; otherwise it identifies the classification result as a real image. It should be noted that the remake image classifier outputs the classification result automatically, and by judging on multiple feature parameters (the Y channel brightness conversion rate and the surface gradient feature value) it can accurately and quickly determine whether the image to be classified is a remake image.
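As a minimal sketch, the first judgment order described above (Y channel brightness conversion rate checked first, then the surface gradient feature value) might look as follows. The threshold values and the "greater or equal" direction of each check are hypothetical placeholders; the application does not disclose concrete parameter conditions.

```python
# Hypothetical cutoffs; the application does not disclose concrete values.
Y_REMAKE_THRESHOLD = 0.6         # assumed Y-channel conversion-rate condition
GRADIENT_REMAKE_THRESHOLD = 0.4  # assumed surface-gradient condition

def classify(y_conversion_rate: float, surface_gradient: float) -> str:
    """Cascaded judgment: return 'remake' as soon as either feature meets
    the remake-image condition, otherwise 'real'."""
    if y_conversion_rate >= Y_REMAKE_THRESHOLD:
        return "remake"
    if surface_gradient >= GRADIENT_REMAKE_THRESHOLD:
        return "remake"
    return "real"
```

The second judgment order simply swaps the two checks, and the third requires both conditions to hold before labeling the image a remake.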
  • The method for recognizing a remake image uses a trained remake image classifier to recognize, from the Y channel brightness conversion rate and surface gradient feature value of the image to be classified, whether the image is a remake image. It can efficiently and intelligently identify whether the image to be classified is a real image or a remake image, effectively prevents fraudulent behavior, and solves the problem that remake images cannot be accurately distinguished from actually captured images.
  • step S101 in Embodiment 1 specifically includes:
  • Step S201 Acquire training images, and divide the training images into real image groups and remake image groups.
  • A large number of training images are acquired through the verification system and, based on whether each image is a real image or a remake image, divided into a real image group and a remake image group.
  • Step S202 Extract the feature value of the real image group and the feature value of the remake image group, respectively.
  • The feature values of the multiple images in the real image group are obtained, and each image is stored in association with its extracted feature values.
  • Likewise, the feature values of the multiple images in the remake image group are obtained, and each image is stored in association with its extracted feature values.
  • the above feature values include the Y channel luminance conversion rate of the image and the surface gradient feature value.
  • Step S203 Train the remake image classifier using the feature values of the real image group as input parameters, so that the classification result output by the remake image classifier is that the image is a real image.
  • the feature value of each picture in the real image group is input into the remake image classifier, so that the classification result output by the remake image classifier is that the image is a real image, and recognition training for the real image is completed.
  • Step S204 Train the remake image classifier using the feature values of the remake image group as input parameters, so that the result output by the remake image classifier is that the image is a remake image.
  • the feature value of each picture in the remake image group is input into the remake image classifier, so that the classification result output by the remake image classifier is that the image is a remake image, and recognition training for the remake image is completed.
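The training in steps S203 and S204 can be sketched as follows. The application does not name a specific classifier type, so a simple logistic-regression model over the two feature values (Y channel brightness conversion rate, surface gradient feature value) stands in here; the synthetic feature groups and all numeric values are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two training groups:
# label 0 = real image group, label 1 = remake image group.
real_feats = rng.normal([0.2, 0.2], 0.05, size=(100, 2))
remake_feats = rng.normal([0.7, 0.6], 0.05, size=(100, 2))
X = np.vstack([real_feats, remake_feats])
y = np.array([0] * 100 + [1] * 100)

# Train a logistic-regression classifier by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted remake probability
    grad = p - y                             # logistic-loss gradient
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

def predict(features):
    """Return 'remake image' or 'real image' for a (y_rate, gradient) pair."""
    score = 1.0 / (1.0 + np.exp(-(np.asarray(features) @ w + b)))
    return "remake image" if score >= 0.5 else "real image"
```

After training, feature values near the real-image group are labeled real and those near the remake-image group are labeled remake, mirroring the two training passes of steps S203 and S204.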
  • step S102 in Embodiment 1 specifically includes:
  • Step S301 Initialize the image to be classified to obtain the channel luminance value of the Y channel of the image to be classified except for the specular reflection portion.
  • Color space conversion is performed on the image to be classified to obtain its Y channel brightness histogram, and the histogram is subjected to normalization, equalization, and polynomial conversion to obtain the channel brightness value of the Y channel of the image to be classified with the specular reflection part removed.
  • Step S302: extract the channel brightness value of the specular reflection part of the Y channel of the image to be classified, and calculate the Y channel brightness conversion rate from the Y channel brightness value with the specular reflection part removed and the channel brightness value of the specular reflection part.
  • the Y channel brightness conversion rate is calculated according to the original brightness value of the Y channel and the channel brightness value of the specular reflection part of the Y channel of the image to be classified.
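The application does not give an explicit formula for the Y channel brightness conversion rate, so the following sketch assumes it is the ratio of the specular-part brightness to the brightness with the specular part removed; the brightness threshold used to detect the specular part is likewise an assumption.

```python
import numpy as np

def y_channel_conversion_rate(y_channel: np.ndarray,
                              specular_threshold: float = 0.9) -> float:
    """Assumed conversion rate: specular brightness / non-specular brightness.

    y_channel: Y-channel luminance values normalized to [0, 1].
    specular_threshold: hypothetical cutoff above which a pixel is treated
    as belonging to the specular reflection part.
    """
    specular_mask = y_channel >= specular_threshold
    specular = y_channel[specular_mask].sum()
    diffuse = y_channel[~specular_mask].sum()
    return float(specular / diffuse) if diffuse > 0 else float("inf")
```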
  • Step S303 Calculate the surface gradient value of the G channel of the image to be classified, and draw a histogram according to the surface gradient value to obtain the feature value of the histogram.
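Step S303 can be sketched as follows; the choice of gradient operator, the bin count, and the use of the density histogram itself as the feature vector are assumptions, since the application does not specify them.

```python
import numpy as np

def surface_gradient_features(g_channel: np.ndarray, bins: int = 32):
    """Compute the surface gradient of the G channel and summarize it as a
    histogram feature vector (sketch of step S303)."""
    gy, gx = np.gradient(g_channel.astype(float))  # per-axis surface gradient
    magnitude = np.hypot(gx, gy)                   # gradient magnitude
    hist, _ = np.histogram(magnitude, bins=bins, density=True)
    return hist                                    # histogram as feature value
```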
  • step S301 specifically includes:
  • Step S3011 perform color space conversion on the image to be classified, and extract a luminance histogram of the Y channel of the image to be classified;
  • Step S3012 normalize the luminance histogram of the Y channel of the image to be classified to obtain the original histogram of the Y channel of the image to be classified;
  • Step S3013 Perform equalization processing on the original histogram to obtain a luminance-balanced histogram of the Y channel of the image to be classified;
  • Step S3014 Map the brightness-equalized histogram of the Y channel through a polynomial conversion function, where P is the coefficient matrix of the polynomial conversion function, to obtain the channel brightness value of the Y channel of the image to be classified with the specular reflection part removed.
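Steps S3011 to S3014 can be sketched as follows. The BT.601 luma weights are a standard choice for the RGB-to-Y color space conversion; the polynomial coefficient matrix P is not disclosed in the application, so an identity-like linear placeholder is used for the final mapping.

```python
import numpy as np

def y_channel_without_specular(rgb: np.ndarray,
                               coeffs=(1.0, 0.0)) -> np.ndarray:
    """rgb: HxWx3 array in [0, 255]. Returns the equalized, polynomial-mapped
    Y channel (sketch of steps S3011-S3014; coeffs is a placeholder for P)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)   # S3011: to Y

    hist = np.bincount(y.ravel(), minlength=256)
    hist = hist / hist.sum()                                   # S3012: normalize
    cdf = np.cumsum(hist)
    y_eq = (cdf[y] * 255).astype(np.uint8)                     # S3013: equalize

    a, c = coeffs                                              # S3014: placeholder
    y_mapped = np.clip(a * y_eq + c, 0, 255)                   # polynomial map
    return y_mapped.astype(np.uint8)
```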
  • step S102 in the first embodiment specifically includes:
  • Step S401 Obtain a large number of training images and Y channel luminance conversion rates and surface gradient feature values of the training images.
  • a large number of training images are acquired through the verification system, and the Y-channel brightness conversion rate and surface gradient feature values of the training images are calculated according to the method provided in Embodiment 3.
  • Step S402 Use the training images as the input of the VGG19 neural network model, and use the Y channel brightness conversion rate and surface gradient feature value as its output, to train the VGG19 neural network until its convergence function converges.
  • Step S403 input the image to be classified into the VGG19 neural network to obtain the Y channel luminance conversion rate of the image to be classified and the surface gradient feature value of the image.
  • The VGG19 neural network is trained with a large number of training images, yielding a network that outputs the Y channel brightness conversion rate and surface gradient feature value of an input image to be classified; the trained VGG19 neural network can quickly extract the feature values of an image.
  • this embodiment provides a recognition system 100 for remake images, for performing the method steps in Embodiment 1, which includes a classifier construction module 101, a feature extraction module 102, and a recognition module 103.
  • the classifier construction module 101 is used to construct a remake image classifier based on multiple training samples.
  • the training sample includes a training image and a corresponding classification result, and the classification result is a real image or a remake image.
  • the feature extraction module 102 is used to extract the feature values of the image to be classified, the feature values including the Y channel luminance conversion rate of the image to be classified and the surface gradient feature value of the image to be classified.
  • the recognition module 103 is used for classifying and discriminating the feature value of the image to be classified by the remake image classifier, and determining whether the image to be classified is a remake image based on the classification result.
  • the image processing system provided by the embodiment of the present application is based on the same concept as the method embodiment shown in FIG. 1 of the present application, and its technical effects are the same as the method embodiment shown in FIG. 1 of the present application. Please refer to the description in the method embodiment shown in FIG. 1 of the present application, which will not be repeated here.
  • the recognition system for remake images provided in this embodiment can also recognize whether the image to be classified is a remake image according to the Y channel brightness conversion rate and surface gradient feature value of the image to be classified through the trained remake image classifier. It can efficiently and intelligently determine whether the image is a real image or a remake image, which effectively avoids fraudulent behavior, and solves the problem that it is impossible to accurately distinguish the remake image from the actual image taken at present.
  • the classifier construction module 101 in Embodiment 5 includes a structure for performing the method steps in the embodiment corresponding to FIG. 2, which includes an image acquisition unit 201 and feature value extraction Unit 202, first training unit 203, and second training unit 204.
  • the image acquisition unit 201 is used to acquire a training image, and divide the training image into a real image group and a remake image group.
  • the feature value extraction unit 202 is used to extract the feature value of the real image group and the feature value of the remake image group, respectively.
  • the first training unit 203 is configured to train the remake image classifier using the feature values of the real image group as input parameters, so that the classification result output by the remake image classifier is that the image is a real image.
  • the second training unit 204 is configured to train the remake image classifier using the feature values of the remake image group as input parameters, so that the result output by the remake image classifier is that the image is a remake image.
  • The feature extraction module 102 in Embodiment 5 includes a structure for executing the method steps in the embodiment corresponding to FIG. 3, which includes an initialization unit 301, a conversion rate calculation unit 302, and a feature value acquisition unit 303.
  • the initialization unit 301 is configured to perform initialization processing on the image to be classified to obtain a channel brightness value of the Y channel of the image to be classified excluding the specular reflection portion.
  • The conversion rate calculation unit 302 is used to extract the channel brightness value of the specular reflection part of the Y channel of the image to be classified, and to calculate the Y channel brightness conversion rate from the Y channel brightness value with the specular reflection part removed and the channel brightness value of the specular reflection part.
  • the feature value obtaining unit 303 is used to calculate the surface gradient value of the G channel of the image to be classified, and draw a histogram according to the surface gradient value to obtain the feature value of the histogram.
  • The feature extraction module 102 in Embodiment 5 includes a structure for executing the method steps in the embodiment corresponding to FIG. 4, which includes an acquisition unit 401, a training unit 402, and an extraction unit 403.
  • the obtaining unit 401 is used to obtain a large number of training images and the Y channel luminance conversion rate and surface gradient feature values of the training images.
  • The training unit 402 is used to take the training images as the input of the VGG19 neural network model and the Y channel brightness conversion rate and surface gradient feature value as its output, so as to train the VGG19 neural network until its convergence function converges.
  • the extraction unit 403 is used to input the image to be classified into the VGG19 neural network to obtain the Y channel luminance conversion rate of the image to be classified and the surface gradient feature value of the image to be classified.
  • The terminal device 9 of this embodiment includes: a processor 90, a memory 91, and computer-readable instructions 92 (such as programs) stored in the memory 91 and executable on the processor 90.
  • When the processor 90 executes the computer-readable instructions 92, the steps in each of the above image-processing method embodiments are implemented, for example, steps S101 to S103 shown in FIG. 1.
  • Alternatively, when the processor 90 executes the computer-readable instructions 92, the functions of the modules/units in the foregoing system embodiments are realized, for example, the functions of modules 101 to 103 shown in FIG. 5.
  • The computer-readable instructions 92 may be divided into one or more modules/units, which are stored in the memory 91 and executed by the processor 90 to complete this application.
  • The one or more modules/units may be a series of computer-readable instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer-readable instructions 92 in the terminal device 9.
  • the computer-readable instructions 92 may be divided into a classifier construction module, a feature extraction module, and an identification module. The specific functions of each module are as follows:
  • a classifier construction module configured to construct a remake image classifier based on multiple training samples, the training sample includes a training image and a corresponding classification result, and the classification result is a real image or a remake image;
  • the feature extraction module is used to extract the feature value of the image to be classified, the feature value includes the Y channel luminance conversion rate of the image to be classified and the surface gradient feature value of the image to be classified;
  • the identification module is used for classifying and discriminating the feature value of the image to be classified by the remake image classifier, and determining whether the image to be classified is a remake image based on the classification result.
  • the terminal device 9 may be a computing device such as a desktop computer, a notebook, a palmtop computer and a cloud management server.
  • the terminal device may include, but is not limited to, the processor 90 and the memory 91.
  • FIG. 9 is only an example of the terminal device 9 and does not constitute a limitation on it; the terminal device may include more or fewer components than illustrated, combine certain components, or use different components.
  • the terminal device may further include an input and output device, a network access device, a bus, and the like.
  • The processor 90 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the memory 91 may be an internal storage unit of the terminal device 9, such as a hard disk or a memory of the terminal device 9.
  • The memory 91 may also be an external storage device of the terminal device 9, for example, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the terminal device 9.
  • the memory 91 may also include both an internal storage unit of the terminal device 9 and an external storage device.
  • the memory 91 is used to store the computer-readable instructions and other programs and data required by the terminal device.
  • the memory 91 can also be used to temporarily store data that has been or will be output.
  • the system/terminal device embodiments described above are only schematic.
  • The division into modules or units is only a logical functional division; in actual implementation there may be another division manner, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, systems or units, and may be in electrical, mechanical or other forms.
  • A unit described as a separate component may or may not be physically separate, and a component displayed as a unit may or may not be a physical unit; that is, it may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • The present application can implement all or part of the processes in the methods of the above embodiments by instructing the relevant hardware through computer-readable instructions, which can be stored in a computer-readable storage medium; when the computer-readable instructions are executed by the processor, the steps of the foregoing method embodiments can be implemented.
  • the computer readable instructions include computer readable instruction codes, and the computer readable instruction codes may be in source code form, object code form, executable file or some intermediate form, etc.
  • The computer-readable medium may include: any entity or system capable of carrying the computer-readable instruction code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), electrical carrier signals, telecommunications signals, and software distribution media.


Abstract

This application is applicable to the field of image recognition technology, and provides a method, system and terminal device for recognizing remake images, including: constructing a remake-image classifier from multiple training samples, wherein each training sample includes a training image and the corresponding classification result, the classification result being real image or remake image; extracting feature values of an image to be classified, the feature values including the Y-channel luminance conversion rate of the image to be classified and the surface-gradient feature value of the image to be classified; and classifying the feature values of the image to be classified with the remake-image classifier, and recognizing, based on the classification result, whether the image is a remake image. By having the trained remake-image classifier recognize whether an image is a remake from the image's Y-channel luminance conversion rate and surface-gradient feature value, the method can efficiently and intelligently determine whether an image is a real image or a remake image, effectively prevents fraud, and solves the current problem that remake images cannot be accurately distinguished from images actually captured on site.

Description

Method, System and Terminal Device for Recognizing Remake Images
This application claims priority to the Chinese patent application No. 201910012454.3, filed on January 7, 2019 and entitled "Method, System and Terminal Device for Recognizing Remake Images", the entire content of which is incorporated herein by reference.
Technical Field
The present application belongs to the field of computer technology, and in particular relates to a method, system and terminal device for recognizing remake images.
Background
Image authentication technology, as an important part of the information security field, is used to verify the authenticity of images. A remake image is a secondarily acquired image, that is, a new image obtained after the original has gone through two or more digital imaging processes; for example, a picture is displayed on an LCD screen or laser-printed and then photographed with a digital camera, yielding an image of the picture. In retail scenarios, retailers arrange for inspectors to visit stores regularly, and the inspectors are required to take photos on site and upload them to a verification system as proof. However, inspectors sometimes cheat: the uploaded image is a remake rather than a photo actually taken on site. Because the verification system cannot accurately distinguish remake images from actually captured images, it cannot reliably determine whether the inspector really visited the store.
Therefore, a method that can accurately recognize whether an image is a remake image is urgently needed, so as to prevent fraud by inspectors.
Technical Problem
In view of this, embodiments of the present application provide a method, system and terminal device for recognizing remake images, to solve the problem that current verification systems cannot accurately distinguish remake images from actually captured images.
Technical Solution
A first aspect of the present application provides a method for recognizing remake images, including:
constructing a remake-image classifier from multiple training samples, wherein each training sample includes a training image and the corresponding classification result, the classification result being real image or remake image;
extracting feature values of an image to be classified, the feature values including the Y-channel luminance conversion rate of the image to be classified and the surface-gradient feature value of the image to be classified;
classifying the feature values of the image to be classified with the remake-image classifier, and recognizing, based on the classification result, whether the image to be classified is a remake image.
A second aspect of the present application provides a system for recognizing remake images, including:
a classifier construction module, configured to construct a remake-image classifier from multiple training samples, wherein each training sample includes a training image and the corresponding classification result, the classification result being real image or remake image;
a feature extraction module, configured to extract feature values of an image to be classified, the feature values including the Y-channel luminance conversion rate of the image to be classified and the surface-gradient feature value of the image to be classified;
a recognition module, configured to classify the feature values of the image to be classified with the remake-image classifier and determine, based on the classification result, whether the image to be classified is a remake image.
A third aspect of the present application provides a terminal device, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor implements the following steps when executing the computer-readable instructions:
constructing a remake-image classifier from multiple training samples, wherein each training sample includes a training image and the corresponding classification result, the classification result being real image or remake image; extracting feature values of an image to be classified, the feature values including the Y-channel luminance conversion rate of the image to be classified and the surface-gradient feature value of the image to be classified;
classifying the feature values of the image to be classified with the remake-image classifier, and recognizing, based on the classification result, whether the image to be classified is a remake image.
A fourth aspect of the present application provides a computer-readable storage medium storing computer-readable instructions, wherein the computer-readable instructions, when executed by a processor, implement the following steps:
constructing a remake-image classifier from multiple training samples, wherein each training sample includes a training image and the corresponding classification result, the classification result being real image or remake image;
extracting feature values of an image to be classified, the feature values including the Y-channel luminance conversion rate of the image to be classified and the surface-gradient feature value of the image to be classified;
classifying the feature values of the image to be classified with the remake-image classifier, and recognizing, based on the classification result, whether the image to be classified is a remake image.
Beneficial Effects
The method, system and terminal device for recognizing remake images provided by the present application use a trained remake-image classifier to recognize whether an image to be classified is a remake from the image's Y-channel luminance conversion rate and surface-gradient feature value; they can efficiently and intelligently determine whether the image is a real image or a remake image, effectively prevent fraud, and solve the current problem that remake images cannot be accurately distinguished from actually captured images.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of the implementation of a method for recognizing remake images provided by Embodiment 1 of the present application;
FIG. 2 is a schematic flowchart of the implementation of step S101 of Embodiment 1, provided by Embodiment 2 of the present application;
FIG. 3 is a schematic flowchart of the implementation of step S102 of Embodiment 1, provided by Embodiment 3 of the present application;
FIG. 4 is a schematic flowchart of the implementation of step S102 of Embodiment 1, provided by Embodiment 4 of the present application;
FIG. 5 is a schematic structural diagram of a system for recognizing remake images provided by Embodiment 5 of the present application;
FIG. 6 is a schematic structural diagram of the classifier construction module 101 of Embodiment 5, provided by Embodiment 6 of the present application;
FIG. 7 is a schematic structural diagram of the feature extraction module 102 of Embodiment 5, provided by Embodiment 7 of the present application;
FIG. 8 is a schematic structural diagram of the feature extraction module 102 of Embodiment 5, provided by Embodiment 8 of the present application;
FIG. 9 is a schematic diagram of the terminal device provided by Embodiment 9 of the present application.
Embodiments of the Invention
In the following description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present application. However, it should be clear to those skilled in the art that the present application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits and methods are omitted so that unnecessary detail does not obscure the description of the present application.
In order to illustrate the technical solutions described in the present application, specific embodiments are described below.
Embodiment 1:
As shown in FIG. 1, this embodiment provides a method for recognizing remake images, which specifically includes:
Step S101: construct a remake-image classifier from multiple training samples, wherein each training sample includes a training image and the corresponding classification result, the classification result being real image or remake image.
In a specific application, since a remake image is a second imaging of an image, the Y-channel luminance conversion rate of a remake image differs from that of a real image (an image taken on site), and the surface-gradient feature value of a remake image also differs from that of a real image. The Y-channel luminance conversion rate and the surface-gradient feature value of an image are therefore taken as determination factors for judging whether the image is a remake; by judging on both factors jointly, it can be recognized whether the image is a remake image.
In a specific application, a remake-image classifier that uses the image's Y-channel luminance conversion rate and surface-gradient feature value as determination factors is constructed, and the classifier is trained on a large number of training images to obtain a trained remake-image classifier.
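The patent does not disclose which classifier is used, only that it judges on the two determination factors. As a minimal illustrative sketch (not the patented implementation), a logistic-regression classifier could be fitted on the two feature values; all feature numbers and labels below are hypothetical toy data:

```python
import numpy as np

def train_remake_classifier(features, labels, lr=0.5, epochs=2000):
    """Fit logistic regression on 2-D feature vectors
    (Y-channel luminance conversion rate, surface-gradient feature).
    labels: 0 = real image, 1 = remake image."""
    X = np.hstack([features, np.ones((len(features), 1))])  # append bias term
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # sigmoid probabilities
        w -= lr * X.T @ (p - labels) / len(X)   # gradient of the log-loss
    return w

def is_remake(w, feature):
    """True if the trained classifier labels the feature vector a remake."""
    x = np.append(feature, 1.0)
    return 1.0 / (1.0 + np.exp(-x @ w)) > 0.5

# toy, hypothetical training data: remakes show larger feature values here
feats = np.array([[0.10, 0.20], [0.15, 0.10], [0.80, 0.90], [0.90, 0.70]])
labels = np.array([0, 0, 1, 1])
w = train_remake_classifier(feats, labels)
```

Any two-class model (SVM, decision tree, a small neural network) could play the same role; the essential point from the patent is that both feature values enter the decision together.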
Step S102: extract feature values of the image to be classified, the feature values including the Y-channel luminance conversion rate of the image to be classified and the surface-gradient feature value of the image to be classified.
In a specific application, for an image to be classified that is uploaded to the system, feature values are first extracted from it: its Y-channel luminance conversion rate and its surface-gradient feature value.
In a specific application, the uploaded image to be classified is initialized, the channel luminance value of its Y channel with the specular-reflection part removed and the channel luminance value of the specular-reflection part of its Y channel are computed, and the Y-channel luminance conversion rate of the image to be classified is computed from these two values.
In a specific application, the G channel of the image to be classified is extracted, the surface-gradient values of the G channel are computed, a histogram is drawn from the surface-gradient values to represent them quantitatively, and the feature values of that histogram are taken as the surface-gradient feature value of the image to be classified.
In a specific application, to recognize images more conveniently and quickly, a deep neural network can be built to perform the feature extraction. A deep neural network model capable of outputting image feature values is built and trained; the uploaded image to be classified is input into the model, which automatically outputs the Y-channel luminance conversion rate and the surface-gradient feature value of the image. It should be noted that the deep neural network may be a VGG19 neural network model. Since the VGG19 network is prior art, its specific structure and training method are not described here.
Step S103: classify the feature values of the image to be classified with the remake-image classifier, and recognize, based on the classification result, whether the image to be classified is a remake image.
In a specific application, the extracted Y-channel luminance conversion rate and surface-gradient feature value of the image to be classified are input into the remake-image classifier, which classifies the image according to these feature values and outputs a classification result, from which it can be recognized whether the image is a remake image.
In a specific application, the remake-image classifier combines the two input determination factors. If the Y-channel luminance conversion rate and the surface-gradient feature value of the image to be classified satisfy the parameter conditions of a real image, the classifier labels the classification result of the image as real image; if they satisfy the parameter conditions of a remake image, the classifier labels the classification result as remake image.
In a specific application, the remake-image classifier may judge first on the Y-channel luminance conversion rate, first on the surface-gradient feature value, or on both factors at the same time. In the first case, the classifier first judges whether the Y-channel luminance conversion rate of the image to be classified meets the remake-image requirement for that rate; if so, it labels the image as a remake image; otherwise it judges whether the surface-gradient feature value meets the remake-image requirement for that value, labelling the image as a remake image if so and as a real image if not. In the second case, the classifier performs the same two judgments in the opposite order: surface-gradient feature value first, then Y-channel luminance conversion rate. In the third case, the classifier judges both requirements together: if the Y-channel luminance conversion rate meets the remake-image requirement for that rate and the surface-gradient feature value meets the remake-image requirement for that value, it labels the image as a remake image; otherwise it labels it as a real image. It should be noted that the remake-image classifier outputs the classification result automatically and judges on multiple feature-value parameters (the Y-channel luminance conversion rate and the surface-gradient feature value), so it can determine accurately and quickly whether the image to be classified is a remake image.
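The three judgment orders described above can be condensed into one small decision routine. This is a sketch only: the patent does not disclose the actual remake-image requirements, so the thresholds `y_thresh` and `g_thresh` are hypothetical placeholders:

```python
def classify_remake(y_conv_rate, grad_feature, order="y_first",
                    y_thresh=0.5, g_thresh=0.5):
    """Label an image 'remake' or 'real' from its two feature values.
    order: 'y_first' checks the Y-channel rate first, 'g_first' the
    surface-gradient feature first, 'both' requires both to trigger."""
    y_hit = y_conv_rate >= y_thresh    # meets remake requirement on Y rate
    g_hit = grad_feature >= g_thresh   # meets remake requirement on gradient
    if order == "both":
        return "remake" if (y_hit and g_hit) else "real"
    # 'y_first' and 'g_first' differ only in checking order; in either
    # sequential variant, one factor meeting its requirement suffices
    return "remake" if (y_hit or g_hit) else "real"
```

Note that the sequential variants label an image a remake when either factor triggers, while the joint variant requires both, exactly as the paragraph above describes.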
The method for recognizing remake images provided by this embodiment uses a trained remake-image classifier to recognize whether an image is a remake from the image's Y-channel luminance conversion rate and surface-gradient feature value; it can efficiently and intelligently determine whether the image to be classified is a real image or a remake image, effectively prevents fraud, and solves the current problem that remake images cannot be accurately distinguished from actually captured images.
Embodiment 2:
As shown in FIG. 2, in this embodiment, step S101 of Embodiment 1 specifically includes:
Step S201: obtain training images and divide the training images into a real-image group and a remake-image group.
In a specific application, a large number of training images are obtained through the verification system and divided into a real-image group and a remake-image group according to whether each image is a real image or a remake image.
Step S202: extract the feature values of the real-image group and the feature values of the remake-image group respectively.
In a specific application, the images of the real-image group are input into a pre-built deep neural network for obtaining image feature values, the feature values of these images are obtained, and each image is saved in association with its feature values.
In a specific application, the images of the remake-image group are input into the pre-built deep neural network for obtaining image feature values, the feature values of these images are obtained, and each image is saved in association with its feature values.
In a specific application, the above feature values include the image's Y-channel luminance conversion rate and surface-gradient feature value.
Step S203: train the remake-image classifier with the feature values of the real-image group as input parameters, so that the classification result output by the remake-image classifier is that the image is a real image.
In a specific application, the feature values of each image in the real-image group are input into the remake-image classifier so that its output classification result is that the image is a real image, completing the recognition training on real images.
Step S204: train the remake-image classifier with the feature values of the remake-image group as input parameters, so that the result output by the remake-image classifier is that the image is a remake image.
In a specific application, the feature values of each image in the remake-image group are input into the remake-image classifier so that its output classification result is that the image is a remake image, completing the recognition training on remake images.
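Steps S201 to S204 amount to supervised training on two labelled groups. Since the patent does not name the classifier, the sketch below uses a nearest-centroid rule purely as a stand-in, with hypothetical feature vectors:

```python
import numpy as np

def train_two_group_classifier(real_feats, remake_feats):
    """Remember the mean feature vector of each group (a nearest-centroid
    stand-in for the unspecified remake-image classifier)."""
    return np.mean(real_feats, axis=0), np.mean(remake_feats, axis=0)

def classify(centroids, feat):
    """Assign the label of the closer group centroid."""
    real_c, remake_c = centroids
    if np.linalg.norm(feat - remake_c) < np.linalg.norm(feat - real_c):
        return "remake image"
    return "real image"

# hypothetical feature vectors: (Y conversion rate, gradient feature)
real_group = np.array([[0.1, 0.2], [0.2, 0.1]])
remake_group = np.array([[0.8, 0.9], [0.9, 0.8]])
centroids = train_two_group_classifier(real_group, remake_group)
```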
Embodiment 3:
As shown in FIG. 3, in this embodiment, step S102 of Embodiment 1 specifically includes:
Step S301: initialize the image to be classified to obtain the channel luminance value of its Y channel with the specular-reflection part removed.
In a specific application, color-space conversion is performed on the image to be classified to obtain the luminance histogram of its Y channel; after the luminance histogram of the Y channel has been normalized, equalized and polynomially converted, the channel luminance value of the Y channel with the specular-reflection part removed is obtained.
Step S302: extract the channel luminance value of the specular-reflection part of the Y channel of the image to be classified, and compute the Y-channel luminance conversion rate from the channel luminance value of the Y channel with the specular-reflection part removed and the channel luminance value of the specular-reflection part of the Y channel of the image to be classified.
In a specific application, the original luminance value of the Y channel of the image to be classified is obtained, and the channel luminance value of the specular-reflection part of the Y channel is computed from the original luminance value of the Y channel and the channel luminance value of the Y channel with the specular-reflection part removed, using the formula: Y_S = Y_0 − Y_d, where Y_S is the channel luminance value of the specular-reflection part of the Y channel, Y_0 is the original luminance value of the Y channel, and Y_d is the channel luminance value of the Y channel with the specular-reflection part removed.
In a specific application, after the channel luminance value of the specular-reflection part of the Y channel has been computed, the Y-channel luminance conversion rate is computed from it and the channel luminance value of the Y channel with the specular-reflection part removed, using the formula: Y_t = Y_S / Y_d, where Y_S is the channel luminance value of the specular-reflection part of the Y channel, Y_d is the channel luminance value of the Y channel with the specular-reflection part removed, and Y_t is the Y-channel luminance conversion rate.
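With Y_0, Y_d, Y_S and Y_t as defined above, the two formulas reduce to a few lines of array arithmetic. A sketch assuming per-pixel luminance arrays (the toy values are illustrative only):

```python
import numpy as np

def y_channel_conversion_rate(y_original, y_diffuse):
    """Y_S = Y_0 - Y_d (specular part), then Y_t = Y_S / Y_d."""
    y_specular = y_original - y_diffuse   # luminance of the specular part
    return y_specular / y_diffuse         # per-pixel conversion rate

y0 = np.array([200.0, 180.0, 147.0])   # original Y-channel luminance Y_0
yd = np.array([160.0, 150.0, 140.0])   # Y with specular part removed, Y_d
y_t = y_channel_conversion_rate(y0, yd)
```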
It should be noted that how to extract the original luminance value of the Y channel of the image to be classified is well known to those skilled in the art and is not described here.
Step S303: compute the surface-gradient values of the G channel of the image to be classified, draw a histogram from the surface-gradient values, and obtain the feature values of the histogram.
It should be noted that extracting the G channel of the image to be classified and computing the surface-gradient feature values of the G channel are well known to those skilled in the art and are not described here. Likewise, how to draw a histogram from the surface-gradient feature values of the G channel and how to obtain the feature values of that histogram are well known to those skilled in the art and are therefore not described.
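Since the patent defers the gradient computation to common knowledge, the following is one standard realization (an assumption, not the patented procedure): take the per-pixel gradient magnitude of the G channel and use its normalized histogram as the feature vector:

```python
import numpy as np

def surface_gradient_histogram(g_channel, bins=8):
    """Histogram of G-channel gradient magnitudes as a feature vector."""
    gy, gx = np.gradient(g_channel.astype(float))   # finite differences
    magnitude = np.hypot(gx, gy)                    # per-pixel gradient norm
    hist, _ = np.histogram(magnitude, bins=bins,
                           range=(0.0, magnitude.max() + 1e-9))
    return hist / hist.sum()                        # normalize to sum to 1

g = np.tile(np.arange(8.0), (8, 1))   # toy G channel: horizontal ramp
feature = surface_gradient_histogram(g)
```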
In one embodiment, step S301 specifically includes:
Step S3011: perform color-space conversion on the image to be classified, and extract the luminance histogram of the Y channel of the image to be classified;
Step S3012: normalize the luminance histogram of the Y channel of the image to be classified to obtain the original histogram of the Y channel of the image to be classified;
Step S3013: equalize the original histogram to obtain the equalized luminance histogram of the Y channel of the image to be classified;
Step S3014: map the equalized luminance histogram of the Y channel through a polynomial conversion function to obtain the channel luminance value of the Y channel of the image to be classified with the specular-reflection part removed.
In a specific application, the polynomial conversion function is:
F(x) = P_0x^4 + P_1x^3 + P_2x^2 + P_3x + P_4;
P = [P_0, P_1, P_2, P_3, P_4] = [1.54, -3.426, -1.733, 0.7435, 0.00436];
where P is the coefficient matrix.
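Steps S3011 to S3014 can be sketched end to end. The normalization and equalization details below are the standard textbook operations (an assumption; the patent does not specify them), while the quartic and its coefficients P are taken verbatim from the text:

```python
import numpy as np

# coefficient matrix P from the polynomial conversion function above
P = np.array([1.54, -3.426, -1.733, 0.7435, 0.00436])

def diffuse_luminance(y_channel):
    """Normalize Y to [0, 1], histogram-equalize via the empirical CDF,
    then map through F(x) = P0*x^4 + P1*x^3 + P2*x^2 + P3*x + P4."""
    y = y_channel.astype(float)
    span = y.max() - y.min()
    y_norm = (y - y.min()) / (span if span else 1.0)   # normalization
    vals, counts = np.unique(y_norm, return_counts=True)
    cdf = np.cumsum(counts) / counts.sum()             # equalization
    y_eq = np.interp(y_norm, vals, cdf)
    return np.polyval(P, y_eq)                         # polynomial mapping

y = np.array([[10.0, 40.0], [80.0, 200.0]])   # toy Y-channel values
y_d = diffuse_luminance(y)
```

`np.polyval` evaluates the coefficients highest power first, matching the order of P above.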
Embodiment 4:
As shown in FIG. 4, unlike Embodiment 3 above, in this embodiment step S102 of Embodiment 1 specifically includes:
Step S401: obtain a large number of training images together with the Y-channel luminance conversion rates and surface-gradient feature values of the training images.
In a specific application, a large number of training images are obtained through the verification system, and the Y-channel luminance conversion rate and surface-gradient feature value of each training image are computed using the method provided in Embodiment 3.
Step S402: train the VGG19 neural network with the training images as the input of the VGG19 neural network model and the Y-channel luminance conversion rates and surface-gradient feature values as the output of the VGG19 neural network model, until the convergence function of the VGG19 neural network converges.
Step S403: input the image to be classified into the VGG19 neural network to obtain the Y-channel luminance conversion rate of the image to be classified and the surface-gradient feature value of the image.
In a specific application, the VGG19 neural network is trained on a large number of training images, yielding a VGG19 network that, given an input image to be classified, outputs that image's Y-channel luminance conversion rate and surface-gradient feature value; the trained VGG19 network can extract the feature values of an image quickly.
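Regressing the two feature values from an image is a standard supervised-regression setup. The sketch below substitutes a single linear layer trained by MSE gradient descent for the (much larger) VGG19 model, purely to illustrate the input/target arrangement; all data is synthetic:

```python
import numpy as np

def train_feature_regressor(inputs, targets, lr=0.01, epochs=2000):
    """Learn W, b so that inputs @ W + b approximates the targets
    (columns: Y-channel conversion rate, surface-gradient feature)."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(inputs.shape[1], targets.shape[1]))
    b = np.zeros(targets.shape[1])
    for _ in range(epochs):
        err = inputs @ W + b - targets            # prediction error
        W -= lr * inputs.T @ err / len(inputs)    # MSE gradient step for W
        b -= lr * err.mean(axis=0)                # MSE gradient step for b
    return W, b

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 4))                       # stand-in image features
true_W = np.array([[0.5, -0.2], [0.1, 0.3], [-0.4, 0.2], [0.2, 0.1]])
Y = X @ true_W + np.array([0.05, -0.03])           # synthetic feature targets
W, b = train_feature_regressor(X, Y)
```

In the patented setup, `X` would be images fed through VGG19 and training would continue until the network's convergence function converges, as step S402 states.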
Embodiment 5:
As shown in FIG. 5, this embodiment provides a system 100 for recognizing remake images, configured to execute the method steps of Embodiment 1, which includes a classifier construction module 101, a feature extraction module 102 and a recognition module 103.
The classifier construction module 101 is configured to construct a remake-image classifier from multiple training samples, wherein each training sample includes a training image and the corresponding classification result, the classification result being real image or remake image.
The feature extraction module 102 is configured to extract feature values of the image to be classified, the feature values including the Y-channel luminance conversion rate of the image to be classified and the surface-gradient feature value of the image to be classified.
The recognition module 103 is configured to classify the feature values of the image to be classified with the remake-image classifier and determine, based on the classification result, whether the image to be classified is a remake image.
It should be noted that, since the image processing system provided by this embodiment of the present application is based on the same concept as the method embodiment shown in FIG. 1 of the present application, the technical effects it brings are the same as those of that method embodiment; for details, reference may be made to the description in the method embodiment shown in FIG. 1, which is not repeated here.
Therefore, the system for recognizing remake images provided by this embodiment can likewise use the trained remake-image classifier to recognize whether the image to be classified is a remake from its Y-channel luminance conversion rate and surface-gradient feature value; it can efficiently and intelligently determine whether the image is a real image or a remake image, effectively prevents fraud, and solves the current problem that remake images cannot be accurately distinguished from actually captured images.
Embodiment 6:
As shown in FIG. 6, in this embodiment, the classifier construction module 101 of Embodiment 5 includes structures for executing the method steps of the embodiment corresponding to FIG. 2, namely an image acquisition unit 201, a feature-value extraction unit 202, a first training unit 203 and a second training unit 204.
The image acquisition unit 201 is configured to obtain training images and divide the training images into a real-image group and a remake-image group.
The feature-value extraction unit 202 is configured to extract the feature values of the real-image group and the feature values of the remake-image group respectively.
The first training unit 203 is configured to train the remake-image classifier with the feature values of the real-image group as input parameters, so that the classification result output by the remake-image classifier is that the image is a real image.
The second training unit 204 is configured to train the remake-image classifier with the feature values of the remake-image group as input parameters, so that the result output by the remake-image classifier is that the image is a remake image.
Embodiment 7:
As shown in FIG. 7, in this embodiment, the feature extraction module 102 of Embodiment 5 includes structures for executing the method steps of the embodiment corresponding to FIG. 3, namely an initialization unit 301, a conversion-rate computing unit 302 and a feature-value acquisition unit 303.
The initialization unit 301 is configured to initialize the image to be classified to obtain the channel luminance value of its Y channel with the specular-reflection part removed.
The conversion-rate computing unit 302 is configured to extract the channel luminance value of the specular-reflection part of the Y channel of the image to be classified, and compute the Y-channel luminance conversion rate from the channel luminance value of the Y channel with the specular-reflection part removed and the channel luminance value of the specular-reflection part of the Y channel of the image to be classified.
The feature-value acquisition unit 303 is configured to compute the surface-gradient values of the G channel of the image to be classified, draw a histogram from the surface-gradient values, and obtain the feature values of the histogram.
Embodiment 8:
As shown in FIG. 8, unlike Embodiment 7, in this embodiment the feature extraction module 102 of Embodiment 5 includes structures for executing the method steps of the embodiment corresponding to FIG. 4, namely an acquisition unit 401, a training unit 402 and an extraction unit 403.
The acquisition unit 401 is configured to obtain a large number of training images together with the Y-channel luminance conversion rates and surface-gradient feature values of the training images.
The training unit 402 is configured to train the VGG19 neural network with the training images as the input of the VGG19 neural network model and the Y-channel luminance conversion rates and surface-gradient feature values as the output of the VGG19 neural network model, until the convergence function of the VGG19 neural network converges.
The extraction unit 403 is configured to input the image to be classified into the VGG19 neural network to obtain the Y-channel luminance conversion rate of the image to be classified and the surface-gradient feature value of the image to be classified.
Embodiment 9:
FIG. 9 is a schematic diagram of the terminal device provided by Embodiment 9 of the present application. As shown in FIG. 9, the terminal device 9 of this embodiment includes: a processor 90, a memory 91, and computer-readable instructions 92, for example a program, stored in the memory 91 and executable on the processor 90. When executing the computer-readable instructions 92, the processor 90 implements the steps in each of the above method embodiments, for example steps S101 to S103 shown in FIG. 1. Alternatively, when executing the computer-readable instructions 92, the processor 90 implements the functions of the modules/units in the above system embodiments, for example the functions of modules 101 to 103 shown in FIG. 5.
Exemplarily, the computer-readable instructions 92 may be divided into one or more modules/units, which are stored in the memory 91 and executed by the processor 90 to complete the present application. The one or more modules/units may be a series of computer-readable instruction segments capable of accomplishing specific functions, and the instruction segments are used to describe the execution process of the computer-readable instructions 92 in the terminal device 9. For example, the computer-readable instructions 92 may be divided into a classifier construction module, a feature extraction module and a recognition module, whose specific functions are as follows:
a classifier construction module, configured to construct a remake-image classifier from multiple training samples, wherein each training sample includes a training image and the corresponding classification result, the classification result being real image or remake image;
a feature extraction module, configured to extract feature values of the image to be classified, the feature values including the Y-channel luminance conversion rate of the image to be classified and the surface-gradient feature value of the image to be classified;
a recognition module, configured to classify the feature values of the image to be classified with the remake-image classifier and determine, based on the classification result, whether the image to be classified is a remake image.
The terminal device 9 may be a computing device such as a desktop computer, a notebook, a palmtop computer or a cloud management server. The terminal device may include, but is not limited to, the processor 90 and the memory 91. Those skilled in the art will understand that FIG. 9 is merely an example of the terminal device 9 and does not constitute a limitation on the terminal device 9, which may include more or fewer components than shown, combine certain components, or have different components; for example, the terminal device may also include input/output devices, network access devices, buses, and the like.
The processor 90 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 91 may be an internal storage unit of the terminal device 9, for example a hard disk or internal memory of the terminal device 9. The memory 91 may also be an external storage device of the terminal device 9, for example a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card provided on the terminal device 9. Further, the memory 91 may include both an internal storage unit and an external storage device of the terminal device 9. The memory 91 is used to store the computer-readable instructions and other programs and data required by the terminal device. The memory 91 may also be used to temporarily store data that has been or will be output.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the division into the above functional units and modules is used only as an example; in practical applications, the above functions may be assigned to different functional units or modules as needed, that is, the internal structure of the system may be divided into different functional units or modules to accomplish all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit; the integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only for distinguishing them from one another and are not intended to limit the scope of protection of the present application. For the specific working processes of the units and modules in the above terminal device, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are executed in hardware or in software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.
For example, the system/terminal device embodiments described above are merely illustrative; for instance, the division of the modules or units is only a logical functional division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the present application may implement all or part of the processes in the methods of the above embodiments by instructing the relevant hardware through computer-readable instructions, which may be stored in a computer-readable storage medium; when executed by a processor, the computer-readable instructions can implement the steps of each of the above method embodiments. The computer-readable instructions include computer-readable instruction code, which may be in source-code form, object-code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer-readable instruction code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all be included within the scope of protection of the present application.

Claims (20)

  1. A method for recognizing remake images, characterized by comprising:
    constructing a remake-image classifier from multiple training samples, wherein each training sample includes a training image and the corresponding classification result, the classification result being real image or remake image;
    extracting feature values of an image to be classified, the feature values including the Y-channel luminance conversion rate of the image to be classified and the surface-gradient feature value of the image to be classified;
    classifying the feature values of the image to be classified with the remake-image classifier, and recognizing, based on the classification result, whether the image to be classified is a remake image.
  2. The method according to claim 1, characterized in that constructing the remake-image classifier from multiple training samples comprises:
    obtaining training images and dividing the training images into a real-image group and a remake-image group;
    extracting the feature values of the real-image group and the feature values of the remake-image group respectively;
    training the remake-image classifier with the feature values of the real-image group as input parameters, so that the classification result output by the remake-image classifier is that the image is a real image;
    training the remake-image classifier with the feature values of the remake-image group as input parameters, so that the result output by the remake-image classifier is that the image is a remake image.
  3. The method according to claim 1, characterized in that extracting the feature values of the image to be classified, the feature values including the Y-channel luminance conversion rate of the image to be classified and the surface-gradient feature value of the image to be classified, comprises:
    initializing the image to be classified to obtain the channel luminance value of its Y channel with the specular-reflection part removed;
    extracting the channel luminance value of the specular-reflection part of the Y channel of the image to be classified, and computing the Y-channel luminance conversion rate from the channel luminance value of the Y channel with the specular-reflection part removed and the channel luminance value of the specular-reflection part of the Y channel of the image to be classified;
    computing the surface-gradient values of the G channel of the image to be classified, drawing a histogram from the surface-gradient values, and obtaining the feature values of the histogram.
  4. The method according to claim 3, characterized in that initializing the image to be classified to obtain the channel luminance value of its Y channel with the specular-reflection part removed comprises:
    performing color-space conversion on the image to be classified, and extracting the luminance histogram of the Y channel of the image to be classified;
    normalizing the luminance histogram of the Y channel of the image to be classified to obtain the original histogram of the Y channel of the image to be classified;
    equalizing the original histogram to obtain the equalized luminance histogram of the Y channel of the image to be classified;
    mapping the equalized luminance histogram of the Y channel through a polynomial conversion function to obtain the channel luminance value of the Y channel of the image to be classified with the specular-reflection part removed;
    the polynomial conversion function being:
    F(x) = P_0x^4 + P_1x^3 + P_2x^2 + P_3x + P_4;
    P = [P_0, P_1, P_2, P_3, P_4] = [1.54, -3.426, -1.733, 0.7435, 0.00436];
    where P is the coefficient matrix.
  5. The method according to claim 1, characterized in that extracting the feature values of the image to be classified, the feature values including the Y-channel luminance conversion rate of the image to be classified and the surface-gradient feature value of the image to be classified, comprises:
    obtaining a large number of training images together with the Y-channel luminance conversion rates and surface-gradient feature values of the training images;
    training the VGG19 neural network with the training images as the input of the VGG19 neural network model and the Y-channel luminance conversion rates and surface-gradient feature values as the output of the VGG19 neural network model, until the convergence function of the VGG19 neural network converges;
    inputting the image to be classified into the VGG19 neural network to obtain the Y-channel luminance conversion rate of the image to be classified and the surface-gradient feature value of the image to be classified.
  6. A system for recognizing remake images, characterized by comprising:
    a classifier construction module, configured to construct a remake-image classifier from multiple training samples, wherein each training sample includes a training image and the corresponding classification result, the classification result being real image or remake image;
    a feature extraction module, configured to extract feature values of an image to be classified, the feature values including the Y-channel luminance conversion rate of the image to be classified and the surface-gradient feature value of the image to be classified;
    a recognition module, configured to classify the feature values of the image to be classified with the remake-image classifier and determine, based on the classification result, whether the image to be classified is a remake image.
  7. The system for recognizing remake images according to claim 6, characterized in that the feature extraction module comprises:
    an initialization unit, configured to initialize the image to be classified to obtain the channel luminance value of its Y channel with the specular-reflection part removed;
    a conversion-rate computing unit, configured to extract the channel luminance value of the specular-reflection part of the Y channel of the image to be classified, and compute the Y-channel luminance conversion rate from the channel luminance value of the Y channel with the specular-reflection part removed and the channel luminance value of the specular-reflection part of the Y channel of the image to be classified;
    a feature-value acquisition unit, configured to compute the surface-gradient values of the G channel of the image to be classified, draw a histogram from the surface-gradient values, and obtain the feature values of the histogram.
  8. The system for recognizing remake images according to claim 6, characterized in that the feature extraction module comprises:
    an acquisition unit, configured to obtain a large number of training images together with the Y-channel luminance conversion rates and surface-gradient feature values of the training images;
    a training unit, configured to train the VGG19 neural network with the training images as the input of the VGG19 neural network model and the Y-channel luminance conversion rates and surface-gradient feature values as the output of the VGG19 neural network model, until the convergence function of the VGG19 neural network converges;
    an extraction unit, configured to input the image to be classified into the VGG19 neural network to obtain the Y-channel luminance conversion rate of the image to be classified and the surface-gradient feature value of the image to be classified.
  9. The system for recognizing remake images according to claim 6, characterized in that the classifier construction module comprises:
    an image acquisition unit, configured to obtain training images and divide the training images into a real-image group and a remake-image group;
    a feature extraction unit, configured to extract the feature values of the real-image group and the feature values of the remake-image group respectively;
    a first training unit, configured to train the remake-image classifier with the feature values of the real-image group as input parameters, so that the classification result output by the remake-image classifier is that the image is a real image;
    a second training unit, configured to train the remake-image classifier with the feature values of the remake-image group as input parameters, so that the result output by the remake-image classifier is that the image is a remake image.
  10. The system for recognizing remake images according to claim 7, characterized in that the conversion-rate computing unit comprises:
    a histogram extraction unit, configured to perform color-space conversion on the image to be classified and extract the luminance histogram of the Y channel of the image to be classified;
    a normalization unit, configured to normalize the luminance histogram of the Y channel of the image to be classified to obtain the original histogram of the Y channel of the image to be classified;
    an equalization unit, configured to equalize the original histogram to obtain the equalized luminance histogram of the Y channel of the image to be classified;
    a polynomial conversion unit, configured to map the equalized luminance histogram of the Y channel through a polynomial conversion function to obtain the channel luminance value of the Y channel of the image to be classified with the specular-reflection part removed;
    the polynomial conversion function being:
    F(x) = P_0x^4 + P_1x^3 + P_2x^2 + P_3x + P_4;
    P = [P_0, P_1, P_2, P_3, P_4] = [1.54, -3.426, -1.733, 0.7435, 0.00436];
    where P is the coefficient matrix.
  11. A terminal device, characterized in that the terminal device comprises a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, and the processor implements the following steps when executing the computer-readable instructions:
    constructing a remake-image classifier from multiple training samples, wherein each training sample includes a training image and the corresponding classification result, the classification result being real image or remake image;
    extracting feature values of an image to be classified, the feature values including the Y-channel luminance conversion rate of the image to be classified and the surface-gradient feature value of the image to be classified;
    classifying the feature values of the image to be classified with the remake-image classifier, and recognizing, based on the classification result, whether the image to be classified is a remake image.
  12. The terminal device according to claim 11, characterized in that constructing the remake-image classifier from multiple training samples comprises:
    obtaining training images and dividing the training images into a real-image group and a remake-image group;
    extracting the feature values of the real-image group and the feature values of the remake-image group respectively;
    training the remake-image classifier with the feature values of the real-image group as input parameters, so that the classification result output by the remake-image classifier is that the image is a real image;
    training the remake-image classifier with the feature values of the remake-image group as input parameters, so that the result output by the remake-image classifier is that the image is a remake image.
  13. The terminal device according to claim 11, characterized in that extracting the feature values of the image to be classified, the feature values including the Y-channel luminance conversion rate of the image to be classified and the surface-gradient feature value of the image to be classified, comprises:
    initializing the image to be classified to obtain the channel luminance value of its Y channel with the specular-reflection part removed;
    extracting the channel luminance value of the specular-reflection part of the Y channel of the image to be classified, and computing the Y-channel luminance conversion rate from the channel luminance value of the Y channel with the specular-reflection part removed and the channel luminance value of the specular-reflection part of the Y channel of the image to be classified;
    computing the surface-gradient values of the G channel of the image to be classified, drawing a histogram from the surface-gradient values, and obtaining the feature values of the histogram.
  14. The terminal device according to claim 13, characterized in that initializing the image to be classified to obtain the channel luminance value of its Y channel with the specular-reflection part removed comprises:
    performing color-space conversion on the image to be classified, and extracting the luminance histogram of the Y channel of the image to be classified;
    normalizing the luminance histogram of the Y channel of the image to be classified to obtain the original histogram of the Y channel of the image to be classified;
    equalizing the original histogram to obtain the equalized luminance histogram of the Y channel of the image to be classified;
    mapping the equalized luminance histogram of the Y channel through a polynomial conversion function to obtain the channel luminance value of the Y channel of the image to be classified with the specular-reflection part removed;
    the polynomial conversion function being:
    F(x) = P_0x^4 + P_1x^3 + P_2x^2 + P_3x + P_4;
    P = [P_0, P_1, P_2, P_3, P_4] = [1.54, -3.426, -1.733, 0.7435, 0.00436];
    where P is the coefficient matrix.
  15. The terminal device according to claim 11, characterized in that extracting the feature values of the image to be classified, the feature values including the Y-channel luminance conversion rate of the image to be classified and the surface-gradient feature value of the image to be classified, comprises:
    obtaining a large number of training images together with the Y-channel luminance conversion rates and surface-gradient feature values of the training images;
    training the VGG19 neural network with the training images as the input of the VGG19 neural network model and the Y-channel luminance conversion rates and surface-gradient feature values as the output of the VGG19 neural network model, until the convergence function of the VGG19 neural network converges;
    inputting the image to be classified into the VGG19 neural network to obtain the Y-channel luminance conversion rate of the image to be classified and the surface-gradient feature value of the image to be classified.
  16. A computer-readable storage medium storing computer-readable instructions, characterized in that the computer-readable instructions, when executed by a processor, implement the following steps:
    constructing a remake-image classifier from multiple training samples, wherein each training sample includes a training image and the corresponding classification result, the classification result being real image or remake image;
    extracting feature values of an image to be classified, the feature values including the Y-channel luminance conversion rate of the image to be classified and the surface-gradient feature value of the image to be classified;
    classifying the feature values of the image to be classified with the remake-image classifier, and recognizing, based on the classification result, whether the image to be classified is a remake image.
  17. The computer-readable storage medium according to claim 16, characterized in that constructing the remake-image classifier from multiple training samples comprises:
    obtaining training images and dividing the training images into a real-image group and a remake-image group;
    extracting the feature values of the real-image group and the feature values of the remake-image group respectively;
    training the remake-image classifier with the feature values of the real-image group as input parameters, so that the classification result output by the remake-image classifier is that the image is a real image;
    training the remake-image classifier with the feature values of the remake-image group as input parameters, so that the result output by the remake-image classifier is that the image is a remake image.
  18. The computer-readable storage medium according to claim 16, characterized in that extracting the feature values of the image to be classified, the feature values including the Y-channel luminance conversion rate of the image to be classified and the surface-gradient feature value of the image to be classified, comprises:
    initializing the image to be classified to obtain the channel luminance value of its Y channel with the specular-reflection part removed;
    extracting the channel luminance value of the specular-reflection part of the Y channel of the image to be classified, and computing the Y-channel luminance conversion rate from the channel luminance value of the Y channel with the specular-reflection part removed and the channel luminance value of the specular-reflection part of the Y channel of the image to be classified;
    computing the surface-gradient values of the G channel of the image to be classified, drawing a histogram from the surface-gradient values, and obtaining the feature values of the histogram.
  19. The computer-readable storage medium according to claim 18, characterized in that initializing the image to be classified to obtain the channel luminance value of its Y channel with the specular-reflection part removed comprises:
    performing color-space conversion on the image to be classified, and extracting the luminance histogram of the Y channel of the image to be classified;
    normalizing the luminance histogram of the Y channel of the image to be classified to obtain the original histogram of the Y channel of the image to be classified;
    equalizing the original histogram to obtain the equalized luminance histogram of the Y channel of the image to be classified;
    mapping the equalized luminance histogram of the Y channel through a polynomial conversion function to obtain the channel luminance value of the Y channel of the image to be classified with the specular-reflection part removed;
    the polynomial conversion function being:
    F(x) = P_0x^4 + P_1x^3 + P_2x^2 + P_3x + P_4;
    P = [P_0, P_1, P_2, P_3, P_4] = [1.54, -3.426, -1.733, 0.7435, 0.00436];
    where P is the coefficient matrix.
  20. The computer-readable storage medium according to claim 16, characterized in that extracting the feature values of the image to be classified, the feature values including the Y-channel luminance conversion rate of the image to be classified and the surface-gradient feature value of the image to be classified, comprises:
    obtaining a large number of training images together with the Y-channel luminance conversion rates and surface-gradient feature values of the training images;
    training the VGG19 neural network with the training images as the input of the VGG19 neural network model and the Y-channel luminance conversion rates and surface-gradient feature values as the output of the VGG19 neural network model, until the convergence function of the VGG19 neural network converges;
    inputting the image to be classified into the VGG19 neural network to obtain the Y-channel luminance conversion rate of the image to be classified and the surface-gradient feature value of the image to be classified.
PCT/CN2019/091504 2019-01-07 2019-06-17 一种翻拍图像的识别方法、***及终端设备 WO2020143165A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910012454.3 2019-01-07
CN201910012454.3A CN109784394A (zh) 2019-01-07 2019-01-07 一种翻拍图像的识别方法、***及终端设备

Publications (1)

Publication Number Publication Date
WO2020143165A1 true WO2020143165A1 (zh) 2020-07-16

Family

ID=66500020

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/091504 WO2020143165A1 (zh) 2019-01-07 2019-06-17 一种翻拍图像的识别方法、***及终端设备

Country Status (2)

Country Link
CN (1) CN109784394A (zh)
WO (1) WO2020143165A1 (zh)


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109784394A (zh) * 2019-01-07 2019-05-21 平安科技(深圳)有限公司 一种翻拍图像的识别方法、***及终端设备
CN111275685B (zh) * 2020-01-20 2024-06-11 中国平安人寿保险股份有限公司 身份证件的翻拍图像识别方法、装置、设备及介质
CN111461143A (zh) * 2020-03-31 2020-07-28 珠海格力电器股份有限公司 一种图片翻拍识别方法和装置及电子设备
CN112927221B (zh) * 2020-12-09 2022-03-29 广州市玄武无线科技股份有限公司 一种基于图像细粒度特征翻拍检测方法及***
CN114677526A (zh) * 2022-03-25 2022-06-28 平安科技(深圳)有限公司 图像分类方法、装置、设备及介质

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102521614A (zh) * 2011-12-20 2012-06-27 中山大学 一种翻拍数字图像的鉴定方法
US20140177947A1 (en) * 2012-12-24 2014-06-26 Google Inc. System and method for generating training cases for image classification
CN104598933A (zh) * 2014-11-13 2015-05-06 上海交通大学 一种基于多特征融合的图像翻拍检测方法
CN105117729A (zh) * 2015-05-11 2015-12-02 杭州金培科技有限公司 一种识别翻拍图像的方法和装置
CN105118048A (zh) * 2015-07-17 2015-12-02 北京旷视科技有限公司 翻拍证件图片的识别方法及装置
CN106991451A (zh) * 2017-04-14 2017-07-28 武汉神目信息技术有限公司 一种证件图片的识别***及方法
CN108171689A (zh) * 2017-12-21 2018-06-15 深圳大学 一种显示器屏幕图像翻拍的鉴定方法、装置及存储介质
CN108520285A (zh) * 2018-04-16 2018-09-11 清华大学 物品鉴别方法、***、设备及存储介质
CN109784394A (zh) * 2019-01-07 2019-05-21 平安科技(深圳)有限公司 一种翻拍图像的识别方法、***及终端设备


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FENG, LI: "Blind Forensics of Recaptured Image Based on Specularity Distribution and Surface Gradient", CHINESE MASTER’S THESES FULL-TEXT DATABASE, 15 December 2013 (2013-12-15), pages 1 - 59, XP009522060 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112507923A (zh) * 2020-12-16 2021-03-16 平安银行股份有限公司 证件翻拍检测方法、装置、电子设备及介质
CN112507923B (zh) * 2020-12-16 2023-10-31 平安银行股份有限公司 证件翻拍检测方法、装置、电子设备及介质

Also Published As

Publication number Publication date
CN109784394A (zh) 2019-05-21

Similar Documents

Publication Publication Date Title
WO2020143165A1 (zh) 一种翻拍图像的识别方法、***及终端设备
WO2021057848A1 (zh) 网络的训练方法、图像处理方法、网络、终端设备及介质
CN110084135B (zh) 人脸识别方法、装置、计算机设备及存储介质
CN112381775B (zh) 一种图像篡改检测方法、终端设备及存储介质
JP6629513B2 (ja) ライブネス検査方法と装置、及び映像処理方法と装置
CN110197146B (zh) 基于深度学习的人脸图像分析方法、电子装置及存储介质
WO2020253127A1 (zh) 脸部特征提取模型训练方法、脸部特征提取方法、装置、设备及存储介质
WO2020024744A1 (zh) 一种图像特征点检测方法、终端设备及存储介质
WO2020143330A1 (zh) 一种人脸图像的捕捉方法、计算机可读存储介质及终端设备
CN111488756A (zh) 基于面部识别的活体检测的方法、电子设备和存储介质
WO2020253508A1 (zh) 异常细胞检测方法、装置及计算机可读存储介质
TW202036367A (zh) 人臉識別方法及裝置
WO2022127111A1 (zh) 跨模态人脸识别方法、装置、设备及存储介质
WO2019119396A1 (zh) 人脸表情识别方法及装置
WO2022166207A1 (zh) 人脸识别方法、装置、设备及存储介质
WO2021184847A1 (zh) 一种遮挡车牌字符识别方法、装置、存储介质和智能设备
CN113642639B (zh) 活体检测方法、装置、设备和存储介质
WO2020164266A1 (zh) 一种活体检测方法、***及终端设备
CN110879986A (zh) 人脸识别的方法、设备和计算机可读存储介质
CN113743378B (zh) 一种基于视频的火情监测方法和装置
CN111461143A (zh) 一种图片翻拍识别方法和装置及电子设备
CN113158773B (zh) 一种活体检测模型的训练方法及训练装置
CN110895811A (zh) 一种图像篡改检测方法和装置
WO2020248848A1 (zh) 智能化异常细胞判断方法、装置及计算机可读存储介质
TWI425429B (zh) 影像紋理信號的萃取方法、影像識別方法與影像識別系統

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19908225

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19908225

Country of ref document: EP

Kind code of ref document: A1