WO2022222575A1 - Method and system for target recognition - Google Patents

Method and system for target recognition

Info

Publication number: WO2022222575A1
Authority: WO (WIPO PCT)
Prior art keywords: color, image, verification, images, target
Application number: PCT/CN2022/075531
Other languages: English (en), French (fr)
Inventors: 张明文, 张天明, 赵宁宁
Original assignee: 北京嘀嘀无限科技发展有限公司
Application filed by 北京嘀嘀无限科技发展有限公司
Publication of WO2022222575A1

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06F ELECTRIC DIGITAL DATA PROCESSING
          • G06F18/00 Pattern recognition
            • G06F18/20 Analysing
              • G06F18/22 Matching criteria, e.g. proximity measures
              • G06F18/24 Classification techniques
        • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
          • G06N3/00 Computing arrangements based on biological models
            • G06N3/02 Neural networks
              • G06N3/04 Architecture, e.g. interconnection topology
                • G06N3/045 Combinations of networks
              • G06N3/08 Learning methods
        • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
          • G06Q20/00 Payment architectures, schemes or protocols
            • G06Q20/38 Payment protocols; Details thereof
              • G06Q20/40 Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
                • G06Q20/401 Transaction verification
                  • G06Q20/4014 Identity check for transactions
                    • G06Q20/40145 Biometric identity checks
        • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V10/00 Arrangements for image or video recognition or understanding
            • G06V10/20 Image preprocessing
              • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
                • G06V10/267 by performing operations on regions, e.g. growing, shrinking or watersheds
            • G06V10/40 Extraction of image or video features
              • G06V10/50 by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
              • G06V10/56 relating to colour
          • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
            • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
              • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
                • G06V40/168 Feature extraction; Face representation
                • G06V40/172 Classification, e.g. identification
            • G06V40/70 Multimodal biometrics, e.g. combining information from different biometric modalities

Definitions

  • This specification relates to the technical field of image processing, and in particular, to a method and system for object recognition.
  • Target recognition is a technology for biometric identification based on targets acquired by image acquisition devices.
  • For example, face recognition, which targets human faces, is widely used in application scenarios such as permission verification and identity verification.
  • To ensure the security of target recognition, it is necessary to determine the authenticity of the target image.
  • One of the embodiments of this specification provides a target recognition method. The method includes: determining an illumination sequence, where the illumination sequence is used to determine multiple colors of multiple illuminations illuminating a target object; acquiring multiple target images, where the shooting times of the multiple target images correspond to the irradiation times of the multiple illuminations; and determining, based on the illumination sequence and the multiple target images, the authenticity of the multiple target images.
  • One of the embodiments of the present specification provides a target recognition system. The system includes: a determination module for determining a lighting sequence, where the lighting sequence is used for determining multiple colors of multiple lights illuminating a target object; an acquisition module for acquiring multiple target images, where the shooting times of the multiple target images correspond to the irradiation times of the multiple lights; and a verification module for determining the authenticity of the multiple target images based on the lighting sequence and the multiple target images.
  • One of the embodiments of the present specification provides a target recognition apparatus, including a processor, where the processor is configured to execute the target recognition method disclosed in the present specification.
  • One of the embodiments of this specification provides a computer-readable storage medium. The storage medium stores computer instructions; after a computer reads the instructions in the storage medium, the computer executes the target recognition method disclosed in this specification.
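  • The three operations above can be summarized as a simple pipeline. Below is an illustrative outline in Python; every function is a hypothetical placeholder standing in for the corresponding step, not an implementation from the patent.

```python
# Illustrative outline of the claimed method. All names are hypothetical
# stand-ins for the steps described in this specification.

def determine_illumination_sequence():
    """Step 1: choose the colors (and irradiation times) of the lights."""
    return ["red", "yellow", "blue"]  # stand-in sequence

def acquire_target_images(sequence):
    """Step 2: images whose shooting times correspond to the irradiation
    times of the lights (stubbed here with strings)."""
    return [f"image_during_{color}" for color in sequence]

def determine_authenticity(sequence, images):
    """Step 3: check the image colors against the sequence (stubbed)."""
    return all(im == f"image_during_{c}" for c, im in zip(sequence, images))

sequence = determine_illumination_sequence()
images = acquire_target_images(sequence)
print(determine_authenticity(sequence, images))  # True for this stub
```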
  • FIG. 1 is a schematic diagram of an application scenario of a target recognition system according to some embodiments of the present specification.
  • FIG. 2 is an exemplary flowchart of a target recognition method according to some embodiments of the present specification.
  • FIG. 3 is a schematic diagram of a lighting sequence according to some embodiments of the present specification.
  • FIG. 4 is another schematic diagram of a lighting sequence according to some embodiments of the present specification.
  • FIG. 5 is an exemplary flowchart of acquiring multiple target images according to some embodiments of the present specification.
  • FIG. 6 is a schematic diagram of texture replacement according to some embodiments of the present specification.
  • FIG. 7 is an exemplary flowchart of determining authenticity of multiple target images according to some embodiments of the present specification.
  • FIG. 8 is another exemplary flowchart for determining the authenticity of multiple target images according to some embodiments of the present specification.
  • FIG. 9 is another exemplary flowchart of determining the authenticity of multiple target images according to some embodiments of the present specification.
  • FIG. 10 is a schematic structural diagram of a first verification model according to some embodiments of the present specification.
  • FIG. 11 is another exemplary flowchart for determining the authenticity of multiple target images according to some embodiments of the present specification.
  • FIG. 12 is another exemplary flowchart for determining the authenticity of multiple target images according to some embodiments of the present specification.
  • FIG. 13 is a schematic structural diagram of a second verification model according to some embodiments of the present specification.
  • FIG. 14 is another exemplary flowchart of determining authenticity of multiple target images according to some embodiments of the present specification.
  • FIG. 15 is a schematic structural diagram of a third verification model according to some embodiments of the present specification.
  • As used herein, terms such as "system" and "device" are means for distinguishing different components, elements, parts, sections, or assemblies at different levels.
  • Target recognition is a technology for biometric recognition based on target objects acquired by image acquisition equipment.
  • the target object may be a human face, a fingerprint, a palm print, a pupil, and the like.
  • object recognition may be applied to authorization verification.
  • For example, access control permission authentication and account payment permission authentication.
  • target recognition may also be used for authentication.
  • For example, employee attendance verification and self-service registration identity checks may be based on matching a target image captured in real time by the image capture device against pre-acquired biometric features, thereby verifying the target's identity.
  • image capture devices can be hacked or hijacked, and attackers can upload fake target images for authentication.
  • For example, attacker A can directly upload a face image of user B after attacking or hijacking the image acquisition device.
  • The target recognition system then performs face recognition based on user B's face image and user B's pre-acquired facial biometrics, thereby passing identity verification as user B.
  • FIG. 1 is a schematic diagram of an application scenario of a target recognition system according to some embodiments of the present specification.
  • The object recognition system 100 may include a processing device 110, a network 120, a terminal 130, and a storage device 140.
  • The processing device 110 may be used to process data and/or information from at least one component of the object recognition system 100 and/or an external data source (e.g., a cloud data center). For example, the processing device 110 may determine a lighting sequence, acquire multiple target images, determine the authenticity of multiple target images, and the like. For another example, the processing device 110 may preprocess (e.g., replace textures of) multiple initial images obtained from the terminal 130 to obtain multiple target images. During processing, the processing device 110 may obtain data (e.g., instructions) from other components of the object recognition system 100 (e.g., the storage device 140 and/or the terminal 130) directly or through the network 120, and/or send the processed data to those components for storage or display.
  • processing device 110 may be a single server or group of servers.
  • The server group may be centralized or distributed (e.g., the processing device 110 may be a distributed system).
  • processing device 110 may be local or remote.
  • the processing device 110 may be implemented on a cloud platform, or provided in a virtual fashion.
  • cloud platforms may include private clouds, public clouds, hybrid clouds, community clouds, distributed clouds, internal clouds, multi-tier clouds, etc., or any combination thereof.
  • the network 120 may connect components of the system and/or connect the system with external components.
  • the network 120 enables communication between the various components of the object recognition system 100 and between the object recognition system 100 and external components, facilitating the exchange of data and/or information.
  • the network 120 may be any one or more of a wired network or a wireless network.
  • The network 120 may include a cable network, a fiber-optic network, a telecommunications network, the Internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth network, a ZigBee network, near-field communication (NFC), an intra-device bus, an intra-device line, a cable connection, etc., or any combination thereof.
  • the network connection between the various parts in the object recognition system 100 may adopt one of the above-mentioned manners, or may adopt multiple manners.
  • the network 120 may be of various topologies such as point-to-point, shared, centralized, or a combination of topologies.
  • network 120 may include one or more network access points.
  • The network 120 may include wired or wireless network access points, such as base stations and/or network switching points 120-1, 120-2, ..., through which one or more components of the object recognition system 100 may connect to the network 120 to exchange data and/or information.
  • the terminal 130 refers to one or more terminal devices or software used by the user.
  • The terminal 130 may include an image capture device 131 (e.g., a camera or video camera), and the image capture device 131 may photograph the target object and acquire multiple target images.
  • The terminal 130 (e.g., the screen and/or other light-emitting elements of the terminal 130) may sequentially emit light of the multiple colors in the lighting sequence to illuminate the target object.
  • the terminal 130 may communicate with the processing device 110 through the network 120 and send the captured multiple target images to the processing device 110 .
  • the terminal 130 may be a mobile device 130-1, a tablet computer 130-2, a laptop computer 130-3, other devices with input and/or output capabilities, the like, or any combination thereof.
  • the above examples are only used to illustrate the broadness of the types of terminals 130 and not to limit the scope thereof.
  • The storage device 140 may be used to store data (e.g., lighting sequences, multiple initial images, or multiple target images) and/or instructions.
  • the storage device 140 may include one or more storage components, and each storage component may be an independent device or a part of other devices.
  • The storage device 140 may include random access memory (RAM), read-only memory (ROM), mass storage, removable memory, volatile read-write memory, or the like, or any combination thereof.
  • mass storage may include magnetic disks, optical disks, solid state disks, and the like.
  • storage device 140 may be implemented on a cloud platform.
  • cloud platforms may include private clouds, public clouds, hybrid clouds, community clouds, distributed clouds, internal clouds, multi-tier clouds, etc., or any combination thereof.
  • The storage device 140 may be integrated or included in one or more other components of the object recognition system 100 (e.g., the processing device 110, the terminal 130, or other possible components).
  • the object recognition system 100 may include a determination module, an acquisition module, and a verification module.
  • the determination module may be used to determine a lighting sequence for determining a plurality of colors of a plurality of lights illuminating the target object.
  • the acquisition module can be used to acquire multiple target images, and the shooting time of the multiple target images has a corresponding relationship with the irradiation time of the multiple lights.
  • the acquisition module may be configured to acquire multiple initial images, and preprocess the multiple initial images to obtain multiple target images.
  • The acquisition module may be used to acquire a color verification model, which is a machine learning model with preset parameters.
  • The preset parameters of the color verification model are obtained through training.
  • the verification module can be used to determine the authenticity of multiple target images based on the illumination sequence and multiple target images.
  • the plurality of colors includes at least one reference color and at least one verification color.
  • The relationship between the at least one reference color and the at least one verification color may take various forms. For example, each of the at least one verification color may be determined based on at least a portion of the at least one reference color; as another example, one or more of the at least one reference color may be the same as one or more of the at least one verification color.
  • The plurality of target images include at least one verification image and at least one reference image; each of the at least one verification image corresponds to one of the at least one verification color, and each of the at least one reference image corresponds to one of the at least one reference color.
  • The verification module may be configured to determine, based on the at least one reference image, the color of the lighting when the at least one verification image was taken, and to determine, based on the lighting sequence and the color of the lighting when the at least one verification image was taken, the authenticity of the plurality of target images.
  • The verification module may be configured to determine a first image sequence based on the multiple target images and a second image sequence based on multiple color template images, and to determine the authenticity of the multiple target images based on the first image sequence and the second image sequence.
  • multiple color template images are generated based on lighting sequences.
  • The verification module may be configured to determine a first color relationship between the at least one reference image and the at least one verification image and a second color relationship between the at least one reference color and the at least one verification color, and to determine the authenticity of the plurality of target images based on the first color relationship and the second color relationship.
  • The verification module may be used to determine the authenticity of the plurality of target images based on the lighting sequence and the color verification model. For example, the verification module processes the multiple target images with the color verification model to obtain processing results, and determines the authenticity of the multiple target images by combining the processing results with the lighting sequence.
  • The above description of the target recognition system and its modules is only for convenience of description and does not limit this specification to the scope of the illustrated embodiments. It can be understood that, for those skilled in the art, after understanding the principle of the system, the modules may be combined arbitrarily, or a subsystem may be formed and connected with other modules, without departing from this principle.
  • The determination module, the acquisition module, and the verification module disclosed in FIG. 1 may be different modules in one system, or one module may implement the functions of two or more of the above modules.
  • Each module may share one storage module, or each module may have its own storage module. Such variations are all within the protection scope of this specification.
  • The method for target recognition performed by the processing device of the target recognition system 100 may include: determining an illumination sequence, the illumination sequence being used to determine a plurality of colors of a plurality of illuminations illuminating a target object; acquiring a plurality of target images, where the shooting times of the plurality of target images correspond to the irradiation times of the plurality of illuminations; and determining, based on the illumination sequence and the plurality of target images, the authenticity of the plurality of target images.
  • In some embodiments, acquiring the plurality of target images by the processing device may include: acquiring a plurality of initial images, the plurality of initial images including a first initial image and a second initial image; replacing the texture of the second initial image with the texture of the first initial image to generate a processed second initial image; and using the processed second initial image as one of the plurality of target images.
  • In some embodiments, replacing, by the processing device, the texture of the second initial image with the texture of the first initial image to generate the processed second initial image may include: based on a color transfer algorithm, transferring the color of the illumination when the second initial image was captured to the first initial image to obtain the processed second initial image.
  • the plurality of colors include at least one reference color and at least one verification color, each of the at least one verification color being determined based on at least a portion of the at least one reference color. In some embodiments, one or more of the at least one reference color is the same as one or more of the at least one verification color.
  • The plurality of target images include at least one verification image and at least one reference image; each of the at least one verification image corresponds to one of the at least one verification color, and each of the at least one reference image corresponds to one of the at least one reference color.
  • In some embodiments, the processing device determining, based on the illumination sequence and the plurality of target images, the authenticity of the plurality of target images may include: extracting the reference color feature of the at least one reference image and the verification color feature of the at least one verification image; for each of the at least one verification image, determining, based on the verification color feature of the verification image and the reference color feature of the at least one reference image, the color of the illumination when the verification image was taken; and determining, based on the illumination sequence and the color of the illumination when the at least one verification image was taken, the authenticity of the plurality of target images.
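  • As a minimal sketch of this reference-based check, the snippet below uses the per-channel mean RGB as the color feature and nearest-neighbour matching against the reference images; the feature choice and the assumption that each verification color also appears among the reference colors are illustrative, not from the patent.

```python
import numpy as np

def mean_rgb(image):
    """Crude color feature: per-channel mean of an H x W x 3 uint8 array."""
    return image.reshape(-1, 3).mean(axis=0)

def predict_illumination_colors(reference_images, reference_colors,
                                verification_images):
    """For each verification image, return the reference color whose image
    feature is nearest in RGB feature space."""
    ref_feats = np.stack([mean_rgb(im) for im in reference_images])
    predicted = []
    for im in verification_images:
        dists = np.linalg.norm(ref_feats - mean_rgb(im), axis=1)
        predicted.append(reference_colors[int(dists.argmin())])
    return predicted

# The target images pass only if the predicted colors match the verification
# colors scheduled by the lighting sequence.
```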
  • In some embodiments, the processing device determining the authenticity of the plurality of target images based on the illumination sequence and the plurality of target images includes: determining a first image sequence based on the plurality of target images; determining a second image sequence based on multiple color template images, the multiple color template images being generated based on the illumination sequence; processing the first image sequence with a first extraction layer to extract first feature information of the first image sequence; processing the second image sequence with a second extraction layer to extract second feature information of the second image sequence; and processing the first feature information and the second feature information with a discriminant layer to determine the authenticity of the plurality of target images, wherein the first extraction layer, the second extraction layer, and the discriminant layer are machine learning models with preset parameters, and the first extraction layer and the second extraction layer share parameters.
  • In some embodiments, the preset parameters of the first extraction layer, the second extraction layer, and the discriminant layer are obtained through end-to-end training.
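  • The layered structure can be pictured as a siamese network: one shared extractor applied to both sequences, followed by a discriminant head. The PyTorch sketch below is a schematic stand-in under those assumptions; the patent does not specify the layer sizes used here.

```python
import torch
import torch.nn as nn

class SequenceVerifier(nn.Module):
    """Schematic stand-in: the first and second extraction layers share
    parameters, so a single extractor serves both image sequences."""
    def __init__(self):
        super().__init__()
        self.extract = nn.Sequential(            # shared extraction layer
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.discriminate = nn.Sequential(       # discriminant layer
            nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid(),
        )

    def forward(self, target_seq, template_seq):
        # target_seq / template_seq: (N, 3, H, W) stacks of the two sequences
        f1 = self.extract(target_seq).mean(dim=0)    # first feature information
        f2 = self.extract(template_seq).mean(dim=0)  # second feature information
        return self.discriminate(torch.cat([f1, f2]))  # authenticity in (0, 1)
```

  • Training such a model end to end means the extractor and the discriminant head are optimized jointly against a real/fake label, rather than trained as separate stages.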
  • In some embodiments, the plurality of colors include at least one reference color and at least one verification color, the plurality of target images include at least one reference image and at least one verification image, each of the at least one reference image corresponds to one of the at least one reference color, and each of the at least one verification image corresponds to one of the at least one verification color. In these embodiments, the processing device determining the authenticity of the multiple target images based on the lighting sequence and the plurality of target images includes: extracting the reference color feature of each of the at least one reference image and the verification color feature of each of the at least one verification image; for each of the at least one reference image, determining, based on the reference color feature of the reference image and the verification color feature of each verification image, a first color relationship between the reference image and each verification image; for each of the at least one reference color, determining a second color relationship between the reference color and each verification color; and determining the authenticity of the multiple target images based on the at least one first color relationship and the at least one second color relationship.
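  • A minimal sketch of this relationship check is shown below. It assumes the "color relationship" is the signed per-channel difference between two RGB features, which is one plausible choice; the patent leaves the exact relationship open.

```python
import numpy as np

def color_relationship(a, b):
    """Assumed relationship: signed per-channel difference of RGB features."""
    return np.asarray(a, dtype=float) - np.asarray(b, dtype=float)

def relationships_consistent(ref_feats, ver_feats, ref_colors, ver_colors,
                             tol=40.0):
    """Compare each first color relationship (measured between images) with
    the corresponding second color relationship (expected between the
    scheduled colors); tol is an illustrative tolerance."""
    for rf, rc in zip(ref_feats, ref_colors):
        for vf, vc in zip(ver_feats, ver_colors):
            first = color_relationship(rf, vf)    # from the captured images
            second = color_relationship(rc, vc)   # from the lighting sequence
            if np.linalg.norm(first - second) > tol:
                return False                      # mismatch: likely a fake
    return True
```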
  • In some embodiments, the processing device determining the authenticity of the multiple target images based on the illumination sequence and the multiple target images includes: acquiring a color verification model, where the color verification model is a machine learning model with preset parameters; and, based on the illumination sequence, processing the multiple target images using the color verification model to determine the authenticity of the multiple target images.
  • FIG. 2 is an exemplary flowchart of a method for object recognition according to some embodiments of the present specification. As shown in Figure 2, the process 200 includes the following steps:
  • Step 210: determine the lighting sequence.
  • the lighting sequence is used to determine multiple colors of multiple lights illuminating the target object.
  • step 210 may be performed by a determination module.
  • the target object refers to an object that needs to be identified.
  • the target object may be a specific body part of the user, such as face, fingerprint, palm print, or pupil.
  • the target object refers to the face of a user who needs to be authenticated and/or authenticated.
  • For example, a ride-hailing platform needs to verify whether the driver taking an order is a registered driver reviewed by the platform; in this case, the target object is the driver's face.
  • For another example, a payment system needs to verify the payment authority of the payer; in this case, the target object is the payer's face.
  • the terminal is instructed to emit the illumination sequence.
  • the lighting sequence includes a plurality of lighting for illuminating the target object.
  • the colors of different lights in the lighting sequence may be the same or different.
  • the plurality of lights include at least two lights with different colors, that is, the plurality of lights have multiple colors.
  • determining a lighting sequence refers to determining information for each lighting in a plurality of lightings included in the lighting sequence, such as color information, lighting time, and the like.
  • the color information of multiple lights in the lighting sequence may be represented in the same or different manners.
  • the color information of the plurality of lights may be represented by color categories.
  • the colors of the multiple lights in the lighting sequence may be represented as red, yellow, green, purple, cyan, blue, and red.
  • the color information of the plurality of lights may be represented by color parameters.
  • For example, the colors of multiple lights in the lighting sequence can be represented as RGB(255, 0, 0), RGB(255, 255, 0), RGB(0, 255, 0), RGB(255, 0, 255), RGB(0, 255, 255), RGB(0, 0, 255).
  • The lighting sequence, which may also be referred to as a color sequence, contains the color information of the plurality of lights.
  • the illumination times of the plurality of illuminations in the illumination sequence may include the start time, end time, duration, etc., or any combination thereof, for each illumination plan to illuminate the target object.
  • the start time of illuminating the target object with red light is 14:00
  • the start time of illuminating the target object with green light is 14:02.
  • the durations for which the red light and the green light illuminate the target object are both 0.1 seconds.
  • the durations for different illuminations to illuminate the target object may be the same or different.
  • the irradiation time can be expressed in other ways, which will not be repeated here.
  • the terminal may sequentially emit multiple illuminations in a particular order.
  • the terminal may emit light through the light emitting element.
  • the light-emitting element may include a light-emitting element built in the terminal, for example, a screen, an LED light, and the like.
  • the light-emitting element may also include an externally-connected light-emitting element. For example, external LED lights, light-emitting diodes, etc.
  • In some embodiments, when the terminal is hijacked or attacked, the terminal may receive an instruction to emit light but not actually emit the light. For more details about the lighting sequence, please refer to FIG. 3 and FIG. 4 and their related descriptions, which will not be repeated here.
  • a terminal or processing device may generate a lighting sequence randomly or based on preset rules. For example, a terminal or processing device may randomly select a plurality of colors from a color library to generate a lighting sequence.
  • the lighting sequence may be set by the user at the terminal, determined according to the default settings of the object recognition system 100, or determined by the processing device through data analysis (eg, using a determination model), or the like.
  • the terminal or storage device may store the lighting sequence.
  • the obtaining module can obtain the lighting sequence from the terminal or the storage device through the network.
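  • As a concrete illustration of the data involved, the sketch below pairs each light with RGB color parameters, a planned start time, and a duration, and draws a random sequence from a small color library; the library contents and timing values are assumptions, not values from the patent.

```python
import random
from dataclasses import dataclass

# Hypothetical color library; any palette could be used.
COLOR_LIBRARY = {
    "red":    (255, 0, 0),
    "green":  (0, 255, 0),
    "blue":   (0, 0, 255),
    "yellow": (255, 255, 0),
    "purple": (255, 0, 255),
    "cyan":   (0, 255, 255),
}

@dataclass
class Illumination:
    rgb: tuple         # color parameters, e.g. (255, 0, 0)
    start_s: float     # planned start, seconds from sequence start
    duration_s: float  # e.g. 0.1 s, as in the example above

def random_lighting_sequence(n=6, duration_s=0.1):
    """Randomly pick n colors from the library (repeats allowed)."""
    names = random.choices(list(COLOR_LIBRARY), k=n)
    return [Illumination(COLOR_LIBRARY[c], i * duration_s, duration_s)
            for i, c in enumerate(names)]
```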
  • Step 220: acquire multiple target images.
  • step 220 may be performed by an acquisition module.
  • the plurality of target images are images used for target recognition.
  • The formats of the multiple target images may include Joint Photographic Experts Group (JPEG), Tagged Image File Format (TIFF), Graphics Interchange Format (GIF), Kodak FlashPix (FPX), Digital Imaging and Communications in Medicine (DICOM), etc.
  • The multiple target images may be two-dimensional (2D) images or three-dimensional (3D) images.
  • The acquisition module may acquire the multiple target images via the terminal. For example, the acquisition module may send an acquisition instruction to the terminal through the network, and then receive the multiple target images sent back by the terminal. Alternatively, the terminal may send the multiple target images to a storage device for storage, and the acquisition module may acquire them from the storage device. A target image may or may not contain the target object.
  • The target image may be captured by an image acquisition device of the terminal, or may be determined based on data (e.g., video or images) uploaded by the user.
  • the target recognition system 100 will issue a lighting sequence to the terminal.
  • the terminal may sequentially emit the plurality of illuminations according to the illumination sequence.
  • When the terminal emits each illumination, its image acquisition device may be instructed to acquire one or more images within the illumination time of that illumination.
  • the image capture device of the terminal may be instructed to capture video during the entire illumination period of the plurality of illuminations.
  • the terminal or other computing device may intercept one or more images collected during the illumination time of each illumination from the video according to the illumination time of each illumination.
  • One or more images collected by the terminal during the irradiation time of each illumination may be used as the multiple target images.
  • the multiple target images are real images captured by the target object when it is illuminated by the multiple illuminations. It can be understood that there is a corresponding relationship between the irradiation time of the multiple lights and the shooting time of the multiple target images. If one image is collected within the irradiation time of a single light, the corresponding relationship is one-to-one; if multiple images are collected within the irradiation time of a single light, the corresponding relationship is one-to-many.
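  • The correspondence can be realized by bucketing images into each light's time window, as in the sketch below; the shared-clock timestamping is an assumption.

```python
def images_for_illumination(start_s, duration_s, timestamped_images):
    """Collect the images whose capture timestamps fall inside one
    illumination's window. timestamped_images holds (timestamp_s, image)
    pairs assumed to share the terminal's clock."""
    end_s = start_s + duration_s
    return [im for t, im in timestamped_images if start_s <= t < end_s]

# One image in the window gives a one-to-one correspondence;
# several images give a one-to-many correspondence.
```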
  • the hijacker can upload images or videos through the terminal device.
  • the uploaded image or video may contain target objects or specific body parts of other users, and/or other objects.
  • the uploaded image or video may be a historical image or video shot by the terminal or other terminals, or a synthesized image or video.
  • The terminal or other computing device (e.g., the processing device 110) may determine the plurality of target images based on the uploaded image or video.
  • the hijacked terminal may extract one or more images corresponding to each illumination from the uploaded image or video according to the illumination sequence and/or illumination duration of each illumination in the illumination sequence.
  • the illumination sequence includes five illuminations arranged in sequence, and the hijacker can upload five target images through the terminal device.
  • the terminal or other computing device will determine the target image corresponding to each of the five illuminations according to the sequence in which the five target images are uploaded.
  • For example, the irradiation times of the five lights in the lighting sequence are each 0.5 seconds, and the hijacker can upload a video with a duration of 2.5 seconds through the terminal.
  • The terminal or other computing device can divide the uploaded video into five segments covering 0-0.5 seconds, 0.5-1 seconds, 1-1.5 seconds, 1.5-2 seconds, and 2-2.5 seconds, and intercept one target image from each segment.
  • the five target images captured from the video correspond to the five illuminations in sequence.
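  • The splitting step can be sketched with OpenCV as below, grabbing one frame from the middle of each per-illumination window; the frame-selection policy (window midpoint) is an assumption for illustration.

```python
import cv2

def frames_per_illumination(video_path, n_lights=5, light_duration_s=0.5):
    """Split an uploaded video into equal per-illumination windows and
    grab one frame from the middle of each window."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    frames = []
    for i in range(n_lights):
        mid_s = (i + 0.5) * light_duration_s   # 0.25 s, 0.75 s, ...
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(mid_s * fps))
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames
```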
  • the multiple target images are fake images uploaded by the hijacker, rather than the real images taken by the target object when illuminated by the multiple lights.
  • In this case, the upload time of a target image, or its position in the uploaded video, may be regarded as its shooting time. It can be understood that, even when the terminal is hijacked, there is still a correspondence between the irradiation times of the multiple lights and the shooting times of the multiple target images.
  • the determining module may use the color of the light corresponding to the irradiation time in the light sequence and the shooting time of the target image as the color corresponding to the target image. Specifically, if the irradiation time of the light corresponds to the shooting time of one or more target images, the color of the light is used as the color corresponding to the one or more target images. It can be understood that when the terminal is not hijacked or attacked, the colors corresponding to the multiple target images should be the same as the multiple colors of the multiple lights in the lighting sequence. For example, the multiple colors of multiple lights in the lighting sequence are "red, yellow, blue, green, purple, and red".
  • When the terminal is not hijacked, the colors corresponding to the multiple target images obtained by the terminal should also be "red, yellow, blue, green, purple, red".
  • When the terminal is hijacked or attacked, the colors corresponding to the multiple target images may differ from the multiple colors of the lights in the lighting sequence.
  • the acquiring module may acquire multiple initial images from the terminal, and preprocess the multiple initial images to acquire the multiple target images.
  • the multiple initial images may be photographed by the terminal or uploaded by the hijacker through the terminal.
  • There is a corresponding relationship between the shooting times of the multiple initial images and the irradiation times of the multiple lights. If the multiple target images are obtained by preprocessing multiple initial images, the correspondence between the shooting times of the multiple target images and the irradiation times of the multiple lights actually reflects the correspondence between the shooting times of the corresponding initial images and the irradiation times of the multiple lights; likewise, the color of the light when a target image was shot actually reflects the color of the light when its corresponding initial image was shot.
  • the preprocessing may include texture uniform processing.
  • the texture of an image refers to the grayscale distribution of elements (such as pixels) in the image and their surrounding spatial neighborhoods. It can be understood that, if the multiple initial images are captured by the terminal, the textures of the multiple initial images may be different because the distance, angle, and background of the terminal and the target object may change.
  • the texture unification processing can make the textures of the multiple initial images the same or substantially the same, reduce the interference of texture features, and thus improve the efficiency and accuracy of target recognition.
  • the acquisition module may implement texture uniform processing through texture replacement.
  • Texture replacement refers to replacing all the textures of the original image with the textures of the specified image.
  • the specified image may be one of multiple initial images, that is, the acquisition module may replace the texture of one of the multiple initial images with the texture of other initial images to achieve texture consistency.
  • the designated image may be an image of the target object other than the plurality of initial images. For example, the images of the target object taken in the past and stored in the storage device.
  • For more details about texture replacement, reference may be made to FIG. 5 and its related descriptions, which will not be repeated here.
  • the acquisition module may implement texture uniformity processing by means of background culling, shooting angle correction, and the like. For example, taking the target object as the target face as an example, the parts other than the face in the multiple initial images are cut out, and then the angle of the face in the remaining part is corrected to a preset angle (for example, the face is facing the image collection equipment, etc.).
  • the background cutout may identify the face contour of each of the plurality of initial images through image recognition technology, and then cut out the part other than the face contour.
  • The angle correction can be achieved by a correction algorithm (e.g., a face correction algorithm) or a model.
  • the acquisition module may also implement texture uniform processing in other ways, which is not limited herein.
  • preprocessing may also include image screening, image denoising, image enhancement, and the like.
  • Image screening may include screening out images that do not include the target object or specific body parts of the user.
  • the object to be screened may be the initial image collected by the terminal, or may be an image obtained by the initial image after other preprocessing (for example, texture uniform processing).
  • the acquisition module may perform matching based on the characteristics of the initial image and the characteristics of the image containing the target object, and filter out the images that do not contain the target object in the plurality of initial images.
  • Image denoising may include removing interfering information in an image.
  • the interference information in the image will not only reduce the quality of the image, but also affect the color features extracted based on the image.
  • the acquisition module may implement image denoising through median filters, machine learning models, and the like.
  • Image enhancement can add missing information to an image. Missing information in the image can cause blur and also affect the color features extracted from the image. For example, image enhancement can adjust the brightness, contrast, saturation, and hue of an image, increase its sharpness, reduce noise, etc.
  • the acquisition module may implement image enhancement through smoothing filters, median filters, or the like.
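  • A minimal sketch of these two steps, assuming OpenCV: a median filter for denoising followed by a simple contrast/brightness adjustment for enhancement; the kernel size and gain values are illustrative.

```python
import cv2

def denoise_and_enhance(image):
    """Median-filter the image, then apply a mild linear contrast (alpha)
    and brightness (beta) adjustment."""
    denoised = cv2.medianBlur(image, 3)  # 3x3 median filter
    return cv2.convertScaleAbs(denoised, alpha=1.2, beta=10)
```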
  • The object of image denoising or image enhancement can be the initial image, or the initial image after other preprocessing.
  • the preprocessing may also include other operations, which are not limited herein.
  • the object recognition system 100 may further include a preprocessing module for preprocessing the initial image.
  • Step 230: determine, based on the illumination sequence and the multiple target images, the authenticity of the multiple target images.
  • step 230 may be performed by a verification module.
  • the authenticity of the multiple target images may reflect whether the multiple target images are images obtained by shooting the target object under illumination of multiple colors of light. For example, when the terminal is not hijacked or attacked, its light-emitting element can emit light of multiple colors, and its image acquisition device can record or photograph the target object to obtain the target image. At this time, the target image has authenticity. For another example, when the terminal is hijacked or attacked, the target image is obtained based on the image or video uploaded by the attacker. At this time, the target image has no authenticity.
  • When the multiple target images are obtained by preprocessing multiple initial images, the authenticity of the multiple target images can also be referred to as the authenticity of the multiple initial images, which reflects whether the multiple initial images corresponding to the multiple target images are images obtained by shooting the target object under the illumination of multiple colors of light.
  • the authenticity of the multiple target images and the authenticity of the multiple initial images are collectively referred to as the authenticity of the multiple target images below.
  • the authenticity of the target image can be used to determine whether the terminal's image capture device has been hijacked by an attacker. For example, if at least one target image in the multiple target images is not authentic, it means that the image acquisition device is hijacked. For another example, if more than a preset number of target images in the multiple target images are not authentic, it means that the image acquisition device is hijacked.
  • the verification module may determine the authenticity of the plurality of target images based on color characteristics and lighting sequences of the plurality of target images. For more details about determining the authenticity of the target image based on the color feature of the target image, reference may be made to FIG. 7 and related descriptions thereof, which will not be repeated here.
  • the color feature of an image refers to information related to the color of the image.
  • the color of the image includes the color of the light when the image is captured, the color of the subject in the image, the color of the background in the image, and the like.
  • the color features may include deep features and/or complex features extracted by a neural network.
  • Color features can be represented in a number of ways.
  • the color feature can be represented based on the color value of each pixel in the image in the color space.
  • a color space is a mathematical model that describes color using a set of numerical values, each of which can represent the color value of a color feature on each color channel of the color space.
  • a color space may be represented as a vector space, each dimension of the vector space representing a color channel of the color space. Color features can be represented by vectors in this vector space.
  • The color space may include, but is not limited to, RGB color space, lαβ color space, LMS color space, HSV color space, YCrCb color space, HSL color space, and the like.
  • the RGB color space includes red channel R, green channel G, and blue channel B, and color features can be represented by the color values of each pixel in the image on the red channel R, green channel G, and blue channel B, respectively.
  • Color features may be represented by other means (e.g., color histograms, color moments, color sets, etc.).
  • the histogram statistics are performed on the color values of each pixel in the image in the color space to generate a histogram representing the color features.
  • For another example, a specific operation (e.g., mean, variance, etc.) is performed on the color values of the pixels of the image in the color space, and the result of the operation represents the color feature of the image.
  • the verification module may extract color features of the plurality of target images through a color feature extraction algorithm and/or a color verification model (or a portion thereof).
  • Color feature extraction algorithms include: color histogram, color moment, color set, etc.
  • For example, the verification module can compute a histogram of the color values of each pixel in each color channel of the color space to obtain the color histogram.
  • For another example, the verification module can divide the image into multiple regions and determine the color set of the image as a set of binary indices over those regions, established from the color values of the pixels in each color channel of the color space.
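  • The two extraction schemes above can be sketched in a few lines of NumPy; the bin count and the use of only the first two moments are illustrative choices.

```python
import numpy as np

def color_histogram(image, bins=16):
    """Per-channel histogram of an H x W x 3 uint8 image."""
    return np.stack([np.histogram(image[..., c], bins=bins,
                                  range=(0, 256))[0] for c in range(3)])

def color_moments(image):
    """First two color moments: per-channel mean and standard deviation."""
    px = image.reshape(-1, 3).astype(float)
    return px.mean(axis=0), px.std(axis=0)
```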
  • For details of these models, see Figures 10, 13 and 15 and their related descriptions.
  • the verification module may process multiple target images based on the lighting sequence and use a color verification model, which is a machine learning model with preset parameters, to determine the authenticity of the multiple target images.
  • a color verification model which is a machine learning model with preset parameters
  • the target recognition system 100 sends a lighting sequence to the terminal, and acquires from the terminal a target image that has a corresponding relationship with multiple lightings in the lighting sequence.
  • The processing device can determine whether a target image or its corresponding initial image is an image captured of the target object under the illumination sequence by identifying the color of the illumination when the target image was captured, and can further determine whether the terminal has been hijacked or attacked. It is understandable that, when an attacker does not know the lighting sequence, the illumination colors at the capture times of the uploaded images (or of the frames in an uploaded video) are unlikely to match the colors of the multiple lights in the lighting sequence; even if the same set of colors appears, the order of the colors is unlikely to match.
  • Therefore, the method disclosed in this specification increases the difficulty of an attack and ensures the security of target recognition.
  • FIG. 3 is a schematic diagram of a lighting sequence according to some embodiments of the present specification.
  • the colors of the multiple lights in the lighting sequence may be exactly the same, completely different, or partially the same.
  • the colors of the plurality of lights are all red.
  • at least two of the plurality of lights have different colors, that is, the plurality of lights have multiple colors.
  • the plurality of colors includes white.
  • the plurality of colors includes red, blue, and green.
  • illumination sequence a includes four illuminations arranged in sequence: red light, white light, blue light, and green light;
  • illumination sequence b includes four illuminations arranged in sequence: white light, blue light, red light, and green light;
  • illumination sequence c includes four illuminations arranged in sequence: red light, white light, blue light, and white light;
  • illumination sequence d includes four illuminations arranged in sequence: red light, white light, white light, and blue light.
  • Lighting sequences a and b contain lights of the same colors but arranged in a different order.
  • Lighting sequences c and d likewise contain lights of the same colors but arranged in a different order.
  • Within sequences a and b, the colors of the four lights are all different from one another, while within sequences c and d, two of the lights share the same color.
  • FIG. 4 is another schematic diagram of a lighting sequence according to some embodiments of the present specification.
  • the plurality of colors of lighting in the lighting sequence may include at least one reference color and at least one verification color.
  • A verification color is a color, among the multiple colors, that is directly used to verify the authenticity of the target images.
  • A reference color is a color, among the multiple colors, that is used to assist the verification colors in determining the authenticity of the target images.
  • The target image corresponding to a reference color is also referred to as a reference image, and the target image corresponding to a verification color is also referred to as a verification image.
  • In some embodiments, the verification module may determine the authenticity of the plurality of target images based on the color of the illumination when a verification image was captured.
  • In some embodiments, the reference images may be used to verify the verification images, for example, to determine the first color relationship.
  • the verification module may determine the authenticity of the plurality of target images based on the first color relationship.
  • As shown in FIG. 4, illumination sequence e contains illumination of multiple reference colors ("red light, green light, blue light") and illumination of multiple verification colors ("yellow light, purple light, ..., cyan light");
  • illumination sequence f contains illumination of multiple reference colors ("red light, white light, ..., blue light") and illumination of multiple verification colors ("red light, ..., green light").
  • In some embodiments, there are multiple verification colors.
  • The plurality of verification colors may be identical; for example, the verification colors can be red, red, red, red.
  • The multiple verification colors may be completely different; for example, red, yellow, blue, green, purple.
  • The plurality of verification colors may also be partially the same; for example, yellow, green, purple, yellow, red.
  • there are multiple reference colors and the multiple reference colors may be identical, completely different, or partially identical.
  • the verification color may contain only one color, such as green.
  • the at least one reference color and the at least one verification color may be determined according to a default setting of the object recognition system 100, manually set by a user, or determined by a determination module.
  • the determination module may randomly select a reference color and a verification color.
  • the determination module may randomly select a part of the colors from the plurality of colors as the at least one reference color, and the remaining colors as the at least one verification color.
  • the determination module may determine the at least one reference color and the at least one verification color based on a preset rule.
  • the preset rules may be rules regarding the relationship between verification colors, the relationship between reference colors, and/or the relationship between verification colors and reference colors, and the like.
  • For example, a preset rule may be that each verification color can be generated by fusing reference colors, and so on.
  • each of the at least one verification color may be determined based on at least a portion of the at least one reference color.
  • the verification color may be obtained by fusion based on at least a part of the at least one reference color.
  • In some embodiments, the at least one reference color may comprise the primary or base colors of a color space.
  • the at least one reference color may include the three primary colors of the RGB space, ie, "red, green, and blue".
  • multiple verification colors "yellow, purple...cyan” in the lighting sequence e can be determined based on three reference colors "red, green, blue”.
  • For example, "yellow" can be obtained by fusing the reference colors "red, green, blue" in a first ratio, and "purple" can be obtained by fusing the reference colors "red, green, blue" in a second ratio.
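  • The fusion can be pictured as a weighted mix of the reference primaries, as sketched below; the specific ratios are hypothetical, since the patent does not state them.

```python
import numpy as np

PRIMARIES = np.array([[255, 0, 0],    # red
                      [0, 255, 0],    # green
                      [0, 0, 255]])   # blue

def fuse(ratios):
    """Mix the reference primaries with the given ratios (summing to 1)."""
    mixed = np.asarray(ratios) @ PRIMARIES
    return tuple(np.clip(mixed, 0, 255).astype(int))

yellow = fuse([0.5, 0.5, 0.0])   # about (127, 127, 0), a yellow
purple = fuse([0.5, 0.0, 0.5])   # about (127, 0, 127), a purple
```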
  • one or more of the at least one reference color is the same as one or more of the at least one verification color.
  • the at least one reference color and the at least one verification color may be completely identical or partially identical.
  • a certain one of the at least one verification color may be the same as a certain one of the at least one reference color.
  • In this case, the verification color is still determined based on at least one reference color; that is, a specific reference color is used directly as a verification color. As shown in FIG. 4, in illumination sequence f, the multiple reference colors "red, white, ..., blue" and the multiple verification colors "red, ..., green" both contain red.
  • the at least one reference color and the at least one verification color may also have other relationships, which are not limited herein.
  • the at least one reference color and the at least one verification color are the same or different in color family.
  • For example, the at least one reference color may belong to a warm color family (e.g., red, yellow, etc.), while the at least one verification color may belong to a cool color family (e.g., gray, etc.).
  • the lighting corresponding to the at least one reference color may be arranged in front of or behind the lighting corresponding to the at least one verification color.
  • For example, in illumination sequence e, the illuminations of the multiple reference colors "red light, green light, blue light" are arranged before the illuminations of the multiple verification colors "yellow light, purple light, ..., cyan light".
  • In illumination sequence f, the illuminations of the multiple reference colors "red light, white light, ..., blue light" are arranged after the illuminations of the multiple verification colors "red light, ..., green light".
  • the illumination corresponding to the at least one reference color may also be arranged at intervals with the illumination corresponding to the at least one verification color, which is not limited herein.
  • FIG. 5 is an exemplary flowchart of acquiring multiple target images according to some embodiments of the present specification.
  • process 500 may be performed by an acquisition module. As shown in Figure 5, the process 500 includes the following steps:
  • Step 510: acquire multiple initial images.
  • the plurality of initial images are unprocessed images acquired from the terminal.
  • the multiple initial images may be images captured by an image acquisition device of the terminal, or may be images determined by the hijacked terminal based on images or videos uploaded by the hijacker.
  • the plurality of initial images may include a first initial image and a second initial image.
  • the acquisition module may perform texture replacement on the plurality of initial images to generate the plurality of target images described in step 220 .
  • the acquisition module may replace the texture of the plurality of initial images with the texture in the specified image.
  • the first initial image refers to a designated image among the plurality of initial images, that is, the initial image that provides the texture for replacement.
  • the first initial image needs to contain the target object.
  • the acquisition module may acquire a first initial image including the target object from the plurality of initial images through image filtering.
  • the first initial image may be any one of multiple initial images.
  • the first initial image may be the one with the earliest shooting time among the plurality of initial images.
  • the first initial image may be the one with the simplest background among the plurality of initial images.
  • the simplicity of the background may be judged by the variety of colors in the background: the fewer the colors, the simpler the background.
  • the simplicity of the background may also be judged by the complexity of the lines in the background: the fewer the lines, the simpler the background.
  • white light may be present in the lighting sequence.
  • in that case, the first initial image may be the initial image whose acquisition time corresponds to the white-light illumination time.
  • the second initial image refers to an initial image, among the plurality of initial images, whose texture is to be replaced.
  • the second initial image may be any initial image other than the first initial image.
  • the second initial image may be one or more images.
  • the terminal may acquire a corresponding initial image according to the illumination time of the illumination in the illumination sequence.
  • the acquisition module may acquire the plurality of initial images from the terminal through the network.
  • the hijacker can upload images or videos through the terminal device.
  • the acquisition module may determine the plurality of initial images based on the uploaded images or videos.
  • Step 520 replacing the texture of the second initial image with the texture of the first initial image to generate a processed second initial image.
  • the acquisition module may implement texture replacement based on a color transfer algorithm. Specifically, the acquisition module can transfer the color of the illumination when the second initial image is captured to the first initial image based on the color transfer algorithm, so as to generate the processed second initial image.
  • a color transfer algorithm is a method of transferring the color of one image to another image to generate a new image.
  • Color transfer algorithms include, but are not limited to, the Reinhard algorithm, the Welsh algorithm, fuzzy clustering algorithms, adaptive transfer algorithms, etc.
  • the color transfer algorithm may extract color features of the second initial image, and then transfer the color features of the second initial image to the first initial image to generate a processed second initial image.
  • For details of color features, please refer to step 230 and its related description.
  • See also FIG. 6 and its related descriptions, which will not be repeated here.
  • the acquisition module can transfer the color features of the illumination when the second initial image is captured to the first initial image based on the color transfer algorithm; the color features of the newly generated image are still those of the second initial image, but the texture becomes the texture of the first initial image.
  • when there are N second initial images, N being an integer greater than or equal to 1, the color features of the illumination when each of the N second initial images was captured are transferred to the first initial image, and N new images can be obtained.
  • the color features of the N newly generated images respectively represent the colors of the illumination when the N second initial images were taken, but the textures of the N newly generated images are all the texture of the first initial image.
  • the acquisition module may also implement texture replacement using a texture feature migration algorithm.
  • the texture feature migration algorithm can extract the texture features of the first initial image and the texture features of the second initial image, and replace the texture features of the second initial image with those of the first initial image to generate the processed second initial image.
  • methods for extracting texture features may include, but are not limited to, geometric methods, gray level co-occurrence matrix methods, model methods, signal processing methods, and machine learning models.
  • the machine learning model may include, but is not limited to, a deep neural network model, a recurrent neural network model, a custom model structure, and the like, which is not limited here.
  • Step 530 taking the processed second initial image as one of the multiple target images.
  • the color of the illumination when the processed second initial image is captured is the same as that of the second initial image, but the texture features come from the first initial image. If the colors of the light when the first initial image and the second initial image are photographed are different, the first initial image and the processed second initial image may be two images with the same content and different colors.
  • the plurality of initial images includes one or more second initial images. For each second initial image, the acquiring module may replace the texture of the second initial image with the texture of the first initial image to generate a corresponding processed second initial image. Optionally, the acquiring module may also use the first initial image as one of the multiple target images. At this time, the plurality of target images include the first initial image and one or more processed second initial images.
  • the textures in the multiple target images are made the same through texture unification processing, thereby reducing the influence of the textures in the target images on the light color recognition, and better determining the authenticity of the multiple target images.
  • FIG. 6 is a schematic diagram of texture replacement according to some embodiments of the present specification.
  • the acquisition module may select the initial image 610-1 as the first initial image, and the initial images 610-2, 610-3, ..., 610-m as the second initial images.
  • in addition to color and texture, each second initial image may differ from the first initial image in other respects.
  • the location of the target object in the second initial image 610-m is different from that in the first initial image 610-1.
  • the shooting backgrounds of the target objects in the second initial images 610-2, 610-3..., 610-m are all different from those in the first initial image 610-1.
  • the texture differences among the initial images 610-1, 610-2, 610-3, ..., 610-m may lead to low accuracy of the image authenticity judgment and increase the amount of data analysis.
  • the second initial image can be preprocessed by using a color migration algorithm.
  • the acquisition module extracts the color features of the m-1 second initial images 610-2, 610-3, ..., 610-m respectively (that is, the color features corresponding to red, orange, ..., cyan).
  • the acquisition module respectively transfers the color features of the m-1 second initial images to the first initial image 610-1, and generates m-1 processed second initial images 620-2, 620-3, ..., 620-m.
  • the processed second initial image incorporates the texture feature of the first initial image and the color feature of the second initial image, which is equivalent to an image obtained by replacing the texture of the second initial image with the texture of the first initial image.
  • the first initial image and the second initial image are RGB images.
  • the acquisition module may first convert the first initial image and the second initial image from the RGB color space to the Lαβ color space.
  • the acquisition module may convert the target image (e.g., the first initial image or the second initial image) from the RGB color space to the Lαβ color space through a neural network.
  • alternatively, the acquisition module may first convert the target image from the RGB color space to the LMS color space based on multiple transition matrices, and then convert it from the LMS color space to the Lαβ color space.
  • the acquisition module can extract the color features of the transformed second initial image and the transformed first initial image in the Lαβ color space.
  • the acquisition module may calculate the mean μ_2j and the standard deviation σ_2j of all the pixels of the transformed second initial image on each Lαβ channel.
  • j denotes the color channel index in the Lαβ color space, 0 ≤ j ≤ 2; j equal to 0, 1, and 2 corresponds to the luminance channel L, the yellow-blue channel α, and the red-green channel β, respectively.
  • the acquisition module can likewise calculate the mean μ_1j and the standard deviation σ_1j of all the pixels of the transformed first initial image on each Lαβ channel.
  • the acquisition module can transfer the color feature of the transformed second initial image to the transformed first initial image; for example, the value of each pixel of the transformed first initial image may first be updated by subtracting the mean μ_1j of the corresponding Lαβ channel.
  • the acquisition module can then multiply the updated value of each pixel in each Lαβ channel by the scaling factor of that channel (e.g., σ_2j/σ_1j), and add the mean μ_2j of the transformed second initial image in the corresponding Lαβ channel, to generate the processed second initial image.
  • the acquisition module may also convert the processed second initial image from the Lαβ color space back to the RGB color space.
  • Some embodiments of this specification transfer the color features of the second initial image to the first initial image based on the color transfer algorithm, which not only avoids extracting complex texture features, but also enables the processed second initial image to contain more detailed and accurate color feature information, thereby improving the efficiency and accuracy of determining the authenticity of the target image.
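  • as a concrete illustration of the Lαβ transfer described above, the following is a minimal sketch using the RGB↔LMS↔Lαβ matrices published with the Reinhard algorithm; it illustrates the per-channel mean/standard-deviation transfer under those assumptions rather than reproducing the exact implementation of the embodiments:

```python
import numpy as np

# Reinhard et al. RGB<->LMS matrices and the log-space Lalphabeta transform.
RGB2LMS = np.array([[0.3811, 0.5783, 0.0402],
                    [0.1967, 0.7244, 0.0782],
                    [0.0241, 0.1288, 0.8444]])
LMS2RGB = np.array([[ 4.4679, -3.5873,  0.1193],
                    [-1.2186,  2.3809, -0.1624],
                    [ 0.0497, -0.2439,  1.2045]])
LMS2LAB = (np.diag([1 / np.sqrt(3), 1 / np.sqrt(6), 1 / np.sqrt(2)])
           @ np.array([[1, 1, 1], [1, 1, -2], [1, -1, 0]]))
LAB2LMS = (np.array([[1, 1, 1], [1, 1, -1], [1, -2, 0]])
           @ np.diag([np.sqrt(3) / 3, np.sqrt(6) / 6, np.sqrt(2) / 2]))

def rgb_to_lab(img):
    """img: H x W x 3 float RGB in (0, 1]."""
    lms = img.reshape(-1, 3) @ RGB2LMS.T
    return (np.log10(np.maximum(lms, 1e-6)) @ LMS2LAB.T).reshape(img.shape)

def lab_to_rgb(lab):
    lms = 10.0 ** (lab.reshape(-1, 3) @ LAB2LMS.T)
    return np.clip(lms @ LMS2RGB.T, 0.0, 1.0).reshape(lab.shape)

def transfer_color(first_img, second_img):
    """Keep the texture of `first_img` while taking on the illumination
    color of `second_img`, by matching per-channel mean/std in Lalphabeta."""
    src, ref = rgb_to_lab(first_img), rgb_to_lab(second_img)
    mu1, sigma1 = src.reshape(-1, 3).mean(0), src.reshape(-1, 3).std(0)
    mu2, sigma2 = ref.reshape(-1, 3).mean(0), ref.reshape(-1, 3).std(0)
    # Subtract mu_1j, scale by sigma_2j / sigma_1j, then add mu_2j.
    out = (src - mu1) * (sigma2 / np.maximum(sigma1, 1e-6)) + mu2
    return lab_to_rgb(out)
```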
  • FIG. 7 is a flowchart of determining the authenticity of a target image based on color features according to some embodiments of the present specification.
  • process 700 may be performed by a verification module. As shown in FIG. 7, the process 700 may include the following steps:
  • Step 710 extracting color features of multiple target images.
  • See step 230 and its related description for more details on color features.
  • Step 720 Determine the authenticity of the multiple target images based on the color features and lighting sequences of the multiple target images.
  • the verification module may determine the color of the illumination when the target image was captured based on the color feature of the target image, and then determine the color corresponding to the target image based on the illumination sequence. Further, the verification module can determine the authenticity of the target image.
  • the verification module can form a new color space (ie, the reference color space in FIG. 9 ) based on the reference color feature of the at least one reference image. Further, the verification module may determine, based on the new color space and verification color features of the verification image, the color of the illumination when the verification image is captured. Further, the verification module may determine the authenticity of the verification image in combination with the color corresponding to the verification image. For determining the authenticity of the target image based on the reference color space, reference may be made to FIG. 9 , FIG. 11 and their related descriptions, and details are not repeated here.
  • the verification module may determine the color relationship between the multiple target images based on their color features, and then determine the authenticity of the multiple target images based on that color relationship and the color relationship between the multiple colors of illumination in the illumination sequence. Regarding the determination of the authenticity of the target image based on the color relationship, reference may be made to FIG. 12 and its related description, which will not be repeated here.
  • the verification module may determine the matching degree between the color feature of the target image and the color feature of the corresponding illumination. Further, the verification module may determine the authenticity of the target image based on the matching degree. For example, if the matching degree between the color feature of the target image and the color feature of the corresponding illumination is greater than a preset threshold, the target image is authentic.
  • the matching degree may be determined based on the similarity between the color feature of the target image and the color feature of the illumination; the similarity can be measured by Euclidean distance, Manhattan distance, etc., as in the sketch below.
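  • for instance, a minimal sketch of a distance-based matching degree; the feature representation and the threshold value are illustrative assumptions:

```python
import numpy as np

def matching_degree(image_color_feature, light_color_feature):
    """Map a Euclidean distance between color features into a (0, 1] score."""
    distance = np.linalg.norm(image_color_feature - light_color_feature)
    return 1.0 / (1.0 + distance)

def is_authentic(image_color_feature, light_color_feature,
                 preset_threshold=0.8):
    """Target image is authentic if the matching degree exceeds the preset
    threshold (0.8 is a placeholder value)."""
    return matching_degree(image_color_feature,
                           light_color_feature) > preset_threshold
```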
  • the verification module may determine the degree of matching between the color features of a first image sequence constructed from the multiple target images (i.e., the first feature information in FIG. 14) and the color features of a second image sequence constructed from multiple color template images (i.e., the second feature information in FIG. 14). Further, the verification module may determine the authenticity of the multiple target images based on the matching degree. For more details on determining the authenticity of multiple target images based on sequences, see FIG. 14 and its related description.
  • the preset threshold set for the image authenticity determination in some embodiments of this specification may be related to the degree of shooting stability.
  • the shooting stability degree is the stability degree when the image acquisition device of the terminal acquires the target image.
  • the preset threshold is positively related to the degree of shooting stability. It can be understood that the higher the shooting stability, the higher the quality of the acquired target image, the more truly the color features extracted from the multiple target images reflect the color of the illumination at shooting time, and thus the larger the preset threshold can be.
  • the shooting stability may be measured based on motion parameters of the terminal (e.g., a vehicle-mounted terminal or a user terminal) detected by a motion sensor of the terminal, for example, the motion speed, vibration frequency, etc.
  • the motion sensor may be a sensor that detects the driving situation of the vehicle, and the vehicle may be the vehicle used by the target user.
  • the target user refers to the user to which the target object belongs.
  • the motion sensor may be a motion sensor on the driver's end or the in-vehicle terminal.
  • the preset threshold may also be related to the shooting distance and the shooting angle.
  • the shooting distance is the distance between the target object and the terminal when the image acquisition device acquires the target image.
  • the shooting angle is the angle between the front of the target object and the terminal screen when the image acquisition device acquires the target image.
  • both the shooting distance and the shooting angle are negatively correlated with the preset threshold. It can be understood that the shorter the shooting distance, the higher the quality of the acquired target image, the more truly the extracted color features reflect the color of the illumination at shooting time, and thus the larger the preset threshold can be. Similarly, the smaller the shooting angle, the higher the quality of the acquired target image, and the larger the preset threshold.
  • the shooting distance and shooting angle may be determined based on the target image through image recognition techniques.
  • the verification module may perform specific operations (e.g., averaging, taking the standard deviation, etc.) on the shooting stability, shooting distance, and shooting angle of each target image, and determine the preset threshold based on the results. For example, the verification module may determine a corresponding sub-threshold from each of the averaged shooting stability, shooting distance, and shooting angle, and then determine the preset threshold based on the sub-threshold corresponding to the shooting stability, the sub-threshold corresponding to the shooting distance, and the sub-threshold corresponding to the shooting angle; for example, the three sub-thresholds may be averaged or weighted-averaged. A sketch follows.
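  • a minimal sketch of this threshold computation; the linear forms, constants, and weights below are illustrative assumptions, chosen only to reflect the stated positive correlation with stability and negative correlation with distance and angle:

```python
def sub_thresholds(mean_stability, mean_distance_m, mean_angle_deg):
    """Map the averaged shooting conditions to three sub-thresholds:
    higher stability raises the threshold; larger distance or angle
    lowers it. The linear forms and constants are assumptions."""
    t_stability = 0.5 + 0.4 * min(max(mean_stability, 0.0), 1.0)
    t_distance = 0.9 - 0.4 * min(mean_distance_m / 2.0, 1.0)
    t_angle = 0.9 - 0.4 * min(mean_angle_deg / 90.0, 1.0)
    return t_stability, t_distance, t_angle

def preset_threshold(t_stability, t_distance, t_angle,
                     weights=(0.4, 0.3, 0.3)):
    """Weighted average of the three sub-thresholds (assumed weights)."""
    return (weights[0] * t_stability
            + weights[1] * t_distance
            + weights[2] * t_angle)
```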
  • FIG. 8 is a flowchart of determining the authenticity of a target image based on a color verification model according to some embodiments of the present specification.
  • process 800 may be performed by a verification module. As shown in FIG. 8, the process 800 may include the following steps:
  • Step 810 acquiring a color verification model.
  • a color verification model is a model used to verify that an image is authentic.
  • the color verification model is a machine learning model with preset parameters. Preset parameters refer to the model parameters learned during the training of the machine learning model. Taking a neural network as an example, the model parameters include weights and biases.
  • the preset parameters of the color verification model are determined during the training process. For example, the model acquisition module can train an initial color verification model based on a plurality of training samples with labels to obtain a color verification model.
  • the color verification model can be stored in a storage device, and the verification module can obtain the color verification model from the storage device through a network.
  • the color verification model may be acquired through a training process.
  • For the training process of the color verification model please refer to Figures 10, 13 and 15 and their related descriptions.
  • Step 820 Based on the lighting sequence, use the color verification model to process the multiple target images to determine the authenticity of the multiple target images.
  • the color verification model may include a first verification model.
  • the first verification model may include a first color feature extraction layer and a color classification layer.
  • the first color feature extraction layer extracts color features of the target image.
  • the color classification layer determines the color corresponding to the target image based on the color features of the target image.
  • the color verification model may include a second verification model.
  • the second verification model may include a second color feature extraction layer and a color relationship determination layer.
  • the second color feature extraction layer extracts the color features of the target image.
  • the color relationship determination layer determines the relationship (for example, whether they are the same) between colors corresponding to different target images based on the color features of the target images.
  • the color verification model may include a third verification model.
  • the third verification model may include a first extraction layer, a second extraction layer and a discriminant layer.
  • the first extraction layer extracts the color features of the sequence constructed by multiple target images.
  • the second extraction layer extracts the color features of a sequence constructed from multiple color template images.
  • the discriminative layer determines the relationship between the two sequences based on the color features of the two sequences. For determining the authenticity of multiple target images based on the third verification model, reference may be made to FIG. 14 and related descriptions.
  • FIG. 9 is an exemplary flowchart of determining authenticity of multiple target images according to some embodiments of the present specification.
  • process 900 may be performed by a verification module. As shown in FIG. 9, the process 900 may include the following steps:
  • the plurality of colors of lighting in the lighting sequence includes at least one reference color and at least one verification color.
  • Each of the at least one verification color may be determined based on at least a portion of the at least one reference color.
  • each of the at least one verification color may be obtained by fusing one or more of the reference colors.
  • the plurality of images include at least one reference image and at least one verification image.
  • Each of the at least one verification image corresponds to one of the at least one verification color.
  • Each of the at least one reference image corresponds to one of the at least one reference color.
  • a target image corresponding to a specific color means that, if the terminal is not hijacked (that is, the target image is real), the target image should exhibit that specific color.
  • Step 910 Extract reference color features of at least one reference image and verification color features of at least one verification image.
  • the reference color feature refers to the color feature of a reference image, and the verification color feature refers to the color feature of a verification image. For the color feature and its extraction, please refer to the description of step 230.
  • the verification module may extract color features of the image based on the first color feature extraction layer included in the first verification model. For details of extracting color features based on the first color feature extraction layer, reference may be made to FIG. 10 and its related descriptions, which will not be repeated here.
  • Step 920 for each of the at least one verification image, based on the verification color feature of the verification image and the reference color feature of the at least one reference image, determine the color of the illumination when the verification image is captured.
  • reference color features of at least one reference image may be used to construct a reference color space.
  • the reference color space has the at least one reference color as its color channel.
  • the reference color feature corresponding to each reference image can be used as the reference value of the corresponding color channel in the reference color space.
  • the color space (also referred to as the original color space) corresponding to the multiple target images may be the same as or different from the reference color space.
  • the multiple target images may correspond to the RGB color space, and the at least one reference color is red, blue and green, then the original color space corresponding to the multiple target images and the reference color space constructed based on the reference colors belong to the same color space.
  • two color spaces can be considered the same color space if their primary or base colors are the same.
  • the verification color can be obtained by fusing one or more reference colors. Therefore, the verification module may determine the color corresponding to the verification color feature based on the reference color feature and/or the reference color space constructed from the reference color features. In some embodiments, the verification module may map the verification color feature of the verification image into the reference color space and determine the color of the illumination when the verification image was captured. For example, the verification module can determine the parameters of the verification color feature on each color channel based on the relationship between the verification color feature and the reference value of each color channel in the reference color space, and then determine the color corresponding to the verification color feature based on the parameters, that is, the color of the illumination when the verification image was captured.
  • the verification module may use the reference color features x, y, and z extracted from the reference images a, b, and c as the reference values of color channel I, color channel II, and color channel III, respectively.
  • Color Channel I, Color Channel II and Color Channel III are the three color channels of the reference color space.
  • the verification module may determine the color corresponding to the verification color feature based on the parameters ⁇ _1, ⁇ _2, and ⁇ _3, that is, the color of the light when the verification image is captured.
  • the corresponding relationship between parameters and color categories may be preset, or may be learned through a model.
  • the reference color space may have the same color channels as the original color space.
  • for example, the original color space may be the RGB space
  • the at least one reference color may be red, green, and blue.
  • the verification module can construct a new RGB color space (that is, the reference color space) based on the reference color features of the three reference images corresponding to red, green, and blue, determine the RGB values of each verification image's verification color feature in the new RGB color space, and thereby determine the color of the illumination when the verification image was taken.
  • the verification module may process the reference color feature and the verification color feature based on the color classification layer in the first verification model to determine the color of the illumination when the verification image is captured. For details, please refer to FIG. 10 and its related descriptions, which will not be repeated here. A least-squares sketch of the channel-parameter mapping follows.
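  • one way to realize the channel-parameter mapping of step 920 is a least-squares projection onto the reference color features; in the sketch below, treating the features as vectors and using a preset parameter-to-color table are both assumptions:

```python
import numpy as np

def channel_parameters(reference_features, verification_feature):
    """Coordinates of a verification color feature on the reference color
    space's channels (channel I, II, III), via least squares."""
    basis = np.asarray(reference_features, dtype=float)  # shape (3, d)
    omegas, *_ = np.linalg.lstsq(
        basis.T, np.asarray(verification_feature, dtype=float), rcond=None)
    return omegas  # (omega_1, omega_2, omega_3)

def classify_color(omegas, color_table):
    """Nearest entry in a preset parameter-to-color table (assumed lookup)."""
    name, _ = min(color_table.items(),
                  key=lambda kv: np.linalg.norm(omegas - np.asarray(kv[1])))
    return name

# Illustrative table: expected channel parameters of two verification colors.
COLOR_TABLE = {"yellow": [1.0, 1.0, 0.0], "purple": [1.0, 0.0, 1.0]}
```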
  • Step 930 Determine the authenticity of the multiple target images based on the lighting sequence and the color of the lighting when the at least one verification image was taken.
  • the verification module may determine a verification color corresponding to the verification image based on the lighting sequence. Further, the verification module may determine the authenticity of the verification image based on the verification color corresponding to the verification image. For example, the verification module determines the authenticity of the verification image based on the first judgment result of whether the verification color corresponding to the verification image is consistent with the color of the illumination when the image was captured. If the verification color corresponding to the verification image is the same as the color of the light when it was photographed, it means that the verification image is authentic. For another example, the verification module determines the authenticity of the verification images based on whether the relationship between the verification colors corresponding to the multiple verification images (eg, whether they are the same) is consistent with the relationship between the colors of the illumination when the multiple verification images were captured.
  • the verification module may determine whether the terminal's image capture device has been hijacked based on the authenticity of the at least one verification image. For example, if the number of authentic verification images exceeds the first threshold, it means that the image acquisition device of the terminal is not hijacked. For another example, if the number of verification images that do not have authenticity exceeds the second threshold (for example, 1), it means that the image acquisition device of the terminal is hijacked.
  • the verification module may combine the reference color space with other verification methods to determine the authenticity of the multiple target images.
  • the verification module may determine the updated color features of each target image among the verification images and the reference images based on the reference color space.
  • the updated color feature refers to the feature obtained by converting the original color feature into the reference color space.
  • the verification module can replace the original color features based on the updated color features of each target image, and determine the authenticity of the multiple target images in combination with other verification methods. For example, the verification module may determine the first color relationship between the multiple target images based on the updated color features of the multiple target images, and determine the authenticity of the multiple target images based on the first color relationship.
  • the verification module determines the color features of the first image sequence constructed by the multiple target images based on the updated color features of the multiple target images, and determines the authenticity of the multiple target images based on the color features of the first image sequence.
  • with the reference color space, the determination of the authenticity of the target image is also more accurate. For example, when the lighting in the lighting sequence is weaker than the ambient light, the lighting hitting the target object may be difficult to detect; or, when the ambient light is colored, the lighting hitting the target object may be disturbed. When the terminal is not hijacked, the reference image and the verification image are taken under the same (or substantially the same) ambient light.
  • the reference color space constructed based on the reference image incorporates the influence of ambient light; therefore, compared with the original color space, the color of the illumination when the verification image was captured can be identified more accurately. Furthermore, the methods disclosed herein can avoid interference from the light-emitting elements of the terminal. When the terminal is not hijacked, both the reference image and the verification image are shot under the illumination of the same light-emitting element; using the reference color space can eliminate or weaken the influence of the light-emitting element and improve the accuracy of identifying the light color.
  • FIG. 10 is a schematic diagram of a first verification model according to some embodiments of the present specification.
  • the verification module may determine the authenticity of the plurality of target images based on the first verification model and the lighting sequence.
  • the first verification model may include a first color feature extraction layer and a color classification layer.
  • the first color feature extraction layer may include a reference color feature extraction layer and a verification color feature extraction layer.
  • the first verification model may include a reference color feature extraction layer 1030 , a verification color feature extraction layer 1040 and a color classification layer 1070 .
  • the reference color feature extraction layer 1030 and the verification color feature extraction layer 1040 may be used to implement step 910 .
  • Color classification layer 1070 may be used to implement step 920 .
  • the verification module determines the authenticity of the verification image based on the color corresponding to the verification image and the illumination sequence.
  • the color feature extraction model (eg, the first color feature extraction layer, the reference color feature extraction layer 1030, the verification color feature extraction layer 1040, etc.) can extract the color features of the target image.
  • the type of the color feature extraction model may include a convolutional neural network model such as ResNet, DenseNet, MobileNet, ShuffleNet or EfficientNet, or a recurrent neural network model such as a long short-term memory recurrent neural network.
  • the types of reference color feature extraction layer 1030 and verification color feature extraction layer 1040 may be the same or different.
  • the reference color feature extraction layer 1030 extracts reference color features 1050 of at least one reference image 1010 .
  • the at least one reference image 1010 may include multiple reference images.
  • the reference color feature 1050 may be a fusion of color features of the plurality of reference images 1010 .
  • the plurality of reference images 1010 may be concatenated and, after concatenation, input into the reference color feature extraction layer 1030, which may output the reference color feature 1050.
  • the reference color feature 1050 is a feature vector formed by splicing color feature vectors of the reference images 1010-1, 1010-2, and 1010-3.
  • the verification color feature extraction layer 1040 extracts the verification color features 1060 of the at least one verification image 1020 .
  • the verification module may perform a color judgment on each of the at least one verification image 1020, respectively.
  • the verification module may input at least one reference image 1010 into the reference color feature extraction layer 1030, and input the verification image 1020-2 into the verification color feature extraction layer 1040.
  • the verification color feature extraction layer 1040 may output the verification color feature 1060 of the verification image 1020-2.
  • the color classification layer 1070 may determine the color of the light when the verification image 1020-2 was photographed based on the reference color feature 1050 and the verification color feature 1060 of the verification image 1020-2.
  • the verification module may perform color judgment on multiple verification images 1020 at the same time.
  • the verification module may input at least one reference image 1010 into the reference color feature extraction layer 1030, and input multiple verification images 1020 (including the verification images 1020-1, 1020-2...1020-n) into the verification color feature extraction layer 1040.
  • the verification color feature extraction layer 1040 can output the verification color features 1060 of multiple verification images 1020 at the same time.
  • the color classification layer 1070 can simultaneously determine the color of the illumination when each of the multiple verification images is photographed.
  • the color classification layer 1070 may determine the color of the illumination when the verification image was photographed based on the reference color feature and the verification color feature of the verification image. For example, the color classification layer 1070 may determine a value or probability based on the reference color feature and the verification color feature of the verification image, and then determine the color of the light when the verification image is captured based on the value or probability. The numerical value or probability corresponding to the verification image may reflect the possibility that the color of the light when the verification image is photographed belongs to each color. In some embodiments, the color classification layer 1070 may include, but is not limited to, fully connected layers, deep neural networks, and the like.
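  • the layers of FIG. 10 could be sketched as follows; the encoder architecture, the feature dimension, and the splicing scheme are illustrative assumptions rather than the exact networks of the embodiments:

```python
import torch
import torch.nn as nn

class ColorEncoder(nn.Module):
    """Small CNN stand-in for a color feature extraction layer."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, out_dim),
        )

    def forward(self, x):
        return self.net(x)

class FirstVerificationModel(nn.Module):
    """Reference/verification color feature extraction layers plus a
    fully connected color classification layer."""
    def __init__(self, n_colors, n_reference=3, dim=64):
        super().__init__()
        self.ref_encoder = ColorEncoder(dim)  # reference feature layer 1030
        self.ver_encoder = ColorEncoder(dim)  # verification feature layer 1040
        self.classifier = nn.Linear(dim * (n_reference + 1), n_colors)

    def forward(self, reference_images, verification_image):
        # reference_images: (B, n_reference, 3, H, W); their features are
        # spliced into one reference color feature, as in FIG. 10.
        ref_feats = [self.ref_encoder(reference_images[:, i])
                     for i in range(reference_images.shape[1])]
        ver_feat = self.ver_encoder(verification_image)
        fused = torch.cat(ref_feats + [ver_feat], dim=1)
        return self.classifier(fused)  # logits over the candidate light colors
```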
  • the first validation model is a machine learning model with preset parameters. It can be understood that the reference color feature extraction layer, the verification color feature extraction layer and the color classification layer included in the first verification model are machine learning models with preset parameters.
  • the preset parameters of the first verification model can be determined during the model training process. For example, the acquisition module may train the initial first verification model based on the first training sample with the first label to obtain preset parameters of the first verification model.
  • the first training sample includes at least one sample reference image and at least one sample verification image of the sample target object, and the first label of the first training sample is the color of the illumination when each sample verification image was photographed. The color of the illumination when the at least one sample reference image was captured is the same as the at least one reference color. For example, if the at least one reference color includes red, green, and blue, the at least one sample reference image includes three images of the sample target object captured under red light, green light, and blue light, respectively.
  • the acquisition module may input the first training sample into the initial first verification model, and update the parameters of the initial verification color feature extraction layer, the initial reference color feature extraction layer, and the initial color classification layer through training, until the updated first verification model satisfies the first preset condition.
  • the updated first verification model may be designated as the first verification model with preset parameters, in other words, the updated first verification model may be designated as the trained first verification model.
  • the first preset condition may be that the loss function of the updated first verification model is smaller than a threshold, that the loss function converges, or that the number of training iterations reaches a threshold.
  • the acquisition module can train the initial verification color feature extraction layer, the initial reference color feature extraction layer and the initial color classification layer in the initial first verification model through an end-to-end training method.
  • the end-to-end training method refers to inputting training samples into an initial model, determining a loss value based on the output of the initial model, and updating the initial model based on the loss value.
  • the initial model may contain multiple sub-models or modules for performing different data processing operations, which are treated as a whole during training and updated simultaneously.
  • during end-to-end training, the at least one sample reference image can be input into the initial reference color feature extraction layer, and the at least one sample verification image can be input into the initial verification color feature extraction layer; a loss function is established based on the output of the initial color classification layer and the first label, and the parameters of each initial sub-model in the initial first verification model are updated simultaneously based on the loss function. A training-loop sketch follows.
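  • a minimal end-to-end training loop matching this description, reusing the FirstVerificationModel sketch above and assuming a data loader that yields (sample reference images, sample verification image, first label) and a cross-entropy loss:

```python
import torch
import torch.nn as nn

def train_first_verification_model(model, loader, epochs=10, lr=1e-4):
    """End-to-end training: the reference feature layer, verification
    feature layer, and color classification layer are updated together
    from a single loss on the classification output."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for ref_imgs, ver_img, first_label in loader:
            logits = model(ref_imgs, ver_img)    # (B, n_colors)
            loss = loss_fn(logits, first_label)  # label: true light color
            optimizer.zero_grad()
            loss.backward()   # one backward pass reaches every sub-layer
            optimizer.step()  # all initial sub-layers updated simultaneously
    return model
```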
  • the first verification model may be pre-trained by the processing device or a third party and stored in the storage device, and the processing device may directly call the first verification model from the storage device.
  • the authenticity of the verification image is determined by the first verification model, which can improve the efficiency of the authenticity verification of the target image.
  • using the first verification model can improve the reliability of the authenticity verification of the target object, reduce or remove the influence of performance differences between terminal devices, and more accurately determine the authenticity of the target image.
  • colored light of the same nominal color emitted by the terminal screens of different manufacturers may differ in parameters such as saturation and brightness, resulting in a large intra-class gap for the same color.
  • the first training samples of the initial first verification model may be captured by terminals with different performances.
  • the initial first verification model learns these differences during training, so that the first verification model can take terminal performance differences into account when judging the color of the target object and determine the color of the target image more accurately. Moreover, when the terminal is not hijacked, both the reference image and the verification image are captured under the same ambient light conditions. In some embodiments, when a reference color space is established based on the reference color feature extraction layer in the first verification model, and the authenticity of multiple target images is determined based on the reference color space, the influence of external ambient light can be eliminated or reduced.
  • FIG. 11 is another exemplary flowchart for determining the authenticity of multiple target images according to some embodiments of the present specification.
  • process 1100 may be performed by a verification module. As shown in FIG. 11, the process 1100 includes the following steps:
  • Step 1110 Extract the verification color feature of at least one verification image.
  • For the specific description of extracting the verification color feature, please refer to step 910 and its related description.
  • Step 1120 extracting reference color features of at least one reference image.
  • For the specific description of extracting the reference color feature, please refer to step 910 and its related description.
  • Step 1130 for each of the at least one verification image, based on the illumination sequence and the reference color feature, generate a target color feature of the verification color corresponding to the verification image.
  • the target color feature refers to the feature represented by the verification color corresponding to the verification image in the reference color space.
  • the verification module may determine a verification color corresponding to the verification image based on the illumination sequence, and generate a target color feature of the verification image based on the verification color and the reference color feature. For example, the verification module can fuse the color feature of the verification color with the reference color feature to obtain the target color feature.
  • Step 1140 Determine the authenticity of the multiple target images based on the target color feature and the verification color feature corresponding to each of the at least one verification image.
  • the verification module may determine the authenticity of the verification image based on the similarity between its corresponding target color feature and the verification color feature.
  • the similarity between the target color feature and the verification color feature can be calculated by vector similarity, for example, determined by Euclidean distance, Manhattan distance, and the like.
  • if the similarity between the target color feature and the verification color feature is greater than the third threshold, the verification image is authentic; otherwise it is not. A sketch follows.
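  • a sketch of steps 1130 and 1140; plain averaging as the fusion and cosine similarity with the value of the third threshold are assumptions, since the specification leaves both open:

```python
import numpy as np

def target_color_feature(verification_color_feature, reference_features):
    """Fuse the verification color's feature with the reference color
    features; plain averaging stands in for the unspecified fusion."""
    stacked = np.vstack([verification_color_feature, *reference_features])
    return stacked.mean(axis=0)

def has_authenticity(target_feature, extracted_feature, third_threshold=0.9):
    """Authentic if the generated target color feature and the verification
    color feature extracted from the image are similar enough."""
    cosine = float(target_feature @ extracted_feature
                   / (np.linalg.norm(target_feature)
                      * np.linalg.norm(extracted_feature) + 1e-12))
    return cosine > third_threshold
```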
  • FIG. 12 is an exemplary flowchart of a method for determining authenticity of multiple target images according to some embodiments of the present specification.
  • process 1200 may be performed by a verification module. As shown in Figure 12, the process 1200 includes the following steps:
  • the multiple colors corresponding to the multiple lights in the lighting sequence include at least one reference color and at least one verification color.
  • one or more of the at least one reference color is the same as one or more of the at least one verification color.
  • the multiple target images include at least one reference image and at least one verification image, each of the at least one reference image corresponds to one of the at least one reference color, and the at least one verification image is Each image corresponds to one of the at least one verification color.
  • Step 1210 Extract the reference color feature of each of the at least one reference image and the verification color feature of each of the at least one verification image.
  • For the extraction of the reference color feature and the verification color feature, reference may be made to step 910 and its related description, which will not be repeated here.
  • the verification module may extract the reference color feature and the verification color feature based on the second color feature extraction layer included in the second verification model. For details of extracting color features based on the second color feature extraction layer, reference may be made to FIG. 13 and related descriptions, and details are not repeated here.
  • Step 1220 For each of the at least one reference image, determine the first color relationship between the reference image and each verification image based on the reference color feature of the reference image and the verification color feature of each verification image.
  • the first color relationship between the reference image and the verification image refers to the relationship between the color of the light when the reference image is captured and the color of the light when the verification image is captured.
  • the first color relationship may be "same", "different", "similar", or the like.
  • the first color relationship may be represented numerically; for example, "same" is represented by "1" and "different" by "0".
  • the at least one first color relationship determined based on the at least one reference image and the at least one verification image may be represented by a vector, and each element in the vector may represent the first color relationship between one of the at least one reference image and one of the at least one verification image. For example, if the first color relationships between 1 reference image and 5 verification images are same, different, same, same, and different, then the first color relationships of the 1 reference image and the 5 verification images can be represented by the vector (1,0,1,1,0).
  • the at least one first color relationship determined based on the at least one reference image and the at least one verification image may also be represented by a verification code.
  • the subcode for each position in the verification code may represent a first color relationship between one of the at least one reference image and one of the at least one verification image.
  • the first color relationship between the above-mentioned one reference image and the five verification images can be represented by the verification code 10110.
  • the verification module may determine the first color relationship between a reference image and a verification image based on the reference color feature of the reference image and the verification color feature of the verification image. For example, the verification module may determine the similarity between the reference color feature and the verification color feature, and determine the first color relationship based on the similarity and thresholds: if the similarity is greater than the fourth threshold, the relationship is determined to be the same; if it is smaller than the fifth threshold, different; and if it is larger than the sixth threshold and smaller than the fourth threshold, similar; and so on.
  • the fourth threshold may be greater than the fifth threshold and the sixth threshold, and the sixth threshold may be greater than the fifth threshold.
  • the similarity may be characterized by the distance between the reference color feature and the verification color feature.
  • the distance may include, but is not limited to, Euclidean distance, Manhattan distance, Chebyshev distance, Minkowski distance, Mahalanobis distance, cosine distance, and the like. A sketch of building the verification code from feature distances follows.
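  • a sketch of building the first color relationships and the corresponding verification code from feature distances; the threshold value is an illustrative assumption, and the "similar" band between the sixth and fourth thresholds is omitted for brevity:

```python
import numpy as np

FOURTH_THRESHOLD = 0.85  # "same" above this similarity (assumed value)

def first_color_relationship(ref_feature, ver_feature):
    """Return "1" (same) or "0" (different) for one image pair, using a
    Euclidean-distance-based similarity."""
    similarity = 1.0 / (1.0 + np.linalg.norm(ref_feature - ver_feature))
    return "1" if similarity > FOURTH_THRESHOLD else "0"

def verification_code(ref_feature, ver_features):
    """Concatenate the per-pair relationships into a code such as "10110"."""
    return "".join(first_color_relationship(ref_feature, v)
                   for v in ver_features)
```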
  • the verification module may further acquire the first color relationship based on the color relationship determination layer included in the second verification model.
  • For a detailed description of the color relationship determination layer, reference may be made to FIG. 13 and its related descriptions, which will not be repeated here.
  • Step 1230 for each of the at least one reference color, determine a second color relationship between the reference color and each of the verification colors.
  • the second color relationship of the reference color and the verification color may indicate whether the two colors are the same, different, or similar.
  • the type and representation of the second color relationship may be similar to the first color relationship, and details are not described herein again.
  • the verification module may determine the second color relationship based on the categories or color parameters of the reference color and the verification color. For example, if the categories of the reference color and the verification color are the same, or the numerical difference of their color parameters is less than a certain threshold, the two colors are judged to be the same; otherwise, the two colors are judged to be different.
  • the verification module may extract the first color feature of the color template image of the reference color and the second color feature of the color template image of the verification color.
  • the verification module may further determine a second color relationship between the reference color and the verification color based on the first color feature and the second color feature. For example, the verification module may calculate the similarity between the first color feature and the second color feature to determine the second color relationship.
  • the first color relationship between the reference image and the verification image corresponds to the second color relationship between the reference color corresponding to the reference image and the verification color corresponding to the verification image.
  • Step 1240 Determine the authenticity of the plurality of target images based on the at least one first color relationship and the at least one second color relationship.
  • the verification module may determine the authenticity of the plurality of target images based on some or all of the at least one first color relationship and the corresponding second color relationship.
  • the first color relationship and the second color relationship may be represented by vectors.
  • the verification module may select part or all of the at least one first color relationship to construct the first vector, and construct the second vector based on the second color relationship corresponding to the selected first color relationship. Further, the verification module may determine the authenticity of the plurality of target images based on the similarity between the first vector and the second vector. For example, if the similarity is greater than the seventh threshold, the multiple target images are authentic. It can be understood that the arrangement order of the elements in the first vector and the second vector is determined based on the corresponding relationship between the first color relationship and the second color relationship. For example, the element corresponding to a first color relationship in the first vector A is Aij, and the element corresponding to the second color relationship corresponding to the first color relationship in the second vector B is Bij.
  • the first color relationship and the second color relationship may also be represented by a verification code.
  • the verification module may select part or all of at least one first color relationship to construct a corresponding first verification code, and construct a corresponding second verification code based on the second color relationship corresponding to the selected first color relationship , to determine the authenticity of multiple target images. Similar to the first vector and the second vector, the positions of the sub-codes in the first verification code and the second verification code are determined based on the correspondence between the first color relationship and the second color relationship. For example, if the first verification code and the second verification code are different, the multiple target images do not have authenticity. For example, if the first verification code is 10110 and the second verification code is 10111, the multiple target images are not authentic.
  • the verification module may determine the authenticity of the multiple target images based on the same number of sub-codes in the first verification code and the second verification code. For example, if the number of identical subcodes is greater than the eighth threshold, the authenticity of the multiple target images is determined, and if the number of identical subcodes is less than the ninth threshold, it is determined that the multiple target images are not authentic.
  • for example, suppose the eighth threshold is 3, the ninth threshold is 1, the first verification code is 10110, and the second verification code is 10111; the sub-codes of the two codes match at the first, second, third, and fourth positions, so the number of identical sub-codes (4) is greater than the eighth threshold, and the multiple target images are determined to be authentic. A sketch follows.
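  • a sketch of the sub-code comparison, following the example above (eighth threshold 3):

```python
def codes_match(first_code, second_code, eighth_threshold=3):
    """Count positions where the sub-codes of the two verification codes
    agree; enough matches means the target images are judged authentic."""
    same = sum(a == b for a, b in zip(first_code, second_code))
    return same > eighth_threshold

# "10110" vs "10111": 4 matching sub-codes > 3, so judged authentic.
assert codes_match("10110", "10111")
```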
  • the verification module may determine, based on the reference color space, the color of the illumination when the verification image and the reference image were captured, further determine the first color relationship, and then determine the authenticity of the multiple target images in combination with the corresponding second color relationship.
  • the verification module may determine the updated verification color feature of the verification image and the updated reference color feature of the reference image based on the reference color space. Further, the verification module determines the first color relationship based on the updated verification color feature and the updated reference color feature, and then determines the authenticity of the multiple target images in combination with the corresponding second color relationship.
  • when the terminal is not hijacked, both the reference image and the verification image are captured under the same ambient light conditions and illuminated by the same light-emitting element. Therefore, when determining authenticity based on the relationship between the reference image and the verification image, the influence of external ambient light and light-emitting elements can be eliminated or weakened, thereby improving the accuracy of light color recognition.
  • FIG. 13 is a schematic diagram of a second verification model according to some embodiments of the present specification.
  • the verification module may determine the authenticity of the plurality of target images based on the second verification model and the lighting sequence.
  • the second verification model may include a second color feature extraction layer 1330 and a color relationship determination layer 1360 .
  • the second color feature extraction layer 1330 may be used to implement step 1210
  • the color relationship determination layer 1360 may be used to implement step 1220 .
  • the verification module may determine the authenticity of the multiple target images based on the first color relationship and the lighting sequence.
  • the at least one reference image and the at least one verification image may form one or more image pairs.
  • Each image pair includes one of at least one reference image and one of at least one verification image.
  • the verification module may analyze one or more image pairs, respectively, and determine the first color relationship between the reference image and the verification image in the image pair.
  • the at least one reference image includes "1320-1...1320-y" and the at least one verification image includes "1310-1...1310-x".
  • the image pair formed by the reference image 1320-y and the verification image 1310-1 is taken as an example below.
  • the second color feature extraction layer 1330 may extract the reference color feature 1350-y of the reference image 1320-y and the verification color feature 1340-1 of the verification image 1310-1.
  • the type of the second color feature extraction layer 1330 may include a convolutional neural network (Convolutional Neural Networks, CNN) model such as ResNet, ResNeXt, SE-Net, DenseNet, MobileNet, ShuffleNet, RegNet, EfficientNet, or Inception, or a recurrent neural network model.
  • the input to the second color feature extraction layer 1330 may be an image pair (e.g., a reference image 1320-y and a verification image 1310-1).
  • the reference image 1320-y and the verification image 1310-1 may be concatenated and then input into the second color feature extraction layer 1330.
  • the output of the second color feature extraction layer 1330 may be the color features of the image pair (e.g., the reference color feature 1350-y of the reference image 1320-y and the verification color feature 1340-1 of the verification image 1310-1).
  • the output of the second color feature extraction layer 1330 may also be the concatenation of the verification color feature 1340-1 of the verification image 1310-1 and the reference color feature 1350-y of the reference image 1320-y.
  • the color relationship determination layer 1360 is configured to determine the first color relationship of an image pair based on the color features of the image pair. For example, the verification module may input the reference color feature 1350-y of the reference image 1320-y and the verification color feature 1340-1 of the verification image 1310-1 into the color relationship determination layer 1360, which outputs the first color relationship between the reference image 1320-y and the verification image 1310-1.
  • the verification module may input multiple image pairs consisting of at least one reference image and at least one verification image together into the second verification model.
  • the second verification model may simultaneously output the first color relationship for each of the plurality of pairs of images.
  • the verification module may input one of the plurality of image pairs into the second verification model.
  • the second verification model may output the first color relationship for the pair of images.
  • the color relationship determination layer 1360 may be a classification model, including but not limited to fully connected layers, deep neural networks, decision trees, and the like.
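  • putting the two layers together, the following PyTorch sketch shows one possible shape of the second verification model: a shared CNN encoder standing in for the second color feature extraction layer and a two-neuron head standing in for the color relationship determination layer. The layer sizes, input resolution, and concatenation scheme are assumptions, not the patented configuration:
```python
import torch
from torch import nn

class SecondVerificationModel(nn.Module):
    """Sketch: a color feature extraction layer applied to both images of a
    pair, followed by a relationship head that classifies the pair as
    same-color (1) or different-color (0)."""
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.feature_extractor = nn.Sequential(          # second color feature extraction layer
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.relation_head = nn.Sequential(              # color relationship determination layer
            nn.Linear(2 * feat_dim, 32), nn.ReLU(),
            nn.Linear(32, 2),                            # two neurons: same / different
        )

    def forward(self, reference: torch.Tensor, verification: torch.Tensor) -> torch.Tensor:
        ref_feat = self.feature_extractor(reference)     # reference color feature
        ver_feat = self.feature_extractor(verification)  # verification color feature
        return self.relation_head(torch.cat([ref_feat, ver_feat], dim=1))

# One image pair (batch of 1), e.g., reference 1320-y with verification 1310-1.
model = SecondVerificationModel()
logits = model(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 2])
```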
  • the second verification model is a machine learning model with preset parameters. It can be understood that the second color feature extraction layer and the color relationship determination layer included in the second verification model are machine learning models with preset parameters.
  • the preset parameters of the second verification model can be determined during the training process. For example, the acquisition module may train an initial second verification model based on the second training samples with the second label to obtain the second verification model.
  • the second training samples include one or more sample image pairs with second labels. Each sample image pair includes two target images of the sample target object taken under the same or different lights.
  • the second label of the second training sample may indicate whether the color of the illumination when the sample image pair was captured is the same.
  • the acquisition module may input the second training samples into the initial second verification model, and update the parameters of the initial second color feature extraction layer and the initial color relationship determination layer through training, until the updated second verification model satisfies the second preset condition.
  • the updated second verification model may be designated as the second verification model with preset parameters, in other words, the updated second verification model may be designated as the trained second verification model.
  • the second preset condition may be that the loss function of the updated second verification model is smaller than a threshold or converges, or that the number of training iterations reaches a threshold.
  • the acquisition module may train the initial second color feature extraction layer and the initial color relationship determination layer in the initial second verification model through an end-to-end training method.
  • the end-to-end training method refers to inputting training samples into an initial model, determining a loss value based on the output of the initial model, and updating the initial model based on the loss value.
  • the initial model may contain multiple sub-models or modules for performing different data processing operations, which are treated as a whole during training and updated simultaneously.
  • at least one sample reference image and at least one sample verification image may be input into the initial second color feature extraction layer, a loss function may be established based on the output of the initial color relationship determination layer and the second label, and the parameters of each initial sub-model in the initial second verification model may be updated simultaneously based on the loss function.
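  • a hedged sketch of this end-to-end training, assuming the SecondVerificationModel sketched above, cross-entropy against the second labels, and an illustrative loss-threshold stopping rule standing in for the second preset condition; the data pipeline and hyperparameters are placeholders:
```python
import torch
from torch import nn, optim

def train_second_verification_model(model, sample_pairs, second_labels,
                                    loss_threshold: float = 0.05,
                                    max_epochs: int = 100):
    """End-to-end update: a single loss drives the initial second color
    feature extraction layer and the initial color relationship
    determination layer inside `model` at the same time."""
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.Adam(model.parameters(), lr=1e-4)
    for epoch in range(max_epochs):
        last_loss = None
        for (reference, verification), label in zip(sample_pairs, second_labels):
            optimizer.zero_grad()
            logits = model(reference, verification)   # shape (1, 2)
            loss = criterion(logits, label)           # label: tensor([1]) same, tensor([0]) different
            loss.backward()                           # gradients reach every sub-layer
            optimizer.step()
            last_loss = loss.item()
        if last_loss is not None and last_loss < loss_threshold:
            break  # stands in for the second preset condition
    return model
```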
  • the second verification model may be pre-trained by the processing device or a third party and stored in the storage device, and the processing device may directly call the second verification model from the storage device.
  • the second verification model may be used to determine the first color relationship.
  • the color relationship determination layer of the second verification model may include only a small number of neurons (eg, two neurons) to judge whether the colors are the same. Compared with the color recognition network in the traditional method, the structure of the second verification model disclosed in this specification is simpler.
  • the target object analysis based on the second verification model also requires relatively less computing resources (eg, computing space), thereby improving the efficiency of light color recognition.
  • the input of the model can be a target image corresponding to any color; compared with algorithms that must restrict the number of input color categories, the embodiments of this specification therefore have higher applicability.
  • using the second verification model can improve the reliability of the authenticity verification of the target object, reduce or remove the influence of the performance difference of the terminal device, and further determine the authenticity of the target image. It can be understood that there are certain differences in the hardware of different terminals. For example, the color light of the same color emitted by the terminal screens of different manufacturers may have differences in parameters such as saturation and brightness, resulting in a large intra-class gap of the same color.
  • the second training samples of the initial second verification model may be captured by terminals with different performances.
  • the initial second verification model is learned in the training process, so that the second verification model can consider the terminal performance difference when judging the color of the target object, and more accurately determine the color of the target image.
  • both the reference image and the verification image are taken under the same ambient light conditions. Therefore, when the reference image and the verification image are processed based on the second verification model to determine the authenticity of the multiple target images, the influence of external ambient light can be eliminated or reduced.
  • FIG. 14 is another exemplary flowchart of a method for determining the authenticity of multiple target images according to some embodiments of the present specification.
  • process 1400 may be performed by a verification module. As shown in Figure 14, the process 1400 includes the following steps:
  • Step 1410: Determine a first image sequence based on the multiple target images.
  • the first image sequence is a collection of multiple target images arranged in a specific order.
  • the verification module may sequence the plurality of target images by their respective capture times to generate the first sequence of images. For example, the plurality of target images may be sorted from first to last according to their respective shooting times.
  • Step 1420: Determine a second image sequence based on the plurality of color template images.
  • a color template image is a template image generated based on the colors of the lights in the lighting sequence.
  • a color template image for a color is a solid-color image that contains only that color. For example, a red color template image contains only red, no colors other than red, and no texture.
  • the verification module may generate the plurality of color template images based on the lighting sequence. For example, the verification module may generate a color template image corresponding to the color of each light in the light sequence according to the color type and/or color parameter of the light. In some embodiments, a color template image of each color in the lighting sequence may be pre-stored in the storage device, and the verification module may obtain a color template image corresponding to the color of the lighting in the lighting sequence from the storage device through the network.
  • the second image sequence is a collection of multiple color template images arranged in sequence.
  • the verification module may sort the plurality of color template images according to their corresponding illumination times to generate a second sequence of images.
  • the plurality of color template images may be sorted from first to last according to their corresponding illumination times.
  • the arrangement order of the plurality of color template images in the second image sequence is consistent with the arrangement order of the plurality of target images in the first image sequence.
  • the irradiation time of the illumination corresponding to the plurality of color template images in the second image sequence corresponds to the shooting time of the plurality of target images in the first image sequence. For example, if the multiple target images are arranged from first to last according to their shooting time, the multiple color template images are also arranged from first to last based on the irradiation time of their corresponding lighting.
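  • a minimal sketch of building the two sequences, assuming capture timestamps are available for the target images and that each light's color is given as an RGB triple; all names and sizes are illustrative:
```python
import numpy as np

def build_first_image_sequence(target_images, capture_times):
    """Sort the target images from earliest to latest capture time."""
    order = np.argsort(capture_times)
    return [target_images[i] for i in order]

def build_second_image_sequence(lighting_colors, height=64, width=64):
    """One solid-color template image per light, kept in illumination order
    so that it mirrors the capture order of the first image sequence."""
    templates = []
    for rgb in lighting_colors:                      # e.g., (255, 0, 0) for red
        template = np.empty((height, width, 3), dtype=np.uint8)
        template[:] = rgb                            # pure color, no texture
        templates.append(template)
    return templates

print(build_first_image_sequence(["late", "early", "middle"], [2.0, 0.5, 1.2]))
# ['early', 'middle', 'late']
templates = build_second_image_sequence([(255, 0, 0), (255, 255, 0), (0, 0, 255)])
print(len(templates), templates[0].shape)  # 3 (64, 64, 3)
```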
  • Step 1430: Extract the first feature information of the first image sequence.
  • the first feature information may include color features of the plurality of target images in the first image sequence. For more details on extracting color features, see step 230 and its related description.
  • the verification module may extract the first feature information of the first image sequence based on the first extraction layer in the third verification model. For the extraction of the first feature information based on the first extraction layer, please refer to FIG. 15 and its related descriptions.
  • Step 1440: Extract the second feature information of the second image sequence.
  • the second feature information may include color features of the plurality of color template images in the second image sequence. For more details on extracting color features, see step 230 and its related description.
  • the verification module may extract the second feature information based on the second extraction layer in the third verification model. For more details on extracting the second feature information based on the second extraction layer, see FIG. 15 and its related description.
  • Step 1450: Determine the authenticity of the multiple target images based on the first feature information and the second feature information.
  • the verification module may determine, based on the degree of matching between the first feature information and the second feature information, a second judgment result indicating whether the color sequence of the illumination when the multiple target images in the first image sequence were captured is consistent with the color sequence of the multiple color template images in the second image sequence. For example, the verification module may take the similarity between the first feature information and the second feature information as the matching degree, and then determine the second judgment result from the relationship between that similarity and a threshold: if the similarity is greater than the tenth threshold, the second judgment result is consistent; if the similarity is less than the eleventh threshold, the second judgment result is inconsistent. Further, the verification module may determine the authenticity of the multiple target images based on the second judgment result; for example, if the second judgment result is consistent, the multiple target images are authentic.
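  • a small sketch of this matching step, using cosine similarity as the matching degree; the tenth and eleventh threshold values are placeholders, and the in-between band is left undetermined because the text does not specify it:
```python
import torch
import torch.nn.functional as F

def second_judgment(first_feature: torch.Tensor, second_feature: torch.Tensor,
                    tenth_threshold: float = 0.9, eleventh_threshold: float = 0.8):
    """Cosine similarity stands in for the matching degree between the two
    sequences' feature information; both threshold values are illustrative."""
    matching_degree = F.cosine_similarity(first_feature, second_feature, dim=0).item()
    if matching_degree > tenth_threshold:
        return "consistent"        # color sequences judged to match: authentic
    if matching_degree < eleventh_threshold:
        return "inconsistent"      # color sequences judged to differ
    return "undetermined"          # the text does not specify this band

print(second_judgment(torch.tensor([1.0, 0.0, 1.0]), torch.tensor([1.0, 0.1, 0.9])))
# consistent
```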
  • the verification module may determine the second judgment result based on the discrimination layer in the third color verification model. For more details about determining the second judgment result based on the discrimination layer, please refer to FIG. 15 and its related description.
  • Some embodiments of the present specification generate a second image sequence based on artificially constructed color template images, and determine the authenticity of the multiple target images by comparing the second image sequence with the first image sequence (the sequence of the multiple target images). Compared to directly identifying the colors of the first image sequence, the method disclosed in this specification makes the recognition task for the target images simpler.
  • a third validation model may be used for target image authenticity analysis. Using the second image sequence can make the recognition task of the third verification model simpler and the learning difficulty lower, thereby making the recognition accuracy higher.
  • the multiple target images in the first image sequence are all captured under the same ambient light conditions and illuminated by the same light-emitting element. Therefore, when the authenticity of the multiple target images is determined based on the relationship between the first image sequence and the second image sequence, the influence of external ambient light and the light-emitting element can be eliminated or weakened, thereby improving the recognition accuracy of the lighting color.
  • FIG. 15 is a diagram showing an exemplary structure of a third verification model according to some embodiments of the present specification.
  • the verification module may determine the authenticity of the plurality of target images based on the third verification model and the lighting sequence.
  • the third color verification model may include a first extraction layer 1530, a second extraction layer 1540, and a discrimination layer 1570.
  • the verification module may implement steps 1430-1450 using the third verification model to determine the second judgment result.
  • the first extraction layer 1530 implements step 1430, the second extraction layer 1540 implements step 1440, and the discrimination layer 1570 implements step 1450.
  • the verification module determines the authenticity of the multiple target images based on the second judgment result and the lighting sequence.
  • the input of the first extraction layer 1530 is the first image sequence 1510 and the output is the first feature information 1550.
  • the verification module may concatenate the multiple target images in the first image sequence 1510 in order and input them into the first extraction layer 1530.
  • the output first feature information 1550 may be the concatenation of the color features corresponding to the multiple target images in the first image sequence 1510.
  • the input of the second extraction layer 1540 is the second image sequence 1520, and the output is the second feature information 1560.
  • the verification module may concatenate the multiple color template images in the second image sequence 1520 in order and input them into the second extraction layer 1540.
  • the output second feature information 1560 may be the concatenation of the color features corresponding to the multiple color template images in the second image sequence 1520.
  • the types of the first extraction layer and the second extraction layer include, but are not limited to, convolutional neural network (CNN) models such as ResNet, ResNeXt, SE-Net, DenseNet, MobileNet, ShuffleNet, RegNet, EfficientNet, or Inception, or recurrent neural network models.
  • the types of the first extraction layer and the second extraction layer may be the same or different.
  • the input of the discrimination layer 1570 is the first feature information 1550 and the second feature information 1560, and the output is the second judgment result.
  • the discriminative layer may be a model that implements classification, including but not limited to a fully connected layer, a deep neural network (DNN), and the like.
  • the third validation model is a machine learning model with preset parameters. It can be understood that the first extraction layer, the second extraction layer and the discrimination layer included in the third verification model are machine learning models with preset parameters.
  • the preset parameters of the third verification model can be determined during the model training process. For example, the acquisition module may train an initial third verification model based on the third training sample with the third label to obtain the third verification model.
  • the third training sample includes a first sample image sequence and a second sample image sequence.
  • the first sample image sequence consists of multiple sample target images of the sample target object, and the second sample image sequence consists of multiple sample color template images of multiple sample colors.
  • the third label of the third training sample indicates whether the color sequence of the illumination when the multiple sample target images in the first sample image sequence were captured is consistent with the color sequence of the multiple sample color template images in the second sample image sequence.
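  • a sketch of how such third training samples might be assembled; the text defines only the label semantics, so building negatives by shuffling the template color order is an assumption made here for illustration (it needs at least two distinct colors in the order):
```python
import random

def make_third_training_sample(sample_target_images, true_color_order, positive: bool):
    """Build one third training sample: (first sample image sequence,
    sample template color order, third label). The third label records
    whether the template color order matches the illumination order of the
    sample target images."""
    first_sequence = list(sample_target_images)       # ordered by capture time
    template_order = list(true_color_order)
    if not positive:
        while template_order == list(true_color_order):
            random.shuffle(template_order)            # force a mismatched order
    third_label = 1 if positive else 0                # 1: consistent, 0: not
    return first_sequence, template_order, third_label

# Example: a five-light sequence red, yellow, blue, green, purple.
sample = make_third_training_sample(["img1", "img2", "img3", "img4", "img5"],
                                    ["red", "yellow", "blue", "green", "purple"],
                                    positive=False)
print(sample[1], sample[2])  # shuffled color order, label 0
```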
  • the acquisition module may input the third training sample into the initial third verification model, and update the parameters of the initial first extraction layer, the initial second extraction layer, and the initial discrimination layer through training, until the updated third verification model satisfies the third preset condition.
  • the updated third verification model may be designated as the third verification model with preset parameters; in other words, the updated third verification model may be designated as the trained third verification model.
  • the third preset condition may be that the loss function of the updated third verification model is smaller than a threshold or converges, or that the number of training iterations reaches a threshold.
  • the acquisition module can train the initial first extraction layer, the initial second extraction layer and the initial discrimination layer in the initial third verification model in an end-to-end training manner.
  • the end-to-end training method refers to inputting training samples into an initial model, determining a loss value based on the output of the initial model, and updating the initial model based on the loss value.
  • the initial model may contain multiple sub-models or modules for performing different data processing operations, which are treated as a whole during training and updated simultaneously.
  • the first sample image sequence can be input into the initial first extraction layer and the second sample image sequence into the initial second extraction layer; a loss function can be established based on the output of the initial discrimination layer and the third label, and the parameters of each initial sub-model in the initial third verification model can be updated simultaneously based on the loss function.
  • all or part of the parameters of the first extraction layer and the second extraction layer may be shared.
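  • a PyTorch sketch of the third verification model with the parameter-sharing variant: assigning one module as both extraction layers makes all of their parameters shared. The encoder size, the channel-wise stacking of sequence images, and the two-neuron discrimination layer are assumptions for illustration:
```python
import torch
from torch import nn

class ThirdVerificationModel(nn.Module):
    """Sketch: first/second extraction layers with fully shared parameters
    plus a two-neuron discrimination layer."""
    def __init__(self, in_channels: int = 3, feat_dim: int = 64):
        super().__init__()
        extractor = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, feat_dim),
        )
        self.first_extraction_layer = extractor
        self.second_extraction_layer = extractor     # same module, so weights are shared
        self.discrimination_layer = nn.Sequential(
            nn.Linear(2 * feat_dim, 32), nn.ReLU(), nn.Linear(32, 2),
        )

    def forward(self, first_sequence: torch.Tensor, second_sequence: torch.Tensor):
        first_info = self.first_extraction_layer(first_sequence)     # first feature information
        second_info = self.second_extraction_layer(second_sequence)  # second feature information
        return self.discrimination_layer(torch.cat([first_info, second_info], dim=1))

# Sequences entered as channel-wise stacks of their images (an assumption):
# e.g., five 3-channel images stacked into 15 channels.
model = ThirdVerificationModel(in_channels=15)
logits = model(torch.rand(1, 15, 64, 64), torch.rand(1, 15, 64, 64))
print(logits.shape)  # torch.Size([1, 2])
```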
  • when the authenticity of the target images is determined by the third verification model, there is no need to identify the color of the illumination when each target image was captured; target recognition is performed directly by comparing whether the first image sequence containing the target images and the second image sequence containing the color template images are consistent. This is equivalent to transforming the color recognition task into a binary classification task of judging whether the colors are the same.
  • a third verification model may be used to determine whether the first sequence of images and the second sequence of images are identical.
  • the discrimination layer of the third verification model may include only a small number of neurons (e.g., two neurons) to judge whether the sequences are the same. Compared with the color recognition network in the traditional method, the structure of the third verification model disclosed in this specification is simpler.
  • the target object analysis based on the third verification model also requires relatively less computing resources (eg, computing space), thereby improving the efficiency of light color recognition.
  • the input of the model can be a target image corresponding to any color; compared with algorithms that must restrict the number of input color categories, the embodiments of this specification therefore have higher applicability.
  • using the third verification model can improve the reliability of the authenticity verification of the target object, reduce or remove the influence of the performance difference of the terminal equipment, and further determine the authenticity of the target image. It can be understood that there are certain differences in the hardware of different terminals. For example, the color light of the same color emitted by the terminal screens of different manufacturers may have differences in parameters such as saturation and brightness, resulting in a large intra-class gap of the same color.
  • the third training samples of the initial third verification model may be captured by terminals with different performances.
  • the initial third verification model is learned in the training process, so that the trained third verification model can consider the terminal performance difference when judging the color of the target object, and more accurately determine the color of the target image.
  • the multiple target images in the first sequence of images are all captured under the same ambient light conditions. Therefore, when the first image sequence is processed based on the third verification model and the authenticity of the multiple target images is determined, the influence of external ambient light can be eliminated or reduced.
  • the verification module may determine the updated color features of the multiple target images (including the at least one verification image and the at least one reference image) based on the reference color space, and generate the updated first feature information of the first image sequence based on the updated color features of the multiple target images.
  • the verification module may generate the updated second feature information of the second image sequence based on the updated color features of the plurality of color template images.
  • the verification module may further determine the authenticity of the plurality of target images based on the updated first characteristic information (or the first characteristic information) and the updated second characteristic information (or the second characteristic information).


Abstract

The embodiments of this specification disclose a target recognition method and system. The target recognition method includes: determining a lighting sequence, where the lighting sequence is used to determine multiple colors of multiple lights emitted by a terminal when illuminating a target object; acquiring multiple target images by the terminal, where the capture times of the multiple target images correspond to the illumination times of the multiple lights; and determining the authenticity of the multiple target images based on the lighting sequence and the multiple target images.

在一些实施例中,第一颜色关系和第二颜色关系还可以通过验证码表示。在一些实施例中,验证模块可以选择至少一个第一颜色关系中的部分或全部构建对应的第一验证码,基于被选择的第一颜色关系对应的第二颜色关系构建对应的第二验证码,确定多幅目标图像的真实性。与第一向量和第二向量类似地,第一验证码和第二验证码中子码的位置基于第一颜色关系和第二颜色关系之间的对应关系确定。例如,第一验证码和第二验证码不同,则多幅目标图像不具有真实性。示例的,第一验证码为10110,第二验证码为10111,则多幅目标图像不具有真实性。又例如,验证模块可以基于第一验证码和第二验证码中子码相同的个数,确定多幅目标图像的真实性。例如,子码相同的个数大于第八阈值,则确定多幅目标图像的真实性,子码相同的个数小于第九阈值,则确定多幅目标图像不具有真实性。示例的,第八阈值为3,第九阈值为1,第一验证码为10110,第二验证码为10111,第一验证码和第二验证码的第一位、第二位、第三位和第四位的子码对应相同,则确定多幅目标图像具有真实性。
在一些实施例中,验证模块可以基于基准颜色空间确定验证图像和基准图像被拍摄时光照的颜色,进一步确定第一颜色关系,再结合对应的第二颜色关系确定多幅目标图像的真实性。在一些实施例中,验证模块可以基于基准颜色空间确定验证图像更新后的验证颜色特征和基准图像更新后的基准颜色特征。进一步的,验证模块基于更新后的验证颜色特征和更新后的基准颜色特征确定第一颜色关系,再结合对应的第二颜色关系确定多幅目标图像的真实性。
如前所述,基准图像和验证图像都是在相同的外界环境光的条件下、被相同的发光元件照射时拍摄的,因此,基于基准图像和验证图像之间的关系确定多幅目标图像的真实性时,可以消除或减弱外界环境光和发光元件的影响,从而提高光照颜色的识别准确率。
图13是根据本说明书一些实施例所示的第二验证模型的示意图。
在一些实施例中,验证模块可以基于第二验证模型和光照序列确定多幅目标图像的真实性。如图13所示,第二验证模型可以包括第二颜色特征提取层1330和颜色关系确定层1360。第二颜色特征提取层1330可以用于实现步骤1210,颜色关系确定层1360可以用于实现步骤1220。进一步的,验证模块可以基于所述第一颜色关系和光照序列,确定多幅目标图像的真实性。
在一些实施例中,至少一幅基准图像和至少一幅验证图像可以组成一个或多个图像对。每个图像对包括至少一副基准图像中的一幅和至少一幅验证图像中一幅。验证模块可以分别对一个或多个图像对进行分析,确定该图像对中基准图像和验证图像之间的第一颜色关系。例如,如图13所示,所述至少一幅基 准图像包括“1320-1…1320-y”,所述至少一幅验证图像包括“1310-1…1310-x”。出于说明目的,下文以基准图像1320-y和验证图像1310-1构成的图像对为例展开。
第二颜色特征提取层1330可以提取基准图像1320-y的基准颜色特征1350-y和验证图像1310-1的验证颜色特征1340-1。在一些实施例中,第二颜色特征提取层1330的类型可以包括ResNet、ResNeXt、SE-Net、DenseNet、MobileNet、ShuffleNet、RegNet、EfficientNet或Inception等卷积神经网络(Convolutional Neural Networks,CNN)模型,或循环神经网络模型。
第二颜色特征提取层1330的输入可以是图像对(如,基准图像1320-y和验证图像1310-1)。例如,可以将基准图像1320-y和验证图像1310-1拼接后输入第二颜色特征提取层1330。第二颜色特征提取层1330的输出可以是图像对的颜色特征(如,基准图像1320-y的基准颜色特征1350-y和验证图像1310-1的验证颜色特征1340-1)。例如,第二颜色特征提取模型1330的输出可以是验证图像1310-1的验证颜色特征1340-1和基准图像1320-y的基准颜色特征1350-y拼接后的颜色特征。
颜色关系确定层1360用于基于图像对的颜色特征,确定图像对的第一颜色关系。例如,验证模块可以将基准图像1320-y的基准颜色特征1350-y和验证图像1310-1的验证颜色特征1340-1输入颜色关系确定层1360,颜色关系确定层1360输出基准图像1320-y和验证图像1310-1的第一颜色关系。
在一些实施例中,验证模块可以将至少一个基准图像和至少一个验证图像组成的多对图像对一起输入第二验证模型。第二验证模型可以同时输出所述多对图像对中每一对的第一颜色关系。在一些实施例中,验证模块可以将所述多对图像对中某一对输入第二验证模型。第二验证模型可以输出该对图像对的第一颜色关系。
在一些实施例中,颜色关系确定层1360可以是分类模型,包括但不限于全连接层、深度神经网络、决策树等。
The second verification model is a machine learning model with preset parameters; it should be understood that the second color feature extraction layer and the color relation determination layer included in the second verification model are machine learning models with preset parameters. The preset parameters of the second verification model may be determined during training. For example, the acquisition module may train an initial second verification model on second training samples carrying second labels to obtain the second verification model. The second training samples include one or more sample image pairs carrying second labels. Each sample image pair includes two target images of a sample target object captured under illumination of the same or different lights. The second label of a second training sample indicates whether the colors of the illumination under which the sample image pair was captured are the same.
In some embodiments, the acquisition module may feed the second training samples into the initial second verification model and update the parameters of the initial second color feature extraction layer and the initial color relation determination layer through training until the updated second verification model satisfies a second preset condition. The updated second verification model may be designated as the second verification model with preset parameters; in other words, the updated second verification model may be designated as the trained second verification model. The second preset condition may be that the loss function of the updated second verification model is below a threshold, that the loss function converges, or that the number of training iterations reaches a threshold.
In some embodiments, the acquisition module may train the initial second color feature extraction layer and the initial color relation determination layer of the initial second verification model in an end-to-end manner. End-to-end training means feeding the training samples into the initial model, determining a loss value based on the output of the initial model, and updating the initial model based on the loss value. The initial model may contain multiple sub-models or modules that perform different data processing operations; these are treated as a whole during training and updated simultaneously. For example, in training the initial second verification model, at least one sample reference image and at least one sample verification image may be fed into the initial second color feature extraction layer, a loss function may be built from the output of the initial color relation determination layer and the second label, and the parameters of each initial sub-model in the initial second verification model may be updated simultaneously based on the loss function.
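A hedged sketch of such an end-to-end loop for the model sketched above (the optimizer, loss, epoch count, and stand-in random data are illustrative; real training would use the captured sample image pairs with same/different-color labels):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in data: 8 reference/verification pairs with binary same-color
# labels; real training would use the captured sample image pairs.
refs = torch.randn(8, 3, 224, 224)
vers = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
pair_loader = DataLoader(TensorDataset(refs, vers, labels), batch_size=4)

model = SecondVerificationModel()  # from the sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(2):  # illustrative iteration count
    for reference, verification, label in pair_loader:
        logits = model(reference, verification)
        loss = criterion(logits, label)  # loss from the relation layer's output and the second label
        optimizer.zero_grad()
        loss.backward()   # gradients flow through both sub-layers
        optimizer.step()  # both sub-layers are updated simultaneously
```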
In some embodiments, the second verification model may be pre-trained by the processing device or a third party and stored in a storage device, from which the processing device can invoke it directly.
Determining the authenticity of the multiple target images based on the first color relations and the second color relations makes it unnecessary to identify the type of illumination under which the target images were captured; whether the illumination types are consistent is determined directly by comparing color features. This effectively converts the color recognition task into a binary classification task of judging whether colors are the same. In some embodiments, the second verification model may be used to determine the first color relations. The color relation determination layer of the second verification model may include only a small number of neurons (e.g., two neurons) to judge whether colors are the same. Compared with the color recognition networks of traditional methods, the second verification model disclosed in this specification has a simpler structure. Target object analysis based on the second verification model also requires relatively fewer computing resources (e.g., computing space), which can improve the efficiency of illumination color recognition. Moreover, the model input can be a target image corresponding to any color; compared with other algorithms that must limit the number of input color classes, the embodiments of this specification are more widely applicable. Furthermore, using the second verification model can improve the reliability of target object authenticity verification, reduce or remove the influence of performance differences among terminal devices, and further determine the authenticity of the target images. It should be understood that the hardware of different terminals differs to some extent; for example, colored light of the same color emitted by terminal screens of different manufacturers may differ in saturation, brightness, and other parameters, resulting in a relatively large intra-class gap for the same color. The second training samples of the initial second verification model may be captured by terminals of different performance. Through learning during training, the second verification model can take terminal performance differences into account when judging the color of a target object, thereby determining the color of the target images more accurately. In addition, when the terminal is not hijacked, the reference images and the verification images are all captured under the same ambient light. Therefore, when the reference images and the verification images are processed by the second verification model to determine the authenticity of the multiple target images, the influence of the ambient light can be eliminated or weakened.
FIG. 14 is another exemplary flowchart of a method for determining the authenticity of multiple target images according to some embodiments of the present specification. In some embodiments, process 1400 may be executed by the verification module. As shown in FIG. 14, process 1400 includes the following steps:
Step 1410: determine a first image sequence based on the multiple target images.
The first image sequence is a collection of the multiple target images arranged in a specific order. In some embodiments, the verification module may sort the multiple target images by their respective capture times to generate the first image sequence. For example, the multiple target images may be sorted from earliest to latest capture time.
Step 1420: determine a second image sequence based on multiple color template images.
A color template image is a template image generated based on the color of an illumination in the illumination sequence. The color template image of a given color is a solid-color picture containing only that color. For example, the color template image of red contains only red, with no colors other than red and no texture.
In some embodiments, the verification module may generate the multiple color template images based on the illumination sequence. For example, the verification module may generate, for each illumination in the illumination sequence, the color template image corresponding to that illumination's color according to its color type and/or color parameters. In some embodiments, a color template image for each color in the illumination sequence may be pre-stored in the storage device, and the verification module may obtain the color template images corresponding to the illumination colors from the storage device over the network.
The second image sequence is a collection of the multiple color template images arranged in order. In some embodiments, the verification module may sort the multiple color template images by the illumination times of their corresponding illuminations to generate the second image sequence. For example, the multiple color template images may be sorted from earliest to latest illumination time. In some embodiments, the arrangement of the color template images in the second image sequence is consistent with the arrangement of the target images in the first image sequence: the illumination times corresponding to the color template images in the second image sequence correspond to the capture times of the target images in the first image sequence. For example, if the multiple target images are arranged from earliest to latest capture time, the multiple color template images are likewise arranged from earliest to latest illumination time. A sketch of this pairing follows.
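A minimal sketch of generating solid-color template images and ordering the two sequences (the RGB values, image size, and the record layout of the illumination sequence are illustrative assumptions):

```python
import numpy as np

def color_template_image(rgb, height=224, width=224):
    """Solid-color template: every pixel set to the illumination color,
    with no texture."""
    return np.full((height, width, 3), rgb, dtype=np.uint8)

# Assumed record layout: (RGB color, illumination time) per illumination.
illumination_seq = [((255, 255, 255), 0.0), ((255, 0, 0), 0.5), ((0, 0, 255), 1.0)]

# Second image sequence: templates ordered by illumination time.
second_sequence = [color_template_image(rgb)
                   for rgb, _ in sorted(illumination_seq, key=lambda r: r[1])]

# First image sequence: target images ordered by capture time, so that
# position i of both sequences refers to the same illumination.
# target_images is assumed to be a list of (image, capture_time) pairs:
# first_sequence = [img for img, _ in sorted(target_images, key=lambda r: r[1])]
```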
Step 1430: extract first feature information from the first image sequence.
The first feature information may include the color features of the multiple target images in the first image sequence. For more details on extracting color features, see step 230 and its related description. In some embodiments, the verification module may extract the first feature information of the first image sequence based on the first extraction layer of the third verification model. For extracting the first feature information based on the first extraction layer, see FIG. 15 and its related description.
Step 1440: extract second feature information from the second image sequence.
The second feature information may include the color features of the multiple color template images in the second image sequence. For more details on extracting color features, see step 230 and its related description. In some embodiments, the verification module may extract the second feature information based on the second extraction layer of the third verification model. For more details on extracting the second feature information based on the second extraction layer, see FIG. 15 and its related description.
Step 1450: determine the authenticity of the multiple target images based on the first feature information and the second feature information.
In some embodiments, the verification module may determine, based on the degree of matching between the first feature information and the second feature information, a second judgment result indicating whether the color sequence of the illumination under which the multiple target images in the first image sequence were captured is consistent with the color sequence of the multiple color template images in the second image sequence. For example, the verification module may take the similarity between the first feature information and the second feature information as the degree of matching, and determine the second judgment result based on the relation between that similarity and one or more thresholds. For example, if the similarity between the first feature information and the second feature information is greater than a tenth threshold, the second judgment result is "consistent"; if the similarity is less than an eleventh threshold, the second judgment result is "inconsistent". Further, the verification module may determine the authenticity of the multiple target images based on the second judgment result. For example, if the second judgment result is "consistent", the multiple target images are authentic.
In some embodiments, the verification module may determine the second judgment result based on the discriminant layer of the third verification model. For more details on determining the second judgment result based on the discriminant layer, see FIG. 15 and its related description.
Some embodiments of this specification generate the second image sequence from artificially constructed color template images and determine the authenticity of the multiple target images by comparing the second image sequence against the first image sequence (the sequence of the multiple target images). Compared with directly recognizing the colors of the first image sequence, the method disclosed in this specification makes the recognition task for the target images simpler. In some embodiments, the third verification model may be used for the target image authenticity analysis. Using the second image sequence makes the recognition task of the third verification model simpler and easier to learn, leading to higher recognition accuracy. In addition, the multiple target images in the first image sequence are all captured under the same ambient light and illuminated by the same light-emitting element; therefore, when the authenticity of the multiple target images is determined based on the comparison between the two sequences, the influence of the ambient light and of the light-emitting element can be eliminated or weakened, improving the accuracy of illumination color recognition.
FIG. 15 is an exemplary structural diagram of the third verification model according to some embodiments of the present specification.
In some embodiments, the verification module may determine the authenticity of the multiple target images based on the third verification model and the illumination sequence. As shown in FIG. 15, the third verification model may include a first extraction layer 1530, a second extraction layer 1540, and a discriminant layer 1570. For example, the verification module may use the third verification model to implement steps 1430-1450 and determine the second judgment result. Specifically, the first extraction layer 1530 implements step 1430, the second extraction layer 1540 implements step 1440, and the discriminant layer 1570 implements step 1450. Further, the verification module determines the authenticity of the multiple target images based on the second judgment result and the illumination sequence.
In some embodiments, the input of the first extraction layer 1530 is the first image sequence 1510 and the output is the first feature information 1550. For example, the verification module may concatenate the multiple target images in the first image sequence 1510 in order and feed them into the first extraction layer 1530. The output first feature information 1550 may be the concatenation of the color features corresponding to the multiple target images in the first image sequence 1510. The input of the second extraction layer 1540 is the second image sequence 1520 and the output is the second feature information 1560. For example, the verification module may concatenate the multiple color template images in the second image sequence 1520 in order and feed them into the second extraction layer 1540. The output second feature information 1560 may be the concatenation of the color features corresponding to the multiple color template images in the second image sequence 1520.
In some embodiments, the first extraction layer and the second extraction layer may be, but are not limited to, convolutional neural network (CNN) models such as ResNet, ResNeXt, SE-Net, DenseNet, MobileNet, ShuffleNet, RegNet, EfficientNet, or Inception, or recurrent neural network models. The first extraction layer and the second extraction layer may be of the same or different types.
In some embodiments, the input of the discriminant layer 1570 is the first feature information 1550 and the second feature information 1560, and the output is the second judgment result. In some embodiments, the discriminant layer may be a model implementing classification, including but not limited to a fully connected layer, a deep neural network (DNN), etc.
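A hedged PyTorch sketch of this two-tower layout (the backbone, sizes, and names are illustrative; as noted below, the two extraction layers may share all or part of their parameters, which the share_towers flag mimics). Training would mirror the end-to-end loop shown earlier, with the third label indicating whether the two color sequences are consistent:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ThirdVerificationModel(nn.Module):
    """First/second extraction layers (1530/1540) + discriminant layer
    (1570), sketched with ResNet towers over channel-stacked sequences."""

    def __init__(self, seq_len: int = 5, feat_dim: int = 128,
                 share_towers: bool = True):
        super().__init__()

        def make_tower() -> nn.Module:
            backbone = resnet18(weights=None)
            # Each sequence is concatenated channel-wise: seq_len RGB images.
            backbone.conv1 = nn.Conv2d(3 * seq_len, 64, 7, stride=2,
                                       padding=3, bias=False)
            backbone.fc = nn.Linear(backbone.fc.in_features, feat_dim)
            return backbone

        self.first_extraction = make_tower()  # layer 1530
        # Full parameter sharing reuses the same module for both towers.
        self.second_extraction = (self.first_extraction if share_towers
                                  else make_tower())  # layer 1540
        self.discriminant = nn.Linear(2 * feat_dim, 2)  # layer 1570

    def forward(self, first_seq: torch.Tensor, second_seq: torch.Tensor):
        f1 = self.first_extraction(first_seq)    # first feature information
        f2 = self.second_extraction(second_seq)  # second feature information
        return self.discriminant(torch.cat([f1, f2], dim=1))  # consistent / inconsistent logits
```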
The third verification model is a machine learning model with preset parameters; it should be understood that the first extraction layer, the second extraction layer, and the discriminant layer included in the third verification model are machine learning models with preset parameters. The preset parameters of the third verification model may be determined during model training. For example, the acquisition module may train an initial third verification model on third training samples carrying third labels to obtain the third verification model. A third training sample includes a first sample image sequence and a second sample image sequence; the first sample image sequence consists of multiple sample target images of a sample target object, and the second sample image sequence consists of multiple sample color template images of multiple sample colors. The third label of a third training sample indicates whether the color sequence of the illumination under which the sample target images in the first sample image sequence were captured is consistent with the color sequence of the sample color templates in the second sample image sequence.
In some embodiments, the acquisition module may feed the third training samples into the initial third verification model and update the parameters of the initial first extraction layer, the initial second extraction layer, and the initial discriminant layer through training until the updated third verification model satisfies a third preset condition. The updated third verification model may be designated as the third verification model with preset parameters; in other words, the updated third verification model may be designated as the trained third verification model. The third preset condition may be that the loss function of the updated third verification model is below a threshold, that the loss function converges, or that the number of training iterations reaches a threshold.
In some embodiments, the acquisition module may train the initial first extraction layer, the initial second extraction layer, and the initial discriminant layer of the initial third verification model in an end-to-end manner. End-to-end training means feeding the training samples into the initial model, determining a loss value based on the output of the initial model, and updating the initial model based on the loss value. The initial model may contain multiple sub-models or modules that perform different data processing operations; these are treated as a whole during training and updated simultaneously. For example, in training the initial third verification model, the first sample image sequence may be fed into the initial first extraction layer and the second sample image sequence into the initial second extraction layer, a loss function may be built from the output of the initial discriminant layer and the third label, and the parameters of each initial sub-model in the initial third verification model may be updated simultaneously based on the loss function.
In some embodiments, all or part of the parameters of the first extraction layer and the second extraction layer may be shared.
Some embodiments of this specification determine the authenticity of the target images through the third verification model, which makes it unnecessary to identify the type of illumination under which the target images were captured; target recognition is performed directly by comparing whether the first image sequence containing the target images and the second image sequence containing the color template images are consistent. This effectively converts the color recognition task into a binary classification task of judging whether colors are the same. In some embodiments, the third verification model may be used to determine whether the first image sequence and the second image sequence are consistent. The discriminant layer of the third verification model may include only a small number of neurons (e.g., two neurons) to judge whether the sequences are the same. Compared with the color recognition networks of traditional methods, the third verification model disclosed in this specification has a simpler structure. Target object analysis based on the third verification model also requires relatively fewer computing resources (e.g., computing space), which can improve the efficiency of illumination color recognition. Moreover, the model input can be a target image corresponding to any color; compared with other algorithms that must limit the number of input color classes, the embodiments of this specification are more widely applicable. Furthermore, using the third verification model can improve the reliability of target object authenticity verification, reduce or remove the influence of performance differences among terminal devices, and further determine the authenticity of the target images. It should be understood that the hardware of different terminals differs to some extent; for example, colored light of the same color emitted by terminal screens of different manufacturers may differ in saturation, brightness, and other parameters, resulting in a relatively large intra-class gap for the same color. The third training samples of the initial third verification model may be captured by terminals of different performance. Through learning during training, the trained third verification model can take terminal performance differences into account when judging the color of a target object, thereby determining the color of the target images more accurately. In addition, the multiple target images in the first image sequence are all captured under the same ambient light. Therefore, when the first image sequence is processed by the third verification model to determine the authenticity of the multiple target images, the influence of the ambient light can be eliminated or weakened.
In some embodiments, the verification module may determine, based on a reference color space, updated color features of the multiple target images (including the at least one verification image and the at least one reference image), and generate updated first feature information of the first image sequence based on the updated color features of the multiple target images. Similarly, the verification module may generate updated second feature information of the second image sequence based on updated color features of the multiple color template images. The verification module may further determine the authenticity of the multiple target images based on the updated first feature information (or the first feature information) and the updated second feature information (or the second feature information).
The basic concepts have been described above. Obviously, to those skilled in the art, the foregoing detailed disclosure is intended only as an example and does not constitute a limitation of this specification. Although not explicitly stated here, those skilled in the art may make various modifications, improvements, and corrections to this specification. Such modifications, improvements, and corrections are suggested in this specification, and therefore remain within the spirit and scope of the exemplary embodiments of this specification.
Meanwhile, this specification uses specific terms to describe its embodiments. Terms such as "one embodiment", "an embodiment", and/or "some embodiments" mean a certain feature, structure, or characteristic related to at least one embodiment of this specification. Therefore, it should be emphasized and noted that "an embodiment", "one embodiment", or "an alternative embodiment" mentioned two or more times in different places in this specification does not necessarily refer to the same embodiment. In addition, certain features, structures, or characteristics of one or more embodiments of this specification may be appropriately combined.
Furthermore, unless explicitly stated in the claims, the order of the processing elements and sequences described in this specification, the use of alphanumeric labels, or the use of other designations is not intended to limit the order of the processes and methods of this specification. Although the above disclosure discusses, through various examples, some embodiments of the invention currently considered useful, it should be understood that such details serve illustrative purposes only, and that the appended claims are not limited to the disclosed embodiments; on the contrary, the claims are intended to cover all corrections and equivalent combinations that conform to the substance and scope of the embodiments of this specification. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that, in order to simplify the presentation of this disclosure and thereby aid understanding of one or more embodiments of the invention, the foregoing description of the embodiments sometimes groups multiple features into one embodiment, drawing, or description thereof. However, this method of disclosure does not mean that the subject matter of this specification requires more features than those recited in the claims. In fact, the features of an embodiment are fewer than all the features of a single embodiment disclosed above.
Some embodiments use numbers describing quantities of components and properties. It should be understood that such numbers used for describing embodiments are, in some examples, qualified by the modifiers "about", "approximately", or "substantially". Unless otherwise stated, "about", "approximately", or "substantially" indicates that the stated number allows a variation of ±20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations, which may change depending on the desired characteristics of individual embodiments. In some embodiments, numerical parameters should take into account the specified significant digits and adopt a general method of retaining digits. Although the numerical ranges and parameters used to confirm the breadth of ranges in some embodiments of this specification are approximations, in specific embodiments such values are set as precisely as feasible.
For each patent, patent application, patent application publication, and other material cited in this specification, such as articles, books, specifications, publications, documents, etc., the entire contents thereof are hereby incorporated into this specification by reference. Application history documents that are inconsistent with or conflict with the contents of this specification are excluded, as are documents (currently or subsequently appended to this specification) that limit the broadest scope of the claims of this specification. It should be noted that if there is any inconsistency or conflict between the descriptions, definitions, and/or use of terms in the materials accompanying this specification and the contents of this specification, the descriptions, definitions, and/or use of terms in this specification shall prevail.
Finally, it should be understood that the embodiments described in this specification are only used to illustrate the principles of the embodiments of this specification. Other variations may also fall within the scope of this specification. Therefore, by way of example and not limitation, alternative configurations of the embodiments of this specification may be regarded as consistent with the teachings of this specification. Accordingly, the embodiments of this specification are not limited to those explicitly introduced and described herein.

Claims (10)

  1. A target recognition method, the method comprising:
    obtaining multiple initial images, wherein the capture times of the multiple initial images correspond to the illumination times of multiple illuminations in an illumination sequence cast on a target object, the multiple illuminations have multiple colors, and the multiple initial images include a first initial image and at least one second initial image;
    for each of the at least one second initial image, replacing a texture of the second initial image with a texture of the first initial image to generate a processed second initial image; and
    determining the authenticity of multiple target images based on the illumination sequence and the multiple target images, the multiple target images including the first initial image and the at least one processed second initial image.
  2. The method of claim 1, wherein the replacing a texture of the second initial image with a texture of the first initial image to generate a processed second initial image comprises:
    based on a color transfer algorithm, transferring the color of the illumination under which the second initial image was captured to the first initial image to generate the processed second initial image.
  3. The method of claim 2, wherein the color transfer algorithm includes one of a Reinhard algorithm, a Welsh algorithm, a fuzzy clustering algorithm, and an adaptive transfer algorithm.
  4. The method of claim 1, wherein the multiple colors include white, and the capture time of the first initial image corresponds to the illumination time of the white illumination in the illumination sequence.
  5. The method of claim 1, wherein the determining the authenticity of the multiple target images based on the illumination sequence and the multiple target images comprises:
    determining the colors of the illumination under which the multiple target images were captured; and
    determining the authenticity of the multiple target images based on the illumination sequence and the colors of the illumination under which the multiple target images were captured.
  6. The method of claim 5, wherein the determining the colors of the illumination under which the multiple target images were captured comprises:
    for each of the multiple target images, processing the target image based on a color verification model to determine the color of the illumination under which the target image was captured, the color verification model being a machine learning model with preset parameters.
  7. The method of claim 1, wherein the determining the authenticity of the multiple target images based on the illumination sequence and the multiple target images comprises:
    determining a first image sequence based on the multiple target images;
    determining a second image sequence based on multiple color template images, the multiple color template images being generated based on the illumination sequence; and
    determining the authenticity of the multiple target images based on the first image sequence and the second image sequence.
  8. A target recognition system, the system comprising:
    an acquisition module configured to obtain multiple initial images, wherein the capture times of the multiple initial images correspond to the illumination times of multiple illuminations in an illumination sequence cast on a target object, the multiple illuminations have multiple colors, and the multiple initial images include a first initial image and at least one second initial image;
    a preprocessing module configured to, for each of the at least one second initial image, replace a texture of the second initial image with a texture of the first initial image to generate a processed second initial image; and
    a verification module configured to determine the authenticity of multiple target images based on the illumination sequence and the multiple target images, the multiple target images including the first initial image and the at least one processed second initial image.
  9. A target discrimination apparatus, wherein the apparatus comprises at least one processor and at least one memory;
    the at least one memory is configured to store computer instructions; and
    the at least one processor is configured to execute at least some of the computer instructions to implement the method of any one of claims 1 to 7.
  10. A computer-readable storage medium, wherein the storage medium stores computer instructions, and when the computer instructions are executed by a processor, the method of any one of claims 1 to 7 is implemented.
PCT/CN2022/075531 2021-04-20 2022-02-08 Method and system for target recognition WO2022222575A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110423528.X 2021-04-20
CN202110423528.XA CN113111806A (zh) 2021-04-20 2021-07-13 Method and system for target recognition

Publications (1)

Publication Number Publication Date
WO2022222575A1 true WO2022222575A1 (zh) 2022-10-27

Family

ID=76718623

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/075531 WO2022222575A1 (zh) 2021-04-20 2022-02-08 Method and system for target recognition

Country Status (2)

Country Link
CN (1) CN113111806A (zh)
WO (1) WO2022222575A1 (zh)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113111806A (zh) * 2021-04-20 2021-07-13 Beijing Didi Infinity Technology And Development Co., Ltd. Method and system for target recognition
WO2022222904A1 (zh) * 2021-04-20 2022-10-27 Beijing Didi Infinity Technology And Development Co., Ltd. Image verification method, system and storage medium
CN113743284A (zh) * 2021-08-30 2021-12-03 Hangzhou Hikvision Digital Technology Co., Ltd. Image recognition method, apparatus, device, camera and access control device
CN114266977B (zh) * 2021-12-27 2023-04-07 Qingdao Pengpai Ocean Exploration Technology Co., Ltd. Multi-AUV underwater target recognition method based on a super-resolution selectable network

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109376592A (zh) * 2018-09-10 2019-02-22 Alibaba Group Holding Limited Liveness detection method and apparatus, and computer-readable storage medium
CN111881844A (zh) * 2020-07-30 2020-11-03 Beijing Didi Infinity Technology And Development Co., Ltd. Method and system for judging image authenticity
CN113111807A (zh) * 2021-04-20 2021-07-13 Beijing Didi Infinity Technology And Development Co., Ltd. Method and system for target recognition
CN113111810A (zh) * 2021-04-20 2021-07-13 Beijing Didi Infinity Technology And Development Co., Ltd. Target recognition method and system
CN113111806A (zh) * 2021-04-20 2021-07-13 Beijing Didi Infinity Technology And Development Co., Ltd. Method and system for target recognition
CN113111811A (zh) * 2021-04-20 2021-07-13 Beijing Didi Infinity Technology And Development Co., Ltd. Target discrimination method and system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6937744B1 (en) * 2000-06-13 2005-08-30 Microsoft Corporation System and process for bootstrap initialization of nonparametric color models
CN109461168B (zh) * 2018-10-15 2021-03-16 Tencent Technology (Shenzhen) Co., Ltd. Target object recognition method and apparatus, storage medium, and electronic apparatus
CN109493280B (zh) * 2018-11-02 2023-03-14 Tencent Technology (Shenzhen) Co., Ltd. Image processing method and apparatus, terminal, and storage medium
CN111523438B (zh) * 2020-04-20 2024-02-23 Alipay Labs (Singapore) Pte. Ltd. Liveness recognition method, terminal device, and electronic device


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116580210A (zh) * 2023-07-05 2023-08-11 Sichuan Honghe Digital Intelligence Group Co., Ltd. Linear target detection method, apparatus, device and medium
CN116580210B (zh) * 2023-07-05 2023-09-15 Sichuan Honghe Digital Intelligence Group Co., Ltd. Linear target detection method, apparatus, device and medium

Also Published As

Publication number Publication date
CN113111806A (zh) 2021-07-13

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22790685; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 22790685; Country of ref document: EP; Kind code of ref document: A1)