CN111507202B - Image processing method, device and storage medium - Google Patents

Image processing method, device and storage medium

Info

Publication number
CN111507202B
Authority
CN
China
Prior art keywords
image
glasses
images
detection model
human eye
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010228167.9A
Other languages
Chinese (zh)
Other versions
CN111507202A (en)
Inventor
张小亮
(Name withheld upon request)
王秀贞
戚纪纲
杨占金
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Superred Technology Co Ltd
Original Assignee
Beijing Superred Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Superred Technology Co Ltd filed Critical Beijing Superred Technology Co Ltd
Priority to CN202010228167.9A priority Critical patent/CN111507202B/en
Publication of CN111507202A publication Critical patent/CN111507202A/en
Application granted granted Critical
Publication of CN111507202B publication Critical patent/CN111507202B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 - Eye characteristics, e.g. of the iris
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Human Computer Interaction (AREA)
  • Ophthalmology & Optometry (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present application relates to an image processing method, an image processing apparatus, and a storage medium. The image processing method comprises the following steps: acquiring an image, wherein the image contains a complete human eye region; performing glasses recognition on the image through a glasses detection model to determine whether the image contains a glasses image; segmenting the human eye region in the image according to the glasses recognition result to obtain a human eye image; and performing iris recognition based on the human eye image. The method ensures the integrity and accuracy of the segmented human eye iris image, and thereby the accuracy and efficiency of iris recognition.

Description

Image processing method, device and storage medium
Technical Field
The present disclosure relates to the field of image recognition technologies, and in particular, to an image processing method and apparatus, and a storage medium.
Background
Iris recognition is a biometric identity authentication technology. It mainly comprises the following steps: image acquisition, image preprocessing, accurate segmentation of the eye in the image, extraction of iris features from the segmented eye image, pattern matching, and decision making.
However, existing iris recognition technology commonly suffers from the following problem: wearing glasses makes iris image segmentation inaccurate, which in turn lowers the efficiency and accuracy of iris recognition.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides an image processing method, apparatus, and storage medium.
According to a first aspect of embodiments of the present disclosure, there is provided an image processing method, including: acquiring an image, wherein the image contains a complete human eye region; performing glasses recognition on the image through a glasses detection model, and determining whether the image contains a glasses image; segmenting the human eye region in the image according to the glasses recognition result to obtain a human eye image; and performing iris recognition based on the human eye image.
In one example, performing glasses recognition on the image through the glasses detection model and determining whether the image contains a glasses image includes: normalizing the image to obtain a normalized image; invoking the glasses detection model, wherein the glasses detection model outputs, for an input image, the probability that the image contains a glasses image; and inputting the normalized image into the glasses detection model and determining, according to the model output, whether the image contains a glasses image.
In one example, the glasses detection model is trained as follows: acquiring a training set, wherein the training set comprises a plurality of human eye training images, each correspondingly labeled as containing or not containing a glasses image; inputting the plurality of human eye training images into a convolutional neural network, and obtaining glasses recognition results through the convolutional neural network; and adjusting the parameters of the convolutional neural network based on the glasses recognition results, the labels, and a loss function, to obtain a glasses detection model satisfying the target loss value.
In one example, the training step further comprises: preprocessing the human eye training images, wherein the preprocessing includes at least one of the following: increasing or decreasing brightness, adding noise, blurring the image, and randomly scaling the image.
According to a second aspect of embodiments of the present disclosure, there is provided an image processing apparatus, including: an acquisition unit configured to acquire an image, wherein the image contains a complete human eye region; a recognition unit configured to perform glasses recognition on the image through a glasses detection model and determine whether the image contains a glasses image; and a processing unit configured to segment the human eye region in the image according to the glasses recognition result to obtain a human eye image, and to perform iris recognition based on the human eye image.
In one example, the recognition unit performs glasses recognition on the image through the glasses detection model and determines whether the image contains a glasses image in the following manner: normalizing the image to obtain a normalized image; invoking the glasses detection model, wherein the glasses detection model outputs, for an input image, the probability that the image contains a glasses image; and inputting the normalized image into the glasses detection model and determining, according to the model output, whether the image contains a glasses image.
In one example, the acquisition unit is further configured to: acquire a training set, wherein the training set comprises a plurality of human eye training images, each correspondingly labeled as containing or not containing a glasses image.
The image processing apparatus further includes: a training unit configured to train the glasses detection model by: inputting the plurality of human eye training images into a convolutional neural network, and obtaining glasses recognition results through the convolutional neural network; and adjusting the parameters of the convolutional neural network based on the glasses recognition results, the labels, and a loss function, to obtain a glasses detection model satisfying the target loss value.
In one example, the training unit is further configured to: preprocess the human eye training images, wherein the preprocessing includes at least one of the following: increasing or decreasing brightness, adding noise, blurring the image, and randomly scaling the image.
According to a third aspect of the present disclosure, there is provided an image processing apparatus, including: a memory configured to store instructions; and a processor configured to invoke the instructions to perform the image processing method of the first aspect or any one of its examples.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer-executable instructions that, when executed by a processor, perform the image processing method of the first aspect or any one of its examples.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects: glasses recognition is performed on an image containing the human eye region to determine whether the image contains a glasses image, and different eye segmentation methods are then applied, in a targeted manner, to images containing glasses and to images not containing glasses to obtain the human eye image. This ensures the integrity and accuracy of the segmented human eye iris image, and thereby the accuracy and efficiency of iris recognition.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow diagram illustrating an image processing method according to an exemplary embodiment.
FIG. 2 is a flow diagram illustrating a method of training a glasses detection model, according to an exemplary embodiment.
Fig. 3 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment.
FIG. 4 is a block diagram illustrating an apparatus in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The image processing method according to the exemplary embodiments of the present application can be applied to iris recognition scenarios. In such a scenario, the image processing may be executed by a terminal capable of iris recognition; in the exemplary embodiments described below, the terminal is sometimes also referred to as a smart terminal device. The terminal may be a mobile terminal, also referred to as User Equipment (UE), a Mobile Station (MS), and the like. A terminal is a device that provides voice and/or data connectivity to a user, or a chip disposed in such a device, for example a handheld device or vehicle-mounted device having a wireless connection function. Examples of terminals include: mobile phones, tablet computers, notebook computers, palmtop computers, Mobile Internet Devices (MID), wearable devices, Virtual Reality (VR) devices, Augmented Reality (AR) devices, wireless terminals in industrial control, wireless terminals in self-driving, wireless terminals in remote operation, wireless terminals in smart grids, wireless terminals in transportation safety, wireless terminals in smart cities, wireless terminals in smart homes, and the like.
Fig. 1 is a flowchart illustrating an image processing method according to an exemplary embodiment, as illustrated in fig. 1, including the following steps.
In step S11, an image is acquired.
In this application, the image may be captured in real time by an image acquisition device, or obtained from an electronic device such as a mobile terminal or a Personal Computer (PC). The acquired image includes a complete human eye region.
In step S12, the glasses recognition is performed on the image by the glasses detection model, and it is determined whether the image includes a glasses image.
Before iris recognition, eye segmentation needs to be performed on the acquired image, and iris recognition is then performed on the segmented human eye image. If the acquired image is segmented directly, glasses worn by the user in the image can make the eye segmentation inaccurate, which degrades the accuracy of iris recognition on the segmented image and lowers the efficiency of iris recognition.
In order to ensure the accuracy of eye segmentation, and thereby the accuracy and efficiency of iris recognition, in one embodiment, before the eye is segmented, glasses recognition is performed on the acquired image through the glasses detection model to determine whether the image contains a glasses image.
The glasses detection model may, for example, perform glasses recognition on the image and determine whether the image contains a glasses image in the following manner:
normalizing the image to obtain a normalized image; invoking the glasses detection model, which outputs, for an input image, the probability that the image contains a glasses image; inputting the normalized image into the glasses detection model; and determining, according to the model output, whether the image contains a glasses image.
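As a concrete illustration, a minimal sketch of this recognition step is given below. The patent does not specify a framework, input format, or decision threshold; PyTorch, a single-channel eye image already resized to the model's input size, and the 0.5 threshold are assumptions made for this example.

```python
# A minimal inference sketch of the glasses-recognition step (assumptions:
# PyTorch as the framework; a single-channel eye image already resized to the
# model's input size; a 0.5 decision threshold; none are fixed by the patent).
import numpy as np
import torch

def detect_glasses(image: np.ndarray, glasses_model: torch.nn.Module,
                   threshold: float = 0.5) -> bool:
    """Return True if the model predicts that the eye image contains glasses."""
    # Normalize pixel values to [0, 1] to obtain the normalized image.
    x = torch.from_numpy(image.astype(np.float32) / 255.0)
    x = x.unsqueeze(0).unsqueeze(0)            # -> (batch=1, channels=1, H, W)
    with torch.no_grad():
        logit = glasses_model(x)               # model outputs a single logit
        prob = torch.sigmoid(logit).item()     # probability of "contains glasses"
    return prob >= threshold
```

The boolean returned here stands in for the "result of the glasses recognition" that the subsequent segmentation step consumes.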
In this way, whether the image contains glasses can first be identified through the glasses detection model, and different eye segmentation methods can then be applied, in a targeted manner, to images containing glasses and to images not containing glasses to obtain the human eye image. This avoids the problem that, when the eye is segmented directly from an image containing glasses that reflect light from the iris recognition device, the segmentation is inaccurate, which would degrade both the precision and the efficiency of iris recognition.
In step S13, the human eye region in the image is segmented according to the result of the glasses recognition, to obtain the human eye image.
In this application, after the glasses detection model has performed glasses recognition on the image and produced a result indicating whether the image contains glasses, different eye segmentation methods are applied, in a targeted manner, to images containing glasses and to images not containing glasses, so as to segment the human eye region and obtain an accurate human eye image.
In step S14, iris recognition is performed based on the human eye image.
In the exemplary embodiments of the present application, glasses recognition is performed on the image containing the human eye region to determine whether the image contains a glasses image, and different eye segmentation methods are then applied, according to the glasses recognition result, to images containing glasses and to images not containing glasses to obtain the human eye image. This ensures the integrity and accuracy of the segmented human eye iris image, and thereby the accuracy and efficiency of iris recognition.
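Putting steps S11 to S14 together, the overall flow of FIG. 1 can be sketched as follows. The helpers segment_eyes_with_glasses, segment_eyes_plain, and recognize_iris are hypothetical placeholders; the patent names these steps but does not define their implementations.

```python
# An end-to-end sketch of the method of FIG. 1, reusing detect_glasses from
# the inference sketch above. The three helpers below are hypothetical
# placeholders standing in for the "different eye segmentation methods" and
# the iris recognition step, which the patent does not implement.
def process_image(image, glasses_model):
    if detect_glasses(image, glasses_model):           # step S12: glasses recognition
        eye_image = segment_eyes_with_glasses(image)   # step S13: glasses-aware segmentation
    else:
        eye_image = segment_eyes_plain(image)          # step S13: direct segmentation
    return recognize_iris(eye_image)                   # step S14: iris recognition
```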
Before the image processing method is applied to glasses recognition, the method further comprises training the glasses detection model, which can be done as follows.
Fig. 2 is a flowchart illustrating a method of training a glasses detection model according to an exemplary embodiment. As shown in fig. 2, the training method includes the following steps.
In step S21, a training set is acquired.
The training set comprises a plurality of human eye training images, each correspondingly labeled as containing or not containing a glasses image.
To improve the robustness of the glasses detection model, in the present application the human eye training images may be preprocessed, wherein the preprocessing includes at least one of the following: increasing or decreasing brightness, adding noise, and blurring the image.
For example, a human eye training image may be converted to grayscale and pyramid-scaled; its brightness may be randomly increased or decreased, noise added, the image blurred, and the image randomly scaled; the resulting images of different sizes may then be uniformly resized to, for example, 480 × 480 pixels, and finally normalized.
In practical applications, to shorten the training time of the glasses detection model and improve training efficiency, the training images may be normalized before being input into the convolutional neural network, so that the network is trained on the normalized images.
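A minimal preprocessing sketch of these augmentations is given below. The brightness range, noise level, blur kernel, and scale range are assumed values that the patent does not specify; only the 480 × 480 target size comes from the text, and the random scale plus uniform resize stands in for the pyramid scaling.

```python
# A preprocessing sketch of the augmentations described above (a minimal
# illustration; the brightness range, noise level, blur kernel, and scale
# range are assumed values not specified in the patent).
import cv2
import numpy as np

def preprocess_training_image(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)                  # grayscale
    # Randomly increase or decrease brightness.
    gray = np.clip(gray.astype(np.int16) + rng.integers(-40, 41), 0, 255).astype(np.uint8)
    # Add Gaussian noise.
    noise = rng.normal(0, 5, gray.shape)
    gray = np.clip(gray + noise, 0, 255).astype(np.uint8)
    # Blur the image.
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    # Randomly scale, then resize uniformly to 480 x 480 as in the text.
    s = rng.uniform(0.8, 1.2)
    gray = cv2.resize(gray, None, fx=s, fy=s)
    gray = cv2.resize(gray, (480, 480))
    # Normalize to [0, 1] before feeding the network.
    return gray.astype(np.float32) / 255.0
```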
In step S22, the plurality of human eye training images are input into the convolutional neural network, glasses recognition results are obtained through the convolutional neural network, and the parameters of the convolutional neural network are adjusted based on the glasses recognition results, the labels, and the loss function, to obtain a glasses detection model satisfying the target loss value.
The objective function in this application is, for example, as follows:

$$\min_{\omega} \; \sum_i L\big(y_i, f(x_i; \omega)\big) + \lambda\,\Omega(\omega)$$

where $\sum_i L(y_i, f(x_i; \omega))$ is the accumulated loss over the training set, $y_i$ is the label of the training image $x_i$, $\omega$ denotes the network parameters, and $\hat{y}_i = f(x_i; \omega)$ is the predicted result.
The loss function of the glasses detection model may be, for example, a cross-entropy loss function, shown here in its standard binary form:

$$L(y_i, \hat{y}_i) = -\big[\,y_i \log \hat{y}_i + (1 - y_i)\log(1 - \hat{y}_i)\,\big]$$

where $\Omega(\omega)$ in the objective above is the regularization term, for which an L2 regularization function $\Omega(\omega) = \lVert \omega \rVert_2^2$ is used, and the hyperparameter $\lambda$ is usually set to a very small value.
The plurality of human eye training images are input into the convolutional neural network, and glasses recognition predictions are obtained through the network. The error between each prediction and the label of the corresponding human eye training image is computed according to the loss function, and the parameters of the convolutional neural network are adjusted until the error computed by the loss function falls below a preset threshold, yielding a glasses detection model that satisfies the loss value, i.e., a stable set of glasses detection model parameters ω.
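A minimal training sketch under these definitions is given below. The patent names no framework or architecture; the small PyTorch CNN, the Adam optimizer, and the concrete learning rate, weight decay (playing the role of λΩ(ω)), and stopping threshold are assumptions for illustration.

```python
# A minimal training sketch for the glasses detection model (an illustration
# under assumptions: the patent specifies no framework or architecture, so the
# small CNN, Adam optimizer, and hyperparameter values below are hypothetical).
import torch
import torch.nn as nn

class GlassesNet(nn.Module):
    """A small CNN that outputs one logit: 'image contains glasses'."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def train(model, loader, epochs=10, lr=1e-3, weight_decay=1e-5, target_loss=0.05):
    # weight_decay implements the L2 regularization term lambda * Omega(w).
    opt = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
    loss_fn = nn.BCEWithLogitsLoss()              # cross-entropy for binary labels
    for _ in range(epochs):
        for images, labels in loader:             # labels: 1 = glasses, 0 = none
            opt.zero_grad()
            loss = loss_fn(model(images).squeeze(1), labels.float())
            loss.backward()
            opt.step()
        if loss.item() < target_loss:             # stop once the loss satisfies the threshold
            break
    return model
```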
In addition, when training the glasses detection model based on the convolutional neural network, a loss function over the human eye bounding box can be combined with the objective function [the bounding-box loss formulas appear only as images in the original and are not reproduced here], so that while the model is trained to detect glasses, it can also be trained to recognize the eye socket position coordinates in the image.
In the exemplary embodiments of the present application, the glasses detection model is obtained through training. Based on the trained model, whether an image of the human eye region contains glasses can be identified, and different eye segmentation methods are then applied according to whether the image contains glasses. This ensures the integrity and accuracy of the iris in the segmented eye image, and thereby the accuracy of iris recognition.
Based on the same inventive concept, the present disclosure also provides an image processing apparatus.
It is understood that, in order to implement the above functions, the image processing apparatus provided in the embodiments of the present disclosure includes hardware structures and/or software modules corresponding to each function. The disclosed embodiments can be implemented in hardware, or in a combination of hardware and computer software, in conjunction with the exemplary units and algorithm steps disclosed herein. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
Fig. 3 is a block diagram of an image processing apparatus 100 according to an exemplary embodiment. Referring to fig. 3, the image processing apparatus includes: an acquisition unit 101, a recognition unit 102, and a processing unit 103.
The acquisition unit 101 is configured to acquire an image, wherein the image contains a complete human eye region; the recognition unit 102 is configured to perform glasses recognition on the image through a glasses detection model and determine whether the image contains a glasses image; and the processing unit 103 is configured to segment the human eye region in the image according to the glasses recognition result to obtain a human eye image, and to perform iris recognition based on the human eye image.
In an example, the recognition unit 102 performs glasses recognition on the image through the glasses detection model and determines whether the image contains a glasses image in the following manner: normalizing the image to obtain a normalized image; invoking the glasses detection model, wherein the glasses detection model outputs, for an input image, the probability that the image contains a glasses image; and inputting the normalized image into the glasses detection model and determining, according to the model output, whether the image contains a glasses image.
In an example, the acquisition unit 101 is further configured to: acquire a training set, wherein the training set comprises a plurality of human eye training images, each correspondingly labeled as containing or not containing a glasses image.
The image processing apparatus further includes: a training unit 104 configured to train the glasses detection model by: inputting the plurality of human eye training images into a convolutional neural network, and obtaining glasses recognition results through the convolutional neural network; and adjusting the parameters of the convolutional neural network based on the glasses recognition results, the labels, and a loss function, to obtain a glasses detection model satisfying the target loss value.
In an example, the training unit 104 is further configured to: preprocess the human eye training images, wherein the preprocessing includes at least one of the following: increasing or decreasing brightness, adding noise, blurring the image, and randomly scaling the image.
With regard to the apparatus in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be described in detail here.
Fig. 4 is a block diagram illustrating an apparatus 400 for image processing according to an example embodiment. For example, the apparatus 400 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 4, the apparatus 400 may include one or more of the following components: processing components 402, memory 404, power components 406, multimedia components 408, audio components 410, input/output (I/O) interfaces 412, sensor components 414, and communication components 416.
The processing component 402 generally controls overall operation of the device 400, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 402 may include one or more processors 420 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 402 can include one or more modules that facilitate interaction between the processing component 402 and other components. For example, the processing component 402 can include a multimedia module to facilitate interaction between the multimedia component 408 and the processing component 402.
The memory 404 is configured to store various types of data to support operations at the device 400. Examples of such data include instructions for any application or method operating on the device 400, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 404 may be implemented by any type or combination of volatile or non-volatile storage devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power supply components 406 provide power to the various components of device 400. The power components 406 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power supplies for the apparatus 400.
The multimedia component 408 includes a screen that provides an output interface between the apparatus 400 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 408 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the apparatus 400 is in an operational mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 410 is configured to output and/or input audio signals. For example, audio component 410 includes a Microphone (MIC) configured to receive external audio signals when apparatus 400 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 404 or transmitted via the communication component 416. In some embodiments, audio component 410 also includes a speaker for outputting audio signals.
The I/O interface 412 provides an interface between the processing component 402 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 414 includes one or more sensors for providing various aspects of state assessment for the apparatus 400. For example, the sensor component 414 can detect the open/closed state of the apparatus 400 and the relative positioning of components, such as the display and keypad of the apparatus 400; it can also detect a change in the position of the apparatus 400 or of a component of the apparatus 400, the presence or absence of user contact with the apparatus 400, the orientation or acceleration/deceleration of the apparatus 400, and a change in the temperature of the apparatus 400. The sensor component 414 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 414 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 416 is configured to facilitate wired or wireless communication between the apparatus 400 and other devices. The apparatus 400 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 416 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 416 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 400 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 404 comprising instructions, executable by the processor 420 of the apparatus 400 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It is further understood that "a plurality" in this disclosure means two or more, and other quantifying terms are analogous. "And/or" describes an association between associated objects and indicates three possible relationships; for example, "A and/or B" may indicate: A alone, both A and B, or B alone. The character "/" generally indicates an "or" relationship between the objects it connects. The singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It will be further understood that the terms "first," "second," and the like are used to describe various information and that such information should not be limited by these terms. These terms are only used to distinguish one type of information from another and do not denote a particular order or importance. Indeed, the terms "first," "second," etc. are used interchangeably throughout. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure.
It is further to be understood that while operations are depicted in the drawings in a particular order, this is not to be understood as requiring that such operations be performed in the particular order shown or in serial order, or that all illustrated operations be performed, to achieve desirable results. In certain environments, multitasking and parallel processing may be advantageous.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An image processing method, comprising:
acquiring an image, wherein the image contains a complete human eye region;
performing glasses recognition on the image through a glasses detection model, and determining whether the image contains a glasses image;
segmenting the human eye region in the image according to the result of the glasses recognition, to obtain a human eye image;
and performing iris recognition based on the human eye image.
2. The method according to claim 1, wherein performing glasses recognition on the image through the glasses detection model and determining whether the image contains a glasses image comprises:
normalizing the image to obtain a normalized image;
invoking the glasses detection model, wherein the glasses detection model outputs, for an input image, the probability that the image contains a glasses image;
and inputting the normalized image into the glasses detection model, and determining whether the image contains a glasses image according to the output of the glasses detection model.
3. The method of claim 2, wherein the glasses detection model is trained by:
acquiring a training set, wherein the training set comprises a plurality of human eye training images, each correspondingly labeled as containing or not containing a glasses image;
inputting the plurality of human eye training images into a convolutional neural network, and obtaining glasses recognition results through the convolutional neural network;
and adjusting parameters of the convolutional neural network based on the glasses recognition results, the labels, and a loss function, to obtain the glasses detection model satisfying the loss value.
4. The method of claim 3, wherein the training step further comprises:
pre-processing the human eye training images, wherein the pre-processing comprises at least one of: increasing or decreasing brightness, adding noise, blurring the image, and randomly scaling the image.
5. An image processing apparatus characterized by comprising:
an acquisition unit configured to acquire an image, wherein the image contains a complete human eye region;
a recognition unit configured to perform glasses recognition on the image through a glasses detection model and determine whether the image contains a glasses image;
and a processing unit configured to segment the human eye region in the image according to the result of the glasses recognition to obtain a human eye image, and to perform iris recognition based on the human eye image.
6. The apparatus according to claim 5, wherein the recognition unit performs glasses recognition on the image through the glasses detection model and determines whether the image contains a glasses image by:
normalizing the image to obtain a normalized image;
invoking the glasses detection model, wherein the glasses detection model outputs, for an input image, the probability that the image contains a glasses image;
and inputting the normalized image into the glasses detection model, and determining whether the image contains a glasses image according to the output of the glasses detection model.
7. The apparatus of claim 6, wherein the obtaining unit is further configured to:
acquire a training set, wherein the training set comprises a plurality of human eye training images, each correspondingly labeled as containing or not containing a glasses image;
the device further comprises:
a training unit configured to train a glasses detection model by:
inputting the plurality of human eye training images into a convolutional neural network, and obtaining glasses recognition results through the convolutional neural network; and adjusting parameters of the convolutional neural network based on the glasses recognition results, the labels, and a loss function, to obtain the glasses detection model satisfying the loss value.
8. The apparatus of claim 7, wherein the training unit is further configured to:
pre-process the human eye training images, wherein the pre-processing comprises at least one of: increasing or decreasing brightness, adding noise, blurring the image, and randomly scaling the image.
9. An image processing apparatus characterized by comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to: performing the image processing method of any of claims 1-4.
10. A non-transitory computer-readable storage medium storing computer-executable instructions that, when executed by a processor, perform the image processing method of any one of claims 1-4.
CN202010228167.9A 2020-03-27 2020-03-27 Image processing method, device and storage medium Active CN111507202B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010228167.9A CN111507202B (en) 2020-03-27 2020-03-27 Image processing method, device and storage medium


Publications (2)

Publication Number Publication Date
CN111507202A CN111507202A (en) 2020-08-07
CN111507202B (en) 2023-04-18

Family

ID=71869016

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010228167.9A Active CN111507202B (en) 2020-03-27 2020-03-27 Image processing method, device and storage medium

Country Status (1)

Country Link
CN (1) CN111507202B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20140098364A (en) * 2013-01-31 2014-08-08 오형렬 Iris authentication device with means to avoid reflections from glasses
CN105637512A (en) * 2013-08-22 2016-06-01 贝斯普客公司 Method and system to create custom products
KR20170031542A (en) * 2015-09-11 2017-03-21 엘지전자 주식회사 Iris registration and authentication methods based on glasses detection information
CN107506708A (en) * 2017-08-14 2017-12-22 广东欧珀移动通信有限公司 Solve lock control method and Related product


Also Published As

Publication number Publication date
CN111507202A (en) 2020-08-07

Similar Documents

Publication Publication Date Title
CN106651955B (en) Method and device for positioning target object in picture
CN109784255B (en) Neural network training method and device and recognition method and device
US20200279120A1 (en) Method, apparatus and system for liveness detection, electronic device, and storage medium
CN110688951A (en) Image processing method and device, electronic equipment and storage medium
CN111553864B (en) Image restoration method and device, electronic equipment and storage medium
EP2977956A1 (en) Method, apparatus and device for segmenting an image
CN107944367B (en) Face key point detection method and device
CN110287671B (en) Verification method and device, electronic equipment and storage medium
CN107958223B (en) Face recognition method and device, mobile equipment and computer readable storage medium
CN106557759B (en) Signpost information acquisition method and device
CN109934275B (en) Image processing method and device, electronic equipment and storage medium
CN111243011A (en) Key point detection method and device, electronic equipment and storage medium
CN107038428B (en) Living body identification method and apparatus
CN107730448B (en) Beautifying method and device based on image processing
CN110909654A (en) Training image generation method and device, electronic equipment and storage medium
EP3113071A1 (en) Method and device for acquiring iris image
CN111368796A (en) Face image processing method and device, electronic equipment and storage medium
EP3208742A1 (en) Method and apparatus for detecting pressure
CN110751659A (en) Image segmentation method and device, terminal and storage medium
CN112188091B (en) Face information identification method and device, electronic equipment and storage medium
CN109255784B (en) Image processing method and device, electronic equipment and storage medium
CN111626086A (en) Living body detection method, living body detection device, living body detection system, electronic device, and storage medium
CN108171222B (en) Real-time video classification method and device based on multi-stream neural network
CN108596957B (en) Object tracking method and device
CN107729886B (en) Method and device for processing face image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100081 room 701, floor 7, Fuhai international port, Haidian District, Beijing

Applicant after: Beijing wanlihong Technology Co.,Ltd.

Address before: 100081 1504, floor 15, Fuhai international port, Daliushu Road, Haidian District, Beijing

Applicant before: BEIJING SUPERRED TECHNOLOGY Co.,Ltd.

GR01 Patent grant