CN115546846A - Image recognition processing method and device, electronic equipment and storage medium - Google Patents

Image recognition processing method and device, electronic equipment and storage medium

Info

Publication number
CN115546846A
Authority
CN
China
Prior art keywords
image
feature
algorithm
target
recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210910090.2A
Other languages
Chinese (zh)
Inventor
康春生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lumi United Technology Co Ltd
Original Assignee
Lumi United Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lumi United Technology Co Ltd filed Critical Lumi United Technology Co Ltd
Priority to CN202210910090.2A priority Critical patent/CN115546846A/en
Publication of CN115546846A publication Critical patent/CN115546846A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/1365Matching; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/1347Preprocessing; Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/1382Detecting the live character of the finger, i.e. distinguishing from a fake or cadaver finger
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Collating Specific Patterns (AREA)

Abstract

An embodiment of the present application provides an image recognition processing method and apparatus, an electronic device, and a storage medium, relating to the field of computer technology. The method includes: acquiring an image to be recognized and acquiring a plurality of feature matching algorithms, where the feature matching algorithms are adapted to attribute information of a target object in the image to be recognized; performing image feature extraction on the image to be recognized according to each of the feature matching algorithms to obtain a plurality of image features; performing feature fusion on the image features to obtain a target feature; and recognizing the target object in the image to be recognized according to the target feature to obtain a recognition result. The embodiment addresses the low success rate of recognition, such as biometric recognition, in the related art.

Description

Image recognition processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image recognition processing method and apparatus, an electronic device, and a storage medium.
Background
With the development of computer technology, image recognition processing has been widely applied to various fields, such as image-based biometric recognition. For example, in the public safety field, the biometric recognition may be face recognition, and if the face recognition of a registered user of a certain building is successful, the registered user can smoothly pass through the entrance of the building.
At present, the recognition success rate of biometric recognition depends not only on the image to be recognized (better image quality raises the success rate) but also on the biometric features of the target itself. Taking fingerprint recognition as an example, fingerprint quality varies greatly between people, and people with poor fingerprint quality are recognized less reliably than people with good fingerprint quality. In addition, the electronic device performing biometric recognition is susceptible to external factors; an access control device, for example, is affected by seasonal variation and artificial wear, which further lowers the success rate of biometric recognition.
Therefore, how to improve the recognition success rate of the biometric recognition still remains to be solved.
Disclosure of Invention
Embodiments of the present application provide an image recognition processing method, an image recognition processing apparatus, an electronic device, and a storage medium, which can solve the problem of low success rate of recognition, such as biometric recognition, in the related art.
The technical scheme is as follows:
according to an aspect of an embodiment of the present application, an image recognition processing method includes: acquiring an image to be identified and acquiring a plurality of feature matching algorithms, wherein the feature matching algorithms are adapted to attribute information of a target object in the image to be identified; according to the multiple feature matching algorithms, respectively carrying out image feature extraction on the image to be recognized to obtain multiple image features; performing feature fusion on the image features to obtain target features; and identifying the target object in the image to be identified according to the target characteristic to obtain an identification result.
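The four claimed steps (obtain adapted algorithms, extract one feature per algorithm, fuse the features, recognize) can be sketched as follows. The concrete extractors, the concatenation-based fusion, and the matcher below are illustrative assumptions, since the claim does not fix any particular algorithm:

```python
import numpy as np

def recognize(image, algorithms, fuse, match):
    """Sketch of the claimed pipeline: extract one image feature per
    adapted feature matching algorithm, fuse them into a target
    feature, then recognize using the target feature."""
    features = [algo(image) for algo in algorithms]  # one feature per algorithm
    target_feature = fuse(features)                  # feature fusion
    return match(target_feature)                     # recognition result

# Illustrative stand-ins: two "feature matching algorithms" and
# concatenation as the fusion strategy (all are assumptions).
algo_a = lambda img: img.mean(axis=0)    # e.g. column-wise statistics
algo_b = lambda img: img.mean(axis=1)    # e.g. row-wise statistics
fuse = lambda feats: np.concatenate(feats)
match = lambda feat: "registered" if feat.sum() > 0 else "unknown"

image = np.ones((4, 4))
print(recognize(image, [algo_a, algo_b], fuse, match))
```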
According to an aspect of an embodiment of the present application, an image recognition processing apparatus includes: an algorithm acquisition module for acquiring an image to be recognized and a plurality of feature matching algorithms, the feature matching algorithms being adapted to attribute information of a target object in the image to be recognized; a feature extraction module for extracting image features from the image to be recognized according to each of the feature matching algorithms to obtain a plurality of image features; a feature fusion module for performing feature fusion on the plurality of image features to obtain a target feature; and a feature recognition module for recognizing the target object in the image to be recognized according to the target feature to obtain a recognition result.
In an exemplary embodiment, the algorithm obtaining module includes: the information extraction unit is used for extracting the attribute information from the image to be identified; and the algorithm selecting unit is used for selecting a plurality of feature matching algorithms which are adaptive to the attribute information from an algorithm set, and the algorithm set comprises a plurality of candidate algorithms which can be selected and used for image feature extraction.
In an exemplary embodiment, the algorithm selecting unit includes: an algorithm searching subunit for searching the algorithm set for a plurality of corresponding candidate algorithms according to the attribute information; and an algorithm adaptation subunit for taking, based on the adaptation degree between each candidate algorithm and the attribute information, the candidate algorithms whose adaptation degree satisfies an adaptation condition as the feature matching algorithms.
In an exemplary embodiment, the attribute information includes a primary attribute and a secondary attribute associated with the primary attribute; the device further comprises: the set construction module is used for constructing the algorithm set; the set building module comprises: the algorithm classification unit is used for classifying various candidate algorithms according to different primary attributes to obtain a plurality of algorithm categories, and the candidate algorithms in each algorithm category correspond to the same primary attribute; the algorithm adaptation unit is used for adapting the candidate algorithm and the secondary attribute associated with the corresponding primary attribute of the candidate algorithm aiming at the candidate algorithm in each algorithm category; and the path construction unit is used for constructing a path between the adapted candidate algorithm and the secondary attribute, and configuring adaptation degree for the constructed path to obtain the algorithm set.
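The algorithm-set construction described above (classify candidates by primary attribute, adapt them to associated secondary attributes, and weight each path with an adaptation degree) can be modeled as a nested mapping. The attribute names, algorithm names, and scores below are invented for illustration and do not come from the patent:

```python
# Hypothetical algorithm set: primary attribute -> secondary attribute
# -> [(candidate algorithm name, adaptation degree of the path)].
algorithm_set = {
    "fingerprint": {                 # primary attribute
        "low_quality": [             # secondary attribute
            ("minutiae_robust", 0.9),
            ("ridge_frequency", 0.7),
            ("texture_lbp", 0.4),
        ],
        "high_quality": [
            ("minutiae_fast", 0.95),
            ("texture_lbp", 0.6),
        ],
    },
}

def select_algorithms(primary, secondary, threshold=0.5):
    """Look up the candidates corresponding to the secondary attribute
    and keep those whose adaptation degree satisfies the adaptation
    condition (here, an illustrative threshold)."""
    candidates = algorithm_set.get(primary, {}).get(secondary, [])
    return [name for name, degree in candidates if degree >= threshold]

print(select_algorithms("fingerprint", "low_quality"))
```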
In an exemplary embodiment, the algorithmic search subunit comprises: and the corresponding subunit is used for searching the candidate algorithm with the corresponding relation with the secondary attribute from the algorithm set based on the corresponding relation between the secondary attribute and the candidate algorithm in the algorithm set.
In an exemplary embodiment, the feature recognition module includes: a feature segmentation unit for segmenting the target feature to obtain a plurality of target feature subsections; a feature recognition unit for performing biometric recognition on the target object in the image to be recognized according to each target feature group to obtain a recognition result corresponding to each target feature group, where each target feature group includes a set number of the target feature subsections; and a result generation unit for obtaining the final recognition result from the recognition results corresponding to the target feature groups.
In an exemplary embodiment, the feature recognition unit includes: a sample obtaining subunit, configured to obtain, for each sample feature in a sample set, a sample feature group corresponding to each target feature group, where the sample feature group includes a set number of sample feature subsections in a plurality of sample feature subsections, and the sample feature subsections are obtained by performing segmentation processing on the sample features; and the similarity operator unit is used for calculating the similarity between each target feature group and the acquired sample feature group as the corresponding identification result of each target feature group.
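The segmentation-and-grouping scheme above can be sketched with vector features: split a feature into subsections, form groups of a set number of subsections, and score each group against the corresponding sample group. Cosine similarity is used here only as an assumed measure; the patent says "similarity" without naming one:

```python
import numpy as np

def split_into_subsections(feature, n_subsections):
    """Segment a feature vector into subsections (array_split also
    handles lengths that do not divide evenly)."""
    return np.array_split(feature, n_subsections)

def group_similarity(target_group, sample_group):
    """Cosine similarity between a target feature group and the
    corresponding sample feature group (assumed measure)."""
    t = np.concatenate(target_group)
    s = np.concatenate(sample_group)
    return float(np.dot(t, s) / (np.linalg.norm(t) * np.linalg.norm(s)))

target = np.arange(8, dtype=float)
sample = np.arange(8, dtype=float)
t_subs = split_into_subsections(target, 4)
s_subs = split_into_subsections(sample, 4)
# A "group" of a set number (here 2) of the 4 subsections.
sim = group_similarity(t_subs[:2], s_subs[:2])
print(round(sim, 3))  # identical features give similarity 1.0
```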
In an exemplary embodiment, the target object includes a fingerprint.
According to an aspect of an embodiment of the present application, an electronic device includes: the system comprises at least one processor, at least one memory and at least one communication bus, wherein the memory is stored with computer programs, and the processor reads the computer programs in the memory through the communication bus; the computer program, when executed by a processor, implements the image recognition processing method as described above.
According to an aspect of an embodiment of the present application, a storage medium has a computer program stored thereon, and the computer program, when executed by a processor, implements the image recognition processing method as described above.
According to an aspect of an embodiment of the present application, a computer program product includes a computer program, the computer program is stored in a storage medium, a processor of a computer device reads the computer program from the storage medium, and the processor executes the computer program, so that the computer device implements the image recognition processing method as described above when executing the computer program.
The technical scheme provided by the application brings the beneficial effects that:
in the technical solution, an image to be recognized and a plurality of feature matching algorithms adapted to attribute information of a target object in that image are obtained; image features are extracted from the image according to each feature matching algorithm to obtain a plurality of image features; a target feature is obtained by fusing the image features; and the target object is finally recognized according to the target feature to obtain a recognition result. Because fingerprint quality differs between images from people with poor fingerprints and people with good fingerprints, the feature matching algorithms adapted to images of different fingerprint quality also differ. In other words, the adaptation yields a more effective set of feature matching algorithms for each image to be recognized, so the success rate of biometric recognition using those algorithms is higher, which effectively addresses the low recognition success rate, such as in biometric recognition, of the related art.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments of the present application will be briefly described below.
FIG. 1 is a schematic illustration of an implementation environment according to an embodiment of the present application;
FIG. 2 is a flow diagram illustrating a method of image recognition processing according to an exemplary embodiment;
FIG. 3 is a flow diagram for one embodiment of step 350 of the corresponding embodiment of FIG. 2;
FIG. 4 is a flow diagram for one embodiment of step 310 in a corresponding embodiment of FIG. 2;
FIG. 5 is a flowchart of one embodiment of step 313 in the corresponding embodiment of FIG. 4;
FIG. 6 is a flow diagram illustrating another method of image recognition processing in accordance with an exemplary embodiment;
FIG. 7 is a diagram of one embodiment of a set of algorithms according to a corresponding embodiment of FIG. 6;
FIG. 8 is a flowchart of one embodiment of step 370 in the corresponding embodiment of FIG. 2;
FIG. 9 is a diagram illustrating an embodiment of an image recognition processing method in an application scenario;
FIG. 10 is a schematic diagram of a fingerprint feature extraction architecture involved in the application scenario of FIG. 9;
fig. 11 is a block diagram showing a configuration of an image recognition processing apparatus according to an exemplary embodiment;
FIG. 12 is a hardware block diagram of an electronic device shown in accordance with an exemplary embodiment;
fig. 13 is a block diagram illustrating a configuration of an electronic device according to an example embodiment.
Detailed Description
Reference will now be made in detail to the embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.
The following is a description and an explanation of several terms referred to in this application:
FRR, or False Reject Rate, is the probability that the biometric feature of a registered user is not successfully recognized. A registered user is a user whose biometric feature is stored in the biometric feature library. Accordingly, the complement of the false reject rate is the pass rate, i.e., pass rate = 1 - false reject rate, which is the probability that the biometric feature of a registered user is successfully recognized.
FAR, or False Acceptance Rate, may also be understood as the false recognition rate: the probability that the biometric feature of another person is mistaken for the biometric feature of a registered user in the biometric feature library, so that the other person passes the check. Here, "another person" refers to a user whose biometric feature is not stored in the biometric feature library.
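The two rates defined above can be computed directly from trial counts. A minimal sketch follows; the counts used here are hypothetical numbers for illustration only:

```python
def false_reject_rate(rejected_genuine, genuine_attempts):
    """FRR: fraction of registered users' attempts that are wrongly rejected."""
    return rejected_genuine / genuine_attempts

def false_accept_rate(accepted_impostor, impostor_attempts):
    """FAR: fraction of non-registered users' attempts that are wrongly accepted."""
    return accepted_impostor / impostor_attempts

# Hypothetical trial counts for illustration.
frr = false_reject_rate(3, 100)   # 3 of 100 genuine attempts rejected
far = false_accept_rate(1, 1000)  # 1 of 1000 impostor attempts accepted
pass_rate = 1 - frr               # pass rate = 1 - false reject rate, as above
print(frr, far, pass_rate)
```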
As described above, the recognition success rate of biometric recognition is not only affected by the image to be recognized but also depends on the biometric characteristic itself of the target, and in addition, the electronic device for biometric recognition may also cause a low recognition success rate of biometric recognition due to the influence of external factors.
At present, the related art proposes an improvement scheme for an image to be recognized, specifically: the quality control is carried out on the image to be recognized through one or more modes of image preprocessing, image quality evaluation, image characteristic adjustment and the like, so that the image quality of the image to be recognized is improved, and the recognition success rate of biological characteristic recognition is improved.
However, although the above scheme can improve the image quality of the image to be recognized to some extent, it does little for the low recognition success rate caused by the target's biometric features themselves or by the electronic device. For example, when fingerprint quality differs widely between people, or external factors cause large changes in the fingerprint, improving the image quality of the image to be recognized has little effect.
As can be seen from the above, the related art still has the defect of low success rate of biometric identification.
Therefore, the image recognition processing method provided by the application can effectively improve the success rate of target recognition. The method is performed by an image recognition processing apparatus, which may be deployed in an electronic device having a biometric recognition function, such as an image capture device, a desktop computer, a notebook computer, a tablet computer, or a server.
To make the objects, technical solutions and advantages of the present application more clear, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an implementation environment related to an image recognition processing method. As shown in FIG. 1 (a), the implementation environment includes a gateway 110, an image capture device 130 disposed in the gateway 110, and a server 150.
The image capturing device 130 may be a video camera, a camera, or an electronic device such as a smart phone and a tablet computer, which is configured with a camera, and may also be an electronic device having a function of capturing a specific image texture, such as an intelligent door lock, which is not limited herein.
The server 150 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, middleware service, a domain name service, a security service, a CDN, a big data and artificial intelligence platform, and the like. For example, in the present implementation environment, the biometric service is provided by the server 150.
The image capturing device 130 is disposed on the gateway 110 and communicates with the gateway 110 through its own communication module, thereby interacting with the gateway 110. In one application scenario, the image capturing device 130 is deployed in the gateway 110 by accessing the gateway 110 through a local area network: the gateway 110 first establishes the local area network, and the image capturing device 130 joins it by connecting to the gateway 110. Such local area networks include, but are not limited to: Bluetooth, WIFI, ZigBee, or LoRa.
The server 150 establishes a communication connection with the gateway 110 in advance, and data transmission between the server 150 and the gateway 110 is realized through the communication connection. For example, the transmitted data includes at least an image to be recognized and the like.
In one application scenario, the image capturing device 130 captures and captures an image to be recognized, and transmits the image to be recognized to the server 150 through the gateway 110, so that the server 150 provides a biometric identification service.
For the server 150, after the image to be recognized is received, a biometric service may be invoked for it, including: obtaining a plurality of feature matching algorithms adapted to attribute information of the target object in the image to be recognized, extracting image features from the image according to each feature matching algorithm to obtain a plurality of image features, obtaining a target feature by fusing the plurality of image features, and finally recognizing the target object in the image according to the target feature to obtain a recognition result, thereby addressing the low success rate of biometric recognition in the related art.
Of course, in other application scenarios, for an image capturing device with a biometric feature recognition function, after an image to be recognized is captured and captured, a target object in the image to be recognized may be directly recognized, and a recognition result obtained thereby may be applied in a relevant manner. For example, the image capturing device may be an entrance guard device having a fingerprint recognition function, so that a visitor who passes through fingerprint recognition can pass through the entrance guard smoothly.
Unlike FIG. 1 (a), the implementation environment in FIG. 1 (b) further includes a user terminal 170.
Specifically, the user terminal 170 may also be considered as a user terminal or a terminal, which is operated by a client having a target display function, and the user terminal 170 may be an electronic device such as a desktop computer, a notebook computer, a tablet computer, a smart phone, and the like, which is not limited herein. The client has a target display function, and may be in the form of an application program or a web page, and accordingly, an interface for the client to perform target display may be in the form of a program window or a web page, which is not limited herein.
A communication connection is pre-established between the user terminal 170 and the server 150, and data transmission between the user terminal 170 and the server 150 is realized through the communication connection. For example, the transmitted data may be the recognition result or the like.
In one application scenario, the user terminal 170 initiates a biometric request to the server 150 by means of a running client, requesting the server 150 to provide a biometric service for the image to be recognized. After the server 150 obtains the recognition result by invoking the biometric service, the recognition result can be returned to the user terminal 170 and displayed there.
Certainly, in other application scenarios, for a user terminal configured with a camera, the user terminal may further integrate functions such as biometric feature recognition and display, taking face recognition as an example, after the user terminal shoots and acquires an image to be recognized, the user terminal may also directly recognize a face in the image to be recognized and display the recognized face on a corresponding interface, or perform related applications based on the recognized face, for example, unlocking a smart phone with the face, and the like.
Referring to fig. 2, an embodiment of the present application provides an image recognition processing method, which is applicable to an electronic device, where the electronic device may be specifically the image capturing device 130, the server 150, the user terminal 170, and the like in the implementation environment shown in fig. 1.
In the following method embodiments, for convenience of description, the main execution subject of each step of the method is taken as an electronic device for illustration, but the method is not particularly limited to this configuration.
As shown in fig. 2, the method may include the steps of:
and 310, acquiring an image to be recognized, and acquiring a plurality of feature matching algorithms adapted to the attribute information of the target object in the image to be recognized.
First, the image to be recognized is obtained by photographing a target, and the target corresponds to the target object in the image to be recognized; in other words, biometric recognition is performed on the target object in the image. The target may be a physiological feature such as a face, fingerprint, palm print, iris, auricle, or pulse, or a behavioral feature such as gait, handwriting, or voice, so the image-based biometric recognition method suits different application scenarios. For example, in a public security scenario, a visitor who passes fingerprint recognition can pass through the entrance guard; in a criminal investigation scenario, a person is tracked through face recognition.
It is to be understood that the capture may be a single shot or continuous shooting. For the same target, continuous shooting yields a video, and the image to be recognized may be one frame of that video; a single shot may yield several pictures, and the image to be recognized may be one of them. In other words, the biometric recognition in the present embodiment is performed in units of frames (pictures).
Regarding the acquisition of the image to be recognized, the image may be acquired by the image acquisition device in real time, or it may be an image captured in a historical period and stored in advance in the server. That is, after the image acquisition device captures the image, the electronic device may process it in real time, or store it for later processing, for example at a time designated by an operator. The electronic device may therefore obtain either a newly captured image or a pre-stored one, i.e., retrieve an image captured in a historical period; this embodiment does not limit the choice.
In one possible implementation, the target object includes a biometric feature, which may be a physiological feature or a behavioral feature. For example, the target object may be a person's fingerprint or a person's face. Accordingly, different target objects may have different attribute information: if the target object is a face, the attribute information may be gender or face shape; if the target object is a fingerprint, the attribute information may be fingerprint quality, fingerprint type, and the like. This embodiment does not limit the attribute information of the target object.
Secondly, "adapted" means that the success rate of recognizing the biometric features of the image to be recognized using the feature matching algorithm is high. In other words, an adapted feature matching algorithm effectively improves the success rate of the subsequent biometric recognition.
Regarding the acquisition of the adapted feature matching algorithms, in one possible implementation, a plurality of feature matching algorithms adapted to the attribute information are obtained based on the attribute information of the target object in the image to be recognized. The attribute information may indicate the recognizability of the target object in the image to be recognized, and may also indicate the type of the target object. The attribute information may be extracted from the image by the electronic device, or may be generated by the image acquisition device when the image is captured and sent to the electronic device, which is not limited here.
It should be noted that different recognizability essentially means different probabilities of successfully recognizing the target object in the image to be recognized: the higher the recognizability, the higher that probability. Accordingly, the feature matching algorithms adapted to the attribute information differ with recognizability; for example, an image containing a target object of low recognizability is adapted to feature matching algorithms that can more effectively improve the recognition success rate. In this way, biometric recognition can be performed more effectively on different images whose image quality varies widely or whose targets change greatly due to external factors, thereby improving the success rate of biometric recognition.
Taking the target object in the image to be recognized as a fingerprint and the attribute information as fingerprint quality as an example, the feature matching algorithms include but are not limited to: an algorithm for extracting minutiae features, an algorithm for extracting thin-line structure features, an algorithm for extracting global features, an algorithm for extracting local features, an algorithm for extracting topological structure features, an algorithm for extracting mixed features of thin-line structure features and minutiae features, a triangulation algorithm, a triangulation matching algorithm, a direction field restoration algorithm, and the like.
Then, for an image to be recognized with good fingerprint quality, the adapted feature matching algorithms include but are not limited to: an algorithm for extracting minutiae features, a triangulation algorithm, a triangulation matching algorithm, and the like.
For an image to be recognized with poor fingerprint quality, the adapted feature matching algorithms include but are not limited to: an algorithm for extracting local features, an algorithm for extracting topological structure features, and the like.
It is worth mentioning that when the attribute information is fingerprint quality, the quality can be represented by a configured quality score. For example, good fingerprint quality may correspond to a quality score of 70 to 100, indicating high recognizability of the fingerprint in the image to be recognized; poor fingerprint quality may correspond to a quality score below 70, indicating low recognizability. This is not specifically limited here.
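For illustration only, the quality-score-based adaptation described above can be sketched as follows. The function name, the score band boundary of 70, and the algorithm identifiers are assumptions drawn from this example, not a prescribed implementation:

```python
def adapt_algorithms_by_quality(quality_score):
    """Select feature matching algorithms adapted to a fingerprint
    quality score in [0, 100]; the 70-point boundary follows the
    example quality bands described above."""
    if quality_score >= 70:
        # good fingerprint quality: high recognizability
        return ["minutiae_features", "triangulation", "triangulation_matching"]
    # poor fingerprint quality: low recognizability
    return ["local_features", "topological_structure_features"]
```

In practice the score bands and algorithm lists would be configured per application scenario rather than hard-coded.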
Step 330: extract image features from the image to be recognized according to each of the plurality of feature matching algorithms, obtaining a plurality of image features.
Each image feature corresponds to one feature matching algorithm. An image feature is an accurate description of the target object in the image to be recognized. It should be understood that different target objects yield different extracted image features; in other words, the image features uniquely identify the target object in the image to be recognized.
After the plurality of feature matching algorithms are determined, the corresponding plurality of image features can be obtained through image feature extraction. It can be understood that the plurality of image features accurately describe the target object in the image to be recognized from different dimensions.
Taking the target object as a fingerprint as an example, minutiae features can be extracted from the image to be recognized by the algorithm for extracting minutiae features, and these minutiae features accurately describe the minutiae points of the fingerprint in the image; thin-line structure features can be extracted by the algorithm for extracting thin-line structure features, and these features accurately describe the texture of the fingerprint in the image.
Step 350: perform feature fusion on the plurality of image features to obtain a target feature.
Feature fusion includes but is not limited to: consistency processing, binarization processing, normalization processing, and the like. It should be noted that, as the inventor realized, since the plurality of image features are obtained by different feature matching algorithms, their feature vector dimensions differ; consistency processing therefore refers to unifying the feature vector dimensions of the plurality of image features.
In one possible implementation, as shown in Fig. 3, step 350 may include the following steps: step 351, performing consistency processing on the feature vector dimensions of the plurality of image features to obtain an intermediate feature; and step 355, performing binarization processing on the intermediate feature to obtain the target feature.
The feature fusion may be implemented using a fully connected network, or by other machine learning methods, which is not limited here.
In this way, the target feature obtained by fusing the plurality of image features accurately describes the target object in the image to be recognized from different dimensions simultaneously, which improves the robustness of biometric recognition and thus its success rate.
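A minimal sketch of steps 351 and 355 follows. Zero-padding for the consistency processing and an element-wise mean before binarization are illustrative assumptions; the embodiment leaves the concrete fusion method open (for example, a fully connected network):

```python
def fuse_features(features, threshold=0.5):
    """Feature fusion per steps 351/355: unify feature vector
    dimensions (consistency processing), then binarize the
    intermediate feature into a target feature."""
    # Step 351: consistency processing -- pad every feature vector
    # to the largest dimension so they can be combined element-wise.
    dim = max(len(f) for f in features)
    padded = [f + [0.0] * (dim - len(f)) for f in features]
    # Combine the aligned vectors into one intermediate feature
    # (here: element-wise mean across the feature matching algorithms).
    intermediate = [sum(col) / len(padded) for col in zip(*padded)]
    # Step 355: binarization -- map each component to 0 or 1.
    return [1 if x >= threshold else 0 for x in intermediate]
```

The resulting binary vector is the target feature used in the recognition step below.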
Step 370: recognize the target object in the image to be recognized according to the target feature, obtaining a recognition result.
The target object may be recognized by target classification, target retrieval, and the like, which is not limited here. Accordingly, the recognition result is used to indicate the category to which the target object belongs, the sample matched with the target object, whether the biometric recognition succeeded, and so on.
In one possible implementation, the recognition process based on target classification may include the following steps: performing category prediction on the target object in the image to be recognized according to the target feature to obtain the predicted category of the target object; and generating the recognition result according to the predicted category of the target object.
For example, suppose the target object is a face to be recognized, which suits the target-classification recognition process, and the face categories include female faces and male faces. Category prediction essentially consists of calculating, from the face features, the probability P1 that the face to be recognized belongs to a female face and the probability P2 that it belongs to a male face.
If P1 is greater than or equal to P2, the face to be recognized is predicted to belong to a female, that is, the predicted category is a female face; otherwise, if P1 is less than P2, the face to be recognized is predicted to belong to a male, that is, the predicted category is a male face.
Further, the predicted category of the face to be recognized may be used directly as the recognition result, or used as the recognition result only after a set condition of the biometric recognition is satisfied. The set condition here means that the probability of the predicted face category exceeds a probability threshold; for example, with a probability threshold of 0.9, the predicted face category is used as the recognition result only when P1 or P2 is greater than 0.9.
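The target-classification flow with the set condition can be sketched as follows; the function name, the category labels, and the failure sentinel are assumptions of this sketch:

```python
def classify_face(p_female, p_male, prob_threshold=0.9):
    """Target-classification recognition: return the predicted face
    category as the recognition result only when its probability
    exceeds the configured threshold; otherwise report failure."""
    category, prob = ("female", p_female) if p_female >= p_male else ("male", p_male)
    if prob > prob_threshold:
        # set condition of the biometric recognition is satisfied
        return category
    return "recognition_failed"
```

For instance, probabilities (0.95, 0.05) yield the female-face category, while (0.6, 0.4) fail the set condition.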
In one possible implementation, the recognition process based on target retrieval may include the following steps: traversing the sample features of each sample image in the sample library and calculating the similarity between the target feature and the sample feature of each sample image; and generating the recognition result according to the calculated similarities.
For another example, faces also suit the target-retrieval recognition process. Suppose the sample library stores the sample features A1, B1, and C1 of registered users A, B, and C; all sample features in the library are traversed, and the similarity between the face feature and each of A1, B1, and C1 is calculated.
Further, the face of the registered user corresponding to the sample feature with the greatest similarity may be used as the recognition result; alternatively, the face of that registered user is used as the recognition result only after the greatest similarity satisfies the set condition of the biometric recognition, and a recognition result indicating that face recognition failed is generated when no similarity satisfies the set condition. The set condition here means that the similarity exceeds a similarity threshold; for example, with a similarity threshold of 0.9, if the similarities between the face feature and the sample features A1, B1, and C1 are all less than 0.9, the recognition result indicates that face recognition failed.
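The target-retrieval flow can be sketched as follows. Cosine similarity is an illustrative choice (the embodiment does not fix a metric), and the dictionary layout of the sample library is an assumption:

```python
def retrieve_identity(target, sample_library, sim_threshold=0.9):
    """Target-retrieval recognition: traverse every sample feature,
    compute its similarity to the target feature, and return the
    registered user with the greatest similarity, provided the set
    condition (similarity above the threshold) holds."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    best_user, best_sim = None, 0.0
    for user, sample in sample_library.items():  # e.g. {"A": A1, "B": B1, "C": C1}
        sim = cosine(target, sample)
        if sim > best_sim:
            best_user, best_sim = user, sim
    if best_sim > sim_threshold:
        return best_user
    return "recognition_failed"  # no sample feature is similar enough
```

Note that the traversal cost grows linearly with the number of sample features, which motivates the segmented matching described later in the embodiment.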
Through the above process, a feature matching algorithm that recognizes each image to be recognized more effectively is obtained by adapting the feature matching algorithm to the image, so that the success rate of biometric recognition using that algorithm is higher. This effectively solves the problem of the low recognition success rate of biometric recognition in the related art.
Referring to FIG. 4, in an exemplary embodiment, step 310 may include the steps of:
Step 311: extract attribute information from the image to be recognized.
The attribute information is used to indicate the recognizability of the target object in the image to be recognized, and may also indicate the type of the target object, and so on.
Taking the target object as a fingerprint as an example, the attribute information includes but is not limited to: fingerprint type, fingerprint quality, number of minutiae, minutiae type, number of pseudo minutiae, dryness/wetness of the fingerprint, image quality, image distortion, and the like. On one hand, good fingerprint quality, a large number of minutiae, a small number of pseudo minutiae, a dry fingerprint, good image quality, and an undistorted image all indicate high recognizability of the fingerprint in the image to be recognized; conversely, poor fingerprint quality, a small or missing number of minutiae, a large number of pseudo minutiae, a wet fingerprint, poor image quality, and image distortion all indicate low recognizability. On the other hand, the fingerprint type and the minutiae type indicate the type of the fingerprint in the image to be recognized.
Step 313: select, from an algorithm set, a plurality of feature matching algorithms adapted to the attribute information.
The algorithm set includes at least candidate algorithms for extracting image features.
When the target object is a fingerprint, the algorithm set may be constructed from at least two of the following candidate algorithms: an algorithm for extracting minutiae features, an algorithm for extracting thin-line structure features, an algorithm for extracting global features, an algorithm for extracting local features, an algorithm for extracting topological structure features, an algorithm for extracting mixed features of thin-line structure features and minutiae features, a triangulation algorithm, a triangulation matching algorithm, a direction field restoration algorithm, and the like.
Regarding the selection of the adapted feature matching algorithms, in one possible implementation, the adapted feature matching algorithms are selected randomly from the algorithm set; in another possible implementation, the feature matching algorithms whose adaptation degree with the attribute information satisfies an adaptation condition are selected from the algorithm set.
It should be noted that the adaptation degree refers to the degree of fit between a feature matching algorithm and the attribute information of the image to be recognized; the higher the adaptation degree, the better the fit, and the more effectively the recognition success rate of biometric recognition can be improved. The adaptation degree may be represented by a weight configured for each candidate algorithm when the algorithm set is constructed, or by the false recognition rate and/or rejection rate obtained by testing each candidate algorithm on the same test set. The adaptation condition aims to screen out feature matching algorithms that can effectively improve the recognition success rate; it may refer to the top-K feature matching algorithms by adaptation degree (e.g., K = 2), or to feature matching algorithms whose adaptation degree exceeds an adaptation threshold. Both the adaptation degree and the adaptation condition can be set flexibly according to the actual needs of the application scenario and are not limited here.
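Both flavors of the adaptation condition (top-K by degree, or a degree threshold) can be sketched over a list of (algorithm, adaptation degree) pairs; the function name and the pair layout are assumptions of this sketch:

```python
def screen_by_adaptation(candidates, top_k=None, threshold=None):
    """Apply the adaptation condition to (algorithm, adaptation degree)
    pairs: keep either the top-K algorithms by degree, or those whose
    degree exceeds the adaptation threshold."""
    ranked = sorted(candidates, key=lambda item: item[1], reverse=True)
    if top_k is not None:
        return [name for name, _ in ranked[:top_k]]
    return [name for name, degree in ranked if degree > threshold]
```

Either call form returns the names of the screened feature matching algorithms in descending order of adaptation degree.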
Still taking the target object as a fingerprint, for an image to be recognized whose attribute information indicates good fingerprint quality, a large number of minutiae, and a dry fingerprint, the adapted feature matching algorithms may be at least two of the following: an algorithm for extracting minutiae features, a triangulation algorithm, a triangulation matching algorithm, and the like.
For an image to be recognized whose attribute information indicates poor fingerprint quality, a small or missing number of minutiae, a large number of pseudo minutiae, and a wet fingerprint, the adapted feature matching algorithms may be at least two of the following: an algorithm for extracting local features, an algorithm for extracting topological structure features, and the like.
For an image to be recognized whose attribute information indicates poor image quality and large image distortion, the adapted feature matching algorithms may be at least two of the following: an algorithm for extracting local features, a direction field restoration algorithm, and the like.
With the cooperation of this embodiment, the attribute information extracted from the image to be recognized provides a basis and support for adapting the feature matching algorithms, thereby improving the robustness of biometric recognition and thus its success rate.
Referring to FIG. 5, in an exemplary embodiment, step 313 may include the steps of:
Step 3131: search the algorithm set for a plurality of corresponding candidate algorithms according to the attribute information.
That is to say, the algorithm set includes not only a plurality of candidate algorithms for image feature extraction, but also the correspondence between attribute information and candidate algorithms. Based on this correspondence, the candidate algorithms corresponding to the attribute information can be found in the algorithm set and used as the feature matching algorithms.
In one possible implementation, the attribute information includes a primary attribute and a secondary attribute associated with the primary attribute. The primary attribute may refer to a criterion for the type of the target object in the image to be recognized, such as the fingerprint type, in which case the associated secondary attribute refers to a sub-criterion subordinate to that criterion, such as the arch pattern within the fingerprint type. Alternatively, the primary attribute refers to a criterion for measuring the recognizability of the target object, such as the fingerprint quality, in which case the secondary attribute refers to a sub-criterion subordinate to that criterion, such as good fingerprint quality or poor fingerprint quality.
As mentioned above, when the target object is a fingerprint, the attribute information includes but is not limited to: fingerprint type, fingerprint quality, number of minutiae, minutiae type, number of pseudo minutiae, dryness/wetness of the fingerprint, image quality, image distortion, and the like.
Based on this, when the target object is a fingerprint, the primary attributes include but are not limited to: fingerprint type, fingerprint quality, number of minutiae, minutiae type, number of pseudo minutiae, dryness/wetness of the fingerprint, image quality, image distortion, and the like.
Then, for the above primary attributes, the associated secondary attributes are as follows: the secondary attributes associated with the fingerprint type include arch, loop, and whorl patterns; the secondary attributes associated with the fingerprint quality include good fingerprint quality and poor fingerprint quality; the secondary attributes associated with the number of minutiae include a large number of minutiae and a small number of minutiae; the secondary attributes associated with the minutiae type include core points, delta points, termination points (also regarded as ending points), bifurcation points (also regarded as branch points), crossing points, and isolated points; the secondary attributes associated with the number of pseudo minutiae include a large number of pseudo minutiae and a small number of pseudo minutiae; the secondary attributes associated with the dryness/wetness of the fingerprint include a dry fingerprint and a wet fingerprint; the secondary attributes associated with the image quality include good image quality and poor image quality; and the secondary attributes associated with the image distortion include large image distortion and small image distortion.
Therefore, in one possible implementation, a plurality of corresponding candidate algorithms are searched for in the algorithm set according to the primary attribute, the algorithm set including at least the correspondence between primary attributes and candidate algorithms; in another possible implementation, a plurality of corresponding candidate algorithms are searched for according to the secondary attribute, the algorithm set including at least the correspondence between secondary attributes and candidate algorithms.
As shown in Fig. 6, in one possible implementation, the construction of the algorithm set may include the following steps: step 410, classifying a plurality of candidate algorithms according to different primary attributes to obtain a plurality of algorithm categories, where the candidate algorithms in each algorithm category correspond to the same primary attribute; step 430, for the candidate algorithms in each algorithm category, adapting each candidate algorithm to the secondary attributes associated with the corresponding primary attribute; and step 450, constructing paths between the adapted candidate algorithms and secondary attributes and configuring an adaptation degree for each constructed path, thereby obtaining the algorithm set.
Fig. 7 shows a schematic diagram of an algorithm set. In Fig. 7, the primary attributes include the fingerprint type, the minutiae type, the number of minutiae 61, the dryness/wetness of the fingerprint, and the image distortion 63. The secondary attributes associated with each primary attribute differ; for example, the secondary attributes 611 associated with the number of minutiae 61 include a large number of minutiae and a small number of minutiae, and the secondary attributes 631 associated with the image distortion 63 include large image distortion and small image distortion. For these primary attributes and their associated secondary attributes, the plurality of candidate algorithms can be classified by primary attribute into a plurality of algorithm categories; for example, the candidate algorithms in algorithm category 612 correspond to the number of minutiae 61, and the candidate algorithms in algorithm category 632 correspond to the image distortion 63.
Even within the same algorithm category, different candidate algorithms solve different problems; for example, the algorithm for extracting local features may handle large image distortion, while the algorithm for extracting global features suits small image distortion. The secondary attributes adapted by different candidate algorithms may therefore differ, so the candidate algorithms in each algorithm category need to be adapted to the secondary attributes. For example, among the candidate algorithms in algorithm category 632, those adapted to large image distortion include the algorithm for extracting local features and the direction field restoration algorithm, while those adapted to small image distortion include the algorithm for extracting minutiae features and the algorithm for extracting global features.
After the candidate algorithms and secondary attributes are adapted, paths can be constructed between the adapted candidate algorithms and secondary attributes. A path can also be understood as indicating an adapted candidate algorithm and secondary attribute, that is, a candidate algorithm and secondary attribute having a correspondence. An adaptation degree is configured for each constructed path, for example, path 613 with its adaptation degree 1 and path 633 with its adaptation degree 2, thereby obtaining the algorithm set 600.
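Steps 410 through 450 and the subsequent lookup of step 3131 can be sketched with plain dictionaries. The nested-dict layout, the function names, and the sample adaptation degrees below are assumptions for illustration, not the patented data structure:

```python
def build_algorithm_set(classified):
    """Steps 430/450: given candidate algorithms already classified by
    primary attribute (step 410), collect the paths that link each
    secondary attribute to its adapted candidate algorithms, each path
    carrying a configured adaptation degree."""
    algorithm_set = {}
    for primary, adaptations in classified.items():
        for secondary, paths in adaptations.items():
            # each path pairs an adapted candidate algorithm
            # with the adaptation degree configured for that path
            algorithm_set.setdefault(secondary, []).extend(paths)
    return algorithm_set

def lookup(algorithm_set, secondary_attribute):
    """Step 3131: find the candidate algorithms reachable through the
    paths constructed for a given secondary attribute."""
    return [name for name, _ in algorithm_set.get(secondary_attribute, [])]
```

A category such as "image distortion" would map "large image distortion" to the local-feature and direction-field-restoration algorithms, mirroring the Fig. 7 example.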
Taking the secondary attribute "large number of minutiae" in 611 as an example, as shown in Fig. 7, based on the correspondence between secondary attributes and candidate algorithms in the algorithm set 600, the candidate algorithms found in the algorithm set 600 that correspond to a large number of minutiae include: the algorithm for extracting minutiae features, the triangulation algorithm, and the triangulation matching algorithm. It should be noted that a candidate algorithm having a correspondence with a secondary attribute can also be regarded as a candidate algorithm with a path constructed to that secondary attribute, that is, a candidate algorithm adapted to the secondary attribute.
Step 3133: based on the adaptation degrees between the candidate algorithms and the attribute information, use the candidate algorithms whose adaptation degrees satisfy the adaptation condition as the feature matching algorithms.
With continued reference to Fig. 7, each candidate algorithm corresponding to a secondary attribute has a corresponding adaptation degree. After the candidate algorithms corresponding to the secondary attributes are determined, the candidate algorithms found in step 3131 can be screened according to their adaptation degrees, so that the candidate algorithms whose adaptation degrees satisfy the adaptation condition are used as the feature matching algorithms.
In one possible implementation, the candidate algorithms whose adaptation degrees rank in the top 3 are considered to satisfy the adaptation condition.
In one possible implementation, the adaptation degree is the false recognition rate and/or rejection rate obtained by testing each candidate algorithm with the same test set.
With this embodiment, the construction of the algorithm set is completed by classifying and evaluating the plurality of candidate algorithms, providing a basis and support for adapting more effective feature matching algorithms to different images to be recognized, thereby improving the recognition success rate of biometric recognition.
Referring to fig. 8, in an exemplary embodiment, step 370 may include the steps of:
Step 371: segment the target feature to obtain a plurality of target feature sub-segments.
For example, the feature vector representing the target feature is equally divided into n segments, and each segment of the feature vector is regarded as one target feature sub-segment.
Step 373: traverse a plurality of target feature groups, and perform biometric recognition on the target object in the image to be recognized according to the currently traversed target feature group, obtaining the recognition result corresponding to the currently traversed target feature group.
As mentioned above, the target object can be recognized by target classification, target retrieval, and the like. The inventors realized that, in the target-retrieval method in particular, the similarity between the target feature and every sample feature in the sample set must be calculated; as the number of sample features grows, the amount of similarity computation grows and the computation slows, which affects the recognition efficiency of biometric recognition.
Based on this, in this embodiment, biometric recognition is performed on the target in the image to be recognized by target feature groups obtained through the segmentation processing, so as to improve recognition efficiency. A target feature group contains a set number of the target feature sub-segments. Continuing the foregoing example, if the set number is r and m = n/r, the target feature comprises m target feature groups, each containing r target feature sub-segments. It is worth mentioning that, since n is not necessarily divisible by r, the last target feature group may contain between 1 and r target feature sub-segments.
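The segmentation and grouping of steps 371 and 373 can be sketched as follows; the function name is an assumption, and the sketch assumes the feature length is divisible by n as in the example:

```python
def segment_and_group(feature, n, r):
    """Split a target feature vector into n equal sub-segments, then
    collect them into groups of r sub-segments each; the last group may
    hold between 1 and r sub-segments when n is not divisible by r."""
    seg_len = len(feature) // n
    subsegments = [feature[i * seg_len:(i + 1) * seg_len] for i in range(n)]
    return [subsegments[i:i + r] for i in range(0, n, r)]
```

For a 12-dimensional feature with n = 6 and r = 2 this yields three groups of two sub-segments each; with r = 4 the last group holds only the remaining two sub-segments.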
After the plurality of target feature groups contained in the target feature are determined, biometric recognition can be performed on the target object in the image to be recognized according to each target feature group. Taking the currently traversed target feature group as an example:
In one possible implementation, category prediction is performed on the target object in the image to be recognized according to the currently traversed target feature group, obtaining the probabilities that the target object belongs to different categories; these probabilities are taken as the recognition result corresponding to the currently traversed target feature group.
In another possible implementation, for each sample feature in the sample set, the sample feature group corresponding to the currently traversed target feature group is obtained, where a sample feature group contains a set number of the sample feature sub-segments obtained by segmenting the sample feature; the similarity between the currently traversed target feature group and the obtained sample feature group is then calculated as the recognition result corresponding to the currently traversed target feature group.
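The retrieval-side computation for one traversed group can be sketched as follows. A normalized dot product over the flattened group is an illustrative assumption; the embodiment does not fix the similarity measure:

```python
def group_similarity(target_groups, sample_groups, group_index):
    """Recognition result for the currently traversed target feature
    group: its similarity against the sample feature group at the same
    index, so only a fraction of each sample feature is compared per
    traversal pass."""
    t = [x for seg in target_groups[group_index] for x in seg]  # flatten group
    s = [x for seg in sample_groups[group_index] for x in seg]
    dot = sum(a * b for a, b in zip(t, s))
    norm = (sum(a * a for a in t) ** 0.5) * (sum(b * b for b in s) ** 0.5)
    return dot / norm if norm else 0.0
```

Because each pass touches only r of the n sub-segments, the per-pass similarity cost shrinks accordingly, which is the efficiency gain the embodiment targets.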
Step 375: determine the recognition result of the target object according to the recognition results corresponding to all traversed target feature groups.
Step 377: if the recognition result of the target object satisfies the set condition of the biometric recognition, generate the final recognition result based on the recognition result of the target object.
Taking as an example a target feature comprising m target feature groups, each containing r target feature sub-segments, the recognition process based on target classification is described as follows.
Assume the categories include category a and category b. For the first target feature group, the probability that the target object belongs to category a is P1 and the probability that it belongs to category b is P2; these serve as the recognition result corresponding to the first target feature group. At this point, since no other target feature group has been traversed, it can be determined that the probability Pa that the target object belongs to category a is P1 and the probability Pb that it belongs to category b is P2.
If Pa is greater than Pb, the target object belongs to the category a as a recognition result of the target object.
On the other hand, if Pb is larger than Pa, the target object belongs to the category b as a recognition result of the target object.
Assuming that the set condition for biometric recognition is that the probability exceeds a probability threshold (for example, 0.9), then for the first target feature group the condition becomes exceeding 0.9 × r/n (the probability threshold scaled to the r traversed target feature sub-segments). If neither Pa nor Pb exceeds 0.9 × r/n, that is, if the recognition result of the target object does not satisfy the set condition for biometric recognition, it is determined that biometric recognition based on the first target feature group fails, and biometric recognition then continues with the first two target feature groups.
For the second target feature group, if the probability that the target object belongs to category a is P3 and the probability that it belongs to category b is P4, then P3 and P4 are taken as the recognition result corresponding to the second target feature group. In this case, combined with the recognition result corresponding to the first target feature group (the probabilities P1 and P2), it can be determined that the probability Pa that the target object belongs to category a is α × P1 + β × P3, and the probability Pb that it belongs to category b is α × P2 + β × P4, where α and β are the weights configured for the first target feature group and the second target feature group, respectively.
Similarly, if Pa is greater than Pb, the target object belongs to the category a as a recognition result of the target object.
Conversely, if Pb is greater than Pa, the target object belongs to the class b as the recognition result of the target object.
In this case, if Pa or Pb is greater than 0.9 × 2r/n (the probability threshold scaled to the 2r traversed target feature sub-segments), that is, if the recognition result of the target object satisfies the set condition for biometric recognition, it is determined that biometric recognition based on the first two target feature groups succeeds, and the recognition result is generated from the recognition result of the target object, that is, the target object belongs to category a.
Otherwise, biometric recognition continues with the first three target feature groups, and so on, until biometric recognition succeeds or all m target feature groups have been traversed.
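To make the traversal-with-early-exit flow above concrete, here is a minimal Python sketch of classification-based recognition over segmented feature groups. The group structure, the per-group weights, the `predict_probs` callback, and the scaled-threshold check are hypothetical names illustrating the 0.9 × r/n idea, not the patent's actual implementation.

```python
def recognize_by_classification(feature_groups, predict_probs, weights,
                                threshold=0.9, n_subsegments=10):
    """feature_groups: list of m target feature groups, each a dict with a
    "size" key giving how many of the n sub-segments it covers;
    predict_probs(group) returns {class: probability};
    weights: per-group fusion weights (e.g. alpha, beta)."""
    fused = {}    # class -> accumulated weighted probability
    covered = 0   # number of sub-segments traversed so far
    for group, weight in zip(feature_groups, weights):
        covered += group["size"]
        # fuse this group's class probabilities into the running totals
        for cls, p in predict_probs(group).items():
            fused[cls] = fused.get(cls, 0.0) + weight * p
        best_cls = max(fused, key=fused.get)
        # threshold scaled to the share of sub-segments seen so far,
        # mirroring the 0.9 * r / n condition in the text
        if fused[best_cls] > threshold * covered / n_subsegments:
            return best_cls, fused[best_cls]   # early success
    return None, None                          # all m groups failed
```

A usage sketch: with two groups of 5 sub-segments each (n = 10) and equal weights, recognition only succeeds once the fused probability clears the scaled threshold for the sub-segments traversed so far.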
In the above process, biometric recognition based on segmented features is realized; that is, biometric recognition is performed on the image to be recognized through target feature groups obtained by segmentation. This approach applies both to biometric recognition based on target classification and to biometric recognition based on target retrieval, and can effectively improve both the recognition speed and the recognition efficiency of biometric recognition.
Fig. 9 to fig. 10 are schematic diagrams illustrating a specific implementation of an image recognition processing method in an application scenario. The application scenario is suitable for the implementation environment shown in fig. 1. When the target object is a fingerprint, the image capture device 130 may be an electronic device with a fingerprint capture function, such as a fingerprint lock or a fingerprint door lock, for example, an intelligent door lock. The image capture device 130 captures the fingerprint to obtain a fingerprint image and forwards the fingerprint image to the server 150 through the gateway 110, so as to perform fingerprint identification on the fingerprint in the fingerprint image.
Now, with reference to fig. 9 to 10, the following description is made of two branches involved in the fingerprint identification process:
First, in both the registration branch and the retrieval branch, fingerprint feature extraction needs to be performed on the acquired fingerprint image, that is, steps 81 to 82 are executed.
Fig. 10 shows a schematic diagram of a fingerprint feature extraction architecture, and in fig. 10, the fingerprint feature extraction architecture includes attribute extraction 821, algorithm selection 822 and feature fusion 823. Wherein, the attribute extraction 821 is used for extracting attribute information from the fingerprint image; the algorithm selection 822 is used for acquiring a plurality of feature matching algorithms adapted to the attribute information according to the attribute information output by the attribute extraction 821; respectively carrying out fingerprint feature extraction on the fingerprint images according to the multiple feature matching algorithms to obtain multiple fingerprint features; the feature fusion 823 is configured to perform feature fusion on the multiple fingerprint features output by the algorithm selection 822 to obtain a target feature.
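The three-stage extraction architecture of fig. 10 (attribute extraction 821, algorithm selection 822, feature fusion 823) can be sketched as a simple pipeline. The callbacks `extract_attributes` and `select_algorithms` and the concatenation-based fusion are assumptions for illustration; the patent does not fix a particular fusion method.

```python
def extract_target_feature(image, extract_attributes, select_algorithms):
    # 821: extract attribute information (e.g. fingerprint quality/type)
    attrs = extract_attributes(image)
    # 822: obtain the feature matching algorithms adapted to the attributes
    algorithms = select_algorithms(attrs)
    # run each selected algorithm to get one feature per algorithm
    features = [alg(image) for alg in algorithms]
    # 823: fuse the per-algorithm features into one target feature
    # (simple concatenation here, chosen only for illustration)
    return [x for feat in features for x in feat]
```

For example, with two toy "algorithms" that reduce an image to its sum and its length, the fused target feature is the concatenation of both outputs.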
The difference between the registered branch and the retrieval branch is that the target feature obtained by the registered branch is a fingerprint sample feature, and the target feature obtained by the retrieval branch is a fingerprint feature to be identified.
Registration branch 83:
after the fingerprint sample characteristics 831 are obtained, the fingerprint sample characteristics 831 can be stored in the fingerprint library 832 to provide basis and support for fingerprint identification.
Retrieval branch 84:
after the fingerprint features 841 to be recognized are obtained, fingerprint recognition can be performed on the fingerprints in the acquired fingerprint image based on the fingerprint features 841 to be recognized and the fingerprint sample features 831 stored in the fingerprint library 832.
Taking two-stage matching as an example, the fingerprint identification process is described as follows:
by executing step 842, fingerprint feature 841 to be recognized is segmented to obtain n target feature sub-segments, the first target feature group is obtained from the first r target feature sub-segments, and the second target feature group is obtained from the last (n-r) target feature sub-segments.
For each fingerprint sample feature in the fingerprint library 832, the currently traversed fingerprint sample feature is segmented by executing step 843 to obtain n sample feature sub-segments, a first sample feature group is obtained from the first r sample feature sub-segments, and a second sample feature group is obtained from the last (n-r) sample feature sub-segments.
For the two obtained target feature groups and the two obtained sample feature groups, by executing step 843, the similarity between each corresponding target feature group and sample feature group is calculated. Specifically, the similarity between the first target feature group and the first sample feature group is calculated and recorded as Sa, and the similarity between the second target feature group and the second sample feature group is calculated and recorded as Sb.
By executing steps 845 to 846, it is determined whether the fingerprint identification is successful. Specifically, taking T as the similarity threshold for biometric recognition, if

α × Sa ≤ T,

the first-level matching fails, which indicates that fingerprint identification based on the first target feature group fails. At this time, second-level matching is performed, that is, fingerprint identification continues with the first two target feature groups: if α × Sa + β × Sb > T, the second-level matching succeeds, which indicates that fingerprint identification based on the first two target feature groups succeeds, and the fingerprint in the acquired fingerprint image can also be considered successfully identified. Here, α and β refer to the weights configured for the first r and the last (n − r) target/sample feature sub-segments, respectively.
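The two-stage matching just described can be sketched as follows. Cosine similarity, the weight values, and the first-level success condition α × Sa > T are assumptions chosen to mirror the second-level condition α × Sa + β × Sb > T; the patent does not prescribe a specific similarity measure.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def two_stage_match(target_groups, sample_groups, alpha, beta, threshold):
    """target_groups / sample_groups: (first group, second group) pairs,
    the first built from the first r sub-segments, the second from the
    last (n - r)."""
    sa = cosine(target_groups[0], sample_groups[0])
    if alpha * sa > threshold:          # first-level matching succeeds
        return True
    sb = cosine(target_groups[1], sample_groups[1])
    return alpha * sa + beta * sb > threshold   # second-level matching
```

Note the design point mirrored from the text: Sb is only computed when the first-level match fails, which is where the speed gain of segmented matching comes from.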
In this application scenario, on one hand, adding the algorithm selection 822 to the fingerprint feature extraction framework allows a more effective feature matching algorithm to be adapted to fingerprint images with different attribute information at low cost. On the other hand, the adaptation of the feature matching algorithm effectively alleviates the low success rate of fingerprint identification caused by differences between different people's fingerprints and by large differences of the same fingerprint across environments, so the fingerprint identification scheme has high applicability and robustness. In addition, the feature segmentation matching strategy can effectively improve both the recognition speed and the recognition efficiency of fingerprint identification.
In another application scenario, the image capturing device may be connected to a terminal corresponding to a user through a network. The user can configure the image acquisition equipment through the APP in the terminal, including configuring basic functions of the image acquisition equipment, such as functions of fingerprint input, password setting, exception reporting and the like, and further can perform custom configuration on an algorithm of image identification.
Specifically, all feature matching algorithms in the matching algorithm library can be opened to the user; that is, the feature matching algorithms in the matching algorithm library can be displayed in the terminal APP, and the user can select, in a customized manner, at least two feature matching algorithms to be used when configuring the image acquisition device. While displaying the feature matching algorithms in the matching algorithm library, the attribute information that each feature matching algorithm is adapted to can also be displayed. For example, when the target object is a fingerprint, the attribute information may include fingerprint quality, fingerprint type, and the like. In this way, the user can select the feature matching algorithms matched with the user's own biometric attributes, such as fingerprint attributes.
And then constructing a feature matching algorithm used by the image acquisition equipment for currently performing image recognition processing according to at least two feature matching algorithms selected by the user.
In the process of recognizing the acquired image, after acquiring the image to be recognized, the image acquisition device first performs image feature extraction on the image to be recognized according to the at least two feature matching algorithms selected by the user, then performs feature fusion on the obtained image features to obtain the target feature, and finally recognizes the target object in the image to be recognized according to the target feature to obtain the recognition result.
In this application scenario, a matching algorithm library containing multiple diversified and relatively accurate feature matching algorithms is provided, so that images to be recognized with different fingerprint qualities can be supported by multiple more effective feature matching algorithms. Moreover, the matching algorithms in the library can be displayed to the user for customized selection according to the user's own biometric attributes, which effectively guarantees the accuracy of biometric recognition and greatly improves the adaptability and flexibility of biometric recognition.
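A minimal sketch of the user-customized configuration flow described above: the device exposes its matching algorithm library together with per-algorithm attribute information, and the user's selection of at least two algorithms becomes the pipeline used for subsequent recognition. The library contents and all names are invented for illustration.

```python
# Hypothetical matching algorithm library, with the attribute information
# shown to the user alongside each algorithm (as the text describes).
ALGORITHM_LIBRARY = {
    "minutiae":    {"suited_for": "high-quality fingerprints"},
    "orientation": {"suited_for": "partial or smudged fingerprints"},
    "pore":        {"suited_for": "high-resolution sensors"},
}

def configure_device(selected_names):
    """Validate the user's selection and return the algorithm pipeline
    the device will use for image recognition processing."""
    if len(selected_names) < 2:
        raise ValueError("select at least two feature matching algorithms")
    unknown = [n for n in selected_names if n not in ALGORITHM_LIBRARY]
    if unknown:
        raise KeyError(f"not in matching algorithm library: {unknown}")
    return list(selected_names)
```

The "at least two" check mirrors the requirement stated above that the user selects at least two feature matching algorithms from the library.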
The following are embodiments of the apparatus of the present application, which can be used to execute the image recognition processing method of the present application. For details that are not disclosed in the embodiments of the apparatus of the present application, please refer to the method embodiments of the image recognition processing method of the present application.
Referring to fig. 11, an embodiment of the present application provides an image recognition processing apparatus 900, which includes but is not limited to: an algorithm acquisition module 910, a feature extraction module 930, a feature fusion module 950, and a feature identification module 970.
The algorithm obtaining module 910 is configured to obtain an image to be recognized, and obtain multiple feature matching algorithms, where the feature matching algorithms are adapted to attribute information of a target object in the image to be recognized.
The feature extraction module 930 is configured to perform image feature extraction on the image to be recognized according to multiple feature matching algorithms, respectively, to obtain multiple image features.
And a feature fusion module 950, configured to perform feature fusion on the multiple image features to obtain a target feature.
The feature recognition module 970 is configured to recognize a target object in the image to be recognized according to the target feature, so as to obtain a recognition result.
In an exemplary embodiment, the algorithm acquisition module includes: the information extraction unit is used for extracting attribute information from the image to be identified; and the algorithm selection unit is used for selecting a plurality of feature matching algorithms matched with the attribute information from an algorithm set, and the algorithm set comprises a plurality of candidate algorithms which can be selected and used for image feature extraction.
In an exemplary embodiment, the algorithm selecting unit includes: the algorithm searching subunit is used for searching a plurality of corresponding candidate algorithms in the algorithm set according to the attribute information; and the algorithm adapter unit is used for taking the candidate algorithm with the adaptation degree meeting the adaptation condition as a feature matching algorithm based on the adaptation degree of the candidate algorithm and the attribute information.
In an exemplary embodiment, the attribute information includes a primary attribute and a secondary attribute associated with the primary attribute; the device still includes: the set construction module is used for constructing an algorithm set; the set building module comprises: the algorithm classification unit is used for classifying various candidate algorithms according to different primary attributes to obtain a plurality of algorithm categories, and the candidate algorithms in each algorithm category correspond to the same primary attribute; the algorithm adaptation unit is used for adapting the candidate algorithms and the secondary attributes associated with the corresponding primary attributes of the candidate algorithms according to the candidate algorithms in each algorithm category; and the path construction unit is used for constructing a path between the adaptive candidate algorithm and the second-level attribute, and configuring the adaptation degree for the constructed path to obtain an algorithm set.
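The set construction module described above can be sketched as follows: candidate algorithms are grouped into categories by primary attribute, each is adapted to the secondary attributes under its primary attribute, and each (secondary attribute, algorithm) path carries an adaptation degree that is later used for selection. All attribute names, algorithm names, and degree values are hypothetical.

```python
def build_algorithm_set(candidates):
    """candidates: list of dicts like
    {"name": ..., "primary": ..., "secondary_fit": {secondary: degree}}"""
    algorithm_set = {}   # primary -> secondary -> [(degree, algorithm name)]
    for cand in candidates:
        # algorithms sharing a primary attribute fall into one category
        category = algorithm_set.setdefault(cand["primary"], {})
        # build a path from each adapted secondary attribute to the
        # algorithm, carrying the configured adaptation degree
        for secondary, degree in cand["secondary_fit"].items():
            category.setdefault(secondary, []).append((degree, cand["name"]))
    # order each path list by adaptation degree, best first
    for category in algorithm_set.values():
        for paths in category.values():
            paths.sort(reverse=True)
    return algorithm_set

def select_algorithms(algorithm_set, primary, secondary, min_degree=0.5):
    """Look up candidates along the secondary-attribute paths and keep those
    whose adaptation degree satisfies the adaptation condition."""
    paths = algorithm_set.get(primary, {}).get(secondary, [])
    return [name for degree, name in paths if degree >= min_degree]
```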
In an exemplary embodiment, the algorithm lookup subunit includes: and the corresponding subunit is used for searching the candidate algorithm corresponding to the secondary attribute from the algorithm set based on the corresponding relationship between the secondary attribute and the candidate algorithm in the algorithm set.
In an exemplary embodiment, the feature recognition module includes: the characteristic segmentation unit is used for carrying out segmentation processing on the target characteristic to obtain a plurality of target characteristic subsections; the characteristic identification unit is used for carrying out biological characteristic identification on a target object in an image to be identified according to each target characteristic group to obtain an identification result corresponding to each target characteristic group, and each target characteristic group comprises a set number of target characteristic subsegments in the target characteristic subsegments; and the result generating unit is used for obtaining the recognition result according to the recognition result corresponding to each target feature group.
In an exemplary embodiment, the feature recognition unit includes: the sample acquisition subunit, configured to acquire, for each sample feature in the sample set, a sample feature group corresponding to each target feature group, where the sample feature group includes a set number of sample feature sub-segments among a plurality of sample feature sub-segments, and the sample feature sub-segments are obtained by performing segmentation processing on the sample features; and the similarity calculation subunit, configured to calculate the similarity between each target feature group and the acquired sample feature group as the recognition result corresponding to each target feature group.
In an exemplary embodiment, the target object includes a fingerprint.
It should be noted that, when the image recognition processing device provided in the foregoing embodiment performs biometric recognition, only the division of the functional modules is illustrated, and in practical applications, the functions may be distributed by different functional modules as needed, that is, the internal structure of the image recognition processing device is divided into different functional modules to complete all or part of the functions described above.
In addition, the image recognition processing apparatus provided in the above embodiments and the embodiments of the image recognition processing method belong to the same concept, wherein the specific manner in which each module performs operations has been described in detail in the method embodiments, and is not described herein again.
FIG. 12 shows a schematic structural diagram of an electronic device according to an exemplary embodiment. The electronic device is suitable for the image acquisition device 130, the server 150, and the user terminal 170 in the implementation environment shown in fig. 1.
It should be noted that the electronic device is only an example adapted to the application and should not be considered as providing any limitation to the scope of use of the application. The electronic device is also not to be construed as necessarily relying on or having to have one or more components in the exemplary electronic device 2000 shown in fig. 12.
The hardware structure of the electronic device 2000 may vary greatly depending on configuration or performance. As shown in fig. 12, the electronic device 2000 includes: a power supply 210, an interface 230, at least one memory 250, and at least one Central Processing Unit (CPU) 270.
Specifically, the power supply 210 is used to provide operating voltages for various hardware devices on the electronic device 2000.
The interface 230 includes at least one wired or wireless network interface 231 for interacting with external devices. For example, interaction between gateway 110 and server 150 in the implementation environment shown in FIG. 1 occurs.
Of course, in other examples of the application, the interface 230 may further include at least one serial-to-parallel conversion interface 233, at least one input/output interface 235, at least one USB interface 237, and the like, as shown in fig. 12, which is not limited herein.
The memory 250 serves as a carrier for resource storage and may be a read-only memory, a random access memory, a magnetic disk, an optical disc, or the like. The resources stored thereon include an operating system 251, an application 253, data 255, and the like, and the storage may be transient or permanent.
The operating system 251 is used for managing and controlling the hardware devices and the application 253 on the electronic device 2000, so as to implement the operation and processing of the mass data 255 in the memory 250 by the central processing unit 270, and may be Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
The application 253 is a computer program that performs at least one specific task on the operating system 251, and may include at least one module (not shown in fig. 12), each of which may include a computer program for the electronic device 2000. For example, the image recognition processing device may be regarded as an application 253 deployed on the electronic apparatus 2000.
The data 255 may be a photograph, a picture, or the like stored in a magnetic disk, or may be an image to be recognized, or the like, and is stored in the memory 250.
The central processor 270 may include one or more processors and is configured to communicate with the memory 250 through at least one communication bus to read the computer programs stored in the memory 250, thereby implementing the operation and processing of the mass data 255 in the memory 250. For example, the image recognition processing method is accomplished by the central processor 270 reading a series of computer programs stored in the memory 250.
Furthermore, the present application can be implemented by hardware circuits or by hardware circuits in combination with software, and therefore, the implementation of the present application is not limited to any specific hardware circuits, software, or a combination of the two.
Referring to fig. 13, in an embodiment of the present application, an electronic device 4000 is provided. The electronic device 4000 may include: fingerprint locks, door access controls, smart door locks, gateway-type cameras, smart phones, desktop computers, laptop computers, servers, and the like.
In fig. 13, the electronic device 4000 includes at least one processor 4001, at least one communication bus 4002, and at least one memory 4003.
Processor 4001 is coupled to memory 4003, such as by communication bus 4002. Optionally, the electronic device 4000 may further include a transceiver 4004, and the transceiver 4004 may be used for data interaction between the electronic device and other electronic devices, such as transmission of data and/or reception of data. In addition, the transceiver 4004 is not limited to one in practical applications, and the structure of the electronic device 4000 is not limited to the embodiment of the present application.
The Processor 4001 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The processor 4001 may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with this disclosure, and may also be a combination implementing computing functions, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
Communication bus 4002 may include a path that carries information between the aforementioned components. The communication bus 4002 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The communication bus 4002 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 13, but this is not intended to represent only one bus or one type of bus.
The Memory 4003 may be a ROM (Read Only Memory) or other type of static storage device that can store static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read Only Memory), a CD-ROM (Compact Disc Read Only Memory) or other optical disc storage, optical disc storage (including compact disc, laser disc, optical disc, digital versatile disc, Blu-ray disc, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
The memory 4003 has a computer program stored thereon, and the processor 4001 reads the computer program stored in the memory 4003 through the communication bus 4002.
The computer program realizes the image recognition processing method in the above embodiments when executed by the processor 4001.
Furthermore, in the embodiments of the present application, a storage medium is provided, and a computer program is stored on the storage medium, and when being executed by a processor, the computer program realizes the image recognition processing method in the embodiments.
A computer program product is provided in an embodiment of the present application, the computer program product comprising a computer program stored in a storage medium. The processor of the computer apparatus reads the computer program from the storage medium, and the processor executes the computer program, so that the computer apparatus executes the image recognition processing method in each of the embodiments described above.
Compared with the related technology, on one hand, through the construction of the algorithm set, a more effective feature matching algorithm can be adapted to the fingerprint images with different attribute information, and the cost is low; on the other hand, through the adaptation of the feature matching algorithm, the feature matching algorithm which is more effective in acquiring and identifying the images to be identified with different fingerprint qualities is obtained, so that the identification success rate of the biological feature identification of the images to be identified by using the feature matching algorithm is higher, and the applicability and the robustness of the biological feature identification scheme are high; in addition, through the strategy of feature segmentation matching, the identification speed of biological feature identification can be effectively improved, and the identification efficiency of biological feature identification can be effectively improved.
It should be understood that, although the steps in the flowcharts of the figures are shown in order as indicated by the arrows, the steps are not necessarily performed in order as indicated by the arrows. The steps are not performed in the exact order shown and may be performed in other orders unless explicitly stated herein. Moreover, at least a portion of the steps in the flow chart of the figure may include multiple sub-steps or multiple stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of execution is not necessarily sequential, but may be performed alternately or alternately with other steps or at least a portion of the sub-steps or stages of other steps.
The foregoing is only a few embodiments of the present application and it should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present application, and that these improvements and modifications should also be considered as the protection scope of the present application.

Claims (11)

1. An image recognition processing method, characterized by comprising:
acquiring an image to be recognized, and acquiring a plurality of feature matching algorithms, wherein the feature matching algorithms are adapted to attribute information of a target object in the image to be recognized;
according to the multiple feature matching algorithms, respectively carrying out image feature extraction on the image to be identified to obtain multiple image features;
performing feature fusion on the image features to obtain target features;
and identifying the target object in the image to be identified according to the target characteristic to obtain an identification result.
2. The method of claim 1, wherein obtaining a plurality of feature matching algorithms comprises:
extracting the attribute information from the image to be identified;
and selecting a plurality of feature matching algorithms which are adaptive to the attribute information from an algorithm set, wherein the algorithm set comprises a plurality of candidate algorithms which can be selected for image feature extraction.
3. The method of claim 2, wherein said selecting a plurality of said feature matching algorithms from a set of algorithms that are adapted to said attribute information comprises:
searching a plurality of corresponding candidate algorithms in the algorithm set according to the attribute information;
and taking the candidate algorithm with the adaptation degree meeting the adaptation condition as the feature matching algorithm based on the adaptation degree of the candidate algorithm and the attribute information.
4. The method of claim 3, wherein the attribute information includes a primary attribute and a secondary attribute associated with the primary attribute;
the method further comprises the following steps: constructing the algorithm set;
the constructing the algorithm set comprises:
classifying a plurality of candidate algorithms according to different primary attributes to obtain a plurality of algorithm categories, wherein the candidate algorithms in each algorithm category correspond to the same primary attribute;
aiming at the candidate algorithm in each algorithm category, adapting the candidate algorithm and the secondary attribute associated with the corresponding primary attribute;
and constructing a path between the matched candidate algorithm and the secondary attribute, and configuring the matching degree for the constructed path to obtain the algorithm set.
5. The method of claim 4,
the searching for multiple corresponding candidate algorithms in the algorithm set according to the attribute information includes:
and searching the candidate algorithm with the corresponding relation with the secondary attribute from the algorithm set based on the corresponding relation between the secondary attribute and the candidate algorithm in the algorithm set.
6. The method of claim 1, wherein the recognizing the target object in the image to be recognized according to the target feature to obtain a recognition result comprises:
performing segmentation processing on the target characteristics to obtain a plurality of target characteristic subsections;
performing biological feature recognition on the target object in the image to be recognized according to each target feature group to obtain a recognition result corresponding to each target feature group, wherein the target feature group comprises a set number of target feature subsections in a plurality of target feature subsections;
and obtaining the identification result according to the identification result corresponding to each target feature group.
7. The method as claimed in claim 6, wherein the performing biometric recognition on the target object in the image to be recognized according to each target feature group to obtain a recognition result corresponding to each target feature group comprises:
obtaining, for each sample feature in a sample set, a sample feature group corresponding to each target feature group, wherein the sample feature group comprises a set number of sample feature subsections, the sample feature subsections being obtained by performing segmentation processing on the sample feature;
and calculating the similarity between each target feature group and the obtained sample feature group as the recognition result corresponding to that target feature group.
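Claim 7 leaves the similarity measure open; a minimal sketch using cosine similarity (an assumption, not the patent's stated measure) over a group's concatenated subsections might look like:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def group_similarity(target_group, sample_group):
    """Flatten each group's subsections and compare the concatenations;
    the score serves as the group's recognition result."""
    t = [v for sub in target_group for v in sub]
    s = [v for sub in sample_group for v in sub]
    return cosine_similarity(t, s)

target = [[1.0, 0.0], [0.0, 1.0]]   # two target feature subsections
sample = [[1.0, 0.0], [0.0, 1.0]]   # matching sample feature subsections
print(round(group_similarity(target, sample), 3))  # 1.0
```

Any other vector similarity (Euclidean, Hamming on binarized features, etc.) could be substituted without changing the claimed structure.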
8. The method of any of claims 1 to 7, wherein the target object comprises a fingerprint.
9. An image recognition processing apparatus, characterized in that the apparatus comprises:
an algorithm acquisition module, used for acquiring an image to be recognized and acquiring a plurality of feature matching algorithms, the feature matching algorithms being adapted to attribute information of a target object in the image to be recognized;
a feature extraction module, used for extracting image features from the image to be recognized according to each of the plurality of feature matching algorithms, to obtain a plurality of image features;
a feature fusion module, used for performing feature fusion on the plurality of image features to obtain a target feature;
and a feature recognition module, used for recognizing the target object in the image to be recognized according to the target feature to obtain a recognition result.
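The four modules of claim 9 form a pipeline: extract features with each adapted algorithm, fuse them, then recognize against enrolled templates. The sketch below is an assumed end-to-end illustration; the stand-in algorithms, the concatenation fusion, and the nearest-template matcher are all hypothetical, not the patent's actual implementation.

```python
def extract_with(algorithms, image):
    """Feature extraction module: run each adapted matching algorithm."""
    return [alg(image) for alg in algorithms]

def fuse(features):
    """Feature fusion module: here, simple concatenation of feature vectors."""
    return [v for feat in features for v in feat]

def recognize(target_feature, templates):
    """Feature recognition module: nearest enrolled template by squared distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda name: dist(target_feature, templates[name]))

# Two stand-in "feature matching algorithms" adapted to the image's attributes.
algorithms = [lambda img: [sum(img)], lambda img: [max(img)]]
image = [0.25, 0.5, 0.25]
target = fuse(extract_with(algorithms, image))   # [1.0, 0.5]
templates = {"user_a": [1.0, 0.5], "user_b": [0.1, 0.9]}
print(recognize(target, templates))              # user_a
```

Concatenation is the simplest fusion choice; weighted combination or learned fusion would slot into `fuse` without altering the module boundaries.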
10. An electronic device, comprising: at least one processor, at least one memory, and at least one communication bus, wherein,
the memory has a computer program stored thereon, and the processor reads the computer program in the memory through the communication bus;
the computer program, when executed by the processor, implements the image recognition processing method of any one of claims 1 to 8.
11. A storage medium on which a computer program is stored, the computer program realizing the image recognition processing method according to any one of claims 1 to 8 when executed by a processor.
CN202210910090.2A 2022-07-29 2022-07-29 Image recognition processing method and device, electronic equipment and storage medium Pending CN115546846A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210910090.2A CN115546846A (en) 2022-07-29 2022-07-29 Image recognition processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210910090.2A CN115546846A (en) 2022-07-29 2022-07-29 Image recognition processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115546846A true CN115546846A (en) 2022-12-30

Family

ID=84724385

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210910090.2A Pending CN115546846A (en) 2022-07-29 2022-07-29 Image recognition processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115546846A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117560455A (en) * 2024-01-11 2024-02-13 腾讯科技(深圳)有限公司 Image feature processing method, device, equipment and storage medium
CN117560455B (en) * 2024-01-11 2024-04-26 腾讯科技(深圳)有限公司 Image feature processing method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
Khan et al. Deep unified model for face recognition based on convolution neural network and edge computing
CN108009528B (en) Triple Loss-based face authentication method and device, computer equipment and storage medium
US12020473B2 (en) Pedestrian re-identification method, device, electronic device and computer-readable storage medium
CN109255352B (en) Target detection method, device and system
CN109492536B (en) Face recognition method and system based on 5G framework
CN107403173B (en) Face recognition system and method
CN111191568B (en) Method, device, equipment and medium for identifying flip image
CN113033519B (en) Living body detection method, estimation network processing method, device and computer equipment
CN109740573B (en) Video analysis method, device, equipment and server
WO2019196626A1 (en) Media processing method and related apparatus
JP2016099734A (en) Image processor, information processing method and program
CN110991231B (en) Living body detection method and device, server and face recognition equipment
CN108416298B (en) Scene judgment method and terminal
WO2021082548A1 (en) Living body testing method and apparatus, server and facial recognition device
Feng et al. A novel saliency detection method for wild animal monitoring images with WMSN
CN115546846A (en) Image recognition processing method and device, electronic equipment and storage medium
US20230386185A1 (en) Statistical model-based false detection removal algorithm from images
Valehi et al. A graph matching algorithm for user authentication in data networks using image-based physical unclonable functions
Younis et al. IFRS: An indexed face recognition system based on face recognition and RFID technologies
CN112487082A (en) Biological feature recognition method and related equipment
CN111507289A (en) Video matching method, computer device and storage medium
CN108596068B (en) Method and device for recognizing actions
CN113255531B (en) Method and device for processing living body detection model, computer equipment and storage medium
KR20200124887A (en) Method and Apparatus for Creating Labeling Model with Data Programming
KR20210031444A (en) Method and Apparatus for Creating Labeling Model with Data Programming

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination