CN117523638A - Face recognition method and system based on priority screening - Google Patents

Face recognition method and system based on priority screening

Info

Publication number
CN117523638A
Authority
CN
China
Prior art keywords
recognition
face
scene
determining
identification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311606317.5A
Other languages
Chinese (zh)
Other versions
CN117523638B (en)
Inventor
朱湘军
唐伟文
李利苹
汪壮雄
孟凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GUANGZHOU VIDEO-STAR ELECTRONICS CO LTD
Original Assignee
GUANGZHOU VIDEO-STAR ELECTRONICS CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GUANGZHOU VIDEO-STAR ELECTRONICS CO LTD filed Critical GUANGZHOU VIDEO-STAR ELECTRONICS CO LTD
Priority to CN202311606317.5A priority Critical patent/CN117523638B/en
Publication of CN117523638A publication Critical patent/CN117523638A/en
Application granted granted Critical
Publication of CN117523638B publication Critical patent/CN117523638B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • G06N3/0442Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a face recognition method and system based on priority screening. The method comprises the following steps: performing real-time recognition on a target image to obtain a plurality of face images and the recognition request sent by the user corresponding to each face image; determining the recognition level corresponding to each face image according to the request parameters of the recognition requests and a preset level determination algorithm; acquiring the current position of the acquisition device corresponding to the target image, and determining the device scene corresponding to the acquisition device according to the device parameters of the acquisition device and the current position; and determining the recognition strategy corresponding to the face images according to the device scene and the recognition level corresponding to each face image, the recognition strategy defining whether each face image is recognized and in which order. The invention thereby achieves more accurate and reasonable face recognition when multiple faces appear in the same image, improving both the effectiveness and the efficiency of face recognition.

Description

Face recognition method and system based on priority screening
Technical Field
The invention relates to the technical field of image processing, in particular to a face recognition method and system based on priority screening.
Background
With the development of image processing technology, face recognition algorithms achieve good accuracy in single-face scenes. In certain specific scenarios, however, such as multiple faces appearing in a single image, the prior art still falls short of efficient and reasonable face recognition.
Specifically, when recognizing multiple faces in a single image, existing approaches do not take the recognition requests and the device scene of the different face images into account when determining the recognition strategy. This shortcoming of the prior art is significant and needs to be addressed.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a face recognition method and system based on priority screening that achieve more accurate and reasonable face recognition when multiple faces appear in the same image and improve the effectiveness and efficiency of face recognition.
In order to solve the technical problem, the first aspect of the present invention discloses a face recognition method based on priority screening, which comprises the following steps:
performing real-time recognition on a target image to obtain a plurality of face images and the recognition request sent by the user corresponding to each face image;
determining the recognition level corresponding to each face image according to the request parameters of the recognition requests and a preset level determination algorithm;
acquiring the current position of the acquisition device corresponding to the target image, and determining the device scene corresponding to the acquisition device according to the device parameters of the acquisition device and the current position;
determining the recognition strategy corresponding to the face images according to the device scene and the recognition level corresponding to each face image; the recognition strategy defines whether each face image is recognized and the recognition order.
As an optional implementation manner, in the first aspect of the present invention, the request parameters include the recognition initiation object, the recognition initiation purpose, the recognition initiation time, and the location of the recognition object; and/or the device parameters include at least one of a device type, a device name, and a device component parameter.
As an optional implementation manner, in the first aspect of the present invention, the determining, according to the request parameters of the recognition requests and a preset level determination algorithm, the recognition level corresponding to each face image includes:
inputting the request parameters of each recognition request into a trained priority prediction neural network model to obtain the priority parameter corresponding to each face image; the priority prediction neural network model is trained on a training data set comprising a plurality of training request parameters and corresponding priority labels;
multiplying the priority parameter corresponding to each face image by the preset parameter weights to obtain the level parameter corresponding to each face image;
sorting all the face images in descending order of level parameter to obtain an image sequence, and taking the position of each face image in the image sequence as the recognition level corresponding to that face image.
As an optional implementation manner, in the first aspect of the present invention, the preset parameter weights include a time parameter weight and a location weight; the time parameter weight is proportional to the time difference between the recognition initiation time corresponding to the face image and the current time; the location weight is inversely proportional to the distance between the location of the recognition object corresponding to the face image and the current position of the acquisition device corresponding to the target image.
As an optional implementation manner, in the first aspect of the present invention, the determining, according to the device parameters of the acquisition device and the current position, the device scene corresponding to the acquisition device includes:
inputting the device parameters and the current position of the acquisition device into a trained scene prediction neural network model to obtain the device scene corresponding to the acquisition device; the scene prediction neural network model is trained on a training data set comprising a plurality of training device parameters, position labels, and corresponding device scene labels; the device scene includes at least one of a business meeting, a restaurant, a business party, a family party, and a wedding venue.
As an optional implementation manner, in the first aspect of the present invention, the determining, according to the device scene and the recognition level corresponding to each face image, the recognition strategy corresponding to the face images includes:
determining at least one scene recognition rule according to the device scene and a preset correspondence between scenes and rules; a scene recognition rule constrains, for a specific device scene, the order between face images corresponding to different recognition initiation objects;
determining the recognition strategy corresponding to the face images based on a dynamic programming algorithm, according to the at least one scene recognition rule and the recognition level corresponding to each face image.
As an optional implementation manner, in the first aspect of the present invention, the determining, according to the at least one scene recognition rule and the recognition level corresponding to each face image, the recognition strategy corresponding to the face images based on a dynamic programming algorithm includes:
setting an objective function, the objective function being that the number of face images selected for recognition in the recognition strategy is maximized and the sum of the level difference parameters corresponding to all face images in the recognition strategy is minimized; the level difference parameter of a face image is the difference between its recognition level and its recognition order in the recognition strategy;
setting a constraint, the constraint being that the recognition order of any two face images in the recognition strategy does not violate the scene recognition rules;
computing the recognition strategy corresponding to the face images based on a dynamic programming algorithm according to the objective function and the constraint.
The second aspect of the invention discloses a face recognition system based on priority screening, the system comprising:
an acquisition module, configured to perform real-time recognition on a target image to obtain a plurality of face images and the recognition request sent by the user corresponding to each face image;
a first determining module, configured to determine the recognition level corresponding to each face image according to the request parameters of the recognition requests and a preset level determination algorithm;
a second determining module, configured to acquire the current position of the acquisition device corresponding to the target image and determine the device scene corresponding to the acquisition device according to the device parameters of the acquisition device and the current position;
a third determining module, configured to determine the recognition strategy corresponding to the face images according to the device scene and the recognition level corresponding to each face image; the recognition strategy defines whether each face image is recognized and the recognition order.
As an optional implementation manner, in the second aspect of the present invention, the request parameters include the recognition initiation object, the recognition initiation purpose, the recognition initiation time, and the location of the recognition object; and/or the device parameters include at least one of a device type, a device name, and a device component parameter.
As an optional implementation manner, in the second aspect of the present invention, the specific manner in which the first determining module determines the recognition level corresponding to each face image according to the request parameters of the recognition requests and a preset level determination algorithm includes:
inputting the request parameters of each recognition request into a trained priority prediction neural network model to obtain the priority parameter corresponding to each face image; the priority prediction neural network model is trained on a training data set comprising a plurality of training request parameters and corresponding priority labels;
multiplying the priority parameter corresponding to each face image by the preset parameter weights to obtain the level parameter corresponding to each face image;
sorting all the face images in descending order of level parameter to obtain an image sequence, and taking the position of each face image in the image sequence as the recognition level corresponding to that face image.
As an optional implementation manner, in the second aspect of the present invention, the preset parameter weights include a time parameter weight and a location weight; the time parameter weight is proportional to the time difference between the recognition initiation time corresponding to the face image and the current time; the location weight is inversely proportional to the distance between the location of the recognition object corresponding to the face image and the current position of the acquisition device corresponding to the target image.
As an optional implementation manner, in the second aspect of the present invention, the specific manner in which the second determining module determines the device scene corresponding to the acquisition device according to the device parameters of the acquisition device and the current position includes:
inputting the device parameters and the current position of the acquisition device into a trained scene prediction neural network model to obtain the device scene corresponding to the acquisition device; the scene prediction neural network model is trained on a training data set comprising a plurality of training device parameters, position labels, and corresponding device scene labels; the device scene includes at least one of a business meeting, a restaurant, a business party, a family party, and a wedding venue.
As an optional implementation manner, in the second aspect of the present invention, the specific manner in which the third determining module determines the recognition strategy corresponding to the face images according to the device scene and the recognition level corresponding to each face image includes:
determining at least one scene recognition rule according to the device scene and a preset correspondence between scenes and rules; a scene recognition rule constrains, for a specific device scene, the order between face images corresponding to different recognition initiation objects;
determining the recognition strategy corresponding to the face images based on a dynamic programming algorithm, according to the at least one scene recognition rule and the recognition level corresponding to each face image.
As an optional implementation manner, in the second aspect of the present invention, the specific manner in which the third determining module determines the recognition strategy corresponding to the face images based on a dynamic programming algorithm, according to the at least one scene recognition rule and the recognition level corresponding to each face image, includes:
setting an objective function, the objective function being that the number of face images selected for recognition in the recognition strategy is maximized and the sum of the level difference parameters corresponding to all face images in the recognition strategy is minimized; the level difference parameter of a face image is the difference between its recognition level and its recognition order in the recognition strategy;
setting a constraint, the constraint being that the recognition order of any two face images in the recognition strategy does not violate the scene recognition rules;
computing the recognition strategy corresponding to the face images based on a dynamic programming algorithm according to the objective function and the constraint.
The third aspect of the present invention discloses another face recognition system based on priority screening, the system comprising:
a memory storing executable program code;
a processor coupled to the memory;
the processor invokes the executable program code stored in the memory to perform some or all of the steps in the priority screening-based face recognition method disclosed in the first aspect of the present invention.
A fourth aspect of the present invention discloses a computer storage medium storing computer instructions which, when invoked, are adapted to perform part or all of the steps of the priority screening based face recognition method disclosed in the first aspect of the present invention.
Compared with the prior art, the invention has the following beneficial effects:
according to the invention, the recognition level of each face image can be determined through the recognition requests corresponding to the plurality of face images, and then the equipment scene is further determined, so that the recognition strategy is determined, more accurate and reasonable face recognition under the same-image multi-face scene can be realized, and the face recognition effect and efficiency are improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic flow chart of a face recognition method based on priority screening according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a face recognition system based on priority screening according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of another face recognition system based on priority screening according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art may better understand the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
The terms first, second and the like in the description and in the claims and in the above-described figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the invention. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The invention discloses a face recognition method and system based on priority screening, which determine the recognition level of each face image from the recognition requests corresponding to a plurality of face images, then determine the device scene, and determine the recognition strategy from both, thereby achieving more accurate and reasonable face recognition when multiple faces appear in the same image and improving the effectiveness and efficiency of face recognition. The details are described below.
Example 1
Referring to fig. 1, fig. 1 is a schematic flow chart of a face recognition method based on priority screening according to an embodiment of the present invention. The method described in fig. 1 may be applied to a corresponding data processing device, data processing terminal, or data processing server, where the server may be a local server or a cloud server; the embodiments of the present invention are not limited in this regard. As shown in fig. 1, the face recognition method based on priority screening may include the following operations:
101. Perform real-time recognition on a target image to obtain a plurality of face images and the recognition request sent by the user corresponding to each face image.
Optionally, the recognition request may be obtained by requesting or communicating with a device of the user corresponding to the face image, where the device may be a wearable device or a mobile terminal. Optionally, to associate a face image with its user, the position or the image content of the face image may be pre-identified to determine the corresponding user object, and the recognition request sent by that user object is then retrieved, as illustrated by the sketch below.
102. Determine the recognition level corresponding to each face image according to the request parameters of the recognition requests and a preset level determination algorithm.
Optionally, the request parameters include the recognition initiation object, the recognition initiation purpose, the recognition initiation time, and the location of the recognition object.
103. Acquire the current position of the acquisition device corresponding to the target image, and determine the device scene corresponding to the acquisition device according to the device parameters of the acquisition device and the current position.
Optionally, the device parameters include at least one of a device type, a device name, and a device component parameter.
104. Determine the recognition strategy corresponding to the face images according to the device scene and the recognition level corresponding to each face image.
Specifically, the recognition strategy defines whether each of the plurality of face images is recognized and in which order.
It can be seen that the method described in this embodiment determines the recognition level of each face image from the recognition requests corresponding to the plurality of face images, then determines the device scene and derives the recognition strategy from both, thereby achieving more accurate and reasonable face recognition when multiple faces appear in the same image and improving the effectiveness and efficiency of face recognition.
As an optional embodiment, the determining, according to the request parameters of the recognition requests and a preset level determination algorithm, the recognition level corresponding to each face image includes:
inputting the request parameters of each recognition request into a trained priority prediction neural network model to obtain the priority parameter corresponding to each face image; the priority prediction neural network model is trained on a training data set comprising a plurality of training request parameters and corresponding priority labels;
multiplying the priority parameter corresponding to each face image by the preset parameter weights to obtain the level parameter corresponding to each face image;
sorting all the face images in descending order of level parameter to obtain an image sequence, and taking the position of each face image in the image sequence as the recognition level corresponding to that face image.
Specifically, the neural network models in the invention may use a CNN, RNN, or LSTM structure and may be trained with a suitable gradient descent algorithm and loss function until convergence; the operator can choose according to the data characteristics and scene requirements. A minimal sketch of such a priority prediction model is given below.
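As a sketch under assumed inputs (the feature dimension, layer sizes, loss, and optimizer below are illustrative choices, not the patented model), a feed-forward priority prediction network over encoded request parameters could look like this:

```python
import torch
import torch.nn as nn

class PriorityNet(nn.Module):
    """Maps an encoded request-parameter vector to a scalar priority in (0, 1)."""
    def __init__(self, in_dim: int = 8, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

def train_priority_net(model: PriorityNet, features: torch.Tensor,
                       labels: torch.Tensor, epochs: int = 200, lr: float = 1e-3):
    """Fit the model to (training request parameters, priority label) pairs with MSE."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(features), labels)
        loss.backward()
        optimizer.step()
    return model
```

An RNN or LSTM encoder could replace the first linear layer when the request parameters form a sequence.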
Through this embodiment, the priority parameters of the face images can be predicted with the priority prediction neural network model and the face images ranked to determine their recognition levels, so that a more accurate recognition strategy can be determined subsequently, achieving more accurate and reasonable face recognition when multiple faces appear in the same image.
As an optional embodiment, the preset parameter weights include a time parameter weight and a location weight; the time parameter weight is proportional to the time difference between the recognition initiation time corresponding to the face image and the current time; the location weight is inversely proportional to the distance between the location of the recognition object corresponding to the face image and the current position of the acquisition device corresponding to the target image.
The specific weight values may be computed by a weight determination algorithm or by weight calculation rules derived from the operator's practical experience; one possible formulation is sketched below.
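The sketch below is consistent with the proportionality relations above but uses assumed scaling constants and an assumed way of combining the two weights (their sum); priority is the output of the priority prediction model.

```python
from dataclasses import dataclass

@dataclass
class ScoredFace:
    image_id: str
    priority: float         # priority parameter from the prediction model
    initiation_time: float  # recognition initiation time (epoch seconds)
    distance_m: float       # requester's distance from the acquisition device

def level_parameter(face: ScoredFace, now: float,
                    time_scale: float = 60.0, dist_scale: float = 10.0) -> float:
    # Time weight grows with how long the request has been waiting.
    time_weight = max(now - face.initiation_time, 0.0) / time_scale
    # Location weight shrinks as the requester gets farther from the device.
    location_weight = 1.0 / (1.0 + face.distance_m / dist_scale)
    return face.priority * (time_weight + location_weight)

def recognition_levels(faces, now: float) -> dict:
    """Sort by level parameter (descending); the rank in the sequence is the recognition level."""
    ordered = sorted(faces, key=lambda f: level_parameter(f, now), reverse=True)
    return {f.image_id: rank + 1 for rank, f in enumerate(ordered)}
```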
Through this embodiment, the time parameter weight and the location weight can be used to adjust the priority parameters of the face images so as to obtain accurate level parameters and determine the recognition levels, so that a more accurate recognition strategy can be determined subsequently, achieving more accurate and reasonable face recognition when multiple faces appear in the same image.
As an optional embodiment, the determining, according to the device parameters of the acquisition device and the current position, the device scene corresponding to the acquisition device includes:
inputting the device parameters and the current position of the acquisition device into a trained scene prediction neural network model to obtain the device scene corresponding to the acquisition device; the scene prediction neural network model is trained on a training data set comprising a plurality of training device parameters, position labels, and corresponding device scene labels; the device scene includes at least one of a business meeting, a restaurant, a business party, a family party, and a wedding venue. A minimal sketch of such a scene classifier is given below.
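A sketch of such a scene classifier, under assumptions about how the device parameters are encoded (a fixed-length numeric vector) and how position is represented (latitude/longitude):

```python
import torch
import torch.nn as nn

DEVICE_SCENES = ["business meeting", "restaurant", "business party",
                 "family party", "wedding"]

class SceneNet(nn.Module):
    """Classifies an encoded device-parameter vector plus a (lat, lon) position into a scene."""
    def __init__(self, device_dim: int = 6, hidden: int = 32,
                 n_scenes: int = len(DEVICE_SCENES)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(device_dim + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, n_scenes),
        )

    def forward(self, device_feats: torch.Tensor, position: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([device_feats, position], dim=-1))

def predict_scene(model: SceneNet, device_feats: torch.Tensor,
                  position: torch.Tensor) -> str:
    """Return the predicted device scene label for a single sample."""
    with torch.no_grad():
        logits = model(device_feats, position)
    return DEVICE_SCENES[int(logits.argmax(dim=-1))]
```

Training with a cross-entropy loss over the scene labels follows the same pattern as the priority network above.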
Through this embodiment, the device scene corresponding to the acquisition device can be predicted with the scene prediction neural network model, so that a more accurate recognition strategy can be determined subsequently, achieving more accurate and reasonable face recognition when multiple faces appear in the same image.
As an optional embodiment, the determining, according to the device scene and the recognition level corresponding to each face image, the recognition strategy corresponding to the plurality of face images includes:
determining at least one scene recognition rule according to the device scene and a preset correspondence between scenes and rules; a scene recognition rule constrains, for a specific device scene, the order between face images corresponding to different recognition initiation objects;
determining the recognition strategy corresponding to the face images based on a dynamic programming algorithm, according to the at least one scene recognition rule and the recognition level corresponding to each face image; a sketch of the scene-to-rule lookup follows.
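The correspondence between scenes and rules can be held in a simple lookup table; the scene names and roles below are purely hypothetical examples of recognition initiation objects, not values taken from the specification.

```python
# Hypothetical scene-to-rule correspondence. Each rule (a, b) means: in that scene,
# face images whose recognition was initiated by role `a` must be recognized
# before face images initiated by role `b`.
SCENE_RULES = {
    "business meeting": [("host", "attendee")],
    "wedding":          [("couple", "guest")],
    "family party":     [("elder", "child")],
}

def rules_for_scene(scene: str):
    """Return the ordering rules that apply to the predicted device scene."""
    return SCENE_RULES.get(scene, [])
```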
Through this embodiment, at least one scene recognition rule can be determined from the preset correspondence between scenes and rules, and the recognition strategy corresponding to the face images can be determined based on a dynamic programming algorithm, so that a more accurate recognition strategy can be determined and more accurate and reasonable face recognition achieved when multiple faces appear in the same image.
As an optional embodiment, the determining, according to the at least one scene recognition rule and the recognition level corresponding to each face image, the recognition strategy corresponding to the plurality of face images based on a dynamic programming algorithm includes:
setting an objective function, the objective function being that the number of face images selected for recognition in the recognition strategy is maximized and the sum of the level difference parameters corresponding to all face images in the recognition strategy is minimized; the level difference parameter of a face image is the difference between its recognition level and its recognition order in the recognition strategy;
setting a constraint, the constraint being that the recognition order of any two face images in the recognition strategy does not violate the scene recognition rules;
computing the recognition strategy corresponding to the face images based on a dynamic programming algorithm according to the objective function and the constraint, as sketched below.
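A minimal sketch of this optimization, assuming every face can be scheduled (which also maximizes the number of recognized images whenever the rules are consistent) and using a bitmask dynamic program that is practical for the small number of faces found in a single image:

```python
def plan_recognition(levels: dict, roles: dict, rules: list):
    """Order the face images so that the sum of |recognition level - recognition
    position| is minimized and no scene recognition rule is violated.

    levels: {image_id: recognition level (1 = first)}
    roles:  {image_id: recognition initiation object, e.g. "host"}
    rules:  [(role_a, role_b)] meaning role_a's faces come before role_b's faces.
    Returns the recognition order as a list of image ids, or None if the rules conflict.
    """
    ids = list(levels)
    n = len(ids)
    must_precede = {i: set() for i in range(n)}  # j in must_precede[i] => j before i
    for a, b in rules:
        for i in range(n):
            if roles[ids[i]] != b:
                continue
            for j in range(n):
                if j != i and roles[ids[j]] == a:
                    must_precede[i].add(j)

    INF = float("inf")
    dp = [INF] * (1 << n)        # dp[mask]: best cost after placing the faces in mask
    parent = [None] * (1 << n)   # (previous mask, index of the face placed last)
    dp[0] = 0.0
    for mask in range(1 << n):
        if dp[mask] == INF:
            continue
        position = bin(mask).count("1") + 1      # next recognition position
        for i in range(n):
            if mask & (1 << i):
                continue
            if any(not (mask & (1 << j)) for j in must_precede[i]):
                continue                          # a required predecessor is missing
            cost = dp[mask] + abs(levels[ids[i]] - position)
            nxt = mask | (1 << i)
            if cost < dp[nxt]:
                dp[nxt] = cost
                parent[nxt] = (mask, i)

    full = (1 << n) - 1
    if n and dp[full] == INF:
        return None
    order, mask = [], full
    while mask:
        mask, i = parent[mask]
        order.append(ids[i])
    return list(reversed(order))
```

The state space is 2^n, which is acceptable for the handful of faces typically present in one image; larger instances would call for a different formulation.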
Through this embodiment, the recognition strategy corresponding to the plurality of face images can be computed based on a dynamic programming algorithm from the objective function and the constraint, so that a more accurate recognition strategy can be determined and more accurate and reasonable face recognition achieved when multiple faces appear in the same image.
Example 2
Referring to fig. 2, fig. 2 is a schematic structural diagram of a face recognition system based on priority screening according to an embodiment of the present invention. The system described in fig. 2 may be applied to a corresponding data processing device, data processing terminal, or data processing server, where the server may be a local server or a cloud server; the embodiments of the present invention are not limited in this regard. As shown in fig. 2, the system may include:
an acquisition module 201, configured to perform real-time recognition on a target image to obtain a plurality of face images and the recognition request sent by the user corresponding to each face image;
a first determining module 202, configured to determine the recognition level corresponding to each face image according to the request parameters of the recognition requests and a preset level determination algorithm;
a second determining module 203, configured to acquire the current position of the acquisition device corresponding to the target image and determine the device scene corresponding to the acquisition device according to the device parameters of the acquisition device and the current position;
a third determining module 204, configured to determine the recognition strategy corresponding to the plurality of face images according to the device scene and the recognition level corresponding to each face image; the recognition strategy defines whether each face image is recognized and the recognition order.
As an optional embodiment, the request parameters include the recognition initiation object, the recognition initiation purpose, the recognition initiation time, and the location of the recognition object; and/or the device parameters include at least one of a device type, a device name, and a device component parameter.
As an optional embodiment, the specific manner in which the first determining module 202 determines the recognition level corresponding to each face image according to the request parameters of the recognition requests and a preset level determination algorithm includes:
inputting the request parameters of each recognition request into a trained priority prediction neural network model to obtain the priority parameter corresponding to each face image; the priority prediction neural network model is trained on a training data set comprising a plurality of training request parameters and corresponding priority labels;
multiplying the priority parameter corresponding to each face image by the preset parameter weights to obtain the level parameter corresponding to each face image;
sorting all the face images in descending order of level parameter to obtain an image sequence, and taking the position of each face image in the image sequence as the recognition level corresponding to that face image.
As an optional embodiment, the preset parameter weights include a time parameter weight and a location weight; the time parameter weight is proportional to the time difference between the recognition initiation time corresponding to the face image and the current time; the location weight is inversely proportional to the distance between the location of the recognition object corresponding to the face image and the current position of the acquisition device corresponding to the target image.
As an optional embodiment, the specific manner in which the second determining module 203 determines the device scene corresponding to the acquisition device according to the device parameters of the acquisition device and the current position includes:
inputting the device parameters and the current position of the acquisition device into a trained scene prediction neural network model to obtain the device scene corresponding to the acquisition device; the scene prediction neural network model is trained on a training data set comprising a plurality of training device parameters, position labels, and corresponding device scene labels; the device scene includes at least one of a business meeting, a restaurant, a business party, a family party, and a wedding venue.
As an optional embodiment, the specific manner in which the third determining module 204 determines the recognition strategy corresponding to the plurality of face images according to the device scene and the recognition level corresponding to each face image includes:
determining at least one scene recognition rule according to the device scene and a preset correspondence between scenes and rules; a scene recognition rule constrains, for a specific device scene, the order between face images corresponding to different recognition initiation objects;
determining the recognition strategy corresponding to the face images based on a dynamic programming algorithm, according to the at least one scene recognition rule and the recognition level corresponding to each face image.
As an optional embodiment, the specific manner in which the third determining module 204 determines the recognition strategy corresponding to the plurality of face images based on a dynamic programming algorithm, according to the at least one scene recognition rule and the recognition level corresponding to each face image, includes:
setting an objective function, the objective function being that the number of face images selected for recognition in the recognition strategy is maximized and the sum of the level difference parameters corresponding to all face images in the recognition strategy is minimized; the level difference parameter of a face image is the difference between its recognition level and its recognition order in the recognition strategy;
setting a constraint, the constraint being that the recognition order of any two face images in the recognition strategy does not violate the scene recognition rules;
computing the recognition strategy corresponding to the face images based on a dynamic programming algorithm according to the objective function and the constraint.
For the details and technical effects of the modules in this embodiment, refer to the description in Example 1; they are not repeated here.
Example 3
Referring to fig. 3, fig. 3 is a schematic structural diagram of another face recognition system based on priority screening according to an embodiment of the present invention. As shown in fig. 3, the system may include:
a memory 301 storing executable program code;
a processor 302 coupled with the memory 301;
the processor 302 invokes executable program code stored in the memory 301 to perform some or all of the steps in the priority screening-based face recognition method disclosed in the first embodiment of the present invention.
Example 4
The embodiment of the invention discloses a computer storage medium which stores computer instructions for executing part or all of the steps in the face recognition method based on priority screening disclosed in the embodiment of the invention when the computer instructions are called.
The system embodiments described above are merely illustrative. The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules, i.e., they may be located in one place or distributed over a plurality of network nodes. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without inventive effort.
From the above detailed description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus necessary general hardware platforms, or of course by means of hardware. Based on such understanding, the foregoing technical solutions may be embodied essentially or in part in the form of a software product that may be stored in a computer-readable storage medium including Read-Only Memory (ROM), random-access Memory (Random Access Memory, RAM), programmable Read-Only Memory (Programmable Read-Only Memory, PROM), erasable programmable Read-Only Memory (Erasable Programmable Read Only Memory, EPROM), one-time programmable Read-Only Memory (OTPROM), electrically erasable programmable Read-Only Memory (EEPROM), compact disc Read-Only Memory (Compact Disc Read-Only Memory, CD-ROM) or other optical disc Memory, magnetic disc Memory, tape Memory, or any other medium that can be used for computer-readable carrying or storing data.
Finally, it should be noted that the face recognition method and system based on priority screening disclosed in the embodiments of the present invention are provided only to illustrate the technical solutions of the invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions recorded in the various embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (10)

1. A face recognition method based on priority screening, the method comprising:
performing real-time recognition on a target image to obtain a plurality of face images and the recognition request sent by the user corresponding to each face image;
determining the recognition level corresponding to each face image according to the request parameters of the recognition requests and a preset level determination algorithm;
acquiring the current position of the acquisition device corresponding to the target image, and determining the device scene corresponding to the acquisition device according to the device parameters of the acquisition device and the current position;
determining the recognition strategy corresponding to the face images according to the device scene and the recognition level corresponding to each face image; the recognition strategy defines whether each face image is recognized and the recognition order.
2. The face recognition method based on priority screening according to claim 1, wherein the request parameters include the recognition initiation object, the recognition initiation purpose, the recognition initiation time, and the location of the recognition object; and/or the device parameters include at least one of a device type, a device name, and a device component parameter.
3. The face recognition method based on priority screening according to claim 2, wherein the determining the recognition level corresponding to each face image according to the request parameters of the recognition requests and a preset level determination algorithm includes:
inputting the request parameters of each recognition request into a trained priority prediction neural network model to obtain the priority parameter corresponding to each face image; the priority prediction neural network model is trained on a training data set comprising a plurality of training request parameters and corresponding priority labels;
multiplying the priority parameter corresponding to each face image by the preset parameter weights to obtain the level parameter corresponding to each face image;
sorting all the face images in descending order of level parameter to obtain an image sequence, and taking the position of each face image in the image sequence as the recognition level corresponding to that face image.
4. The face recognition method based on priority screening according to claim 3, wherein the preset parameter weights include a time parameter weight and a location weight; the time parameter weight is proportional to the time difference between the recognition initiation time corresponding to the face image and the current time; the location weight is inversely proportional to the distance between the location of the recognition object corresponding to the face image and the current position of the acquisition device corresponding to the target image.
5. The face recognition method based on priority screening according to claim 1, wherein the determining, according to the device parameters of the acquisition device and the current position, the device scene corresponding to the acquisition device includes:
inputting the device parameters and the current position of the acquisition device into a trained scene prediction neural network model to obtain the device scene corresponding to the acquisition device; the scene prediction neural network model is trained on a training data set comprising a plurality of training device parameters, position labels, and corresponding device scene labels; the device scene includes at least one of a business meeting, a restaurant, a business party, a family party, and a wedding venue.
6. The face recognition method based on priority screening according to claim 1, wherein the determining the recognition strategy corresponding to the face images according to the device scene and the recognition level corresponding to each face image includes:
determining at least one scene recognition rule according to the device scene and a preset correspondence between scenes and rules; a scene recognition rule constrains, for a specific device scene, the order between face images corresponding to different recognition initiation objects;
determining the recognition strategy corresponding to the face images based on a dynamic programming algorithm, according to the at least one scene recognition rule and the recognition level corresponding to each face image.
7. The face recognition method based on priority screening according to claim 6, wherein the determining the recognition strategy corresponding to the face images based on a dynamic programming algorithm, according to the at least one scene recognition rule and the recognition level corresponding to each face image, includes:
setting an objective function, the objective function being that the number of face images selected for recognition in the recognition strategy is maximized and the sum of the level difference parameters corresponding to all face images in the recognition strategy is minimized; the level difference parameter of a face image is the difference between its recognition level and its recognition order in the recognition strategy;
setting a constraint, the constraint being that the recognition order of any two face images in the recognition strategy does not violate the scene recognition rules;
computing the recognition strategy corresponding to the face images based on a dynamic programming algorithm according to the objective function and the constraint.
8. A priority screening-based face recognition system, the system comprising:
an acquisition module, configured to perform real-time recognition on a target image to obtain a plurality of face images and the recognition request sent by the user corresponding to each face image;
a first determining module, configured to determine the recognition level corresponding to each face image according to the request parameters of the recognition requests and a preset level determination algorithm;
a second determining module, configured to acquire the current position of the acquisition device corresponding to the target image and determine the device scene corresponding to the acquisition device according to the device parameters of the acquisition device and the current position;
a third determining module, configured to determine the recognition strategy corresponding to the face images according to the device scene and the recognition level corresponding to each face image; the recognition strategy defines whether each face image is recognized and the recognition order.
9. A priority screening-based face recognition system, the system comprising:
a memory storing executable program code;
a processor coupled to the memory;
the processor invokes the executable program code stored in the memory to perform the priority screening based face recognition method of any one of claims 1-7.
10. A computer storage medium storing computer instructions which, when invoked, are operable to perform a priority screening based face recognition method according to any one of claims 1-7.
CN202311606317.5A 2023-11-28 2023-11-28 Face recognition method and system based on priority screening Active CN117523638B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311606317.5A CN117523638B (en) 2023-11-28 2023-11-28 Face recognition method and system based on priority screening

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311606317.5A CN117523638B (en) 2023-11-28 2023-11-28 Face recognition method and system based on priority screening

Publications (2)

Publication Number Publication Date
CN117523638A true CN117523638A (en) 2024-02-06
CN117523638B CN117523638B (en) 2024-05-17

Family

ID=89747529

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311606317.5A Active CN117523638B (en) 2023-11-28 2023-11-28 Face recognition method and system based on priority screening

Country Status (1)

Country Link
CN (1) CN117523638B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107622246A (en) * 2017-09-26 2018-01-23 广东欧珀移动通信有限公司 Face identification method and Related product
CN107657161A (en) * 2017-09-12 2018-02-02 广东欧珀移动通信有限公司 Method of mobile payment and Related product based on recognition of face
WO2020037937A1 (en) * 2018-08-20 2020-02-27 深圳壹账通智能科技有限公司 Facial recognition method and apparatus, terminal, and computer readable storage medium
WO2020113563A1 (en) * 2018-12-07 2020-06-11 北京比特大陆科技有限公司 Facial image quality evaluation method, apparatus and device, and storage medium
CN112488078A (en) * 2020-12-23 2021-03-12 浙江大华技术股份有限公司 Face comparison method and device and readable storage medium
CN115826972A (en) * 2022-11-29 2023-03-21 深圳市有方科技股份有限公司 Face recognition method and device, computer equipment and storage medium
CN115880753A (en) * 2022-11-30 2023-03-31 中国工商银行股份有限公司 Face recognition processing method and device
CN116665272A (en) * 2023-05-30 2023-08-29 北京首都国际机场股份有限公司 Airport scene face recognition fusion decision method and device, electronic equipment and medium
CN117221476A (en) * 2023-11-09 2023-12-12 广州视声智能科技有限公司 Visual dialogue method and system based on priority screening

Also Published As

Publication number Publication date
CN117523638B (en) 2024-05-17

Similar Documents

Publication Publication Date Title
CN108805091B (en) Method and apparatus for generating a model
CN106682906B (en) Risk identification and service processing method and equipment
CN110991380A (en) Human body attribute identification method and device, electronic equipment and storage medium
CN109858441A (en) A kind of monitoring abnormal state method and apparatus for construction site
CN116910707B (en) Model copyright management method and system based on equipment history record
CN112953920B (en) Monitoring management method based on cloud mobile phone
CN111241388A (en) Multi-policy recall method and device, electronic equipment and readable storage medium
CN111611085A (en) Man-machine hybrid enhanced intelligent system, method and device based on cloud edge collaboration
CN117170873A (en) Resource pool management method and system based on artificial intelligence
CN117632905B (en) Database management method and system based on cloud use records
CN117221476B (en) Visual dialogue method and system based on priority screening
CN110163151B (en) Training method and device of face model, computer equipment and storage medium
CN117235873B (en) Smart home layout method and system based on historical work record
CN117523638B (en) Face recognition method and system based on priority screening
CN114143734A (en) Data processing method and device for 5G Internet of things network card flow acquisition
CN112381151B (en) Method and device for determining similar videos
CN115239068A (en) Target task decision method and device, electronic equipment and storage medium
CN114529144A (en) Cloud service quality assessment method and device for government affair cloud and storage medium
CN112905987B (en) Account identification method, device, server and storage medium
CN116823264A (en) Risk identification method, risk identification device, electronic equipment, medium and program product
CN110020728B (en) Service model reinforcement learning method and device
CN117912456B (en) Voice recognition method and system based on data prediction
CN117615359B (en) Bluetooth data transmission method and system based on multiple rule engines
CN117201502B (en) Intelligent cloud server access method and system based on artificial intelligence
CN116797405B (en) Engineering data processing method and system based on data intercommunication of participating parties

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant