WO2021027343A1 - Human face image recognition method and apparatus, electronic device, and storage medium - Google Patents
- Publication number
- WO2021027343A1 (PCT/CN2020/089012)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- face image
- image data
- face
- feature
- features
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
Definitions
- the present disclosure relates to the field of computer vision technology, and in particular to a face image recognition method and apparatus, an electronic device, and a storage medium.
- Face recognition applications based on deep learning are currently very common.
- the performance of the face recognition model obtained through deep learning is closely related to the type of data used for training.
- effectiveness refers to how much the data improves the face recognition model, that is, how much information helpful for model training can be mined from it.
- the present disclosure proposes a technical solution for facial image recognition.
- a face image recognition method including:
- an unpaired face image data pair is obtained; wherein the unpaired face image data pair is used to characterize the features of two face images belonging to different faces;
- the face image recognition network is trained to obtain the target recognition network for recognizing the face image.
- the face image recognition network is trained to obtain the target recognition network for recognizing face images. Compared with the previous face image recognition network, the target recognition network is more complete; when it is used to recognize a face image to be recognized, the recognition efficiency and accuracy of the face image are improved.
- the extracting the image data to be processed belonging to different faces from the face image data includes:
- extracting features of the face image in the face image data according to the face image recognition network;
- the features belonging to different face images are used as the image data to be processed.
- the face image features in the face image data can be extracted according to the face image recognition network; since image features of two different faces are needed to form an unpaired face image data pair, features belonging to different face images are used as the image data to be processed.
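The feature-extraction step above can be sketched as follows; the `extract_feature` stand-in (flatten plus L2-normalisation) and the identity labels are illustrative assumptions, not the patent's actual recognition network.

```python
import numpy as np

def extract_feature(image: np.ndarray) -> np.ndarray:
    # Placeholder for the recognition network's feature extractor:
    # here we simply flatten and L2-normalise the pixels.
    v = image.astype(np.float32).ravel()
    return v / (np.linalg.norm(v) + 1e-12)

# face_image_data: images tagged with the face (identity) they belong to
face_image_data = [
    ("face_a", np.random.rand(4, 4)),
    ("face_a", np.random.rand(4, 4)),
    ("face_b", np.random.rand(4, 4)),
]

# "Image data to be processed": features keyed by identity, so that
# features from *different* faces can later be paired up.
to_be_processed = {}
for face_id, img in face_image_data:
    to_be_processed.setdefault(face_id, []).append(extract_feature(img))

print(sorted(to_be_processed))  # ['face_a', 'face_b']
```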
- the obtaining an unpaired face image data pair according to the to-be-processed image data belonging to different faces includes:
- the features belonging to different faces include at least a first feature in a first face and a second feature in a second face;
- the first face and the second face are constructed as the face image data pair.
- the face image recognition network is trained accordingly. Therefore, if the similarity obtained at least according to the first feature of the first face and the second feature of the second face meets the preset condition, the first face and the second face are constructed as the face image data pair.
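The pair-construction rule above can be sketched as follows, assuming cosine similarity as the similarity measure and an illustrative threshold of 0.8 as the preset condition; the image names and feature values are hypothetical.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Features of images from two *different* faces (toy values).
first_face_feats = {"img_a1": np.array([1.0, 0.2, 0.0])}
second_face_feats = {"img_b1": np.array([0.9, 0.3, 0.1]),
                     "img_b2": np.array([-0.8, 0.1, 0.5])}

threshold = 0.8  # preset condition: similarity must reach this value
pairs = []
for name_a, fa in first_face_feats.items():
    for name_b, fb in second_face_feats.items():
        if cosine(fa, fb) >= threshold:
            # These two faces are constructed as an unpaired data pair.
            pairs.append((name_a, name_b))

print(pairs)  # [('img_a1', 'img_b1')]
```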
- before the training of the face image recognition network, the method further includes:
- the sampling order is obtained.
- before training the face image recognition network, the sampling order needs to be obtained according to the feature correlation between the face image data pairs, so that sample data can be extracted from the training samples according to the sampling order, which is beneficial to the training of the face image recognition network. If the sampling order is not considered, for example under random sampling, the training effect of the face image recognition network is bound to be reduced.
- the obtaining the sampling order according to the feature correlation between the face image data pairs includes:
- the traversal path obtained by traversing the KD-Tree is used as the sampling order.
- a KD-Tree is adopted, and features with close feature correlation between face image data pairs are used as adjacent nodes of the KD-Tree. After traversing the KD-Tree to obtain a traversal path, the traversal path can be used as the sampling order; extracting sample data from training samples according to the sampling order is beneficial to the training of the face image recognition network.
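A minimal sketch of this idea, assuming a median-split KD-Tree over toy 2-D pair features and an in-order traversal as the traversal path; the patent does not specify the exact tree construction or traversal, so this is one plausible reading.

```python
import numpy as np

def build_kdtree(points, indices, depth=0):
    # Recursively split on alternating axes; the median point becomes the node,
    # so features that are close end up in neighbouring subtrees.
    if not indices:
        return None
    axis = depth % points.shape[1]
    order = sorted(indices, key=lambda i: points[i][axis])
    mid = len(order) // 2
    return {"index": order[mid],
            "left": build_kdtree(points, order[:mid], depth + 1),
            "right": build_kdtree(points, order[mid + 1:], depth + 1)}

def inorder(node, out):
    # In-order traversal: correlated features appear adjacently in the path.
    if node is None:
        return
    inorder(node["left"], out)
    out.append(node["index"])
    inorder(node["right"], out)

# Toy pair features: 0 is similar to 2, and 1 is similar to 3.
feats = np.array([[0.9, 0.1], [0.1, 0.9], [0.85, 0.15], [0.15, 0.85]])
tree = build_kdtree(feats, list(range(len(feats))))
path = []
inorder(tree, path)
print(path)  # traversal path used as the sampling order
```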
- the method further includes:
- the face image data pairs read according to the sampling order are used as training samples input to the face image recognition network.
- extracting sample data from training samples according to the sampling sequence is beneficial to the training of the face image recognition network.
- the face image data pair is derived from at least a first face image set used for face training and a second face image set obtained by collecting faces, and the faces in the two face image sets are different.
- the face image data pair can be obtained from two pre-divided face image sets in which the faces are not the same, thereby avoiding the cost of filtering out different faces from a single face image set, so that the sample data used to train the face image recognition network, namely the unpaired face image data pairs, can be obtained quickly.
- the training of the face image recognition network includes:
- the sample feature includes the feature extracted from the first face image set, and the sample feature set is obtained through multiple iterations.
- the features extracted from the first face image set are saved as sample features, so that more features are retained; iteratively training the network with more reference features in the next round is conducive to the training of the face image recognition network.
- the training of the face image recognition network further includes:
- the loss function in each iteration can be calculated based on the current face features extracted from the second face image set and the sample features saved in previous iterations, so that the face image recognition network can be trained based on back propagation of the loss function. The target recognition network obtained in this way is more complete than the previous face image recognition network, and the recognition efficiency and accuracy of the face image can be improved.
- a face image recognition device including:
- the extraction unit is used to extract the to-be-processed image data belonging to different faces from the face image data;
- the first processing unit is configured to obtain unpaired face image data pairs according to the to-be-processed image data belonging to different faces; wherein, the unpaired face image data pairs are used to characterize the features of two face images belonging to different faces.
- the second processing unit is configured to train the face image recognition network according to the unpaired face image data pair to obtain a target recognition network for recognizing the face image.
- the extraction unit is further configured to:
- extracting features of the face image in the face image data according to the face image recognition network;
- the features belonging to different face images are used as the image data to be processed.
- the first processing unit is further configured to:
- the features belonging to different faces include at least a first feature in a first face and a second feature in a second face;
- the first face and the second face are constructed as the face image data pair.
- the device further includes a third processing unit, configured to:
- the sampling order is obtained.
- the third processing unit is further configured to:
- the traversal path obtained by traversing the KD-Tree is used as the sampling order.
- the second processing unit is further configured to:
- the face image data pairs read according to the sampling order are used as training samples input to the face image recognition network.
- the face image data pair is derived from at least a first face image set used for face training and a second face image set obtained by collecting faces, and the faces in the two face image sets are different.
- the second processing unit is further configured to:
- the sample feature includes the feature extracted from the first face image set, and the sample feature set is obtained through multiple iterations.
- the second processing unit is further configured to:
- an electronic device including:
- a memory for storing processor executable instructions
- the processor is configured to execute the above-mentioned face image recognition method.
- a computer-readable storage medium having computer program instructions stored thereon, and when the computer program instructions are executed by a processor, the above-mentioned face image recognition method is realized.
- a computer program including computer-readable code; when the computer-readable code is run in an electronic device, a processor in the electronic device executes the above face image recognition method.
- the to-be-processed image data belonging to different faces is extracted from the face image data; the unpaired face image data pairs are obtained according to the to-be-processed image data belonging to different faces, where the unpaired face image data pairs are used to characterize the features of two face images belonging to different faces; and the face image recognition network is trained according to the unpaired face image data pairs to obtain the target recognition network for recognizing face images.
- the face image recognition network is trained to obtain the target recognition network for recognizing face images. The target recognition network is more complete than the previous face image recognition network; when it recognizes the face image to be recognized, the recognition efficiency and accuracy of the face image can be improved.
- Fig. 1 shows a flowchart of a face image recognition method according to an embodiment of the present disclosure.
- Fig. 2 shows a flowchart of a face image recognition method according to an embodiment of the present disclosure.
- Fig. 3 shows a flowchart of a training process of a face image recognition network according to an embodiment of the present disclosure.
- Fig. 4 shows a flowchart of the training process of a face image recognition network according to an embodiment of the present disclosure.
- Fig. 5 shows a block diagram of a face image recognition device according to an embodiment of the present disclosure.
- Fig. 6 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
- Fig. 7 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
- the performance of the face recognition network obtained through deep learning is closely related to the type of data used for training.
- the face recognition network can be trained (for example, by incremental training) on face data collected in that scene.
- incremental training refers to training based on new training samples so as to continuously learn new knowledge from them, while retaining most of the previously learned historical knowledge, such as two pieces of information obtained from the same face.
- the present disclosure adds a process of training on pairwise unpaired face image data obtained from different faces.
- the collected face images can be used to construct unlabeled data for unsupervised training.
- the face images are fed into the face recognition network for training in a "pair" manner. Since this unsupervised training method only applies constraints within each pair of faces, even if multiple pairs of face images are fed into the face recognition network, there are no constraints between different pairs of face images. Therefore, more effective information helpful for training the face recognition network cannot be mined, so the trained face recognition network obtained by this training method (such as the target recognition network used to recognize face images) has low processing efficiency and low recognition accuracy.
- image data to be processed belonging to different faces may be used, and unpaired face image data pairs are obtained according to this image data, so that the above-mentioned incremental training is performed based on the unpaired face image data pairs. Because of the constraints between different pairs of face images, more effective information helpful for training the face recognition network can be mined, so the trained face recognition network obtained by the training method of the present disclosure (such as the target recognition network used to recognize face images) has relatively high processing efficiency and improved recognition accuracy.
- Fig. 1 shows a flowchart of a face image recognition method according to an embodiment of the present disclosure.
- the face image recognition method is applied to a face image recognition device.
- the face image recognition device can be implemented by a terminal device or a server or other processing equipment.
- the terminal device can be user equipment (UE), a mobile device, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc.
- the face image recognition method can be implemented by a processor calling computer-readable instructions stored in a memory. As shown in Figure 1, the process includes:
- Step S101 Extract image data to be processed belonging to different faces from the face image data.
- face image data is acquired, and the face image data is image data of multiple different faces.
- the features of the face image in the face image data are extracted according to the face image recognition network; for example, the feature extraction function module in the face image recognition network can be used to extract the features of the face image in the face image data.
- the features belonging to different face images are used as the image data to be processed, and the image data to be processed is composed of multiple face features, including multiple face features of the same face and multiple face features of different faces.
- Step S102 Obtain an unpaired face image data pair according to the to-be-processed image data belonging to different faces; wherein the unpaired face image data pair is used to represent two faces belonging to different faces The characteristics of the image.
- the to-be-processed image data belonging to different faces may be multiple features obtained after feature extraction of image data of multiple different faces; the similarity between every two of the multiple features is calculated, and if the similarity between two features meets the preset condition, the face images corresponding to those two features are queried and the face image data pair is constructed according to the queried face images.
- the face image data pair (such as an unpaired face image pair) can also be called "paired" unlabeled data; that is, in the subsequent training process, the unpaired face images are regarded as unlabeled data and input to the face image recognition network in pairs to train the face image recognition network.
- the terms "first" and "second" are used to distinguish different features derived from different face images.
- the aforementioned features belonging to different faces include at least a first feature in a first face and a second feature in a second face.
- the similarity obtained according to the first feature and the second feature meets the preset conditions
- the first face and the second face are constructed as the face image data pair.
- Step S103 According to the unpaired face image data pair, the face image recognition network is trained to obtain a target recognition network for recognizing the face image.
- a plurality of face image data pairs are used as unlabeled data, and the face image recognition network is input in pairs to train the face image recognition network.
- the training samples used for training the face image recognition network can be obtained, namely multiple face image data pairs (such as unpaired face image pairs), where in an unpaired face image pair the two face images do not belong to the same person.
- the possible constraints (or correlations) between different pairs of face image data can be used to obtain unpaired face image data pairs to train the face image recognition network more effectively.
- the face image to be recognized can be recognized according to the target recognition network to obtain the recognition result.
- after the target recognition network used to recognize face images is obtained by training the face image recognition network, image recognition is performed according to the target recognition network, with a better recognition effect and improved recognition accuracy.
- the face image recognition method is applied to a face image recognition device.
- the face image recognition device can be implemented by a terminal device or a server or other processing equipment.
- the terminal device can be user equipment (UE), a mobile device, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc.
- the face image recognition method can be implemented by a processor calling computer-readable instructions stored in a memory. As shown in Figure 2, the process includes:
- Step S201 Extract image data to be processed belonging to different faces from the face image data.
- face image data is acquired, and the face image data is image data of multiple different faces.
- the features of the face image in the face image data are extracted according to the face image recognition network; for example, the feature extraction function module in the face image recognition network can be used to extract the features of the face image in the face image data.
- the features belonging to different face images are used as the image data to be processed, and the image data to be processed is composed of multiple face features, including multiple face features of the same face and multiple face features of different faces.
- Step S202 Obtain an unpaired face image data pair according to the to-be-processed image data belonging to different faces; wherein the unpaired face image data pair is used to represent two faces belonging to different faces The characteristics of the image.
- the to-be-processed image data belonging to different faces may be multiple features obtained after feature extraction of image data of multiple different faces; the similarity between every two of the multiple features is calculated, and if the similarity between two features meets the preset condition, the face images corresponding to those two features are queried and the face image data pair is constructed according to the queried face images.
- the face image data pair (such as an unpaired face image pair) can also be called "paired" unlabeled data; that is, in the subsequent training process, the unpaired face images are regarded as unlabeled data and input to the face image recognition network in pairs to train the face image recognition network.
- the terms "first" and "second" are used to distinguish different features derived from different face images.
- the aforementioned features belonging to different faces include at least a first feature in a first face and a second feature in a second face.
- the similarity obtained according to the first feature and the second feature meets the preset conditions
- the first face and the second face are constructed as the face image data pair.
- Step S203 Obtain a sampling order according to the feature correlation between the face image data pairs.
- the sampling order of the face pictures can be determined according to the correlation of the face features, for example, a feature set can be obtained according to the features between the face image data pairs.
- a feature tree KD-Tree is constructed according to the feature set, and features with close feature correlation between face image data pairs are used as adjacent nodes of the KD-Tree.
- the traversal path obtained by traversing the KD-Tree is used as the sampling order. Calculating the sampling order of the face images based on the correlation of the face image features makes adjacently read face images more strongly correlated; that is to say, compared with reading face images randomly, reading face images according to the sampling order yields more constraints (or correlations) generated between different face image data.
- constraints can train the face image recognition network more effectively and improve its network parameters.
- the sample features stored in the feature memory module can be combined to further improve the effective training of the face image recognition network, and improve the training efficiency and accuracy of the face image recognition network.
- Step S204 Use the face image data pairs read according to the sampling order as training samples input to the face image recognition network.
- the face image data pair is derived from at least the first face image set used for face training and the second face image set obtained by collecting faces in a real environment, and the faces in the two face image sets are different.
- the feature set obtained by extracting features of the first face images can be recorded as set A in subsequent application examples, and the feature set obtained by extracting features of the second face images can be recorded as set B in subsequent application examples; details are not repeated here.
- Step S205 Train the face image recognition network according to the training samples to obtain a target recognition network for recognizing the face image.
- a plurality of face image data pairs are used as unlabeled data, and the face image recognition network is input in pairs to train the face image recognition network.
- the training samples for training the face image recognition network can be obtained, namely multiple face image data pairs (such as unpaired face image pairs), where in an unpaired face image pair the two face images do not belong to the same person.
- the possible constraints (or correlations) between different pairs of face image data can be used to obtain training samples composed of unpaired face image data pairs, and then the training samples can be more effective To train the face image recognition network.
- the face image to be recognized can be recognized according to the target recognition network to obtain the recognition result.
- since the face image recognition network can be trained more effectively and its network parameters improved according to the training samples, after the face image recognition network is trained to obtain the target recognition network for recognizing face images, image recognition performed according to the target recognition network has a better recognition effect and improved recognition accuracy.
- training the face image recognition network includes: saving sample features in each iteration of training the face image recognition network.
- the sample features include features extracted from the first face image set, and a sample feature set is obtained through multiple iterations. In subsequent application examples, a sample feature may be denoted F_A; F_A may be stored in the feature memory module, and the sample feature set may be denoted F_M, a set composed of the stored F_A features; details are not repeated here.
- the training of the face image recognition network further includes: in each iteration, calculating the loss function according to the current face features extracted from the second face image set and all sample features in the sample feature set obtained in previous iterations, and training the face image recognition network according to back propagation of the loss function. It can be understood as: the face features retained in each iteration and the face features of previous iterations are used together to calculate the loss function; that is, the face features retained in each iteration are constrained against the face features of previous iterations to obtain more constraint information.
- the constraint information can also be called "effective information" because it can train the face image recognition network more effectively.
- the present disclosure can add a feature memory module (used to save the sample features) to the training process of the face image recognition network.
- the current iteration's face features and the sample features in the feature memory module are used together to calculate the loss function, which provides more effective information; thus, in the training process, more effective information can be used to train the face image recognition network more effectively and improve training efficiency.
- FIG. 3 shows a flowchart of a face image recognition network training process according to an embodiment of the present disclosure, as shown in Fig. 3, including:
- Step S301 Extracting features from different collected face images, and constructing training samples composed of unpaired face image pairs.
- the images in the training samples may be called training images.
- Step S302 Calculate the sampling order of the training images in the training sample during training according to the characteristics of the unpaired face image pair.
- Step S303 Read the training images in the training samples according to the calculated sampling order, and train the face image recognition network together with the sample features in the feature memory module.
- Fig. 4 shows a flowchart of a face image recognition network training process according to an embodiment of the present disclosure. Based on Figs. 3 to 4, the specific implementations involved are described as follows:
- Output: face image features and unpaired face image pairs.
- Specific implementation methods include: aligning the input face images; using the current face recognition model to extract features from the aligned face images to obtain face recognition features, and the facial image features collected from actual application scenarios are recorded as a set A.
- the original face image features of the system are recorded as set B; the cosine similarity is calculated for pairwise features in feature set B and feature set A, the obtained cosine similarity set is sorted from largest to smallest, and the top 10% is taken (the percentage is not unique and can be adjusted according to the actual situation). The cosine similarity at this critical point is used as an optimized target threshold for subsequent training of the face image recognition network.
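The threshold derivation described above can be sketched as follows, using random unit-normalised vectors as stand-ins for feature sets A and B; the 10% fraction matches the example value given here and can be tuned.

```python
import numpy as np

rng = np.random.default_rng(0)
set_a = rng.normal(size=(50, 8))   # stand-in for collected features (set A)
set_b = rng.normal(size=(40, 8))   # stand-in for original training features (set B)
set_a /= np.linalg.norm(set_a, axis=1, keepdims=True)
set_b /= np.linalg.norm(set_b, axis=1, keepdims=True)

# All pairwise cosine similarities between set B and set A
# (dot products of unit vectors are cosine similarities).
sims = (set_b @ set_a.T).ravel()
sims_sorted = np.sort(sims)[::-1]          # largest to smallest
cut = max(1, int(0.10 * sims_sorted.size)) # top-10% cut-off
threshold = float(sims_sorted[cut - 1])    # similarity at the critical point

print(round(threshold, 3))
```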
- Input: feature set A and information of the unpaired face image pairs.
- Output: the image sampling order during the training of the face image recognition network.
- the target recognition network is obtained after training, that is, the new face image recognition network.
- the specific implementation method includes: using the network parameters of the current face image recognition network to initialize the face image recognition network; and reading the unpaired face image pairs according to the calculated sampling order, where in each iteration the read unpaired face image pair includes at least two parts, I_A and I_B; I_A comes from the system's original face training images, and I_B comes from the collected face images.
- the images I_A and I_B are processed by the face image recognition network to obtain the features F_A and F_B, and F_A is saved in the feature memory module.
- the loss function is calculated from F_B and all features in the feature set F_M of the feature memory module, and the network parameters of the face image recognition network are updated.
- the loss function can be calculated as shown in equation (1), where L is the loss function; N and M are the total numbers of the corresponding features; F_M is the sample feature set composed of the stored sample features F_A; F_B is the feature of the image I_B obtained through the face image recognition network; and threshold is the critical-point cosine similarity obtained when the cosine similarity is calculated for pairwise features in feature set B and feature set A, which is regarded as the optimized target threshold for training the face image recognition network.
- sample features in the feature memory module are time-sensitive and need to be deleted periodically to update the sample features in the feature memory module. For example, if a sample feature in the feature memory module has existed for more than 100 iterations (the value is not unique and can be adjusted according to the actual training effect), the sample feature is removed from the feature memory module, and the iterative process continues until the set number of iterations is met.
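The age-based eviction of the feature memory module can be sketched as follows; the class name and API are illustrative assumptions, with the 100-iteration lifetime taken from the example above.

```python
import numpy as np

class FeatureMemory:
    """Stores sample features F_A with the iteration at which they were added."""

    def __init__(self, max_age: int = 100):
        self.max_age = max_age
        self.entries = []  # list of (iteration_added, feature)

    def add(self, iteration: int, feature: np.ndarray) -> None:
        self.entries.append((iteration, feature))

    def features(self, current_iteration: int) -> list:
        # Evict stale entries, then return the surviving sample features F_M.
        self.entries = [(t, f) for (t, f) in self.entries
                        if current_iteration - t <= self.max_age]
        return [f for (_, f) in self.entries]

mem = FeatureMemory(max_age=100)
mem.add(0, np.ones(4))      # added long ago
mem.add(150, np.zeros(4))   # added recently
alive = mem.features(current_iteration=200)
print(len(alive))  # the iteration-0 feature has aged out; one remains
```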
- the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible inner logic.
- the present disclosure also provides facial image recognition devices, electronic equipment, computer-readable storage media, and programs, all of which can be used to implement any of the facial image recognition methods provided in the present disclosure.
- for the corresponding technical solutions and descriptions, refer to the corresponding records in the method part; details are not repeated here.
- Fig. 5 shows a block diagram of a face image recognition device according to an embodiment of the present disclosure.
- the face image recognition device includes: an extracting unit 31, configured to extract, from face image data, to-be-processed image data belonging to different faces; a first processing unit 32, configured to obtain unpaired face image data pairs according to the to-be-processed image data belonging to different faces, wherein an unpaired face image data pair is used to characterize the features of two face images belonging to different faces; and a second processing unit 33, configured to train the face image recognition network according to the unpaired face image data pairs to obtain a target recognition network for recognizing face images.
- a recognition unit may also be included for recognizing the face image to be recognized according to the target recognition network to obtain a recognition result.
- The extraction unit is further configured to: extract features of face images in the face image data according to the face image recognition network; and use the features belonging to different face images as the to-be-processed image data.
- The first processing unit is further configured to: the features belonging to different faces include at least a first feature of a first face and a second feature of a second face; and in the case that the similarity obtained according to the first feature and the second feature meets a preset condition, construct the first face and the second face as a face image data pair.
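A minimal sketch of such pair construction: cosine similarity between features of different identities is compared against a preset threshold, so that only sufficiently similar cross-identity ("hard negative") pairs are kept. The function name, the threshold value, and the exhaustive pairwise scan are illustrative assumptions; the patent does not specify this implementation.

```python
import numpy as np

def build_unpaired_pairs(features, identities, sim_threshold=0.4):
    """Construct unpaired (different-identity) face pairs whose feature
    similarity meets a preset condition. Returns index pairs (i, j)."""
    feats = np.asarray(features, dtype=float)
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    pairs = []
    n = len(feats)
    for i in range(n):
        for j in range(i + 1, n):
            if identities[i] == identities[j]:
                continue  # pairs must belong to different faces
            if float(feats[i] @ feats[j]) >= sim_threshold:
                pairs.append((i, j))
    return pairs
```

For large feature sets, the quadratic scan would in practice be replaced by an approximate nearest-neighbor search.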
- the device further includes a third processing unit, configured to obtain a sampling order according to the feature correlation between the pair of face image data.
- The third processing unit is further configured to: obtain a feature set according to the features between the face image data pairs; construct a feature tree (KD-Tree) according to the feature set, wherein features with close feature correlation between face image data pairs serve as adjacent nodes of the KD-Tree; and use the traversal path obtained by traversing the KD-Tree as the sampling order.
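The KD-Tree-based sampling order can be sketched as below: splitting on the median along alternating axes keeps feature-space neighbors in nearby nodes, so an in-order traversal visits correlated pairs consecutively. This is a minimal illustration of the idea, not the patent's exact construction; the dictionary-based tree and function names are invented here.

```python
import numpy as np

def build_kdtree(points, indices=None, depth=0):
    """Build a minimal KD-Tree over pair features (rows of `points`)."""
    if indices is None:
        indices = list(range(len(points)))
    if not indices:
        return None
    axis = depth % points.shape[1]          # cycle through dimensions
    indices = sorted(indices, key=lambda i: points[i][axis])
    mid = len(indices) // 2                  # median split
    return {
        "index": indices[mid],
        "left": build_kdtree(points, indices[:mid], depth + 1),
        "right": build_kdtree(points, indices[mid + 1:], depth + 1),
    }

def traversal_order(node, out=None):
    """In-order traversal; the visit sequence serves as the sampling order."""
    if out is None:
        out = []
    if node is not None:
        traversal_order(node["left"], out)
        out.append(node["index"])
        traversal_order(node["right"], out)
    return out
```

Reading training pairs in this order means consecutive samples tend to be feature-correlated, matching the description above.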
- the second processing unit is further configured to: use the face image data pairs read according to the sampling order as training samples input to the face image recognition network.
- The face image data pairs are at least derived from a first face image set used for face training and a second face image set obtained by collecting faces in a real environment, and the faces in the two face image sets are not the same.
- The second processing unit is further configured to: save sample features in each iteration of training the face image recognition network, where the sample features include features extracted from the first face image set; a sample feature set is obtained through multiple iterations.
- The second processing unit is further configured to: calculate a loss function according to the current face features extracted from the second face image set in each iteration and all the sample features in the sample feature set obtained in the previous iteration; and train the face image recognition network according to the back propagation of the loss function.
- the functions or modules contained in the device provided in the embodiments of the present disclosure can be used to execute the methods described in the above method embodiments.
- the embodiments of the present disclosure also provide a computer-readable storage medium on which computer program instructions are stored, and the computer program instructions implement the above-mentioned method when executed by a processor.
- the computer-readable storage medium may be a non-volatile computer-readable storage medium.
- An embodiment of the present disclosure also provides an electronic device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the above method.
- the electronic device can be provided as a terminal, server or other form of device.
- The embodiments of the present disclosure also provide a computer program, which includes computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device executes instructions to implement the above method.
- Fig. 6 is a block diagram showing an electronic device 800 according to an exemplary embodiment.
- the electronic device 800 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and other terminals.
- The electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
- the processing component 802 generally controls the overall operations of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
- the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the foregoing method.
- the processing component 802 may include one or more modules to facilitate the interaction between the processing component 802 and other components.
- the processing component 802 may include a multimedia module to facilitate the interaction between the multimedia component 808 and the processing component 802.
- the memory 804 is configured to store various types of data to support operations in the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phone book data, messages, images, videos, etc.
- The memory 804 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
- the power supply component 806 provides power for various components of the electronic device 800.
- the power supply component 806 may include a power management system, one or more power supplies, and other components associated with the generation, management, and distribution of power for the electronic device 800.
- the multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
- the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
- the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
- the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
- the audio component 810 is configured to output and/or input audio signals.
- the audio component 810 includes a microphone (MIC).
- the microphone is configured to receive external audio signals.
- the received audio signal may be further stored in the memory 804 or transmitted via the communication component 816.
- the audio component 810 further includes a speaker for outputting audio signals.
- the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module.
- the peripheral interface module may be a keyboard, a click wheel, a button, and the like. These buttons may include but are not limited to: home button, volume button, start button, and lock button.
- the sensor component 814 includes one or more sensors for providing the electronic device 800 with various aspects of state evaluation.
- the sensor component 814 can detect the on/off status of the electronic device 800 and the relative positioning of the components.
- For example, the components are the display and the keypad of the electronic device 800.
- The sensor component 814 can also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of contact between the user and the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and the temperature change of the electronic device 800.
- the sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects when there is no physical contact.
- the sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
- the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
- the communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices.
- the electronic device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof.
- the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
- the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication.
- the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
- RFID radio frequency identification
- IrDA infrared data association
- UWB ultra-wideband
- Bluetooth Bluetooth
- The electronic device 800 can be implemented by one or more application specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic components, to implement the above methods.
- a non-volatile computer-readable storage medium such as a memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to complete the foregoing method.
- Fig. 7 is a block diagram showing an electronic device 900 according to an exemplary embodiment.
- the electronic device 900 may be provided as a server.
- the electronic device 900 includes a processing component 922, which further includes one or more processors, and a memory resource represented by a memory 932, for storing instructions that can be executed by the processing component 922, such as application programs.
- the application program stored in the memory 932 may include one or more modules each corresponding to a set of instructions.
- the processing component 922 is configured to execute instructions to perform the aforementioned methods.
- The electronic device 900 may also include a power supply component 926 configured to perform power management of the electronic device 900, a wired or wireless network interface 950 configured to connect the electronic device 900 to a network, and an input/output (I/O) interface 958.
- the electronic device 900 can operate based on an operating system stored in the memory 932, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM or the like.
- a non-volatile computer-readable storage medium such as the memory 932 including computer program instructions, which can be executed by the processing component 922 of the electronic device 900 to complete the foregoing method.
- the present disclosure may be a system, method, and/or computer program product.
- the computer program product may include a computer-readable storage medium loaded with computer-readable program instructions for enabling a processor to implement various aspects of the present disclosure.
- the computer-readable storage medium may be a tangible device that can hold and store instructions used by the instruction execution device.
- the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- A non-exhaustive list of computer-readable storage media includes: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disk read-only memory (CD-ROM), digital versatile disk (DVD), memory stick, floppy disk, a mechanical encoding device such as a punch card with instructions stored thereon, and any suitable combination of the foregoing.
- The computer-readable storage medium used here is not to be interpreted as a transient signal itself, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (for example, light pulses through fiber optic cables), or electrical signals transmitted through wires.
- the computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to various computing/processing devices, or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
- the network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
- The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network, and forwards the computer-readable program instructions for storage in the computer-readable storage medium in each computing/processing device.
- The computer program instructions used to perform the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, status setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
- Computer-readable program instructions can be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
- The remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
- An electronic circuit, such as a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA), can be customized using the status information of the computer-readable program instructions; the electronic circuit can execute the computer-readable program instructions to realize various aspects of the present disclosure.
- These computer-readable program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, or other programmable data processing device to produce a machine, so that when the instructions are executed by the processor of the computer or other programmable data processing device, a device that implements the functions/actions specified in one or more blocks of the flowchart and/or block diagram is produced. These computer-readable program instructions can also be stored in a computer-readable storage medium; these instructions make computers, programmable data processing apparatuses, and/or other devices work in a specific manner, so that the computer-readable medium storing the instructions includes an article of manufacture, which includes instructions for implementing various aspects of the functions/actions specified in one or more blocks of the flowchart and/or block diagram.
- Each block in the flowchart or block diagram may represent a module, program segment, or part of an instruction, and the module, program segment, or part of an instruction contains one or more executable instructions for implementing the specified logical function. The functions marked in the blocks may also occur in a different order from the order marked in the drawings; for example, two consecutive blocks can actually be executed substantially in parallel, or they can sometimes be executed in the reverse order, depending on the functions involved.
- Each block in the block diagram and/or flowchart, and combinations of blocks in the block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
Claims (21)
- 1. A face image recognition method, characterized in that the method comprises: extracting, from face image data, to-be-processed image data belonging to different faces; obtaining unpaired face image data pairs according to the to-be-processed image data belonging to different faces, wherein an unpaired face image data pair is used to characterize the features of two face images belonging to different faces; and training a face image recognition network according to the unpaired face image data pairs to obtain a target recognition network for recognizing face images.
- 2. The method according to claim 1, characterized in that extracting the to-be-processed image data belonging to different faces from the face image data comprises: extracting features of face images in the face image data according to the face image recognition network; and using the features belonging to different face images as the to-be-processed image data.
- 3. The method according to claim 2, characterized in that obtaining the unpaired face image data pairs according to the to-be-processed image data belonging to different faces comprises: the features belonging to different faces include at least a first feature of a first face and a second feature of a second face; and in the case that the similarity obtained according to the first feature and the second feature meets a preset condition, constructing the first face and the second face as a face image data pair.
- 4. The method according to claim 2, characterized in that before training the face image recognition network, the method further comprises: obtaining a sampling order according to the feature correlation between the face image data pairs.
- 5. The method according to claim 4, characterized in that obtaining the sampling order according to the feature correlation between the face image data pairs comprises: obtaining a feature set according to the features between the face image data pairs; constructing a feature tree (KD-Tree) according to the feature set, wherein features with close feature correlation between face image data pairs serve as adjacent nodes of the KD-Tree; and using the traversal path obtained by traversing the KD-Tree as the sampling order.
- 6. The method according to claim 4 or 5, characterized in that after the sampling order is obtained, the method further comprises: using the face image data pairs read according to the sampling order as training samples input to the face image recognition network.
- 7. The method according to any one of claims 1 to 6, characterized in that the face image data pairs are at least derived from a first face image set used for face training and a second face image set obtained by collecting faces, and the faces in the two face image sets are not the same.
- 8. The method according to claim 7, characterized in that training the face image recognition network comprises: saving sample features in each iteration of training the face image recognition network, wherein the sample features include features extracted from the first face image set, and a sample feature set is obtained through multiple iterations.
- 9. The method according to claim 8, characterized in that training the face image recognition network further comprises: calculating a loss function according to the current face features extracted from the second face image set in each iteration and all the sample features in the sample feature set obtained in the previous iteration; and training the face image recognition network according to the back propagation of the loss function.
- 10. A face image recognition device, characterized in that the device comprises: an extraction unit, configured to extract, from face image data, to-be-processed image data belonging to different faces; a first processing unit, configured to obtain unpaired face image data pairs according to the to-be-processed image data belonging to different faces, wherein an unpaired face image data pair is used to characterize the features of two face images belonging to different faces; and a second processing unit, configured to train a face image recognition network according to the unpaired face image data pairs to obtain a target recognition network for recognizing face images.
- 11. The device according to claim 10, characterized in that the extraction unit is further configured to: extract features of face images in the face image data according to the face image recognition network; and use the features belonging to different face images as the to-be-processed image data.
- 12. The device according to claim 11, characterized in that the first processing unit is further configured to: the features belonging to different faces include at least a first feature of a first face and a second feature of a second face; and in the case that the similarity obtained according to the first feature and the second feature meets a preset condition, construct the first face and the second face as a face image data pair.
- 13. The device according to claim 11, characterized in that the device further comprises a third processing unit, configured to obtain a sampling order according to the feature correlation between the face image data pairs.
- 14. The device according to claim 13, characterized in that the third processing unit is further configured to: obtain a feature set according to the features between the face image data pairs; construct a feature tree (KD-Tree) according to the feature set, wherein features with close feature correlation between face image data pairs serve as adjacent nodes of the KD-Tree; and use the traversal path obtained by traversing the KD-Tree as the sampling order.
- 15. The device according to claim 13 or 14, characterized in that the second processing unit is further configured to: use the face image data pairs read according to the sampling order as training samples input to the face image recognition network.
- 16. The device according to any one of claims 10 to 15, characterized in that the face image data pairs are at least derived from a first face image set used for face training and a second face image set obtained by collecting faces, and the faces in the two face image sets are not the same.
- 17. The device according to claim 16, characterized in that the second processing unit is further configured to: save sample features in each iteration of training the face image recognition network, wherein the sample features include features extracted from the first face image set, and a sample feature set is obtained through multiple iterations.
- 18. The device according to claim 17, characterized in that the second processing unit is further configured to: calculate a loss function according to the current face features extracted from the second face image set in each iteration and all the sample features in the sample feature set obtained in the previous iteration; and train the face image recognition network according to the back propagation of the loss function.
- 19. An electronic device, characterized in that it comprises: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the method according to any one of claims 1 to 9.
- 20. A computer-readable storage medium having computer program instructions stored thereon, characterized in that the computer program instructions implement the method according to any one of claims 1 to 9 when executed by a processor.
- 21. A computer program, characterized in that the computer program includes computer-readable code, and when the computer-readable code runs in an electronic device, a processor in the electronic device executes the code to implement the method according to any one of claims 1 to 9.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020217026325A KR20210114511A (en) | 2019-08-12 | 2020-05-07 | Face image recognition method and apparatus, electronic device and storage medium |
JP2021547720A JP2022520120A (en) | 2019-08-12 | 2020-05-07 | Face image recognition methods and devices, electrical equipment and storage media |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910739381.8A CN110458102A (en) | 2019-08-12 | 2019-08-12 | A kind of facial image recognition method and device, electronic equipment and storage medium |
CN201910739381.8 | 2019-08-12 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021027343A1 true WO2021027343A1 (en) | 2021-02-18 |
Family
ID=68485929
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/089012 WO2021027343A1 (en) | 2019-08-12 | 2020-05-07 | Human face image recognition method and apparatus, electronic device, and storage medium |
Country Status (5)
Country | Link |
---|---|
JP (1) | JP2022520120A (en) |
KR (1) | KR20210114511A (en) |
CN (1) | CN110458102A (en) |
TW (1) | TW202107337A (en) |
WO (1) | WO2021027343A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114255502A (en) * | 2021-12-23 | 2022-03-29 | 中国电信股份有限公司 | Face image generation method and device, face recognition method, face recognition equipment and medium |
CN115909434A (en) * | 2022-09-07 | 2023-04-04 | 以萨技术股份有限公司 | Data processing system for acquiring human face image characteristics |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110458102A (en) * | 2019-08-12 | 2019-11-15 | 深圳市商汤科技有限公司 | A kind of facial image recognition method and device, electronic equipment and storage medium |
CN111339964A (en) * | 2020-02-28 | 2020-06-26 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic equipment and storage medium |
CN112149732A (en) * | 2020-09-23 | 2020-12-29 | 上海商汤智能科技有限公司 | Image protection method and device, electronic equipment and storage medium |
CN112949634B (en) * | 2021-03-08 | 2024-04-26 | 北京交通大学 | Railway contact net nest detection method |
CN112784823B (en) * | 2021-03-17 | 2023-04-07 | 中国工商银行股份有限公司 | Face image recognition method, face image recognition device, computing equipment and medium |
CN113269425B (en) * | 2021-05-18 | 2022-06-07 | 北京航空航天大学 | Quantitative evaluation method for health state of equipment under unsupervised condition and electronic equipment |
KR20230032092A (en) | 2021-08-30 | 2023-03-07 | 주식회사 엘지에너지솔루션 | A solid electrolyte membrane and all solid-state lithium secondary battery comprising the same |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8224042B2 (en) * | 2009-03-12 | 2012-07-17 | Seiko Epson Corporation | Automatic face recognition |
CN103679158A (en) * | 2013-12-31 | 2014-03-26 | Beijing Techshino Technology Co., Ltd. | Face authentication method and device |
CN109753875A (en) * | 2018-11-28 | 2019-05-14 | Beijing Dilusense Technology Co., Ltd. | Face recognition method and device based on face attribute perception loss, and electronic equipment |
CN110458102A (en) * | 2019-08-12 | 2019-11-15 | Shenzhen SenseTime Technology Co., Ltd. | Facial image recognition method and device, electronic equipment and storage medium |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4606779B2 (en) * | 2004-06-07 | 2011-01-05 | Glory Ltd. | Image recognition apparatus, image recognition method, and program causing computer to execute the method |
JP4591215B2 (en) * | 2005-06-07 | 2010-12-01 | Hitachi, Ltd. | Facial image database creation method and apparatus |
JP6312485B2 (en) * | 2014-03-25 | 2018-04-18 | Canon Inc. | Information processing apparatus, authentication apparatus, and methods thereof |
- 2019
  - 2019-08-12 CN CN201910739381.8A patent/CN110458102A/en active Pending
- 2020
  - 2020-05-07 WO PCT/CN2020/089012 patent/WO2021027343A1/en active Application Filing
  - 2020-05-07 JP JP2021547720A patent/JP2022520120A/en active Pending
  - 2020-05-07 KR KR1020217026325A patent/KR20210114511A/en active Search and Examination
  - 2020-07-02 TW TW109122357A patent/TW202107337A/en unknown
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114255502A (en) * | 2021-12-23 | 2022-03-29 | China Telecom Corp., Ltd. | Face image generation method and apparatus, face recognition method, device, and medium |
CN114255502B (en) * | 2021-12-23 | 2024-03-29 | China Telecom Corp., Ltd. | Face image generation method and apparatus, face recognition method, device, and medium |
CN115909434A (en) * | 2022-09-07 | 2023-04-04 | Yisa Technology Co., Ltd. | Data processing system for acquiring human face image characteristics |
Also Published As
Publication number | Publication date |
---|---|
KR20210114511A (en) | 2021-09-23 |
TW202107337A (en) | 2021-02-16 |
CN110458102A (en) | 2019-11-15 |
JP2022520120A (en) | 2022-03-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021027343A1 (en) | Human face image recognition method and apparatus, electronic device, and storage medium | |
WO2021196401A1 (en) | Image reconstruction method and apparatus, electronic device and storage medium | |
JP7171884B2 (en) | Pedestrian recognition method and device | |
WO2020155627A1 (en) | Facial image recognition method and apparatus, electronic device, and storage medium | |
WO2020232977A1 (en) | Neural network training method and apparatus, and image processing method and apparatus | |
WO2021031645A1 (en) | Image processing method and apparatus, electronic device and storage medium | |
WO2022011892A1 (en) | Network training method and apparatus, target detection method and apparatus, and electronic device | |
US20200394216A1 (en) | Method and device for video processing, electronic device, and storage medium | |
TW202131281A (en) | Image processing method and apparatus, and electronic device and storage medium | |
WO2021036382A1 (en) | Image processing method and apparatus, electronic device and storage medium | |
WO2021035812A1 (en) | Image processing method and apparatus, electronic device and storage medium | |
WO2020155609A1 (en) | Target object processing method and apparatus, electronic device, and storage medium | |
WO2017092122A1 (en) | Similarity determination method, device, and terminal | |
CN110659690B (en) | Neural network construction method and device, electronic equipment and storage medium | |
CN110532956B (en) | Image processing method and device, electronic equipment and storage medium | |
CN110781957A (en) | Image processing method and device, electronic equipment and storage medium | |
WO2021208666A1 (en) | Character recognition method and apparatus, electronic device, and storage medium | |
WO2020192113A1 (en) | Image processing method and apparatus, electronic device, and storage medium | |
CN111259967B (en) | Image classification and neural network training method, device, equipment and storage medium | |
CN111582383B (en) | Attribute identification method and device, electronic equipment and storage medium | |
WO2023115911A1 (en) | Object re-identification method and apparatus, electronic device, storage medium, and computer program product | |
CN109992606A (en) | Target user mining method and apparatus, electronic device, and storage medium |
TWI751593B (en) | Network training method and device, image processing method and device, electronic equipment, computer readable storage medium and computer program | |
WO2015188589A1 (en) | User data update method and device | |
CN110286775A (en) | Dictionary management method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20852763 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2021547720 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 20217026325 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20852763 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 04.08.2022) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20852763 Country of ref document: EP Kind code of ref document: A1 |