WO2019184627A1 - Face recognition method, apparatus, server and storage medium - Google Patents

Face recognition method, apparatus, server and storage medium

Info

Publication number
WO2019184627A1
WO2019184627A1 (PCT/CN2019/075538)
Authority
WO
WIPO (PCT)
Prior art keywords
face
image
face recognition
target
recognition model
Prior art date
Application number
PCT/CN2019/075538
Other languages
English (en)
French (fr)
Inventor
谭莲芝
刘诗超
张肇勇
潘益伟
夏武
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Publication of WO2019184627A1
Priority to US16/900,461 (US11367311B2)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757Matching configurations of points or features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/30Scenes; Scene-specific elements in albums, collections or shared content, e.g. social network photos or video
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Definitions

  • the present application relates to the field of face recognition technology, and in particular, to a face recognition method, apparatus, server, and storage medium.
  • In recent years, the wide application of image management technology (such as electronic photo albums) has greatly facilitated users' management of images.
  • Image management technology generally manages images based on the face features of face images recognized by a universal face recognition model.
  • Because the universal face recognition model is trained on public face data sets, the face features it recognizes are not well suited to a specific user's images. For example, the similarity between the recognized face features of a person wearing glasses and those of another person wearing the same glasses can be high, even higher than the similarity between that person's own face with and without glasses.
  • Providing a face recognition method, device, server, and storage medium that improve the distinctiveness of the recognized face features during the application of image management technology, and thereby facilitate accurate image management, is therefore an urgent problem to be solved.
  • The present application provides a face recognition method that recognizes a face image with a target face recognition model trained on training samples formed by calibrating face images with identity information, so as to improve the distinctiveness of the recognized face features and facilitate accurate image management.
  • the present application also provides corresponding apparatus, devices, storage media, and computer program products.
  • a face recognition method applied to a server comprising:
  • the method for generating the target face recognition model includes:
  • using the training samples to train the universal face recognition model, and updating the parameters of the universal face recognition model based on a training target to obtain a target face recognition model; wherein the training target is that the universal face recognition model's prediction of the identity information of the face images in the training samples approaches the identity information with which those face images are calibrated.
  • a face recognition device comprising:
  • An image detecting unit configured to perform face detection on the image to obtain a face image
  • the model generation unit includes:
  • a training sample determining subunit for determining a training sample including a face image with identification information
  • a model training subunit configured to train the universal face recognition model with the training samples, and to update the parameters of the universal face recognition model based on a training target to obtain a target face recognition model; wherein the training target is that the universal face recognition model's prediction of the identity information of the face images in the training samples approaches the identity information with which those face images are calibrated.
  • a server comprising: at least one memory and at least one processor; the memory storing a program, the processor invoking a program stored in the memory, the program for:
  • using the training samples to train the universal face recognition model, and updating the parameters of the universal face recognition model based on a training target to obtain a target face recognition model; wherein the training target is that the universal face recognition model's prediction of the identity information of the face images in the training samples approaches the identity information with which those face images are calibrated.
  • a storage medium storing a program suitable for execution by a processor, the program for:
  • using the training samples to train the universal face recognition model, and updating the parameters of the universal face recognition model based on a training target to obtain a target face recognition model; wherein the training target is that the universal face recognition model's prediction of the identity information of the face images in the training samples approaches the identity information with which those face images are calibrated.
  • a computer program product comprising instructions that, when run on a computer, cause the computer to perform the face recognition method described herein.
  • The present application provides a face recognition method in which a face image is recognized by a target face recognition model, where the target face recognition model is obtained by training the universal face recognition model with face images calibrated with identity information as training samples, taking as the training target that the universal face recognition model's prediction of the identity information of the face images in the training samples approaches the calibrated identity information. Inputting a face image into the target face recognition model can therefore output face features with higher discrimination, thereby improving the accuracy of image management.
  • FIG. 1 is a block diagram of a hardware structure of a server according to an embodiment of the present application.
  • FIG. 2 is a flowchart of a method for generating a target face recognition model according to an embodiment of the present application
  • FIG. 3 is a flowchart of a method for recognizing a face according to an embodiment of the present application
  • FIG. 4 is a flowchart of another method for recognizing a face according to an embodiment of the present application.
  • FIG. 5 is a flowchart of a method for determining a target friend user from among various friend users of a user in response to a target face recognition model sharing request sent by a user according to an embodiment of the present disclosure
  • FIG. 6 is a flowchart of still another method for recognizing a face according to an embodiment of the present application.
  • FIG. 7 is a flowchart of still another method for recognizing a face according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of an image management application scenario according to an embodiment of the present disclosure.
  • FIG. 9(a) is another image management application scenario diagram provided by an embodiment of the present application.
  • FIG. 9(b) is still another image management application scenario diagram provided by an embodiment of the present application.
  • FIG. 9(c) is still another image management application scenario diagram provided by an embodiment of the present application.
  • FIG. 9(d) is still another image management application scenario diagram provided by the embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of a face recognition device according to an embodiment of the present application.
  • A face recognition method provided by an embodiment of the present application can be applied to a server (for example, a face recognition server or another specially configured server).
  • the hardware structure of the server may include: at least one processor 11, at least one communication interface 12, at least one memory 13, and at least one communication bus 14;
  • the number of the processor 11, the communication interface 12, the memory 13, and the communication bus 14 is at least one, and the processor 11, the communication interface 12, and the memory 13 complete communication with each other through the communication bus 14;
  • the processor 11 may be a central processing unit (CPU), a graphics processing unit (GPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
  • the memory 13 may include a high speed RAM memory, and may also include a non-volatile memory or the like, such as at least one disk memory;
  • the memory stores a program
  • the processor can call a program stored in the memory, and the program is used to:
  • using the training samples to train the universal face recognition model, and updating the parameters of the universal face recognition model based on a training target to obtain a target face recognition model; wherein the training target is that the universal face recognition model's prediction of the identity information of the face images in the training samples approaches the identity information with which those face images are calibrated.
  • the method can be applied to various identification scenarios.
  • For example, the method may be applied to a residential community surveillance scenario. Specifically, for a user entering or leaving the community, an image of the user is captured, face detection is performed on the image to obtain a face image, and the target face recognition model then performs face recognition on the face image to obtain face features. Since the target face recognition model is trained with face images calibrated with identity information, it can output distinctive face features, so the user's identity information can be accurately predicted.
  • The method can also be applied to a company's album management system. If a new picture is saved to the album while the album is in use, face detection and face recognition can be performed on the image by the face recognition method of the present application to obtain face features; the user identity information in the image is predicted based on the face features, and the images in the album are then managed according to the identity information.
  • For the refinement and extension of the program's functions, refer to the following description.
  • FIG. 2 is a flowchart of a method for generating a target face recognition model according to an embodiment of the present application.
  • the method includes:
  • S201 Determine a training sample, where the training sample includes a face image that is labeled with identity information;
  • each face image provided by the user and labeled with the identity information may be used as a training sample.
  • If the image management technology is applied to an image management application, an image to be stored in, or already stored in, the image management application may be regarded as an image associated with the image management application.
  • the image associated with the image management application can be viewed as an image in the storage space associated with the user.
  • each target image among the images associated with the image management application (each image that is associated with the image management application and located in the specified image management range is regarded as a target image) can be regarded as an image in the user-specified storage space.
  • an image in a storage space associated with a friend user who has opened image management authority to the user may also be regarded as an image in the user-specified storage space. That is, each friend user of the user is determined, the target friend users who have opened image management authority to the user are determined from among them, and the images in the storage spaces associated with those target friend users may be regarded as images in the user-specified storage space.
  • if a target friend user specifies an image open range when opening image management authority to the user, only the images in the storage space associated with that target friend user and located in the open range are regarded as images in the user-specified storage space.
  • the user may calibrate identity information for the face images in the images of the user-specified and/or user-associated storage space, and each face image calibrated with identity information may be used as a user-provided face image with identity information. For example, when a user stores an image in the image management application, the identity information of any one or more face images in the image can be calibrated.
  • For a multi-person image, the user can separately calibrate the identity information of the face in each face image, and the identity information of the face in a user-calibrated face image is the identity information of that face image. It should be noted that when inaccurate face detection causes an annotation not to correspond to the right face in a multi-person image, the detection result for the multi-person image may be returned after the user uploads it, for the user to confirm again until the calibrated identity information corresponds to the faces in the multi-person image.
  • The user may provide a plurality of face images calibrated with identity information; in one example, 5 to 10 images calibrated with identity information may be provided by the user.
  • For each piece of calibrated identity information, the target face feature corresponding to that identity information may be calculated based on each face image calibrated with it.
  • Specifically, for each piece of calibrated identity information, the following calculation is performed: determine each face image calibrated with the identity information, recognize the face feature of each determined face image with the universal face recognition model, and take the average of the recognized face features as the target face feature corresponding to the identity information.
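As an illustration (not from the patent), the averaging step above can be sketched in Python; `extract_feature` is a hypothetical stand-in for the universal face recognition model, and the 2-dimensional toy vectors stand in for the 512-dimensional features the model would actually produce:

```python
import numpy as np

def target_face_feature(face_images, extract_feature):
    """Average the features of all face images calibrated with one
    identity to obtain that identity's target face feature."""
    features = np.stack([extract_feature(img) for img in face_images])
    return features.mean(axis=0)

# Toy stand-in for the universal face recognition model.
def extract_feature(img):
    return np.asarray(img, dtype=float)

images_of_one_identity = [[1.0, 0.0], [0.0, 1.0]]  # two calibrated face images
target = target_face_feature(images_of_one_identity, extract_feature)
```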
  • Further, for each piece of calibrated identity information, the following process may also be performed: from the image storage space (for example, the user-specified or user-associated storage space), acquire face images whose face features match the target face feature corresponding to the identity information, calibrate the identity information for each acquired face image, and then also use each acquired face image as a training sample. This ensures the diversity of the training samples and improves their quality.
  • In one example, more than a preset number of training samples calibrated with identity information may be acquired; the preset number may optionally be 100 images. The above is only a preferred value provided by the embodiments of the present application; the user may set the preset number according to the requirements of the application, which is not limited herein.
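A minimal sketch of the preset-number filter described above; the function name and toy sample counts are illustrative assumptions, not part of the patent:

```python
# Keep only identities that have more than a preset number of training
# samples (100 in the text's example).
PRESET_NUMBER = 100

def usable_identities(samples_by_identity, preset_number=PRESET_NUMBER):
    """Drop identities whose calibrated sample count is too small."""
    return {identity: samples
            for identity, samples in samples_by_identity.items()
            if len(samples) > preset_number}

counts = {"id_1": ["img"] * 150, "id_2": ["img"] * 40}  # toy data
kept = usable_identities(counts)
```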
  • The training samples calibrated with a piece of identity information therefore include not only the face images provided by the user with that identity information, but also the face images, acquired from the user-specified or user-associated storage space, whose face features match the target face feature corresponding to that identity information.
  • Further, the target face feature corresponding to the identity information may be updated based on each training sample calibrated with the identity information; that is, the average of the face features of the training samples calibrated with the identity information is taken again as the target face feature corresponding to the identity information.
  • If the similarity between a face feature and a target face feature is greater than a preset similarity threshold, the two face features may be considered to match.
  • the preset similarity threshold is 85%.
  • The above is only a preferred way of determining whether two face features match, as provided by the embodiment of the present application; the user can set the matching condition for two face features according to their own needs, which is not limited herein.
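For illustration only, the threshold-matching rule above can be sketched with cosine similarity; the patent does not fix a particular similarity measure, so cosine similarity and all names below are assumptions:

```python
import numpy as np

PRESET_SIMILARITY_THRESHOLD = 0.85  # the 85% example from the text

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def features_match(face_feature, target_feature,
                   threshold=PRESET_SIMILARITY_THRESHOLD):
    """Two features match when similarity exceeds the preset threshold."""
    return cosine_similarity(face_feature, target_feature) > threshold

same = features_match([1.0, 0.0], [0.9, 0.1])       # nearly identical
different = features_match([1.0, 0.0], [0.0, 1.0])  # orthogonal
```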
  • Each face image calibrated with identity information is used as a training sample, and the universal face recognition model is trained with the training target that its prediction of the identity information of the faces in the training samples approaches the calibrated identity information, to obtain the target face recognition model.
  • A training sample is input to the universal face recognition model, which outputs the face feature of the training sample.
  • The universal face recognition model includes a face verification model, which is used for feature extraction from a face image to obtain a face feature; for example, feature extraction from a normalized face image yields a 512-dimensional face feature.
  • The face verification model may be a deep convolutional neural network model including at least an input layer, convolution layers, pooling layers, local convolution layers, a fully connected layer, and an output layer, where the number of convolution layers, pooling layers, and local convolution layers and their connection methods can be set according to actual needs.
  • The pooling layers can be max-pooling layers, and the output layer is implemented with the softmax loss function. Since an image calibrated with identity information can include one face or multiple faces, a center loss function can be added to the output layer as a constraint, thereby improving model robustness.
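A numpy sketch of the output-layer objective described above: a softmax cross-entropy loss plus a center loss term that pulls same-identity features toward their class center. The toy dimensions, the weighting factor `lam`, and all names are illustrative assumptions, not the patent's actual network:

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    """Standard softmax loss on one sample."""
    z = logits - logits.max()                # numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def center_loss(feature, centers, label):
    """Penalize the distance between a sample's feature and the center
    of its class, pulling same-identity features together."""
    diff = feature - centers[label]
    return 0.5 * float(diff @ diff)

def combined_loss(logits, feature, centers, label, lam=0.1):
    """Softmax loss constrained by center loss, as in the output layer."""
    return softmax_cross_entropy(logits, label) + lam * center_loss(
        feature, centers, label)

centers = np.array([[0.0, 0.0], [1.0, 1.0]])   # one center per identity
loss = combined_loss(np.array([2.0, 0.0]), np.array([0.1, 0.1]),
                     centers, label=0)
```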
  • the general face recognition model is trained by each training sample to obtain a target face recognition model.
  • the face feature of the training sample is identified based on the currently trained general face recognition model, and the target person corresponding to the identity information of the training sample being calibrated is calculated.
  • the mapping distance between the face feature and the target face feature includes a loss function calculated based on the face feature and the target face feature. Furthermore, by minimizing the loss function as a training target, the parameters in the currently trained general face recognition model are updated to obtain a target face recognition model.
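The training target described above, minimizing the mapping distance between a sample's recognized face feature and the target face feature of its calibrated identity, can be sketched with plain gradient descent. A linear map `W` stands in for the model's parameters; this is an illustrative toy under that assumption, not the patent's actual network update:

```python
import numpy as np

def distance_loss(W, x, target):
    """Squared mapping distance between the model's feature W @ x and
    the identity's target face feature."""
    diff = W @ x - target
    return float(diff @ diff)

def sgd_step(W, x, target, lr=0.1):
    """One gradient-descent update of the parameters W, moving the
    predicted feature toward the target feature."""
    diff = W @ x - target                  # gradient factor of the loss
    return W - lr * 2.0 * np.outer(diff, x)

W = np.eye(2)                              # stand-in "model parameters"
x = np.array([1.0, 0.0])                   # a training-sample input
target = np.array([0.0, 1.0])              # target face feature

before = distance_loss(W, x, target)
for _ in range(50):
    W = sgd_step(W, x, target)
after = distance_loss(W, x, target)
```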
  • The target face recognition model can be tested with samples other than the training samples to determine the accuracy of face image recognition.
  • The accuracy of face image recognition can be characterized by Mean Average Precision (mAP), which specifically refers to the face image recognition accuracy at maximum recall.
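One common reading of "recognition accuracy at maximum recall" is the precision at the smallest retrieval cutoff that recovers every relevant item; averaged over queries this yields an mAP-style figure. The sketch below follows that reading as an illustrative assumption, not a formula given in the patent:

```python
def precision_at_max_recall(ranked_labels, query_label):
    """Precision at the smallest cutoff that retrieves every relevant
    item (i.e. at maximum recall), for one query."""
    relevant = ranked_labels.count(query_label)
    last_hit = max(i for i, lab in enumerate(ranked_labels)
                   if lab == query_label)
    return relevant / (last_hit + 1)

# Ranked retrieval list for a query of identity "A": hits at ranks 1, 2, 4.
p = precision_at_max_recall(["A", "A", "B", "A", "B"], "A")
```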
  • The determined samples may be divided into two parts according to a preset ratio, one for training and the other for testing, where the preset ratio may be set according to user requirements; in one example, the preset ratio may be 80%:20%.
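The preset-ratio split described above can be sketched as follows; the shuffling and fixed seed are illustrative choices the patent does not prescribe:

```python
import random

def split_samples(samples, train_ratio=0.8, seed=0):
    """Shuffle and split samples into training and testing parts
    according to a preset ratio (80%:20% in the text's example)."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    cut = int(len(samples) * train_ratio)
    return samples[:cut], samples[cut:]

train, test = split_samples(range(10))
```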
  • FIG. 3 is a flowchart of a method for recognizing a face according to an embodiment of the present application. As shown in FIG. 3, the method includes:
  • Face detection may be performed on an image in a storage space designated by the user, or on an image in a storage space associated with the user.
  • the face detection technology for performing face detection on the image to obtain a face image in the image is prior art, and will not be described in detail.
  • The target face recognition model is generated by training the general face recognition model with face images calibrated with identity information as training samples, with the training target that the general face recognition model's prediction of the identity information of the faces in the training samples approaches the calibrated identity information of the face images in the training samples.
  • The target face recognition model may be one pre-trained with the target face recognition model generation method provided by the foregoing embodiment, or one trained with that method after step S301 is performed, which is not limited herein.
  • FIG. 4 is a flowchart of another method for recognizing a face according to an embodiment of the present application.
  • the method includes:
  • S402. Perform face recognition on the face image in the image by using a target face recognition model to obtain a face feature.
  • The target face recognition model is generated by training the general face recognition model with user-provided face images calibrated with identity information as training samples, with the training target that the general face recognition model's prediction of the identity information of the faces in the training samples approaches the calibrated identity information.
  • The execution of steps S401-S402 provided by this embodiment is the same as that of steps S301-S302 provided by the foregoing embodiment; for the execution of steps S401-S402, refer to the description of steps S301-S302 in the foregoing embodiment.
  • The face recognition method shown in FIG. 4 provided by this embodiment further includes the following steps after step S402.
  • S403. Predicting the identity information of the face in the face image according to the face feature of the face image in the image recognized by the target face recognition model.
  • the identity information of the face in the face image may be predicted according to the recognized face feature of the face image.
  • Predicting the identity information of the face in a face image according to the recognized face feature includes: determining the target face feature that matches the recognized face feature of the face image, and using the identity information corresponding to the determined target face feature as the predicted identity information of the face in the face image.
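An illustrative sketch of the prediction step above: find the identity whose target face feature best matches the recognized face feature, returning no match when nothing clears the similarity threshold. Cosine similarity, the reuse of the 85% threshold, and all names are assumptions for illustration:

```python
import numpy as np

def predict_identity(face_feature, target_features, threshold=0.85):
    """Return the identity whose target face feature best matches the
    recognized face feature, or None if nothing clears the threshold."""
    best_id, best_sim = None, threshold
    f = np.asarray(face_feature, float)
    for identity, target in target_features.items():
        t = np.asarray(target, float)
        sim = float(f @ t / (np.linalg.norm(f) * np.linalg.norm(t)))
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id

targets = {"id_1": [1.0, 0.0], "id_2": [0.0, 1.0]}  # toy target features
who = predict_identity([0.95, 0.05], targets)
```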
  • Because the target face recognition model is trained on the training samples with the training target that the universal face recognition model's prediction of the identity information of the faces in the training samples approaches the calibrated identity information of the face images, it can distinguish face features better than the general face recognition model and thus has higher recognition accuracy.
  • Further, the target face feature corresponding to a piece of identity information may be updated: for example, after the identity information of the face in a face image is predicted according to the recognized face feature and the predicted identity information is calibrated for the face image, the target face feature corresponding to that identity information is updated to the average of the face features of the face images calibrated with it.
  • the face recognition method provided by the embodiment of the present application may further include: binding the identity information of the face in the face image predicted in step S403 to the image to which the face image belongs.
  • For example, image 1 includes two face images (face image 1 and face image 2), and image 2 includes one face image (face image 3).
  • Suppose the identity information of the face in face image 1 is predicted as identity information 1, the identity information of the face in face image 2 as identity information 2, and the identity information of the face in face image 3 as identity information 1; then identity information 1 is bound to image 1 and image 2, and identity information 2 is bound to image 1.
  • The predicted identity information of the face in a face image is bound to the image to which the face image belongs, so that when the user searches for images corresponding to a piece of identity information, or for the identity information of each face image in an image, the corresponding search results can be obtained directly from the binding relationship between identity information and images.
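The binding relationship described above can be pictured as an identity-to-images index; this sketch mirrors the image 1 / image 2 example but is otherwise an illustrative assumption:

```python
def bind_identities(images):
    """Build an identity -> images index from per-image predictions,
    so searches by identity resolve directly from the binding."""
    index = {}
    for image, identities in images.items():
        for identity in identities:
            index.setdefault(identity, []).append(image)
    return index

predictions = {
    "image_1": ["identity_1", "identity_2"],  # face images 1 and 2
    "image_2": ["identity_1"],                # face image 3
}
index = bind_identities(predictions)
```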
  • In another embodiment, the identity information of the face in the face image predicted in step S403 may not be bound to the image to which the face image belongs; as long as the identity information of the faces in the image can be recognized based on the target face recognition model, the identity information of the face in each face image included in the image can be determined.
  • For example, if three face images appear in one image, namely face image 1, face image 2, and face image 3, and, based on the target face recognition model, the identity information of the face in face image 1 is recognized as identity information 1, that of the face in face image 2 as identity information 2, and that of the face in face image 3 as identity information 1, then it can be determined that the image contains face images of two persons with different identity information: the face images of the person whose identity information is identity information 1, and the face image of the person whose identity information is identity information 2.
  • Further, after the target face recognition model is generated based on the generation method provided by the foregoing embodiment, a target friend user may be determined from among the user's friend users in response to a target face recognition model sharing request, and the target face recognition model is shared with the target friend user.
  • Specifically, after the target face recognition model is generated, the generated target face recognition model may be shared with target friend users in response to a target face recognition model sharing request sent by the user (where the target friend users are one or more friend users selected from among the user's friend users).
  • The image management application may generate the target face recognition model by using user-provided face images calibrated with identity information.
  • The image management application displays each friend user account (information indicating the friend user may also be displayed alongside the account), so that the user can select one or more of the displayed friend user accounts as target friend users, and the target face recognition model is shared with each target friend user account.
  • the target face recognition model can be used to support a photo album management system of a company, and the target face recognition model can be shared with the target friend user, which can be shared with another company's album management system, or a department. Album management system.
  • FIG. 5 is a flowchart of a method for determining a target friend user from among a user's friend users in response to a target face recognition model sharing request according to an embodiment of the present application.
  • As shown in FIG. 5, the method includes:
  • S501. Determine each friend user account of the user's account, where each friend user account represents a friend user;
  • S502. Determine candidate friend users from among the friend users, where the identity information of a candidate friend user is identity information used to calibrate a training sample, and/or the identity information of a friend user of the candidate friend user is identity information used to calibrate a training sample;
  • Because the identity information of a candidate friend user, and/or of a friend user of the candidate friend user, is identity information used to calibrate a training sample, the target face recognition model, once received by the candidate friend user, can be expected to have a better recognition effect on the candidate friend user and/or the friends of the candidate friend user.
  • The determined candidate friend users may be displayed in the image management application so that the user can perform a selection operation on the displayed candidate friend users; each candidate friend user selected by the user is then used as a target friend user.
  • Optionally, each piece of identity information used to calibrate training samples may be associated with attribute information, and the attribute information is used to verify the recognized face feature.
  • For example, the attribute information may include address information, where the address information indicates an address range.
  • After a face feature is recognized, the target face feature matching the recognized face feature may be determined, and the identity information corresponding to the determined target face feature is determined as the identity information of the face image. The attribute information associated with that identity information is then consulted to obtain the address information of the person having the identity information. If the shooting address of the image to which the face image belongs is acquired and falls within the address range indicated by the address information, the face feature of the face image recognized by the target face recognition model can be considered accurate.
  • The attribute information may further include any one or more of gender, age (or age range), and image shooting time; correspondingly, the recognized face features can be verified based on gender, age, and image shooting time.
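The attribute-based verification described above can be sketched as follows. This is a minimal, hypothetical illustration: the address representation (latitude/longitude points and a rectangular range) and the dictionary layout of the attribute information are assumptions, not part of the original method.

```python
# Hypothetical sketch: after an identity is predicted for a face image,
# accept the recognition only if the image's shooting address falls inside
# the address range associated with that identity.

def in_address_range(shoot_addr, addr_range):
    """addr_range = ((lat_min, lon_min), (lat_max, lon_max))."""
    (lat_min, lon_min), (lat_max, lon_max) = addr_range
    lat, lon = shoot_addr
    return lat_min <= lat <= lat_max and lon_min <= lon <= lon_max

def verify_recognition(predicted_identity, shoot_addr, attribute_info):
    """Verify a recognized face feature via the identity's address range."""
    addr_range = attribute_info[predicted_identity]["address_range"]
    return in_address_range(shoot_addr, addr_range)

# Hypothetical attribute information associated with one calibrated identity.
attrs = {"identity_1": {"address_range": ((22.0, 113.0), (23.0, 114.0))}}
print(verify_recognition("identity_1", (22.5, 113.6), attrs))  # True
print(verify_recognition("identity_1", (31.0, 121.0), attrs))  # False
```

Gender, age, or shooting-time checks would follow the same pattern, each as an extra predicate over the associated attribute information.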
  • FIG. 6 is a flowchart of still another face recognition method according to an embodiment of the present application. As shown in FIG. 6, the method includes:
  • S601. Perform face detection on an image to obtain a face image;
  • S602. Perform face recognition on the face image in the image by means of the target face recognition model to obtain a face feature;
  • where the target face recognition model is generated by training the general face recognition model, using the user-provided face images calibrated with identity information as training samples, with the training target being that the general face recognition model's prediction of the identity information of the faces in the training samples approaches the identity information with which the training samples are calibrated.
  • The execution of steps S601-S602 provided by this embodiment of the present application is the same as that of steps S301-S302 provided by the foregoing embodiment; for the execution of steps S601-S602, refer to the description of steps S301-S302 in the foregoing embodiment.
  • Compared with the method shown in FIG. 3, the face recognition method shown in FIG. 6 provided by this embodiment of the present application further includes, after step S602:
  • S603. Generate a request result of an image clustering request according to the face features, recognized by the target face recognition model, of the images in the storage space specified by or associated with the user; the image clustering request is used to indicate that the images within a specified image range in the storage space specified by or associated with the user are to be clustered.
  • Optionally, the image management application may receive an image clustering request sent by a user, the request indicating that the images within a specified image range in the storage space specified by or associated with the user are to be clustered. After receiving the image clustering request, the image management application may first determine the images located within the specified image range; then recognize the face features of the face images in each determined image based on the target face recognition model; and finally cluster the images according to the face features of the face images to obtain at least one image category. For example, the images to which face images with matching face features belong are grouped into one class.
  • Because the face features are obtained by performing face detection and face recognition on the images, and clustering is then performed based on the similarity of the face features of each picture, the images can be managed according to identity information, which makes it convenient for users to find all images of any given person.
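The clustering of step S603 can be sketched as below. This is a simplified stand-in: real face features would be embeddings produced by the target face recognition model, and the greedy single-pass grouping and the 0.85 similarity threshold are illustrative assumptions (the document suggests 85% as a preferred matching threshold).

```python
# Minimal sketch of image clustering by face-feature similarity: images
# whose face features match (similarity above a threshold) are grouped
# into one image category. Features are assumed unit-length, so cosine
# similarity reduces to a dot product.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cluster_images(image_features, threshold=0.85):
    """image_features: {image_id: unit-norm face feature (list of floats)}."""
    clusters = []  # each cluster: {"rep": representative feature, "images": ids}
    for image_id, feat in image_features.items():
        for cluster in clusters:
            if dot(feat, cluster["rep"]) > threshold:
                cluster["images"].append(image_id)
                break
        else:  # no existing cluster matched: start a new category
            clusters.append({"rep": feat, "images": [image_id]})
    return [c["images"] for c in clusters]

# Hypothetical features: img1 and img2 depict the same person, img3 another.
feats = {"img1": [1.0, 0.0], "img2": [0.99, 0.141], "img3": [0.0, 1.0]}
print(cluster_images(feats))  # [['img1', 'img2'], ['img3']]
```

A production system would likely use a proper clustering algorithm (e.g. agglomerative clustering over pairwise similarities) rather than this greedy pass, but the grouping criterion is the same.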
  • FIG. 7 is a flowchart of still another face recognition method according to an embodiment of the present application. As shown in FIG. 7, the method includes steps S701-S702, in which face detection is performed on an image to obtain a face image and face recognition is performed on the face image by means of the target face recognition model to obtain a face feature; where the target face recognition model is generated by training the general face recognition model, using the user-provided face images calibrated with identity information as training samples, with the training target being that the general face recognition model's prediction of the identity information of the faces in the training samples approaches the identity information with which the training samples are calibrated.
  • The execution of steps S701-S702 provided by this embodiment of the present application is the same as that of steps S301-S302 provided by the foregoing embodiment; for the execution of steps S701-S702, refer to the description of steps S301-S302 in the foregoing embodiment.
  • Compared with the method shown in FIG. 3, the face recognition method shown in FIG. 7 provided by this embodiment of the present application further includes, after step S702:
  • S703. Generate a request result of an image search request according to the face features, recognized by the target face recognition model, of the images in the storage space specified by or associated with the user; the image search request is used to indicate that images related to specified identity information are to be searched for within a specified image range in the storage space specified by or associated with the user.
  • Optionally, an image including a face image having the specified identity information may be regarded as an image related to the specified identity information; that is, if an image includes a face image whose face's identity information is the specified identity information, the image is regarded as related to the specified identity information.
  • Optionally, an image including a face image having target identity information related to the specified identity information may also be regarded as an image related to the specified identity information; that is, if an image includes a face image whose face's identity information is target identity information related to the specified identity information, the image is regarded as related to the specified identity information.
  • The target identity information related to the specified identity information may be identity information that is otherwise associated with the specified identity information, or identity information that has a friend relationship with the specified identity information.
  • The above is merely a preferred manner of defining the target identity information provided by the embodiments of the present application; specifically, the user can set the specific content of the target identity information according to his or her own needs, which is not limited herein.
  • Optionally, the image management application may receive an image search request sent by a user, the request indicating that images related to the specified identity information are to be searched for within a specified image range in the storage space specified by or associated with the user. After receiving the image search request, the image management application may first determine the images located within the specified image range; then recognize the face features of the face images in each determined image based on the target face recognition model; and finally determine, from the face features, the images whose face features match the target face feature corresponding to the specified identity information, and/or the images whose face features match the target face features corresponding to target identity information related to the specified identity information.
  • Further, the image search request may also be used to indicate that images related to specified attribute information are to be searched for within the specified range, for example, images of women, images of children, images taken at a given time, single-person images, and so on.
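The identity-based search of step S703 can be sketched as below. The per-image identity sets are assumed to come from the target face recognition model's predictions; the data layout is an illustrative assumption.

```python
# Sketch of image search by identity: within the specified image range,
# return the images containing a face whose predicted identity is the
# specified identity, or (optionally) a related target identity.

def search_images(image_identities, specified, related=()):
    """image_identities: {image_id: set of identities predicted in it}."""
    wanted = {specified, *related}
    # An image matches if any of its predicted identities is wanted.
    return [img for img, ids in image_identities.items() if ids & wanted]

# Hypothetical predictions for three images in the specified range.
images = {
    "img11": {"identity_1", "identity_2"},
    "img12": {"identity_3"},
    "img13": {"identity_1", "identity_3"},
}
print(search_images(images, "identity_2"))  # ['img11']
print(search_images(images, "identity_1", related=("identity_3",)))
# ['img11', 'img12', 'img13']
```

Attribute-based search (gender, age, shooting time) would filter on the attribute information associated with each predicted identity in the same way.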
  • In an embodiment of the present application, a face recognition method may further be provided in which, after steps S601-S602 are performed, not only step S603 but also step S703 may be performed; for the detailed execution of steps S603 and S703, refer to the foregoing embodiments.
  • The face recognition method provided by the embodiments of the present application can be applied to an image management application.
  • The image management application can include the following functions:
  • The image management application in the embodiments of the present application may determine the images in a storage space specified by or associated with a user. The images in the storage space associated with the user may be regarded as the images related to the image management application (including images to be stored in the image management application or already stored in it). The images in the user-specified storage space may be regarded as: the images related to the image management application that are located within a specified image management range; or images related to a friend user who has opened image management permission to the user (including images that such a friend user stores, or has stored, in the friend user's corresponding image management application).
  • The user may select at least one image from the images in the storage space associated with or specified by the user, and calibrate the identity information of the faces in the face images of the selected images; each face image calibrated with identity information by the user is then used as a training sample.
  • The training samples are input to the general face recognition model, and the general face recognition model is trained, with the training target being that its prediction of the identity information of the faces in the training samples approaches the calibrated identity information of the training samples, to generate the target face recognition model.
  • The image management application may perform image management on the images in the storage space specified by or associated with the user based on the face recognition function of the generated target face recognition model.
  • FIG. 9(a) is another image management application scenario diagram provided by an embodiment of the present application.
  • The to-be-managed image set 911 includes to-be-managed images displaying face images with identity information 1, identity information 2, and identity information 3.
  • After the to-be-managed images in the image set 911 are clustered, the image clustering result of the image clustering request includes three image categories, corresponding respectively to identity information 1, identity information 2, and identity information 3, among which the image category 912 corresponds to identity information 1.
  • FIG. 9(b) is another image management application scenario diagram provided by an embodiment of the present application.
  • The images within the specified image range in the storage space specified by or associated with the user, as indicated by the image search request, can be regarded as the to-be-managed image set 911. If an image displaying a face image with the specified identity information is regarded as an image related to the specified identity information, and the image search request specifies identity information 2, the search is performed in the to-be-managed image set 911, and the obtained image search result 915 includes to-be-managed image 11, to-be-managed image 14 and to-be-managed image 15.
  • Similarly, the images within the specified image range in the storage space specified by or associated with the user, as indicated by the image search request, can be regarded as the to-be-managed image set 911. If the image search request specifies identity information 1 and identity information 3, the search is performed in the to-be-managed image set 911, and the obtained image search result includes image category 916 and image category 917, where image category 916 corresponds to identity information 1 and includes to-be-managed image 11, to-be-managed image 13 and to-be-managed image 15, and image category 917 corresponds to identity information 3 and includes to-be-managed image 12, to-be-managed image 13 and to-be-managed image 15.
  • The image management application may perform face detection on an image using image detection technology to obtain the face images in the image, and input the obtained face images into the target face recognition model to obtain the face features of the face images; then, by determining the target face feature matching each face feature, the identity information corresponding to that target face feature is determined as the identity information of the face in the face image.
  • The identity information of the faces in the face images of an image whose identities are to be determined is the identity information of each face displayed in that image.
  • The present application provides a face recognition method and a server. The method includes: performing face detection on an image in a storage space specified by or associated with a user to obtain a face image; and performing face recognition on the face image based on the target face recognition model to obtain a face feature. The target face recognition model is generated by training the general face recognition model, using user-provided face images calibrated with identity information as training samples, with the training target being that the general face recognition model's prediction of the identity information of the faces in the training samples approaches the calibrated identity information of the training samples. Consequently, the face features that the target face recognition model recognizes from the images in the storage space specified by or associated with the user are more discriminative, which improves the accuracy of image management.
  • The face recognition device described below can be regarded as the program modules required for the server to implement the face recognition method provided by the embodiments of the present application.
  • The description of the face recognition device below and the description of the face recognition method above may be referred to in correspondence with each other.
  • FIG. 10 is a schematic structural diagram of a face recognition device according to an embodiment of the present application.
  • the device includes:
  • The image detection unit 101 is configured to perform face detection on an image to obtain a face image.
  • the face recognition unit 102 is configured to perform face recognition on the face image in the image by using the target face recognition model to obtain a face feature; wherein the target face recognition model is generated by the model generating unit 103;
  • the model generating unit 103 includes:
  • a training sample determining sub-unit 1031 configured to determine a training sample, the training sample including a face image that is labeled with identity information;
  • the model training sub-unit 1032 is configured to train the universal face recognition model by using the training sample, and update a parameter of the universal face recognition model based on the training target to obtain a target face recognition model; wherein the training target is The prediction result of the universal face recognition model for the identity information of the face image in the training sample approaches the identity information of the face image in the training sample being calibrated.
  • the model training subunit 1032 includes:
  • a face feature determination subunit, configured to perform face recognition on the training samples using the general face recognition model to obtain the face features of the training samples;
  • a mapping distance calculation subunit, configured to calculate the mapping distance between the face feature of a training sample and the target face feature corresponding to the calibrated identity information of the training sample;
  • a training subunit, configured to update the parameters in the general face recognition model, with minimizing the mapping distance as the training target, to obtain the target face recognition model.
  • Optionally, the training sample determination subunit 1031 is specifically configured to:
  • use the user-provided face images calibrated with identity information as training samples.
  • Optionally, the device further includes an identity information prediction unit, configured to predict the identity information of the face in a face image based on the face feature of the face image in the image recognized by the target face recognition model.
  • the identity information is associated with attribute information, and the attribute information is used to verify the recognized face feature.
  • a face recognition device provided by an embodiment of the present application further includes a target face recognition model sharing unit, configured to determine a target from each friend user of the user in response to the target face recognition model sharing request. A friend user; sharing the target face recognition model to the target friend user.
  • Optionally, the target face recognition model sharing unit is specifically configured to: determine, in response to the target face recognition model sharing request, each friend user of the user; determine candidate friend users from among the friend users, where the identity information of a candidate friend user is identity information used to calibrate a training sample, and/or the identity information of a friend user of the candidate friend user is identity information used to calibrate a training sample; and determine the candidate friend users selected by the user as target friend users.
  • Optionally, the device further includes an image clustering unit, configured to generate a request result of an image clustering request based on the face features recognized by the target face recognition model; the image clustering request is used to indicate that images within a specified image range are to be clustered.
  • Optionally, the device further includes an image search unit, configured to generate a request result of an image search request based on the face features recognized by the target face recognition model; the image search request is used to indicate that images related to specified identity information are to be searched for within a specified image range.
  • The embodiment of the present application further provides a storage medium, where the storage medium stores a program suitable for execution by a processor, and the program is used to:
  • perform face detection on images in a storage space specified by or associated with a user to obtain face images;
  • perform face recognition on the face images in the images by means of the target face recognition model to obtain face features; where the target face recognition model is generated by the following generation method:
  • determining training samples, the training samples including face images calibrated with identity information;
  • training the general face recognition model with the training samples, and updating the parameters of the general face recognition model based on the training target to obtain the target face recognition model; where the training target is that the general face recognition model's prediction of the identity information of the face images in the training samples approaches the identity information with which the face images in the training samples are calibrated.
  • the embodiment of the present application further provides a computer program product including instructions, which when executed on a computer, causes the computer to perform the face recognition method according to any implementation manner of the embodiment of the present application.
  • For the refined and extended functions of the computer program product, refer to the foregoing description.
  • In summary, the present application provides a face recognition method, device, server, storage medium, and computer program product.
  • A face image is obtained by performing face detection on an image, and face recognition is performed on the face image based on the target face recognition model to obtain a face feature.
  • The target face recognition model takes user-provided face images calibrated with identity information as training samples, and is generated by training the general face recognition model with the training target that the general face recognition model's prediction of the identity information of the faces in the training samples approaches the calibrated identity information. The recognition results of the face features of images by the target face recognition model are thus more discriminative, thereby improving the accuracy of image management.
  • The steps of a method or algorithm described in connection with the embodiments disclosed herein can be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two.
  • The software module can be placed in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.


Abstract

A face recognition method, device, server and storage medium. The method includes: performing face detection on an image to obtain a face image (S401); performing face recognition on the face image by means of a target face recognition model to obtain a face feature; the target face recognition model is generated by training a general face recognition model, using face images calibrated with identity information as training samples, with the training target being that the general face recognition model's prediction result of the identity information of the face images in the training samples approaches the identity information with which the face images in the training samples are calibrated (S402). In the above method, the general face recognition model is retrained on training samples whose identity information is calibrated by the user, so that the generated target face recognition model produces more discriminative recognition results for the face features of specific users, thereby improving the accuracy of image management.

Description

Face recognition method, device, server and storage medium
This application claims priority to Chinese Patent Application No. 201810258504.1, entitled "Face recognition method, device, server and storage medium", filed with the Chinese Patent Office on March 27, 2018, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the technical field of face recognition, and more specifically, to a face recognition method, device, server and storage medium.
Background
The wide application of image management technology in recent years (for example, electronic photo albums) has greatly facilitated users' management of images. At present, during the application of image management technology, images are generally managed based on the face features of face images recognized by a general face recognition model.
However, because the general face recognition model is trained on public face datasets, during the application of image management technology the face features of face images recognized by the general face recognition model often have low discriminability, leading to inaccurate image management results. For example, the similarity between the face features of a person wearing glasses and those of another person wearing the same glasses, as recognized by the general face recognition model, may be very high, even higher than the similarity between the recognized face features of that same person with and without glasses.
Therefore, providing a face recognition method, device, server and storage medium that improves the discriminability of the face features of recognized face images during the application of image management technology, and thus facilitates accurate image management, is a problem to be solved urgently.
Summary
In view of this, this application provides a face recognition method that recognizes face images with a target face recognition model trained on training samples formed from face images calibrated with identity information, so as to improve the discriminability of the face features of the recognized face images and thereby facilitate accurate image management. This application also provides a corresponding device, server, storage medium and computer program product.
To achieve the above purpose, the embodiments of this application provide the following technical solutions:
A face recognition method, applied to a server, includes:
performing face detection on an image to obtain a face image;
performing face recognition on the face image in the image by means of a target face recognition model to obtain a face feature; where the method for generating the target face recognition model includes:
determining training samples, the training samples including face images calibrated with identity information;
training a general face recognition model with the training samples, and updating the parameters of the general face recognition model based on a training target to obtain the target face recognition model; where the training target is that the general face recognition model's prediction result of the identity information of the face images in the training samples approaches the identity information with which the face images in the training samples are calibrated.
A face recognition device includes:
an image detection unit, configured to perform face detection on an image to obtain a face image;
a face recognition unit, configured to perform face recognition on the face image in the image by means of a target face recognition model to obtain a face feature, where the target face recognition model is generated by a model generation unit; the model generation unit includes:
a training sample determination subunit, configured to determine training samples, the training samples including face images calibrated with identity information;
a model training subunit, configured to train a general face recognition model with the training samples and update the parameters of the general face recognition model based on a training target to obtain the target face recognition model; where the training target is that the general face recognition model's prediction result of the identity information of the face images in the training samples approaches the identity information with which the face images in the training samples are calibrated.
A server includes: at least one memory and at least one processor; the memory stores a program, the processor calls the program stored in the memory, and the program is used to:
perform face detection on an image to obtain a face image;
perform face recognition on the face image in the image by means of a target face recognition model to obtain a face feature; and generate the target face recognition model by the following generation method:
determining training samples, the training samples including face images calibrated with identity information;
training a general face recognition model with the training samples, and updating the parameters of the general face recognition model based on a training target to obtain the target face recognition model; where the training target is that the general face recognition model's prediction result of the identity information of the face images in the training samples approaches the identity information with which the face images in the training samples are calibrated.
A storage medium stores a program suitable for execution by a processor, and the program is used to:
perform face detection on images in a storage space specified by or associated with a user to obtain face images;
perform face recognition on the face images in the images by means of a target face recognition model to obtain face features; and generate the target face recognition model by the following generation method:
determining training samples, the training samples including face images calibrated with identity information;
training a general face recognition model with the training samples, and updating the parameters of the general face recognition model based on a training target to obtain the target face recognition model; where the training target is that the general face recognition model's prediction result of the identity information of the face images in the training samples approaches the identity information with which the face images in the training samples are calibrated.
A computer program product including instructions that, when run on a computer, cause the computer to perform the face recognition method described in this application.
This application provides a face recognition method implemented by recognizing face images with a target face recognition model, where the target face recognition model is generated by training a general face recognition model, using face images calibrated with identity information as training samples, with the training target being that the general face recognition model's prediction result of the identity information of the face images in the training samples approaches the identity information with which the face images in the training samples are calibrated. After face detection is performed on an image to obtain a face image, the face image is input to the target face recognition model, which can output highly discriminative face features, thereby improving the accuracy of image management.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are merely embodiments of this application, and a person of ordinary skill in the art may derive other drawings from the provided drawings without creative effort.
FIG. 1 is a block diagram of the hardware structure of a server provided by an embodiment of this application;
FIG. 2 is a flowchart of a method for generating a target face recognition model provided by an embodiment of this application;
FIG. 3 is a flowchart of a face recognition method provided by an embodiment of this application;
FIG. 4 is a flowchart of another face recognition method provided by an embodiment of this application;
FIG. 5 is a flowchart of a method, provided by an embodiment of this application, for determining a target friend user from among a user's friend users in response to a target face recognition model sharing request sent by the user;
FIG. 6 is a flowchart of still another face recognition method provided by an embodiment of this application;
FIG. 7 is a flowchart of still another face recognition method provided by an embodiment of this application;
FIG. 8 is a diagram of an image management application scenario provided by an embodiment of this application;
FIG. 9(a) is a diagram of another image management application scenario provided by an embodiment of this application;
FIG. 9(b) is a diagram of still another image management application scenario provided by an embodiment of this application;
FIG. 9(c) is a diagram of still another image management application scenario provided by an embodiment of this application;
FIG. 9(d) is a diagram of still another image management application scenario provided by an embodiment of this application;
FIG. 10 is a schematic structural diagram of a face recognition device provided by an embodiment of this application.
Detailed Description
The technical solutions in the embodiments of this application will be described clearly and completely below with reference to the accompanying drawings in the embodiments of this application. Obviously, the described embodiments are merely some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort shall fall within the protection scope of this application.
Embodiments
The face recognition method provided by the embodiments of this application can be applied to a server (for example, a face recognition server or another specially configured server).
FIG. 1 is a block diagram of the hardware structure of a server provided by an embodiment of this application. Referring to FIG. 1, the hardware structure of the server may include: at least one processor 11, at least one communication interface 12, at least one memory 13 and at least one communication bus 14;
In this embodiment of this application, the numbers of the processor 11, the communication interface 12, the memory 13 and the communication bus 14 are each at least one, and the processor 11, the communication interface 12 and the memory 13 communicate with one another through the communication bus 14;
The processor 11 may be a central processing unit (CPU), a GPU (Graphics Processing Unit), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of this application;
The memory 13 may include a high-speed RAM memory, and may also include a non-volatile memory, for example, at least one disk memory;
The memory stores a program, the processor can call the program stored in the memory, and the program is used to:
perform face detection on an image to obtain a face image;
perform face recognition on the face image in the image by means of a target face recognition model to obtain a face feature; and generate the target face recognition model by the following generation method:
determining training samples, the training samples including face images calibrated with identity information;
training a general face recognition model with the training samples, and updating the parameters of the general face recognition model based on a training target to obtain the target face recognition model; where the training target is that the general face recognition model's prediction result of the identity information of the face images in the training samples approaches the identity information with which the face images in the training samples are calibrated.
It can be understood that the method can be applied in a variety of scenarios for identity recognition. For example, the method can be applied to a residential community surveillance scenario: for users entering and leaving the community, a user image is captured, face detection is performed on the image to obtain a face image, and face recognition is then performed on the face image in the image by the target face recognition model to obtain a face feature. Because the target face recognition model is obtained by training the general face recognition model with face images calibrated with identity information, it can output highly discriminative face features and can therefore accurately predict the user's identity information. As another example, the method can be applied to a company's photo album management system: during use of the album, when a new picture is saved to the album, face detection and face recognition can be performed on the image by the face recognition method of this application to obtain face features, the identity information of the users in the image is predicted based on the face features, and the images in the album are then managed according to the identity information.
Optionally, for the refined and extended functions of the program, refer to the description below.
To facilitate a detailed description of the face recognition method provided by the embodiments of this application, a method for generating a target face recognition model provided by the embodiments of this application is first introduced in detail here.
FIG. 2 is a flowchart of a method for generating a target face recognition model provided by an embodiment of this application.
As shown in FIG. 2, the method includes:
S201. Determine training samples, the training samples including face images calibrated with identity information;
In this embodiment of this application, preferably, each face image calibrated with identity information provided by a user may be used as a training sample.
Optionally, during the application of image management technology, when the technology is applied in an image management application, the images to be stored in the image management application or already stored in the image management application may be regarded as the images related to the image management application.
Optionally, the images related to the image management application may be regarded as the images in the storage space associated with the user. In this embodiment of this application, preferably, each target image among the images related to the image management application (each image related to the image management application that is located within a specified image management range is regarded as a target image) may be regarded as an image in the storage space specified by the user.
Further, the images in the storage space associated with a friend user who has opened image management permission to the user may also be regarded as images in the storage space specified by the user. That is, each friend user of the user is determined, a target friend user who has opened image management permission to the user is determined from among the determined friend users, and the images in the storage space associated with the target friend user can be regarded as images in the storage space specified by the user.
It should be noted that if the image management permission opened by the target friend user to the user specifies an image open range, the images in the storage space associated with the target friend user that are located within the image open range are regarded as images in the storage space specified by the user.
Optionally, the user may calibrate identity information for the faces in the face images of the images in the storage space specified by and/or associated with the user, and each face image calibrated with identity information can then serve as a user-provided face image calibrated with identity information. For example, when storing an image in the image management application, the user may calibrate the identity information of any one or more face images in the image.
If there are multiple face images in an image, the user may calibrate the identity information of the face in each face image of the image separately; the identity information of the face calibrated by the user is the identity information of that face image. It should be noted that when inaccurate face detection in a multi-person image causes the labels not to correspond to the image, the detection result for the multi-person image may also be returned after the user uploads it, so that the user can confirm it a second time, until the calibrated identity information corresponds one-to-one with the faces in the multi-person picture.
Optionally, for each piece of calibrated identity information, the user may provide multiple face images calibrated with that identity information. In this embodiment of this application, preferably, for one piece of calibrated identity information, the user may provide 5 to 10 images calibrated with that identity information.
In this embodiment of this application, preferably, after the face images calibrated with identity information are determined, the target face feature corresponding to a piece of identity information may be calculated based on the face images calibrated with that same identity information. In a specific implementation, for each piece of calibrated identity information, the target face feature corresponding to that identity information is calculated from the face images calibrated with it.
Optionally, the following calculation is performed for each piece of calibrated identity information: determine the face images calibrated with that identity information, recognize the face feature of each determined face image with the general face recognition model, and determine the average of the recognized face features as the target face feature corresponding to that identity information.
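The averaging step above can be sketched as below. This is a minimal sketch: feature vectors are plain Python lists standing in for the embedding vectors produced by the general face recognition model (which the document elsewhere describes as, for example, 512-dimensional).

```python
# Sketch of computing the target face feature for one calibrated identity:
# the element-wise average of the features of all face images calibrated
# with that identity.

def target_face_feature(features):
    """Average a list of equal-length face feature vectors element-wise."""
    if not features:
        raise ValueError("need at least one face feature")
    dim = len(features[0])
    return [sum(f[i] for f in features) / len(features) for i in range(dim)]

# Three hypothetical features of face images calibrated with one identity.
feats = [[1.0, 0.0, 2.0], [3.0, 0.0, 4.0], [2.0, 0.0, 0.0]]
print(target_face_feature(feats))  # [2.0, 0.0, 2.0]
```

The same function serves later when the target face feature is updated to the average over an enlarged set of calibrated samples.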
In this embodiment of this application, in addition to using the user-provided face images calibrated with identity information as training samples, the following process may be performed for each piece of calibrated identity information: from an image storage space, for example the storage space specified by or associated with the user, obtain the face images whose face features match the target face feature corresponding to that identity information, calibrate each obtained face image with that identity information, and then also use each obtained face image as a training sample. This ensures the diversity of the training samples and improves their quality.
In this embodiment of this application, preferably, in order to improve the accuracy of the training result of the general face recognition model, for each piece of calibrated identity information, more than a preset number of training samples calibrated with that identity information may be obtained to train the general face recognition model. The optional preset number may be 100. The above is merely a preferred preset number provided by this embodiment of this application; specifically, the user can set the specific value of the preset number according to his or her own needs, which is not limited here.
Further, if, among the determined training samples, the training samples calibrated with a piece of identity information include not only the user-provided face images calibrated with that identity information but also face images obtained from the storage space specified by or associated with the user according to the target face feature corresponding to that identity information, the target face feature corresponding to that identity information may be updated based on all the training samples calibrated with it. That is, the average of the face features of all the training samples calibrated with that identity information is taken anew as the target face feature corresponding to that identity information.
In this embodiment of this application, preferably, when the similarity between two face features is greater than a preset similarity threshold, the two face features can be considered to match. Correspondingly, if the similarity between a face feature and a target face feature is greater than the preset similarity threshold, the face feature can be considered to match the target face feature. Optionally, the preset similarity threshold is 85%.
The above is merely a preferred way of determining that two face features match provided by this embodiment of this application; specifically, the user can set the matching condition for two face features according to his or her own needs, which is not limited here.
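The threshold-based matching above can be sketched as below. The document does not fix a similarity measure, so cosine similarity, a common choice for face embeddings, is assumed here, together with the 85% threshold suggested above.

```python
import math

SIMILARITY_THRESHOLD = 0.85  # the preferred 85% threshold from the text

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_identity(face_feature, target_features):
    """Return the identity whose target face feature is most similar to
    face_feature, provided the similarity exceeds the threshold."""
    best_id, best_sim = None, SIMILARITY_THRESHOLD
    for identity, target in target_features.items():
        sim = cosine_similarity(face_feature, target)
        if sim > best_sim:
            best_id, best_sim = identity, sim
    return best_id

# Hypothetical target face features for two calibrated identities.
targets = {"identity_1": [1.0, 0.0], "identity_2": [0.0, 1.0]}
print(match_identity([0.9, 0.1], targets))  # identity_1
print(match_identity([0.7, 0.7], targets))  # None (below threshold for both)
```

The same matching predicate supports both harvesting extra training samples from the storage space and the later identity prediction step.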
S202. Perform face recognition on the training samples with the general face recognition model to obtain the face features of the training samples;
In this embodiment of this application, preferably, the general face recognition model is trained, with the face images calibrated with identity information as training samples and with the training target being that the general face recognition model's prediction result of the identity information of the faces in the training samples approaches the identity information with which the training samples are calibrated, so as to obtain the target face recognition model.
Optionally, a training sample is input to the general face recognition model as input information, and the face feature of the training sample can be output. The general face recognition model includes a face verification model, which is used to extract features from a face image to obtain a face feature; for example, a 512-dimensional face feature is extracted from a normalized face image. In some possible implementations, the face verification model may be a deep convolutional neural network model, which includes at least an input layer, convolutional layers, pooling layers, local convolutional layers, fully connected layers and an output layer, where the numbers and connection patterns of the convolutional, pooling and local convolutional layers can be set according to actual needs, the pooling layers may be max-pooling layers, and the output layer is implemented with a softmax loss function. Because a face image calibrated with identity information may contain one face or multiple faces, a center loss function can additionally be added at the output layer as a constraint, thereby improving the robustness of the model.
S203、计算训练样本的人脸特征与训练样本被标定的身份信息对应的目标人脸特征之间的映射距离;
S204、以最小化映射距离为训练目标,更新通用人脸识别模型中的参数,得到目标人脸识别模型。
可选的,利于各个训练样本对通用人脸识别模型进行训练,得到目标人脸识别模型。在对通用人脸识别模型进行训练的过程中,基于当前被训练的通用人脸识别模型识别训练样本的人脸特征,并计算该人脸特征与该训练样本被标定的身份信息对应的目标人脸特征之间的映射距离,并以最小化该映射距离为 训练目标更新当前被训练的通用人脸识别模型中的参数,以得到目标人脸识别模型。
在本申请实施例中,优选的,人脸特征与目标人脸特征之间的映射距离包括基于该人脸特征和该目标人脸特征所计算得到的损失函数。进而,以最小化该损失函数为训练目标,更新当前被训练的通用人脸识别模型中的参数,以得到目标人脸识别模型。
实际应用时,在训练得到目标人脸识别模型后,还可以利用训练样本以外的样本对目标人脸识别模型进行检测,以确定人脸图像识别的精度。其中,人脸图像识别的精度可以通过平均准确率(Mean Average Precision,mAP)表征,其具体是指在最大召回情况下的人脸图像识别准确率。作为一个示例,可以将确定出的样本按照预设比例分为两部分,一部分用于训练,一部分用于测试,其中,预设比例可以根据用户需求而设置,在一个示例中,预设比例可以是80%:20%。
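上述按预设比例划分训练部分与测试部分的过程可示意如下(80%:20%仅为示例比例,`split_samples` 等命名与随机种子的用法均为本示例的假设):

```python
import random

def split_samples(samples, train_ratio=0.8, seed=0):
    """按预设比例(示意为80%:20%)将样本随机划分为训练部分与测试部分。

    seed 用于使划分结果可复现;比例与命名均为本示例的假设。
    """
    shuffled = samples[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

train_part, test_part = split_samples(list(range(10)))
# 10个样本按80%:20%划分:8个用于训练,2个用于测试
```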
基于上述本申请实施例生成的目标人脸识别模型,可对用户指定或关联的存储空间中的图像进行人脸识别。具体的,图3为本申请实施例提供的一种人脸识别方法流程图。如图3所示,该方法包括:
S301、对图像进行人脸检测,得到人脸图像;
其中,对图像进行人脸检测具体可以是对用户指定的存储空间中的图像进行人脸检测,也可以是对用户关联的存储空间中的图像进行人脸检测。可选的,对图像进行人脸检测以得到图像中的人脸图像的人脸检测技术为现有技术,在此不做详细介绍。
S302、通过目标人脸识别模型对所述图像中的人脸图像进行人脸识别得到人脸特征。
其中,目标人脸识别模型以标定有身份信息的人脸图像为训练样本,以通用人脸识别模型对训练样本中的人脸的身份信息的预测结果趋近于训练样本中人脸图像被标定的身份信息为训练目标,对通用人脸识别模型进行训练生成。
在具体实现时,该目标人脸识别模型可以是基于上述实施例提供的目标人脸识别模型生成方法预先训练好的目标人脸识别模型,也可以是在执行完成步骤S301之后基于上述实施例提供的目标人脸识别模型生成方法训练的目标人脸识别模型,在此不做限定。
图4为本申请实施例提供的另一种人脸识别方法流程图。
如图4所示,该方法包括:
S401、对图像进行人脸检测,得到人脸图像;
S402、通过目标人脸识别模型对所述图像中的人脸图像进行人脸识别,得到人脸特征;
其中,目标人脸识别模型以用户提供的标定有身份信息的人脸图像为训练样本,以通用人脸识别模型对训练样本中的人脸的身份信息的预测结果趋近于训练样本被标定的身份信息为训练目标,对通用人脸识别模型进行训练生成。
可选的,本申请实施例提供的步骤S401-S402的执行过程与上述实施例提供的步骤S301-S302的执行过程相同,有关步骤S401-S402的执行过程请参见上述实施例对步骤S301-S302的执行过程的介绍,在此不做赘述。
进一步的,相比于上述实施例提供的如图3所示的人脸识别方法而言,本申请实施例提供的如图4所示的一种人脸识别方法在步骤S402之后,进一步包括步骤S403、根据目标人脸识别模型识别出的图像中的人脸图像的人脸特征,预测人脸图像中人脸的身份信息。
可选的,在根据目标人脸识别模型识别出图像中的人脸图像的人脸特征后,可根据识别出的该人脸图像的人脸特征预测该人脸图像中的人脸的身份信息。
可选的,根据识别出的人脸图像的人脸特征预测该人脸图像中的人脸的身份信息,包括:确定与识别出的人脸图像的人脸特征匹配的目标人脸特征,将与所确定的目标人脸特征对应的身份信息作为预测到的该人脸图像中人脸的身份信息。
由于目标人脸识别模型是通过训练样本,以通用人脸识别模型对训练样本中的人脸的身份信息的预测结果趋近于训练样本中人脸图像被标定的身份信息为训练目标训练生成的,因此,相较于通用人脸识别模型,其能够对人脸特征进行更好地区分,因而具有较高的识别准确率。
在本申请实施例中,优选的,随着对人脸图像(如,用户指定或关联的存储空间中的图像中的人脸图像)中的人脸的身份信息的预测,还可以进一步对与身份信息对应的目标人脸特征进行更新;如,根据识别出的人脸图像的人脸特征预测出该人脸图像中的人脸的身份信息之后,为该人脸图像标定该预测出的身份信息;进而将该身份信息对应的目标人脸特征更新为该身份信息所标定的各个人脸图像的人脸特征的平均值。
S404、将预测的人脸图像中人脸的身份信息与人脸图像所属的图像绑定。
进一步的,本申请实施例提供的一种人脸识别方法还可以包括:将步骤S403中预测的人脸图像中人脸的身份信息与人脸图像所属的图像绑定。
比如,若待进行人脸识别的图像为图像1和图像2,图像1中包括两个人脸图像(分别为人脸图像1和人脸图像2),图像2中包括一个人脸图像(人脸图像3),基于本申请实施例提供的一种人脸识别方法预测出人脸图像1中的人脸的身份信息为身份信息2、人脸图像2中的人脸的身份信息为身份信息1,人脸图像3中的人脸的身份信息为身份信息1;则,将身份信息1与图像1和图像2绑定,将身份信息2与图像1绑定。
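上述绑定关系可以用"身份信息→图像集合"的映射来示意(数据结构与 `bind_identities` 等命名均为本示例的假设):

```python
from collections import defaultdict

def bind_identities(predictions):
    """将预测出的人脸身份信息与人脸图像所属的图像绑定,得到"身份信息→图像集合"的映射。

    predictions: (人脸图像所属的图像, 预测出的身份信息) 二元组列表(示意性数据结构)。
    """
    bindings = defaultdict(set)
    for image, identity in predictions:
        bindings[identity].add(image)
    return bindings

# 对应上文示例:人脸图像1(属于图像1)为身份信息2,
# 人脸图像2(属于图像1)与人脸图像3(属于图像2)均为身份信息1
b = bind_identities([("图像1", "身份信息2"), ("图像1", "身份信息1"), ("图像2", "身份信息1")])
# b["身份信息1"] == {"图像1", "图像2"};b["身份信息2"] == {"图像1"}
```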
可选的,将预测的人脸图像中人脸的身份信息与人脸图像所属的图像绑定,使得在用户搜索与身份信息对应的图像时/在用户搜索图像中的各个人脸图像的身份信息时,可以直接根据身份信息与图像的绑定关系得到相应的搜索结果。
可选的,本申请实施例中,即使未将步骤S403中预测的人脸图像中人脸的身份信息与人脸图像所属的图像绑定,只要能够基于目标人脸识别模型识别出图像中的人脸图像中的人脸的身份信息,便可实现对图像中所包括的各个人脸图像中的人脸的身份信息进行确定。比如,如果一张图像中显示有3个人脸图像,分别为人脸图像1、人脸图像2和人脸图像3;基于目标人脸识别模型识别出人脸图像1中的人脸的身份信息为身份信息1、人脸图像2中的人脸的身份信息为身份信息2,人脸图像3中的人脸的身份信息为身份信息1,进而可确定此张图像中显示了具有不同身份信息的两个人的人脸图像,分别为身份信息为身份信息1的人的人脸图像,以及身份信息为身份信息2的人的人脸图像。
进一步的,本申请实施例提供的一种人脸识别方法,在基于上述实施例提供的目标人脸识别模型生成方法而生成目标人脸识别模型后,还可以:响应于目标人脸识别模型分享请求,从用户的各个好友用户中,确定目标好友用户;将目标人脸识别模型分享给目标好友用户。
可选的,在通过训练样本(训练样本包括用户提供的标定有身份信息的人脸图像)对通用人脸识别模型进行训练生成目标人脸识别模型后,可以响应于用户发送的目标人脸识别模型分享请求,将该生成的目标人脸识别模型分享给目标好友用户(其中,目标好友用户为从该用户的好友用户中选择出的一个或多个好友用户)。
比如,在图像管理技术应用过程中,若图像管理技术应用在图像管理应用程序时,该图像管理应用程序在通过用户提供的标定有身份信息的人脸图像以生成目标人脸识别模型后,可以接收该用户发送的目标人脸识别模型分享请求,并在接收到该目标人脸识别模型分享请求后,获取该用户的各个好友用户,并通过图像管理应用程序显示该用户的各个好友用户,以便于用户从显示的各个好友用户中选择一个或多个好友用户作为目标好友用户,进而将该目标人脸识别模型分享给每一个目标好友用户。具体的,可以是在接收到用户发送的目标人脸识别模型分享请求后,获取该用户的每个好友用户的好友用户账号(如,获取该用户的用户账号的各个好友用户账号),并通过图像管理应用程序显示每个好友用户账号(还可以在显示好友用户的好友用户账号时,显示用于指示该好友用户的信息),以便于用户从显示的各个好友用户账号中选择一个或多个好友用户账号作为目标好友用户,进而将该目标人脸识别模型分享给每一个目标好友用户账号。
在实际应用时,目标人脸识别模型可以用于支持某公司的一个相册管理系统,将目标人脸识别模型分享给目标好友用户具体可以是分享给另外一个公司的相册管理系统,或者某个部门的相册管理系统。
图5为本申请实施例提供的一种响应于目标人脸识别模型分享请求,从用户的各个好友用户中,确定目标好友用户的方法流程图。
如图5所示,该方法包括:
S501、响应于目标人脸识别模型分享请求,确定用户的各个好友用户;
具体的,可以是响应用户发送的目标人脸识别模型分享请求,确定用户的用户账号的各个好友用户账号,每个好友用户账号表示了一个好友用户。
S502、从各个好友用户中确定待选好友用户;其中,待选好友用户的身份信息为用于标定训练样本的身份信息,和/或,待选好友用户的好友用户的身份信息为用于标定训练样本的身份信息;
可选的,待选好友用户的身份信息为用于标定训练样本的身份信息,和/或,待选好友用户的好友用户的身份信息为用于标定训练样本的身份信息,可以使得待选好友用户在接收到该目标人脸识别模型时,利用该目标人脸识别模型对该待选好友用户和/或该待选好友用户的好友用户有较好的识别效果。
S503、将所述待选好友用户中被选中的用户确定为目标好友用户。
可选的,在从各个好友用户中确定待选好友用户后,可以在图像管理应用程序中显示所确定的各个待选好友用户,以便于用户对所显示的各个待选好友用户进行选择操作,进而将用户选中的每一个待选好友用户作为一个目标好友用户。
进一步的,在本申请实施例提供的一种人脸识别方法中,用于标定训练样本的每一个身份信息都可以关联一个属性信息,该属性信息用于校验识别出的人脸特征。
在本申请实施例中,优选的,属性信息包括地址信息,地址信息指示有地址范围。可选的,在基于目标人脸识别模型识别出用户指定或关联的存储空间中的图像中的人脸图像的人脸特征后,可以确定与该识别出的人脸特征匹配的目标人脸特征,进而将与所确定的目标人脸特征对应的身份信息确定为该人脸图像的身份信息;进一步的,确定与该身份信息关联的属性信息,可以得到具有该身份信息的人的地址信息,进而只要获取该人脸图像所属图像的拍摄地址,在该拍摄地址属于该地址信息指示的地址范围内,便可确定基于目标人脸识别模型识别出的该人脸图像的人脸特征准确。
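上述基于地址信息的校验逻辑可示意如下(此处将地址范围示意为地址集合,实际的地址范围表示方式本申请并未限定,`verify_recognition` 等命名为本示例的假设):

```python
def verify_recognition(identity, shooting_address, address_ranges):
    """基于身份信息关联的地址信息校验识别结果:
    若人脸图像所属图像的拍摄地址落在该身份信息的地址范围内,则认为识别出的人脸特征准确。

    address_ranges: 身份信息到地址范围的映射,此处将地址范围示意为地址集合(本示例的假设)。
    """
    return shooting_address in address_ranges.get(identity, set())

# 示例:身份信息1的地址范围包含"深圳",拍摄地址为"深圳"时校验通过
ok = verify_recognition("身份信息1", "深圳", {"身份信息1": {"深圳", "广州"}})
```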
在本申请实施例其他可能的实现方式中,属性信息还可以包括性别、年龄(或者年龄段)以及图像拍摄时间中任意一种或多种,对应地,还可以基于性别、年龄以及图像拍摄时间等信息对识别出的人脸特征进行校验。
图6为本申请实施例提供的又一种人脸识别方法流程。如图6所示,该方法包括:
S601、对图像进行人脸检测,得到人脸图像;
S602、通过目标人脸识别模型对图像中的人脸图像进行人脸识别,得到人脸特征;
其中,所述目标人脸识别模型以用户提供的标定有身份信息的人脸图像为训练样本,以通用人脸识别模型对训练样本中的人脸的身份信息的预测结果趋近于训练样本被标定的身份信息为训练目标,对通用人脸识别模型进行训练生成。
可选的,本申请实施例提供的步骤S601-S602的执行过程与上述实施例提供的步骤S301-S302的执行过程相同,有关步骤S601-S602的执行过程请参见上述实施例对步骤S301-S302的执行过程的介绍,在此不做赘述。
S603、基于目标人脸识别模型识别出的人脸特征,生成图像聚类请求的请求结果;图像聚类请求用于指示对指定图像范围内的图像进行聚类。
进一步的,相比于上述实施例提供的如图3所示的人脸识别方法而言,本申请实施例提供的如图6所示的一种人脸识别方法在步骤S602之后,进一步包括步骤S603、基于目标人脸识别模型识别出的用户指定或关联的存储空间中的图像的人脸特征,生成图像聚类请求的请求结果;图像聚类请求用于指示对用户指定或关联的存储空间中的指定图像范围内的图像进行聚类。
在本申请实施例中,优选的,在图像管理技术应用过程中,若该图像管理技术应用于图像管理应用程序,该图像管理应用程序可以接收用户发送的图像聚类请求,该图像聚类请求用于指示对用户指定或关联的存储空间中的指定图像范围内的图像进行聚类;图像管理应用程序接收到图像聚类请求后,可以先确定用户指定或关联的存储空间中的位于图像聚类请求的指定图像范围内的图像;并基于目标人脸识别模型识别出所确定的每个图像中的人脸图像的人脸特征;进而根据各个人脸图像的人脸特征对所确定的各个图像进行聚类,得到至少一个图像类别。如,将人脸特征匹配的各个人脸图像所属的图像归为一类。
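上述"将人脸特征匹配的各个人脸图像所属的图像归为一类"的聚类过程,可用如下贪心聚类示意(以各类别首个人脸特征作为类别中心、以余弦相似度大于阈值作为匹配条件,均为本示例的假设,并非本申请限定的聚类算法):

```python
import math

def _cos(a, b):
    """余弦相似度(示意性的人脸特征匹配度量,本申请并未限定具体度量方式)。"""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def cluster_images(image_faces, threshold=0.85):
    """贪心聚类示意:人脸特征与某类别中心匹配则该人脸所属图像归入该类别,否则新建类别。

    image_faces: (图像, 人脸特征) 二元组列表;返回各图像类别(图像集合的列表)。
    以各类别首个人脸特征作为类别中心,为本示例的假设。
    """
    centers, clusters = [], []
    for image, feature in image_faces:
        for i, center in enumerate(centers):
            if _cos(feature, center) > threshold:
                clusters[i].add(image)
                break
        else:
            centers.append(feature)
            clusters.append({image})
    return clusters

cs = cluster_images([("图1", [1.0, 0.0]), ("图2", [0.99, 0.05]), ("图3", [0.0, 1.0])])
# 图1与图2的人脸特征匹配,归为一类;图3单独一类
```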
在对图像进行管理时,通过对图像进行人脸检测以及人脸识别得到人脸特征,然后基于每张图片人脸特征的相似度进行聚类,可以实现按身份信息对图像进行管理,如此,可以方便用户查找任一用户的所有图像。
图7为本申请实施例提供的又一种人脸识别方法流程。如图7所示,该方法包括:
S701、对图像进行人脸检测,得到人脸图像;
S702、通过目标人脸识别模型对图像中的人脸图像进行人脸识别,得到人脸特征;
其中,所述目标人脸识别模型以用户提供的标定有身份信息的人脸图像为训练样本,以通用人脸识别模型对训练样本中的人脸的身份信息的预测结果趋近于训练样本被标定的身份信息为训练目标,对通用人脸识别模型进行训练生成。
可选的,本申请实施例提供的步骤S701-S702的执行过程与上述实施例提供的步骤S301-S302的执行过程相同,有关步骤S701-S702的执行过程请参见上述实施例对步骤S301-S302的执行过程的介绍,在此不做赘述。
S703、基于目标人脸识别模型识别出的人脸特征,生成图像搜索请求的请求结果;图像搜索请求用于指示在指定图像范围内搜索与指定身份信息相关的图像。
进一步的,相比于上述实施例提供的如图3所示的人脸识别方法而言,本申请实施例提供的如图7所示的一种人脸识别方法在步骤S702之后,进一步包括步骤S703、基于目标人脸识别模型识别出的用户指定或关联的存储空间中的图像的人脸特征,生成图像搜索请求的请求结果;图像搜索请求用于指示在用户指定或关联的存储空间中的指定图像范围内搜索与指定身份信息相关的图像。
在本申请实施例中,优选的,可以将包括具有指定身份信息的人脸图像的图像作为与指定身份信息相关的图像,即,若一图像中包括具有指定身份信息的人脸图像,则将该图像看成是与指定身份信息相关的图像;其中,若人脸图像中的人脸的身份信息为指定身份信息时,该人脸图像为具有指定身份信息的人脸图像。
在本申请实施例中,优选的,也可以将包括具有与指定身份信息相关的目标身份信息的人脸图像作为与指定身份信息相关的图像;即,若一图像中包括具有与指定身份信息相关的目标身份信息,则将该图像看成是与指定身份信息相关的图像。其中,若人脸图像中的人脸的身份信息为与指定身份信息相关的目标身份信息时,该人脸图像为具有与指定身份信息相关的目标身份信息的人脸图像。
可选的,与指定身份信息相关的目标身份信息可以是与指定身份信息存在亲属关系的身份信息;与指定身份信息相关的目标身份信息也可以是与指定身份信息存在好友关系的身份信息。
以上仅仅是本申请实施例提供的目标身份信息的优选方式,具体的,发明人可根据自己的需求任意设置目标身份信息的具体内容,在此不做限定。
在本申请实施例中,优选的,在图像管理技术应用过程中,若该图像管理技术应用于图像管理应用程序,该图像管理应用程序可以接收用户发送的图像搜索请求,该图像搜索请求用于指示在用户指定或关联的存储空间中的指定图像范围内搜索与指定身份信息相关的图像;图像管理应用程序接收到图像搜索请求后,可以先确定用户指定或关联的存储空间中的位于图像搜索请求的指定图像范围内的图像;并基于目标人脸识别模型识别出所确定的每个图像中的人脸图像的人脸特征;进而从各个人脸特征中确定与指定身份信息对应的目标人脸特征匹配的人脸特征所属的图像,和/或,从各个人脸特征中确定与指定身份信息相关目标身份信息所对应的目标人脸特征匹配的人脸特征所属的图像。
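上述图像搜索过程可示意如下(以"图像→该图像中各人脸身份信息集合"的映射表示识别结果,该数据结构与 `search_images` 等命名均为本示例的假设):

```python
def search_images(image_identities, specified_identity):
    """图像搜索请求的示意性处理:返回包含具有指定身份信息的人脸图像的各个图像。

    image_identities: 图像到该图像中各人脸身份信息集合的映射(示意性数据结构)。
    """
    return {image for image, identities in image_identities.items()
            if specified_identity in identities}

result = search_images({"图像1": {"身份信息1", "身份信息2"}, "图像2": {"身份信息3"}}, "身份信息1")
# result == {"图像1"}
```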
在实际应用时,若图像关联有属性信息,则所述图像搜索请求还可以用于指示在指定范围内搜索与指定属性信息相关的图像,例如,可以指示搜索女性图像、小孩图像、合照图像、单人图像等等。
在本申请实施例中,还可以提供一种人脸识别方法,该人脸识别方法在执行完成步骤S601-S602之后,不仅可以执行步骤S603还可以执行步骤S703,有关步骤S603和S703的详细执行过程请参见上述实施例的描述,在此不做赘述。
为了便于对本申请实施例提供的一种人脸识别方法的理解,现从该人脸识别方法的应用场景进行详细介绍。
本申请实施例提供的一种人脸识别方法可以应用于图像管理应用程序,如图8所示该图像管理应用程序可以包括以下功能:
如图8所示,本申请实施例中的图像管理应用程序可以确定用户指定或关联的存储空间中的图像;其中,用户关联的存储空间中的图像可以认为是与该图像管理应用程序相关的图像(与该图像管理应用程序相关的图像,包括:待存储于该图像管理应用程序中或已存储于该图像管理应用程序中的图像);用户指定的存储空间中的图像可以认为是:与该图像管理应用程序相关的图像中的每一个位于指定图像管理范围内的图像;或者,对该用户开放图像管理权限的好友用户相关的图像(对该用户开放图像管理权限的好友用户相关的图像,包括:为该用户开放图像管理权限的好友用户待存储至其所对应的图像管理应用程序或已存储至其所对应的图像管理应用程序中的图像)。
可选的,用户可以从该用户关联或指定的存储空间中的图像中,选择至少一张图像,并标定其所选择的图像中的人脸图像中的人脸的身份信息,将用户标定有身份信息的人脸图像作为训练样本。
进一步的,还可以针对每个被标定的身份信息,从用户关联或指定的存储空间中获取更多的具有该身份信息的人脸图像,并将该人脸图像被该身份信息标定,以作为训练样本。此过程请参见上述实施例的详细介绍,在此不做赘述。
在本申请实施例中,优选的,将训练样本输入至通用人脸识别模型,并以通用人脸识别模型对训练样本中的人脸的身份信息的预测结果趋近于训练样本被标定的身份信息为训练目标,对通用人脸识别模型进行训练,以生成目标人脸识别模型。
可选的,图像管理应用程序可以基于所生成的目标人脸识别模型的人脸识别功能,实现对用户指定或关联的存储空间中的图像进行图像管理。
可选的,参见图9(a)为本申请实施例提供的另一种图像管理应用场景图。
如图9(a)所示,若图像聚类请求指示的用户指定或关联的存储空间中的指定图像范围内的图像可以看成一个待管理图像集合911,待管理图像集合911中包括待管理图像11、待管理图像12、待管理图像13、待管理图像14和待管理图像15;若基于目标人脸识别模型识别出的待管理图像11中显示有身份信息为身份信息1的人脸图像和身份信息为身份信息2的人脸图像;待管理图像12中显示有身份信息为身份信息3的人脸图像;待管理图像13中显示有身份信息为身份信息1的人脸图像和身份信息为身份信息3的人脸图像;待管理图像14中显示有身份信息为身份信息2的人脸图像;待管理图像15中显示有身份信息为身份信息1的人脸图像、身份信息为身份信息2的人脸图像和身份信息为身份信息3的人脸图像。
对待管理图像集合911中的各个待管理图像进行聚类处理,得到图像聚类请求的图像聚类结果包括3个图像类别,分别为与身份信息1对应的图像类别912、与身份信息2对应的图像类别913和与身份信息3对应的图像类别914;其中,图像类别912包括待管理图像11、待管理图像13和待管理图像15;图像类别913包括待管理图像11、待管理图像14和待管理图像15;图像类别914包括待管理图像12、待管理图像13和待管理图像15。
可选的,参见图9(b)为本申请实施例提供的又一种图像管理应用场景图。
如图9(b)所示,若图像搜索请求指示的用户指定或关联的存储空间中的指定图像范围内的图像可以看成上述待管理图像集合911;如果显示有指定身份信息的人脸图像的图像为与指定身份信息相关的图像,且图像搜索请求指定身份信息为身份信息2时,则在待管理图像集合911中进行搜索处理,得到的图像搜索结果915包括待管理图像11、待管理图像14和待管理图像15。
进一步的,如图9(c)所示,若图像搜索请求指示的用户指定或关联的存储空间中的指定图像范围内的图像可以看成上述待管理图像集合911;如果显示有指定身份信息的人脸图像的图像为与指定身份信息相关的图像,且图像搜索请求指定身份信息为身份信息1和身份信息3时,则在待管理图像集合911中进行搜索处理,得到的图像搜索结果包括图像类别916和图像类别917,其中,图像类别916与身份信息1对应,图像类别916中包括待管理图像11、待管理图像13和待管理图像15;图像类别917与身份信息3对应,图像类别917中包括待管理图像12、待管理图像13和待管理图像15。
进一步的,如图9(d)所示,若存在一张待确定身份信息的图像,图像管理应用程序可以利用图像检测技术对该图像进行人脸检测,得到该图像中的人脸图像,并将所得到的人脸图像输入至目标人脸识别模型,以得到该人脸图像的人脸特征;进而通过确定与该人脸特征匹配的目标人脸特征的方式,将所确定的目标人脸特征对应的身份信息确定为该人脸图像中人脸的身份信息。其中,待确定身份信息的图像中各个人脸图像中人脸的身份信息便是该待确定身份信息的图像中所显示的各个人脸的身份信息。
以上是为了对本申请实施例提供的一种人脸识别方法进行说明,而提供的优选场景实施例,有关本申请实施例提供的一种人脸识别方法的具体应用场景,发明人可根据自己的需求任意设置,在此不做限定。
本申请提供一种人脸识别方法及服务器,该方法包括对用户指定或关联的存储空间中的图像进行人脸检测,得到人脸图像;基于目标人脸识别模型对该人脸图像进行人脸识别,得到人脸特征;其中,目标人脸识别模型以用户提供的标定有身份信息的人脸图像为训练样本,以通用人脸识别模型对训练样本中的人脸的身份信息的预测结果趋近于训练样本被标定的身份信息为训练目标,对通用人脸识别模型进行训练生成的,因此,目标人脸识别模型对该用户指定或关联的存储空间中的图像的人脸特征的识别结果更具区分性,进而提高了对图像管理的准确性。
下面对本申请实施例提供的人脸识别装置进行介绍,下文描述的人脸识别装置可认为是,服务器为实现本申请实施例提供的人脸识别方法,所需设置的程序模块。下文描述的人脸识别装置内容,可与上文描述的人脸识别方法内容相互对应参照。
图10为本申请实施例提供的一种人脸识别装置的结构示意图。
如图10所示,该装置包括:
图像检测单元101,用于对图像进行人脸检测,得到人脸图像;
人脸识别单元102,用于通过目标人脸识别模型对图像中的人脸图像进行人脸识别,得到人脸特征;其中,所述目标人脸识别模型是通过模型生成单元103生成的;所述模型生成单元103,包括:
训练样本确定子单元1031,用于确定训练样本,所述训练样本包括标定有身份信息的人脸图像;
模型训练子单元1032,用于利用所述训练样本对通用人脸识别模型进行训练,基于训练目标更新所述通用人脸识别模型的参数,得到目标人脸识别模型;其中,所述训练目标是所述通用人脸识别模型对所述训练样本中的人脸图像的身份信息的预测结果趋近于所述训练样本中人脸图像被标定的身份信息。
进一步的,在本申请实施例提供的一种人脸识别装置中,所述模型训练子单元1032,包括:
人脸特征确定子单元,用于利用通用人脸识别模型对所述训练样本进行人脸识别,得到所述训练样本的人脸特征;
映射距离计算子单元,用于计算所述训练样本的人脸特征与所述训练样本被标定的身份信息对应的目标人脸特征之间的映射距离;
训练子单元,用于以最小化所述映射距离为训练目标,更新所述通用人脸识别模型中的参数,得到目标人脸识别模型。
在本申请实施例中,优选的,所述训练样本确定子单元1031具体用于:
确定标定有身份信息的人脸图像;
基于标定有同一所述身份信息的各个所述人脸图像,计算所述身份信息对应的目标人脸特征;
从图像存储空间中,获取人脸特征与所述目标人脸特征匹配的人脸图像,并为所获取的人脸图像标定与所述目标人脸特征对应的身份信息,将每个标定有身份信息的人脸图像作为一个训练样本。
进一步的,在本申请实施例提供的一种人脸识别装置中,还包括:身份信息预测单元,用于根据目标人脸识别模型识别出的图像中的人脸图像的人脸特征,预测人脸图像中人脸的身份信息。
在本申请实施例中,优选的,身份信息关联有属性信息,属性信息用于校验识别出的人脸特征。
进一步的,在本申请实施例提供的一种人脸识别装置中,还包括目标人脸识别模型分享单元,用于响应于目标人脸识别模型分享请求,从用户的各个好友用户中,确定目标好友用户;将目标人脸识别模型分享给目标好友用户。
在本申请实施例中,优选的,目标人脸识别模型分享单元具体用于:响应于目标人脸识别模型分享请求,确定用户的各个好友用户;从各个好友用户中确定待选好友用户;其中,待选好友用户的身份信息为用于标定训练样本的身份信息,和/或,待选好友用户的好友用户的身份信息为用于标定训练样本的身份信息;将待选好友用户中被选中的用户确定为目标好友用户。
进一步的,在本申请实施例提供的一种人脸识别装置中,还包括图像聚类单元,用于基于目标人脸识别模型识别出的人脸特征,生成图像聚类请求的请求结果;图像聚类请求用于指示对指定图像范围内的图像进行聚类。
进一步的,在本申请实施例提供的一种人脸识别装置中,还包括图像搜索单元,用于基于目标人脸识别模型识别出的人脸特征,生成图像搜索请求的请求结果;图像搜索请求用于指示在指定图像范围内搜索与指定身份信息相关的图像。
本申请实施例还提供一种存储介质,该存储介质可存储有适于处理器执行的程序,程序用于:
对图像进行人脸检测,得到人脸图像;
通过目标人脸识别模型对图像中的人脸图像进行人脸识别,得到人脸特征;以及,通过以下生成方法生成所述目标人脸识别模型:
确定训练样本,所述训练样本包括标定有身份信息的人脸图像;
利用所述训练样本对通用人脸识别模型进行训练,基于训练目标更新所述通用人脸识别模型的参数,得到目标人脸识别模型;其中,所述训练目标是所述通用人脸识别模型对所述训练样本中的人脸图像的身份信息的预测结果趋近于所述训练样本中人脸图像被标定的身份信息。
相应地,本申请实施例还提供一种包括指令的计算机程序产品,当其在计算机上运行时,使得所述计算机执行本申请实施例任意一种实现方式所述的人脸识别方法。
可选的,所述计算机程序产品的细化功能和扩展功能可参照上文描述。
本申请提供一种人脸识别装置、存储介质及计算机程序产品,通过对图像进行人脸检测,得到人脸图像;基于目标人脸识别模型对该人脸图像进行人脸识别,得到人脸特征;其中,目标人脸识别模型是以用户提供的标定有身份信息的人脸图像为训练样本,以通用人脸识别模型对训练样本中的人脸的身份信息的预测结果趋近于训练样本被标定的身份信息为训练目标,对通用人脸识别模型进行训练生成的,通过目标人脸识别模型对图像的人脸特征的识别结果更具区分性,进而提高了对图像管理的准确性。
本说明书中各个实施例采用递进的方式描述,每个实施例重点说明的都是与其他实施例的不同之处,各个实施例之间相同相似部分互相参见即可。对于实施例公开的装置而言,由于其与实施例公开的方法相对应,所以描述的比较简单,相关之处参见方法部分说明即可。
专业人员还可以进一步意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、计算机软件或者二者的结合来实现,为了清楚地说明硬件和软件的可互换性,在上述说明中已经按照功能一般性地描述了各示例的组成及步骤。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
结合本文中所公开的实施例描述的方法或算法的步骤可以直接用硬件、处理器执行的软件模块,或者二者的结合来实施。软件模块可以置于随机存储器(RAM)、内存、只读存储器(ROM)、电可编程ROM、电可擦除可编程ROM、寄存器、硬盘、可移动磁盘、CD-ROM、或技术领域内所公知的任意其它形式的存储介质中。
对所公开的实施例的上述说明,使本领域专业技术人员能够实现或使用本申请。对这些实施例的多种修改对本领域的专业技术人员来说将是显而易见的,本文中所定义的一般原理可以在不脱离本申请的核心思想或范围的情况下,在其它实施例中实现。因此,本申请将不会被限制于本文所示的这些实施例,而是要符合与本文所公开的原理和新颖特点相一致的最宽的范围。

Claims (16)

  1. 一种人脸识别方法,由服务器执行,所述人脸识别方法包括:
    生成目标人脸识别模型;
    对图像进行人脸检测得到人脸图像;
    通过所述目标人脸识别模型对所述图像中的人脸图像进行人脸识别得到人脸特征;
    其中,所述目标人脸识别模型的生成包括:
    确定训练样本,所述训练样本包括标定有身份信息的人脸图像;
    利用所述训练样本对通用人脸识别模型进行训练,基于训练目标更新所述通用人脸识别模型的参数,得到目标人脸识别模型;其中,所述训练目标是所述通用人脸识别模型对所述训练样本中的人脸图像的身份信息的预测结果趋近于所述训练样本中人脸图像被标定的身份信息。
  2. 根据权利要求1所述的方法,所述利用所述训练样本对通用人脸识别模型进行训练,基于训练目标更新所述通用人脸识别模型的参数,得到目标人脸识别模型,包括:
    利用通用人脸识别模型对所述训练样本进行人脸识别,得到所述训练样本的人脸特征;
    计算所述训练样本的人脸特征与所述训练样本被标定的身份信息对应的目标人脸特征之间的映射距离;
    以最小化所述映射距离为训练目标,更新所述通用人脸识别模型中的参数,得到目标人脸识别模型。
  3. 根据权利要求1所述的方法,所述确定训练样本包括:
    确定标定有身份信息的人脸图像;
    基于标定有同一所述身份信息的各个所述人脸图像,计算所述身份信息对应的目标人脸特征;
    从图像存储空间中,获取人脸特征与所述目标人脸特征匹配的人脸图像,并为所获取的人脸图像标定与所述目标人脸特征对应的身份信息,将每个标定有身份信息的人脸图像作为一个训练样本。
  4. 根据权利要求1至3任意一项所述的方法,还包括:
    根据所述目标人脸识别模型识别出的所述图像中的人脸图像的人脸特征,预测所述人脸图像中人脸的身份信息。
  5. 根据权利要求1至3任意一项所述的方法,所述身份信息关联有属性信息,所述属性信息用于校验识别出的人脸特征。
  6. 根据权利要求1至3任意一项所述的方法,还包括:
    响应于目标人脸识别模型分享请求,从用户的各个好友用户中,确定目标好友用户;
    将所述目标人脸识别模型分享给所述目标好友用户。
  7. 根据权利要求6所述的方法,所述响应于目标人脸识别模型分享请求,从用户的各个好友用户中,确定目标好友用户,包括:
    响应于目标人脸识别模型分享请求,确定所述用户的各个好友用户;
    从各个所述好友用户中确定待选好友用户;其中,所述待选好友用户的身份信息为用于标定所述训练样本的身份信息,和/或,所述待选好友用户的好友用户的身份信息为用于标定所述训练样本的身份信息;
    将所述待选好友用户中被选中的用户确定为目标好友用户。
  8. 根据权利要求1至3任意一项所述的方法,还包括:
    基于所述目标人脸识别模型识别出的人脸特征,生成图像聚类请求的请求结果;所述图像聚类请求用于指示对指定图像进行聚类。
  9. 根据权利要求8所述的方法,还包括:
    基于所述目标人脸识别模型识别出的人脸特征,生成图像搜索请求的请求结果;所述图像搜索请求用于指示在指定图像范围内搜索与指定身份信息相关的图像。
  10. 一种人脸识别装置,包括:
    模型生成单元,用于生成目标人脸识别模型;
    图像检测单元,用于对图像进行人脸检测得到人脸图像;
    人脸识别单元,用于通过所述目标人脸识别模型对所述图像中的人脸图像进行人脸识别得到人脸特征;其中,所述模型生成单元,包括:
    训练样本确定子单元,用于确定训练样本,所述训练样本包括标定有身份信息的人脸图像;
    模型训练子单元,用于利用所述训练样本对通用人脸识别模型进行训练,基于训练目标更新所述通用人脸识别模型的参数,得到目标人脸识别模型;其中,所述训练目标是所述通用人脸识别模型对所述训练样本中的人脸图像的身份信息的预测结果趋近于所述训练样本中人脸图像被标定的身份信息。
  11. 根据权利要求10所述的装置,所述模型训练子单元,包括:
    人脸特征确定子单元,用于利用通用人脸识别模型对所述训练样本进行人脸识别,得到所述训练样本的人脸特征;
    映射距离计算子单元,用于计算所述训练样本的人脸特征与所述训练样本被标定的身份信息对应的目标人脸特征之间的映射距离;
    训练子单元,用于以最小化所述映射距离为训练目标,更新所述通用人脸识别模型中的参数,得到目标人脸识别模型。
  12. 根据权利要求11所述的装置,所述训练样本确定子单元具体用于:
    确定标定有身份信息的人脸图像;
    基于标定有同一所述身份信息的各个所述人脸图像,计算所述身份信息对应的目标人脸特征;
    从图像存储空间中,获取人脸特征与所述目标人脸特征匹配的人脸图像,并为所获取的人脸图像标定与所述目标人脸特征对应的身份信息,将每个标定有身份信息的人脸图像作为一个训练样本。
  13. 根据权利要求10至12任意一项所述的装置,还包括:身份信息预测单元,用于根据所述目标人脸识别模型识别出的所述图像中的人脸图像的人脸特征,预测所述人脸图像中人脸的身份信息。
  14. 一种服务器,包括:至少一个存储器和至少一个处理器;所述存储器存储有程序,所述处理器调用所述存储器存储的程序,所述程序用于:
    生成目标人脸识别模型;
    对图像进行人脸检测,得到人脸图像;
    通过所述目标人脸识别模型对所述图像中的人脸图像进行人脸识别,得到人脸特征;以及,所述目标人脸识别模型的生成包括:
    确定训练样本,所述训练样本包括标定有身份信息的人脸图像;
    利用所述训练样本对通用人脸识别模型进行训练,基于训练目标更新所述通用人脸识别模型的参数,得到目标人脸识别模型;其中,所述训练目标是所述通用人脸识别模型对所述训练样本中的人脸图像的身份信息的预测结果趋近于所述训练样本中人脸图像被标定的身份信息。
  15. 一种存储介质,所述存储介质存储有适于处理器执行的程序,所述程序用于:
    生成目标人脸识别模型;
    对图像进行人脸检测,得到人脸图像;
    通过目标人脸识别模型对所述图像中的人脸图像进行人脸识别,得到人脸特征;以及,所述目标人脸识别模型的生成包括:
    确定训练样本,所述训练样本包括标定有身份信息的人脸图像;
    利用所述训练样本对通用人脸识别模型进行训练,基于训练目标更新所述通用人脸识别模型的参数,得到目标人脸识别模型;其中,所述训练目标是所述通用人脸识别模型对所述训练样本中的人脸图像的身份信息的预测结果趋近于所述训练样本中人脸图像被标定的身份信息。
  16. 一种包括指令的计算机程序产品,当其在计算机上运行时,使得所述计算机执行权利要求1至9任一项所述的人脸识别方法。
PCT/CN2019/075538 2018-03-27 2019-02-20 一种人脸识别方法、装置、服务器及存储介质 WO2019184627A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/900,461 US11367311B2 (en) 2018-03-27 2020-06-12 Face recognition method and apparatus, server, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810258504.1A CN110309691B (zh) 2018-03-27 2018-03-27 一种人脸识别方法、装置、服务器及存储介质
CN201810258504.1 2018-03-27

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/900,461 Continuation US11367311B2 (en) 2018-03-27 2020-06-12 Face recognition method and apparatus, server, and storage medium

Publications (1)

Publication Number Publication Date
WO2019184627A1 true WO2019184627A1 (zh) 2019-10-03

Family

ID=68062469

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/075538 WO2019184627A1 (zh) 2018-03-27 2019-02-20 一种人脸识别方法、装置、服务器及存储介质

Country Status (3)

Country Link
US (1) US11367311B2 (zh)
CN (1) CN110309691B (zh)
WO (1) WO2019184627A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108920639B (zh) * 2018-07-02 2022-01-18 北京百度网讯科技有限公司 基于语音交互的上下文获取方法及设备
WO2021030178A1 (en) * 2019-08-09 2021-02-18 Clearview Ai, Inc. Methods for providing information about a person based on facial recognition

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7903883B2 (en) * 2007-03-30 2011-03-08 Microsoft Corporation Local bi-gram model for object recognition
CN102945117A (zh) * 2012-10-19 2013-02-27 广东欧珀移动通信有限公司 根据人脸识别自动操作照片的方法、装置及终端
CN105426857A (zh) * 2015-11-25 2016-03-23 小米科技有限责任公司 人脸识别模型训练方法和装置
CN105631403A (zh) * 2015-12-17 2016-06-01 小米科技有限责任公司 人脸识别方法及装置
CN106407369A (zh) * 2016-09-09 2017-02-15 华南理工大学 一种基于深度学习人脸识别的照片管理方法和系统

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010001310A1 (en) * 2008-07-02 2010-01-07 C-True Ltd. Face recognition system and method
CN101976356A (zh) * 2010-09-30 2011-02-16 惠州市华阳多媒体电子有限公司 网吧实名制人脸识别方法及识别系统
JP5713821B2 (ja) * 2011-06-30 2015-05-07 キヤノン株式会社 画像処理装置及び方法、並びに画像処理装置を有するカメラ
CN103530652B (zh) * 2013-10-23 2016-09-14 北京中视广信科技有限公司 一种基于人脸聚类的视频编目方法、检索方法及其系统
CN104680119B (zh) * 2013-11-29 2017-11-28 华为技术有限公司 图像身份识别方法和相关装置及身份识别系统
CN104573652B (zh) * 2015-01-04 2017-12-22 华为技术有限公司 确定人脸图像中人脸的身份标识的方法、装置和终端
CN106326815B (zh) * 2015-06-30 2019-09-13 芋头科技(杭州)有限公司 一种人脸图像识别方法
CN105095873B (zh) * 2015-07-31 2018-12-18 小米科技有限责任公司 照片共享方法、装置
CN105468948B (zh) * 2015-12-09 2019-01-25 广州广电运通金融电子股份有限公司 一种通过社交关系进行身份验证的方法
US20170206403A1 (en) * 2016-01-19 2017-07-20 Jason RAMBACH Method of distributed face recognition and system thereof
CN105868309B (zh) * 2016-03-24 2019-05-24 广东微模式软件股份有限公司 一种基于人脸图像聚类和识别技术的图像快速查找和自助打印方法
CN105956022B (zh) * 2016-04-22 2021-04-16 腾讯科技(深圳)有限公司 电子镜图像处理方法和装置、图像处理方法和装置
WO2018023753A1 (zh) * 2016-08-05 2018-02-08 胡明祥 面部识别技术匹配电脑解锁时的数据采集方法和识别系统
CN107341434A (zh) * 2016-08-19 2017-11-10 北京市商汤科技开发有限公司 视频图像的处理方法、装置和终端设备
CN106384237A (zh) * 2016-08-31 2017-02-08 北京志光伯元科技有限公司 基于人脸识别的会员认证和管理方法、装置及系统
CN106453864A (zh) * 2016-09-26 2017-02-22 广东欧珀移动通信有限公司 一种图像处理方法、装置和终端
CN107247941A (zh) * 2017-06-22 2017-10-13 易容智能科技(苏州)有限公司 一种高硬件弹性的精准人脸采样和识别方法
EP3647992A4 (en) * 2017-06-30 2020-07-08 Guangdong Oppo Mobile Telecommunications Corp., Ltd. FACE IMAGE PROCESSING METHOD AND APPARATUS, INFORMATION MEDIUM, AND ELECTRONIC DEVICE
CN107545243A (zh) * 2017-08-07 2018-01-05 南京信息工程大学 基于深度卷积模型的黄种人脸识别方法
CN107679451A (zh) * 2017-08-25 2018-02-09 百度在线网络技术(北京)有限公司 建立人脸识别模型的方法、装置、设备和计算机存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7903883B2 (en) * 2007-03-30 2011-03-08 Microsoft Corporation Local bi-gram model for object recognition
CN102945117A (zh) * 2012-10-19 2013-02-27 广东欧珀移动通信有限公司 根据人脸识别自动操作照片的方法、装置及终端
CN105426857A (zh) * 2015-11-25 2016-03-23 小米科技有限责任公司 人脸识别模型训练方法和装置
CN105631403A (zh) * 2015-12-17 2016-06-01 小米科技有限责任公司 人脸识别方法及装置
CN106407369A (zh) * 2016-09-09 2017-02-15 华南理工大学 一种基于深度学习人脸识别的照片管理方法和系统

Also Published As

Publication number Publication date
US11367311B2 (en) 2022-06-21
CN110309691B (zh) 2022-12-27
US20200311390A1 (en) 2020-10-01
CN110309691A (zh) 2019-10-08

Similar Documents

Publication Publication Date Title
WO2020215571A1 (zh) 一种识别敏感数据的方法、装置、存储介质及计算机设备
TWI677852B (zh) 一種圖像特徵獲取方法及裝置、電子設備、電腦可讀存儲介質
CN111062871B (zh) 一种图像处理方法、装置、计算机设备及可读存储介质
WO2019119505A1 (zh) 人脸识别的方法和装置、计算机装置及存储介质
EP3028184B1 (en) Method and system for searching images
JP5123288B2 (ja) 画像コレクション間の接続の形成
US11003896B2 (en) Entity recognition from an image
JP4897042B2 (ja) 複数の画像の集合における固有のオブジェクトの識別
WO2016177259A1 (zh) 一种相似图像识别方法及设备
CN109800325A (zh) 视频推荐方法、装置和计算机可读存储介质
CN102591868B (zh) 用于拍照指南自动生成的***和方法
CN110647904B (zh) 一种基于无标记数据迁移的跨模态检索方法及***
CN111368772B (zh) 身份识别方法、装置、设备及存储介质
JP6969663B2 (ja) ユーザの撮影装置を識別する装置及び方法
WO2021036309A1 (zh) 图像识别方法、装置、计算机装置及存储介质
WO2022057309A1 (zh) 肺部特征识别方法、装置、计算机设备及存储介质
CN107590420A (zh) 视频分析中的场景关键帧提取方法及装置
WO2015196964A1 (zh) 搜索匹配图片的方法、图片搜索方法及装置
WO2019184627A1 (zh) 一种人脸识别方法、装置、服务器及存储介质
CN110175515B (zh) 一种基于大数据的人脸识别算法
TW202004520A (zh) 基於多分類器的推薦方法、裝置及電子設備
WO2022193973A1 (zh) 图像处理方法、装置、电子设备、计算机可读存储介质及计算机程序产品
WO2021120589A1 (zh) 用于3d图像的异常图像筛查方法、装置、设备及存储介质
JP2021018459A (ja) 評価支援方法、評価支援システム、プログラム
JP2010250637A (ja) 画像サーバー、画像検索システム、画像検索方法および画像管理方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19777850

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19777850

Country of ref document: EP

Kind code of ref document: A1