WO2019062347A1 - Face recognition method and related products - Google Patents

Face recognition method and related products

Info

Publication number
WO2019062347A1
Authority
WO
WIPO (PCT)
Prior art keywords
face recognition
scenario
scene
mobile terminal
related information
Prior art date
Application number
PCT/CN2018/100102
Other languages
English (en)
French (fr)
Inventor
王健
Original Assignee
Oppo广东移动通信有限公司 (Guangdong OPPO Mobile Telecommunications Corp., Ltd.)
Priority date
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 (Guangdong OPPO Mobile Telecommunications Corp., Ltd.)
Publication of WO2019062347A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition

Definitions

  • the present invention relates to the field of mobile terminal technologies, and in particular, to a face recognition method and related products.
  • biometric identification is often used, such as fingerprint recognition, face recognition, iris recognition, vein recognition, and palmprint recognition.
  • the face recognition process may occur in various scenarios, which may result in a matching failure even for a registered user due to external factors in the matching process, thereby reducing the efficiency and accuracy of face recognition.
  • the embodiment of the invention provides a face recognition method and related products, which can improve the security, reliability and accuracy of face recognition.
  • an embodiment of the present invention provides a mobile terminal, including a processor, a face image collection device and a memory connected to the processor, where
  • the face image collecting device is configured to collect multiple face images of the current user
  • the memory is configured to store a preset face image template
  • the processor is configured to: determine a first scenario of the mobile terminal; generate a face recognition policy corresponding to the first scenario when the first scenario is not included in the preset mapping relationship set; establish a mapping relationship between the first scenario and the generated face recognition policy, and add the mapping relationship to the mapping relationship set; and identify, according to the generated face recognition policy corresponding to the first scenario, that the current user is a legitimate user.
  • an embodiment of the present invention provides a face recognition method, including: determining a first scenario of the mobile terminal; generating a face recognition policy corresponding to the first scenario when the first scenario is not included in a preset mapping relationship set; establishing a mapping relationship between the first scenario and the generated face recognition policy, and adding the mapping relationship to the mapping relationship set; and identifying, according to the generated face recognition policy corresponding to the first scenario, that the current user is a legitimate user.
  • an embodiment of the present invention provides a mobile terminal, including a processing unit,
  • the processing unit is configured to determine a first scenario of the mobile terminal
  • the processing unit is further configured to: when the first scenario is not included in the preset mapping relationship set, generate a face recognition policy corresponding to the first scenario;
  • the processing unit is further configured to establish a mapping relationship between the first scenario and the generated face recognition policy, and add the mapping relationship to the mapping relationship set;
  • the processing unit is further configured to identify that the current user is a legitimate user according to the generated face recognition policy corresponding to the first scenario.
  • an embodiment of the present invention provides a mobile terminal, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps of any of the methods of the first aspect of the embodiments of the present invention.
  • an embodiment of the present invention provides a computer readable storage medium, wherein the computer readable storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to perform some or all of the steps described in any of the methods of the first aspect of the embodiments of the present invention.
  • the computer comprises a mobile terminal.
  • an embodiment of the present invention provides a computer program product, wherein the computer program product comprises a non-transitory computer readable storage medium storing a computer program, the computer program being operable to cause a computer to execute some or all of the steps described in any of the methods of the first aspect of the embodiments of the present invention.
  • the computer program product can be a software installation package, the computer comprising a mobile terminal.
  • the mobile terminal first determines the first scenario of the mobile terminal; secondly, when the first scenario is not included in the preset mapping relationship set, a face recognition policy corresponding to the first scenario is generated; then, a mapping relationship between the first scenario and the generated face recognition policy is established, and the mapping relationship is added to the mapping relationship set; finally, the current user is identified as a legitimate user according to the generated face recognition policy corresponding to the first scenario. This is conducive to improving the security, reliability, and accuracy of face recognition.
  • FIG. 1 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention.
  • FIG. 2 is a schematic flowchart of a face recognition method according to an embodiment of the present invention.
  • FIG. 3 is a schematic flowchart of a face recognition method according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention.
  • FIG. 5 is a structural block diagram of a functional unit of a mobile terminal according to an embodiment of the present invention.
  • references to "an embodiment” herein mean that a particular feature, structure, or characteristic described in connection with the embodiments can be included in at least one embodiment of the invention.
  • the appearances of the phrases in various places in the specification are not necessarily referring to the same embodiments, and are not exclusive or alternative embodiments that are mutually exclusive. Those skilled in the art will understand and implicitly understand that the embodiments described herein can be combined with other embodiments.
  • the mobile terminal involved in the embodiments of the present invention may include various handheld devices having wireless communication functions, in-vehicle devices, wearable devices, computing devices, or other processing devices connected to a wireless modem, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices, and the like. For convenience of description, the devices mentioned above are collectively referred to as mobile terminals.
  • the mobile terminal described in the embodiments of the present invention is provided with a biological image collection device, which specifically includes a fingerprint image collection device, an iris image collection device, and a face image collection device, where the fingerprint image collection device may be a fingerprint sensor module, the iris image collection device may include an infrared light source and an iris camera, and the face image collection device may be a general-purpose camera module, such as a front camera.
  • FIG. 1 is a schematic structural diagram of a mobile terminal 100 according to an embodiment of the present invention.
  • the mobile terminal 100 includes a housing, a touch display screen, a main board, a battery, and a sub-board. The main board is provided with an infrared light source 21, an iris camera 22, a front camera 23, a processor 110, a memory 120, a SIM card slot, and the like; the sub-board is provided with a vibrator, an integrated sound chamber, a VOOC flash charging interface, and a fingerprint sensor module 24. The infrared light source 21 and the iris camera 22 constitute the iris image collection device of the mobile terminal 100, the front camera 23 constitutes the face image collection device of the mobile terminal 100, and the fingerprint sensor module 24 constitutes the fingerprint image collection device of the mobile terminal 100. The iris image collection device, the face image collection device, and the fingerprint image collection device are collectively referred to as the biological image collection device of the mobile terminal 100. When face recognition is performed, the face image of the current user can be collected by the face image collection device.
  • the face image collecting device is configured to collect multiple face images of the current user
  • the memory is configured to store a preset face image template
  • the processor 110 is configured to: determine a first scenario of the mobile terminal; generate a face recognition policy corresponding to the first scenario when the first scenario is not included in the preset mapping relationship set; establish a mapping relationship between the first scenario and the generated face recognition policy, and add the mapping relationship to the mapping relationship set; and identify, according to the generated face recognition policy corresponding to the first scenario, that the current user is a legitimate user.
  • the mobile terminal first determines the first scenario of the mobile terminal; secondly, when the first scenario is not included in the preset mapping relationship set, a face recognition policy corresponding to the first scenario is generated; then, a mapping relationship between the first scenario and the generated face recognition policy is established, and the mapping relationship is added to the mapping relationship set; finally, the current user is identified as a legitimate user according to the generated face recognition policy corresponding to the first scenario. It can be seen that, when face recognition is performed, the first scene where the mobile terminal is currently located is determined; when it is detected that the first scene is not in the preset mapping relationship set, a face recognition policy applicable to the first scene is generated; and when the generated face recognition policy identifies that the current user is a registered user, the mapping relationship set is updated, so that this face recognition policy can be used directly the next time the mobile terminal is in the first scene. This improves the security, reliability, and accuracy of face recognition.
  • the processor 110 is specifically configured to: acquire related information about performing the face recognition process in the first scenario, where the related information includes a user operation record in the face recognition process; and generate a face recognition policy corresponding to the first scene according to the related information, where, in the face recognition policy corresponding to the first scene, the operation corresponding to the user operation record is an automatically executed operation.
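The policy generation described above can be sketched as follows. This is an illustrative Python sketch only; the names `FaceRecognitionPolicy` and `generate_policy`, and the string-based operation record, are hypothetical and not part of the disclosed embodiments.

```python
from dataclasses import dataclass, field

@dataclass
class FaceRecognitionPolicy:
    """Hypothetical policy object: operations the user performed manually
    are replayed automatically the next time this scene is entered."""
    scene_id: str
    auto_operations: list = field(default_factory=list)

def generate_policy(scene_id, user_operation_record):
    # Each recorded manual operation (e.g. "brighten_image",
    # "mark_feature_points") becomes an automatically executed step.
    return FaceRecognitionPolicy(scene_id, list(user_operation_record))

policy = generate_policy("cinema", ["brighten_image", "mark_feature_points"])
```

The next time the terminal detects the same scene, it would iterate over `auto_operations` instead of prompting the user again.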
  • in terms of acquiring the related information of the face recognition process, the processor 110 is specifically configured to: acquire multiple face images and scene data in the first scene; determine the related information of the face recognition process according to the multiple face images and the scene data; and acquire the related information of the face recognition process.
  • the processor 110 is further configured to acquire related information of the preset abnormal event when a preset abnormal event occurs in the face recognition process in the first scenario, and add the related information of the preset abnormal event to the face recognition policy corresponding to the first scenario.
  • the processor 110 is further configured to: determine a second scenario of the mobile terminal; if the second scenario is included in the preset mapping relationship set, determine the face recognition policy corresponding to the second scenario; and identify, according to the face recognition policy corresponding to the second scenario, that the current user is a legitimate user, where the face recognition policy corresponding to the second scenario is different from the face recognition policy corresponding to the first scenario.
  • the first scenario includes a location scenario, a user scenario, and an environment scenario, where the location scenario is used to indicate the location of the mobile terminal, the user scenario is used to indicate related information of the current user's face, and the environment scenario is used to indicate related information of the environment where the mobile terminal is currently located.
  • FIG. 2 is a schematic flowchart of a face recognition method according to an embodiment of the present invention, which is applied to a mobile terminal. As shown in the figure, the face recognition method includes:
  • S201: The mobile terminal determines a first scenario of the mobile terminal.
  • the first scenario is a face recognition scenario corresponding to the mobile terminal performing face recognition.
  • the first scene may be a location scene. Since the mobile terminal is in a moving state as the position of the user changes, the mobile terminal can be in different location scenes, and the location scene includes an indoor scene and an outdoor scene. Indoor scenes such as cinemas, supermarkets, shopping malls, cafes, gyms, etc., outdoor scenes such as open-air stadiums, parks, amusement parks, pedestrian streets, and the like.
  • the first scenario may be a user scenario.
  • the user's posture, such as the angle at which the user's face faces the front camera of the mobile terminal, and the user's different expressions may have a certain impact on face recognition; whether the user's face is obstructed, such as whether the user wears glasses, lets down or combs back bangs, or has a beard, is also a factor affecting face recognition; and the user's face itself may vary widely, for example, the user's face changes over time, or the user wears different makeup or no makeup, or has scars on the face.
  • the user scene affects the face image acquisition process, the feature point extraction process, and the feature point matching process in the face recognition process.
  • the first scenario may be an environment scenario. Since the current location of the mobile terminal may not be a location that can be described in detail, the current environmental factors may have a certain impact on the face recognition process, and thus the scenario may be defined as an environment scenario.
  • when the mobile terminal is in an environment scene, the light may be dim, and dim light may make it impossible to capture a clear face image; or the light may be very bright, and strong light may cause the captured photo to be overexposed so that a clear face cannot be captured; or there may be many obstacles in the current environment, which may obscure the face or enter the captured face image, causing errors in face recognition.
  • the scene categories are divided according to factors that affect face recognition in the face recognition process, and include location scenes, user scenes, and environment scenes. If the three scene categories overlap, the mobile terminal may preset priorities for the scene categories. For example, if the first scene is identified as a scene in the location category but could also be a scene in the user category, and location scenes have a higher priority than user scenes, the first scene is determined to be the corresponding scene in the location category.
  • Each scene corresponds to a face recognition strategy, and a face recognition policy can be applied to multiple scenes.
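The priority-based category selection described above can be sketched as follows. This is an illustrative Python sketch; the priority order location > user > environment merely follows the example in the text, and the function name `classify_scene` is hypothetical.

```python
# Assumed priority order, taken from the example above: a scene that
# matches several categories is classified under the highest-priority one.
SCENE_PRIORITY = ["location", "user", "environment"]

def classify_scene(matching_categories):
    """Return the highest-priority category among those the scene matches."""
    for category in SCENE_PRIORITY:
        if category in matching_categories:
            return category
    return None  # scene matched no known category

category = classify_scene({"user", "location"})
```

With the assumed ordering, a scene that is both a location scene and a user scene is treated as a location scene.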
  • before performing face recognition, the mobile terminal first determines the first scene where it is located; the mobile terminal may be in a certain scene or switching from one scene to another, which facilitates further determination of the face recognition policy.
  • the mobile terminal determines the current first scenario.
  • the confirmation mode of the first scenario may be that the mobile terminal locates the current location of the mobile terminal by using a Global Positioning System (GPS) to identify the current first scenario.
  • the user may manually select the first scene in which the mobile terminal is currently located, or the mobile terminal acquires scene data of the current scene through the sensor, and determines the first scene according to the scene data.
  • S202: The mobile terminal generates a face recognition policy corresponding to the first scenario when the first scenario is not included in the preset mapping relationship set.
  • the mobile terminal uses a pre-stored face recognition algorithm model, that is, a pre-stored face recognition policy set. The face recognition policy set includes multiple face recognition policies, each face recognition policy corresponds to a face recognition scene, each face recognition scene forms a mapping relationship with a face recognition policy, and the multiple face recognition scenes and multiple face recognition policies form a set of mapping relationships.
  • after determining the first scenario, the mobile terminal queries the preset mapping relationship set according to the first scenario. When the preset mapping relationship set does not include the first scenario, this indicates that the mobile terminal currently has no face recognition policy applicable to the first scenario; therefore, the mobile terminal generates a face recognition policy corresponding to the first scene.
  • when detecting that the first scene is not a preset scene, the mobile terminal collects multiple face images of the current user in the first scene, or collects more scene data, which is used to generate the face recognition policy corresponding to the first scene.
  • S203: The mobile terminal establishes a mapping relationship between the first scenario and the generated face recognition policy, and adds the mapping relationship to the mapping relationship set.
  • the mapping relationship between the generated face recognition policy and the first scenario is established, and the mapping relationship is added to the mapping relationship set.
  • the mapping relationship set is thereby updated, which has the advantage that the generated face recognition policy can be used directly when the mobile terminal is in the first scene again.
  • S204: The mobile terminal identifies that the current user is a legitimate user according to the generated face recognition policy corresponding to the first scenario.
  • the mobile terminal identifies, according to the generated face recognition policy, whether the current user in the first scenario is a legitimate user, and the face recognition process can be completed more accurately and efficiently.
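The overall flow S201-S204 can be sketched as follows. This is an illustrative Python sketch only; the `Policy` class, the `recognize` function, and the placeholder matching rule are hypothetical stand-ins for the scene-adapted recognition pipeline described in the text.

```python
class Policy:
    """Hypothetical stand-in for a scene-specific face recognition policy."""
    def __init__(self, scene):
        self.scene = scene

    def match(self, face_images, template):
        # Placeholder matching rule; a real policy would run the
        # scene-adapted acquisition/extraction/matching pipeline.
        return template in face_images

def recognize(mapping_set, first_scene, face_images, template):
    # S202: generate a policy when the first scene has no mapping yet.
    if first_scene not in mapping_set:
        # S203: add the new scene -> policy mapping to the set.
        mapping_set[first_scene] = Policy(first_scene)
    # S204: identify the user with the policy corresponding to the scene.
    return mapping_set[first_scene].match(face_images, template)

mappings = {}
is_legitimate = recognize(mappings, "cinema", ["face_a", "face_b"], "face_a")
```

On the second visit to `"cinema"`, the lookup succeeds and the stored policy is reused without regeneration, which is the benefit the text describes.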
  • the mobile terminal first determines the first scenario of the mobile terminal; secondly, when the first scenario is not included in the preset mapping relationship set, a face recognition policy corresponding to the first scenario is generated; then, a mapping relationship between the first scenario and the generated face recognition policy is established, and the mapping relationship is added to the mapping relationship set; finally, the current user is identified as a legitimate user according to the generated face recognition policy corresponding to the first scenario. It can be seen that, when face recognition is performed, the first scene where the mobile terminal is currently located is determined; when it is detected that the first scene is not in the preset mapping relationship set, a face recognition policy applicable to the first scene is generated; and when the generated face recognition policy identifies that the current user is a registered user, the mapping relationship set is updated, so that this face recognition policy can be used directly the next time the mobile terminal is in the first scene. This improves the security, reliability, and accuracy of face recognition.
  • the generating a face recognition policy corresponding to the first scenario includes: acquiring related information about performing the face recognition process in the first scenario, where the related information includes a user operation record in the face recognition process; and generating a face recognition policy corresponding to the first scenario according to the related information, where an operation corresponding to the user operation record in the face recognition policy corresponding to the first scenario is an automatically executed operation.
  • the mobile terminal acquires related information of the face recognition process in the first scenario, and the related information includes the user's interaction operation record; that is, the related information may be information input by the user for performing face recognition. The mobile terminal may generate the face recognition policy corresponding to the first scene according to the related information, and the operation corresponding to the user operation record in that policy may be an operation automatically performed by the mobile terminal. In other words, during the face recognition process the user interacts with the mobile terminal and inputs the related information, and the mobile terminal can perform self-learning and training so that it does not need to prompt the user to input the related information the next time it is in the first scene.
  • for example, the first scene of the mobile terminal is a location scene: the mobile terminal is located in a movie theater, so the first scene is a cinema scene.
  • the mobile terminal collects multiple face images of the current user in the cinema scene and finds that the collected face images are not clear. The face images may be unclear because the darkness of the cinema or the color of the light affects the imaging effect, or because the angle at which the user faces the mobile terminal while watching the movie is not standard. These factors make the mobile terminal unable to locate the facial feature points in the face image. At this time, the mobile terminal can output a prompt message prompting the current user to manually mark the face feature points, or the user can actively mark the face feature points in the face image, so that the mobile terminal can accurately acquire the face feature points in the face image.
  • in addition, the image of the face feature points acquired in this scene may not be clear, and the mobile terminal can intelligently learn and train: the face image is preprocessed, for example brightened and sharpened, so as to extract clearer face feature point images for face recognition.
  • the mobile terminal can also intelligently learn and train to, according to the face feature points actively marked by the user, select the optimal face feature points for matching, or select fewer face feature points for matching, so as to improve the matching success rate of face recognition, thereby solving the problem that face recognition cannot be performed successfully even for legitimate users.
  • that is, the mobile terminal has a learning and training function: after face recognition is successfully performed in the first scenario with the help of the user operation record, the mobile terminal no longer requires the user to enter the related information when it is subsequently in the first scenario.
  • in this way, the mobile terminal obtains the related information of the face recognition process according to the user operation record, thereby obtaining the face recognition policy corresponding to the first scene. The mobile terminal has the capability of active learning and training, which enables it to perform face recognition with the face recognition policy corresponding to the first scene, thereby improving the security, reliability, and accuracy of face recognition.
  • the acquiring related information of the face recognition process includes: acquiring a plurality of face images and scene data in the first scene; and according to the plurality of face images and the scene data Determining relevant information of the face recognition process; acquiring related information of the face recognition process.
  • when detecting that the first scene is not a preset scene, the mobile terminal collects multiple face images and scene data of the current user in the first scene, and determines the related information of the face recognition process according to the multiple face images and the scene data, so as to acquire the related information. For example, after acquiring multiple face images of the current user, the mobile terminal determines the clearest face image among them but still cannot locate at least one feature point in the face image; for example, the eyebrows cannot be located. The related information can then be determined as information for determining the current user's eyebrow position, and at this time the current user can manually mark the eyebrow position in the image.
  • the mobile terminal may determine the sharpness of the plurality of images, select the face image with the highest definition, and match the face image with the face image template.
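The sharpness-based image selection mentioned above can be sketched as follows. This is an illustrative Python sketch; the patent does not specify a sharpness measure, so the variance-of-differences proxy used here (a crude relative of the common variance-of-Laplacian measure) is an assumption, as are the function names.

```python
def sharpness(image):
    """Crude sharpness proxy: variance of horizontal pixel differences
    over a grayscale image given as a list of rows of pixel values.
    (A production system might use e.g. the variance of the Laplacian.)"""
    diffs = [row[i + 1] - row[i] for row in image for i in range(len(row) - 1)]
    mean = sum(diffs) / len(diffs)
    return sum((d - mean) ** 2 for d in diffs) / len(diffs)

def clearest(images):
    """Select the face image with the highest sharpness for matching."""
    return max(images, key=sharpness)

blurry = [[10, 10, 11], [10, 11, 10]]   # small intensity variation
sharp = [[0, 255, 0], [255, 0, 255]]    # strong edges
best = clearest([blurry, sharp])
```

The selected image would then be matched against the stored face image template.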
  • the mobile terminal acquires multiple face images of the current user and scene information of the first scene, thereby determining relevant information in the face recognition process, which is beneficial to improving the reliability and accuracy of the face recognition.
  • the method further includes: acquiring related information of the preset abnormal event when a preset abnormal event occurs in the face recognition process in the first scenario, and adding the related information of the preset abnormal event to the face recognition policy corresponding to the first scene.
  • during face recognition, the mobile terminal may experience an abnormal event, which may be a crash, restart, freeze, black screen, flashback, or the like of the mobile terminal, and an abnormal event that occurs with high frequency during the face recognition process may be set as a preset abnormal event.
  • when a preset abnormal event appears during the face recognition process, the mobile terminal confirms the cause and influencing factors of the abnormal event, and eliminates or avoids the influencing factors through intelligent learning, so that such abnormal events no longer occur, or occur less often, during subsequent face recognition.
  • the mobile terminal adds the related information of the preset abnormal event to the face recognition policy corresponding to the first scenario, thereby performing intelligent learning and training on situations in which a preset abnormal event occurs in the face recognition process, which can reduce or avoid the occurrence of preset abnormal events.
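The abnormal-event bookkeeping described above can be sketched as follows. This is an illustrative Python sketch; the event names, the dict-based policy, and the function `record_abnormal_event` are hypothetical, and the patent does not specify how the related information is structured.

```python
# Assumed set of preset abnormal events, taken from the examples above.
PRESET_ABNORMAL_EVENTS = {"crash", "restart", "freeze", "black_screen", "flashback"}

def record_abnormal_event(policy, event, related_info):
    """If a preset abnormal event occurs during recognition, attach its
    related information to the scene's policy for later learning."""
    if event in PRESET_ABNORMAL_EVENTS:
        policy.setdefault("abnormal_events", []).append(
            {"event": event, "related_info": related_info})
    return policy

policy = record_abnormal_event({}, "crash", {"light": "dim"})
```

A later learning step could inspect `policy["abnormal_events"]` to eliminate or avoid the recorded influencing factors.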
  • the method further includes: determining a second scenario of the mobile terminal; if it is detected that the second scenario is included in the preset mapping relationship set, determining the face recognition policy corresponding to the second scenario; and identifying, according to the face recognition policy corresponding to the second scenario, that the current user is a legitimate user, where the face recognition policy corresponding to the second scenario is different from the face recognition policy corresponding to the first scenario.
  • the face recognition policy corresponding to the second scenario may be determined, and the current user is identified as a registered user according to the face recognition policy corresponding to the second scenario.
  • the first scene is not the preset scene, and the second scene is the preset scene. Therefore, the face recognition policy corresponding to the second scene is different from the face recognition strategy corresponding to the first scene.
  • in this way, the mobile terminal may directly determine the face recognition policy corresponding to the second scenario and perform face recognition, thereby determining whether the current user is a legitimate user, which is beneficial to improving the efficiency, reliability, and accuracy of face recognition.
  • FIG. 3 is a schematic flowchart of a method for recognizing a face according to an embodiment of the present invention, which is applied to a mobile terminal. As shown in the figure, the method for recognizing a face includes:
  • the mobile terminal determines a first scenario of the mobile terminal.
  • the mobile terminal acquires related information about performing the face recognition process in the first scenario when the first scenario is not included in the preset mapping relationship set, where the related information includes a user operation record in the face recognition process.
  • the mobile terminal generates a face recognition policy corresponding to the first scenario according to the related information, where an operation corresponding to the user operation record in the face recognition policy corresponding to the first scenario is an automatically executed operation.
  • the mobile terminal establishes a mapping relationship between the first scenario and the generated face recognition policy, and adds the mapping relationship to the mapping relationship set.
  • the mobile terminal identifies, according to the generated face recognition policy corresponding to the first scenario, that the current user is a legitimate user.
  • the mobile terminal first determines the first scenario of the mobile terminal; secondly, when the first scenario is not included in the preset mapping relationship set, a face recognition policy corresponding to the first scenario is generated; then, a mapping relationship between the first scenario and the generated face recognition policy is established, and the mapping relationship is added to the mapping relationship set; finally, the current user is identified as a legitimate user according to the generated face recognition policy corresponding to the first scenario. It can be seen that, when face recognition is performed, the first scene where the mobile terminal is currently located is determined; when it is detected that the first scene is not in the preset mapping relationship set, a face recognition policy applicable to the first scene is generated; and when the generated face recognition policy identifies that the current user is a registered user, the mapping relationship set is updated, so that this face recognition policy can be used directly the next time the mobile terminal is in the first scene. This improves the security, reliability, and accuracy of face recognition.
  • FIG. 4 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention.
  • the mobile terminal includes a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor.
  • the current user is identified as a legitimate user according to the generated face recognition policy corresponding to the first scenario.
  • the mobile terminal first determines the first scenario of the mobile terminal; secondly, when the first scenario is not included in the preset mapping relationship set, a face recognition policy corresponding to the first scenario is generated; then, a mapping relationship between the first scenario and the generated face recognition policy is established, and the mapping relationship is added to the mapping relationship set; finally, the current user is identified as a legitimate user according to the generated face recognition policy corresponding to the first scenario. It can be seen that, when face recognition is performed, the first scene where the mobile terminal is currently located is determined; when it is detected that the first scene is not in the preset mapping relationship set, a face recognition policy applicable to the first scene is generated; and when the generated face recognition policy identifies that the current user is a registered user, the mapping relationship set is updated, so that this face recognition policy can be used directly the next time the mobile terminal is in the first scene. This improves the security, reliability, and accuracy of face recognition.
  • the instructions in the program are specifically configured to perform the following steps: acquiring related information about the face recognition process performed in the first scenario, the related information including a user operation record from the face recognition process; and generating, according to the related information, the face recognition policy corresponding to the first scenario, where the operation corresponding to the user operation record in the face recognition policy corresponding to the first scenario is performed automatically.
  • the instructions in the program are specifically configured to perform the following steps: acquiring a plurality of face images and scene data in the first scenario; and determining related information of the face recognition process according to the plurality of face images and the scene data, thereby acquiring the related information of the face recognition process.
  • the instructions in the program are further configured to perform the following steps: when a preset abnormal event occurs during the face recognition process in the first scenario, acquiring related information of the preset abnormal event, and adding the related information of the preset abnormal event to the face recognition policy corresponding to the first scenario.
  • the instructions in the program are further configured to perform the following steps: determining a second scenario of the mobile terminal; if it is detected that the second scenario is included in the preset set of mapping relationships, determining a face recognition policy corresponding to the second scenario; and identifying the current user as a legitimate user according to the face recognition policy corresponding to the second scenario, the face recognition policy corresponding to the second scenario being different from the face recognition policy corresponding to the first scenario.
  • the first scenario includes a location scenario, a user scenario and an environment scenario, where the location scenario is used to indicate the location of the mobile terminal, the user scenario is used to indicate related information about the current user's face, and the environment scenario is used to indicate related information about the scenario in which the mobile terminal is currently located.
  • the mobile terminal includes corresponding hardware structures and/or software modules for performing the respective functions in order to implement the functions described above.
  • in combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, the present invention can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the present invention.
  • the embodiments of the present invention may divide the mobile terminal into functional units according to the foregoing method examples.
  • for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit.
  • the integrated unit can be implemented in the form of hardware or in the form of a software functional unit. It should be noted that the division of units in the embodiments of the present invention is schematic and is merely a logical function division; there may be other division manners in actual implementation.
  • FIG. 5 shows a block diagram of one possible functional unit composition of the mobile terminal involved in the embodiment.
  • the mobile terminal 500 includes a processing unit 502 and an acquisition unit 503.
  • the processing unit 502 is configured to control and manage the actions of the mobile terminal.
  • for example, the processing unit 502 is configured to support the mobile terminal in performing steps S201-S204 in FIG. 2, steps S301-S305 in FIG. 3, and/or other processes of the technology described herein.
  • the acquisition unit 503 is configured to collect multiple face images of the current user, and the mobile terminal may further include a storage unit 501 for storing program code and data of the mobile terminal.
  • the processing unit 502 is configured to determine a first scenario of the mobile terminal; to generate, when detecting that the preset set of mapping relationships does not include the first scenario, a face recognition policy corresponding to the first scenario; to establish a mapping relationship between the first scenario and the generated face recognition policy and add the mapping relationship to the set of mapping relationships; and to identify the current user as a legitimate user according to the generated face recognition policy corresponding to the first scenario.
  • with respect to generating the face recognition policy corresponding to the first scenario, the processing unit 502 is specifically configured to: acquire related information about the face recognition process performed in the first scenario, the related information including a user operation record from the face recognition process; and generate, according to the related information, the face recognition policy corresponding to the first scenario, where the operation corresponding to the user operation record in the face recognition policy corresponding to the first scenario is performed automatically.
  • with respect to acquiring the related information of the face recognition process, the processing unit 502 is specifically configured to: acquire multiple face images and scene data in the first scenario; and determine related information of the face recognition process according to the multiple face images and the scene data, thereby acquiring the related information of the face recognition process.
  • the processing unit 502 is further configured to: when a preset abnormal event occurs during the face recognition process in the first scenario, acquire related information of the preset abnormal event, and add the related information of the preset abnormal event to the face recognition policy corresponding to the first scenario.
  • the processing unit 502 is further configured to determine a second scenario of the mobile terminal; to determine, if it is detected that the second scenario is included in the preset set of mapping relationships, a face recognition policy corresponding to the second scenario; and to identify the current user as a legitimate user according to the face recognition policy corresponding to the second scenario, the face recognition policy corresponding to the second scenario being different from the face recognition policy corresponding to the first scenario.
  • the first scenario includes a location scenario, a user scenario and an environment scenario, where the location scenario is used to indicate the location of the mobile terminal, the user scenario is used to indicate related information about the current user's face, and the environment scenario is used to indicate related information about the scenario in which the mobile terminal is currently located.
  • the processing unit 502 can be a processor or a controller; the acquisition unit 503 can be a biometric image acquisition device, such as an iris image acquisition device, a face image acquisition device or a fingerprint image acquisition device; and the storage unit 501 can be a memory.
  • the embodiment of the present invention further provides a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, the computer program causing the computer to execute part or all of any of the methods described in the method embodiment.
  • the computer includes a mobile terminal.
  • Embodiments of the present invention also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform some or all of the steps of any method described in the method embodiments.
  • the computer program product can be a software installation package, the computer comprising a mobile terminal.
  • the disclosed apparatus may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the unit is only a logical function division; there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features can be ignored or not performed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be electrical or otherwise.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit, if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including a number of instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the various embodiments of the present invention.
  • the foregoing memory includes various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk or an optical disc.


Abstract

Embodiments of the present invention disclose a face recognition method and related products. The method includes: a mobile terminal determines a first scenario of the mobile terminal; when detecting that a preset set of mapping relationships does not include the first scenario, the terminal generates a face recognition policy corresponding to the first scenario; it establishes a mapping relationship between the first scenario and the generated face recognition policy and adds that mapping relationship to the set of mapping relationships; and it identifies the current user as a legitimate user according to the generated face recognition policy corresponding to the first scenario. Embodiments of the present invention help improve the security, reliability and accuracy of face recognition.

Description

Face recognition method and related products — Technical Field
The present invention relates to the field of mobile terminal technologies, and in particular to a face recognition method and related products.
Background
With the progress of society and the development of science, information exchange is increasingly frequent. To ensure information security, user identity needs to be verified; therefore, biometric identification is often used, for example fingerprint recognition, face recognition, iris recognition, vein recognition, palmprint recognition and other biometric technologies.
In the prior art, the face recognition process may occur in a wide variety of scenarios. As a result, external factors during matching may cause matching to fail even for a registered user, reducing the efficiency and accuracy of face recognition.
Summary of the Invention
Embodiments of the present invention provide a face recognition method and related products, which can improve the security, reliability and accuracy of face recognition.
In a first aspect, an embodiment of the present invention provides a mobile terminal, including a processor, and a face image acquisition device and a memory connected to the processor, where:
the face image acquisition device is configured to acquire multiple face images of the current user;
the memory is configured to store a preset face image template; and
the processor is configured to determine a first scenario of the mobile terminal; to generate, when detecting that a preset set of mapping relationships does not include the first scenario, a face recognition policy corresponding to the first scenario; to establish a mapping relationship between the first scenario and the generated face recognition policy and add that mapping relationship to the set of mapping relationships; and to identify the current user as a legitimate user according to the generated face recognition policy corresponding to the first scenario.
In a second aspect, an embodiment of the present invention provides a face recognition method, including:
determining a first scenario of a mobile terminal;
when detecting that a preset set of mapping relationships does not include the first scenario, generating a face recognition policy corresponding to the first scenario;
establishing a mapping relationship between the first scenario and the generated face recognition policy, and adding the mapping relationship to the set of mapping relationships;
identifying the current user as a legitimate user according to the generated face recognition policy corresponding to the first scenario.
In a third aspect, an embodiment of the present invention provides a mobile terminal, including a processing unit, where:
the processing unit is configured to determine a first scenario of the mobile terminal;
the processing unit is further configured to generate, when detecting that a preset set of mapping relationships does not include the first scenario, a face recognition policy corresponding to the first scenario;
the processing unit is further configured to establish a mapping relationship between the first scenario and the generated face recognition policy, and add the mapping relationship to the set of mapping relationships;
the processing unit is further configured to identify the current user as a legitimate user according to the generated face recognition policy corresponding to the first scenario.
In a fourth aspect, an embodiment of the present invention provides a mobile terminal, including a processor, a memory, a communication interface and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, the programs including instructions for performing the steps in any method of the first aspect of the embodiments of the present invention.
In a fifth aspect, an embodiment of the present invention provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to perform some or all of the steps described in any method of the first aspect of the embodiments of the present invention; the computer includes a mobile terminal.
In a sixth aspect, an embodiment of the present invention provides a computer program product, including a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps described in any method of the first aspect of the embodiments of the present invention. The computer program product may be a software installation package; the computer includes a mobile terminal.
It can be seen that, in the embodiments of the present invention, the mobile terminal first determines its first scenario; secondly, when detecting that the preset set of mapping relationships does not include the first scenario, it generates a face recognition policy corresponding to the first scenario; then it establishes a mapping relationship between the first scenario and the generated face recognition policy and adds that mapping relationship to the set of mapping relationships; finally, it identifies the current user as a legitimate user according to the generated policy corresponding to the first scenario. This helps improve the security, reliability and accuracy of face recognition.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed for describing the embodiments or the prior art are briefly introduced below. Apparently, the accompanying drawings in the following description are merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these drawings without creative effort.
FIG. 1 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of a face recognition method according to an embodiment of the present invention;
FIG. 3 is a schematic flowchart of a face recognition method according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a mobile terminal disclosed in an embodiment of the present invention;
FIG. 5 is a block diagram of the functional units of a mobile terminal disclosed in an embodiment of the present invention.
Detailed Description
To make the solutions of the present invention better understood by a person skilled in the art, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The terms "first", "second" and the like in the specification, claims and accompanying drawings of the present invention are used to distinguish different objects rather than to describe a particular order. In addition, the terms "include" and "have" and any variants thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, product or device that includes a series of steps or units is not limited to the listed steps or units, but optionally further includes steps or units that are not listed, or optionally further includes other steps or units inherent to the process, method, product or device.
Reference to "an embodiment" herein means that a particular feature, structure or characteristic described in connection with the embodiment may be included in at least one embodiment of the present invention. The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor are they independent or alternative embodiments mutually exclusive of other embodiments. A person skilled in the art understands, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
The mobile terminal involved in the embodiments of the present invention may include various handheld devices with wireless communication functions, in-vehicle devices, wearable devices, computing devices or other processing devices connected to a wireless modem, as well as various forms of user equipment (UE), mobile stations (MS), terminal devices, and so on. For convenience of description, the devices mentioned above are collectively referred to as mobile terminals.
The mobile terminal described in the embodiments of the present invention is provided with a biometric image acquisition device, which specifically includes a fingerprint image acquisition device, an iris image acquisition device and a face image acquisition device, where the fingerprint image acquisition device may be a fingerprint sensor module, the iris image acquisition device may include an infrared light source and an iris camera, and the face image acquisition device may be a general-purpose camera module such as a front camera. The embodiments of the present invention are described below with reference to the accompanying drawings.
Referring to FIG. 1, FIG. 1 is a schematic structural diagram of a mobile terminal 100 according to an embodiment of the present invention. The mobile terminal 100 includes a housing, a touch display screen, a mainboard, a battery and a sub-board. The mainboard is provided with an infrared light source 21, an iris camera 22, a front camera 23, a processor 110, a memory 120, a SIM card slot and the like; the sub-board is provided with a vibrator, an integrated sound cavity, a VOOC flash-charging interface and a fingerprint module 24. The infrared light source 21 and the iris camera 22 constitute the iris image acquisition device of the mobile terminal 100, the front camera 23 constitutes the face image acquisition device of the mobile terminal 100, and the fingerprint sensor module 24 constitutes the fingerprint image acquisition device of the mobile terminal 100. The iris image acquisition device, the face image acquisition device and the fingerprint image acquisition device are collectively referred to as the biometric image acquisition device of the mobile terminal 100; during face recognition, multiple face images of the current user can be acquired through the face image acquisition device. Specifically:
the face image acquisition device is configured to acquire multiple face images of the current user;
the memory is configured to store a preset face image template; and
the processor 110 is configured to determine a first scenario of the mobile terminal; to generate, when detecting that a preset set of mapping relationships does not include the first scenario, a face recognition policy corresponding to the first scenario; to establish a mapping relationship between the first scenario and the generated face recognition policy and add that mapping relationship to the set of mapping relationships; and to identify the current user as a legitimate user according to the generated face recognition policy corresponding to the first scenario.
It can be seen that, in this embodiment of the present invention, the mobile terminal first determines its first scenario; secondly, when detecting that the preset set of mapping relationships does not include the first scenario, it generates a face recognition policy corresponding to the first scenario; then it establishes a mapping relationship between the first scenario and the generated policy and adds that mapping relationship to the set; finally, it identifies the current user as a legitimate user according to the generated policy corresponding to the first scenario. Since the current first scenario is determined before face recognition is performed, a policy applicable to the first scenario is generated when that scenario is found to be absent from the preset set of mapping relationships, and the set is updated once the generated policy identifies the current user as a registered user, the policy can be used directly the next time the terminal is in the first scenario. This helps improve the security, reliability and accuracy of face recognition.
In a possible example, with respect to generating the face recognition policy corresponding to the first scenario, the processor 110 is specifically configured to: acquire related information about the face recognition process performed in the first scenario, the related information including a user operation record from the face recognition process; and generate, according to the related information, the face recognition policy corresponding to the first scenario, where the operation corresponding to the user operation record in that policy is performed automatically.
In a possible example, with respect to acquiring the related information of the face recognition process, the processor 110 is specifically configured to: acquire multiple face images and scene data in the first scenario; and determine the related information of the face recognition process according to the multiple face images and the scene data, thereby acquiring the related information of the face recognition process.
In a possible example, the processor 110 is further configured to: when a preset abnormal event occurs during the face recognition process in the first scenario, acquire related information of the preset abnormal event and add the related information of the preset abnormal event to the face recognition policy corresponding to the first scenario.
In a possible example, the processor 110 is further configured to: determine a second scenario of the mobile terminal; if detecting that the preset set of mapping relationships includes the second scenario, determine the face recognition policy corresponding to the second scenario; and identify the current user as a legitimate user according to the face recognition policy corresponding to the second scenario, the policy corresponding to the second scenario being different from the policy corresponding to the first scenario.
In a possible example, the first scenario includes a location scenario, a user scenario and an environment scenario, where the location scenario indicates the location of the mobile terminal, the user scenario indicates related information about the current user's face, and the environment scenario indicates related information about the scenario in which the mobile terminal is currently located.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of a face recognition method according to an embodiment of the present invention, applied to a mobile terminal. As shown in the figure, the face recognition method includes:
S201: The mobile terminal determines a first scenario of the mobile terminal.
The first scenario is the face recognition scenario corresponding to the mobile terminal when face recognition is performed.
The first scenario may be a location scenario. Since the mobile terminal moves as the user's position changes, it may be in different location scenarios, which include indoor scenarios and outdoor scenarios. Indoor scenarios include, for example, cinemas, supermarkets, shopping malls, cafés and gyms; outdoor scenarios include, for example, open-air stadiums, parks, amusement parks and pedestrian streets. The first scenario may also be a user scenario. The user's posture, for example the angle at which the user's face faces the front camera of the mobile terminal, and differences in the user's expression affect face recognition to some extent; whether the user's face is occluded, for example whether the user wears glasses, lets down or combs up their fringe, or has a beard, is also a factor affecting face recognition; so are large-scale changes of the face itself, for example the face changing over time, the user wearing makeup, different makeup or no makeup, or scars appearing on the face. Such user scenarios affect the face image acquisition, feature point extraction and feature point matching stages of the face recognition process.
The first scenario may also be an environment scenario. The current position of the mobile terminal may not be a position that can be described in detail, yet the current environmental factors can still affect the face recognition process, so such a scenario may be defined as an environment scenario. When the mobile terminal is in an environment scenario, the light may be dim, which may make it impossible to capture a clear face image; or the light may be very strong, which may overexpose the captured photo so that a clear face cannot be captured; or the current environment may contain many obstacles, which may occlude the face or enter the captured face image and cause errors in face recognition.
The scenario categories are divided according to the factors that affect face recognition during the face recognition process, and include location scenarios, user scenarios and environment scenarios. If the three kinds of scenarios overlap, the mobile terminal may preset priorities for the scenario categories. For example, if the first scenario could be either a scenario in the location category or a scenario in the user category, but the location category has a higher priority than the user category, the first scenario is determined to be the corresponding scenario in the location category.
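The category-priority rule described above can be sketched in a few lines of Python. This is only an illustration of the rule, not the disclosed implementation; all names and the particular priority order are hypothetical, following the location-over-user example in the text:

```python
# Hypothetical sketch of the scenario-priority rule: when the detected context
# matches scenes in more than one category, the higher-priority category wins.
PRIORITY = ["location", "user", "environment"]

def resolve_scene(candidates):
    """Pick the scene whose category has the highest priority.

    `candidates` maps a category name to the detected scene label in that
    category, e.g. {"location": "cinema", "user": "wearing_glasses"}.
    """
    for category in PRIORITY:
        if category in candidates:
            return candidates[category]
    return None  # no scene detected at all
```

With this rule, a context matching both a location scene and a user scene resolves to the location scene, as in the example above.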
Each scenario corresponds to one face recognition policy, and one face recognition policy may apply to multiple scenarios.
Before performing face recognition, the mobile terminal first determines the first scenario in which it is located; the mobile terminal may be in a certain scenario or switching from one scenario to another. This facilitates the further determination of the face recognition policy.
The mobile terminal determines the current first scenario. The first scenario may be identified by locating the current position of the mobile terminal through the Global Positioning System (GPS); or the user may manually select the first scenario in which the mobile terminal is currently located; or the mobile terminal may acquire scene data of the current scenario through sensors and determine the first scenario according to the scene data.
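As one illustration of the sensor-based option just mentioned, a coarse environment scene label could be derived from simple scene data such as an ambient light reading. This is a hedged sketch only; the function name, the lux thresholds and the labels are all hypothetical, not taken from the disclosure:

```python
def classify_environment(lux):
    """Map an ambient-light reading (in lux) to a coarse environment scene.

    The thresholds are illustrative: very low light hampers capture, and
    very strong light risks overexposure, as described in the text.
    """
    if lux < 50:
        return "dark"
    if lux > 10000:
        return "very_bright"
    return "normal"
```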
S202: When detecting that a preset set of mapping relationships does not include the first scenario, the mobile terminal generates a face recognition policy corresponding to the first scenario.
When performing face recognition, the mobile terminal uses its prestored face recognition algorithm models, that is, its prestored set of face recognition policies. The set includes multiple face recognition policies, each corresponding to one face recognition scenario; each face recognition scenario and face recognition policy form a mapping relationship, and the multiple scenarios and policies together form a set of mapping relationships.
After determining the first scenario, the mobile terminal queries the preset set of mapping relationships according to the first scenario. When it detects that the set does not include the first scenario, this indicates that the mobile terminal currently has no face recognition policy applicable to the first scenario; therefore, the mobile terminal generates a face recognition policy corresponding to the first scenario.
When detecting that the first scenario is not a preset scenario, the mobile terminal captures multiple face images of the current user in the first scenario, or captures more scene data, for use in generating the face recognition policy corresponding to the first scenario.
S203: The mobile terminal establishes a mapping relationship between the first scenario and the generated face recognition policy, and adds the mapping relationship to the set of mapping relationships.
After generating a face recognition policy applicable to the first scenario through training or learning, the mobile terminal establishes a mapping relationship between the generated policy and the first scenario and adds it to the set of mapping relationships, thereby updating the set. In this way, when the mobile terminal is in the first scenario again, it can use the generated policy directly.
S204: The mobile terminal identifies the current user as a legitimate user according to the generated face recognition policy corresponding to the first scenario.
By identifying, according to the generated face recognition policy, whether the current user in the first scenario is a legitimate user, the mobile terminal can complete the face recognition process more accurately and efficiently.
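Steps S201-S204 amount to a lazy lookup-or-generate over the mapping set. A minimal sketch follows; it assumes hypothetical `generate_policy` and `verify` callbacks, which stand in for the training/learning and matching stages described in the text:

```python
# Hypothetical sketch of S201-S204: look up the current scene in the mapping
# set; if absent, generate a policy and store the new mapping, then verify
# the user under that policy.

def recognize(scene, policy_map, generate_policy, verify):
    """Return (is_legitimate, policy_map) for the current scene.

    `policy_map` is the preset set of scene -> policy mappings;
    `generate_policy(scene)` builds a policy for an unseen scene (S202);
    `verify(policy)` runs face matching under that policy (S204).
    """
    if scene not in policy_map:                      # S201/S202: scene absent
        policy_map[scene] = generate_policy(scene)   # S203: add the mapping
    policy = policy_map[scene]                       # reused on later visits
    return verify(policy), policy_map
```

On a second visit to the same scene, the stored policy is used directly and `generate_policy` is not invoked again, which is the update behaviour the text describes.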
It can be seen that, in this embodiment of the present invention, the mobile terminal first determines its first scenario; secondly, when detecting that the preset set of mapping relationships does not include the first scenario, it generates a face recognition policy corresponding to the first scenario; then it establishes a mapping relationship between the first scenario and the generated policy and adds that mapping relationship to the set; finally, it identifies the current user as a legitimate user according to the generated policy corresponding to the first scenario. Since the current first scenario is determined before face recognition is performed, a policy applicable to the first scenario is generated when that scenario is found to be absent from the preset set of mapping relationships, and the set is updated once the generated policy identifies the current user as a registered user, the policy can be used directly the next time the terminal is in the first scenario. This helps improve the security, reliability and accuracy of face recognition.
In a possible example, generating the face recognition policy corresponding to the first scenario includes: acquiring related information about the face recognition process performed in the first scenario, the related information including a user operation record from the face recognition process; and generating, according to the related information, the face recognition policy corresponding to the first scenario, where the operation corresponding to the user operation record in that policy is performed automatically.
The mobile terminal acquires related information about the face recognition process performed in the first scenario; the related information includes the user's interactive operation record, that is, it may be information input by the user for face recognition. The mobile terminal can generate the policy corresponding to the first scenario from this related information, and the operation corresponding to the user operation record in that policy can be executed automatically by the mobile terminal. In other words, during the current face recognition process the user interacts with the mobile terminal and inputs related information; the mobile terminal can perform self-learning and training so that, the next time it is in the first scenario, it does not need to prompt the user to input the related information.
For example, the first scenario of the mobile terminal is a location scenario: the mobile terminal is in a cinema, the first scenario is the cinema scenario, and the mobile terminal has no face recognition policy corresponding to the cinema scenario. First, the mobile terminal captures multiple face images of the current user in the cinema scenario and finds that the acquired face images are unclear. This may be because the dim light or the color of the light in the cinema degrades the image, or because the angle at which the user faces the mobile terminal while watching the film is not standard. These factors prevent the mobile terminal from locating the facial feature points in the face image. At this point, the mobile terminal may output a prompt message prompting the current user to manually mark the facial feature points, or the user may actively mark the facial feature points in the face image, so that the mobile terminal can accurately obtain the facial feature points in the face image.
In addition, the facial feature point images obtained in this scenario may be unclear. The mobile terminal can learn and train intelligently: to make the feature point images clearer, it preprocesses the face images, for example by brightening and sharpening, so as to extract clearer facial feature point images before performing face recognition.
In addition, in another face recognition policy, the mobile terminal can learn and train intelligently and, based on the feature points actively marked by the user, select the optimal facial feature points for matching, or select fewer feature points for matching, thereby increasing the matching success rate of face recognition and solving the problem that even a legitimate user cannot pass face recognition.
In this face recognition policy, the mobile terminal has learning and training functions: after face recognition in the first scenario succeeds this time with the help of the user operation record, the user no longer needs to input the related information when the mobile terminal is in the first scenario again.
It can be seen that, in this example, the mobile terminal obtains the related information of the face recognition process from the user operation record and thereby obtains the policy corresponding to the first scenario; moreover, the mobile terminal has the capability of active learning and training. This enables the mobile terminal to perform face recognition in the first scenario using the corresponding policy, which helps improve the security, reliability and accuracy of face recognition.
In a possible example, acquiring the related information of the face recognition process includes: acquiring multiple face images and scene data in the first scenario; determining the related information of the face recognition process according to the multiple face images and the scene data; and acquiring the related information of the face recognition process.
When detecting that the first scenario is not a preset scenario, the mobile terminal captures multiple face images of the current user and scene data in the first scenario, and determines the related information of the face recognition process according to the multiple face images and the scene data, thereby acquiring the related information. For example, after acquiring multiple face images of the current user, the mobile terminal determines the clearest one but still cannot locate at least one feature point in it, for instance the eyebrows; the related information can then be determined to be information for locating the current user's eyebrows, and the current user may manually locate the eyebrows in the image.
After acquiring multiple face images of the current user, the mobile terminal may judge the clarity of the images, select the clearest face image, and match that face image against the face image template.
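The clarity-based selection step just described can be illustrated as follows. The gradient-energy score below is only one possible sharpness proxy, chosen here for illustration; the disclosure does not specify how clarity is judged:

```python
def sharpness(image):
    """Sum of squared horizontal gradients over a 2-D grayscale image
    (a list of rows of pixel intensities). A crude focus-quality proxy:
    a blurrier frame has smaller local intensity changes, hence a lower score."""
    return sum(
        (row[x + 1] - row[x]) ** 2
        for row in image
        for x in range(len(row) - 1)
    )

def pick_sharpest(frames):
    """Select the frame to match against the stored face image template."""
    return max(frames, key=sharpness)
```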
It can be seen that, in this example, the mobile terminal acquires multiple face images of the current user and scene information of the first scenario to determine the related information of the face recognition process, which helps improve the reliability and accuracy of face recognition.
In a possible example, the method further includes: when a preset abnormal event occurs during the face recognition process in the first scenario, acquiring related information of the preset abnormal event and adding the related information of the preset abnormal event to the face recognition policy corresponding to the first scenario.
Abnormal events may occur while the mobile terminal performs face recognition, for example crashes, reboots, freezes, black screens or unexpected exits. The user can set abnormal events that occur frequently during face recognition as preset abnormal events. When a preset abnormal event occurs during face recognition, the mobile terminal determines the cause of the event and its influencing factors and, after intelligent learning, eliminates or avoids those factors, so that the preset abnormal event does not occur, or occurs less often, in subsequent face recognition.
It can be seen that, in this example, the mobile terminal adds the related information of the preset abnormal event to the policy corresponding to the first scenario, thereby learning and training intelligently on occurrences of preset abnormal events during face recognition, which can reduce or avoid such occurrences.
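A minimal sketch of the event-recording step above; the event names, the policy's dictionary shape and the function name are hypothetical illustrations, not the disclosed data model:

```python
# Hypothetical sketch: when a preset abnormal event (crash, reboot, ...) occurs
# during recognition, record its details on the policy for the current scene,
# so later runs can learn to avoid the triggering factor.
PRESET_EVENTS = {"crash", "reboot", "freeze", "black_screen"}

def record_abnormal_event(policy, event, details):
    """Append the event's related information to the scene's policy;
    events outside the preset list are ignored."""
    if event in PRESET_EVENTS:
        policy.setdefault("abnormal_events", []).append(
            {"event": event, "details": details}
        )
    return policy
```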
In a possible example, the method further includes: determining a second scenario of the mobile terminal; if detecting that the preset set of mapping relationships includes the second scenario, determining the face recognition policy corresponding to the second scenario; and identifying the current user as a legitimate user according to the face recognition policy corresponding to the second scenario, the policy corresponding to the second scenario being different from the policy corresponding to the first scenario.
When the mobile terminal is detected to be in the second scenario, and the second scenario is a scenario in the preset mapping relationships, that is, the set of mapping relationships contains a face recognition policy corresponding to the second scenario, the mobile terminal can determine that policy and identify, according to it, whether the current user is a registered user.
Since the first scenario is not a preset scenario while the second scenario is, the face recognition policy corresponding to the second scenario is different from the face recognition policy corresponding to the first scenario.
It can be seen that, in this example, when the second scenario in which the mobile terminal is currently located is one of the scenarios in the prestored set of mapping relationships, the mobile terminal can directly determine the corresponding policy and perform face recognition to determine whether the current user is a legitimate user, which helps improve the efficiency, reliability and accuracy of face recognition.
Referring to FIG. 3, FIG. 3 is a schematic flowchart of a face recognition method according to an embodiment of the present invention, applied to a mobile terminal. As shown in the figure, the face recognition method includes:
S301: The mobile terminal determines a first scenario of the mobile terminal.
S302: When detecting that a preset set of mapping relationships does not include the first scenario, the mobile terminal acquires related information about the face recognition process performed in the first scenario, the related information including a user operation record from the face recognition process.
S303: The mobile terminal generates, according to the related information, a face recognition policy corresponding to the first scenario, where the operation corresponding to the user operation record in that policy is performed automatically.
S304: The mobile terminal establishes a mapping relationship between the first scenario and the generated face recognition policy, and adds the mapping relationship to the set of mapping relationships.
S305: The mobile terminal identifies the current user as a legitimate user according to the generated face recognition policy corresponding to the first scenario.
It can be seen that, in this embodiment of the present invention, the mobile terminal first determines its first scenario; secondly, when detecting that the preset set of mapping relationships does not include the first scenario, it generates a face recognition policy corresponding to the first scenario; then it establishes a mapping relationship between the first scenario and the generated policy and adds that mapping relationship to the set; finally, it identifies the current user as a legitimate user according to the generated policy corresponding to the first scenario. Since the current first scenario is determined before face recognition is performed, a policy applicable to the first scenario is generated when that scenario is found to be absent from the preset set of mapping relationships, and the set is updated once the generated policy identifies the current user as a registered user, the policy can be used directly the next time the terminal is in the first scenario. This helps improve the security, reliability and accuracy of face recognition.
Consistent with the embodiment shown in FIG. 2, referring to FIG. 4, FIG. 4 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention. As shown in the figure, the mobile terminal includes a processor, a memory, a communication interface and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, the programs including instructions for performing the following steps:
determining a first scenario of the mobile terminal;
when detecting that a preset set of mapping relationships does not include the first scenario, generating a face recognition policy corresponding to the first scenario;
establishing a mapping relationship between the first scenario and the generated face recognition policy, and adding the mapping relationship to the set of mapping relationships;
identifying the current user as a legitimate user according to the generated face recognition policy corresponding to the first scenario.
It can be seen that, in this embodiment of the present invention, the mobile terminal first determines its first scenario; secondly, when detecting that the preset set of mapping relationships does not include the first scenario, it generates a face recognition policy corresponding to the first scenario; then it establishes a mapping relationship between the first scenario and the generated policy and adds that mapping relationship to the set; finally, it identifies the current user as a legitimate user according to the generated policy corresponding to the first scenario. Since the current first scenario is determined before face recognition is performed, a policy applicable to the first scenario is generated when that scenario is found to be absent from the preset set of mapping relationships, and the set is updated once the generated policy identifies the current user as a registered user, the policy can be used directly the next time the terminal is in the first scenario. This helps improve the security, reliability and accuracy of face recognition.
In a possible example, with respect to generating the face recognition policy corresponding to the first scenario, the instructions in the programs are specifically configured to perform the following steps: acquiring related information about the face recognition process performed in the first scenario, the related information including a user operation record from the face recognition process; and generating, according to the related information, the face recognition policy corresponding to the first scenario, where the operation corresponding to the user operation record in that policy is performed automatically.
In a possible example, with respect to acquiring the related information of the face recognition process, the instructions in the programs are specifically configured to perform the following steps: acquiring multiple face images and scene data in the first scenario; determining the related information of the face recognition process according to the multiple face images and the scene data; and acquiring the related information of the face recognition process.
In a possible example, the instructions in the programs are further configured to perform the following steps: when a preset abnormal event occurs during the face recognition process in the first scenario, acquiring related information of the preset abnormal event, and adding the related information of the preset abnormal event to the face recognition policy corresponding to the first scenario.
In a possible example, the instructions in the programs are further configured to perform the following steps: determining a second scenario of the mobile terminal; if detecting that the preset set of mapping relationships includes the second scenario, determining the face recognition policy corresponding to the second scenario; and identifying the current user as a legitimate user according to the face recognition policy corresponding to the second scenario, the policy corresponding to the second scenario being different from the policy corresponding to the first scenario.
In a possible example, the first scenario includes a location scenario, a user scenario and an environment scenario, where the location scenario indicates the location of the mobile terminal, the user scenario indicates related information about the current user's face, and the environment scenario indicates related information about the scenario in which the mobile terminal is currently located.
The foregoing mainly describes the solutions of the embodiments of the present invention from the perspective of the method-side execution process. It can be understood that, to implement the foregoing functions, the mobile terminal includes corresponding hardware structures and/or software modules for performing each function. A person skilled in the art should easily realize that, in combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, the present invention can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the present invention.
The embodiments of the present invention may divide the mobile terminal into functional units according to the foregoing method examples. For example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. It should be noted that the division of units in the embodiments of the present invention is schematic and is merely a logical function division; there may be other division manners in actual implementation.
In the case of integrated units, FIG. 5 shows a possible block diagram of the functional units of the mobile terminal involved in the foregoing embodiments. The mobile terminal 500 includes a processing unit 502 and an acquisition unit 503. The processing unit 502 is configured to control and manage the actions of the mobile terminal; for example, the processing unit 502 is configured to support the mobile terminal in performing steps S201-S204 in FIG. 2, steps S301-S305 in FIG. 3, and/or other processes of the technology described herein. The acquisition unit 503 is configured to acquire multiple face images of the current user. The mobile terminal may further include a storage unit 501 for storing program code and data of the mobile terminal.
The processing unit 502 is configured to determine a first scenario of the mobile terminal; to generate, when detecting that a preset set of mapping relationships does not include the first scenario, a face recognition policy corresponding to the first scenario; to establish a mapping relationship between the first scenario and the generated face recognition policy and add that mapping relationship to the set of mapping relationships; and to identify the current user as a legitimate user according to the generated face recognition policy corresponding to the first scenario.
In a possible example, with respect to generating the face recognition policy corresponding to the first scenario, the processing unit 502 is specifically configured to: acquire related information about the face recognition process performed in the first scenario, the related information including a user operation record from the face recognition process; and generate, according to the related information, the face recognition policy corresponding to the first scenario, where the operation corresponding to the user operation record in that policy is performed automatically.
In a possible example, with respect to acquiring the related information of the face recognition process, the processing unit 502 is specifically configured to: acquire multiple face images and scene data in the first scenario; determine the related information of the face recognition process according to the multiple face images and the scene data; and acquire the related information of the face recognition process.
In a possible example, the processing unit 502 is further configured to: when a preset abnormal event occurs during the face recognition process in the first scenario, acquire related information of the preset abnormal event and add the related information of the preset abnormal event to the face recognition policy corresponding to the first scenario.
In a possible example, the processing unit 502 is further configured to: determine a second scenario of the mobile terminal; if detecting that the preset set of mapping relationships includes the second scenario, determine the face recognition policy corresponding to the second scenario; and identify the current user as a legitimate user according to the face recognition policy corresponding to the second scenario, the policy corresponding to the second scenario being different from the policy corresponding to the first scenario.
In a possible example, the first scenario includes a location scenario, a user scenario and an environment scenario, where the location scenario indicates the location of the mobile terminal, the user scenario indicates related information about the current user's face, and the environment scenario indicates related information about the scenario in which the mobile terminal is currently located.
The processing unit 502 may be a processor or a controller; the acquisition unit 503 may be a biometric image acquisition device, such as an iris image acquisition device, a face image acquisition device or a fingerprint image acquisition device; and the storage unit 501 may be a memory.
An embodiment of the present invention further provides a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to perform some or all of the steps of any method described in the foregoing method embodiments; the computer includes a mobile terminal.
An embodiment of the present invention further provides a computer program product, including a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any method described in the foregoing method embodiments. The computer program product may be a software installation package; the computer includes a mobile terminal.
It should be noted that, for brevity, the foregoing method embodiments are all described as a series of action combinations. However, a person skilled in the art should know that the present invention is not limited by the described order of actions, because according to the present invention some steps may be performed in other orders or simultaneously. Secondly, a person skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.
In the foregoing embodiments, the description of each embodiment has its own emphasis. For parts not detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of units is merely a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses or units, and may be electrical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a memory and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The foregoing memory includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk or an optical disc.
A person of ordinary skill in the art can understand that all or some of the steps in the various methods of the foregoing embodiments may be completed by a program instructing related hardware. The program may be stored in a computer-readable memory, and the memory may include a flash drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc or the like.
The embodiments of the present invention have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present invention, and the descriptions of the above embodiments are only intended to help understand the method and core idea of the present invention. Meanwhile, a person of ordinary skill in the art may make changes to the specific implementations and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (20)

  1. A mobile terminal, comprising a processor, and a face image acquisition device and a memory connected to the processor, wherein:
    the face image acquisition device is configured to acquire multiple face images of a current user;
    the memory is configured to store a preset face image template; and
    the processor is configured to determine a first scenario of the mobile terminal; to generate, when detecting that a preset set of mapping relationships does not include the first scenario, a face recognition policy corresponding to the first scenario; to establish a mapping relationship between the first scenario and the generated face recognition policy and add the mapping relationship to the set of mapping relationships; and to identify the current user as a legitimate user according to the generated face recognition policy corresponding to the first scenario.
  2. The mobile terminal according to claim 1, wherein, with respect to generating the face recognition policy corresponding to the first scenario, the processor is specifically configured to: acquire related information about the face recognition process performed in the first scenario, the related information including a user operation record from the face recognition process; and generate, according to the related information, the face recognition policy corresponding to the first scenario, wherein the operation corresponding to the user operation record in the face recognition policy corresponding to the first scenario is performed automatically.
  3. The mobile terminal according to claim 2, wherein, with respect to acquiring the related information of the face recognition process, the processor is specifically configured to: acquire multiple face images and scene data in the first scenario; determine the related information of the face recognition process according to the multiple face images and the scene data; and acquire the related information of the face recognition process.
  4. The mobile terminal according to any one of claims 1-3, wherein the processor is further configured to: when a preset abnormal event occurs during the face recognition process in the first scenario, acquire related information of the preset abnormal event and add the related information of the preset abnormal event to the face recognition policy corresponding to the first scenario.
  5. The mobile terminal according to claim 1, wherein the processor is further configured to: determine a second scenario of the mobile terminal; if detecting that the preset set of mapping relationships includes the second scenario, determine a face recognition policy corresponding to the second scenario; and identify the current user as a legitimate user according to the face recognition policy corresponding to the second scenario, the face recognition policy corresponding to the second scenario being different from the face recognition policy corresponding to the first scenario.
  6. The mobile terminal according to any one of claims 1-5, wherein the first scenario includes a location scenario, a user scenario and an environment scenario, the location scenario being used to indicate the location of the mobile terminal, the user scenario being used to indicate related information about the current user's face, and the environment scenario being used to indicate related information about the scenario in which the mobile terminal is currently located.
  7. A face recognition method, comprising:
    determining a first scenario of a mobile terminal;
    when detecting that a preset set of mapping relationships does not include the first scenario, generating a face recognition policy corresponding to the first scenario;
    establishing a mapping relationship between the first scenario and the generated face recognition policy, and adding the mapping relationship to the set of mapping relationships; and
    identifying the current user as a legitimate user according to the generated face recognition policy corresponding to the first scenario.
  8. The method according to claim 7, wherein generating the face recognition policy corresponding to the first scenario comprises:
    acquiring related information about the face recognition process performed in the first scenario, the related information including a user operation record from the face recognition process; and
    generating, according to the related information, the face recognition policy corresponding to the first scenario, wherein the operation corresponding to the user operation record in the face recognition policy corresponding to the first scenario is performed automatically.
  9. The method according to claim 8, wherein acquiring the related information of the face recognition process comprises:
    acquiring multiple face images and scene data in the first scenario;
    determining the related information of the face recognition process according to the multiple face images and the scene data; and
    acquiring the related information of the face recognition process.
  10. The method according to any one of claims 7-9, further comprising:
    when a preset abnormal event occurs during the face recognition process in the first scenario, acquiring related information of the preset abnormal event, and adding the related information of the preset abnormal event to the face recognition policy corresponding to the first scenario.
  11. The method according to claim 7, further comprising:
    determining a second scenario of the mobile terminal;
    if detecting that the preset set of mapping relationships includes the second scenario, determining a face recognition policy corresponding to the second scenario; and
    identifying the current user as a legitimate user according to the face recognition policy corresponding to the second scenario, the face recognition policy corresponding to the second scenario being different from the face recognition policy corresponding to the first scenario.
  12. The method according to any one of claims 7-11, wherein the first scenario includes a location scenario, a user scenario and an environment scenario, the location scenario being used to indicate the location of the mobile terminal, the user scenario being used to indicate related information about the current user's face, and the environment scenario being used to indicate related information about the scenario in which the mobile terminal is currently located.
  13. A mobile terminal, comprising a processing unit, wherein:
    the processing unit is configured to determine a first scenario of the mobile terminal;
    the processing unit is further configured to generate, when detecting that a preset set of mapping relationships does not include the first scenario, a face recognition policy corresponding to the first scenario;
    the processing unit is further configured to establish a mapping relationship between the first scenario and the generated face recognition policy, and add the mapping relationship to the set of mapping relationships; and
    the processing unit is further configured to identify the current user as a legitimate user according to the generated face recognition policy corresponding to the first scenario.
  14. The mobile terminal according to claim 13, wherein, with respect to generating the face recognition policy corresponding to the first scenario, the processing unit is specifically configured to: acquire related information about the face recognition process performed in the first scenario, the related information including a user operation record from the face recognition process; and generate, according to the related information, the face recognition policy corresponding to the first scenario, wherein the operation corresponding to the user operation record in the face recognition policy corresponding to the first scenario is performed automatically.
  15. The mobile terminal according to claim 14, wherein, with respect to acquiring the related information of the face recognition process, the processing unit is specifically configured to: acquire multiple face images and scene data in the first scenario; determine the related information of the face recognition process according to the multiple face images and the scene data; and acquire the related information of the face recognition process.
  16. The mobile terminal according to any one of claims 13-15, wherein the processing unit is specifically configured to: when a preset abnormal event occurs during the face recognition process in the first scenario, acquire related information of the preset abnormal event, and add the related information of the preset abnormal event to the face recognition policy corresponding to the first scenario.
  17. The mobile terminal according to claim 13, wherein the processing unit is specifically configured to: determine a second scenario of the mobile terminal; if detecting that the preset set of mapping relationships includes the second scenario, determine a face recognition policy corresponding to the second scenario; and identify the current user as a legitimate user according to the face recognition policy corresponding to the second scenario, the face recognition policy corresponding to the second scenario being different from the face recognition policy corresponding to the first scenario.
  18. The mobile terminal according to any one of claims 13-17, wherein the first scenario includes a location scenario, a user scenario and an environment scenario, the location scenario being used to indicate the location of the mobile terminal, the user scenario being used to indicate related information about the current user's face, and the environment scenario being used to indicate related information about the scenario in which the mobile terminal is currently located.
  19. A mobile terminal, comprising a processor, a memory, a communication interface and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor, the programs including instructions for performing the steps in the method of any one of claims 7-12.
  20. A computer-readable storage medium, storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 7-12, the computer comprising a mobile terminal.
PCT/CN2018/100102 2017-09-26 2018-08-10 Face recognition method and related products WO2019062347A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710879937.4 2017-09-26
CN201710879937.4A CN107622246B (zh) 2017-09-26 2017-09-26 Face recognition method and related products

Publications (1)

Publication Number Publication Date
WO2019062347A1 true WO2019062347A1 (zh) 2019-04-04

Family

ID=61090661

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/100102 WO2019062347A1 (zh) 2017-09-26 2018-08-10 Face recognition method and related products

Country Status (2)

Country Link
CN (1) CN107622246B (zh)
WO (1) WO2019062347A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112580390A (zh) * 2019-09-27 2021-03-30 百度在线网络技术(北京)有限公司 Smart-speaker-based security monitoring method and apparatus, speaker, and medium

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106023724A (zh) * 2016-06-23 2016-10-12 董爱满 Teaching aid for cultivating children's hands-on ability
CN107622246B (zh) 2017-09-26 2020-07-10 Oppo广东移动通信有限公司 Face recognition method and related products
CN109815853A (zh) * 2019-01-04 2019-05-28 深圳壹账通智能科技有限公司 Living body detection method and apparatus, computer device, and storage medium
CN112199171A (zh) * 2020-09-10 2021-01-08 中信银行股份有限公司 Face recognition method and apparatus, electronic device, and readable storage medium
CN114283464A (zh) * 2021-11-26 2022-04-05 珠海格力电器股份有限公司 Method and system for improving face recognition, and intelligent terminal
CN117523638B (zh) * 2023-11-28 2024-05-17 广州视声智能科技有限公司 Face recognition method and system based on priority screening

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104834908A (zh) * 2015-05-07 2015-08-12 惠州Tcl移动通信有限公司 Image exposure method and exposure system based on eyeprint recognition for a mobile terminal
CN105718922A (zh) * 2016-03-04 2016-06-29 北京天诚盛业科技有限公司 Adaptive adjustment method and apparatus for iris recognition
CN106446786A (zh) * 2016-08-30 2017-02-22 广东欧珀移动通信有限公司 Fingerprint recognition method, fingerprint recognition apparatus, and terminal device
CN106599875A (zh) * 2016-12-23 2017-04-26 努比亚技术有限公司 Fingerprint recognition apparatus and method
CN107622246A (zh) 2017-09-26 2018-01-23 广东欧珀移动通信有限公司 Face recognition method and related products

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106959754A (zh) * 2017-03-22 2017-07-18 广东小天才科技有限公司 Method for controlling a mobile terminal, and mobile terminal


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112580390A (zh) * 2019-09-27 2021-03-30 百度在线网络技术(北京)有限公司 Smart-speaker-based security monitoring method and apparatus, speaker, and medium
CN112580390B (zh) * 2019-09-27 2023-10-17 百度在线网络技术(北京)有限公司 Smart-speaker-based security monitoring method and apparatus, speaker, and medium

Also Published As

Publication number Publication date
CN107622246A (zh) 2018-01-23
CN107622246B (zh) 2020-07-10

Similar Documents

Publication Publication Date Title
WO2019062347A1 (zh) Face recognition method and related products
US11288504B2 (en) Iris liveness detection for mobile devices
US11074436B1 (en) Method and apparatus for face recognition
US20220044056A1 (en) Method and apparatus for detecting keypoints of human body, electronic device and storage medium
CN111556278B (zh) Video processing method, video display method, apparatus, and storage medium
CN108197586B (zh) Face recognition method and apparatus
WO2017181769A1 (zh) Face recognition method, apparatus and system, device, and storage medium
US20190354746A1 (en) Method and apparatus for detecting living body, electronic device, and storage medium
CN108399349B (zh) Image recognition method and apparatus
CN105528573B (zh) User terminal device and iris recognition method thereof
JP7026225B2 (ja) Living body detection method, apparatus and system, electronic device, and storage medium
WO2021031609A1 (zh) Living body detection method and apparatus, electronic device, and storage medium
WO2019134516A1 (zh) Panoramic image generation method and apparatus, storage medium, and electronic device
TW201911130A (zh) Recaptured image recognition method and apparatus
CN108280418A (zh) Spoofing recognition method and apparatus for face images
WO2019011073A1 (zh) Face liveness detection method and related products
WO2017166469A1 (zh) Smart-TV-based security method and apparatus
EP4033458A2 (en) Method and apparatus of face anti-spoofing, device, storage medium, and computer program product
CN105426730A (zh) Login verification processing method, apparatus, and terminal device
CN112805722A (zh) Method and apparatus for reducing false positives in facial recognition
CN108154466A (zh) Image processing method and apparatus
WO2019011106A1 (zh) State control method and related products
US20230206093A1 (en) Music recommendation method and apparatus
CN112989299A (zh) Interactive identity recognition method, system, device, and medium
WO2022063189A1 (zh) Salient element recognition method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18861114

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18861114

Country of ref document: EP

Kind code of ref document: A1