WO2019033904A1 - Login verification method, system, and computer-readable storage medium - Google Patents

Login verification method, system, and computer-readable storage medium

Info

Publication number
WO2019033904A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
voiceprint
server
face
watermark
Prior art date
Application number
PCT/CN2018/096877
Other languages
English (en)
French (fr)
Inventor
徐佳良
Original Assignee
深圳壹账通智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳壹账通智能科技有限公司 filed Critical 深圳壹账通智能科技有限公司
Publication of WO2019033904A1 publication Critical patent/WO2019033904A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/08Network architectures or network communication protocols for network security for authentication of entities
    • H04L63/0861Network architectures or network communication protocols for network security for authentication of entities using biometrical features, e.g. fingerprint, retina-scan

Definitions

  • the present application relates to the field of login verification technologies, and in particular, to a login verification method, system, and computer readable storage medium.
  • Compared with simple account-password verification and verification-code verification, biometric login methods have the advantage of better identification security.
  • However, in the financial field, traditional voiceprint verification login and face verification login still carry certain security risks.
  • For example, voiceprint login can bypass the verification mechanism of a financial system with pre-recorded voiceprint information, and face login can likewise be forged with face photos or face videos. Traditional account login verification methods therefore pose great security risks in the financial field and put the security of users' financial accounts to a severe test.
  • the main purpose of the present application is to provide a login verification method, system, and computer readable storage medium, which are intended to solve the technical problem that a traditional user account login verification method has security risks in the financial field.
  • the embodiment of the present application provides a login verification method based on an operation terminal, where the login verification method based on the operation terminal includes:
  • when a user's account login instruction is detected, the operation terminal generates an account login request and outputs a face watermark collection prompt and a voiceprint watermark collection prompt;
  • the operation terminal collects the face data and voiceprint data entered by the user based on the face watermark collection prompt and the voiceprint watermark collection prompt, and adds the collected face data and voiceprint data to the account login request;
  • the operation terminal sends the account login request to the server;
  • the operation terminal receives a verification instruction fed back by the server based on the account login request, wherein the verification instruction includes a verification success instruction or a verification failure instruction.
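The terminal-side steps above can be pictured with a short sketch. The following Python is purely illustrative: the request fields (account_id, face_data, voiceprint_data), the JSON transport, and the stubbed capture and send functions are assumptions made for the sketch, not details disclosed by the application.

```python
# Hypothetical sketch of the operation-terminal side of the login flow.
import base64
import json
from dataclasses import dataclass

@dataclass
class AccountLoginRequest:
    account_id: str
    face_data: bytes = b""
    voiceprint_data: bytes = b""

    def to_json(self) -> str:
        # Biometric payloads are base64-encoded so the request can travel as text.
        return json.dumps({
            "account_id": self.account_id,
            "face_data": base64.b64encode(self.face_data).decode(),
            "voiceprint_data": base64.b64encode(self.voiceprint_data).decode(),
        })

def handle_login(account_id: str, capture_face, capture_voice, send_to_server) -> bool:
    """Generate the request, prompt for watermarked biometrics, send, and read the verdict."""
    print("Please perform the prompted facial action (face watermark collection prompt).")
    print("Please read the displayed passphrase aloud (voiceprint watermark collection prompt).")
    request = AccountLoginRequest(account_id=account_id)
    request.face_data = capture_face()          # camera device
    request.voiceprint_data = capture_voice()   # recording device
    verdict = send_to_server(request.to_json()) # server feeds back a verification instruction
    return verdict == "VERIFY_SUCCESS"

# Example with stubbed capture/transport functions:
if __name__ == "__main__":
    ok = handle_login("user-001",
                      capture_face=lambda: b"fake-face-bytes",
                      capture_voice=lambda: b"fake-voice-bytes",
                      send_to_server=lambda payload: "VERIFY_SUCCESS")
    print("login allowed" if ok else "login denied")
```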
  • the embodiment of the present application further provides a server-based login verification method, where the server-based login verification method includes:
  • when the server receives an account login request, the server parses the account login request to obtain face data and voiceprint data;
  • the server matches the face data with preset face data in the server, and matches the voiceprint data with preset voiceprint data in the server;
  • when the server detects that the face data and the voiceprint data match the preset face data and the preset voiceprint data in the server, respectively, the server sends a verification success instruction to the operation terminal.
  • the application further provides a login verification system, the login verification system comprising an operation terminal and a server, the login verification system comprising: a memory, a processor, a communication bus, and a login verification program stored on the memory,
  • the communication bus is used to implement a communication connection between the processor and the memory
  • the processor is configured to execute the login verification program to implement the following steps:
  • when a user's account login instruction is detected, the operation terminal generates an account login request and outputs a face watermark collection prompt and a voiceprint watermark collection prompt;
  • the operation terminal collects the face data and voiceprint data entered by the user based on the face watermark collection prompt and the voiceprint watermark collection prompt, and adds the collected face data and voiceprint data to the account login request;
  • the operation terminal sends the account login request to the server;
  • the operation terminal receives a verification instruction fed back by the server based on the account login request, wherein the verification instruction includes a verification success instruction or a verification failure instruction.
  • the present application also provides a computer readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to perform the following:
  • when a user's account login instruction is detected, the operation terminal generates an account login request and outputs a face watermark collection prompt and a voiceprint watermark collection prompt;
  • the operation terminal collects the face data and voiceprint data entered by the user based on the face watermark collection prompt and the voiceprint watermark collection prompt, and adds the collected face data and voiceprint data to the account login request;
  • the operation terminal sends the account login request to the server;
  • the operation terminal receives a verification instruction fed back by the server based on the account login request, wherein the verification instruction includes a verification success instruction or a verification failure instruction.
  • In the technical solution of the present application, when a user's account login instruction is detected, the operation terminal first generates an account login request and outputs a face watermark collection prompt and a voiceprint watermark collection prompt; the operation terminal then collects the face data and voiceprint data entered by the user based on these prompts and adds the collected face data and voiceprint data to the account login request; the operation terminal then sends the account login request to the server; finally, the operation terminal receives a verification instruction fed back by the server based on the account login request, where the verification instruction includes a verification success instruction or a verification failure instruction. By combining face recognition verification with voiceprint recognition verification, this double verification improves the security of account login, and adding the face watermark and voiceprint watermark further raises the security level, eliminating hidden login risks and protecting the security of account login.
  • FIG. 1 is a schematic flowchart of a first embodiment of a login verification method based on an operation terminal according to the present application
  • FIG. 2 is a detailed flowchart of the step of adding the collected face data and voiceprint data to the account login request in the second embodiment of the operation-terminal-based login verification method of the present application;
  • FIG. 3 is a detailed flowchart of the step of adding the collected face data and voiceprint data to the account login request in the third embodiment of the operation-terminal-based login verification method of the present application;
  • FIG. 4 is a schematic flowchart of a first embodiment of a server-based login verification method according to the present application
  • FIG. 5 is a detailed flowchart of the step, in the second embodiment of the server-based login verification method of the present application, of the server matching the face data with preset face data in the server and matching the voiceprint data with preset voiceprint data in the server;
  • FIG. 6 is a detailed flowchart of the step, in the third embodiment of the server-based login verification method of the present application, of the server matching the face data with preset face data in the server and matching the voiceprint data with preset voiceprint data in the server;
  • FIG. 7 is a schematic flowchart of the steps of parsing the account login request in the fourth embodiment of the server-based login verification method of the present application.
  • FIG. 8 is a schematic flowchart of a step of parsing the account login request in the fifth embodiment of the server-based login verification method according to the present application;
  • FIG. 9 is a schematic structural diagram of a device in a hardware operating environment involved in a method according to an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a scenario of a method according to an embodiment of the present application.
  • the application provides a login verification method based on an operation terminal.
  • the operation terminal-based login verification method includes:
  • Step S10, when detecting the user's account login instruction, the operation terminal generates an account login request and outputs a face watermark collection prompt and a voiceprint watermark collection prompt;
  • the user can perform the financial account login operation in the operation terminal.
  • In addition to traditional mobile-number verification login and account-password verification login, the account login operation can also use biometric login methods such as face recognition verification login and voiceprint recognition verification login.
  • When receiving the user's account login instruction, the operation terminal generates an account login request. The account login request is mainly used to request login verification and serves as the request instruction by which the operation terminal sends verification data to the server. Through the operation terminal's own face verification instruction generator and voiceprint verification instruction generator, a face verification instruction and a voiceprint verification instruction are generated, and a face watermark collection prompt and a voiceprint watermark collection prompt are output to the user.
  • The face watermark collection prompt instructs the user to perform a specified action on top of the face recognition verification.
  • The voiceprint watermark collection prompt instructs the user to speak a specified passphrase on top of the voice recognition verification.
  • For example, if the face watermark collection prompt requires the user to face the camera device of the operation terminal and perform a prescribed action such as nodding or squinting, this means that the face verification instruction generated by the face verification instruction generator of the operation terminal contains action instructions such as nodding or squinting; if the voiceprint watermark collection prompt requires the user to face the recording device of the operation terminal and repeat a login passphrase given by a voice prompt or displayed by the operation terminal, this means that the voiceprint verification instruction generated by the voiceprint verification instruction generator of the operation terminal contains the wording of that login passphrase. The user only needs to follow the prompts and perform the corresponding action and passphrase in front of the camera device and the recording device.
  • Step S20, the operation terminal collects the face data and voiceprint data entered by the user based on the face watermark collection prompt and the voiceprint watermark collection prompt, and adds the collected face data and voiceprint data to the account login request;
  • Referring to FIG. 10, when the operation terminal outputs the face watermark collection prompt and the voiceprint watermark collection prompt, the camera device and the recording device on the operation terminal are turned on simultaneously to collect the data entered by the user.
  • Following the face watermark collection prompt and the voiceprint watermark collection prompt, the user reproduces the facial action and repeats the login passphrase they specify, which are captured as the face data and the voiceprint data.
  • The face data and voiceprint data, as verification information characterizing the user's personal identity, are then added to the account login request for the subsequent operations.
  • Step S30 the operation terminal sends the account login request to the server
  • Step S40 The operation terminal receives a verification instruction fed back by the server based on the account login request, wherein the verification instruction includes a verification success instruction or a verification failure instruction.
  • When the operation terminal has added the personal verification data such as the user's face data and voiceprint data to the account login request, the operation terminal sends the account login request to the server, and the server parses and verifies the account login request.
  • The server verifies the account login request and feeds the corresponding verification information back to the operation terminal.
  • When the server verifies the account login request successfully, the operation terminal receives the verification success instruction sent by the server; when the verification fails, the operation terminal receives the verification failure instruction sent by the server.
  • Based on the verification instruction, the operation terminal decides whether to approve the account login, thereby authorizing or refusing login to the financial account.
  • It should be noted that, if the number of verification failure instructions received by the operation terminal within a preset time reaches a preset number, the operation terminal can lock the account corresponding to the account login request and prompt the user to unlock it according to a preset unlocking process.
  • After the operation terminal sends the account login request to the server, the server returns a verification instruction based on the account login request. If the verification instruction received by the operation terminal is not a verification success instruction but a verification failure instruction, the operation terminal cannot log the user's account in. On this basis, the present embodiment adds a judging mechanism, that is, a mechanism for handling verification failure instructions. When the number of verification failure instructions received by the operation terminal within a period of time is greater than the preset number, it shows that the user has repeatedly sent the account login request many times in a short time and that every verification failed.
  • To protect the security of the user's financial system account, repeated verification failures of the account login request trigger the account lockout mechanism, that is, the user's financial login account is locked.
  • The user can no longer attempt account verification login and can no longer send account login requests.
  • At this point the financial account corresponding to the account login request is locked; an account in the locked state is under system protection, and the user must unlock it before the account can be used again. The operation terminal therefore prompts the user to unlock it according to the preset unlocking process.
  • For example, one or more verification methods such as security questions, bank account verification, and mobile phone number verification can be used for unlocking; the verification manner includes, but is not limited to, the above.
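For illustration, the lockout behaviour described above could be realized with a failure counter over a sliding time window, as in the sketch below; the window length, failure limit, and class interface are assumptions, since the application only states that a preset number of failures within a preset time triggers locking.

```python
import time
from collections import defaultdict, deque
from typing import Optional

class AccountLockout:
    """Lock an account after too many verification failures inside a time window (limits are assumed)."""
    def __init__(self, max_failures: int = 5, window_seconds: float = 300.0):
        self.max_failures = max_failures
        self.window_seconds = window_seconds
        self._failures = defaultdict(deque)   # account id -> timestamps of failure instructions
        self._locked = set()

    def record_failure(self, account_id: str, now: Optional[float] = None) -> bool:
        """Register one verification failure instruction; return True if the account is now locked."""
        now = time.time() if now is None else now
        window = self._failures[account_id]
        window.append(now)
        while window and now - window[0] > self.window_seconds:
            window.popleft()                  # forget failures outside the sliding window
        if len(window) > self.max_failures:
            self._locked.add(account_id)      # trigger the account lockout mechanism
        return account_id in self._locked

    def is_locked(self, account_id: str) -> bool:
        return account_id in self._locked

    def unlock(self, account_id: str) -> None:
        """Called once the user completes the preset unlocking process (e.g. security questions)."""
        self._locked.discard(account_id)
        self._failures[account_id].clear()
```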
  • In the technical solution of the present application, when a user's account login instruction is detected, the operation terminal first generates an account login request and outputs a face watermark collection prompt and a voiceprint watermark collection prompt; the operation terminal then collects the face data and voiceprint data entered by the user based on these prompts and adds the collected face data and voiceprint data to the account login request; the operation terminal then sends the account login request to the server; finally, the operation terminal receives a verification instruction fed back by the server based on the account login request, where the verification instruction includes a verification success instruction or a verification failure instruction.
  • Further, on the basis of the first embodiment of the operation-terminal-based login verification method of the present application, a second embodiment of the operation-terminal-based login verification method is proposed. Referring to FIG. 2, the difference between the second embodiment and the first embodiment is that the step of adding the collected face data and the voiceprint data to the account login request includes:
  • Step S21 the operation terminal adds a first digital watermark to the collected face data and voiceprint data to obtain new face data and voiceprint data;
  • A digital watermark is identification information that is embedded directly into a digital carrier (including multimedia, documents, video, and so on) or represented indirectly (by modifying the structure of a specific region) without affecting the use value of the original carrier; it is not easily detected or modified again, yet can be recognized and identified by the producer. For example, a distinctive code can be added to the source data of an image or video; the code does not affect normal use of the image or video, but when a terminal processes the source data, the code is recognized as identity verification information of that source data. A digital watermark can therefore serve, to a large extent, as an important identifier for verifying the security level of its carrier.
  • Adding a digital watermark can be realized with various watermarking algorithms, such as spatial-domain algorithms and frequency-domain algorithms.
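As a toy illustration of a spatial-domain scheme, the sketch below hides a short mark in the least-significant bits of a byte buffer (for example raw image samples) and reads it back; the application does not prescribe any particular watermarking algorithm, so this is only a didactic example of the general idea.

```python
def embed_lsb_watermark(carrier: bytes, watermark: bytes) -> bytes:
    """Hide `watermark` in the least-significant bit of each carrier byte (toy spatial-domain scheme)."""
    bits = []
    for byte in watermark:
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    if len(bits) > len(carrier):
        raise ValueError("carrier too small for watermark")
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit       # overwrite only the LSB, barely changing the sample
    return bytes(out)

def extract_lsb_watermark(carrier: bytes, length: int) -> bytes:
    """Read back `length` watermark bytes from the carrier's least-significant bits."""
    out = bytearray()
    for i in range(length):
        byte = 0
        for j in range(8):
            byte = (byte << 1) | (carrier[i * 8 + j] & 1)
        out.append(byte)
    return bytes(out)

# Example: watermark some fake face-image samples and recover the mark.
samples = bytes(range(256)) * 4
marked = embed_lsb_watermark(samples, b"WM1")
assert extract_lsb_watermark(marked, 3) == b"WM1"
```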
  • When the operation terminal has collected the face data and voiceprint data, it adds the first digital watermark to them to attest to the authenticity of the face data and voiceprint data, thereby obtaining the new face data and voiceprint data.
  • Generally, adding the first digital watermark to the face data and voiceprint data is equivalent to attaching an identification code to them; through the first digital watermark, an identity mark is added to the face data and voiceprint data, ensuring the security of the data source.
  • Step S22 the operation terminal adds the new face data and voiceprint data to an account login request.
  • In this embodiment, the first digital watermark ensures that the face data and voiceprint data form a data source recognized by the server, that is, that the face data and voiceprint data were actually collected on the operation terminal rather than supplied as source data by some other operation terminal not authenticated by the server. Through the first digital watermark, the operation terminal can guarantee the reliability of the face data and voiceprint data, preventing the server from receiving illegal, deceptive face data and voiceprint data and eliminating the possibility of a fraudulent verification succeeding.
  • It should be noted that the first digital watermark is not a fixed data code; it may change dynamically, that is, each time the watermarking flow completes, the first digital watermark may change in some way, preventing a fixed first digital watermark from being illegally obtained and the security of the first digital watermark from being compromised.
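One conceivable way to let the first digital watermark change on every login, assuming the terminal and server share a secret and synchronize a per-session nonce over the network connection, is to derive it with an HMAC, as sketched below; this construction is an assumption of the sketch, not something the application specifies.

```python
import hmac, hashlib, os

def derive_session_watermark(shared_secret: bytes, session_nonce: bytes) -> bytes:
    """Derive a per-session watermark; a new nonce yields a new watermark each login."""
    return hmac.new(shared_secret, b"login-watermark" + session_nonce, hashlib.sha256).digest()

# Terminal and server derive the same value from the synchronized nonce.
secret = b"pre-shared-secret"        # assumed to be provisioned out of band
nonce = os.urandom(16)               # assumed to be synchronized when the login starts
first_watermark = derive_session_watermark(secret, nonce)   # embedded by the terminal
second_watermark = derive_session_watermark(secret, nonce)  # expected by the server
assert hmac.compare_digest(first_watermark, second_watermark)
```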
  • Further, on the basis of the first embodiment of the operation-terminal-based login verification method of the present application, a third embodiment of the operation-terminal-based login verification method is proposed. Referring to FIG. 3, the difference between the third embodiment and the first embodiment is that, denoting the collected face data and voiceprint data as initial face data and initial voiceprint data, the step of adding the collected face data and the voiceprint data to the account login request further includes:
  • Step S23 the operation terminal converts the initial face data into target face data of a first preset format
  • In this embodiment, because the initial face data and initial voiceprint data must be parsed after they are added to the account login request, and to avoid parsing errors, such as the two data streams being confused so that the initial face data and initial voiceprint data cannot be restored correctly, the operation terminal adds a data conversion step for the initial face data and the initial voiceprint data.
  • Generally, the initial face data is collected by the camera device and may exist in a default video format (such as avi or mp4) or image format (such as jpg or png). Using the fact that the initial face data was collected by the camera device, the operation terminal assigns the initial face data the first preset format, that is, the collected initial face data is converted from the default format used at collection time into the first preset format, yielding target face data in the first preset format. The first preset format may be a multimedia format recognized by industry standards, or a specific format recognizable by the operation terminal.
  • Step S24 the operation terminal converts the initial voiceprint data into target voiceprint data of a second preset format
  • Likewise, the initial voiceprint data is collected by the recording device and may exist in a default audio format (such as mp3). Using the fact that the initial voiceprint data was collected by the recording device, the operation terminal assigns the initial voiceprint data the second preset format, that is, the collected initial voiceprint data is converted from the default format used at collection time into the second preset format, yielding target voiceprint data in the second preset format. The second preset format may be a multimedia format recognized by industry standards, or a specific format recognizable by the operation terminal.
  • Step S25 the operation terminal adds the target face data and the target voiceprint data to an account login request.
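Purely as an illustration of such a conversion step, the sketch below re-encodes a captured image into a fixed image format (PNG standing in for the first preset format) with Pillow and tags the captured audio with an assumed second preset format; the concrete formats and the Pillow dependency are assumptions, since the application leaves the preset formats open.

```python
import io
from PIL import Image  # third-party dependency assumed available on the terminal

def to_target_face_data(initial_face_bytes: bytes) -> bytes:
    """Convert captured face data (any Pillow-readable format) into the assumed first preset format (PNG)."""
    image = Image.open(io.BytesIO(initial_face_bytes))
    buffer = io.BytesIO()
    image.save(buffer, format="PNG")
    return buffer.getvalue()

def to_target_voiceprint_data(initial_voice_bytes: bytes) -> dict:
    """Wrap captured audio with an explicit format tag; real code would transcode with an audio codec."""
    return {"format": "second-preset-format", "payload": initial_voice_bytes}

# Demo with a tiny in-memory JPEG so the sketch runs without external files.
demo = io.BytesIO()
Image.new("RGB", (4, 4), color=(200, 120, 80)).save(demo, format="JPEG")
target_face = to_target_face_data(demo.getvalue())
target_voice = to_target_voiceprint_data(b"fake-mp3-bytes")
print(len(target_face), target_voice["format"])
```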
  • the present application provides a server-based login verification method.
  • the server-based login verification method includes:
  • Step S50, when the server receives an account login request, the server parses the account login request to obtain face data and voiceprint data;
  • In this embodiment, the server acts as the carrier that verifies the account login request of the operation terminal.
  • When the server receives the account login request, it parses the account login request so as to extract the face data and voiceprint data carried in it.
  • The parsing function mainly separates the face data and the voiceprint data from the account login request, avoiding data coupling between the face data and the voiceprint data that would corrupt the integrity of both.
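A minimal sketch of this parsing step, assuming the account login request arrives as a JSON body with base64-encoded biometric fields (the field names and encoding are assumptions), might look as follows; the point is simply that parsing separates the face data from the voiceprint data.

```python
import base64
import json

def parse_account_login_request(raw_request: str) -> tuple[bytes, bytes]:
    """Split an account login request into independent face and voiceprint payloads."""
    body = json.loads(raw_request)
    face_data = base64.b64decode(body["face_data"])
    voiceprint_data = base64.b64decode(body["voiceprint_data"])
    return face_data, voiceprint_data

# Example request matching the assumed terminal-side encoding.
raw = json.dumps({
    "account_id": "user-001",
    "face_data": base64.b64encode(b"fake-face-bytes").decode(),
    "voiceprint_data": base64.b64encode(b"fake-voice-bytes").decode(),
})
face, voice = parse_account_login_request(raw)
print(len(face), len(voice))
```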
  • Step S60 the server matches the face data with preset face data in the server, and matches the voiceprint data with preset voiceprint data in the server;
  • After the face data and voiceprint data are parsed out, the server checks through the face data whether the data is legal and compliant.
  • It does so by matching the face data with the preset face data saved in the server.
  • The preset face data refers to the face data saved by the server for real-name verification when the user registered the financial system account. Since face data does not change easily, it can be used to verify whether the account is being logged in by its holder.
  • The server likewise verifies and matches the voiceprint data.
  • Similarly, during registration of the financial account, the user's voiceprint information is recorded, and the server saves the recorded voiceprint information as reference data for subsequent voiceprint verification.
  • Step S70 When the server detects that the face data and the voiceprint data respectively match the preset face data and the preset voiceprint data in the server, the server sends a verification success instruction to the operation terminal.
  • Generally, face data verification and voiceprint data verification compare features against the preset face data and preset voiceprint data in the server; that is, the preset face data and preset voiceprint data saved in the server are processed by an algorithm in advance to obtain their corresponding data features. The face data and voiceprint data received by the server also need to be processed by the algorithm to obtain their corresponding data features. Comparing the two yields the matching degree of the features, and the matching degree is used as the reference indicator of whether the match succeeds. When both face data verification and voiceprint data verification succeed, the server sends a verification success instruction to the operation terminal as the feedback for the account login request.
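For illustration, the sketch below treats the pre-compiled data features as numeric vectors and uses cosine similarity as the matching degree compared against a threshold; the real feature-extraction algorithm and threshold values are not specified by the application and are assumed here.

```python
import math

def matching_degree(features_a: list[float], features_b: list[float]) -> float:
    """Cosine similarity between two feature vectors, used here as the matching degree."""
    dot = sum(a * b for a, b in zip(features_a, features_b))
    norm = math.sqrt(sum(a * a for a in features_a)) * math.sqrt(sum(b * b for b in features_b))
    return dot / norm if norm else 0.0

def matches(features: list[float], preset_features: list[float], threshold: float) -> bool:
    return matching_degree(features, preset_features) > threshold

# Example: received features vs. the preset features stored at registration time.
received = [0.12, 0.80, 0.55, 0.31]
preset = [0.10, 0.82, 0.50, 0.33]
print(matching_degree(received, preset), matches(received, preset, threshold=0.95))
```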
  • It should be noted that when the server detects that the face data does not match the preset face data in the server, the server sends a verification failure instruction to the operation terminal; likewise, when the server detects that the voiceprint data does not match the preset voiceprint data in the server, it sends a verification failure instruction to the operation terminal.
  • A failed face data verification or a failed voiceprint data verification both lead to an overall verification failure. Denoting verification success by 1 and verification failure by 0, face data verification and voiceprint data verification could in principle produce the following four cases: (1,1), (1,0), (0,1), (0,0). However, because face data verification takes priority over voiceprint data verification, the voiceprint verification flow is not entered when face verification fails.
  • The account login verification in this embodiment therefore has the following three outcomes: 1. face data verification succeeds and voiceprint data verification succeeds; 2. face data verification succeeds and voiceprint data verification fails; 3. face data verification fails.
  • Accordingly, when the server detects that the face data does not match the preset face data, it can send a verification failure instruction to the operation terminal; and if the server detects that the face data matches the preset face data but the voiceprint data cannot be matched with the preset voiceprint data, it also sends a verification failure instruction to the operation terminal. That is, if face data verification fails, the server does not verify the voiceprint data and the verification fails; if face data verification succeeds but voiceprint data verification fails, the verification still fails. Only when both the face data and the voiceprint data are verified successfully can the account login request be verified successfully. In this way, the server maximizes the security of financial accounts and eliminates security risks.
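The three outcomes above amount to the short-circuit check sketched below, in which the voiceprint is only examined after the face data passes and success requires both; the function and instruction names are illustrative.

```python
def verify_login(face_matches: bool, voiceprint_matches_fn) -> str:
    """Face verification first; voiceprint verification is skipped entirely when the face check fails."""
    if not face_matches:
        return "VERIFY_FAILURE"            # case 3: face verification failed
    if not voiceprint_matches_fn():
        return "VERIFY_FAILURE"            # case 2: face passed, voiceprint failed
    return "VERIFY_SUCCESS"                # case 1: both passed

print(verify_login(True, lambda: True))    # VERIFY_SUCCESS
print(verify_login(True, lambda: False))   # VERIFY_FAILURE
print(verify_login(False, lambda: True))   # VERIFY_FAILURE (voiceprint never evaluated)
```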
  • Further, on the basis of the fourth embodiment of the server-based login verification method of the present application, a fifth embodiment of the server-based login verification method is proposed. Referring to FIG. 5, the difference between the fifth embodiment and the fourth embodiment is that the face data includes face feature data and face watermark data, and the preset face data includes standard face feature data and standard face watermark data.
  • the step of the server matching the face data with the preset face data in the server, and matching the voiceprint data with the preset voiceprint data in the server includes:
  • Step S61 when the server detects that the matching degree of the facial feature data and the standard facial feature data in the server is greater than a first threshold, matching the face watermark data with the standard face watermark data;
  • Step S62 when the server detects that the matching degree between the face watermark data and the standard face watermark data in the server is greater than a second threshold, matching the voiceprint data with the preset voiceprint data in the server.
  • the face data includes, in addition to the traditional face feature data, the face watermark data fed back by the user based on the face watermark collection prompt.
  • The face feature data refers to the muscle layout and texture of the user's face and can indicate whether the currently logging-in user is the registered user of the financial account; the face watermark data refers to the various instructed actions performed by the user according to the face watermark collection prompt.
  • In the server, in addition to the recorded standard face feature data of the user, preset standard face watermark data representing the face watermark actions is also saved. That is, the server stores the standard face feature data captured when the user registered the financial account, as well as standard face watermark data describing how common facial actions change the facial muscle layout and texture.
  • Because the face verification instruction is sent by the operation terminal to the server together with the account login request, the server can make a unified judgment based on the face data corresponding to the face verification instruction.
  • For judging the matching values, this embodiment introduces a first threshold and a second threshold, establishing a quantitative standard for the matching data.
  • The first threshold is the minimum verification threshold for the face feature data,
  • and the second threshold is the minimum verification threshold for the face watermark data.
  • When the server detects that the matching degree between the face feature data and the standard face feature data in the server is greater than the first threshold, the face feature data and the standard face feature data match each other, that is, face feature data verification passes, and the server then verifies the face watermark data. When the server detects that the matching degree between the face watermark data and the standard face watermark data in the server is greater than the second threshold, the face watermark data and the standard face watermark data match each other, that is, face watermark data verification passes.
  • When both the face feature data and the face watermark data are verified successfully, the face data verification succeeds, and the server proceeds to verify the voiceprint data.
  • Further, on the basis of the fourth embodiment of the server-based login verification method of the present application, a sixth embodiment of the server-based login verification method is proposed. Referring to FIG. 6, the difference between the sixth embodiment and the fourth embodiment is that:
  • the voiceprint data includes voiceprint feature data and voiceprint watermark data
  • the preset voiceprint data includes standard voiceprint feature data and standard voiceprint watermark data.
  • the step of the server matching the face data with the preset face data in the server, and matching the voiceprint data with the preset voiceprint data in the server further includes:
  • Step S63 when the server detects that the matching degree of the voiceprint feature data and the standard voiceprint feature data is greater than a third threshold, matching the voiceprint watermark data with the standard voiceprint watermark data;
  • Step S64 when the server detects that the matching degree of the voiceprint watermark data with the standard voiceprint watermark data in the server is greater than a fourth threshold, determining that the face data and the voiceprint data are respectively associated with the preset face in the server. The data matches the preset voiceprint data.
  • In this embodiment, in addition to the voiceprint information, the voiceprint data includes the voiceprint watermark data fed back by the user based on the voiceprint watermark collection prompt.
  • The voiceprint feature data refers to the timbre and pitch of the user's voice and can indicate whether the currently logging-in user is the registered user of the financial account, that is, whether the user has recorded voiceprint information on the server side in advance; the voiceprint watermark data refers to the login passphrase spoken by the user according to the voiceprint watermark collection prompt.
  • In the server, in addition to the user's standard voiceprint feature data, preset standard voiceprint watermark data representing the voiceprint watermark login passphrase is also saved. That is, the server stores the standard voiceprint feature data captured when the user registered the financial account, as well as the standard voiceprint watermark data corresponding to the login passphrase in the voiceprint verification instruction.
  • Because the voiceprint verification instruction is sent by the operation terminal to the server together with the account login request, the server can make a unified judgment based on the voiceprint data corresponding to the voiceprint verification instruction.
  • For judging the matching values, this embodiment introduces a third threshold and a fourth threshold, establishing a quantitative standard for the matching data.
  • the third threshold is used as the verification minimum threshold of the voiceprint feature data
  • the fourth threshold is used as the verification minimum threshold of the voiceprint watermark data.
  • When the server detects that the matching degree between the voiceprint feature data and the standard voiceprint feature data in the server is greater than the third threshold, the voiceprint feature data and the standard voiceprint feature data match each other, that is, voiceprint feature data verification passes, and the server then verifies the voiceprint watermark data.
  • When the server detects that the matching degree between the voiceprint watermark data and the standard voiceprint watermark data in the server is greater than the fourth threshold, the voiceprint watermark data and the standard voiceprint watermark data match each other, that is, voiceprint watermark data verification passes.
  • When both the voiceprint feature data and the voiceprint watermark data are verified successfully, the voiceprint data verification succeeds, and the server sends a verification success instruction to the operation terminal.
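Taken together, steps S61 to S64 suggest a staged check like the sketch below, where each stage compares a matching degree against its own threshold and later stages run only when earlier ones pass; the threshold values shown are placeholders, not values taken from the application.

```python
def verify_biometrics(face_feature_score: float, face_watermark_score: float,
                      voice_feature_score: float, voice_watermark_score: float,
                      t1: float = 0.90, t2: float = 0.85,
                      t3: float = 0.90, t4: float = 0.85) -> bool:
    """Four-stage verification: face features, face watermark, voiceprint features, voiceprint watermark."""
    if face_feature_score <= t1:        # S61 precondition: face feature matching degree must exceed t1
        return False
    if face_watermark_score <= t2:      # S62: face watermark matching degree must exceed t2
        return False
    if voice_feature_score <= t3:       # S63: voiceprint feature matching degree must exceed t3
        return False
    return voice_watermark_score > t4   # S64: voiceprint watermark matching degree must exceed t4

print(verify_biometrics(0.97, 0.91, 0.95, 0.90))  # True -> verification success instruction
print(verify_biometrics(0.97, 0.91, 0.95, 0.40))  # False -> verification failure instruction
```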
  • Further, on the basis of the fourth embodiment of the server-based login verification method of the present application, a seventh embodiment of the server-based login verification method is proposed. Referring to FIG. 7, the difference between the seventh embodiment and the fourth embodiment is that the step of parsing the account login request includes:
  • Step S51 the server parses the account login request, obtains face data and voiceprint data, and extracts the first digital watermark in the face data and the voiceprint data;
  • Because the face data and the voiceprint data are carried in the account login request, the server first needs to parse the account login request to obtain the specific face data and voiceprint data.
  • When the first digital watermark has been added to both the face data and the voiceprint data, the server needs to extract the first digital watermark from the face data and the voiceprint data.
  • The server then compares the extracted first digital watermark with the preset second digital watermark that it holds.
  • The preset second digital watermark is the reference verification value for verifying the face data and the voiceprint data; when the operation terminal adds the first digital watermark to the face data and voiceprint data, the second digital watermark is synchronized with the server over the network connection. Under normal circumstances the first digital watermark is therefore consistent with the second digital watermark, that is, once the first digital watermark is determined, the synchronization over the network connection produces a digital watermark in the server that corresponds to the first digital watermark.
  • Step S52 When the server detects that the first digital watermark is inconsistent with the preset second digital watermark, the server sends a verification failure instruction to the operation terminal.
  • If the operation terminal has been tampered with, sends the account login request without actually collecting face data and voiceprint data, or a digital watermark has been added illegally, the first digital watermark extracted by the server and the preset second digital watermark cannot be mapped to each other, that is, the first digital watermark and the second digital watermark are inconsistent; this proves that the digital watermark verification fails, and the server sends a verification failure instruction to the operation terminal.
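A small sketch of the comparison in steps S51 and S52: the extracted first digital watermark is checked against the preset second digital watermark, here with a constant-time comparison, which is a common engineering practice rather than a requirement of the application.

```python
import hmac

def check_first_watermark(extracted_first_watermark: bytes, preset_second_watermark: bytes) -> str:
    """Return the verification instruction implied by comparing the two digital watermarks."""
    if hmac.compare_digest(extracted_first_watermark, preset_second_watermark):
        return "CONTINUE_BIOMETRIC_VERIFICATION"   # watermarks consistent: proceed to face/voiceprint matching
    return "VERIFY_FAILURE"                        # inconsistent: tampered or illegally watermarked request

print(check_first_watermark(b"\x01\x02", b"\x01\x02"))
print(check_first_watermark(b"\x01\x02", b"\xff\xff"))
```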
  • Further, on the basis of the fourth embodiment of the server-based login verification method of the present application, an eighth embodiment of the server-based login verification method is proposed. Referring to FIG. 8, the difference between the eighth embodiment and the fourth embodiment is that the step of parsing the account login request includes:
  • Step S53 the server extracts the target face data of the first preset format and the target voiceprint data of the second preset format in the account login request;
  • The server first needs to parse out the target face data and the target voiceprint data. Using the format difference between the target face data and the target voiceprint data, the server distinguishes them by extracting the target face data in the first preset format and the target voiceprint data in the second preset format, thereby obtaining data sources in different formats.
  • Step S54 the server parses the target face data and the target voiceprint data into initial face data and initial voiceprint data.
  • To obtain data sources that are convenient for subsequent use, the server converts the target face data in the first preset format and the target voiceprint data in the second preset format. Calling preset-format data sources directly would hinder the calling process, mainly because decoding would be required at call time, increasing the workload; the target face data and the target voiceprint data are therefore parsed one step ahead into the initial face data and initial voiceprint data, ready to be called.
  • FIG. 9 is a schematic structural diagram of a device in a hardware operating environment involved in a method according to an embodiment of the present application.
  • The terminal in the embodiments of the present application may be a PC, or a terminal device such as a smartphone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, or a portable computer.
  • the login verification system can include a processor 1001, such as a CPU, a memory 1005, and a communication bus 1002.
  • the communication bus 1002 is used to implement connection communication between the processor 1001 and the memory 1005.
  • The memory 1005 may be a high-speed RAM memory or a stable non-volatile memory, such as a disk memory.
  • the memory 1005 can also optionally be a storage device independent of the aforementioned processor 1001.
  • Optionally, the login verification system may further include a user interface, a network interface, a camera, an RF (Radio Frequency) circuit, sensors, an audio circuit, a WiFi module, and so on.
  • the user interface may include a display, an input unit such as a keyboard, and the optional user interface may also include a standard wired interface, a wireless interface.
  • the network interface can optionally include a standard wired interface or a wireless interface (such as a WI-FI interface).
  • Those skilled in the art will understand that the login verification system structure shown in FIG. 7 does not constitute a limitation on the login verification system; it may include more or fewer components than illustrated, combine certain components, or arrange the components differently.
  • As shown in FIG. 9, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, and a login verification program.
  • the operating system is a program that manages and controls the hardware and software resources of the login verification system, and supports the operation of the login verification program and other software and/or programs.
  • the network communication module is used to implement communication between components within the memory 1005 and with other hardware and software in the login verification system.
  • the processor 1001 is configured to execute the login verification program stored in the memory 1005, and implements the following steps:
  • when the operation terminal detects the user's account login instruction, an account login request is generated, and a face watermark collection prompt and a voiceprint watermark collection prompt are output;
  • the operation terminal collects the face data and the voiceprint data input by the user based on the face watermark collection prompt and the voiceprint watermark collection prompt, and adds the collected face data and the voiceprint data to the account login request;
  • the operation terminal sends an account login request to the server;
  • the operation terminal receives the verification instruction fed back by the server based on the account login request, wherein the verification instruction includes a verification success instruction and a verification failure instruction.
  • the application also provides a computer readable storage medium storing one or more programs, the one or more programs being further executable by one or more processors for:
  • when the operation terminal detects the user's account login instruction, an account login request is generated, and a face watermark collection prompt and a voiceprint watermark collection prompt are output;
  • the operation terminal collects the face data and the voiceprint data input by the user based on the face watermark collection prompt and the voiceprint watermark collection prompt, and adds the collected face data and the voiceprint data to the account login request;
  • the operation terminal sends an account login request to the server;
  • the operation terminal receives the verification instruction fed back by the server based on the account login request, wherein the verification instruction includes a verification success instruction and a verification failure instruction.
  • Through the description of the foregoing embodiments, those skilled in the art can clearly understand that the methods of the foregoing embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the part of the technical solution of the present application that is essential or that contributes to the prior art can be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc), which includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the various embodiments of the present application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Collating Specific Patterns (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The present application discloses a login verification method, system, and computer-readable storage medium. The login verification method includes: when a user's account login instruction is detected, an operation terminal generates an account login request and outputs a face watermark collection prompt and a voiceprint watermark collection prompt; the operation terminal collects the face data and voiceprint data entered by the user based on the face watermark collection prompt and the voiceprint watermark collection prompt, and adds the collected face data and voiceprint data to the account login request; the operation terminal sends the account login request to a server; and the operation terminal receives a verification instruction fed back by the server based on the account login request. By combining face recognition verification with voiceprint recognition verification, the present application improves the security of account login through double verification; by further adding a face watermark and a voiceprint watermark, it raises the security level, eliminates hidden login security risks, and protects the security of account login.

Description

登录验证方法、***及计算机可读存储介质
本申请要求于2017年08月14日提交中国专利局、申请号为201710694323.9、发明名称为“登录验证方法、***及计算机可读存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及登录验证技术领域,尤其涉及一种登录验证方法、***及计算机可读存储介质。
背景技术
目前,用户通常可以通过声纹或人脸识别等生物识别方式在***平台上登录对应的***账号,生物识别登录方式相对于简单的账号密码验证方式和验证码验证方式具有识别安全的优势。
但是在金融领域中,传统的声纹验证登录和人脸验证登录依旧存在着一定的安全风险。例如,声纹登录可以通过事先录制的声纹信息绕过金融***的验证机制,而人脸登录也可以人脸相片或人脸视频进行伪造。因此,传统的账号登录验证方式在金融领域中存在极大的安全隐患,对用户的金融账号的安全问题也造成严峻的考验。
发明内容
本申请的主要目的在于提供一种登录验证方法、***及计算机可读存储介质,旨在解决传统的用户账号登录验证方式在金融领域中存在安全隐患的技术问题。
为实现上述目的,本申请实施例提供一种基于操作终端的登录验证方法,所述基于操作终端的登录验证方法包括:
当检测到用户的账号登录指令时,操作终端生成账号登录请求,并输出人脸水印采集提示和声纹水印采集提示;
所述操作终端采集用户基于人脸水印采集提示和声纹水印采集提示所输入的人脸数据和声纹数据,并将所采集到的所述人脸数据和所述声纹数据加入账号登录请求;
所述操作终端将所述账号登录请求发送至服务器;
所述操作终端接收服务器基于账号登录请求所反馈的验证指令,其中所述验证指令包括验证成功指令或验证失败指令。
同时,本申请实施例还提供一种基于服务器的登录验证方法,所述基于服务器的登录验证方法包括:
当服务器接收到账号登录请求时,对所述账号登录请求进行解析,以获取人脸数据和声纹数据;
所述服务器将所述人脸数据与服务器中的预设人脸数据进行匹配,并将所述声纹数据与服务器中的预设声纹数据进行匹配;
当所述服务器检测到所述人脸数据和所述声纹数据分别与服务器中的预设人脸数据和预设声纹数据相匹配时,向操作终端发送验证成功指令。
本申请还提供一种登录验证***,所述登录验证***包括操作终端和服务器,所述登录验证***包括:存储器、处理器,通信总线以及存储在所述存储器上的登录验证程序,
所述通信总线用于实现处理器与存储器间的通信连接;
所述处理器用于执行所述登录验证程序,以实现以下步骤:
当检测到用户的账号登录指令时,操作终端生成账号登录请求,并输出人脸水印采集提示和声纹水印采集提示;
所述操作终端采集用户基于人脸水印采集提示和声纹水印采集提示所输入的人脸数据和声纹数据,并将所采集到的所述人脸数据和所述声纹数据加入账号登录请求;
所述操作终端将所述账号登录请求发送至服务器;
所述操作终端接收服务器基于账号登录请求所反馈的验证指令,其中所述验证指令包括验证成功指令或验证失败指令。
此外,为实现上述目的,本申请还提供一种计算机可读存储介质,所述计算机可读存储介质存储有一个或者一个以上程序,所述一个或者一个以上程序可被一个或者一个以上的处理器执行以用于:
当检测到用户的账号登录指令时,操作终端生成账号登录请求,并输出人脸水印采集提示和声纹水印采集提示;
所述操作终端采集用户基于人脸水印采集提示和声纹水印采集提示所输入的人脸数据和声纹数据,并将所采集到的所述人脸数据和所述声纹数据加入账号登录请求;
所述操作终端将所述账号登录请求发送至服务器;
所述操作终端接收服务器基于账号登录请求所反馈的验证指令,其中所述验证指令包括验证成功指令或验证失败指令。
本申请的技术方案中,首先当检测到用户的账号登录指令时,操作终端生成账号登录请求,并输出人脸水印采集提示和声纹水印采集提示;然后所述操作终端采集用户基于人脸水印采集提示和声纹水印采集提示所输入的人脸数据和声纹数据,并将所采集到的所述人脸数据和所述声纹数据加入账号登录请求;接着所述操作终端将所述账号登录请求发送至服务器;最后所述操作终端接收服务器基于账号登录请求所反馈的验证指令,其中验证指令包括验证成功指令或验证失败指令。本申请通过将人脸识别验证和声纹识别验证结合在一起,通过双重验证的方式,提高账号登录的安全性,同时通过加入人脸水印和声纹水印的方式,进一步提高了安全等级,杜绝了登录安全隐患,保护了账号登录的安全性。
附图说明
图1为本申请基于操作终端的登录验证方法第一实施例的流程示意图;
图2为本申请基于操作终端的登录验证方法第二实施例中所述将所采集到的所述人脸数据和所述声纹数据加入账号登录请求的步骤的细化流程示意图;
图3为本申请基于操作终端的登录验证方法第三实施例中所述所述将所采集到的所述人脸数据和所述声纹数据加入账号登录请求的步骤的细化流程示意图;
图4为本申请基于服务器的登录验证方法第一实施例的流程示意图;
图5为本申请基于服务器的登录验证方法第二实施例中所述所述服务器将所述人脸数据与服务器中的预设人脸数据进行匹配,并将所述声纹数据与服务器中的预设声纹数据进行匹配的步骤的细化流程示意图;
图6为本申请基于服务器的登录验证方法第三实施例中所述服务器将所述人脸数据与服务器中的预设人脸数据进行匹配,并将所述声纹数据与服务器中的预设声纹数据进行匹配的步骤的细化流程示意图;
图7为本申请基于服务器的登录验证方法第四实施例中所述对所述账号登录请求进行解析的步骤的细化流程示意图;
图8为本申请基于服务器的登录验证方法第五实施例中所述对所述账号登录请求进行解析的步骤的细化流程示意图;
图9为本申请实施例方法涉及的硬件运行环境的设备结构示意图;
图10为本申请实施例方法的一场景示意图。
本申请目的的实现、功能特点及优点将结合实施例,参照附图做进一步说明。
具体实施方式
应当理解,此处所描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
本申请提供一种基于操作终端的登录验证方法,在基于操作终端的登录验证方法第一实施例中,参照图1,所述基于操作终端的登录验证方法包括:
步骤S10,当检测到用户的账号登录指令时,操作终端生成账号登录请求,并输出人脸水印采集提示和声纹水印采集提示;
用户可以在操作终端中执行金融账号登录操作,账号登录操作除了通过传统的手机号码验证登录和账号密码验证登录之外,还可以使用人脸识别验证登录和声纹识别验证登录等生物识别登录方式。操作终端在接收到用户的账号登录指令时,会生成账号登录请求,该账号登录请求主要用于请求登录验证,将作为操作终端向服务器发送验证数据的请求指令,并且账号登录请求通过操作终端本身的人脸验证指令发生器和声纹验证指令发生器,生成人脸验证指令和声纹验证指令,并向用户输出人脸水印采集提示和声纹水印采集提示。
人脸水印采集提示指的是规定用户在人脸识别验证的基础上作出规定的指定动作,声纹水印采集提示指的是规定用户在声音识别验证的基础上说出指定口令。例如人脸水印采集提示要求用户面对操作终端的摄像装置做出点头眯眼等规定动作,则意味着操作终端的人脸验证指令发生器所生成的人脸验证指令中包含了点头眯眼等动作指令;声纹水印采集提示要求用户面对操作终端的录音装置根据语音提示或操作终端显示的登录口令做出复述,则意味着操作终端的声纹验证指令发生器所生成的声纹验证指令中包含了登录口令中的登录语。用户只需按照提示在摄像装置和录音装置中做出相应动作和口令即可。
步骤S20,所述操作终端采集用户基于人脸水印采集提示和声纹水印采集提示所输入的人脸数据和声纹数据,并将所采集到的所述人脸数据和所述声纹数据加入账号登录请求;
参照图10,操作终端输出人脸水印采集提示和声纹水印采集提示时,将同时打开操作终端上的摄像装置以及录音装置,以采集用户所输入的数据。用户根据人脸水印采集提示和声纹水印采集提示,复述或复现人脸水印采集提示和声纹水印采集提示中的所规定的人脸动作和登录口令,以作为人脸数据和声纹数据。同时,人脸数据和声纹数据作为验证用户个人信息特征的验证信息,将被加入到账号登录请求中,以等待后续操作。
步骤S30,所述操作终端将所述账号登录请求发送至服务器;
步骤S40,所述操作终端接收服务器基于账号登录请求所反馈的验证指令,其中所述验证指令包括验证成功指令或验证失败指令。
当操作终端将记载着用户人脸数据和声纹数据等个人验证数据加入到账号登录请求中时,操作终端即将账号登录请求发送至服务器,由服务器对账号登录请求进行解析验证。
而服务器将对账号登录请求进行验证,并反馈给操作终端相应的验证信息。当服务器对账号登录请求验证通过时,操作终端将接收到服务器发送的验证成功指令;而验证未通过时,操作终端将接受到服务器发送的验证失败指令。操作终端可根据验证指令,决定是否通过账号登录请求,从而授权金融账号是否登录。
需要说明的是,若操作终端在预设时间内接收到的验证失败指令达到一定次数,则操作终端可锁定账号登录请求对应的账号,并提示用户根据预设解锁流程进行解锁。
操作终端将账号登录请求发送至服务器之后,服务器会返回一个基于账号登录请求的验证指令。假设操作终端接收到的验证指令不是验证成功指令,而是验证失败指令,那么操作终端将无法通过用户的账号登录。在此基础之上,本实施例增加判断机制,即对验证失败指令的验证机制。当操作终端在一段时间内所接收到的验证失败指令大于预设数量时,即证明用户在短时间内重复发送了多次的账号登录请求,并且全部验证失败。为保障用户***账号的安全,账号登录请求多次验证失败将触发账号锁定机制,即用户的金融的登录账号将被锁定住。用户无法继续进行账号验证登录,也无法再发送账号登录请求。此时,账号登录请求对应的金融账号将被锁定,处于锁定状态的账号会处于***保护状态,用户需要对其进行解锁才能重新使用该账号,因此,操作终端将提示用户根据预设的解锁流程进行解锁。例如,通过密保安全问题,银行账号验证,手机号码验证等一种或多种验证方式进行解锁,所述验证方式包括但不限于以上所述。
本申请的技术方案中,首先当检测到用户的账号登录指令时,操作终端生成账号登录请求,并输出人脸水印采集提示和声纹水印采集提示;然后所述操作终端采集用户基于人脸水印采集提示和声纹水印采集提示所输入的人脸数据和声纹数据,并将所采集到的所述人脸数据和所述声纹数据加入账号登录请求;接着所述操作终端将所述账号登录请求发送至服务器;最后所述操作终端接收服务器基于账号登录请求所反馈的验证指令,其中验证指令包括验证成功指令或验证失败指令。本申请通过将人脸识别验证和声纹识别验证结合在一起,通过双重验证的方式,提高账号登录的安全性,同时通过加入人脸水印和声纹水印的方式,进一步提高了安全等级,杜绝了登录安全隐患,保护了账号登录的安全性。
进一步地,在本申请基于操作终端的登录验证方法第一实施例的基础上,提出基于操作终端的登录验证方法第二实施例,参照图2,所述第二实施例与第一实施例之间的区别在于,所述将所采集到的所述人脸数据和所述声纹数据加入账号登录请求的步骤包括:
步骤S21,所述操作终端在采集到的人脸数据和声纹数据中加入第一数字水印,获得新的人脸数据和声纹数据;
数字水印指的是将一些标识信息直接嵌入数字载体当中(包括多媒体、文档、视频等)或是间接表示(修改特定区域的结构),且不影响原载体的使用价值,也不容易被探知和再次修改,但可以被生产方识别和辨认的标记信息。例如在图像或视频的源数据中,加入一段标志性的代码,该段代码不影响图像或视频的正常使用,但在终端识别源数据的过程中,该段代码将被作为源数据的身份验证信息进行识别,因此数字水印能够很大程度上作为验证其载体安全级别的重要标识,添加数字水印的方法可以利用各种水印算法实现,如空域算法、频域算法等等。
操作终端采集到人脸数据和声纹数据时,在其中加入第一数字水印,以验证人脸数据和声纹数据的真实性,从而获得新的人脸数据和声纹数据。一般地,第一数字水印加入到人脸数据和声纹数据中,相当于为人脸数据和声纹数据添加标识码,通过第一数字水印为人脸数据和声纹数据添加身份标识,保障了数据来源的安全性。
步骤S22,所述操作终端将所述新的人脸数据和声纹数据加入账号登录请求。
将新的人脸数据和声纹数据加入到账号登录请求中。在本实施例当中,第一数字水印能够保证人脸数据和声纹数据成为服务器端所认可的数据源,保障让人脸数据和声纹数据确实是在操作终端上采集而来,而不是通过其他非服务器所认证的操作终端提供的源数据,通过第一数字水印,操作终端能够保障人脸数据和声纹数据的可靠性,从而防止服务器接收到不合法的欺骗性人脸数据和声纹数据,杜绝欺诈性验证成功的可能性。
需要说明的是,第一数字水印并非是固定不变的数据代码,它可以是动态变化的,即在每次数字水印的流程作为完成之后,第一数字水印可以发生一定的改变形式,从而避免固定的第一数字水印被非法获取,进而影响到第一数字水印的安全性。
进一步地,在本申请基于操作终端的登录验证方法第一实施例的基础上,提出基于操作终端的登录验证方法第三实施例,参照图3,所述第三实施例与第一实施例之间的区别在于,设所述采集到的人脸数据和声纹数据为初始人脸数据和初始声纹数据,所述将所采集到的所述人脸数据和所述声纹数据加入账号登录请求的步骤还包括:
步骤S23,所述操作终端将所述初始人脸数据转换为第一预设格式的目标人脸数据;
在本实施例中,由于初始人脸数据和初始声纹数据加入到账号登录请求后需要进行后续的数据解析。为避免解析失误,如将二者的数据混淆解析,导致无法正常还原初始人脸数据和初始声纹数据,操作终端为初始人脸数据和初始声纹数据增加数据转换步骤。
一般地,初始人脸数据可通过摄像装置采集而成,可以默认的视频格式(如avi、mp4)或图像格式(如jpg、png)存在,操作终端可利用初始人脸数据由摄像装置采集而成的特性赋予初始人脸数据以第一预设格式,即所采集到的初始人脸数据由原来采集时的默认格式转换为第一预设格式,获得第一预设格式的目标人脸数据。所述第一预设格式可以是行业标准认定的多媒体格式,也可以是操作终端自定义的可识别的特定格式。
步骤S24,所述操作终端将所述初始声纹数据转换为第二预设格式的目标声纹数据;
同样,初始声纹数据可通过录音装置采集而成,可以默认的音频格式(如mp3)存在,操作终端可利用初始声纹数据由录音装置采集而成的特性赋予初始声纹数据以第二预设格式,即所采集到的初始声纹数据由原来采集时的默认格式转换为第二预设格式,获得第二预设格式的目标声纹数据。所述第二格式可以是行业标准认定的多媒体格式,也可以是操作终端自动以的可识别的特定格式。
步骤S25,所述操作终端将所述目标人脸数据和所述目标声纹数据加入账号登录请求。
转换为目标人脸数据和目标声纹数据之后,将其加入到账号登录请求中,以待后续操作。
本申请提供一种基于服务器的登录验证方法,在基于服务器的登录验证方法第一实施例中,参照图4,所述基于服务器的登录验证方法包括:
步骤S50,当服务器接收到账号登录请求时,对所述账号登录请求进行解析,以获取人脸数据和声纹数据;
在本实施例中,服务器将作为验证操作终端的账号登录请求的载体。当服务器接收到账号登录请求时,服务器将对账号登录请求进行解析,以便将加入账号登录请求中的人脸数据和声纹数据解析出来。解析功能主要是将人脸数据和声纹数据从账号登录请求中区分开来,避免人脸数据和声纹数据出现数据耦合,从而破坏人脸数据和声纹数据的纯洁性。
步骤S60,所述服务器将所述人脸数据与服务器中的预设人脸数据进行匹配,并将所述声纹数据与服务器中的预设声纹数据进行匹配;
当人脸数据和声纹数据解析出来之后,服务器将通过人脸数据检测数据是否合法合规。其方式为,将人脸数据与服务器中保存的预设人脸数据进行匹配。所述预设人脸数据指的是用户在注册金融***账号进行实名验证时,服务器端所保存的人脸数据。由于人脸数据不会轻易改变,因此人脸数据可以作为验证账号是否被持有人登录的验证方式。同时,服务器将对声纹数据进行验证匹配。同样,在用户注册金融账号的过程中,对用户的声纹信息进行记录,而服务器将记录下来的声纹信息进行保存,以作后续声纹验证的参考数据。
步骤S70,当所述服务器检测到所述人脸数据和所述声纹数据分别与服务器中的预设人脸数据和预设声纹数据相匹配时,向操作终端发送验证成功指令。
一般地,人脸数据验证或声纹数据验证是与服务器中的预设人脸数据或预设声纹数据进行特征对比,即在服务器中保存的预设人脸数据和预设声纹数据会事先经过算法编译,获得对应的数据特征。而服务器接收到的人脸数据和声纹数据也需要进行算法编译,获得对应的数据特征。将二者进行对比之后获取到特征的匹配度,并以匹配度的高低作为是否匹配成功的参考标志。当人脸数据验证和声纹数据验证验证成功时。服务器便向操作终端发送验证成功指令,以作为账号登录请求的反馈信息。
需要说明的是,当服务器检测到人脸数据与服务器中的预设人脸数据不匹配时,需要向操作终端发送验证失败指令;或者当服务器检测到所述声纹数据与服务器中的预设声纹数据不匹配时,向操作终端发送验证失败指令。
假设人脸数据验证失败,或者声纹数据验证失败,都将导致验证失败。假设验证成功的代号为1,验证失败的代号为0,则人脸数据验证和声纹数据验证可以出现以下4种情况:(1,1)、(1,0)、(0,1)、(0,0)。但是由于人脸数据验证优先于声纹数据验证,人脸数据验证失败时,无需进入声纹数据验证流程。因此,本实施例的账号登录验证情况为以下三种:
1、人脸数据验证成功,声纹数据验证成功;
2、人脸数据验证成功,声纹数据验证失败;
3、人脸数据验证失败。
因此,当服务器检测到人脸数据与预设人脸数据不匹配时,即可向操作终端发送验证失败指令;而若服务器检测到人脸数据与预设人脸数据相互匹配,而声纹数据与预设声纹数据无法匹配上时,则向操作终端发送验证失败指令。也就是说,人脸数据的验证失败了,服务器将不会对声纹数据进行验证,此时验证失败;而若人脸数据验证成功,声纹数据验证失败,此时验证依旧失败。只有人脸数据验证成功,且声纹数据验证成功,才能完成账号登录请求的成功验证。这样,服务器才能最大化地保障金融账号的安全性,杜绝安全隐患。
进一步地,在本申请基于服务器的登录验证方法第四实施例的基础上,提出基于服务器的登录验证方法第五实施例,参照图5,所述第五实施例与第四实施例之间的区别在于,所述人脸数据包括人脸特征数据和人脸水印数据,所述预设人脸数据包括标准人脸特征数据和标准人脸水印数据,
所述所述服务器将所述人脸数据与服务器中的预设人脸数据进行匹配,并将所述声纹数据与服务器中的预设声纹数据进行匹配的步骤包括:
步骤S61,当所述服务器检测到所述人脸特征数据与服务器中的所述标准人脸特征数据的匹配度大于第一阈值时,将人脸水印数据与标准人脸水印数据进行匹配;
步骤S62,当所述服务器检测到所述人脸水印数据与服务器中的所述标准人脸水印数据的匹配度大于第二阈值时,将声纹数据与服务器中的预设声纹数据进行匹配。
在本实施例中,人脸数据除了传统的人脸特征数据,还包括用户基于人脸水印采集提示所反馈的人脸水印数据。所述人脸特征数据指的是用户脸部的肌肉布局、纹理状况,该数据能够反应出当前登录用户是否为金融账号的注册用户;而人脸水印数据指的是用户根据人脸水印采集提示所做出的各种指令动作。而在服务器中,除了记录的用户的标准人脸特征数据之外,还保存了预设的代表人脸水印动作数据的标准人脸水印数据。也就是说,服务器中保存了用户注册金融账号时的标准人脸特征数据,还保存了大众化的人脸动作产生脸部肌肉布局、纹理状况变动情况的标准人脸水印数据。
由于人脸验证指令会在账号登录请求中一同***作终端发送至服务器,因此服务器能根据人脸验证指令所对应的人脸数据进行统一判断。而在匹配值的判断上,本实施例加入了第一阈值和第二阈值,将匹配数据建立起量化标准判断。第一阈值作为人脸特征数据的验证最低门限值,第二阈值作为人脸水印数据的验证最低门限值。当服务器检测到人脸特征数据与服务器中的标准人脸特征数据的匹配度大于第一阈值时,则证明人脸特征数据与标准人脸特征数据能够相互匹配,即人脸特征数据验证通过。此时,服务器将进行人脸水印数据的验证。当服务器检测到人脸水印数据与服务器中的标准人脸水印数据的匹配度大于第二阈值时,则证明人脸水印数据与标准人脸水印数据能够相互匹配,即人脸水印数据验证通过。而人脸特征数据和人脸水印数据验证成功,即证明人脸数据验证成功,服务器将进行声纹数据的验证。
进一步地,在本申请基于服务器的登录验证方法第四实施例的基础上,提出基于服务器的登录验证方法第六实施例,参照图6,所述第六实施例与第四实施例之间的区别在于,所述声纹数据包括声纹特征数据和声纹水印数据,所述预设声纹数据包括标准声纹特征数据和标准声纹水印数据,
所述所述服务器将所述人脸数据与服务器中的预设人脸数据进行匹配,并将所述声纹数据与服务器中的预设声纹数据进行匹配的步骤还包括:
步骤S63,当所述服务器检测到所述声纹特征数据与标准声纹特征数据的匹配度大于第三阈值时,将声纹水印数据与标准声纹水印数据进行匹配;
步骤S64,当所述服务器检测到所述声纹水印数据与服务器中的标准声纹水印数据的匹配度大于第四阈值时,确定人脸数据和声纹数据分别与服务器中的预设人脸数据和预设声纹数据相匹配。
本实施例中,声纹数据中除了声纹信息,还包括用户基于声纹水印采集提示所反馈的声纹水印数据。所述声纹特征数据指的是用户声音的音色、音调,该数据能够反应出当前登录用户是否为金融账号的注册用户,即用户在事先已经将声纹信息记录在服务器端了;声纹水印数据指的是用户根据声纹水印采集提示所做出的登录口令。在服务器中,除了保存有用户的标准声纹特征数据,还保存了预设的代表声纹水印登录口令的标准声纹特征数据。也就是说,服务器中保存用户注册金融账号时的标准声纹特征数据,还保存了声纹验证指令中对应登录口令的标注声纹水印数据
由于声纹验证指令会在账号登录请求中一同***作终端发送至服务器,因此服务器能根据声纹验证指令所对应的声纹数据进行统一判断。而在匹配值的判断上,本实施例加入了第三阈值和第四阈值,将匹配数据建立起量化标准判断。第三阈值作为声纹特征数据的验证最低门限值,第四阈值作为声纹水印数据的验证最低门限值。当服务器检测到声纹特征数据与服务器中的标准声纹特征数据的匹配度大于第三阈值时,则证明声纹特征数据与标准声纹特征数据能够相互匹配,即声纹特征数据验证通过。此时,服务器将进行声纹水印数据的验证。当服务器检测到声纹水印数据与服务器中的标准声纹水印数据的匹配度大于第四阈值时,则证明声纹水印数据与标准声纹水印数据能够相互匹配,即声纹水印数据验证通过。而声纹特征数据和声纹水印数据验证成功,即证明声纹数据验证成功,服务器将向操作终端发送验证成功指令。
进一步地,在本申请基于服务器的登录验证方法第四实施例的基础上,提出基于服务器的登录验证方法第七实施例,参照图7,所述第七实施例与第四实施例之间的区别在于,所述对所述账号登录请求进行解析的步骤包括:
步骤S51,所述服务器解析账号登录请求,获得人脸数据和声纹数据,并提取人脸数据和声纹数据中的第一数字水印;
由于人脸数据和声纹数据是保存在账号登录请求中,因此服务器需要先解析账号登录请求,以获得具体的人脸数据和声纹数据。当人脸数据和声纹数据中都加入了第一数字水印时,服务器需要先提取出人脸数据和声纹数据中的第一数字水印。同时,服务器将基于第一数字水印,与本身保存的预设的第二数字水印进行比较。预设的第二数字水印是验证人脸数据和声纹数据的参考验证值,在操作终端对人脸数据和声纹数据加入第一数字水印时,通过网络连接与服务器的第二数字水印进行同步。因此正常情况下第一数字水印与第二数字水印时一致,即第一数字水印一确定下来,那么通过网络连接的同步功能,即可在服务器中成为与第一数字水印相互对应的数字水印。
步骤S52,当所述服务器检测到所述第一数字水印与预设的第二数字水印不一致时,向操作终端发送验证失败指令。
假设操作终端因被篡改过或未经人脸数据和声纹数据采集即发送账号登录请求,或者经过非法添加数字水印时,在服务器提取的第一数字水印与预设的第二数字水印无法相互映射,即第一数字水印和第二数字水印不一致,这证明数字水印验证失败,服务器将向操作终端发送验证失败指令。
进一步地,在本申请基于服务器的登录验证方法第四实施例的基础上,提出基于服务器的登录验证方法第八实施例,参照图8,所述第八实施例与第四实施例之间的区别在于,所述对所述账号登录请求进行解析的步骤包括:
步骤S53,所述服务器提取所述账号登录请求中第一预设格式的目标人脸数据和第二预设格式的目标声纹数据;
服务器需要先将目标人脸数据和目标声纹数据解析出来,参照目标人脸数据和目标声纹数据的格式区别,通过提取目标人脸数据的第一预设格式和目标声纹数据的第二预设格式进行区分,从而获取到不同格式的数据源。
步骤S54,所述服务器对目标人脸数据和目标声纹数据解析为初始人脸数据和初始声纹数据。
为获取到方便后续调用的数据源,服务器将对第一预设格式的目标人脸数据和第二预设格式的目标声纹数据进行转换。假设直接调用预设格式的数据源,将对调用过程产生一定的阻碍,主要是需要在调用过程中还需要进行解码。这将增加调用过程中的工作量,因此可先一步将目标人脸数据和目标声纹数据解析为初始人脸数据和初始声纹数据,以供调用。
参照图9,图9是本申请实施例方法涉及的硬件运行环境的设备结构示意图。
本申请实施例终端可以是PC,也可以是智能手机、平板电脑、电子书阅读器、MP3(Moving Picture Experts Group Audio Layer III,动态影像专家压缩标准音频层面3)播放器、MP4(Moving Picture Experts Group Audio Layer IV,动态影像专家压缩标准音频层面3)播放器、便携计算机等终端设备。
如图9所示,该登录验证***可以包括:处理器1001,例如CPU,存储器1005,通信总线1002。其中,通信总线1002用于实现处理器1001和存储器1005之间的连接通信。存储器1005可以是高速RAM存储器,也可以是稳定的存储器(non-volatile memory),例如磁盘存储器。存储器1005可选的还可以是独立于前述处理器1001的存储装置。
可选地,该登录验证***还可以包括用户接口、网络接口、摄像头、RF(Radio Frequency,射频)电路,传感器、音频电路、WiFi模块等等。用户接口可以包括显示屏(Display)、输入单元比如键盘(Keyboard),可选用户接口还可以包括标准的有线接口、无线接口。网络接口可选的可以包括标准的有线接口、无线接口(如WI-FI接口)。
本领域技术人员可以理解,图7中示出的登录验证***结构并不构成对登录验证***的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
如图9所示,作为一种计算机存储介质的存储器1005中可以包括操作***、网络通信模块以及登录验证程序。操作***是管理和控制登录验证***硬件和软件资源的程序,支持登录验证程序以及其它软件和/或程序的运行。网络通信模块用于实现存储器1005内部各组件之间的通信,以及与登录验证***中其它硬件和软件之间通信。
在图9所示的登录验证***中,处理器1001用于执行存储器1005中存储的登录验证程序,实现以下步骤:
当操作终端检测到用户的账号登录指令时,生成账号登录请求,并输出人脸水印采集提示和声纹水印采集提示;
操作终端采集用户基于人脸水印采集提示和声纹水印采集提示所输入的人脸数据和声纹数据,并将所采集到的所述人脸数据和所述声纹数据加入账号登录请求;
操作终端将账号登录请求发送至服务器;
操作终端接收服务器基于账号登录请求所反馈的验证指令,其中验证指令包括验证成功指令和验证失败指令。
本申请登录验证***的具体实施方式与上述登录验证方法各实施例基本相同,在此不再赘述。
本申请还提供了一种计算机可读存储介质,所述计算机可读存储介质存储有一个或者一个以上程序,所述一个或者一个以上程序还可被一个或者一个以上的处理器执行以用于:
当操作终端检测到用户的账号登录指令时,生成账号登录请求,并输出人脸水印采集提示和声纹水印采集提示;
操作终端采集用户基于人脸水印采集提示和声纹水印采集提示所输入的人脸数据和声纹数据,并将所采集到的所述人脸数据和所述声纹数据加入账号登录请求;
操作终端将账号登录请求发送至服务器;
操作终端接收服务器基于账号登录请求所反馈的验证指令,其中验证指令包括验证成功指令和验证失败指令。
本申请计算机可读存储介质具体实施方式与上述登录验证方法和登录验证***各实施例基本相同,在此不再赘述。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本申请各个实施例所述的方法。
以上仅为本申请的优选实施例,并非因此限制本申请的专利范围,凡是利用本申请说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本申请的专利保护范围内。

Claims (20)

  1. 一种基于操作终端的登录验证方法,其特征在于,所述基于操作终端的登录验证方法包括:
    当检测到用户的账号登录指令时,操作终端生成账号登录请求,并输出人脸水印采集提示和声纹水印采集提示;
    所述操作终端采集用户基于人脸水印采集提示和声纹水印采集提示所输入的人脸数据和声纹数据,并将所采集到的所述人脸数据和所述声纹数据加入账号登录请求;
    所述操作终端将所述账号登录请求发送至服务器;
    所述操作终端接收服务器基于账号登录请求所反馈的验证指令,其中所述验证指令包括验证成功指令或验证失败指令。
  2. 如权利要求1所述的基于操作终端的登录验证方法,其特征在于,所述将所采集到的所述人脸数据和所述声纹数据加入账号登录请求的步骤包括:
    所述操作终端在采集到的人脸数据和声纹数据中加入第一数字水印,获得新的人脸数据和声纹数据;
    所述操作终端将所述新的人脸数据和声纹数据加入账号登录请求。
  3. 如权利要求2所述的基于操作终端的登录验证方法,其特征在于,设所述采集到的人脸数据和声纹数据为初始人脸数据和初始声纹数据,所述将所采集到的所述人脸数据和所述声纹数据加入账号登录请求的步骤还包括:
    所述操作终端将所述初始人脸数据转换为第一预设格式的目标人脸数据;
    所述操作终端将所述初始声纹数据转换为第二预设格式的目标声纹数据;
    所述操作终端将所述目标人脸数据和所述目标声纹数据加入账号登录请求。
  4. 如权利要求1所述的基于操作终端的登录验证方法,其特征在于,设所述采集到的人脸数据和声纹数据为初始人脸数据和初始声纹数据,所述将所采集到的所述人脸数据和所述声纹数据加入账号登录请求的步骤还包括:
    所述操作终端将所述初始人脸数据转换为第一预设格式的目标人脸数据;
    所述操作终端将所述初始声纹数据转换为第二预设格式的目标声纹数据;
    所述操作终端将所述目标人脸数据和所述目标声纹数据加入账号登录请求。
  5. 一种基于服务器的登录验证方法,其特征在于,所述基于服务器的登录验证方法包括:
    当服务器接收到账号登录请求时,对所述账号登录请求进行解析,以获取人脸数据和声纹数据;
    所述服务器将所述人脸数据与服务器中的预设人脸数据进行匹配,并将所述声纹数据与服务器中的预设声纹数据进行匹配;
    当所述服务器检测到所述人脸数据和所述声纹数据分别与服务器中的预设人脸数据和预设声纹数据相匹配时,向操作终端发送验证成功指令。
  6. 如权利要求5所述的基于服务器的登录验证方法,其特征在于,所述人脸数据包括人脸特征数据和人脸水印数据,所述预设人脸数据包括标准人脸特征数据和标准人脸水印数据,
    所述所述服务器将所述人脸数据与服务器中的预设人脸数据进行匹配,并将所述声纹数据与服务器中的预设声纹数据进行匹配的步骤包括:
    当所述服务器检测到所述人脸特征数据与服务器中的所述标准人脸特征数据的匹配度大于第一阈值时,将人脸水印数据与标准人脸水印数据进行匹配;
    当所述服务器检测到所述人脸水印数据与服务器中的所述标准人脸水印数据的匹配度大于第二阈值时,将声纹数据与服务器中的预设声纹数据进行匹配。
  7. 如权利要求6所述的基于服务器的登录验证方法,其特征在于,所述声纹数据包括声纹特征数据和声纹水印数据,所述预设声纹数据包括标准声纹特征数据和标准声纹水印数据,
    所述所述服务器将所述人脸数据与服务器中的预设人脸数据进行匹配,并将所述声纹数据与服务器中的预设声纹数据进行匹配的步骤还包括:
    当所述服务器检测到所述声纹特征数据与标准声纹特征数据的匹配度大于第三阈值时,将声纹水印数据与标准声纹水印数据进行匹配;
    当所述服务器检测到所述声纹水印数据与服务器中的标准声纹水印数据的匹配度大于第四阈值时,确定人脸数据和声纹数据分别与服务器中的预设人脸数据和预设声纹数据相匹配。
  8. 如权利要求7所述的基于服务器的登录验证方法,其特征在于,所述对所述账号登录请求进行解析的步骤包括:
    所述服务器解析账号登录请求,获得人脸数据和声纹数据,并提取人脸数据和声纹数据中的第一数字水印;
    当所述服务器检测到所述第一数字水印与预设的第二数字水印不一致时,向操作终端发送验证失败指令。
  9. 如权利要求7所述的基于服务器的登录验证方法,其特征在于,所述对所述账号登录请求进行解析的步骤包括:
    所述服务器提取所述账号登录请求中第一预设格式的目标人脸数据和第二预设格式的目标声纹数据;
    所述服务器对目标人脸数据和目标声纹数据解析为初始人脸数据和初始声纹数据。
  10. 如权利要求6所述的基于服务器的登录验证方法,其特征在于,所述对所述账号登录请求进行解析的步骤包括:
    所述服务器提取所述账号登录请求中第一预设格式的目标人脸数据和第二预设格式的目标声纹数据;
    所述服务器对目标人脸数据和目标声纹数据解析为初始人脸数据和初始声纹数据。
  11. 如权利要求6所述的基于服务器的登录验证方法,其特征在于,所述对所述账号登录请求进行解析的步骤包括:
    所述服务器解析账号登录请求,获得人脸数据和声纹数据,并提取人脸数据和声纹数据中的第一数字水印;
    当所述服务器检测到所述第一数字水印与预设的第二数字水印不一致时,向操作终端发送验证失败指令。
  12. 如权利要求5所述的基于服务器的登录验证方法,其特征在于,所述声纹数据包括声纹特征数据和声纹水印数据,所述预设声纹数据包括标准声纹特征数据和标准声纹水印数据,
    所述所述服务器将所述人脸数据与服务器中的预设人脸数据进行匹配,并将所述声纹数据与服务器中的预设声纹数据进行匹配的步骤还包括:
    当所述服务器检测到所述声纹特征数据与标准声纹特征数据的匹配度大于第三阈值时,将声纹水印数据与标准声纹水印数据进行匹配;
    当所述服务器检测到所述声纹水印数据与服务器中的标准声纹水印数据的匹配度大于第四阈值时,确定人脸数据和声纹数据分别与服务器中的预设人脸数据和预设声纹数据相匹配。
  13. 如权利要求5所述的基于服务器的登录验证方法,其特征在于,所述对所述账号登录请求进行解析的步骤包括:
    所述服务器解析账号登录请求,获得人脸数据和声纹数据,并提取人脸数据和声纹数据中的第一数字水印;
    当所述服务器检测到所述第一数字水印与预设的第二数字水印不一致时,向操作终端发送验证失败指令。
  14. 如权利要求13所述的基于服务器的登录验证方法,其特征在于,所述对所述账号登录请求进行解析的步骤包括:
    所述服务器提取所述账号登录请求中第一预设格式的目标人脸数据和第二预设格式的目标声纹数据;
    所述服务器对目标人脸数据和目标声纹数据解析为初始人脸数据和初始声纹数据。
  15. 如权利要求11所述的基于服务器的登录验证方法,其特征在于,所述对所述账号登录请求进行解析的步骤包括:
    所述服务器提取所述账号登录请求中第一预设格式的目标人脸数据和第二预设格式的目标声纹数据;
    所述服务器对目标人脸数据和目标声纹数据解析为初始人脸数据和初始声纹数据。
  16. 如权利要求5所述的基于服务器的登录验证方法,其特征在于,所述对所述账号登录请求进行解析的步骤包括:
    所述服务器提取所述账号登录请求中第一预设格式的目标人脸数据和第二预设格式的目标声纹数据;
    所述服务器对目标人脸数据和目标声纹数据解析为初始人脸数据和初始声纹数据。
  17. 一种登录验证***,其特征在于,所述登录验证***包括操作终端和服务器,所述登录验证***包括:存储器、处理器,通信总线以及存储在所述存储器上的登录验证程序,
    所述通信总线用于实现处理器与存储器间的通信连接;
    所述处理器用于执行所述登录验证程序,以实现如权利要求1至3中任一项所述的基于操作终端的登录验证方法的步骤。
  18. 如权利要求17所述的登录验证***,其特征在于,所述处理器用于执行所述登录验证程序,以实现如权利要求4至8任一项所述的基于服务器的登录验证方法的步骤。
  19. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质上存储有登录验证程序,所述登录验证程序被处理器执行时实现如权利要求1至3中任一项所述的基于操作终端的登录验证方法的步骤。
  20. 如权利要求19所述的计算机可读存储介质,其特征在于,所述计算机可读存储介质上存储有登录验证程序,所述登录验证程序被处理器执行时实现如权利要求4至8任一项所述的基于服务器的登录验证方法的步骤。
PCT/CN2018/096877 2017-08-14 2018-07-24 登录验证方法、***及计算机可读存储介质 WO2019033904A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710694323.9 2017-08-14
CN201710694323.9A CN107864118B (zh) 2017-08-14 2017-08-14 登录验证方法、***及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2019033904A1 true WO2019033904A1 (zh) 2019-02-21

Family

ID=61699192

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/096877 WO2019033904A1 (zh) 2017-08-14 2018-07-24 登录验证方法、***及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN107864118B (zh)
WO (1) WO2019033904A1 (zh)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107864118B (zh) * 2017-08-14 2020-03-17 深圳壹账通智能科技有限公司 登录验证方法、***及计算机可读存储介质
CN110647729A (zh) * 2018-06-27 2020-01-03 深圳联友科技有限公司 一种登录验证方法及***
HK1250307A2 (zh) * 2018-08-14 2018-12-07 World Concept Development Ltd 身份驗證的方法、裝置、存儲介質及終端設備
CN109670836A (zh) * 2018-09-26 2019-04-23 深圳壹账通智能科技有限公司 账户验证方法、设备、装置及计算机可读存储介质
CN109472487A (zh) * 2018-11-02 2019-03-15 深圳壹账通智能科技有限公司 视频质检方法、装置、计算机设备及存储介质
CN111224920B (zh) * 2018-11-23 2021-04-20 珠海格力电器股份有限公司 一种防止非法登录的方法、装置、设备及计算机存储介质
CN109803255B (zh) * 2018-12-18 2022-04-08 武汉华工赛百数据***有限公司 用于数字化车间的移动数据信息安全通信***及方法
CN111803955A (zh) * 2019-04-12 2020-10-23 奇酷互联网络科技(深圳)有限公司 通过可穿戴设备管理账号的方法及***、存储装置
CN110264243A (zh) * 2019-05-21 2019-09-20 深圳壹账通智能科技有限公司 基于活体检测的产品推广方法、装置、设备及存储介质
CN112615879A (zh) * 2020-12-26 2021-04-06 中国农业银行股份有限公司 一种网络请求的处理方法及装置
CN115225326B (zh) * 2022-06-17 2024-06-07 中国电信股份有限公司 登录验证方法、装置、电子设备及存储介质
CN114969766B (zh) * 2022-07-29 2022-10-21 杭州孝道科技有限公司 账号锁定绕过逻辑漏洞检测方法、***以及存储介质

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6105010A (en) * 1997-05-09 2000-08-15 Gte Service Corporation Biometric certifying authorities
US6202151B1 (en) * 1997-05-09 2001-03-13 Gte Service Corporation System and method for authenticating electronic transactions using biometric certificates
US6208746B1 (en) * 1997-05-09 2001-03-27 Gte Service Corporation Biometric watermarks
CN101075868A (zh) * 2006-05-19 2007-11-21 华为技术有限公司 一种远程身份认证的***、终端、服务器和方法
CN103841108A (zh) * 2014-03-12 2014-06-04 北京天诚盛业科技有限公司 用户生物特征的认证方法和***
CN107864118A (zh) * 2017-08-14 2018-03-30 上海壹账通金融科技有限公司 登录验证方法、***及计算机可读存储介质


Also Published As

Publication number Publication date
CN107864118B (zh) 2020-03-17
CN107864118A (zh) 2018-03-30

Similar Documents

Publication Publication Date Title
WO2019033904A1 (zh) 登录验证方法、***及计算机可读存储介质
WO2019156499A1 (en) Electronic device and method of performing function of electronic device
WO2018166091A1 (zh) 贷款面签方法、***、终端及计算机可读存储介质
WO2019231252A1 (en) Electronic device for authenticating user and operating method thereof
WO2019216499A1 (ko) 전자 장치 및 그 제어 방법
WO2016187964A1 (zh) 智能控制受控设备的方法和装置
WO2018006489A1 (zh) 终端的语音交互方法及装置
WO2019104876A1 (zh) 保险产品的推送方法、***、终端、客户终端及存储介质
WO2010124565A1 (zh) 签名方法、设备及***
WO2019182409A1 (en) Electronic device and authentication method thereof
WO2019001110A1 (zh) 权限认证方法、***、设备及计算机可读存储介质
WO2019196213A1 (zh) 接口测试方法、装置、设备及计算机可读存储介质
US20120198491A1 (en) Transparently verifiying user identity during an e-commerce session using set-top box interaction behavior
WO2016192270A1 (zh) 媒体文件的快速启播方法及装置
US20120198489A1 (en) Detecting fraud using set-top box interaction behavior
WO2019144526A1 (zh) 借记卡激活方法、设备、***及计算机可读存储介质
WO2016023225A1 (zh) 基于移动终端的电子雾化装置的控制装置及方法
EP3984165A1 (en) Electronic device and method for generating attestation certificate based on fused key
WO2017148112A1 (zh) 一种指纹录入方法及终端
WO2021075867A1 (ko) 블록체인 기반 시스템을 위한 키의 저장 및 복구 방법과 그 장치
WO2019051902A1 (zh) 终端控制方法、空调器及计算机可读存储介质
WO2019100531A1 (zh) 数字签名生成、验证方法及其设备和存储介质
WO2017054488A1 (zh) 电视播放控制方法、服务器及电视播放控制***
WO2017012200A1 (zh) 基于电子诊疗单的诊疗机构识别方法和网络医院平台
WO2016101698A1 (zh) 基于dlna技术实现屏幕推送的方法及***

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18845703

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 07.09.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18845703

Country of ref document: EP

Kind code of ref document: A1