CN113065109A - Man-machine recognition method and device - Google Patents


Info

Publication number
CN113065109A
Authority
CN
China
Prior art keywords
human
sensor information
computer
sample data
classification model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110436880.7A
Other languages
Chinese (zh)
Inventor
蒲鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202110436880.7A
Publication of CN113065109A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 User authentication
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01D MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D 21/00 Measuring or testing not otherwise provided for
    • G01D 21/02 Measuring two or more variables by means not covered by a single other subclass
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F 18/24147 Distances to closest patterns, e.g. nearest neighbour classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a human-computer recognition method and device, relating to the technical field of artificial intelligence and also applicable in the financial field. The method comprises: acquiring a sensor information group of a target terminal device; and obtaining a human-computer recognition result for a target operation at the target terminal device according to the sensor information group and a preset human-computer classification model. The preset human-computer classification model is obtained by training a k-nearest-neighbour classification model based on a dynamic time warping algorithm on a plurality of historical sensor information groups and the actual human-computer recognition results corresponding to those groups. The method and device can improve the accuracy and efficiency of human-computer recognition.

Description

Man-machine recognition method and device
Technical Field
The application relates to the technical field of artificial intelligence, in particular to a man-machine identification method and device.
Background
Human-machine recognition is one of the most widely applied technologies in the field of Machine Learning (ML). In essence, it uses collected environmental information and specific technical means to recognize whether an operation was initiated by a human or by a machine. Its usage scenarios span almost all information-system registrations, logins, information modifications, and the like. For example, in an online marketing campaign that distributes coupons according to a mobile phone number entered by the user, black-market operators may use large numbers of phone numbers and robots to grab coupons quickly and in bulk. To prevent this, the system must judge, upon receiving each submission, whether the current submission is a human behaviour or a machine behaviour; machine behaviour causes losses to the enterprise and constitutes an illegal operation.
A common human-machine recognition method is to generate a verification code at the server and require the user to enter it correctly on submission: correct input of the verification code indicates human operation, while failure to enter it correctly causes the operation to be judged non-human. However, once deep-learning techniques are applied to verification-code recognition, verification codes struggle to distinguish human from machine operation: a new type of verification code can be learned and recognized by deep-learning models within a short time. Moreover, overly complex verification codes in certain scenarios degrade the user experience and increase the burden on normal users.
Disclosure of Invention
Aiming at the problems in the prior art, the application provides a man-machine identification method and device, which can improve the accuracy and efficiency of man-machine identification.
In order to solve the technical problem, the present application provides the following technical solutions:
in a first aspect, the present application provides a human-machine recognition method, including:
acquiring a sensor information group of target terminal equipment;
obtaining a human-computer recognition result of target operation at the target terminal equipment according to the sensor information group and a preset human-computer classification model;
the preset human-computer classification model is obtained by training a k-nearest-neighbour classification model based on a dynamic time warping algorithm on a plurality of historical sensor information groups and the actual human-computer recognition results corresponding to the historical sensor information groups.
Further, before the acquiring the sensor information group of the target terminal device, the method further includes:
obtaining a sample data set, wherein each piece of sample data in the sample data set comprises: a unique historical sensor information group and its corresponding actual human-machine recognition result, the result being: human operation or machine operation;
and training a k-nearest-neighbour classification model based on a dynamic time warping algorithm on the sample data set to obtain the human-computer classification model.
Further, each set of historical sensor information includes: repeatedly acquired historical sensor information corresponding to one historical operation;
correspondingly, the acquiring the sample data set includes:
acquiring a sample data set;
deleting from the sample data set the sample data whose number of acquisitions does not fall within a preset acquisition-count range;
and carrying out normalization processing on the historical sensor information group in the sample data set.
Further, the acquiring of the sensor information group of the target terminal device includes:
acquiring sensor information acquired for multiple times within an acquisition time range, wherein all the sensor information forms a sensor information group, and the acquisition time range comprises: the start time to the end time of the target operation.
Further, each piece of sensor information includes: direction sensor, linear acceleration sensor, gravity sensor, acceleration sensor, gyroscope, and magnetic sensor information.
Further, the acquiring sensor information acquired multiple times within the acquisition time range includes:
and acquiring current sensor information within the acquisition time range every time the sensor information is changed.
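This change-triggered acquisition can be sketched roughly as follows; the class and method names are hypothetical illustrations, not from the patent, and the sketch only models the logic of storing a reading whenever the sensor values actually change between the start and end of the target operation:

```python
class CaptureSession:
    """Hypothetical sketch: accumulate a sensor information group by
    recording a reading only when the sensor values change, between
    input-box focus gained (start) and focus lost (end)."""

    def __init__(self):
        self.readings = []   # the sensor information group being built
        self._last = None

    def on_sensor_changed(self, reading):
        """Called by the platform whenever any sensor value changes."""
        reading = list(reading)
        if reading != self._last:       # store only genuine changes
            self.readings.append(reading)
            self._last = reading

    def on_focus_lost(self):
        """End of the target operation: return the finished group."""
        return self.readings
```

A repeated identical reading is skipped, so the stored sequence grows only on real sensor changes.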
In a second aspect, the present application provides a human-machine recognition device, comprising:
the acquisition module is used for acquiring a sensor information group of the target terminal equipment;
the identification module is used for obtaining a human-computer identification result of the target operation at the target terminal equipment according to the sensor information group and a preset human-computer classification model;
the preset human-computer classification model is obtained by training a k-nearest-neighbour classification model based on a dynamic time warping algorithm on a plurality of historical sensor information groups and the actual human-computer recognition results corresponding to the historical sensor information groups.
Further, the man-machine recognition device further comprises:
a sample obtaining module, configured to obtain a sample data set, where each piece of sample data in the sample data set comprises: a unique historical sensor information group and its corresponding actual human-machine recognition result, the result being: human operation or machine operation;
and the training module is used for training a k-nearest-neighbour classification model based on a dynamic time warping algorithm on the sample data set to obtain the human-computer classification model.
In a third aspect, the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the human-machine identification method when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon computer instructions that, when executed, implement the human-machine identification method.
According to the technical scheme, the application provides a human-machine recognition method and device. The method comprises: acquiring a sensor information group of a target terminal device; and obtaining a human-computer recognition result for a target operation at the target terminal device according to the sensor information group and a preset human-computer classification model, wherein the preset human-computer classification model is obtained by training a k-nearest-neighbour classification model based on a dynamic time warping algorithm on a plurality of historical sensor information groups and their corresponding actual human-computer recognition results. The scheme can improve the accuracy and efficiency of human-computer recognition. It requires no subjective participation by the user, which improves user experience and overcomes the failure of verification-code technology; it can effectively prevent fraud and avoid property losses for enterprises.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flow chart of a man-machine recognition method in an embodiment of the present application;
FIG. 2 is a schematic flow chart of steps 001 and 002 of the man-machine identification method in the embodiment of the present application;
FIG. 3 is a schematic flow chart of a model training process in an application example of the present application;
FIG. 4 is a flow chart illustrating a model application process in an application example of the present application;
FIG. 5 is a schematic structural diagram of a human-machine recognition device in an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a human-machine recognition device in an application example of the present application;
fig. 7 is a schematic block diagram of a system configuration of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Based on this, in order to improve the accuracy and efficiency of human-computer recognition, an embodiment of the present application provides a human-computer recognition apparatus, which may be a server or a client device. The client device may include a smart phone, a tablet electronic device, a network set-top box, a portable computer, a desktop computer, a Personal Digital Assistant (PDA), an in-vehicle device, an intelligent wearable device, and the like. The intelligent wearable device may include smart glasses, a smart watch, a smart bracelet, and the like.
In practical applications, the human-machine recognition part may be performed on the server side as described above, or all operations may be completed in the client device; the choice depends on the processing capability of the client device, restrictions of the user's usage scenario, and the like, and this application is not limited in this respect. If all operations are completed in the client device, the client device may further include a processor.
The client device may have a communication module (i.e., a communication unit), and may be communicatively connected to a remote server to implement data transmission with the server. The server may include a server on the task scheduling center side, and in other implementation scenarios, the server may also include a server on an intermediate platform, for example, a server on a third-party server platform that is communicatively linked to the task scheduling center server. The server may include a single computer device, or may include a server cluster formed by a plurality of servers, or a server structure of a distributed apparatus.
The server and the client device may communicate using any suitable network protocol, including network protocols not yet developed at the filing date of this application. The network protocol may include, for example, a TCP/IP protocol, a UDP/IP protocol, an HTTP protocol, an HTTPS protocol, or the like. Of course, the network Protocol may also include, for example, an RPC Protocol (Remote Procedure Call Protocol), a REST Protocol (Representational State Transfer Protocol), and the like used above the above Protocol.
It should be noted that the human-machine recognition method and apparatus disclosed in the present application can be used in the fields of biometric identification and financial technology, and can also be used in any fields other than the fields of biometric identification and financial technology.
The following examples are intended to illustrate the details.
In order to improve the accuracy and efficiency of human-computer recognition, this embodiment provides a human-computer recognition method whose execution subject is a human-computer recognition apparatus, including but not limited to a server. As shown in fig. 1, the method specifically includes the following contents:
step 100: and acquiring a sensor information group of the target terminal equipment.
Specifically, the target terminal device may be a smart phone equipped with: direction sensors, linear acceleration sensors, gravity sensors, acceleration sensors, gyroscopes, magnetic sensors, and the like. The sensor information group comprises sensor information acquired multiple times and corresponding to the target operation at the target terminal device; all acquisitions together form one sensor information group. Each piece of sensor information may include the readings acquired by the direction sensor, linear acceleration sensor, gravity sensor, acceleration sensor, gyroscope, and magnetic sensor of the target terminal device.
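As an illustration of this data layout (all names and values here are ours, not the patent's), one acquisition from the six three-axis sensors can be flattened into a single 18-dimensional vector, and the sensor information group is then the sequence of such vectors captured during the target operation:

```python
# Illustrative sketch: 6 sensor types x 3 axes (x, y, z) = 18 dimensions.
SENSOR_TYPES = [
    "orientation", "linear_acceleration", "gravity",
    "acceleration", "gyroscope", "magnetic",
]

def flatten_reading(reading):
    """Flatten {sensor: (x, y, z)} into one 18-dimensional vector."""
    return [axis for name in SENSOR_TYPES for axis in reading[name]]

# One acquisition (values shortened from the example given later in the text).
reading = {
    "orientation": (64.6, -40.29, 39.89),
    "linear_acceleration": (7.85, 5.10, -7.28),
    "gravity": (6.29, 4.87, 5.74),
    "acceleration": (13.88, 8.92, -1.71),
    "gyroscope": (136.43, -692.93, -198.83),
    "magnetic": (-39.63, 3.0, 1.69),
}
vector = flatten_reading(reading)
assert len(vector) == 18
```

A sensor information group would then simply be a list of such 18-dimensional vectors, one per acquisition moment.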
Step 200: obtaining a human-computer recognition result for the target operation at the target terminal device according to the sensor information group and a preset human-computer classification model; the preset human-computer classification model is obtained by training a k-nearest-neighbour classification model based on a dynamic time warping algorithm on a plurality of historical sensor information groups and the actual human-computer recognition results corresponding to the historical sensor information groups.
The target operation is the operation of carrying out man-machine identification at this time; for example, the operation at the target terminal device may be an operation of inputting a mobile phone number in a mobile phone number input box of a mobile phone software interface of the target terminal device, and the mobile phone software may be a financial transaction system.
It can be understood that the historical sensor information set is a sensor information set obtained in advance and used for model training; the actual human-machine recognition result corresponding to the historical sensor information set may be a human operation or a machine operation, and the machine operation may be a robot arm operation.
The human-machine recognition method provided by this embodiment is suitable for various application scenarios, for example a marketing scenario that distributes coupons according to a mobile phone number entered by the user. There, acquisition of the sensor information group starts when the APP input box gains focus and ends when the input box loses focus. This prevents black-market operators from using robots to grab coupons quickly and in bulk, guarantees the marketing effect and protects the enterprise's interests, while also avoiding the harm to user experience caused by overly complex verification codes.
In order to further improve the reliability of the human-machine classification model, and further improve the reliability of human-machine classification by applying the reliable human-machine classification model, referring to fig. 2, in an embodiment of the present application, before step 100, the method further includes:
step 001: obtaining a sample data set, wherein each piece of sample data in the sample data set comprises: the only historical sensor information group and the corresponding actual man-machine identification result thereof are as follows: manual or machine operation.
Step 002: training a k-nearest-neighbour classification model based on a dynamic time warping algorithm on the sample data set to obtain the human-computer classification model.
In one example, collected sensor data were used for testing in a scenario where a volunteer and a robotic arm each entered a phone number on a mobile phone. The test results, for a sample size of 400, are shown in Table 1:
TABLE 1
[test-result table rendered as an image in the original publication; content not recoverable]
Judging from the test results, even for scenarios with little input, such as a user entering a mobile phone number (accuracy improves as the input content grows), human and machine operation can be recognized with high accuracy. The method is highly general, simplifies the human-machine recognition process, and can improve the accuracy and efficiency of human-machine recognition.
In order to further improve the reliability of the sample data set and improve the reliability of applying the dynamic time warping algorithm, in an embodiment of the present application, each group of historical sensor information includes: repeatedly acquired historical sensor information corresponding to one historical operation; correspondingly, step 001 includes:
step 010: and acquiring a sample data set.
Step 020: deleting from the sample data set the sample data whose number of acquisitions does not fall within a preset acquisition-count range.
Specifically, the preset acquisition-count range can be set according to actual needs, which is not limited in this application. Preferably, the preset acquisition-count range is set to an acquisition count greater than or equal to 30 and less than or equal to 100; sample data with an acquisition count less than 30 or greater than 100 are rejected.
Specifically, a historical operation corresponds to a unique set of historical sensor information, and the historical operation may be an operation that occurred prior to the target operation.
Step 030: and carrying out normalization processing on the historical sensor information group in the sample data set.
Specifically, the dispersion normalization method may be adopted to perform normalization processing on each dimension of the historical sensor information.
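The count filter of step 020 and the per-dimension dispersion (min-max) normalization of step 030 can be sketched as follows; the function names, and the choice to fit the minima and maxima over every reading of every retained sample, are our illustrative assumptions:

```python
def filter_samples(samples, min_count=30, max_count=100):
    """Keep only samples whose acquisition count is within [min_count, max_count]."""
    return [s for s in samples if min_count <= len(s) <= max_count]

def fit_min_max(samples, dims=18):
    """Per-dimension min/max over every reading of every sample."""
    lo = [float("inf")] * dims
    hi = [float("-inf")] * dims
    for seq in samples:
        for vec in seq:
            for d, v in enumerate(vec):
                lo[d] = min(lo[d], v)
                hi[d] = max(hi[d], v)
    return lo, hi

def normalize(seq, lo, hi):
    """Dispersion normalization: x' = (x - x_min) / (x_max - x_min)."""
    return [
        [(v - lo[d]) / (hi[d] - lo[d]) if hi[d] > lo[d] else 0.0
         for d, v in enumerate(vec)]
        for vec in seq
    ]
```

For example, with two-dimensional toy readings, `fit_min_max` on a retained sample `[[0.0, 10.0], [2.0, 20.0]]` yields minima `[0.0, 10.0]` and maxima `[2.0, 20.0]`, so the reading `[1.0, 15.0]` normalizes to `[0.5, 0.5]`.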
In order to further improve the accuracy of human-computer recognition in a scenario where the user inputs a mobile phone number with a small input amount, in an embodiment of the present application, step 100 includes:
step 1001: acquiring sensor information acquired for multiple times within an acquisition time range, wherein all the sensor information forms a sensor information group, and the acquisition time range comprises: the start time to the end time of the target operation.
Wherein each sensor information may include: direction sensor, linear acceleration sensor, gravity sensor, acceleration sensor, gyroscope, and magnetic sensor information.
To further improve the reliability of determining sensor information, in one embodiment of the present application, step 1001 comprises:
and acquiring current sensor information within the acquisition time range every time the sensor information is changed.
In order to achieve and maintain stable recognition accuracy while improving user experience, this application example of the human-machine recognition method operates on a mobile phone terminal using a machine-learning algorithm: it obtains various sensor data from the mobile phone terminal and classifies the collected data with an improved machine-learning algorithm. The example comprises model training and model application, described in detail as follows:
referring to fig. 3, model training includes:
s101: preparing sample data, namely collecting and arranging the sample data; s102: carrying out normalization processing on full-scale sample data; s103: KNN training is carried out based on DTW, and an optimal model, namely a training model, is output, and parameters are adjusted to enable the effect to be optimal.
Referring to fig. 4, the model application comprises:
s201: preparing data to be predicted, namely acquiring and arranging the data to be predicted; s202: normalizing data to be predicted according to sample data; s203: and inputting the data to be predicted into the trained model to calculate a prediction result based on the trained KNN model prediction result.
To further illustrate the solution, the present application provides an application example of another human-machine identification method, including:
step 1: and collecting sample data.
The sample data are data generated under human operation and machine operation; the content is the information of six types of mobile-phone sensors: a direction sensor, a linear acceleration sensor, a gravity sensor, an acceleration sensor, a gyroscope, and a magnetic sensor. Each sensor type provides information along the x, y, and z axes, giving 18 dimensions in total. The information collected at one time point is as follows:
{ [ three-dimensional direction information ], [ three-dimensional linear acceleration information ], [ three-dimensional gravity sensor information ], [ three-dimensional acceleration information ], [ three-dimensional gyroscope information ], [ three-dimensional magnetic force information ] }
{[64.5999984741211,-40.290000915527344,39.88999938964844],
[7.850800037384033,5.0976996421813965,-7.277699947357178],
[6.290299892425537,4.865699768066406,5.738100051879883],
[13.875222206115723,8.920432090759277,-1.7070082426071167],
[136.42800903320312,-692.9299926757812,-198.8300018310547],
[-39.625,3.0,1.6875]}。
The above is the information of one acquisition time point; the current sensor information is collected immediately whenever any sensor value changes.
Information acquisition at the mobile phone terminal starts when the APP input box gains focus and ends when the input box loses focus. One piece of sample data corresponds to multiple acquisition moments (whenever a sensor changes) and may contain up to 100 acquisitions. The format of one piece of sample data is as follows:
labeling: [ collected information 1, collected information 2.. collected information 100.. DEG ]
Manually entered data have a label value of 0; robotic-arm data have a label value of 1.
Step 2: and (6) normalization processing.
Remove sample data whose acquisition count is less than 30 or greater than 100. Meanwhile, since the numerical scales of the different sensors differ, each dimension of all sample data must be normalized so that the inter-sample distance computed with Dynamic Time Warping (DTW) is correct. The dispersion (min-max) normalization method is applied; the formula is as follows:
x' = (x − x_min) / (x_max − x_min)

where x denotes the current sensor value, x_min denotes the minimum value among the sample data of the same sensor type as x, and x_max denotes the maximum value among the sample data of the same sensor type as x.
The data samples after normalization are as follows: [{[0.196, 0.023, 0.13], [0.62, 0.32, 0.014], ..., [0.013, 3.0, 0.231]}, ..., {[0.23, 0.0341, 0.12], ..., [0.013, 3.0, 0.231]}]. After normalization, the sample data values all lie in the range [0, 1].
And step 3: and (5) training a model.
All the converted data samples are input to the model, together with their classification labels, for training. A DTW-based K-NN algorithm is applied: the essence of model training is to compute DTW-based similarity and to select a suitable K value by cross-validation so that the error rate is lowest. The similarity between two samples is calculated as follows:
r(i, j) = e(i, j) + min{ r(i−1, j−1), r(i−1, j), r(i, j−1) }
wherein r(i, j) represents the warped similarity between the first i points of sample X and the first j points of sample Y, and e(i, j) represents the Euclidean distance between the i-th sequence point of sample X and the j-th sequence point of sample Y, computed over the 18 sensor dimensions:

e(i, j) = sqrt( Σ_{k=1..18} (X[i][k] − Y[j][k])² )
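The recurrence above translates directly into code. This sketch assumes e(i, j) is the plain Euclidean distance between two readings, and initializes r(0, 0) = 0 with all other border cells infinite so that the warping path must start at the first points of both sequences:

```python
import math

def dtw_distance(x, y):
    """DTW similarity following the recurrence
    r(i, j) = e(i, j) + min{r(i-1, j-1), r(i-1, j), r(i, j-1)},
    where e(i, j) is the Euclidean distance between reading i of x
    and reading j of y (each reading a fixed-length vector)."""
    n, m = len(x), len(y)
    inf = float("inf")
    r = [[inf] * (m + 1) for _ in range(n + 1)]
    r[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            e = math.dist(x[i - 1], y[j - 1])   # Euclidean distance e(i, j)
            r[i][j] = e + min(r[i - 1][j - 1], r[i - 1][j], r[i][j - 1])
    return r[n][m]
```

Because DTW allows one point of a sequence to align with several points of the other, a sequence and its time-stretched copy have distance 0, which is exactly why DTW suits variable-length, variable-rate sensor traces.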
and training and selecting a proper K value in a cross validation mode to serve as a final model. And splitting part of training data and verification data according to the ratio of 6:4, taking the value of K from 3, gradually increasing, and taking the current value of K as a final model when the error rate is not reduced.
And 4, step 4: and (5) applying the model.
The data acquired during the user's input are converted into a normalized data sequence and input into the model for classification and recognition.
From the software aspect, in order to improve the accuracy and efficiency of human-machine recognition, the present application provides an embodiment of a human-machine recognition apparatus for implementing all or part of the contents of the human-machine recognition method, and referring to fig. 5, the human-machine recognition apparatus specifically includes the following contents:
and the acquisition module 01 is used for acquiring the sensor information group of the target terminal equipment.
The identification module 02 is used for obtaining a human-computer identification result for the target operation at the target terminal device according to the sensor information group and a preset human-computer classification model; the preset human-computer classification model is obtained by training a k-nearest-neighbour classification model based on a dynamic time warping algorithm on a plurality of historical sensor information groups and the actual human-computer recognition results corresponding to the historical sensor information groups.
In an embodiment of the present application, the human-machine recognition device further includes:
a sample obtaining module, configured to obtain a sample data set, where each piece of sample data in the sample data set comprises: a unique historical sensor information group and its corresponding actual human-machine recognition result, the result being: human operation or machine operation.
And the training module is used for training a k-nearest-neighbour classification model based on a dynamic time warping algorithm on the sample data set to obtain the human-computer classification model.
The embodiment of the human-machine recognition apparatus provided in this specification may be specifically configured to execute the processing procedure of the embodiment of the human-machine recognition method, and its functions are not described herein again, and refer to the detailed description of the embodiment of the human-machine recognition method.
For further explanation of the present solution, referring to fig. 6, the present application further provides an application example of a human-machine recognition apparatus. The data sending unit 10 is configured to collect sensor data and convert the data into a time-series representation; the data conversion unit 20 is configured to normalize the data of each dimension; the model calculation unit 30 is configured to calculate the prediction result; and the result receiving unit 40 is the application terminal that calls the model service. The function realized by the data sending unit 10 may correspond to the combined functions of the obtaining module and the sample obtaining module, and the function realized by the model calculation unit 30 may correspond to the combined functions of the recognition module and the training module described above.
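As an illustration of what the data conversion unit 20 might do, the sketch below z-score normalizes each sensor dimension of a (time, dimension) array so that dimensions with different physical units (e.g. m/s² vs. rad/s) become comparable before the DTW distance is computed. The choice of z-score normalization is an assumption; the application only states that each dimension is normalized:

```python
import numpy as np

def normalize_dimensions(series):
    """Z-score normalise each sensor dimension (column) of a (T, D) array.

    After this step every dimension has zero mean and unit variance,
    so no single sensor dominates the distance calculation."""
    series = np.asarray(series, dtype=float)
    mean = series.mean(axis=0)
    std = series.std(axis=0)
    std[std == 0] = 1.0  # constant dimensions: avoid division by zero
    return (series - mean) / std
```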
As described above, the human-machine recognition method and apparatus of the present application can improve the accuracy and efficiency of human-machine recognition; they require no deliberate participation by the user, which improves the user experience and remedies the shortcomings of captcha (verification code) technology; and they can effectively prevent fraud and avoid property losses for enterprises and others.
In terms of hardware, in order to improve the accuracy and efficiency of human-machine recognition, the present application provides an embodiment of an electronic device that implements all or part of the human-machine recognition method. The electronic device specifically includes the following:
a processor, a memory, a communications interface, and a bus, wherein the processor, the memory, and the communications interface communicate with one another through the bus; the communications interface is used to realize information transmission among the human-machine recognition device, the user terminal, and other related equipment. The electronic device may be a desktop computer, a tablet computer, a mobile terminal, or the like, but the embodiment is not limited thereto. In this embodiment, the electronic device may be implemented with reference to the embodiments of the human-machine recognition method and of the human-machine recognition apparatus, whose contents are incorporated herein; repeated details are not described again.
Fig. 7 is a schematic block diagram of the system configuration of an electronic device 9600 according to an embodiment of the present application. As shown in fig. 7, the electronic device 9600 may include a central processor 9100 and a memory 9140, the memory 9140 being coupled to the central processor 9100. It should be noted that fig. 7 is exemplary; other types of structures may also be used, in addition to or in place of this structure, to implement telecommunications or other functions.
In one or more embodiments of the present application, the human-machine recognition function can be integrated into the central processor 9100. The central processor 9100 may be configured to perform the following control:
Step 100: acquiring a sensor information group of the target terminal device.
Step 200: obtaining a human-machine recognition result for the target operation at the target terminal device according to the sensor information group and a preset human-machine classification model.
Step 300: the preset human-machine classification model is obtained by training a k-nearest-neighbor classification model based on a dynamic time warping algorithm with a plurality of historical sensor information groups and the actual human-machine recognition results corresponding to those groups.
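Step 100, combined with the acquisition scheme described later in the claims (a reading is collected each time a sensor value changes, within the window from the start to the end of the target operation), could be sketched as the following event-driven collector. The class and its method names are hypothetical illustrations, not the application's actual interface:

```python
class SensorGroupCollector:
    """Builds one sensor information group for one target operation:
    between start and end of the operation, a new reading is appended
    each time any sensor value changes (a change-triggered sketch of
    step 100; field layout of a reading is illustrative)."""

    def __init__(self):
        self.readings = []   # the sensor information group being built
        self._last = None    # last recorded reading, to detect changes
        self._active = False # True only between operation start and end

    def start_operation(self):
        self._active = True

    def end_operation(self):
        """Close the acquisition window and return the finished group."""
        self._active = False
        return self.readings

    def on_sensor_event(self, reading):
        """Record a reading only inside the window and only if it changed."""
        if self._active and reading != self._last:
            self.readings.append(reading)
            self._last = reading
```

The resulting list of readings would then be normalized per dimension and fed to the preset classification model of step 200.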
From the above description, the electronic device provided by the embodiment of the application can improve the accuracy and efficiency of human-computer recognition.
In another embodiment, the human-machine recognition device may be configured separately from the central processor 9100; for example, it may be configured as a chip connected to the central processor 9100, with the human-machine recognition function realized under the control of the central processor.
As shown in fig. 7, the electronic device 9600 may further include: a communication module 9110, an input unit 9120, an audio processor 9130, a display 9160, and a power supply 9170. It should be noted that the electronic device 9600 need not include all of the components shown in fig. 7; it may also include components not shown in fig. 7, for which reference may be made to the prior art.
As shown in fig. 7, the central processor 9100, sometimes referred to as a controller or operation controller, may include a microprocessor or other processor device and/or logic device; the central processor 9100 receives input and controls the operation of each component of the electronic device 9600.
The memory 9140 may be, for example, one or more of a buffer, a flash memory, a hard drive, a removable medium, a volatile memory, a non-volatile memory, or other suitable devices. It may store information as well as programs for processing that information, and the central processor 9100 can execute such a program stored in the memory 9140 to realize information storage, processing, or the like.
The input unit 9120 provides input to the central processor 9100. The input unit 9120 is, for example, a key or a touch input device. Power supply 9170 is used to provide power to electronic device 9600. The display 9160 is used for displaying display objects such as images and characters. The display may be, for example, an LCD display, but is not limited thereto.
The memory 9140 can be a solid-state memory, e.g., a read-only memory (ROM), a random access memory (RAM), a SIM card, or the like. It may also be a memory that retains information even when powered off, and that can be selectively erased and provided with more data, an example of which is sometimes referred to as an EPROM or the like. The memory 9140 may also be some other type of device. The memory 9140 includes a buffer memory 9141 (sometimes referred to as a buffer) and an application/function storage portion 9142, the application/function storage portion 9142 being used to store application programs and function programs, or to execute the operation flow of the electronic device 9600 via the central processor 9100.
The memory 9140 may also include a data storage portion 9143 for storing data such as contacts, digital data, pictures, sounds, and/or any other data used by the electronic device. A driver storage portion 9144 of the memory 9140 may include various drivers of the electronic device for communication functions and/or for executing other functions of the electronic device (e.g., a messaging application, an address book application, etc.).
The communication module 9110 is a transmitter/receiver that transmits and receives signals via an antenna 9111. The communication module (transmitter/receiver) 9110 is coupled to the central processor 9100 to provide input signals and receive output signals, which may be the same as in the case of a conventional mobile communication terminal.
Based on different communication technologies, a plurality of communication modules 9110, such as a cellular network module, a bluetooth module, and/or a wireless local area network module, may be provided in the same electronic device. The communication module (transmitter/receiver) 9110 is also coupled to a speaker 9131 and a microphone 9132 via an audio processor 9130 to provide audio output via the speaker 9131 and receive audio input from the microphone 9132, thereby implementing ordinary telecommunications functions. The audio processor 9130 may include any suitable buffers, decoders, amplifiers and so forth. In addition, the audio processor 9130 is also coupled to the central processor 9100, thereby enabling recording locally through the microphone 9132 and enabling locally stored sounds to be played through the speaker 9131.
The above description shows that the electronic device provided by the embodiment of the application can improve the accuracy and efficiency of human-computer recognition.
Embodiments of the present application further provide a computer-readable storage medium capable of implementing all steps of the human-machine recognition method in the above embodiments. The computer-readable storage medium stores a computer program which, when executed by a processor, implements all steps of that method; for example, when executing the computer program, the processor implements the following steps:
Step 100: acquiring a sensor information group of the target terminal device.
Step 200: obtaining a human-machine recognition result for the target operation at the target terminal device according to the sensor information group and a preset human-machine classification model.
Step 300: the preset human-machine classification model is obtained by training a k-nearest-neighbor classification model based on a dynamic time warping algorithm with a plurality of historical sensor information groups and the actual human-machine recognition results corresponding to those groups.
As can be seen from the foregoing description, the computer-readable storage medium provided in the embodiments of the present application can improve accuracy and efficiency of human-machine recognition.
In the present application, the method embodiments are described in a progressive manner; for identical or similar parts, the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. For related details, reference is made to the description of the method embodiments.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The principle and implementation of the present application are explained herein through specific embodiments, and the description of the above embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, a person skilled in the art may, according to the idea of the present application, make changes to the specific embodiments and the scope of application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A human-machine recognition method, characterized by comprising:
acquiring a sensor information group of a target terminal device;
obtaining a human-machine recognition result for a target operation at the target terminal device according to the sensor information group and a preset human-machine classification model;
wherein the preset human-machine classification model is obtained by training a k-nearest-neighbor classification model based on a dynamic time warping algorithm with a plurality of historical sensor information groups and the actual human-machine recognition results corresponding to those groups.
2. The human-computer recognition method according to claim 1, further comprising, before the acquiring the set of sensor information of the target terminal device:
obtaining a sample data set, wherein each piece of sample data in the sample data set comprises a unique historical sensor information group and its corresponding actual human-machine recognition result, the result being either manual operation or machine operation;
and training a k-nearest-neighbor classification model based on a dynamic time warping algorithm with the sample data set to obtain the human-machine classification model.
3. The human-machine recognition method of claim 2, wherein each historical sensor information group comprises historical sensor information acquired multiple times for one corresponding historical operation;
correspondingly, the acquiring the sample data set includes:
acquiring a sample data set;
deleting, from the sample data set, any sample data whose number of collections does not fall within a preset collection-count range;
and carrying out normalization processing on the historical sensor information group in the sample data set.
4. The human-machine recognition method according to claim 1, wherein acquiring the sensor information group of the target terminal device comprises:
acquiring sensor information collected multiple times within an acquisition time range, all of the collected sensor information forming the sensor information group, wherein the acquisition time range extends from the start time to the end time of the target operation.
5. The human-machine recognition method according to claim 4, wherein each piece of sensor information comprises information from a direction sensor, a linear acceleration sensor, a gravity sensor, an acceleration sensor, a gyroscope, and a magnetic sensor.
6. The human-machine recognition method according to claim 4, wherein the acquiring sensor information collected multiple times within an acquisition time range comprises:
acquiring the current sensor information each time the sensor information changes within the acquisition time range.
7. A human-machine identification device, comprising:
the acquisition module is used for acquiring a sensor information group of the target terminal equipment;
the identification module is used for obtaining a human-computer identification result of the target operation at the target terminal equipment according to the sensor information group and a preset human-computer classification model;
the preset human-machine classification model is obtained by training a k-nearest-neighbor classification model based on a dynamic time warping algorithm with a plurality of historical sensor information groups and the actual human-machine recognition results corresponding to those groups.
8. The human-machine recognition device of claim 7, further comprising:
a sample obtaining module, configured to obtain a sample data set, where each piece of sample data in the sample data set comprises a unique historical sensor information group and its corresponding actual human-machine recognition result, the result being either manual operation or machine operation;
and a training module, configured to train a k-nearest-neighbor classification model based on a dynamic time warping algorithm with the sample data set to obtain the human-machine classification model.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the human-machine recognition method of any one of claims 1 to 6 when executing the program.
10. A computer-readable storage medium having computer instructions stored thereon, wherein the instructions, when executed, implement the human-machine recognition method of any one of claims 1 to 6.
CN202110436880.7A 2021-04-22 2021-04-22 Man-machine recognition method and device Pending CN113065109A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110436880.7A CN113065109A (en) 2021-04-22 2021-04-22 Man-machine recognition method and device


Publications (1)

Publication Number Publication Date
CN113065109A true CN113065109A (en) 2021-07-02

Family

ID=76567548


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113900889A (en) * 2021-09-18 2022-01-07 百融至信(北京)征信有限公司 Method and system for intelligently identifying APP manual operation
CN113900889B (en) * 2021-09-18 2023-10-24 百融至信(北京)科技有限公司 Method and system for intelligently identifying APP manual operation

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110059794A (en) * 2018-01-18 2019-07-26 北京京东金融科技控股有限公司 Man-machine recognition methods and device, electronic equipment, storage medium
US20190311114A1 (en) * 2018-04-09 2019-10-10 Zhongan Information Technology Service Co., Ltd. Man-machine identification method and device for captcha
CN110974641A (en) * 2019-12-24 2020-04-10 中南民族大学 Intelligent walking stick system integrating machine learning and Internet of things technology for blind people




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination