
Method, device, server and storage medium for identifying fraudulent users

Info

Publication number
CN111125658A
Authority
CN
China
Prior art keywords
user
identified
behavior data
fraudulent
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911410138.8A
Other languages
Chinese (zh)
Other versions
CN111125658B (en)
Inventor
钱信羽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Fenqile Network Technology Co ltd
Original Assignee
Shenzhen Fenqile Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Fenqile Network Technology Co ltd filed Critical Shenzhen Fenqile Network Technology Co ltd
Priority to CN201911410138.8A priority Critical patent/CN111125658B/en
Publication of CN111125658A publication Critical patent/CN111125658A/en
Application granted granted Critical
Publication of CN111125658B publication Critical patent/CN111125658B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 User authentication
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Embodiments of the invention provide a method, a device, a server and a storage medium for identifying a fraudulent user. The method for identifying a fraudulent user comprises the following steps: acquiring user behavior data of a user to be identified and reference behavior data of a reference user; determining the similarity between the user to be identified and the reference user based on a comparison result of the user behavior data and the reference behavior data; and judging, based on the similarity, whether the user to be identified is a fraudulent user. Because whether the user to be identified is a fraudulent user is determined from its similarity to the reference user, the accuracy of identifying fraudulent users is improved.

Description

Method, device, server and storage medium for identifying fraudulent users
Technical Field
The embodiment of the invention relates to the technical field of risk identification, in particular to a method, a device, a server and a storage medium for identifying a fraudulent user.
Background
Currently, a common way to identify fraudulent users is to perform unsupervised modeling (for example, clustering) on labeled data. In each of the resulting clusters, the proportion of known fraudulent users is calculated, and clusters with a high fraud proportion are judged to be bad groups. Modeling is then carried out on the characteristics of the fraudulent users in the bad groups to obtain a recognition model, and a user who needs to be identified is input into the recognition model to judge whether the user is a fraudulent user.
However, in this unsupervised modeling approach, commonly used algorithms such as KNN and k-means are themselves highly uncertain and unstable on high-dimensional data, so the results obtained are often unstable and the recognition is inaccurate.
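For context, the prior-art baseline described above can be sketched as follows. This is a hypothetical illustration only: the use of scikit-learn's KMeans, the variable names (`features`, `is_fraud_label`) and the threshold value are assumptions, not part of the patent.

```python
# Hypothetical sketch of the prior-art baseline: cluster users, then flag
# clusters whose known-fraud proportion is high. Names and values are assumed.
import numpy as np
from sklearn.cluster import KMeans

def find_bad_clusters(features, is_fraud_label, n_clusters=10, fraud_ratio_threshold=0.3):
    """Cluster user feature vectors and return the cluster ids whose
    proportion of known fraudulent users exceeds the threshold."""
    cluster_ids = KMeans(n_clusters=n_clusters, random_state=0).fit_predict(features)
    bad_clusters = []
    for c in range(n_clusters):
        members = (cluster_ids == c)
        if members.sum() == 0:
            continue
        fraud_ratio = is_fraud_label[members].mean()
        if fraud_ratio > fraud_ratio_threshold:
            bad_clusters.append(c)
    return cluster_ids, bad_clusters
```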
Disclosure of Invention
Embodiments of the invention provide a method, a device, a server and a storage medium for identifying a fraudulent user, so as to improve the accuracy of identifying fraudulent users.
In a first aspect, an embodiment of the present invention provides a method for identifying a fraudulent user, including:
acquiring user behavior data of a user to be identified and reference behavior data of a reference user;
determining the similarity between the user to be identified and the reference user based on the comparison result of the user behavior data and the reference behavior data;
and judging whether the user to be identified is a fraudulent user or not based on the similarity.
Optionally, the determining the similarity between the user to be identified and the reference user based on the comparison result between the user behavior data and the reference behavior data includes:
calculating a first feature vector corresponding to the user to be identified according to the user behavior data;
calculating a second feature vector corresponding to the reference user according to the reference behavior data;
calculating the feature distance of the first feature vector and the second feature vector;
and taking the characteristic distance as the similarity of the user to be identified and the reference user.
Optionally, the calculating a first feature vector corresponding to the user to be identified according to the user behavior data includes:
inputting the user behavior data into a convolutional neural network based on a first preset model;
and acquiring a first feature vector calculated based on the convolutional neural network.
Optionally, the determining, based on the similarity, whether the user to be identified is a fraudulent user includes:
judging whether the similarity is greater than a preset threshold value or not;
and when the similarity is greater than the preset threshold value, judging that the user to be identified is a fraudulent user.
Optionally, the determining, based on the similarity, whether the user to be identified is a fraudulent user includes:
grouping a plurality of users to be identified according to the similarity of each user to be identified to obtain at least one group to be identified, wherein the group to be identified corresponds to at least one user to be identified;
judging whether each group to be identified has a fraudulent user;
and when the fraudulent user exists in the group to be identified, judging that all the users to be identified corresponding to the group to be identified are the fraudulent users.
Optionally, the determining the similarity between the user to be identified and the reference user based on the comparison result between the user behavior data and the reference behavior data includes:
inputting the user behavior data and the reference behavior data serving as a recognition sample into a trained second preset model;
and obtaining a comparison result of the second preset model based on the user behavior data and the reference behavior data, and determining the similarity between the user to be identified and the reference user.
Optionally, before the inputting the user behavior data and the reference behavior data as a recognition sample into the trained second preset model, the method includes:
obtaining a plurality of training samples, each training sample comprising the reference behavior data and historical behavior data;
marking each training sample to obtain a plurality of marked training samples;
and training the second preset model based on the plurality of marked training samples to obtain a trained second preset model.
In a second aspect, an embodiment of the present invention provides an apparatus for identifying a fraudulent user, including:
the acquisition module is used for acquiring user behavior data of a user to be identified and reference behavior data of a reference user;
the similarity determining module is used for determining the similarity between the user to be identified and the reference user based on the comparison result of the user behavior data and the reference behavior data;
and the fraudulent user judging module is used for judging whether the user to be identified is a fraudulent user or not based on the similarity.
In a third aspect, an embodiment of the present invention provides a server, including:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement a method of identifying a fraudulent user according to any embodiment of the present invention.
In a fourth aspect, embodiments of the present invention provide a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements a method for identifying a fraudulent user according to any of the embodiments of the present invention.
Embodiments of the invention acquire user behavior data of a user to be identified and reference behavior data of a reference user; determine the similarity between the user to be identified and the reference user based on a comparison result of the user behavior data and the reference behavior data; and judge, based on the similarity, whether the user to be identified is a fraudulent user. This avoids the problem that commonly used methods such as KNN and k-means are highly uncertain and unstable on high-dimensional data, which makes their results unstable and their recognition inaccurate, and thereby improves the accuracy of identifying fraudulent users.
Drawings
Fig. 1 is a flow chart illustrating a method for identifying a fraudulent user according to an embodiment of the present invention;
fig. 2 is a schematic diagram of calculating a first feature vector through a convolutional neural network based on a first preset model according to an embodiment of the present invention;
fig. 3 is a flowchart illustrating a method for identifying a fraudulent user according to a second embodiment of the present invention;
fig. 4 is a flowchart of a method for identifying a fraudulent user according to a third embodiment of the present invention;
fig. 5 is a schematic architecture diagram of a second preset model according to a third embodiment of the present invention;
fig. 6 is a schematic structural diagram of an apparatus for identifying a fraudulent user according to a fourth embodiment of the present invention;
fig. 7 is a schematic structural diagram of a server according to a fifth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as a sequential process, many of the steps can be performed in parallel, concurrently or simultaneously. In addition, the order of the steps may be rearranged. A process may be terminated when its operations are completed, but may have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc.
Furthermore, the terms "first," "second," and the like may be used herein to describe various orientations, actions, steps, elements, or the like, but the orientations, actions, steps, or elements are not limited by these terms. These terms are only used to distinguish one direction, action, step or element from another direction, action, step or element. For example, the first feature vector may be referred to as a second feature vector, and similarly, the second feature vector may be referred to as the first feature vector, without departing from the scope of the present application. The first feature vector and the second feature vector are both feature vectors, but they are not the same feature vector. The terms "first", "second", etc. are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Example one
Fig. 1 is a flowchart of a method for identifying a fraudulent user according to an embodiment of the present invention, which is applicable to a scenario in which a fraudulent user is identified, where the method may be executed by an apparatus for identifying a fraudulent user, and the apparatus may be implemented in a software and/or hardware manner, and may be integrated on a server.
As shown in fig. 1, a method for identifying a fraudulent user according to an embodiment of the present invention includes:
and S110, acquiring user behavior data of the user to be identified and reference behavior data of a reference user.
The user to be identified refers to a user who needs to be identified. The user behavior data refers to the behavior data of the user to be identified. Optionally, the user behavior data includes, but is not limited to, the type of web page being browsed, the time spent at the web page, the time at which browsing of the web page started, and the like, and is not limited herein. The reference user refers to a user who is compared with the user to be identified. In this embodiment, the identity of the reference user is known. Optionally, the reference user may be a user without a fraud record, or may be a user with a fraud record (i.e. a fraudulent user), and is not limited herein. Preferably, the reference user is a user without a fraud record. Specifically, the behavior of fraudulent users varies widely, while the behavior of users without a fraud record varies little, so using a user without a fraud record as the reference user makes the identification of fraudulent users more accurate. The reference behavior data refers to the behavior data of the reference user. Similarly, the reference behavior data includes, but is not limited to, the type of web page browsed, the time spent at the web page, the time at which browsing started, and the like, and is not limited herein.
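As a minimal illustration of the kind of behavior record described above, the sketch below defines one possible data structure; the field names (`page_type`, `dwell_seconds`, `start_time`) are assumptions for illustration and are not prescribed by the patent.

```python
# Sketch of a single user-behavior record and a user's behavior sequence;
# field names are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class BehaviorEvent:
    page_type: str        # type of web page being browsed
    dwell_seconds: float  # time spent at the web page
    start_time: datetime  # time at which browsing of the page started

# A user's behavior data is a time-ordered sequence of such events.
UserBehaviorData = List[BehaviorEvent]
```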
S120, determining the similarity between the user to be identified and the reference user based on the comparison result of the user behavior data and the reference behavior data.
The comparison result refers to a result obtained by comparing the user behavior data with the reference behavior data. The similarity represents the similarity degree between the user to be identified and the reference user. Specifically, the smaller the value of the similarity is, the higher the similarity between the user to be identified and the reference user is. In this embodiment, the similarity is a numerical value between 0 and 1.
In an optional embodiment, the determining the similarity between the user to be identified and the reference user based on the comparison result between the user behavior data and the reference behavior data includes:
calculating a first feature vector corresponding to the user to be identified according to the user behavior data; calculating a second feature vector corresponding to the reference user according to the reference behavior data; calculating the feature distance of the first feature vector and the second feature vector;
and taking the characteristic distance as the similarity of the user to be identified and the reference user.
The first feature vector is the feature vector of the user behavior data of the user to be identified. The second feature vector is the feature vector of the reference behavior data of the reference user. The feature distance measures the degree of similarity between the first feature vector and the second feature vector. The smaller the feature distance, the more similar the first feature vector is to the second feature vector, i.e. the more similar the corresponding user to be identified is to the reference user. In this embodiment, the feature distance is taken as the similarity between the user to be identified and the reference user. Optionally, the feature distance is a cosine distance or a Euclidean distance, and is not specifically limited herein.
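The feature-distance computation described above can be sketched as follows, assuming the feature vectors are NumPy arrays; whether cosine or Euclidean distance is used is a configuration choice, as the text notes, and the function name is an assumption.

```python
# Sketch: use either cosine distance or Euclidean distance between the first
# and second feature vectors as the similarity value (smaller = more similar).
# Any scaling of the result into [0, 1] is not shown here.
import numpy as np

def feature_distance(first_vec: np.ndarray, second_vec: np.ndarray, metric: str = "cosine") -> float:
    if metric == "cosine":
        cos_sim = np.dot(first_vec, second_vec) / (
            np.linalg.norm(first_vec) * np.linalg.norm(second_vec) + 1e-12)
        return float(1.0 - cos_sim)          # cosine distance
    elif metric == "euclidean":
        return float(np.linalg.norm(first_vec - second_vec))
    raise ValueError("metric must be 'cosine' or 'euclidean'")
```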
In an optional embodiment, calculating the first feature vector corresponding to the user to be identified according to the user behavior data includes:
inputting the user behavior data into a convolutional neural network based on a first preset model; and acquiring a first feature vector calculated based on the convolutional neural network.
A Convolutional Neural Network (CNN) is a kind of feedforward neural network that involves convolution computation and has a deep structure, and is one of the representative algorithms of deep learning. A convolutional neural network has representation learning capability and can perform shift-invariant classification of input information according to its hierarchical structure. In this embodiment, optionally, the first preset model is an attention model. Specifically, both the user behavior data and the reference behavior data are time-series data, so the positions of the key behavior points differ. Through the attention mechanism, the attention model can focus better on the key points of the user behavior data and the reference behavior data, the obtained first feature vector is more accurate, and irrelevant behaviors have less influence on the feature distance calculation. Correspondingly, the second feature vector is calculated in the same way as the first feature vector: the reference behavior data is input into the convolutional neural network based on the first preset model to obtain the second feature vector. Specifically, the first preset model corresponding to the first feature vector and the first preset model corresponding to the second feature vector have the same model parameters and weights.
In particular, reference may be made to fig. 2. Fig. 2 is a schematic diagram of calculating a first feature vector through a convolutional neural network based on a first preset model according to an embodiment of the present invention. Referring to fig. 2, the user behavior data 100 of the user to be identified includes user behavior 1, user behavior 2, ..., and user behavior N. Each user behavior corresponds to a time, so user behavior 1, user behavior 2, ..., and user behavior N form a time sequence. Specifically, each user behavior includes one or more of the type of web page browsed, the time spent at the web page, and the time at which browsing started. After the user behavior data 100 is fed into the convolutional neural network 200 based on the first preset model, the convolutional neural network 200 computes the first feature vector 300 from the user behavior data 100.
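The following is a minimal PyTorch sketch in the spirit of Fig. 2: a 1-D convolution over the encoded behavior sequence followed by attention-weighted pooling that produces a feature vector. The class name, layer sizes, attention formulation and input encoding are assumptions for illustration, not the patent's concrete architecture.

```python
# Sketch of a convolutional encoder with an attention pooling step that maps a
# user's behavior sequence (batch x seq_len x feat_dim) to a feature vector.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BehaviorEncoder(nn.Module):
    def __init__(self, feat_dim: int, hidden_dim: int = 64, out_dim: int = 32):
        super().__init__()
        self.conv = nn.Conv1d(feat_dim, hidden_dim, kernel_size=3, padding=1)
        self.attn = nn.Linear(hidden_dim, 1)   # scores each time step
        self.proj = nn.Linear(hidden_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, feat_dim); Conv1d expects (batch, feat_dim, seq_len)
        h = F.relu(self.conv(x.transpose(1, 2)))        # (batch, hidden, seq_len)
        h = h.transpose(1, 2)                           # (batch, seq_len, hidden)
        weights = torch.softmax(self.attn(h), dim=1)    # attention over time steps
        pooled = (weights * h).sum(dim=1)               # attention-weighted sum
        return self.proj(pooled)                        # feature vector (batch, out_dim)
```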
S130, judging whether the user to be identified is a fraudulent user or not based on the similarity.
In this step, specifically, take the case where the reference user is a user without a fraud record as an example: the smaller the similarity value between the user to be identified and the reference user, the more likely the user to be identified is also a user without a fraud record; conversely, the larger the similarity value, the more likely the user to be identified is a fraudulent user. If instead the reference user is a fraudulent user, the smaller the similarity value, the more likely the user to be identified is fraudulent, and conversely, the larger the similarity value, the more likely the user to be identified is not fraudulent.
In an optional embodiment, the determining, based on the similarity, whether the user to be identified is a fraudulent user includes:
judging whether the similarity is greater than a preset threshold value or not; and when the similarity is greater than the preset threshold value, judging that the user to be identified is a fraudulent user.
The preset threshold is the condition used to judge whether the user to be identified is a fraudulent user. Specifically, since the reference user is a user without a fraud record, the user to be identified is judged to be a fraudulent user when the similarity is greater than the preset threshold. Optionally, the preset threshold may be set to a fixed value, for example any value between 0.5 and 1.0; optionally, the preset threshold is 0.7. Alternatively, the preset threshold may be determined from a batch: the similarities of a batch of users to be identified are calculated to obtain multiple similarity values, the values are sorted from large to small, and the lowest similarity within the top 20% is taken as the preset threshold. In this embodiment, the way the preset threshold is set and its specific value are not limited.
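A sketch of the batch-based way of setting the preset threshold described above: sort a batch of similarity values from large to small and take the lowest value within the top 20%. The 20% figure comes from the text; the function name and default argument are assumptions.

```python
# Sketch: derive the preset threshold from a batch of similarity values by
# taking the smallest similarity among the top 20% (sorted from large to small).
from typing import List

def preset_threshold_from_batch(similarities: List[float], top_fraction: float = 0.2) -> float:
    ranked = sorted(similarities, reverse=True)          # large to small
    cutoff_count = max(1, int(len(ranked) * top_fraction))
    return ranked[cutoff_count - 1]                      # lowest similarity in the top fraction
```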
According to the technical solution of this embodiment, user behavior data of a user to be identified and reference behavior data of a reference user are acquired; the similarity between the user to be identified and the reference user is determined based on a comparison result of the user behavior data and the reference behavior data; and whether the user to be identified is a fraudulent user is judged based on the similarity. Whether the user to be identified is fraudulent is judged by calculating the similarity, without unsupervised modeling, so the uncertainty that KNN exhibits on high-dimensional data is avoided and the accuracy of identifying fraudulent users is improved. In addition, the attention-based convolutional neural network used in this embodiment makes the calculated first feature vector more accurate, so the similarity calculation is more accurate and fraudulent users are identified more accurately.
Example two
Fig. 3 is a flowchart illustrating a method for identifying a fraudulent user according to a second embodiment of the present invention. The embodiment is further refined in the technical scheme, and is suitable for a scene of carrying out fraudulent user identification on a plurality of users to be identified. The method may be performed by a device for identifying fraudulent users, which may be implemented in software and/or hardware, and may be integrated on a server.
As shown in fig. 3, a method for identifying a fraudulent user according to the second embodiment of the present invention includes:
s210, obtaining user behavior data of a user to be identified and reference behavior data of a reference user, wherein the number of the users to be identified is multiple.
The user to be identified refers to a user who needs to be identified. The user behavior data refers to the behavior data of the user to be identified. Optionally, the user behavior data includes, but is not limited to, the type of web page being browsed, the time spent at the web page, the time at which browsing of the web page started, and the like, and is not limited herein. The reference user refers to a user who is compared with the user to be identified. In this embodiment, the reference behavior data refers to the behavior data of the reference user. Similarly, the reference behavior data includes, but is not limited to, the type of web page browsed, the time spent at the web page, the time at which browsing started, and the like, and is not limited herein. In this embodiment, there are a plurality of users to be identified. Specifically, the reference user of this embodiment is one of the plurality of users to be identified, and the identity of the reference user is uncertain, that is, it is not known whether the reference user is a fraudulent user.
S220, determining the similarity between the user to be identified and the reference user based on the comparison result of the user behavior data and the reference behavior data.
The comparison result refers to a result obtained by comparing the user behavior data with the reference behavior data. The similarity represents the similarity degree between the user to be identified and the reference user. Specifically, the smaller the value of the similarity is, the higher the similarity between the user to be identified and the reference user is. In this embodiment, since the number of the users to be identified is multiple, each user to be identified corresponds to a similarity with the reference user.
S230, grouping a plurality of users to be identified according to the similarity of each user to be identified to obtain at least one group to be identified, wherein the group to be identified corresponds to at least one user to be identified.
A group to be identified is a group obtained by grouping the plurality of users to be identified according to their similarities, and each group to be identified corresponds to at least one user to be identified. Specifically, in this embodiment the plurality of users to be identified may be grouped according to the similarity of each user to be identified by setting one threshold, for example 0.5: users to be identified with a similarity greater than 0.5 are placed in one group, and users to be identified with a similarity less than or equal to 0.5 are placed in another group. It is also possible to set two thresholds, for example 0.3 and 0.7: users to be identified with a similarity between 0 and 0.3 form one group, users with a similarity between 0.3 and 0.7 form another group, and users with a similarity between 0.7 and 1 form a third group. This embodiment does not specifically limit how the plurality of users to be identified are grouped according to the similarity to obtain the groups to be identified.
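A sketch of the grouping step, using the two-threshold variant mentioned above; the bucket boundaries 0.3 and 0.7 are the examples given in the text, while the helper name and return structure are assumptions.

```python
# Sketch: bucket users to be identified by their similarity to the reference
# user, using the example thresholds 0.3 and 0.7 from the text.
from collections import defaultdict
from typing import Dict, List

def group_by_similarity(user_ids: List[str], similarities: List[float],
                        bounds=(0.3, 0.7)) -> Dict[int, List[str]]:
    groups: Dict[int, List[str]] = defaultdict(list)
    for uid, sim in zip(user_ids, similarities):
        if sim <= bounds[0]:
            groups[0].append(uid)        # similarity in [0, 0.3]
        elif sim <= bounds[1]:
            groups[1].append(uid)        # similarity in (0.3, 0.7]
        else:
            groups[2].append(uid)        # similarity in (0.7, 1]
    return groups
```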
S240, judging whether each group to be identified has a fraudulent user.
In this step, optionally, one user to be identified in the group to be identified may be selected, and compared with the user without the fraud record, so as to obtain the comparison similarity. And when the comparison similarity is larger than a preset threshold value, the user to be identified is a fraudulent user, namely the group to be identified has the fraudulent user. The judgment may be made based on manual experience, and is not particularly limited herein.
S250, when the fraudulent user exists in the group to be identified, judging that all the users to be identified corresponding to the group to be identified are the fraudulent users.
Specifically, when there is a fraudulent user in a group to be identified, it is judged that all the users to be identified in that group are fraudulent users. Because the users to be identified are grouped according to similarity, the identities of all the users to be identified in a group can be determined as soon as the identity of one user in that group is determined, so users to be identified can be identified accurately even when only a small number of samples with confirmed identities are available.
According to the technical solution of this embodiment, user behavior data of a plurality of users to be identified and reference behavior data of a reference user are acquired; the similarity between each user to be identified and the reference user is determined based on a comparison result of the user behavior data and the reference behavior data; the plurality of users to be identified are grouped according to the similarity of each user to be identified to obtain at least one group to be identified, each corresponding to at least one user to be identified; whether each group to be identified contains a fraudulent user is judged; and when a group to be identified contains a fraudulent user, all the users to be identified in that group are judged to be fraudulent users. Whether a user to be identified is fraudulent is judged by calculating the similarity, without unsupervised modeling, so the uncertainty that KNN exhibits on high-dimensional data is avoided and the accuracy of identifying fraudulent users is improved. In addition, because the users to be identified are grouped according to similarity, the identities of all the users in a group can be determined as soon as the identity of one user in that group is determined, so fraudulent users can be identified accurately even when only a small number of samples with confirmed identities are available.
Example three
Fig. 4 is a flowchart illustrating a method for identifying a fraudulent user according to a third embodiment of the present invention. The embodiment is further refined in the technical scheme, and is suitable for determining the scene of the similarity between the user to be recognized and the reference user by using the trained model. The method may be performed by a device for identifying fraudulent users, which may be implemented in software and/or hardware, and may be integrated on a server.
As shown in fig. 4, a method for identifying a fraudulent user according to a third embodiment of the present invention includes:
s310, acquiring user behavior data of the user to be identified and reference behavior data of the reference user.
The user to be identified refers to a user who needs to be identified. The user behavior data refers to the behavior data of the user to be identified. Optionally, the user behavior data includes, but is not limited to, the type of web page being browsed, the time spent at the web page, the time at which browsing of the web page started, and the like, and is not limited herein. The reference user refers to a user who is compared with the user to be identified. In this embodiment, the identity of the reference user is known. Optionally, the reference user may be a user without a fraud record, or may be a user with a fraud record (i.e. a fraudulent user), and is not limited herein. Preferably, the reference user is a user without a fraud record. Specifically, the behavior of fraudulent users varies widely, while the behavior of users without a fraud record varies little, so using a user without a fraud record as the reference user makes the identification of fraudulent users more accurate. The reference behavior data refers to the behavior data of the reference user. Similarly, the reference behavior data includes, but is not limited to, the type of web page browsed, the time spent at the web page, the time at which browsing started, and the like, and is not limited herein.
S320, inputting the user behavior data and the reference behavior data serving as a recognition sample into a trained second preset model;
the second preset model is a model for calculating the identification sample to obtain a comparison result of the user behavior data and the reference behavior data so as to determine the similarity between the user to be identified and the reference user. Optionally, the second preset model is a twin network model. In this embodiment, specifically, the user behavior data and the reference behavior data are input to the trained second preset model as an integrated recognition sample, so as to obtain the similarity between the user to be recognized corresponding to the user behavior data and the reference user corresponding to the reference behavior data.
S330, obtaining a comparison result of the second preset model based on the user behavior data and the reference behavior data, and determining the similarity between the user to be identified and the reference user.
The comparison result refers to a result obtained by comparing the user behavior data with the reference behavior data. In this embodiment, the comparison result is determined according to the identification sample through the second predetermined model. The similarity represents the similarity degree between the user to be identified and the reference user. Specifically, the smaller the value of the similarity is, the higher the similarity between the user to be identified and the reference user is. In this embodiment, the similarity is a numerical value between 0 and 1.
In an optional embodiment, the determining the similarity between the user to be identified and the reference user based on the comparison result between the user behavior data and the reference behavior data includes:
calculating a first feature vector corresponding to the user to be identified according to the user behavior data; calculating a second feature vector corresponding to the reference user according to the reference behavior data; calculating the feature distance of the first feature vector and the second feature vector;
and taking the characteristic distance as the similarity of the user to be identified and the reference user.
The first feature vector is the feature vector of the user behavior data of the user to be identified. The second feature vector is the feature vector of the reference behavior data of the reference user. The feature distance measures the degree of similarity between the first feature vector and the second feature vector. The smaller the feature distance, the more similar the first feature vector is to the second feature vector, i.e. the more similar the corresponding user to be identified is to the reference user. In this embodiment, the feature distance is taken as the similarity between the user to be identified and the reference user. Optionally, the feature distance is a cosine distance or a Euclidean distance, and is not specifically limited herein.
In an optional embodiment, calculating the first feature vector corresponding to the user to be identified according to the user behavior data includes:
inputting the user behavior data into a convolutional neural network based on a first preset model; and acquiring a first feature vector calculated based on the convolutional neural network.
A Convolutional Neural Network (CNN) is a kind of feedforward neural network that involves convolution computation and has a deep structure, and is one of the representative algorithms of deep learning. A convolutional neural network has representation learning capability and can perform shift-invariant classification of input information according to its hierarchical structure. In this embodiment, optionally, the first preset model is an attention model. Specifically, both the user behavior data and the reference behavior data are time-series data, so the positions of the key behavior points differ. Through the attention mechanism, the attention model can focus better on the key points of the user behavior data and the reference behavior data, the obtained first feature vector is more accurate, and irrelevant behaviors have less influence on the feature distance calculation. Correspondingly, the second feature vector is calculated in the same way as the first feature vector: the reference behavior data is input into the convolutional neural network based on the first preset model to obtain the second feature vector. Specifically, the first preset model corresponding to the first feature vector and the first preset model corresponding to the second feature vector have the same model parameters and weights.
In this embodiment, the first preset model is a sub-model of the second preset model. Referring to fig. 5, fig. 5 is a schematic architecture diagram of the second preset model according to an embodiment of the present invention. As can be seen from fig. 5, the convolutional neural network 600 of the first preset model is a sub-model contained in the second preset model. Specifically, the user behavior data 400 of the user to be identified and the reference behavior data 500 of the reference user are input into the second preset model as one recognition sample; the convolutional neural network 600 of the first preset model contained in the second preset model computes a first feature vector 700 corresponding to the user behavior data and a second feature vector 800 corresponding to the reference behavior data; the similarity between the first feature vector and the second feature vector is then calculated to obtain the similarity between the user to be identified and the reference user.
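The following is a minimal PyTorch sketch in the spirit of Fig. 5: a twin (Siamese) arrangement in which one shared encoder (same parameters and weights) embeds both the user behavior data and the reference behavior data, and the two feature vectors are compared to give a similarity value in [0, 1], where smaller means more similar. It reuses the hypothetical `BehaviorEncoder` from the sketch in Example one; the distance-to-similarity mapping is an assumption, not the patent's exact formulation.

```python
# Sketch of the second preset model: two inputs pass through one shared encoder
# (the first preset model), and the distance between the resulting feature
# vectors is mapped to a similarity value in [0, 1].
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwinNetwork(nn.Module):
    def __init__(self, encoder: nn.Module):
        super().__init__()
        self.encoder = encoder                    # shared weights for both branches

    def forward(self, user_seq: torch.Tensor, reference_seq: torch.Tensor) -> torch.Tensor:
        first_vec = self.encoder(user_seq)        # first feature vector
        second_vec = self.encoder(reference_seq)  # second feature vector
        cos = F.cosine_similarity(first_vec, second_vec, dim=1)   # in [-1, 1]
        return (1.0 - cos) / 2.0                  # in [0, 1]; smaller = more similar
```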
S340, judging whether the user to be identified is a fraudulent user or not based on the similarity.
In this step, specifically, take the case where the reference user is a user without a fraud record as an example: the smaller the similarity value between the user to be identified and the reference user, the more likely the user to be identified is also a user without a fraud record; conversely, the larger the similarity value, the more likely the user to be identified is a fraudulent user. If instead the reference user is a fraudulent user, the smaller the similarity value, the more likely the user to be identified is fraudulent, and conversely, the larger the similarity value, the more likely the user to be identified is not fraudulent.
In the embodiment, the trained second preset model is used for calculating the user behavior data and the reference behavior data as one identification sample to obtain the comparison result of the user behavior data and the reference behavior data, so that the similarity between the user to be identified and the reference user is determined, and the identification efficiency is improved.
In an optional embodiment, before the inputting the user behavior data and the reference behavior data as a recognition sample to the trained second preset model, the method includes:
obtaining a plurality of training samples, each training sample comprising the reference behavior data and historical behavior data; marking each training sample to obtain a plurality of marked training samples; and training the second preset model based on the plurality of marked training samples to obtain a trained second preset model.
A training sample is a sample used to train the second preset model. Specifically, each training sample includes the reference behavior data and historical behavior data. Historical behavior data refers to the behavior data of a user whose identity is known. In particular, during training, the users of known identity need to include both fraudulent users and users without a fraud record. The reference behavior data may be the behavior data of a user without a fraud record, or may be the behavior data of a fraudulent user, and is not limited herein. Preferably, the reference behavior data is the behavior data of a user without a fraud record. Taking the case where the reference behavior data is the behavior data of a user without a fraud record as an example, each training sample, that is, the pair of reference behavior data and historical behavior data, is marked as follows: when the historical behavior data is the behavior data of a fraudulent user, the training sample is marked 1; when the historical behavior data is the behavior data of a user without a fraud record, the training sample is marked 0. After marking, a plurality of marked training samples are obtained. The second preset model is trained with the plurality of marked training samples to obtain the trained second preset model, which then has the ability to discriminate the similarity between samples.
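The labeling scheme above can be sketched as follows: each training sample pairs the reference behavior data with one user's historical behavior data and carries label 1 for a fraudulent user and 0 for a user without a fraud record. The patent does not name a specific loss or optimizer, so the binary cross-entropy objective, function names and tensor shapes below are assumptions for illustration.

```python
# Sketch: build labeled training pairs and run one training step of the twin
# network from the previous sketch. Loss and optimizer choices are assumed.
import torch
import torch.nn as nn

def make_training_pairs(reference_seq, histories, is_fraud_flags):
    """Each pair = (reference behavior data, one user's historical behavior data, label)."""
    return [(reference_seq, h, 1.0 if fraud else 0.0)
            for h, fraud in zip(histories, is_fraud_flags)]

def train_step(model, optimizer, batch):
    model.train()
    ref = torch.stack([p[0] for p in batch])     # (batch, seq_len, feat_dim)
    hist = torch.stack([p[1] for p in batch])
    labels = torch.tensor([p[2] for p in batch])
    similarity = model(hist, ref)                # in [0, 1]; larger = less similar
    loss = nn.functional.binary_cross_entropy(similarity, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```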
According to the technical solution of this embodiment, user behavior data of a user to be identified and reference behavior data of a reference user are acquired; the user behavior data and the reference behavior data are input into the trained second preset model as one recognition sample; the comparison result of the second preset model based on the user behavior data and the reference behavior data is obtained, and the similarity between the user to be identified and the reference user is determined; and whether the user to be identified is a fraudulent user is judged based on the similarity. Whether the user to be identified is fraudulent is judged by calculating the similarity, without unsupervised modeling, so the uncertainty that KNN exhibits on high-dimensional data is avoided and the accuracy of identifying fraudulent users is improved. In addition, the trained second preset model computes the user behavior data and the reference behavior data as one recognition sample to obtain their comparison result and thereby the similarity between the user to be identified and the reference user, which improves recognition efficiency.
Example four
Fig. 6 is a schematic structural diagram of a device for identifying a fraudulent user according to a fourth embodiment of the present invention, where this embodiment is applicable to a scenario of identifying a fraudulent user, and the device may be implemented in a software and/or hardware manner and may be integrated on a server.
As shown in fig. 6, the apparatus for identifying a fraudulent user provided by this embodiment may include an obtaining module 410, a similarity determining module 420, and a fraudulent user determining module 430, where:
an obtaining module 410, configured to obtain user behavior data of a user to be identified and reference behavior data of a reference user; a similarity determining module 420, configured to determine, based on a comparison result between the user behavior data and reference behavior data, a similarity between the user to be identified and a reference user; and a fraudulent user judgment module 430, configured to judge, based on the similarity, whether the user to be identified is a fraudulent user.
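Purely as an illustration of the module split described above, the apparatus can be sketched as a thin wrapper around the earlier functions; the class name, method names and threshold-based judgment are assumptions for illustration.

```python
# Sketch: the apparatus groups the acquisition, similarity-determination and
# fraud-judgment steps into three cooperating modules. Names are illustrative.
class FraudIdentificationApparatus:
    def __init__(self, acquire_fn, similarity_fn, threshold: float):
        self.acquire_fn = acquire_fn          # obtaining module
        self.similarity_fn = similarity_fn    # similarity determining module
        self.threshold = threshold            # used by the fraudulent user judging module

    def identify(self, user_id, reference_id) -> bool:
        user_data, reference_data = self.acquire_fn(user_id, reference_id)
        similarity = self.similarity_fn(user_data, reference_data)
        return similarity > self.threshold    # True = judged fraudulent
```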
Optionally, the similarity determining module 420 includes: the characteristic vector calculation unit is used for calculating a first characteristic vector corresponding to the user to be identified according to the user behavior data; calculating a second feature vector corresponding to the reference user according to the reference behavior data; a feature distance calculation unit for calculating a feature distance between the first feature vector and the second feature vector; and the similarity determining unit is used for taking the characteristic distance as the similarity of the user to be identified and the reference user.
Optionally, the feature vector calculation unit is specifically configured to input the user behavior data into a convolutional neural network based on a first preset model; and acquiring a first feature vector calculated based on the convolutional neural network.
Optionally, the fraudulent user determining module 430 is specifically configured to determine whether the similarity is greater than a preset threshold; and when the similarity is greater than the preset threshold value, judging that the user to be identified is a fraudulent user.
Optionally, the number of the users to be identified is multiple, and the fraudulent user determining module 430 includes: the grouping unit is used for grouping a plurality of users to be identified according to the similarity of each user to be identified to obtain at least one group to be identified, and the group to be identified corresponds to at least one user to be identified; the fraudulent user judging unit is used for judging whether each group to be identified has a fraudulent user; and when the fraudulent user exists in the group to be identified, judging that all the users to be identified corresponding to the group to be identified are the fraudulent users.
Optionally, the similarity determining module 420 is specifically configured to input the user behavior data and the reference behavior data as a recognition sample to a trained second preset model; and obtaining a comparison result of the second preset model based on the user behavior data and the reference behavior data, and determining the similarity between the user to be identified and the reference user.
Optionally, the apparatus further includes a training module, configured to obtain a plurality of training samples, where each training sample includes the reference behavior data and the historical behavior data; marking each training sample to obtain a plurality of marked training samples; and training the second preset model based on the plurality of marked training samples to obtain a trained second preset model.
The device for identifying a fraudulent user provided by this embodiment of the invention can execute the method for identifying a fraudulent user provided by any embodiment of the invention, and has functional modules corresponding to the method and the corresponding beneficial effects. For details not described in this embodiment, reference may be made to the description of any method embodiment of the invention.
Example five
Fig. 7 is a schematic structural diagram of a server according to a fifth embodiment of the present invention. FIG. 7 illustrates a block diagram of an exemplary server 612 suitable for use in implementing embodiments of the invention. The server 612 shown in fig. 7 is only an example, and should not bring any limitation to the function and the scope of the use of the embodiments of the present invention.
As shown in fig. 7, the server 612 is in the form of a general-purpose server. The components of server 612 may include, but are not limited to: one or more processors 616, a memory device 628, and a bus 618 that couples the various system components including the memory device 628 and the processors 616.
Bus 618 represents one or more of any of several types of bus structures, including a memory device bus or memory device controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an enhanced ISA bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
The server 612 typically includes a variety of computer system readable media. Such media can be any available media that is accessible by server 612 and includes both volatile and nonvolatile media, removable and non-removable media.
Storage device 628 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 630 and/or cache memory 632. The server 612 may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, storage system 634 may be used to read from and write to non-removable, non-volatile magnetic media (not shown in fig. 7, commonly referred to as a "hard drive"). Although not shown in fig. 7, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a Compact Disc Read-Only Memory (CD-ROM), a Digital Versatile Disc Read-Only Memory (DVD-ROM) or other optical media may be provided. In such cases, each drive may be connected to bus 618 by one or more data media interfaces. Storage device 628 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 640 having a set (at least one) of program modules 642 may be stored, for example, in storage 628, such program modules 642 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. The program modules 642 generally perform the functions and/or methods of the described embodiments of the present invention.
The server 612 may also communicate with one or more external devices 614 (e.g., a keyboard, a pointing device, a display 624, etc.), with one or more devices that enable a user to interact with the server 612, and/or with any device (e.g., a network card, a modem, etc.) that enables the server 612 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 622. Furthermore, the server 612 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 620. As shown in fig. 7, the network adapter 620 communicates with the other modules of the server 612 via the bus 618. It should be appreciated that although not shown, other hardware and/or software modules may be used in conjunction with the server 612, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID (Redundant Arrays of Independent Disks) systems, tape drives, data backup storage systems, and the like.
The processor 616 executes various functional applications and data processing by executing programs stored in the storage device 628, for example, implementing a method for identifying a fraudulent user provided by any embodiment of the present invention, which may include:
acquiring user behavior data of a user to be identified and reference behavior data of a reference user;
determining the similarity between the user to be identified and the reference user based on the comparison result of the user behavior data and the reference behavior data;
and judging whether the user to be identified is a fraudulent user or not based on the similarity.
According to the technical solution of this embodiment, user behavior data of a user to be identified and reference behavior data of a reference user are acquired; the similarity between the user to be identified and the reference user is determined based on a comparison result of the user behavior data and the reference behavior data; and whether the user to be identified is a fraudulent user is judged based on the similarity. Whether the user to be identified is fraudulent is judged by calculating the similarity, without unsupervised modeling, so the uncertainty that KNN exhibits on high-dimensional data is avoided and the accuracy of identifying fraudulent users is improved.
Example six
An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a method for identifying a fraudulent user according to any embodiment of the present invention, and the method may include:
acquiring user behavior data of a user to be identified and reference behavior data of a reference user;
determining the similarity between the user to be identified and the reference user based on the comparison result of the user behavior data and the reference behavior data;
and judging whether the user to be identified is a fraudulent user or not based on the similarity.
The computer-readable storage media of embodiments of the invention may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or terminal. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
According to the technical scheme of the embodiment of the invention, the user behavior data of the user to be identified and the reference behavior data of the reference user are acquired; the similarity between the user to be identified and the reference user is determined based on the comparison result of the user behavior data and the reference behavior data; and whether the user to be identified is a fraudulent user is judged based on the similarity. Because the judgment is made directly from the calculated similarity, no unsupervised modeling is required, the uncertainty that KNN-style algorithms introduce on high-dimensional data is avoided, and the technical effect of improving the accuracy of identifying fraudulent users is achieved.
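As a minimal illustration of the flow summarized above, the following Python sketch compares a candidate user's behavior features against those of a non-fraudulent reference user and flags the candidate when the feature distance (used as the similarity, in the sense of the claims below) exceeds a preset threshold. The feature names, the aggregation into a vector, and the threshold value are illustrative assumptions and are not taken from the patent.

import numpy as np

def behavior_vector(events):
    # Hypothetical behavior features; the patent does not prescribe a concrete schema.
    return np.array([
        events.get("login_count", 0),
        events.get("night_activity_ratio", 0.0),
        events.get("device_changes", 0),
        events.get("avg_session_seconds", 0.0),
    ], dtype=float)

def similarity(user_vec, reference_vec):
    # The claims take the feature distance between the two vectors as the similarity.
    return float(np.linalg.norm(user_vec - reference_vec))

def is_fraudulent(user_events, reference_events, threshold=3.0):
    # With a non-fraudulent reference user, a distance above the preset
    # threshold marks the user to be identified as fraudulent.
    return similarity(behavior_vector(user_events),
                      behavior_vector(reference_events)) > threshold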
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements and substitutions can be made without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail through the above embodiments, it is not limited to those embodiments and may include other equivalent embodiments without departing from its spirit, the scope of the invention being determined by the appended claims.

Claims (10)

1. A method of identifying a fraudulent user, comprising:
acquiring user behavior data of a user to be identified and reference behavior data of a reference user;
determining the similarity between the user to be identified and the reference user based on the comparison result of the user behavior data and the reference behavior data;
and judging whether the user to be identified is a fraudulent user or not based on the similarity.
2. The method for identifying a fraudulent user according to claim 1, wherein said determining the similarity between said user to be identified and a reference user based on the comparison result of said user behavior data and reference behavior data comprises:
calculating a first feature vector corresponding to the user to be identified according to the user behavior data;
calculating a second feature vector corresponding to the reference user according to the reference behavior data;
calculating the feature distance of the first feature vector and the second feature vector;
and taking the characteristic distance as the similarity of the user to be identified and the reference user.
3. The method for identifying a fraudulent user according to claim 2, wherein said calculating a first feature vector corresponding to said user to be identified according to said user behavior data includes:
inputting the user behavior data into a convolutional neural network based on a first preset model;
and acquiring a first feature vector calculated based on the convolutional neural network.
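One way to realize the first preset model of claims 2 and 3 is a small convolutional encoder whose output embedding serves as the feature vector, with the Euclidean distance between the two embeddings taken as the similarity. The PyTorch sketch below is only an illustration under that assumption; the layer sizes, channel counts, and sequence length are hypothetical and are not specified by the patent.

import torch
import torch.nn as nn

class BehaviorEncoder(nn.Module):
    # Illustrative "first preset model": a small 1-D CNN that maps a behavior
    # sequence (channels x time steps) to a fixed-length feature vector.
    def __init__(self, in_channels=8, embed_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(16, embed_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over the time axis
        )

    def forward(self, x):               # x: (batch, channels, steps)
        return self.conv(x).squeeze(-1)  # (batch, embed_dim)

encoder = BehaviorEncoder()
user_seq = torch.randn(1, 8, 30)        # 30 time steps of behavior features (assumed shape)
ref_seq = torch.randn(1, 8, 30)
with torch.no_grad():
    first_vec = encoder(user_seq)       # first feature vector for the user to be identified
    second_vec = encoder(ref_seq)       # second feature vector for the reference user
    distance = torch.dist(first_vec, second_vec)  # feature distance used as the similarity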
4. The method for identifying a fraudulent user according to claim 1, wherein the reference user is a user without a record of fraudulent behavior, and the judging whether the user to be identified is a fraudulent user based on the similarity includes:
judging whether the similarity is greater than a preset threshold value or not;
and when the similarity is greater than the preset threshold value, judging that the user to be identified is a fraudulent user.
5. The method for identifying a fraudulent user according to claim 1, wherein there are a plurality of users to be identified, and the judging whether the user to be identified is a fraudulent user based on the similarity includes:
grouping a plurality of users to be identified according to the similarity of each user to be identified to obtain at least one group to be identified, wherein the group to be identified corresponds to at least one user to be identified;
judging whether each group to be identified has a fraudulent user;
and when the fraudulent user exists in the group to be identified, judging that all the users to be identified corresponding to the group to be identified are the fraudulent users.
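A minimal sketch of the grouping described in claim 5, assuming the similarity scores have already been computed and using simple score bucketing as the grouping rule (the claim does not prescribe a particular grouping algorithm); any known fraudulent member causes the whole group to be flagged.

from collections import defaultdict

def group_by_similarity(similarities, bucket_width=0.5):
    # similarities: dict mapping user_id -> similarity score.
    # Users whose scores fall into the same bucket form one group to be identified.
    groups = defaultdict(list)
    for user_id, score in similarities.items():
        groups[int(score // bucket_width)].append(user_id)
    return list(groups.values())

def propagate_fraud_labels(groups, known_fraudsters):
    # If any member of a group is a known fraudulent user,
    # mark every user in that group as fraudulent.
    flagged = set()
    for group in groups:
        if any(u in known_fraudsters for u in group):
            flagged.update(group)
    return flagged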
6. The method for identifying a fraudulent user according to claim 1, wherein said determining the similarity between said user to be identified and a reference user based on the comparison result of said user behavior data and reference behavior data comprises:
inputting the user behavior data and the reference behavior data serving as a recognition sample into a trained second preset model;
and obtaining a comparison result of the second preset model based on the user behavior data and the reference behavior data, and determining the similarity between the user to be identified and the reference user.
7. The method for identifying a fraudulent user according to claim 6, further comprising, before the inputting of the user behavior data and the reference behavior data as a recognition sample into the trained second preset model:
obtaining a plurality of training samples, each training sample comprising the reference behavior data and historical behavior data;
marking each training sample to obtain a plurality of marked training samples;
and training the second preset model based on the plurality of marked training samples to obtain a trained second preset model.
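Claims 6 and 7 describe a trained second preset model that receives the user behavior data and the reference behavior data together as one recognition sample and returns the comparison result. The model family is not fixed by the patent; the sketch below uses synthetic features and a logistic-regression classifier purely as an assumed stand-in, with the training step mirroring the labeled reference/historical pairs of claim 7.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each sample pairs reference behavior features with
# historical behavior features; labels mark whether the historical user was fraudulent.
rng = np.random.default_rng(0)
reference_feats = rng.normal(size=(200, 6))
historical_feats = rng.normal(size=(200, 6))
labels = rng.integers(0, 2, size=200)                   # 1 = fraudulent, 0 = not

pairs = np.hstack([reference_feats, historical_feats])  # one recognition sample per pair

# "Second preset model": any trainable pairwise classifier; logistic regression here.
model = LogisticRegression(max_iter=1000).fit(pairs, labels)

# At inference, feed the candidate's behavior data together with the reference data
# and read the model's score as the similarity-based comparison result.
candidate_pair = np.hstack([reference_feats[:1], rng.normal(size=(1, 6))])
similarity_score = model.predict_proba(candidate_pair)[0, 1]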
8. An apparatus for identifying a fraudulent user, comprising:
the acquisition module is used for acquiring user behavior data of a user to be identified and reference behavior data of a reference user;
the similarity determining module is used for determining the similarity between the user to be identified and the reference user based on the comparison result of the user behavior data and the reference behavior data;
and the fraudulent user judging module is used for judging whether the user to be identified is a fraudulent user or not based on the similarity.
9. A server, comprising:
one or more processors;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of identifying a fraudulent user according to any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the method of identifying a fraudulent user of any one of claims 1 to 7.
CN201911410138.8A 2019-12-31 2019-12-31 Method, apparatus, server and storage medium for identifying fraudulent user Active CN111125658B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911410138.8A CN111125658B (en) 2019-12-31 2019-12-31 Method, apparatus, server and storage medium for identifying fraudulent user

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911410138.8A CN111125658B (en) 2019-12-31 2019-12-31 Method, apparatus, server and storage medium for identifying fraudulent user

Publications (2)

Publication Number Publication Date
CN111125658A true CN111125658A (en) 2020-05-08
CN111125658B CN111125658B (en) 2024-03-22

Family

ID=70506283

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911410138.8A Active CN111125658B (en) 2019-12-31 2019-12-31 Method, apparatus, server and storage medium for identifying fraudulent user

Country Status (1)

Country Link
CN (1) CN111125658B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190311377A1 (en) * 2017-02-20 2019-10-10 Ping An Technology (Shenzhen) Co., Ltd. Social security fraud behaviors identification method, device, apparatus and computer-readable storage medium
US20180302425A1 (en) * 2017-04-17 2018-10-18 Splunk Inc. Detecting fraud by correlating user behavior biometrics with other data sources
CN107958215A (en) * 2017-11-23 2018-04-24 深圳市分期乐网络科技有限公司 A kind of antifraud recognition methods, device, server and storage medium
CN108009531A (en) * 2017-12-28 2018-05-08 北京工业大学 A kind of face identification method of more tactful antifraud
CN108268624A (en) * 2018-01-10 2018-07-10 清华大学 User data method for visualizing and system
US20190295087A1 (en) * 2018-03-23 2019-09-26 Microsoft Technology Licensing, Llc System and method for detecting fraud in online transactions by tracking online account usage characteristics indicative of user behavior over time
CN108629593A (en) * 2018-04-28 2018-10-09 招商银行股份有限公司 Fraudulent trading recognition methods, system and storage medium based on deep learning
CN109461068A (en) * 2018-09-13 2019-03-12 深圳壹账通智能科技有限公司 Judgment method, device, equipment and the computer readable storage medium of fraud
CN110188602A (en) * 2019-04-17 2019-08-30 深圳壹账通智能科技有限公司 Face identification method and device in video

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112348519A (en) * 2020-10-21 2021-02-09 上海淇玥信息技术有限公司 Method and device for identifying fraudulent user and electronic equipment
CN112308703A (en) * 2020-11-02 2021-02-02 创新奇智(重庆)科技有限公司 User grouping method, device, equipment and storage medium
CN112365338A (en) * 2020-11-11 2021-02-12 平安普惠企业管理有限公司 Artificial intelligence-based data fraud detection method, device, terminal and medium
CN112365338B (en) * 2020-11-11 2024-03-22 天翼安全科技有限公司 Data fraud detection method, device, terminal and medium based on artificial intelligence
CN113362070A (en) * 2021-06-03 2021-09-07 中国工商银行股份有限公司 Method, apparatus, electronic device, and medium for identifying operating user
CN117455518A (en) * 2023-12-25 2024-01-26 连连银通电子支付有限公司 Fraudulent transaction detection method and device
CN117455518B (en) * 2023-12-25 2024-04-19 连连银通电子支付有限公司 Fraudulent transaction detection method and device

Also Published As

Publication number Publication date
CN111125658B (en) 2024-03-22

Similar Documents

Publication Publication Date Title
CN111125658B (en) Method, apparatus, server and storage medium for identifying fraudulent user
CN110728313B (en) Classification model training method and device for intention classification recognition
CN112990294B (en) Training method and device of behavior discrimination model, electronic equipment and storage medium
CN110826494A (en) Method and device for evaluating quality of labeled data, computer equipment and storage medium
CN112883257B (en) Behavior sequence data processing method and device, electronic equipment and storage medium
CN112149754B (en) Information classification method, device, equipment and storage medium
CN112818162A (en) Image retrieval method, image retrieval device, storage medium and electronic equipment
CN112883990A (en) Data classification method and device, computer storage medium and electronic equipment
CN112596964A (en) Disk failure prediction method and device
CN113313538A (en) User consumption capacity prediction method and device, electronic equipment and storage medium
CN110020638B (en) Facial expression recognition method, device, equipment and medium
CN111310065A (en) Social contact recommendation method and device, server and storage medium
CN111563429A (en) Drawing verification method and device, electronic equipment and storage medium
CN113762303B (en) Image classification method, device, electronic equipment and storage medium
CN111597336B (en) Training text processing method and device, electronic equipment and readable storage medium
CN110929285B (en) Method and device for processing private data
CN110826616B (en) Information processing method and device, electronic equipment and storage medium
CN115952800A (en) Named entity recognition method and device, computer equipment and readable storage medium
CN116956171A (en) Classification method, device, equipment and storage medium based on AI model
CN115511104A (en) Method, apparatus, device and medium for training a contrast learning model
CN110059180B (en) Article author identity recognition and evaluation model training method and device and storage medium
CN113127636B (en) Text clustering cluster center point selection method and device
CN110297989B (en) Test method, device, equipment and medium for anomaly detection
CN113407859B (en) Resource recommendation method and device, electronic equipment and storage medium
CN115237739B (en) Analysis method, device and equipment for board card running environment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant