CN114298417A - Anti-fraud risk assessment method, anti-fraud risk training method, anti-fraud risk assessment device, anti-fraud risk training device and readable storage medium - Google Patents

Anti-fraud risk assessment method, anti-fraud risk training method, anti-fraud risk assessment device, anti-fraud risk training device and readable storage medium

Info

Publication number
CN114298417A
CN114298417A (application CN202111640205.2A)
Authority
CN
China
Prior art keywords
fraud
risk
vector
risk assessment
app
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111640205.2A
Other languages
Chinese (zh)
Inventor
骆浩楠
龚妙岚
李嘉
周凯
章文康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Unionpay Co Ltd
Original Assignee
China Unionpay Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Unionpay Co Ltd filed Critical China Unionpay Co Ltd
Priority to CN202111640205.2A priority Critical patent/CN114298417A/en
Publication of CN114298417A publication Critical patent/CN114298417A/en
Priority to PCT/CN2022/117419 priority patent/WO2023124204A1/en
Priority to TW111137564A priority patent/TW202326537A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00 Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/04 Trading; Exchange, e.g. stocks, commodities, derivatives or currency exchange

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • General Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Development Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Physics & Mathematics (AREA)
  • Marketing (AREA)
  • General Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • Game Theory and Decision Science (AREA)
  • Finance (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Technology Law (AREA)
  • Educational Administration (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an anti-fraud risk assessment method, a training method, corresponding devices and a readable storage medium. The training method includes: obtaining a training sample set, where each training sample includes multi-dimensional features and a fraud label, the multi-dimensional features including user static characteristics, user behavior characteristics and equipment risk APP characteristics; and inputting the training sample set into an anti-fraud risk assessment model to be trained for iterative training. In each iteration, the anti-fraud risk assessment model embeds the input multi-dimensional features to obtain an input vector, feeds the input vector into a feature learning network built on a self-attention mechanism to obtain a weighted and fused coding vector, feeds the coding vector into a deep network to obtain a risk prediction result, and updates the parameters of the risk assessment model using a loss function constructed from the risk prediction result and the fraud label. By using the method, a better anti-fraud risk assessment effect can be obtained.

Description

Anti-fraud risk assessment method, anti-fraud risk training method, anti-fraud risk assessment device, anti-fraud risk training device and readable storage medium
Technical Field
The invention belongs to the field of anti-fraud, and particularly relates to an anti-fraud risk assessment method, a training method, a device and a readable storage medium.
Background
This section is intended to provide a background or context to the embodiments of the invention that are recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
With the development of telecommunication networks, real-time communication and fund transfers have become increasingly convenient, which has also created opportunities for fraudsters. Because users' awareness of fraud prevention before an incident is weak and recovering funds after an incident is difficult, prevention on the transaction side is particularly important.
However, at present, the identification of fraudulent transactions still lags behind and is not sufficiently accurate.
Disclosure of Invention
In view of the above problems in the prior art, an anti-fraud risk assessment method, a training method, an apparatus, and a readable storage medium are provided.
The present invention provides the following.
In a first aspect, a training method for an anti-fraud risk assessment model is provided, including: obtaining a training sample set, where each training sample includes multi-dimensional features and a fraud label, the multi-dimensional features including: user static characteristics, user behavior characteristics and equipment risk APP characteristics; and inputting the training sample set into the anti-fraud risk assessment model to be trained for iterative training. In each iteration, the anti-fraud risk assessment model embeds the input multi-dimensional features to obtain an input vector, feeds the input vector into a feature learning network built on a self-attention mechanism to obtain a weighted and fused coding vector, feeds the coding vector into a deep network to obtain a risk prediction result, and updates the parameters of the risk assessment model using a loss function constructed from the risk prediction result and the fraud label.
In one embodiment, a Transformer encoder is employed as the feature learning network, the Transformer encoder comprising a self-attention layer, a residual and normalization layer, a feed-forward network layer, and a summation and normalization layer.
In one embodiment, the method further comprises: obtaining usage time sequence information of the equipment risk APPs, and obtaining, based on the usage time sequence, the usage correlation between each risk APP used by the user equipment and the current funds APP; performing time sequence coding on the usage time sequence information using the position coding mechanism of the Transformer encoder to obtain a time sequence vector, and combining the time sequence vector with the usage correlation corresponding to each risk APP to obtain a time sequence strength vector; and combining the time sequence strength vector with the input vector corresponding to the equipment risk APP characteristics, and inputting the combined vector into the self-attention layer.
In one embodiment, performing time sequence coding on the usage time sequence information using the position coding mechanism of the Transformer encoder further comprises: the time sequence coding rule is defined by the following formulas:

TE(t, 2i) = sin(t / 10000^(2i / d_model))

TE(t, 2i+1) = cos(t / 10000^(2i / d_model))

where TE(t, 2i) is the 2i-th dimension of the time sequence coding vector for time t, TE(t, 2i+1) is the (2i+1)-th dimension of the time sequence coding vector for time t, and d_model is the dimension of the time sequence coding vector.
In one embodiment, the method further comprises: acquiring global risk APPs, and acquiring related and/or similar other APPs by utilizing the attribute information of each risk APP so as to expand the global risk APPs; the attribute information includes one or more of: developer information, name information, APP introduction information.
In one embodiment, obtaining the training data set further comprises: collecting user transaction behavior information through embedded tracking points (instrumentation), where the user transaction behavior data includes: transaction location IP and transaction counterparty information; and periodically collecting APP usage information from the user equipment, determining the risk APPs used by the user equipment according to the global risk APP list, and obtaining the equipment risk APP characteristics.
In one embodiment, the multi-dimensional features further comprise: text features, the text features including transaction remark (message) information.
In one embodiment, the deep network employs a machine learning model such as a random forest or XGBoost.
In one embodiment, a transaction amount weighting factor is provided in the loss function.
In a second aspect, an anti-fraud risk assessment method is provided, including: acquiring real-time transaction information, wherein the real-time transaction information comprises: user static characteristics, user behavior characteristics and equipment risk APP characteristics; inputting the real-time transaction information into an anti-fraud risk assessment model, executing embedding processing on the input real-time transaction information by the anti-fraud risk assessment model to obtain an input vector, inputting the input vector into a feature learning network constructed based on an attention mechanism to obtain a coding vector, and inputting the coding vector into a depth network to obtain a risk prediction result; wherein the anti-fraud risk assessment model is trained using the method of the first aspect.
In one embodiment, the method further comprises: and if the risk prediction result meets the preset condition, performing corresponding interference processing and/or alarm processing based on the real-time transaction information.
In one embodiment, the method further comprises: updating the training sample set based on the risk prediction result and the real-time transaction information; constructing a user transaction relationship graph based on the training sample set updated in real time, wherein the user transaction relationship graph takes users as nodes and transaction relationships among users as edges; mining group nodes and/or group transactions from the user transaction relationship graph through a clustering algorithm and/or a graph attention algorithm; identifying hidden fraud samples from the training sample set based on the group nodes and/or group transactions; and updating and training the risk assessment prediction model based on the fed-back hidden fraud samples.
In a third aspect, a training device for an anti-fraud risk assessment model is provided, which includes: the acquisition module is used for acquiring a training sample set, the training sample comprises multi-dimensional features and fraud tags thereof, and the multi-dimensional features comprise: user static characteristics, user behavior characteristics and equipment risk APP characteristics; the training module is used for inputting the training sample set into an anti-fraud risk assessment model to be trained for iterative training; in each iteration, the anti-fraud risk assessment model performs embedding processing on input multi-dimensional features to obtain an input vector, the input vector is input into a feature learning network constructed based on a self-attention mechanism to obtain a weighted and fused coding vector, the coding vector is input into a depth network to obtain a risk prediction result, and parameters of the risk assessment model are updated by using the risk prediction result and a loss function constructed by a fraud tag.
In a fourth aspect, an anti-fraud risk assessment apparatus is provided, including: the acquisition module is used for acquiring real-time transaction information, and the real-time transaction information comprises: user static characteristics, user behavior characteristics and equipment risk APP characteristics; the evaluation module is used for inputting the real-time transaction information into the anti-fraud risk evaluation model, the anti-fraud risk evaluation model executes embedding processing on the input real-time transaction information to obtain an input vector, the input vector is input into a feature learning network constructed based on an attention mechanism to obtain a coding vector, and the coding vector is input into a depth network to obtain a risk prediction result; wherein the anti-fraud risk assessment model is trained using the method of the first aspect.
In a fifth aspect, a training device for an anti-fraud risk assessment model is provided, which includes: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform: the method of the first aspect.
In a sixth aspect, an anti-fraud risk assessment apparatus is provided, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform: the method of the second aspect.
In a seventh aspect, a computer-readable storage medium is provided, which stores a program that, when executed by a multi-core processor, causes the multi-core processor to perform the method according to the first aspect and/or the method according to the second aspect.
One of the advantages of the above embodiment is that a better anti-fraud risk assessment effect can be obtained.
Other advantages of the present invention will be explained in more detail in conjunction with the following description and the accompanying drawings.
It should be understood that the above description is only an overview of the technical solutions of the present invention, so as to clearly understand the technical means of the present invention, and thus can be implemented according to the content of the description. In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
The advantages and benefits herein, as well as other advantages and benefits, will be apparent to one of ordinary skill in the art upon reading the following detailed description of the exemplary embodiments. The drawings are only for purposes of illustrating exemplary embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like elements throughout. In the drawings:
FIG. 1 is a schematic structural diagram of a training device of an anti-fraud risk assessment model according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a method for training an anti-fraud risk assessment model according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a training process of an anti-fraud risk assessment model according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating an anti-fraud risk assessment method according to an embodiment of the invention;
FIG. 5 is a schematic diagram illustrating a usage process of an anti-fraud risk assessment model according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a training apparatus of an anti-fraud risk assessment model according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an anti-fraud risk assessment apparatus according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a training apparatus of an anti-fraud risk assessment model according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an anti-fraud risk assessment apparatus according to an embodiment of the present invention.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
In the description of the embodiments of the present application, it is to be understood that terms such as "including" or "having" are intended to indicate the presence of the features, numbers, steps, actions, components, parts, or combinations thereof disclosed in the specification, and are not intended to preclude the presence or addition of one or more other features, numbers, steps, actions, components, parts, or combinations thereof.
Unless otherwise stated, "/" indicates an OR meaning, e.g., A/B may indicate A or B; "and/or" herein is merely an association describing an associated object, and means that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone.
The terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first," "second," etc. may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present application, "a plurality" means two or more unless otherwise specified.
All code in this application is exemplary and variations will occur to those skilled in the art based upon the programming language used, the specific needs and personal habits without departing from the spirit of the application.
As shown in fig. 1, fig. 1 is a schematic structural diagram of a hardware operating environment according to an embodiment of the present invention.
It should be noted that fig. 1 is a schematic structural diagram of the hardware operating environment of a training device of an anti-fraud risk assessment model. The training device of the embodiment of the invention can be a terminal device such as a PC or a portable computer.
As shown in fig. 1, the training device of the anti-fraud risk assessment model may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, a communication bus 1002. Wherein a communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may include a Display screen (Display), an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the training device configuration of the anti-fraud risk assessment model shown in FIG. 1 does not constitute a limitation of the device, and may include more or fewer components than shown, or some components in combination, or a different arrangement of components.
As shown in FIG. 1, memory 1005, which is one type of computer storage medium, may include an operating system, a network communication module, a user interface module, and a training program for an anti-fraud risk assessment model. The operating system is a program for managing and controlling the hardware and software resources of the training device of the anti-fraud risk assessment model, and supports the running of the training program of the anti-fraud risk assessment model and other software or programs.
In the training device of the anti-fraud risk assessment model shown in fig. 1, the user interface 1003 is mainly used for receiving requests, data and the like sent by connected terminals; the network interface 1004 is mainly used for connecting to and exchanging data with the background server; and the processor 1001 may be configured to call the training program of the anti-fraud risk assessment model stored in the memory 1005, and perform the following operations:
obtaining a training sample set, where each training sample includes multi-dimensional features and a fraud label, the multi-dimensional features including: user static characteristics, user behavior characteristics and equipment risk APP characteristics; inputting the training sample set into an anti-fraud risk assessment model to be trained for iterative training; and, in each iteration, the anti-fraud risk assessment model embeds the input multi-dimensional features to obtain an input vector, feeds the input vector into a feature learning network built on a self-attention mechanism to obtain a weighted and fused coding vector, feeds the coding vector into a deep network to obtain a risk prediction result, and updates the parameters of the risk assessment model using a loss function constructed from the risk prediction result and the fraud label.
Therefore, the attention mechanism is used for fusing multi-dimensional data such as user static characteristics, user behavior characteristics and equipment risk APP characteristics to predict risks, and a model with better anti-fraud risk assessment effect can be trained.
Fig. 2 is a schematic flowchart of a training method of an anti-fraud risk assessment model according to an embodiment of the present application, in which from a device perspective, an execution subject may be one or more electronic devices, and more specifically, may be a processing module; from the program perspective, the execution main body may accordingly be a program loaded on these electronic devices.
Referring to fig. 2, the method includes:
202. obtaining a training sample set, wherein the training sample comprises multi-dimensional features and fraud labels thereof, and the multi-dimensional features comprise: user static characteristics, user behavior characteristics and equipment risk APP characteristics;
the training sample set comprises a plurality of black and white samples, wherein the black samples refer to the training samples with the fraud label of 'yes', the white samples refer to the training samples with the fraud label of 'no', and each training sample is obtained according to the information of the transaction side.
For example, a training sample may be: user static characteristics (user A, gender, age, occupation), user behavior characteristics (transaction location IP, transaction counterparty information), and equipment risk APP characteristics (app_1, t_1, app_2, t_2, ..., t_{n-1}, app_n), where app_n is the transaction APP, app_1 and app_2 are risk APPs installed and used on the user equipment (i.e., APPs on the risk list), and t_1, t_2, ..., t_{n-1} are the usage intervals between adjacent APPs, from which the user's habit of using risk APPs can be seen.
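For readability, the following is a minimal, hypothetical sketch of how such a training sample could be laid out; all field names and values are illustrative assumptions and not a schema defined by this application.

```python
# Hypothetical layout of one training sample; keys and values are illustrative only.
sample = {
    "user_static": {"user_id": "A", "gender": "F", "age": 35, "occupation": "teacher"},
    "user_behavior": {"transaction_ip": "203.0.113.7", "counterparty": "merchant_123"},
    # Alternating risk-APP identifiers and usage intervals, ending with the transaction APP:
    # (app_1, t_1, app_2, t_2, ..., t_{n-1}, app_n)
    "device_risk_apps": ["app_1", 120, "app_2", 45, "app_n"],
    "fraud_label": 1,  # 1 = black sample (fraud), 0 = white sample
}
```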
In some embodiments, the method further comprises: acquiring global risk APPs, and acquiring related and/or similar other APPs by utilizing the attribute information of each risk APP so as to expand the global risk APPs; the attribute information includes one or more of: developer information, name information, APP introduction information.
It can be understood that new risk APPs keep appearing, and a list of known risk APPs is difficult to enumerate comprehensively. Therefore, unknown risk APPs can be inferred from the existing known risk APPs using association algorithms such as clustering, so that the global risk APP list is expanded in real time. Risk APPs may be associated through similar attribute information such as developer information, name information and APP introduction information, and the list can be extended accordingly.
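As one hedged illustration of this idea (not the specific association algorithm of this application), candidate APPs can be compared with the known risk APPs by text similarity over their attribute information; the attribute fields, the TF-IDF representation and the 0.5 threshold below are illustrative assumptions.

```python
# Sketch of expanding the risk-APP list by attribute similarity (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def expand_risk_apps(known_risk, candidates, threshold=0.5):
    """known_risk / candidates: lists of dicts with 'name', 'developer' and 'intro' fields."""
    to_text = lambda a: " ".join((a["name"], a["developer"], a["intro"]))
    corpus = [to_text(a) for a in known_risk + candidates]
    tfidf = TfidfVectorizer().fit_transform(corpus)
    # Similarity of every candidate against every known risk APP.
    sims = cosine_similarity(tfidf[len(known_risk):], tfidf[:len(known_risk)])
    return [c for c, row in zip(candidates, sims) if row.max() >= threshold]
```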
In some embodiments, obtaining the training data set further comprises: collecting user static characteristics, where the user static characteristics include the user's age and gender; collecting user transaction behavior information through embedded tracking points (instrumentation), where the user transaction behavior data includes: transaction location IP and transaction counterparty information; and periodically collecting APP usage information from the user equipment, determining the risk APPs used by the user equipment according to the global risk APP list, and obtaining the equipment risk APP characteristics.
For example, information is collected from a device at a first point in time and app_1 is found; at a second point in time, app_1 and app_2 are found, and the APP usage time can be estimated from the collection times.
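A minimal sketch of this snapshot-based estimation, under the assumption that each collection simply records a timestamp and the set of observed APPs, could look as follows:

```python
# Hypothetical snapshot diffing: an APP first seen in the later snapshot must have begun
# being used between the two collection times; identifiers and timestamps are illustrative.
def newly_observed(snapshot_t1, snapshot_t2):
    """Each snapshot is (collection_time, set_of_apps); returns apps first seen in the
    second snapshot together with their estimated usage window."""
    t1, apps1 = snapshot_t1
    t2, apps2 = snapshot_t2
    return {app: (t1, t2) for app in apps2 - apps1}

print(newly_observed((1000, {"app_1"}), (2000, {"app_1", "app_2"})))  # {'app_2': (1000, 2000)}
```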
In some embodiments, the multi-dimensional features further comprise: text features, the text features including transaction remark (message) information. It can be understood that some fraudulent transactions carry distinctive transaction remark information, so the risk can be identified from the remark information.
204. Inputting the training sample set into an anti-fraud risk assessment model to be trained for iterative training;
in each iteration, the anti-fraud risk assessment model embeds the input multi-dimensional features to obtain an input vector, feeds the input vector into a feature learning network built on a self-attention mechanism to obtain a weighted and fused coding vector, feeds the coding vector into a deep network to obtain a risk prediction result, and updates the parameters of the risk assessment model using a loss function constructed from the risk prediction result and the fraud label.
Referring to fig. 3, which shows a training architecture diagram of the anti-fraud risk assessment model, the anti-fraud risk assessment model 300 includes: an embedding layer 301 for converting the multi-dimensional features of an input training sample into vector form, i.e., an input vector; a feature extraction network 302 for extracting effective features from the input vector sequence, which is built on a self-attention mechanism and specifically comprises a self-attention layer, a residual and normalization layer, a feed-forward network layer, and a summation and normalization layer, so that a weighted and fused coding vector can be obtained; and a deep network 303 for obtaining a risk prediction result based on the coding vector. The deep network 303 further receives the fraud label of the sample, so that the parameters of the risk assessment model can be adjusted through back propagation based on the error between the risk prediction result and the fraud label.
In the self-attention mechanism, each input vector is multiplied by three different weight matrices to obtain three vectors (Q, K, V), namely a query vector Q, a key vector K and a value vector V. A similarity score = QK^T is computed as the matching weights; the scores are scaled for gradient stability, activated by softmax, and then multiplied with V to obtain the weighted input vectors after the attention structure. Finally, a residual network structure is attached to prevent degradation in deep learning.
In the present invention, features such as the APPs installed on the user equipment and static attributes are vectorized, and the vectors are then combined through the attention mechanism to obtain a weighted sum, so that a risk prediction result at the user dimension can be obtained.
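A minimal numpy sketch of the scaled dot-product self-attention computation described above follows; the dimensions and random weight matrices are illustrative assumptions, not values prescribed by this application.

```python
# Scaled dot-product self-attention over a stack of feature embeddings (illustrative sketch).
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (n_features, d) stacked input vectors; returns the weighted, fused outputs."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # similarity QK^T, scaled for stability
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax activation
    return weights @ V                                 # weighted sum of value vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                           # 5 feature embeddings of dimension 16
fused = self_attention(X, *(rng.normal(size=(16, 16)) for _ in range(3)))
```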
In some embodiments, a Transformer encoder is employed as the feature learning network, the Transformer encoder comprising a self-attention layer, a residual and normalization layer, a feed-forward network layer, and a summation and normalization layer.
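A hedged PyTorch sketch of such an encoder block (self-attention, residual connection with normalization, feed-forward layer, then summation and normalization) is shown below; the model dimension, number of heads and hidden size are illustrative assumptions.

```python
# Illustrative Transformer-encoder block; hyperparameters are assumptions, not claimed values.
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    def __init__(self, d_model=64, n_heads=4, d_ff=256):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):                      # x: (batch, n_features, d_model)
        a, _ = self.attn(x, x, x)              # self-attention layer
        x = self.norm1(x + a)                  # residual connection + normalization
        return self.norm2(x + self.ff(x))      # feed-forward, then summation + normalization

coded = EncoderBlock()(torch.randn(2, 10, 64))  # weighted and fused coding vectors
```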
In some embodiments, the method further comprises: obtaining usage time sequence information of the equipment risk APPs, and obtaining, based on the usage time sequence, the usage correlation between each risk APP used by the user equipment and the current funds APP; performing time sequence coding on the usage time sequence information using the position coding mechanism of the Transformer encoder to obtain a time sequence vector, and combining the time sequence vector with the usage correlation corresponding to each risk APP to obtain a time sequence strength vector; and combining the time sequence strength vector with the input vector corresponding to the equipment risk APP characteristics, and inputting the combined vector into the self-attention layer. A weighted sum is then obtained through attention-based combination, so that a risk prediction result at the device APP dimension can be further obtained.
For example, the usage time sequence information of the device risk APPs is: (app_1, t_1, app_2, t_2, app_3, ..., t_{n-1}, app_n). If a risk APP was used a short time before the transaction APP, the correlation between the two is high; if another risk APP was used a long time before the transaction APP, the correlation between the two is low. For example, the correlation between the risk APP app_{n-1} and the current transaction APP app_n is set by the following formula:
[formula image not reproduced; per the description, the correlation decreases as the usage interval t_{n-1} before the transaction APP increases]
Meanwhile, considering the user's usage habits, the relative time is important in addition to the absolute time relationship, so the usage time sequence information can be time-sequence coded by referring to the position coding mechanism of a Transformer encoder.
For example, the following timing encoding rules may be employed:
TE(t, 2i) = sin(t / 10000^(2i / d_model))

TE(t, 2i+1) = cos(t / 10000^(2i / d_model))

where TE(t, 2i) is the 2i-th dimension of the time sequence coding vector for time t, TE(t, 2i+1) is the (2i+1)-th dimension of the time sequence coding vector for time t, and d_model is the dimension of the time sequence coding vector.
According to the above formulas, the time sequence vector at time t + t1 can be obtained as a linear transformation of the vector at time t, so the model can capture changes between relative time sequences.
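A small numpy sketch of this time sequence coding, following the standard Transformer position-encoding convention (the base of 10000 and an even d_model are assumed), is given below:

```python
# Sinusoidal time sequence coding TE(t, .) for a usage time t.
import numpy as np

def timing_encoding(t, d_model=64):
    i = np.arange(d_model // 2)
    angle = t / 10000 ** (2 * i / d_model)
    te = np.empty(d_model)
    te[0::2] = np.sin(angle)   # even dimensions: TE(t, 2i)
    te[1::2] = np.cos(angle)   # odd dimensions: TE(t, 2i+1)
    return te

te_vector = timing_encoding(t=37.0)   # time sequence vector for usage time t = 37
```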
Referring to fig. 3, the embedding layer 301 includes an input embedding layer and a time sequence coding layer. In the input embedding layer, each feature of the training sample can be embedded to obtain an embedding tensor of that feature, where the tensor may be expressed as a one-dimensional vector, a two-dimensional matrix, three- or higher-dimensional data, and so on. In the time sequence coding layer, the usage time sequence position of each risk APP on the user equipment can be obtained, and a time sequence tensor is then generated for the time sequence of each risk APP. After the embedding tensor of each feature and the time sequence tensors of some features (the risk APPs) in the training sample are obtained, the time sequence tensors and the embedding tensors of the features can be combined and input into the feature extraction network.
In some embodiments, the deep network employs a machine learning model such as a random forest or XGBoost.
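As a hedged illustration (the pooling of the coding vectors and all hyperparameters below are assumptions), such a downstream predictor could be fitted on the fused coding vectors like this:

```python
# Fitting an XGBoost classifier on (illustrative, random) coding vectors and fraud labels.
import numpy as np
import xgboost as xgb

codings = np.random.rand(1000, 64)        # pooled coding vectors from the feature learning network
labels = np.random.randint(0, 2, 1000)    # fraud labels: 1 = fraud, 0 = normal
model = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(codings, labels)
risk_scores = model.predict_proba(codings)[:, 1]   # predicted fraud risk per transaction
```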
In some embodiments, a transaction amount weighting factor is provided in the loss function. It can be understood that frauds involving larger amounts cause more serious harm, so the weighting factor in the loss function can be set based on the transaction amount of each training sample, which makes the whole model better at identifying large-amount fraudulent transactions.
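One hedged way to realize such a weighting factor (the logarithmic weighting scheme below is an illustrative assumption) is to pass per-sample weights derived from the transaction amount into a binary cross-entropy loss:

```python
# Transaction-amount-weighted binary cross-entropy; the 1 + log(1 + amount) weighting is assumed.
import torch
import torch.nn.functional as F

def amount_weighted_loss(pred_logits, fraud_labels, amounts):
    weights = 1.0 + torch.log1p(amounts)   # larger transactions receive larger loss weights
    return F.binary_cross_entropy_with_logits(pred_logits, fraud_labels, weight=weights)

loss = amount_weighted_loss(torch.randn(8), torch.randint(0, 2, (8,)).float(), torch.rand(8) * 1e4)
```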
Based on the same technical concept, the embodiment of the invention also provides an anti-fraud risk assessment method. Fig. 4 is a flowchart illustrating an anti-fraud risk assessment method according to an embodiment of the present invention.
As shown in fig. 4, the method 400 includes:
402. acquiring real-time transaction information, wherein the real-time transaction information comprises: one or more of a user static characteristic, a user behavior characteristic, and an equipment risk APP characteristic;
404. inputting real-time transaction information into an anti-fraud risk assessment model, performing embedding processing on the input real-time transaction information to obtain an input vector, inputting the input vector into a feature learning network constructed based on an attention mechanism to obtain a coding vector, and inputting the coding vector into a depth network to obtain a risk prediction result; wherein, the anti-fraud risk assessment model is obtained by training by using the method of the embodiment.
Referring to fig. 5, which shows a schematic diagram of the use of the anti-fraud risk assessment model, transaction information obtained in real time is input into the trained anti-fraud risk assessment model 300. The transaction information includes one or more of user static characteristics, user behavior characteristics and equipment risk APP characteristics; the embedding layer 301 embeds the transaction information to obtain vectorized data, i.e., an input vector; the feature extraction network 302 extracts effective features from the input vector, i.e., a coding vector; and the trained deep network predicts on the coding vector to obtain a risk prediction result.
In some embodiments, further comprising: and if the risk prediction result meets the preset condition, performing corresponding interference processing and/or alarm processing based on the real-time transaction information.
In some embodiments, the method further comprises: updating the training sample set based on the risk prediction result and the real-time transaction information; constructing a user transaction relationship graph based on the training sample set updated in real time, wherein the user transaction relationship graph takes users as nodes and transaction relationships among users as edges; mining group nodes and/or group transactions from the user transaction relationship graph through a clustering algorithm and/or a graph attention algorithm; identifying hidden fraud samples from the training sample set based on the group nodes and/or group transactions; and updating and training the risk assessment prediction model based on the fed-back hidden fraud samples.
Specifically, the training sample set may not be fully and accurately labelled. Based on this, gang-style fraud groups can be further mined from the known black samples through clustering and graph algorithms, i.e., additional hidden black samples can be discovered.
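As a hedged sketch of this mining step, a simple connected-component rule stands in for the clustering or graph attention algorithms named above; all user identifiers are illustrative.

```python
# Mining candidate hidden fraud users: users in the same transaction component as a known
# black sample but not yet labelled are flagged for review. Illustrative sketch only.
import networkx as nx

def mine_hidden_fraud(transactions, known_black_users):
    """transactions: iterable of (payer, payee) pairs; returns unlabelled users that share a
    connected component with a known fraudulent user."""
    g = nx.Graph()
    g.add_edges_from(transactions)
    hidden = set()
    for component in nx.connected_components(g):
        if component & known_black_users:
            hidden |= component - known_black_users
    return hidden

print(mine_hidden_fraud([("u1", "u2"), ("u2", "u3"), ("u4", "u5")], {"u1"}))  # {'u2', 'u3'}
```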
In the description of the present specification, reference to the description of the terms "some possible implementations," "some embodiments," "examples," "specific examples," or "some examples," or the like, means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
With regard to the method flow diagrams of embodiments of the present application, certain operations are described as different steps performed in a certain order. Such flow diagrams are illustrative and not restrictive. Certain steps described herein may be grouped together and performed in a single operation, may be divided into multiple sub-steps, and may be performed in an order different than that shown herein. The various steps shown in the flowcharts may be implemented in any way by any circuit structure and/or tangible mechanism (e.g., by software running on a computer device, hardware (e.g., logical functions implemented by a processor or chip), etc., and/or any combination thereof).
Based on the same technical concept, the embodiment of the invention also provides a training device of the anti-fraud risk assessment model, which is used for executing the training method of the anti-fraud risk assessment model provided by any one of the embodiments. Fig. 6 is a schematic structural diagram of a training apparatus of an anti-fraud risk assessment model according to an embodiment of the present invention.
As shown in fig. 6, the apparatus 600 includes:
an obtaining module 601, configured to obtain a training sample set, where the training sample includes a multi-dimensional feature and a fraud tag thereof, and the multi-dimensional feature includes: user static characteristics, user behavior characteristics and equipment risk APP characteristics;
a training module 602, configured to input a training sample set into an anti-fraud risk assessment model to be trained for iterative training;
in each iteration, the anti-fraud risk assessment model performs embedding processing on input multi-dimensional features to obtain an input vector, the input vector is input into a feature learning network constructed based on a self-attention mechanism to obtain a weighted and fused coding vector, the coding vector is input into a depth network to obtain a risk prediction result, and parameters of the risk assessment model are updated by using the risk prediction result and a loss function constructed by a fraud tag.
Based on the same technical concept, the embodiment of the invention also provides an anti-fraud risk assessment device, which is used for executing the anti-fraud risk assessment method provided by any one of the above embodiments. Fig. 7 is a schematic structural diagram of an anti-fraud risk assessment apparatus according to an embodiment of the present invention. As shown in fig. 7, the apparatus 700 includes:
An obtaining module 701, configured to obtain real-time transaction information, where the real-time transaction information includes: user static characteristics, user behavior characteristics and equipment risk APP characteristics;
the evaluation module 702 is configured to input the real-time transaction information into an anti-fraud risk evaluation model, where the anti-fraud risk evaluation model performs embedding processing on the input real-time transaction information to obtain an input vector, inputs the input vector into a feature learning network constructed based on an attention mechanism to obtain a coding vector, and inputs the coding vector into a depth network to obtain a risk prediction result; wherein, the anti-fraud risk assessment model is obtained by training by the training method.
It should be noted that the apparatus in the embodiment of the present application may implement each process of the foregoing method embodiment, and achieve the same effect and function, which are not described herein again.
Fig. 8 is a training apparatus of an anti-fraud risk assessment model according to an embodiment of the present application, configured to perform the training method of the anti-fraud risk assessment model shown in fig. 2, where the apparatus includes: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the above embodiments.
Fig. 9 is an anti-fraud risk assessment apparatus according to an embodiment of the present application, configured to execute the anti-fraud risk assessment method shown in fig. 4, where the apparatus includes: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the above embodiments.
According to some embodiments of the application, there is provided a non-transitory computer storage medium of a training method of an anti-fraud risk assessment model and/or an anti-fraud risk assessment method, having stored thereon computer-executable instructions arranged to, when executed by a processor, perform: the method of the above embodiment.
The embodiments in the present application are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the apparatus, device, and computer-readable storage medium embodiments, the description is simplified because they are substantially similar to the method embodiments, and reference may be made to some descriptions of the method embodiments for their relevance.
The apparatus, the device, and the computer-readable storage medium provided in the embodiment of the present application correspond to the method one to one, and therefore, the apparatus, the device, and the computer-readable storage medium also have advantageous technical effects similar to those of the corresponding method.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both non-transitory and non-transitory, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. Moreover, while the operations of the method of the invention are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
While the spirit and principles of the invention have been described with reference to several particular embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, nor is the division of aspects, which is for convenience only as the features in such aspects may not be combined to benefit. The invention is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (17)

1. A training method of an anti-fraud risk assessment model is characterized by comprising the following steps:
obtaining a training sample set, wherein the training sample set comprises multi-dimensional features and fraud labels thereof, and the multi-dimensional features comprise: user static characteristics, user behavior characteristics and equipment risk APP characteristics;
inputting the training sample set into an anti-fraud risk assessment model to be trained for iterative training;
in each iteration, the anti-fraud risk assessment model performs embedding processing on the input multi-dimensional features to obtain an input vector, inputs the input vector into a feature learning network constructed based on a self-attention mechanism to obtain a weighted fusion encoding vector, inputs the encoding vector into a depth network to obtain a risk prediction result, and updates parameters of the risk assessment model by using the risk prediction result and a loss function constructed by the fraud tag.
2. The method of claim 1, wherein a Transformer encoder is employed as the feature learning network, the Transformer encoder comprising a self-attention layer, a residual and normalization layer, a feed-forward network layer, and a sum and normalization layer.
3. The method of claim 2, further comprising:
obtaining the use time sequence information of the equipment risk APP, and obtaining the use correlation of each risk APP used by the user equipment and the current fund APP based on the use time sequence;
performing time sequence coding on the use time sequence information by using a position coding mechanism of the Transformer coder to obtain a time sequence vector, and combining the time sequence vector with the use correlation corresponding to each risk APP to obtain a time sequence strength vector;
and combining the time sequence intensity vector and the input vector corresponding to the equipment risk APP feature, and inputting the combined time sequence intensity vector and the input vector into the self-attention layer.
4. The method of claim 3, wherein the usage timing information is temporally encoded using a position coding mechanism of the transform encoder, further comprising:
wherein, the time sequence coding rule is defined by the following formula:
TE(t, 2i) = sin(t / 10000^(2i / d_model))

TE(t, 2i+1) = cos(t / 10000^(2i / d_model))

where TE(t, 2i) is the 2i-th dimension of the time sequence coding vector for time t, TE(t, 2i+1) is the (2i+1)-th dimension of the time sequence coding vector for time t, and d_model is the dimension of the time sequence coding vector.
5. The method of claim 1, further comprising:
acquiring global risk APPs, and acquiring related and/or similar other APPs by utilizing the attribute information of each risk APP so as to expand the global risk APPs;
the attribute information includes one or more of: developer information, name information, APP introduction information.
6. The method of claim 1, wherein obtaining a training data set further comprises:
collecting the user transaction behavior information through embedded tracking points, wherein the user transaction behavior data comprises: transaction location IP and transaction counterparty information;
periodically collecting APP use information of user equipment, determining risk APP used by the user equipment according to the global risk APP, and obtaining the equipment risk APP characteristics.
7. The method of claim 1, wherein the multi-dimensional features further comprise: a text feature, the text feature comprising transaction message information.
8. The method of claim 1, wherein the deep network employs random forest or XGB in machine learning.
9. The method of claim 1, wherein a transaction amount weighting factor is provided in the loss function.
10. An anti-fraud risk assessment method, comprising:
acquiring real-time transaction information, wherein the real-time transaction information comprises: one or more of a user static characteristic, a user behavior characteristic, and an equipment risk APP characteristic;
inputting the real-time transaction information into an anti-fraud risk assessment model, wherein the anti-fraud risk assessment model performs embedding processing on the input real-time transaction information to obtain an input vector, inputs the input vector into a feature learning network constructed based on an attention mechanism to obtain a coding vector, and inputs the coding vector into a depth network to obtain a risk prediction result;
wherein the anti-fraud risk assessment model is trained using the method of any one of claims 1-9.
11. The method of claim 10, further comprising:
and if the risk prediction result meets the preset condition, performing corresponding interference processing and/or alarm processing based on the real-time transaction information.
12. The method of claim 10, further comprising:
updating a training sample set based on the risk prediction result and the real-time transaction information;
constructing a user relation graph based on the training sample set updated in real time, wherein the user transaction relation graph takes users as nodes and takes transaction relations among the users as edges;
mining a group node and/or a group transaction from the user transaction relationship graph through a clustering algorithm and/or a graph attention algorithm;
identifying hidden fraud samples from the set of training samples based on the group node and/or the group transaction;
and performing update training on the risk assessment prediction model based on the fed-back hidden fraud samples.
13. A training device for an anti-fraud risk assessment model, comprising:
an obtaining module, configured to obtain a training sample set, where the training sample includes a multi-dimensional feature and a fraud tag thereof, and the multi-dimensional feature includes: user static characteristics, user behavior characteristics and equipment risk APP characteristics;
the training module is used for inputting the training sample set into an anti-fraud risk assessment model to be trained for iterative training;
in each iteration, the anti-fraud risk assessment model performs embedding processing on the input multi-dimensional features to obtain an input vector, inputs the input vector into a feature learning network constructed based on a self-attention mechanism to obtain a weighted fusion encoding vector, inputs the encoding vector into a depth network to obtain a risk prediction result, and updates parameters of the risk assessment model by using the risk prediction result and a loss function constructed by the fraud tag.
14. An anti-fraud risk assessment apparatus, comprising:
the acquisition module is used for acquiring real-time transaction information, and the real-time transaction information comprises: user static characteristics, user behavior characteristics and equipment risk APP characteristics;
the evaluation module is used for inputting the real-time transaction information into an anti-fraud risk evaluation model, the anti-fraud risk evaluation model executes embedding processing on the input real-time transaction information to obtain an input vector, the input vector is input into a feature learning network constructed based on an attention mechanism to obtain a coding vector, and the coding vector is input into a depth network to obtain a risk prediction result; wherein the anti-fraud risk assessment model is trained using the method of any one of claims 1-9.
15. A training device for an anti-fraud risk assessment model, comprising:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform: the method of any one of claims 1-9.
16. An anti-fraud risk assessment apparatus, comprising:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method of any one of claims 10-12.
17. A computer-readable storage medium storing a program which, when executed by a multi-core processor, causes the multi-core processor to perform the method of any of claims 1-9, or the method of any of claims 10-12.
CN202111640205.2A 2021-12-29 2021-12-29 Anti-fraud risk assessment method, anti-fraud risk training method, anti-fraud risk assessment device, anti-fraud risk training device and readable storage medium Pending CN114298417A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202111640205.2A CN114298417A (en) 2021-12-29 2021-12-29 Anti-fraud risk assessment method, anti-fraud risk training method, anti-fraud risk assessment device, anti-fraud risk training device and readable storage medium
PCT/CN2022/117419 WO2023124204A1 (en) 2021-12-29 2022-09-07 Anti-fraud risk assessment method and apparatus, training method and apparatus, and readable storage medium
TW111137564A TW202326537A (en) 2021-12-29 2022-10-03 Anti-fraud risk assessment method and device, training method and device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111640205.2A CN114298417A (en) 2021-12-29 2021-12-29 Anti-fraud risk assessment method, anti-fraud risk training method, anti-fraud risk assessment device, anti-fraud risk training device and readable storage medium

Publications (1)

Publication Number Publication Date
CN114298417A (en) 2022-04-08

Family

ID=80972114

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111640205.2A Pending CN114298417A (en) 2021-12-29 2021-12-29 Anti-fraud risk assessment method, anti-fraud risk training method, anti-fraud risk assessment device, anti-fraud risk training device and readable storage medium

Country Status (3)

Country Link
CN (1) CN114298417A (en)
TW (1) TW202326537A (en)
WO (1) WO2023124204A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114757581A (en) * 2022-05-18 2022-07-15 华南理工大学 Financial transaction risk assessment method and device, electronic equipment and computer readable medium
CN114936723A (en) * 2022-07-21 2022-08-23 中国电子科技集团公司第三十研究所 Social network user attribute prediction method and system based on data enhancement
CN116258579A (en) * 2023-04-28 2023-06-13 成都新希望金融信息有限公司 Training method of user credit scoring model and user credit scoring method
WO2023124204A1 (en) * 2021-12-29 2023-07-06 ***股份有限公司 Anti-fraud risk assessment method and apparatus, training method and apparatus, and readable storage medium
CN116562901A (en) * 2023-06-25 2023-08-08 福建润楼数字科技有限公司 Automatic generation method of anti-fraud rule based on machine learning
CN117435918A (en) * 2023-12-20 2024-01-23 杭州市特种设备检测研究院(杭州市特种设备应急处置中心) Elevator risk early warning method based on spatial attention network and feature division
CN118070141A (en) * 2024-04-25 2024-05-24 成都乐超人科技有限公司 Artificial intelligence-based anti-fraud transaction identification method and system

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117113148B (en) * 2023-08-30 2024-05-17 上海智租物联科技有限公司 Risk identification method, device and storage medium based on time sequence diagram neural network
CN117575596A (en) * 2023-09-06 2024-02-20 临沂万鼎网络科技有限公司 Fraud analysis method based on artificial intelligence and digital financial big data system
CN117151851B (en) * 2023-09-12 2024-04-30 浪潮数字(山东)建设运营有限公司 Bank risk prediction method and device based on genetic algorithm and electronic equipment
CN116934098A (en) * 2023-09-14 2023-10-24 山东省标准化研究院(Wto/Tbt山东咨询工作站) Risk quantitative evaluation method for technical trade measures
CN117556224B (en) * 2024-01-12 2024-03-22 国网四川省电力公司电力科学研究院 Grid facility anti-seismic risk assessment system, method and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180253737A1 * 2017-03-06 2018-09-06 International Business Machines Corporation Dynamically Evaluating Fraud Risk
CN109978538B (en) * 2017-12-28 2023-10-10 创新先进技术有限公司 Method and device for determining fraudulent user, training model and identifying fraudulent risk
US20210374756A1 (en) * 2020-05-29 2021-12-02 Mastercard International Incorporated Methods and systems for generating rules for unseen fraud and credit risks using artificial intelligence
CN112348520A (en) * 2020-10-21 2021-02-09 上海淇玥信息技术有限公司 XGboost-based risk assessment method and device and electronic equipment
CN112365338B (en) * 2020-11-11 2024-03-22 天翼安全科技有限公司 Data fraud detection method, device, terminal and medium based on artificial intelligence
CN114298417A (en) * 2021-12-29 2022-04-08 ***股份有限公司 Anti-fraud risk assessment method, anti-fraud risk training method, anti-fraud risk assessment device, anti-fraud risk training device and readable storage medium

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023124204A1 (en) * 2021-12-29 2023-07-06 ***股份有限公司 Anti-fraud risk assessment method and apparatus, training method and apparatus, and readable storage medium
CN114757581A (en) * 2022-05-18 2022-07-15 华南理工大学 Financial transaction risk assessment method and device, electronic equipment and computer readable medium
CN114936723A (en) * 2022-07-21 2022-08-23 中国电子科技集团公司第三十研究所 Social network user attribute prediction method and system based on data enhancement
CN114936723B (en) * 2022-07-21 2023-04-14 中国电子科技集团公司第三十研究所 Social network user attribute prediction method and system based on data enhancement
CN116258579A (en) * 2023-04-28 2023-06-13 成都新希望金融信息有限公司 Training method of user credit scoring model and user credit scoring method
CN116562901A (en) * 2023-06-25 2023-08-08 福建润楼数字科技有限公司 Automatic generation method of anti-fraud rule based on machine learning
CN116562901B (en) * 2023-06-25 2024-04-02 福建润楼数字科技有限公司 Automatic generation method of anti-fraud rule based on machine learning
CN117435918A (en) * 2023-12-20 2024-01-23 杭州市特种设备检测研究院(杭州市特种设备应急处置中心) Elevator risk early warning method based on spatial attention network and feature division
CN117435918B (en) * 2023-12-20 2024-03-15 杭州市特种设备检测研究院(杭州市特种设备应急处置中心) Elevator risk early warning method based on spatial attention network and feature division
CN118070141A (en) * 2024-04-25 2024-05-24 成都乐超人科技有限公司 Artificial intelligence-based anti-fraud transaction identification method and system

Also Published As

Publication number Publication date
WO2023124204A1 (en) 2023-07-06
TW202326537A (en) 2023-07-01

Similar Documents

Publication Publication Date Title
CN114298417A (en) Anti-fraud risk assessment method, anti-fraud risk training method, anti-fraud risk assessment device, anti-fraud risk training device and readable storage medium
CN110489520B (en) Knowledge graph-based event processing method, device, equipment and storage medium
US20190392258A1 (en) Method and apparatus for generating information
Tavakoli et al. An autoencoder-based deep learning approach for clustering time series data
CN109978060B (en) Training method and device of natural language element extraction model
CN110110233B (en) Information processing method, device, medium and computing equipment
CN110264277B (en) Data processing method and device executed by computing equipment, medium and computing equipment
Gao et al. A novel gapg approach to automatic property generation for formal verification: The gan perspective
CN111782824A (en) Information query method, device, system and medium
CN113190702A (en) Method and apparatus for generating information
Wang et al. Concept drift-aware temporal cloud service APIs recommendation for building composite cloud systems
CN113934851A (en) Data enhancement method and device for text classification and electronic equipment
CN112070559A (en) State acquisition method and device, electronic equipment and storage medium
Li et al. Research on the application of multimedia entropy method in data mining of retail business
CN116503031B (en) Personnel similarity calculation method, device, equipment and medium based on resume analysis
CN112330442A (en) Modeling method and device based on ultra-long behavior sequence, terminal and storage medium
CN115525831A (en) Recommendation model training method, recommendation device and computer readable storage medium
CN110990164B (en) Account detection method and device and account detection model training method and device
CN113627514A (en) Data processing method and device of knowledge graph, electronic equipment and storage medium
CN111401641A (en) Service data processing method and device and electronic equipment
CN117909505B (en) Event argument extraction method and related equipment
Jia et al. Transductive classification by robust linear neighborhood propagation
Cai et al. Community vitality in dynamic temporal networks
US11941065B1 (en) Single identifier platform for storing entity data
CN113420214B (en) Electronic transaction object recommendation method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40067108
Country of ref document: HK