CN115600106A - Risk recognition model training method, device, equipment and storage medium - Google Patents


Info

Publication number
CN115600106A
CN115600106A
Authority
CN
China
Prior art keywords
event
parameters
risk
determining
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211276933.4A
Other languages
Chinese (zh)
Inventor
王宁涛
杨阳
朱亮
陈琢
傅幸
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202211276933.4A
Publication of CN115600106A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00Payment architectures, schemes or protocols
    • G06Q20/38Payment protocols; Details thereof
    • G06Q20/40Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q20/401Transaction verification
    • G06Q20/4016Transaction verification involving fraud or risk level assessment in transaction processing

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Accounting & Taxation (AREA)
  • Computer Security & Cryptography (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

This specification discloses a training method, apparatus, device, and storage medium for a risk identification model. A training sample is input into the risk identification model to be trained; a first feature of the training sample is obtained through a feature subnet; a second feature is obtained by linearly transforming the first feature according to the parameters of a fully connected layer; and a classification result of the training sample is obtained through a classification subnet. The risk identification model is trained with two objectives: minimizing the difference between the labels of the training samples and the classification results, and maximizing the similarity among the parameters of the fully connected layer. By adding this similarity-maximization term to the training objective, the trained model can directly output the risk type of an event and, through the features output by the feature subnet, can also support database-based feature retrieval. This broadens the application scenarios of risk identification and improves its accuracy and privacy protection.

Description

Risk recognition model training method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for training a risk recognition model.
Background
As people pay increasing attention to private data and internet technology develops rapidly, online services have grown quickly and attracted wide attention. However, a risk event may occur while a user executes an online service and affect that service. A service provider may therefore perform risk identification on the online services executed by users, so that risk events are identified in time and a corresponding risk handling scheme can be provided.
Based on this, the present specification provides a risk identification method.
Disclosure of Invention
The present specification provides a method, an apparatus, a device and a storage medium for training a risk identification model, so as to partially solve the above problems in the prior art.
The technical scheme adopted by the specification is as follows:
This specification provides a training method for a risk identification model, where the risk identification model includes a feature subnet, a fully connected layer, and a classification subnet, the method including:
acquiring a business event and the risk type of the business event, and determining a training sample and the label of the training sample;
inputting the training sample into the risk identification model to be trained, and obtaining a first feature of the training sample through the feature subnet;
weighting the first feature through the parameters of the fully connected layer to obtain a second feature of the training sample;
inputting the second feature into the classification subnet to obtain a classification result of the training sample; and
adjusting the parameters of the risk identification model with minimization of the difference between the labels of the training samples and the classification results, and maximization of the similarity among the parameters of the fully connected layer, as the training objective;
wherein the risk identification model is used to determine the risk type of a business event to be identified.
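The method steps above can be sketched end to end as a forward pass through the three components. Everything below (the layer sizes, the tanh feature extractor, the random weights, and all names) is an illustrative assumption for this sketch, not the patent's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_subnet(x, w_feat):
    # Extract a first feature from the raw event sample
    # (a single tanh layer stands in for the feature subnet).
    return np.tanh(x @ w_feat)

def fully_connected(h, w_fc):
    # Weight the first feature through the fully connected layer
    # to obtain the second feature.
    return h @ w_fc

def classification_subnet(z):
    # Softmax over risk classes gives the classification result.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical dimensions: 8 raw event fields, 4-dim first feature, 3 risk classes.
w_feat = rng.normal(size=(8, 4))
w_fc = rng.normal(size=(4, 3))
x = rng.normal(size=(2, 8))   # a mini-batch of two training samples

probs = classification_subnet(fully_connected(feature_subnet(x, w_feat), w_fc))
print(probs.shape)   # (2, 3): one probability row per training sample
```

During training, the classification result `probs` would be compared against the sample labels while a separate term rewards similarity among the entries of `w_fc`.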
This specification provides a training apparatus for a risk identification model, where the risk identification model includes a feature subnet, a fully connected layer, and a classification subnet, the apparatus including:
a first acquisition module, configured to acquire a business event and the risk type of the business event, and determine a training sample and the label of the training sample;
a first feature determining module, configured to input the training sample into the risk identification model to be trained and obtain a first feature of the training sample through the feature subnet;
a second feature determining module, configured to weight the first feature through the parameters of the fully connected layer to obtain a second feature of the training sample;
a classification module, configured to input the second feature into the classification subnet to obtain a classification result of the training sample; and
an adjusting module, configured to adjust the parameters of the risk identification model with minimization of the difference between the labels of the training samples and the classification results, and maximization of the similarity among the parameters of the fully connected layer, as the training objective; wherein the risk identification model is used to determine the risk type of a business event to be identified.
The present specification provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above-described method of training a risk recognition model.
The present specification provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method for training the risk identification model when executing the program.
The technical solutions adopted in this specification can achieve the following beneficial effects:
In the training method provided in this specification, a training sample is input into the risk identification model to be trained; a first feature of the training sample is obtained through the feature subnet; a second feature is obtained by weighting the first feature according to the parameters of the fully connected layer; and a classification result of the training sample is obtained through the classification subnet. The model is then trained with minimization of the difference between the labels of the training samples and the classification results, and maximization of the similarity among the parameters of the fully connected layer, as the training objective. By adding the similarity-maximization term to the training objective, the trained risk identification model can not only directly output the risk type of an event but also, through the event features output by the feature subnet, support a scheme that determines the risk type by database-based feature retrieval, broadening the application scenarios of risk identification and improving its accuracy.
Drawings
The accompanying drawings described here are included to provide a further understanding of the specification and constitute a part of it. The illustrative embodiments of the specification and their descriptions are used to explain the specification and do not unduly limit it. In the drawings:
FIG. 1 is a schematic flow chart of a method for training a risk identification model in the present specification;
FIG. 2 is a schematic diagram of a risk identification model according to the present disclosure;
FIG. 3 is a schematic flow chart of a risk identification method of the present disclosure;
FIG. 4 is a schematic diagram of a risk identification model training apparatus according to the present disclosure;
fig. 5 is a schematic diagram of an electronic device corresponding to fig. 1 provided in the present specification.
Detailed Description
To make the objects, technical solutions, and advantages of this specification clearer, the technical solutions of this specification will be described clearly and completely below with reference to specific embodiments and the accompanying drawings. The described embodiments are only some, not all, of the embodiments of this specification. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in this specification without creative effort fall within its scope of protection.
In addition, it should be noted that all actions of acquiring signals, information, or data in this specification are performed in compliance with the applicable data protection laws and policies of the relevant jurisdiction and with the authorization of the owner of the corresponding device.
At present, with the rapid development of internet technology, online transaction business is booming. However, a user may encounter risks, such as fraud, while transacting online. To identify risky transaction events in time, a risk identification model can be trained to recognize whether a transaction event carries risk and, if so, which type of risk.
Based on this, the present specification provides a training method of a risk recognition model.
The technical solutions provided by the embodiments of the present description are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a training method of a risk identification model provided in this specification.
S100: acquiring a business event and a risk type of the business event, and determining a training sample and a label of the training sample.
Embodiments of the present disclosure provide a method for training a risk recognition model, which may be performed by an electronic device such as a server for training the risk recognition model.
The risk identification model is used to determine the risk type of a business event to be identified. As shown in fig. 2, the risk identification model to be trained includes a feature subnet, a fully connected layer, and a classification subnet: the feature subnet extracts the features of the training sample, the fully connected layer linearly transforms those features, and the classification subnet determines the classification result of the training sample from the linearly transformed features.
Specifically, the training samples used to train the risk identification model include black samples, labeled as business events with transaction risk, and white samples, labeled as business events without transaction risk. For example, for a transaction event that has occurred, if the event is determined to carry the risk of a violating transaction through reports and feedback from the participating users followed by manual analysis, the event can be labeled as a black sample. Conversely, if a user reports a transaction as risky but manual review finds no transaction risk, the review result can be fed back to the user and the event labeled as a white sample.
In addition, in practical applications, the number of transaction events labeled as black samples may be much smaller than the number labeled as white samples. Such a small black-sample size makes deep learning inefficient, because in a multi-class task the classification result output by the classification subnet tends to be skewed toward the label with more samples; in the context of the embodiments of this specification, such skew harms the accuracy of risk identification. Moreover, because white samples span many event feature dimensions such as transaction scenarios and transaction types, a risk identification model trained with existing supervised learning or supervised contrastive learning, whose objective includes minimizing the differences among white samples, suffers reduced accuracy.
For this reason, in the training process described in this specification, the composition of the loss function is adjusted so that the similarity among the parameters of the fully connected layer of the risk identification model is maximized. The feature subnet of the trained model can then output event features with Euclidean structure, which suit both risk type identification by the risk identification model itself and risk type identification based on database vector retrieval.
S102: inputting the training sample into the risk identification model to be trained, and obtaining the first feature of the training sample through the feature subnet.
In practice, the transaction features of the training sample are extracted through the feature subnet, so that the training sample can later be classified according to those features.
S104: weighting the first feature through the parameters of the fully connected layer to obtain the second feature of the training sample.
Further, the feature value of each dimension of the first feature of the training sample is linearly weighted, and a second feature that can be input into the classification subnet is obtained through this linear transformation. The input dimension of the fully connected layer's parameters is usually determined by the dimension of the first feature it takes as input, and their output dimension by the input dimension of the classification subnet.
S106: inputting the second feature into the classification subnet to obtain the classification result of the training sample.
Specifically, the risk identification model may be a binary classification model, in which case the classification result of the training sample is either that transaction risk exists or that it does not. The model may also be a multi-class model, in which case the classification result is either that no transaction risk exists or which type of transaction risk exists.
S108: adjusting the parameters of the risk identification model with minimization of the difference between the labels of the training samples and the classification results, and maximization of the similarity among the parameters of the fully connected layer, as the training objective.
Minimizing the difference between the labels of the training samples and the classification results output by the classification subnet lets the risk identification model learn to determine the risk type of a training sample from its transaction features. Maximizing the similarity among the parameters of the fully connected layer lets the feature subnet of the trained model output event features with Euclidean structure, so that the model can both directly output the risk type of an event and support risk type identification based on vector retrieval, broadening the application scenarios of risk identification and improving its accuracy and generality.
In the training method provided by this specification, a training sample is input into the risk identification model to be trained; a first feature of the training sample is obtained through the feature subnet; a second feature is obtained by weighting the first feature according to the parameters of the fully connected layer; and a classification result of the training sample is obtained through the classification subnet. The model is trained with minimization of the difference between the labels of the training samples and the classification results, and maximization of the similarity among the parameters of the fully connected layer, as the training objective. By adding the similarity-maximization term to the training objective, the trained risk identification model can directly output the risk type of an event and, through the event features output by the feature subnet, can also support database-based feature retrieval, broadening the application scenarios of risk identification and improving its accuracy.
In one or more embodiments of this specification, when the parameters of the risk identification model are adjusted with minimization of the difference between the labels of the training samples and the classification results, and maximization of the similarity among the parameters of the fully connected layer, as the training objective (step S108 in fig. 1), the similarity among the parameters of the fully connected layer may be determined through the following optional schemes:
first, a first loss is determined from a difference between the labels of the training samples and the classification result.
Specifically, the classification result of a training sample represents the risk type that the risk identification model predicts for the sample from its features. The first loss measures the difference between the label of the training sample and the classification result, i.e., the degree to which the classification result predicted during training deviates from the actual risk type. The first loss can be determined with any commonly used loss function, such as a squared loss or a cross-entropy loss, which this specification does not limit.
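As a worked example of the cross-entropy variant mentioned above, the following sketch computes a first loss for one training sample; the one-hot label and the predicted class probabilities are hypothetical values, not from the patent.

```python
import math

def first_loss(label, probs):
    # Cross-entropy between the one-hot label of the training sample and the
    # classification result (class probabilities) output by the classification subnet.
    return -sum(y * math.log(p) for y, p in zip(label, probs))

# Hypothetical 3-class example: the true risk type is class 1,
# and the model assigns it probability 0.7.
label = [0.0, 1.0, 0.0]
probs = [0.2, 0.7, 0.1]
loss = first_loss(label, probs)
print(round(loss, 4))  # 0.3567, i.e. -ln(0.7)
```

Minimizing this quantity pushes the predicted probability of the labeled risk type toward 1.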
The label of a training sample is determined from the risk type of its business event. If the risk identification model is a binary model, the labels may be risk-present and risk-absent. If the model is a multi-class model, the labels may be that no risk exists or which type of risk exists.
Second, the similarity among the parameters is determined from the variance of the parameters of the fully connected layer, yielding a second loss.
Specifically, the second loss represents the degree of similarity among the parameters of the fully connected layer, and the variance of the parameters is negatively correlated with their similarity. By maximizing the similarity among the parameters through the back-propagation process of model training, the feature subnet learns to output first features with Euclidean structure. After the risk identification model is trained, the feature subnet can output the first feature of an event, and that feature can be used as a retrieval key to find the specified event most similar to the event to be identified, thereby obtaining the risk type of the event to be identified.
Alternatively, the similarity among the parameters of the fully connected layer may be determined as follows:
First, determine the difference between every two parameters; second, traverse the parameters to collect these parameter differences; and then determine the similarity among the parameters from the variance of the parameter differences, yielding the second loss.
Specifically, since the second loss represents the similarity among the parameters of the fully connected layer, it can be obtained from the differences between the parameters, or, of course, from the dispersion of the parameters themselves. Traversing the parameters to obtain the difference between every two of them and determining the second loss from the variance of these differences in effect measures similarity by the dispersion of the pairwise differences: the smaller the variance of the parameter differences, the greater the similarity among the parameters.
Based on this, deriving the second loss from the variance of the parameters and deriving it from the variance of the parameter differences are both optional schemes provided by the embodiments of this specification; the second loss may also be determined in other ways, for example from the variance of the squares of the parameters, which this specification does not limit.
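The variance-of-pairwise-differences scheme described above can be sketched as follows. Treating the fully connected layer parameters as scalars and taking the negative variance as the similarity score (so that maximizing the second loss drives the variance toward zero) are simplifying assumptions for illustration, not the patent's prescribed formulation.

```python
from itertools import combinations

def pairwise_diff_variance(params):
    # Traverse every pair of (scalar) fully connected layer parameters
    # and compute the variance of their differences.
    diffs = [a - b for a, b in combinations(params, 2)]
    mean = sum(diffs) / len(diffs)
    return sum((d - mean) ** 2 for d in diffs) / len(diffs)

def second_loss(params):
    # Negative variance as the similarity score: maximizing it pushes the
    # pairwise differences toward a common value, i.e. the parameters
    # toward similarity (an illustrative choice).
    return -pairwise_diff_variance(params)

print(second_loss([1.0, 1.0, 1.0]) == 0.0)  # identical parameters: maximal score
print(second_loss([0.0, 1.0, 4.0]) < 0.0)   # dissimilar parameters: negative score
```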
Then, a weight for the second loss is determined, and the second loss is weighted accordingly.
Further, the weight of the second loss may be determined, that is, the proportion that maximizing the similarity among the parameters of the fully connected layer occupies in the overall training objective of the risk identification model. The weight of the second loss can also be learned during training.
Finally, the parameters of the risk identification model are adjusted with minimization of the first loss and maximization of the weighted second loss as the optimization objective.
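One plausible way to combine the two objectives into a single quantity for a minimizer, assuming (as above) that the second loss is the parameter-similarity score to be maximized; the concrete loss values and the subtraction form are illustrative:

```python
def training_objective(first_loss, second_loss, weight):
    # A single scalar for the optimizer to minimize: the first term pushes the
    # classification error down, and subtracting the weighted second loss
    # pushes the parameter similarity up.
    return first_loss - weight * second_loss

# With hypothetical loss values, greater parameter similarity lowers the objective.
print(training_objective(0.5, 0.2, 1.0) < training_objective(0.5, 0.0, 1.0))  # True
```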
In one or more embodiments of this specification, during the training process shown in steps S106 to S108 of fig. 1, when the risk identification model is a multi-class model, parameter adjustment may fail to converge because the parameters have too many degrees of freedom. In that case, the following scheme may be adopted to further adjust the parameters of the risk identification model:
the first step is as follows: determining a specified feature value in the second features, and updating the feature value according to the difference between the feature value and the specified feature value for each feature value in the second features.
Specifically, model parameters that can be adjusted during training can be regarded as parameters with degrees of freedom; the more degrees of freedom the parameters have, the stronger the risk identification capability the model can learn. At the same time, because model parameters may influence one another, when the degrees of freedom are too high, the difference between the model's output and the labels of the training samples may fail to decrease, so parameter adjustment does not converge and the model cannot reach its training objective. Therefore, in the embodiments of this specification, while maximizing the similarity among the parameters of the fully connected layer, the training objective can still be reached by limiting the degrees of freedom of the model parameters.
In particular, for a multi-class risk identification model, an increase in the degrees of freedom of the model parameters shows up as an increase in the dimensionality of the fully connected layer parameters that linearly transform the first feature. To reduce the degrees of freedom, the adjustment of some parameters can be omitted, so that only the remaining parameters are adjusted. Meanwhile, so as not to affect the second feature determined from these parameters, after the first feature is linearly transformed by the fully connected layer into the second feature, a specified feature value is selected from one dimension of the second feature; each feature value in the second feature is then updated according to its difference from the specified feature value, and the updated second feature is obtained once every feature value has been updated.
The second step: input the updated second feature into the classification subnet to obtain the classification result of the training sample.
Since the specified feature value is subtracted from the feature value of every dimension of the second feature, the classification result obtained by inputting the updated second feature into the classification subnet is the same as the classification result obtained from the second feature before the update. That is, subtracting the specified feature value from every dimension of the second feature does not affect the classification result based on the second feature.
It can be understood that the specified feature value must also be subtracted from itself: during training, every time the second feature is determined, the specified feature value is subtracted from each of its feature values, so in the updated second feature the value in the specified dimension is usually zero. Because the specified feature value is always subtracted out, its value does not affect training; consequently, whether or not the parameter that produces the specified feature value is adjusted during training has no effect on training, yet skipping its adjustment reduces the degrees of freedom of the model parameters and helps the model training converge.
For example, if the second feature is (z₁, z₂, z₃) and z₃ is determined to be the specified feature value, the updated second feature is (z₁ − z₃, z₂ − z₃, 0); passing the second features before and after the update through a softmax function yields the same classification result.
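The shift-invariance of softmax that this example relies on can be checked numerically; the concrete values of (z₁, z₂, z₃) below are hypothetical.

```python
import math

def softmax(z):
    m = max(z)                       # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

z = [2.0, 1.0, 0.5]                  # hypothetical second feature (z1, z2, z3)
z3 = z[2]                            # z3 is the specified feature value
z_updated = [v - z3 for v in z]      # (z1 - z3, z2 - z3, 0)

before, after = softmax(z), softmax(z_updated)
print(all(abs(a - b) < 1e-12 for a, b in zip(before, after)))  # True
```

Because subtracting a common constant from every logit leaves the softmax output unchanged, the classification result is unaffected by the update.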
The third step: in the fully connected layer, determine the specified parameter corresponding to the specified feature value.
Further, since the second feature is obtained by linearly transforming the first feature with the parameters of the fully connected layer, the dimension holding the specified feature value in the second feature corresponds to a parameter in the fully connected layer, and that specified parameter can thus be obtained.
The fourth step: when adjusting the parameters of the risk identification model, adjust only the parameters of the fully connected layer other than the specified parameter.
Not adjusting the specified parameter reduces the degrees of freedom of the model parameters and helps the model parameters converge.
With this scheme of adjusting only the parameters of the fully connected layer other than the specified parameter, the degrees of freedom of adjustable parameters during training are reduced, so the training objective of maximizing the similarity among the parameters can converge, improving the feasibility of model training.
In one or more embodiments of this specification, a trained risk identification model can be obtained with the training method shown in fig. 1. Based on the trained model and a database containing the features of risk-free transaction events and of risk events of different risk types, risk identification can be performed on an event to be identified by feature retrieval. The embodiments of this specification therefore provide a risk identification method based on the trained risk identification model, as shown in fig. 3:
s200: and responding to an identification request, inputting the event to be identified into the trained risk identification model, and obtaining the characteristics of the event to be identified through the characteristic subnet of the risk identification model.
The risk identification model is obtained through the training method of the risk identification model shown in fig. 1, trained on business events and their risk types, with the training targets of minimizing the difference between the risk classification result predicted by the model and the risk type, and maximizing the similarity among the parameters of the fully connected layer in the risk identification model.
Specifically, in the embodiment of the present specification, a vector retrieval manner is adopted to determine the risk type of the event to be identified.
The feature of the event to be recognized, obtained through the feature subnet of the trained risk recognition model, is a multi-dimensional array of numbers, i.e., a multi-dimensional vector. The scheme for obtaining the risk type of the event to be identified through vector retrieval is: based on the k-nearest neighbor (k-NN) method, retrieve in a given vector database, according to a specified similarity measure, the features of a specified number of specified events that are close to the feature of the event to be identified, and then determine the risk type of the event to be identified based on the risk types of those specified events.
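The k-NN retrieval step can be sketched as follows. This is a minimal illustration: `knn_retrieve` is a hypothetical name, the database is a plain NumPy array, and Euclidean distance is used as the similarity measure (the specification leaves the concrete measure open):

```python
import numpy as np

def knn_retrieve(query, db_features, db_risk_types, k=3):
    """Retrieve the k specified events whose features are closest to the
    query feature (Euclidean distance as the similarity measure) and
    return their risk types for later voting/clustering."""
    dists = np.linalg.norm(db_features - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return [db_risk_types[i] for i in nearest]

db = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.1, 4.9]])
types = ["no_risk", "no_risk", "fraud", "fraud"]
assert knn_retrieve(np.array([0.05, 0.05]), db, types, k=2) == ["no_risk", "no_risk"]
```

In practice a dedicated vector index (rather than a brute-force scan) would serve the same role at scale.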
S202: the characteristics of a plurality of specified events contained in the database and the risk types of the specified events are obtained.
The database contains features of a large number of transaction events whose risk types have been determined; the database may store the features of the transaction events and the mapping relation between those features and the risk types of the transaction events. That is, the risk type of a transaction event can be determined by querying with the feature of the transaction event.
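The feature-to-risk-type mapping described above can be sketched with a simple in-memory store. `feature_db`, `store`, and `lookup` are hypothetical names, and a production database would of course replace the Python dict:

```python
# In-memory stand-in for the feature database described in the text.
feature_db = {}

def store(event_feature, risk_type):
    """Store the mapping from a transaction event's feature to its risk type.
    Tuples are used so the feature vector can serve as a dict key."""
    feature_db[tuple(event_feature)] = risk_type

def lookup(event_feature):
    """Determine the risk type of a transaction event by its feature."""
    return feature_db.get(tuple(event_feature))

store([0.1, 0.2, 0.3], "fraud")
assert lookup([0.1, 0.2, 0.3]) == "fraud"
```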
Specifically, the designated events may be transaction events that are first roughly screened according to the specific application scenario and may have the same risk type as the event to be identified; of course, the designated events may also be all transaction events stored in the database, without screening, which is not limited in this specification.
The specific features stored in the database may be obtained through feature subnets in the risk identification model shown in fig. 1, or may be obtained in advance according to other pre-trained feature extraction models, which is not limited in this specification.
S204: and determining the similarity between the characteristics of the event to be identified and the characteristics of the specified event aiming at each specified event.
Furthermore, in order to determine which of the designated events can be selected as target events for determining the risk type of the event to be identified, the similarity between the feature of the event to be identified and the feature of each designated event can be determined.
Optionally, the Euclidean distance between the feature of the specified event and the feature of the event to be identified is determined, and the similarity between the two features is determined according to the Euclidean distance. The smaller the Euclidean distance between the feature of the event to be identified and the feature of the specified event, the greater the similarity between them.
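Converting a Euclidean distance into a similarity can be sketched as follows; the 1/(1+d) mapping is one common monotone-decreasing choice, not a formula fixed by the specification:

```python
import numpy as np

def similarity(feat_a, feat_b):
    """Convert Euclidean distance into a similarity in (0, 1]:
    the smaller the distance, the greater the similarity."""
    d = np.linalg.norm(np.asarray(feat_a) - np.asarray(feat_b))
    return 1.0 / (1.0 + d)

assert similarity([1.0, 2.0], [1.0, 2.0]) == 1.0        # identical features
assert similarity([0.0, 0.0], [3.0, 4.0]) == 1.0 / 6.0  # distance 5
```

Any strictly decreasing function of the distance preserves the ranking used for selecting target events.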
S206: and selecting a specified number of target events from the specified events according to the similarity.
Specifically, according to the determined similarities, each designated event whose similarity is greater than a preset threshold may be determined as a target event corresponding to the event to be identified; the specified number may be determined in advance according to the specific application scenario, which is not limited in this specification.
S208: and determining the risk type of the event to be identified according to the risk type of each target event.
Clustering is performed according to the determined risk types of the target events, thereby obtaining the risk type of the event to be identified.
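A majority vote over the target events' risk types is one simple instance of the clustering step described above; the sketch below assumes that form (the specification does not prescribe a particular clustering method):

```python
from collections import Counter

def risk_type_by_vote(target_event_types):
    """Determine the risk type of the event to be identified as the most
    common risk type among the selected target events."""
    return Counter(target_event_types).most_common(1)[0][0]

assert risk_type_by_vote(["fraud", "fraud", "no_risk"]) == "fraud"
```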
In the risk identification method provided in this specification, the feature of the event to be identified is obtained through the feature subnet of the trained risk identification model; according to the features of the specified events contained in the database, the similarity between the feature of the event to be identified and the feature of each specified event is determined; a specified number of target events are selected from the specified events according to the determined similarities; and the risk type of the event to be identified is then determined according to the risk type of each target event. Because one of the training targets of the risk identification model is maximizing the similarity among the parameters of the fully connected layer, the feature of the event to be identified output by the feature subnet lends itself to similarity measurement against the features of other events. The risk type of the event to be identified can therefore be obtained by feature retrieval over the database even when the database is dynamically updated, enabling rapid defense against batch risks and improving the accuracy of risk identification.
In an alternative embodiment of this specification, the features of specified events contained in the database and the risk types of the specified events can be updated as follows:
firstly, responding to a database updating request, and acquiring an event to be updated and a risk type of the event to be updated.
When an event occurs whose risk type differs from the risk types of the events contained in the database, that event can be taken as the event to be updated, and the correspondence between the event to be updated and its risk type is stored in the database.
Secondly, inputting the event to be updated into the characteristic subnet to obtain the characteristic of the event to be updated.
In addition, in the embodiments of this specification, the risk type of the event to be identified can be identified based on feature retrieval over the database. Therefore, the feature of the event to be updated can be stored in the database, so that when other events similar in risk type to the event to be updated occur, their risk type can be quickly identified, coping with risk identification in different scenarios.
And finally, storing the characteristics of the event to be updated and the risk type of the event to be updated in the database.
Fig. 4 is a training apparatus of a risk identification model provided in this specification, where the risk identification model includes a feature subnet, a full connection layer, and a classification subnet, and includes:
a first obtaining module 300, configured to obtain a business event and a risk type of the business event, and determine a training sample and a label of the training sample;
a first feature determining module 302, configured to input the training sample into a risk identification model to be trained, and obtain a first feature of the training sample through the feature subnet;
a second feature determining module 304, configured to weight the first feature according to the first feature through each parameter in the full connection layer, so as to obtain a second feature of the training sample;
a classification module 306, configured to input the second feature into the classification subnet, so as to obtain a classification result of the training sample;
an adjusting module 308, configured to adjust parameters of the risk identification model with minimization of a difference between the label of the training sample and the classification result and maximization of similarity between parameters of the full connection layer as training targets; wherein the risk identification model is used for determining the risk type of the business event to be identified.
Optionally, the adjusting module 308 is specifically configured to determine a first loss according to a difference between the label of the training sample and the classification result; determining similarity among the parameters according to the variance of the parameters of the full connection layer to obtain a second loss; determining a weight of the second loss and weighting the second loss; and adjusting parameters of the risk identification model by taking minimization of the first loss and maximization of the weighted second loss as optimization targets.
Optionally, the adjusting module 308 is specifically configured to determine a parameter difference according to any two parameters in the parameters; traversing each parameter to obtain each parameter difference; and determining the similarity among the parameters according to the variance of the difference of the parameters to obtain a second loss.
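The determination of the second loss described above can be sketched as follows. This is a minimal illustration assuming the fully connected layer parameters are given as a flat list of scalars (the patent does not fix a concrete shape): the variance of all pairwise parameter differences is used, so the smaller the variance, the more similar the parameters, and maximizing similarity amounts to minimizing this loss:

```python
from itertools import combinations
import numpy as np

def second_loss(params):
    """Similarity among fully connected layer parameters, measured as the
    variance of all pairwise parameter differences."""
    diffs = [a - b for a, b in combinations(params, 2)]
    return float(np.var(diffs))

# Identical parameters -> zero variance of differences -> maximal similarity.
assert second_loss([1.0, 1.0, 1.0]) == 0.0
assert second_loss([1.0, 2.0, 4.0]) > 0.0
```

The weighted second loss would then be combined with the classification loss, e.g. `total = first_loss + w * second_loss(params)`, with the weight `w` chosen per the adjusting module's description.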
Optionally, the classification module 306 is specifically configured to determine a specified feature value in the second feature; for each feature value in the second feature, updating the feature value according to the difference between the feature value and the specified feature value; and inputting the updated second characteristic into the classification subnet to obtain a classification result of the training sample.
Optionally, the adjusting module 308 is specifically configured to determine, in the fully connected layer, the specified parameter corresponding to the specified feature value; and, when adjusting the parameters of the risk identification model, to take as the parameter adjustment direction the adjustment of the remaining parameters of the fully connected layer other than the specified parameter.
Optionally, the apparatus further comprises:
a risk type determining module 310, configured to respond to an identification request, input an event to be identified into a trained risk identification model, and obtain a feature of the event to be identified through a feature subnet of the risk identification model; the method comprises the steps of obtaining characteristics of a plurality of specified events contained in a database and the risk types of the specified events; for each specified event, determining the similarity between the characteristics of the event to be identified and the characteristics of the specified event; the selection module is used for selecting a specified number of target events from the specified events according to the similarity; and determining the risk type of the event to be identified according to the risk type of each target event.
Optionally, the risk type determining module 310 is specifically configured to determine a euclidean distance between the feature of the specified event and the feature of the event to be identified; and determining the similarity between the characteristics of the event to be identified and the characteristics of the specified event according to the Euclidean distance.
Optionally, the apparatus further comprises:
an update module 312, specifically configured to respond to a database update request, and obtain an event to be updated and a risk type of the event to be updated; inputting the event to be updated into the feature subnet to obtain the feature of the event to be updated; storing the characteristics of the event to be updated and the risk type of the event to be updated in the database.
The present specification also provides a computer-readable storage medium storing a computer program, which can be used to execute the method for training the risk recognition model shown in fig. 1.
This specification also provides a schematic structural diagram of the electronic device shown in fig. 5. As shown in fig. 5, at the hardware level, the electronic device includes a processor, an internal bus, a network interface, a memory, and a non-volatile storage, and may of course also include hardware required by other services. The processor reads the corresponding computer program from the non-volatile storage into the memory and then runs it to implement the training method of the risk identification model shown in fig. 1. Of course, besides a software implementation, this specification does not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution subject of the following processing flow is not limited to logic units, and may also be hardware or logic devices.
In the 1990s, an improvement in a technology could clearly be distinguished as an improvement in hardware (e.g., an improvement in a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement in a method flow). However, as technology has developed, many of today's improvements in method flows can be regarded as direct improvements in hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement in a method flow cannot be realized with hardware entity modules. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic function is determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming, without requiring a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, today, instead of manually making integrated circuit chips, such programming is mostly implemented with "logic compiler" software, which is similar to a software compiler used in program development; the original code to be compiled must be written in a specific programming language, called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); at present, VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used.
It will also be apparent to those skilled in the art that a hardware circuit implementing the logical method flow can be readily obtained merely by lightly logic-programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.
The controller may be implemented in any suitable manner; for example, the controller may take the form of a microprocessor or processor and a computer readable medium storing computer readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, besides implementing the controller purely in computer readable program code, the same functionality can be implemented entirely by logically programming the method steps so that the controller takes the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included in it for performing various functions may also be regarded as structures within the hardware component. Or, the means for performing the various functions may even be regarded as both software modules implementing the method and structures within the hardware component.
The systems, apparatuses, modules or units described in the above embodiments may be specifically implemented by a computer chip or an entity, or implemented by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, respectively. Of course, the functionality of the various elements may be implemented in the same one or more pieces of software and/or hardware in the practice of this description.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer readable medium, random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer readable medium.
Computer readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer readable media do not include transitory computer readable media, such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a/an …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
This description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
All the embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present disclosure, and is not intended to limit the present disclosure. Various modifications and alterations to this description will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement or the like made within the spirit and principle of the present specification should be included in the scope of the claims of the present specification.

Claims (18)

1. A method of training a risk recognition model, the risk recognition model comprising a feature subnet, a fully connected layer, and a classification subnet, the method comprising:
acquiring a business event and a risk type of the business event, and determining a training sample and a label of the training sample;
inputting the training sample into a risk identification model to be trained, and obtaining a first characteristic of the training sample through the characteristic subnet;
according to the first characteristic, performing linear transformation on the first characteristic through all parameters in the full connection layer to obtain a second characteristic of the training sample;
inputting the second characteristics into the classification subnet to obtain a classification result of the training sample;
adjusting parameters of the risk recognition model by taking minimization of difference between labels of the training samples and the classification result and maximization of similarity among all parameters of the full connection layer as training targets;
wherein the risk identification model is used for determining the risk type of the business event to be identified.
2. The method according to claim 1, wherein the minimizing of the difference between the labels of the training samples and the classification results and the maximizing of the similarity between the parameters of the fully-connected layer are taken as training targets, and the adjusting of the parameters of the risk identification model specifically comprises:
determining a first loss according to a difference between labels of the training samples and the classification result;
determining similarity among the parameters according to the variance of the parameters of the full connection layer to obtain a second loss;
determining a weight of the second loss and weighting the second loss;
and adjusting parameters of the risk identification model by taking minimization of the first loss and maximization of the weighted second loss as optimization targets.
3. The method of claim 2, wherein obtaining the second loss comprises:
determining parameter difference according to any two parameters in the parameters;
traversing the parameters to obtain the difference of the parameters;
and determining the similarity among the parameters according to the variance of the difference of the parameters to obtain a second loss.
4. The method of claim 1, wherein inputting the second feature into the classification subnet to obtain a classification result of the training sample comprises:
determining a specified feature value in the second feature;
for each feature value in the second feature, updating the feature value according to the difference between the feature value and the specified feature value;
and inputting the updated second characteristic into the classification subnet to obtain a classification result of the training sample.
5. The method of claim 4, wherein adjusting the parameters of the risk identification model specifically comprises:
in the full connection layer, determining a designated parameter corresponding to the designated characteristic value;
and when adjusting the parameters of the risk identification model, taking as the parameter adjustment direction the adjustment of the remaining parameters of the fully connected layer other than the specified parameter.
6. The method of claim 1, further comprising:
responding to an identification request, inputting an event to be identified into a trained risk identification model, and obtaining the characteristics of the event to be identified through a characteristic subnet of the risk identification model;
the method comprises the steps of obtaining characteristics of a plurality of specified events contained in a database and the risk types of the specified events;
for each appointed event, determining the similarity between the characteristics of the event to be recognized and the characteristics of the appointed event;
according to the similarity, selecting a specified number of target events from the specified events;
and determining the risk type of the event to be identified according to the risk type of each target event.
7. The method according to claim 6, wherein determining the similarity between the feature of the event to be identified and the feature of the specified event specifically comprises:
determining the Euclidean distance between the characteristic of the specified event and the characteristic of the event to be identified;
and determining the similarity between the characteristics of the event to be identified and the characteristics of the specified event according to the Euclidean distance.
8. The method of claim 6, further comprising:
responding to a database updating request, and acquiring an event to be updated and a risk type of the event to be updated;
inputting the event to be updated into the feature subnet to obtain the feature of the event to be updated;
storing the characteristics of the event to be updated and the risk type of the event to be updated in the database.
9. A training apparatus for a risk recognition model, the risk recognition model including a feature subnet, a full connection layer, and a classification subnet, comprising:
the system comprises a first acquisition module, a first processing module and a second processing module, wherein the first acquisition module is used for acquiring a business event and a risk type of the business event and determining a training sample and a label of the training sample;
the first characteristic determining module is used for inputting the training sample into a risk recognition model to be trained and obtaining a first characteristic of the training sample through the characteristic subnet;
the second characteristic determining module is used for weighting the first characteristics through all parameters in the full connection layer according to the first characteristics to obtain second characteristics of the training sample;
the classification module is used for inputting the second characteristics into the classification subnet to obtain a classification result of the training sample;
the adjusting module is used for adjusting the parameters of the risk recognition model by taking the minimization of the difference between the labels of the training samples and the classification results and the maximization of the similarity among the parameters of the full connection layer as training targets; wherein the risk identification model is used for determining the risk type of the business event to be identified.
10. The apparatus of claim 9, the adjustment module being specifically configured to determine a first loss based on a difference between labels of the training samples and the classification result; determining the similarity among all the parameters according to the variance of all the parameters of the full connection layer to obtain a second loss; determining a weight of the second loss and weighting the second loss; and adjusting parameters of the risk identification model by taking minimization of the first loss and maximization of the weighted second loss as optimization targets.
11. The apparatus of claim 10, wherein the adjustment module is specifically configured to determine a parameter difference according to any two of the parameters; traversing the parameters to obtain the difference of the parameters; and determining the similarity among the parameters according to the variance of the difference of the parameters to obtain a second loss.
12. The apparatus of claim 9, the classification module to be specifically configured to determine a specified feature value among the second features; for each feature value in the second feature, updating the feature value according to the difference between the feature value and the specified feature value; and inputting the updated second characteristic into the classification subnet to obtain a classification result of the training sample.
13. The apparatus of claim 12, wherein the adjusting module is specifically configured to: determine, in the fully connected layer, the specified parameter corresponding to the specified feature value; and, when adjusting the parameters of the risk recognition model, adjust, among the parameters of the fully connected layer, the parameters other than the specified parameter.
14. The apparatus of claim 9, further comprising:
the risk type determining module is specifically configured to: in response to an identification request, input an event to be identified into the trained risk recognition model, and obtain the feature of the event to be identified through the feature subnet of the risk recognition model; obtain the features of a plurality of specified events stored in a database and the risk type of each specified event; and, for each specified event, determine the similarity between the feature of the event to be identified and the feature of that specified event;
the selection module is configured to select a specified number of target events from the specified events according to the similarities, and to determine the risk type of the event to be identified according to the risk types of the target events.
15. The apparatus of claim 14, wherein the risk type determining module is specifically configured to: determine the Euclidean distance between the feature of the specified event and the feature of the event to be identified; and determine the similarity between the feature of the event to be identified and the feature of the specified event according to the Euclidean distance.
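Claims 14 and 15 describe a nearest-neighbour lookup. A minimal sketch, assuming the database is a list of (feature, risk type) pairs and that the final risk type is decided by majority vote among the selected target events (the voting rule is an assumption; the claims only say the risk type is determined from the target events):

```python
import math
from collections import Counter

def identify_risk_type(event_feature, database, k=3):
    """Claims 14-15 sketch: rank stored events by Euclidean distance to the
    event to be identified, keep the k most similar as target events, and
    vote on their risk types."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # smaller Euclidean distance <=> higher similarity
    nearest = sorted(database, key=lambda rec: distance(rec[0], event_feature))[:k]
    votes = Counter(risk_type for _, risk_type in nearest)
    return votes.most_common(1)[0][0]
```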
16. The apparatus of claim 14, further comprising:
the updating module is specifically configured to: in response to a database update request, obtain an event to be updated and the risk type of the event to be updated; input the event to be updated into the feature subnet to obtain the feature of the event to be updated; and store the feature of the event to be updated together with its risk type in the database.
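The database update of claim 16 can be sketched as follows, with `feature_subnet` standing in for the trained feature subnet (here an arbitrary callable, an assumption for illustration):

```python
def update_database(database, event_to_update, risk_type, feature_subnet):
    """Claim 16 sketch: run the new event through the feature subnet and
    store the resulting feature alongside its risk type."""
    feature = feature_subnet(event_to_update)
    database.append((feature, risk_type))
    return database
```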
17. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the method of any one of claims 1 to 8.
18. An electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 8 when executing the program.
CN202211276933.4A 2022-10-18 2022-10-18 Risk recognition model training method, device, equipment and storage medium Pending CN115600106A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211276933.4A CN115600106A (en) 2022-10-18 2022-10-18 Risk recognition model training method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211276933.4A CN115600106A (en) 2022-10-18 2022-10-18 Risk recognition model training method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115600106A true CN115600106A (en) 2023-01-13

Family

ID=84849747

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211276933.4A Pending CN115600106A (en) 2022-10-18 2022-10-18 Risk recognition model training method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115600106A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115935265A (en) * 2023-03-03 2023-04-07 Alipay (Hangzhou) Information Technology Co., Ltd. Method for training risk recognition model, risk recognition method and corresponding device
CN115935265B (en) * 2023-03-03 2023-05-26 Alipay (Hangzhou) Information Technology Co., Ltd. Method for training risk identification model, risk identification method and corresponding device
CN115937617A (en) * 2023-03-06 2023-04-07 Alipay (Hangzhou) Information Technology Co., Ltd. Risk identification model training and risk control method, device and equipment
CN116028820A (en) * 2023-03-20 2023-04-28 Alipay (Hangzhou) Information Technology Co., Ltd. Model training method and device, storage medium and electronic equipment
CN116028820B (en) * 2023-03-20 2023-07-04 Alipay (Hangzhou) Information Technology Co., Ltd. Model training method and device, storage medium and electronic equipment
CN116578877A (en) * 2023-07-14 2023-08-11 Zhejiang Lab Method and device for model training and risk identification of secondary optimization marking
CN116578877B (en) * 2023-07-14 2023-12-26 Zhejiang Lab Method and device for model training and risk identification of secondary optimization marking

Similar Documents

Publication Publication Date Title
CN115600106A (en) Risk recognition model training method, device, equipment and storage medium
CN113313575B (en) Method and device for determining risk identification model
EP3971806A1 (en) Data processing methods, apparatuses, and devices
CN110633989A (en) Method and device for determining risk behavior generation model
CN115862088A (en) Identity recognition method and device
CN114429222A (en) Model training method, device and equipment
CN116303989A (en) Patent retrieval method, device and equipment for multiple retrieval scenes
Rego et al. Artificial intelligent system for multimedia services in smart home environments
CN116049761A (en) Data processing method, device and equipment
CN115618964A (en) Model training method and device, storage medium and electronic equipment
CN116152933A (en) Training method, device, equipment and storage medium of anomaly detection model
CN115712866A (en) Data processing method, device and equipment
US20220335566A1 (en) Method and apparatus for processing point cloud data, device, and storage medium
Li et al. Object detection based on semi-supervised domain adaptation for imbalanced domain resources
WO2022096943A1 (en) Method and apparatus for processing point cloud data, device, and storage medium
CN115131570B (en) Training method of image feature extraction model, image retrieval method and related equipment
CN116402108A (en) Model training and graph data processing method, device, medium and equipment
CN113992429B (en) Event processing method, device and equipment
CN112967044B (en) Payment service processing method and device
CN111753583A (en) Identification method and device
Wang et al. Salient object detection based on multi-feature graphs and improved manifold ranking
CN113344197A (en) Training method of recognition model, service execution method and device
CN115564450B (en) Wind control method, device, storage medium and equipment
CN116340852B (en) Model training and business wind control method and device
CN116186540A (en) Data processing method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination