CN114240101A - Risk identification model verification method, device and equipment - Google Patents

Risk identification model verification method, device and equipment

Info

Publication number
CN114240101A
Authority
CN
China
Prior art keywords
risk
data
verified
model
identification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111461036.6A
Other languages
Chinese (zh)
Inventor
余坤
孙波
张晓旭
李怀松
孙富
赵亮
曾庆瑜
李晶莹
陶睿
胡研
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202111461036.6A priority Critical patent/CN114240101A/en
Publication of CN114240101A publication Critical patent/CN114240101A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0635 Risk analysis of enterprise or organisation activities
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00 Payment architectures, schemes or protocols
    • G06Q20/38 Payment protocols; Details thereof
    • G06Q20/382 Payment protocols; Details thereof insuring higher security of transaction

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Economics (AREA)
  • General Physics & Mathematics (AREA)
  • Accounting & Taxation (AREA)
  • General Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Computer Security & Cryptography (AREA)
  • Educational Administration (AREA)
  • Finance (AREA)
  • Development Economics (AREA)
  • Game Theory and Decision Science (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiments of the specification disclose a method, an apparatus and a device for verifying a risk identification model. In the method, a risk pool containing N types of risk sets is constructed in advance. For any ith type of risk set in the risk pool, a plurality of risk data to be verified are randomly extracted and then manually verified to obtain manual verification results. The degree of consistency between the manual verification results and the model identification result is compared, and whether the model identification result of the risk identification model for the ith type of risk set in the risk pool passes verification is determined based on the degree of consistency.

Description

Risk identification model verification method, device and equipment
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to a method, an apparatus, and a device for verifying a risk identification model.
Background
With the development of the mobile internet, automated and intelligent risk identification models have been widely applied in various scenarios. For example, a payment application must perform risk identification every day on payments and transfers between accounts, to determine whether a payment is risky, whether the account itself is risky, and so on, which produces a large number of risk identification results in a short period of time. This brings an attendant need to verify the risk identification results efficiently enough under large data volumes in order to confirm the capability of the risk identification model.
Based on this, there is a need for a more efficient verification scheme for risk identification models.
Disclosure of Invention
One or more embodiments of the present disclosure provide a method, an apparatus, a device, and a storage medium for verifying a risk identification model, so as to solve the following technical problems: there is a need for a more efficient verification scheme for risk identification models.
To solve the above technical problem, one or more embodiments of the present specification are implemented as follows:
in a first aspect, an embodiment of the present specification provides a method for verifying a risk identification model, which is applied to a risk pool including N types of risk sets, where the risk sets are generated by risk identification models for risk identification on risk data, N is a natural number greater than 1, and the method includes: randomly extracting a plurality of risk data to be verified from the risk set aiming at any ith type of risk set in the risk pool, wherein i is more than or equal to 1 and less than or equal to N, and i is a natural number; acquiring manual verification results of the plurality of risk data to be verified, and determining model identification results of the risk identification model for the plurality of risk data to be verified; determining the consistency degree of the manual verification result and the model identification result; and when the consistency degree of the manual verification result and the model identification result meets a preset condition, determining that the model identification result of the ith type of risk set of the risk identification model passes verification.
In a second aspect, an embodiment of the present specification provides an apparatus for verifying a risk identification model, which is applied to a risk pool including N types of risk sets, where the risk sets are generated by risk identification of risk data by the risk identification model, and N is a natural number greater than 1, the apparatus including: the random extraction module is used for randomly extracting a plurality of risk data to be verified from the risk set aiming at any ith type of risk set in the risk pool, wherein i is more than or equal to 1 and less than or equal to N, and i is a natural number; the acquisition module is used for acquiring the manual verification results of the plurality of risk data to be verified and determining the model identification results of the risk identification model for the plurality of risk data to be verified; the consistency degree module is used for determining the consistency degree of the manual verification result and the model identification result; and the verification module is used for determining that the model identification result of the ith type of risk set passes verification by the risk identification model when the consistency degree of the manual verification result and the model identification result meets the preset condition.
In a third aspect, an embodiment of the present specification provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
In a fourth aspect, embodiments of the present specification provide a non-transitory computer storage medium storing computer-executable instructions that, when read by a computer, cause one or more processors to perform the method according to the first aspect.
At least one technical solution adopted by one or more embodiments of the specification can achieve the following beneficial effects. A risk pool containing N types of risk sets is constructed in advance; for any ith type of risk set in the risk pool, a plurality of risk data to be verified are randomly extracted and then manually verified to obtain manual verification results; the degree of consistency between the manual verification results and the model identification result can then be compared, and whether the model identification result of the risk identification model for the ith type of risk set in the risk pool passes verification is determined based on the degree of consistency. In this way, the model identification results of the risk identification model are fully verified to the maximum extent with a minimum investment of manual expert auditing, which is more efficient.
Drawings
In order to more clearly illustrate the embodiments of the present specification or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some of the embodiments described in the present specification, and those skilled in the art can obtain other drawings from these drawings without any creative effort.
Fig. 1 is a schematic flowchart of a method for verifying a risk identification model according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a flow mechanism provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a multi-party interaction mechanism provided by an embodiment of the present specification;
FIG. 4 is a diagram of a system framework provided in an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a risk identification model verification apparatus provided in an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device provided in an embodiment of the present specification.
Detailed Description
The embodiments of the specification provide a method, an apparatus, a device and a storage medium for verifying a risk identification model.
In order to make those skilled in the art better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any inventive step based on the embodiments of the present disclosure, shall fall within the scope of protection of the present application.
For a risk identification model that has been widely deployed, the amount of risk data it identifies each day is huge; on a payment platform, for example, the number of identified risk data per day may exceed a million, and there is a high probability that some risk data will be misidentified. In this case, each piece of risk data generally has to be characterized by manual review; that is, for a risk identification model, the scale at which its identified risks can be confirmed is ultimately determined by the amount of risk data that can be verified through manual review.
Although the risks identified by the model can currently be fully verified by an all-manual effort, as the business scale keeps growing this manual analysis and auditing mode is bound to hit a bottleneck, and a scheme that can handle diversified, complex or new risk data with a lower proportion of manpower is needed to improve the verification efficiency of the risk identification model. Based on this, the embodiments of this specification provide a verification scheme for a risk identification model.
As shown in fig. 1, fig. 1 is a schematic flowchart of a verification method for a risk identification model provided in an embodiment of the present specification. The method is applied to a risk pool containing N types of risk sets, where the risk sets are generated by a risk identification model performing risk identification on risk data. The flow in fig. 1 may include the following steps:
s101: and aiming at any ith type of risk set in the risk pool, randomly extracting a plurality of risk data to be verified from the risk set, wherein i is more than or equal to 1 and less than or equal to N, and i is a natural number.
The risk identification model can always classify or cluster risk data when risk identification is carried out. For example, generally speaking, risk identification models trained through supervised learning (samples include tags) can classify risk data, while risk identification models trained through unsupervised learning (samples do not include tags) can realize homogeneous clustering of risk data, so as to cluster risk data with similar features, and at this time, the risk identification models may not know what the risk types under the various types obtained by clustering are.
In other words, the N risk types that exist may be risk types that are known in advance, for example, for the payment domain, the N risk types may be "risk a", "risk B", or "risk C", and so on, which are known as specific risk types; it may also be that only the risk identification model is known to have aggregated the risk data into N risk types, but the specific meaning of each risk type is not yet known, e.g., N risk types may be "risk type 1", "risk type 2", or "risk type 3", etc.
Each type of risk set contains a large amount of risk data, and each risk set may contain a plurality of different groups, for example, in the risk set "Risk A", a plurality of different groups may be contained, including "small group", "medium group", "large group", and so on. It is readily understood that the amount of risk data contained in different types of risk sets in a risk pool is usually different, but depends on some realistic distribution, and similarly, the amount of risk data contained in each group in the same risk set is also different.
The risk pool may be used to store a certain amount or time period of risk data, excess amounts of risk data may not be pooled, or risk data that exceeds a time period may be removed to supplement the entry of updated risk data into the risk pool.
Based on this, for any ith type of risk set, in order to verify the accuracy of the identification results in the risk set, a random extraction manner may be adopted to extract part of the risk data from the risk set, so as to obtain a plurality of risk data to be verified.
The number or proportion of draws may be determined based on actual needs, for example, for any risk set, randomly drawing 1% of the risk data from the set, or randomly drawing no less than 1000 pieces of risk data.
The random extraction mode may also be set as needed. For example, the risk data whose serial number ends in "9" may be extracted from the risk set; or the risk data in the risk set may be sorted and segmented by pool-entry time (for example, divided into 24 segments based on the pool-entry time) and an equal number of random draws taken from each segment, so that the random extraction process satisfies statistical principles.
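As a purely illustrative sketch in Python (not part of the embodiments themselves), the segmented random extraction described above might look as follows; the field name pool_entry_time and the helper name are assumptions introduced only for illustration:

```python
import random

def segmented_random_sample(risk_set, n_samples, n_segments=24):
    """Sort risk data by pool-entry time, split the ordered list into roughly
    equal time segments, and draw an equal share of random samples from each."""
    ordered = sorted(risk_set, key=lambda r: r["pool_entry_time"])  # assumed field name
    if not ordered:
        return []
    segment_size = max(1, len(ordered) // n_segments)
    segments = [ordered[i:i + segment_size] for i in range(0, len(ordered), segment_size)]
    per_segment = max(1, n_samples // len(segments))
    sampled = []
    for segment in segments:
        sampled.extend(random.sample(segment, min(per_segment, len(segment))))
    return sampled[:n_samples]
```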
S103: acquiring manual verification results of the plurality of risk data to be verified, and determining model identification results of the risk identification model for the plurality of risk data to be verified.
The extracted risk data to be verified are distributed to a professional manual management platform through the service platform.
For example, for risk data for which a risk identification model has given a specific model identification result, the risk data may be distributed to a plurality of examiners with rich examination experience for the type of risk through the service platform, and the examiners may receive the risk data through the manual administration platform and give their own manual verification results through the manual administration platform.
As mentioned above, in the supervised classification mode, the risk identification model can directly provide a risk conclusion, and the model identification result may include a risk type of the risk set given by the risk identification model, and for the model, the risk data in the risk set is all regarded as the risk type; in the unsupervised homogeneous clustering mode, the risk identification model does not provide risk conclusions, only provides multiple categories, and the model identification result at the moment means that risk data in the same category have highly similar characteristics.
It is readily understood that the manual verification results do not depend on the model identification results. In other words, the manual verification results always contain the specific risk conclusions given by the examiner, including the risk type and the risk degree of the risk data, without using any information given by the risk identification model. For example, a manual verification result may be of the form "risk type: Risk A; risk degree: high".
At this time, when the risk identification model has given a specific model-decided risk type, for each piece of risk data to be verified, the risk type decided by the model for the risk set may be regarded as the risk type of that risk data to be verified. When the risk identification model has not given a specific model-decided risk type, the artificial risk types given in the manual verification results are counted to obtain a statistical risk type, and the statistical risk type is determined as the model-decided risk type.
S105: determining the degree of consistency between the manual verification result and the model identification result.
The degree of consistency may also include consistency of the type of risk and/or degree of risk in the manual validation results and the model identification results. When both the manual verification result and the model identification result contain the risk type of the risk data to be verified, the evaluation on the consistency degree can be converted into the consistency degree of the risk types contained in the manual verification result and the model identification result.
When the risk identification model already determines the model decision risk types of the plurality of risk data to be verified, the same ratio of the risk types in the manual decision risk types of the plurality of risk data to be verified to the model decision risk types can be determined, and the consistency degree of the manual verification result and the model identification result is determined according to the same ratio.
For example, for 1000 pieces of risk data to be verified, the risk types given by the model identification result are all "risk B", the risk types given by the manual verification result are 950 pieces of "risk B", and 50 pieces of "other", then at this time, the consistency degree between the manual verification result and the model identification result may be regarded as 0.95.
Further, since in this case the risk identification model may also give more specific risk conclusions, for example a specific risk degree such as "low risk", "medium risk" or "high risk" for each piece of risk data, the manual verification result may give the same kind of specific risk degree, so that counting a piece of data toward the same ratio also requires the risk degrees to be the same. For example, if 20 of the 950 pieces of risk data with the same risk type have a different risk degree, the same ratio is 930/1000 = 0.93.
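A minimal Python sketch of this same-ratio computation, assuming each result is a dict with a risk_type key and an optional risk_degree key (these key names are assumptions, not part of the embodiments):

```python
def consistency_degree(model_results, manual_results):
    """Fraction of items whose risk type (and risk degree, when given) matches
    between the model identification result and the manual verification result."""
    matched = sum(
        1
        for model, manual in zip(model_results, manual_results)
        if model["risk_type"] == manual["risk_type"]
        and model.get("risk_degree") == manual.get("risk_degree")
    )
    return matched / len(model_results)
```

With 1000 items, 950 matching risk types and 20 of those differing in risk degree, this returns 930/1000 = 0.93, as in the example above.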
When the risk identification model has not determined the model-decided risk types of the plurality of risk data to be verified, the model identification result of the risk data is determined based on statistical information of the manual verification results. The statistical information here may include the number, distribution, variance, etc. of the various risk types given in the manual verification results.
For example, for 1000 pieces of risk data to be verified in a certain "risk type 1", the risk identification model itself does not give a specific risk conclusion, and the manual verification result gives 850 pieces of risk data belonging to "risk B", 100 pieces of risk data belonging to "risk a", and 50 pieces of risk data belonging to "other", the statistical risk type may be determined as "risk B", and the model identification result of the risk set of "risk type 1" is also determined as "risk B".
Furthermore, the degree of consistency between the manual verification result and the model identification result can be determined as the degree of consistency between the artificial risk types and the statistical risk type. For example, as described above, since the risk set of "risk type 1" is already regarded as "Risk B", the 100 pieces of "Risk A" and 50 pieces of "other" contained in it are obviously not consistent with the risk type of the set; the same ratio between the manual verification results and the model identification result is therefore 0.85, and the degree of consistency between the manual verification results and the model identification result is 0.85.
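As an illustrative Python sketch (the key name risk_type is an assumption), the statistical risk type and the resulting consistency degree could be derived from the manual verification results like this:

```python
from collections import Counter

def statistical_risk_type(manual_results):
    """Majority artificial risk type among the manual verification results,
    together with its share, which becomes the consistency degree once the
    whole risk set is regarded as that type."""
    counts = Counter(result["risk_type"] for result in manual_results)
    top_type, top_count = counts.most_common(1)[0]
    return top_type, top_count / len(manual_results)
```

For 850 "Risk B", 100 "Risk A" and 50 "other" results, this returns ("Risk B", 0.85), matching the example.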
S107: when the degree of consistency between the manual verification result and the model identification result meets a preset condition, determining that the model identification result of the risk identification model for the ith type of risk set passes verification.
The preset condition may be that the degree of consistency is greater than a preset value, for example, the degree of consistency is greater than 0.99; alternatively, for the ith type of risk set, the preset condition may further include that the degree of consistency is greater than a preset value, and the distribution difference of the degree of risk is lower than the preset value.
For example, when the model identification result gives both the risk type and the risk degree, the variance of the model's distribution over the risk degrees in the extracted risk data to be verified can be computed, and the variance of the distribution over the risk degrees given in the manual verification results can be computed at the same time, so that the difference between the two variances can be compared. When the degree of consistency of the risk types is greater than a preset value and the difference between the risk-distribution variances is lower than a preset value, it is determined that the model identification result of the risk identification model for the ith type of risk set passes verification.
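A rough Python sketch of this distribution comparison, under the assumption that each result carries a risk_degree key and that the degree labels are "low", "medium" and "high" (both are assumptions made only for illustration):

```python
from collections import Counter
from statistics import pvariance

def degree_distribution_gap(model_results, manual_results,
                            degrees=("low", "medium", "high")):
    """Absolute difference between the variance of the risk-degree distribution
    given by the model and that given by manual verification."""
    def distribution(results):
        counts = Counter(result["risk_degree"] for result in results)
        total = len(results)
        return [counts.get(degree, 0) / total for degree in degrees]

    return abs(pvariance(distribution(model_results))
               - pvariance(distribution(manual_results)))
```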
In practical application, because the service platform distributes the plurality of risk data to be verified, it knows the number of risk data to be verified in advance. Meanwhile, the manual verification results returned by the manual management platform are also fed back to the service platform. Therefore, the service platform can determine at any time, for the ith type of risk set, whether the set can pass verification.
For example, assume that the service platform extracts 1000 pieces of data from the risk set of type "Risk A" for distribution, and requires a degree of consistency greater than 0.99 before verification is passed. Then, as soon as more than 10 pieces of data in the manual verification results returned by the manual auditing platform have a risk type other than "Risk A", it is already known that the model identification result for this type cannot pass verification. Similarly, once more than 990 pieces of data in the returned manual verification results are "Risk A", it can be determined that the verification passes without waiting for the last pieces of data.
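The early pass/fail decision described above can be sketched in Python as follows; this is only an illustrative reading of the example, and the exact threshold semantics (how ties are treated) are an assumption:

```python
def early_decision(total, min_consistency, matched, mismatched):
    """Decide pass/fail as soon as the outcome is settled, without waiting
    for the remaining manual verification results."""
    allowed_mismatches = int(total * (1 - min_consistency))
    if mismatched > allowed_mismatches:
        return "fail"      # too many disagreements already observed
    if matched >= total - allowed_mismatches:
        return "pass"      # enough agreements even if all remaining items disagree
    return "undecided"
```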
The method thus constructs a risk pool containing N types of risk sets in advance, randomly extracts a plurality of risk data to be verified from any ith type of risk set in the risk pool, and manually verifies the extracted risk data to obtain manual verification results; the degree of consistency between the manual verification results and the model identification result can then be compared, and whether the model identification result of the risk identification model for the ith type of risk set in the risk pool passes verification is determined based on the degree of consistency. In this way, the model identification results of the risk identification model are fully verified to the maximum extent with a minimum investment of manual expert auditing, which is more efficient.
In one embodiment, in order to make the extracted risk data to be verified more representative of the data in the original risk pool, the random extraction may be performed in the following manner:
determining a total number S of risk data extracted from the ith type of risk set in the risk pooli(ii) a Respectively obtaining the extraction weights k of k subsets contained in the risk setjJ is more than or equal to 1 and less than or equal to k, and j and k are natural numbers; according to the total number SiAnd said decimation weight kjDetermining the number S of samples taken from the jth subsetj
For example, assume that the risk set of "type 4" in the risk pool contains several subsets "subset 1", "subset 2" and "subset 3". If 1000 pieces of data need to be extracted from the risk set of type 4 and the extraction weight between the subsets is 1:1:3, then 200, 200 and 600 pieces of data can be extracted from the three subsets in proportion based on the extraction weights. The extraction weight kj may be predetermined, or may be converted proportionally from the number of risk data contained in each subset.
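A minimal Python sketch of this weighted per-subset extraction (the function name and the rounding rule are assumptions for illustration):

```python
import random

def weighted_subset_sample(subsets, weights, total):
    """Split the per-set extraction quota across subsets in proportion to the
    extraction weights, then sample randomly inside each subset."""
    weight_sum = sum(weights)
    sampled = []
    for subset, weight in zip(subsets, weights):
        quota = round(total * weight / weight_sum)
        sampled.extend(random.sample(subset, min(quota, len(subset))))
    return sampled
```

With weights 1:1:3 and a total of 1000, the quotas are 200, 200 and 600, as in the example.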
Further, the total quantity Si of risk data to be extracted from the ith type of risk set can be determined as follows:
determining the quantity proportion of the risk data contained in each of the N types of risk sets in the risk pool; and determining the total quantity Si of the extracted risk data from the ith type of risk set in the risk pool according to the quantity proportion.
The risk identification model puts the risk data identified in the practical application into a risk pool, so that the proportion of the quantity of the risk data contained in each of the N types of risk sets in the risk pool represents the actual distribution of various risk types. Therefore, in order to reflect such actual distribution, data extraction should be performed on each type of risk set from the risk pool according to a ratio of the number of risk data included in each of the N types of risk sets.
For example, assume that the quantity proportion of risk data contained in each of the N types of risk sets is C1:C2:…:CN. Then, after random extraction is performed on each type of risk set in the risk pool, the proportion of the risk data to be verified extracted from each set should also be C1:C2:…:CN. In this way, the distribution of the extracted risk data to be verified conforms to the actual distribution and better satisfies statistical characteristics, so that the identification accuracy of the whole data can be represented by verifying only part of the data.
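An illustrative Python sketch of this proportional allocation across the N risk sets (the dict layout of the risk pool is an assumption):

```python
def allocate_extraction_quota(risk_pool, total_to_extract):
    """Allocate the overall extraction budget across the N risk sets in the
    same C1:C2:...:CN proportion as the risk data they contain."""
    pool_size = sum(len(risk_set) for risk_set in risk_pool.values())
    return {
        risk_type: round(total_to_extract * len(risk_set) / pool_size)
        for risk_type, risk_set in risk_pool.items()
    }
```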
In one embodiment, if the degree of consistency between the manual verification results and the model identification result does not satisfy the preset condition, this indicates that the risk identification model may not be accurate enough in identifying this batch of risk data, or that there is some deviation in the manual verification results. In this case, the risk data to be verified whose manual verification result is inconsistent with the model identification result are determined, and the inconsistent risk data to be verified are sent to a manual verifier for secondary manual verification. As shown in fig. 2, fig. 2 is a schematic diagram of a flow mechanism provided in an embodiment of the present disclosure. In this schematic diagram, the service platform may send the inconsistent risk data to be verified to other parties for secondary manual verification, and determine whether to adjust the conclusion for a single piece of risk data, whether to adjust the verification conclusion for the whole batch, and so on. In this way, the accuracy of the manual auditing results can be further improved.
In the embodiment of the present specification, since multiple parties are involved in the verification process (where a party may in practice be a module in the service platform or a department), there are various interactions between them, as shown in fig. 3, which is a schematic diagram of a multi-party interaction mechanism provided by the embodiment of the present specification. The cases in the figure are individual pieces of risk data to be verified; spot-check strategy management, spot-check task management and spot-check source features are task processing modules in the service platform; an auditing expert generates a manual verification result for each case through the manual auditing platform, and a spot-check decision expert performs secondary verification on the risk data whose manual verification result and model identification result are inconsistent.
In an embodiment, when the manual review platform gives the manual verification result, if the manual review platform knows the model recognition result exactly, then the manual review platform may also give feedback data based on the inconsistency between the manual verification result and the model recognition result, and return the feedback data to the service platform, and the service platform returns the feedback data to the trainer of the risk recognition model.
Such feedback data is typically structured data. For example, when the risk data is payment transaction data of a user, the feedback data may contain various features of the payment transaction data, including user features, location, payment amount, and the like. The feedback data is used to indicate that the risk identification model used at least one feature contained in the risk data inaccurately or inconsistently, so that the trainer of the risk identification model can gather statistics and adjust the training direction based on the feedback opinions to obtain a more accurate risk identification model. As shown in fig. 4, fig. 4 is a schematic diagram of a system framework provided in an embodiment of the present disclosure. In this schematic diagram, the service platform automatically returns the feedback opinions to the trainer of the model through a feedback component.
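Purely as an illustration of what such a structured feedback record might look like in Python (all field names here are assumptions, not part of the embodiments):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FeedbackRecord:
    """One structured feedback entry returned to the model trainer for a piece
    of risk data whose manual and model results disagree."""
    risk_data_id: str
    model_risk_type: str
    manual_risk_type: str
    suspect_features: List[str] = field(default_factory=list)  # features judged to have been used inaccurately
```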
Further, after the trainer of the risk identification model obtains the feedback data and updates the risk identification model, another piece of risk data to be verified can be obtained, and in the ith type of risk set the inconsistent risk data to be verified is replaced with this other risk data to be verified. The other risk data to be verified is classified into the ith type of risk set by the updated risk identification model, which is obtained by update training based on the feedback data. In this way, the model is updated based on the feedback data of the failed verification, and the risk data identified by the updated model replaces the inconsistent risk data to be verified, forming a logical closed loop in which the model can be dynamically and continuously adjusted in an iterative manner to match the dynamic change of risk types in reality.
In an embodiment, for the risk pool, the service platform may further perform periodic management of the risk data in the risk pool, that is, for each type of risk set, determine the stay time of the risk data to be verified in the risk pool, and remove from the risk pool the risk data to be verified whose stay time exceeds a preset duration. For example, the preset duration may be set to 1 month, and risk data that has stayed in the data pool for more than 1 month may be removed or replaced with new risk data, so as to improve the timeliness of the risk data in the risk pool.
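A small Python sketch of this dwell-time management (the field name pool_entry_time and the one-month default are assumptions taken from the example above):

```python
from datetime import datetime, timedelta

def expire_stale_risk_data(risk_pool, max_age=timedelta(days=30), now=None):
    """Remove risk data whose stay in the pool exceeds the preset duration."""
    now = now or datetime.utcnow()
    for risk_type, risk_set in risk_pool.items():
        risk_pool[risk_type] = [
            record for record in risk_set
            if now - record["pool_entry_time"] <= max_age  # assumed field name
        ]
    return risk_pool
```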
Based on the same idea, one or more embodiments of the present specification further provide apparatuses and devices corresponding to the above-described method, as shown in fig. 5 and fig. 6.
In a second aspect, as shown in fig. 5, fig. 5 is a schematic diagram of a verification apparatus for a risk identification model provided in an embodiment of the present specification, applied to a risk pool containing N types of risk sets, where the risk sets are generated by a risk identification model performing risk identification on risk data and N is a natural number greater than 1. The apparatus includes:
a random extraction module 501, configured to randomly extract multiple risk data to be verified from any ith type of risk set in the risk pool, where i is greater than or equal to 1 and less than or equal to N, and i is a natural number;
an obtaining module 503, configured to obtain manual verification results for the multiple risk data to be verified, and determine a model identification result of the risk identification model for the multiple risk data to be verified;
a consistency degree module 505, configured to determine a consistency degree between the manual verification result and the model identification result;
and the verification module 507 determines that the model identification result of the ith type of risk set passes verification by the risk identification model when the consistency degree of the manual verification result and the model identification result meets a preset condition.
Optionally, when the risk identification model has determined the model judgment risk types of the multiple risk data to be verified, the consistency degree module 505 determines the same ratio of the risk types of the multiple risk data to be verified, which are the same in the manual judgment risk types and the model judgment risk types, and determines the consistency degree of the manual verification result and the model identification result according to the same ratio.
Optionally, when the risk identification model does not determine the model decision risk type of the multiple risk data to be verified, the obtaining module 503 determines the statistical risk type of the multiple risk data to be verified according to the statistical information of the artificial risk type; determining the statistical risk type as a model identification result of the risk identification model for a plurality of risk data to be verified; accordingly, the consistency degree module 505 determines the consistency degree of the artificial risk type and the statistical risk type.
Optionally, the random extraction module 501 determines a total quantity Si of risk data to be extracted from the ith type of risk set in the risk pool; respectively obtains extraction weights kj of the k subsets contained in the risk set, wherein j is more than or equal to 1 and less than or equal to k, and j and k are natural numbers; and determines, according to the total quantity Si and the extraction weight kj, the quantity Sj of samples to be extracted from the jth subset.
Optionally, the random extraction module 501 determines a quantity proportion of risk data contained in each of the N types of risk sets in the risk pool, and determines, according to the quantity proportion, the total quantity Si of risk data to be extracted from the ith type of risk set in the risk pool.
Optionally, the apparatus further includes a secondary verification module 509, and when the consistency degree between the manual verification result and the model identification result does not satisfy a preset condition, determining to-be-verified risk data with an inconsistent manual verification result and the model identification result; and sending the inconsistent risk data to be verified to a manual verifier for secondary manual verification.
Optionally, the system further includes a feedback module 511, configured to obtain feedback data of the inconsistent risk data to be verified, where the feedback data is used to characterize that the risk identification model is used inaccurately for at least one feature included in the risk data; and returning the feedback data to a trainer of the risk identification model.
Optionally, the system further includes a replacement module 513, which acquires another risk data to be verified, where the another risk data to be verified is classified into the ith type risk set by an updated risk identification model, and the updated risk identification model is obtained by updating and training based on the feedback data; and in the ith type of risk set, replacing the inconsistent risk data to be verified with the other risk data to be verified.
Optionally, the system further includes a risk pool management module 515, which determines a stay time of the risk data to be verified in the risk pool, and removes the risk data to be verified, whose stay time exceeds a preset time, from the risk pool.
In a third aspect, as shown in fig. 6, fig. 6 is a schematic structural diagram of an electronic device according to one or more embodiments of the present disclosure. The electronic device includes:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
Based on the same idea, the embodiments of the present specification further provide a non-volatile computer storage medium corresponding to the method described above, and store computer-executable instructions, which, when read by a computer, cause one or more processors to execute the method according to the first aspect.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (for example, an improvement in a circuit structure such as a diode, a transistor or a switch) or an improvement in software (an improvement in a method flow). However, as technology has advanced, many of today's method-flow improvements can be regarded as direct improvements in hardware circuit structure. Designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Thus, it cannot be said that an improvement in a method flow cannot be realized with hardware entity modules. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, nowadays, instead of manually fabricating integrated circuit chips, such programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development, and the original code to be compiled must be written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM and RHDL (Ruby Hardware Description Language), among which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing the logical method flow can be readily obtained by merely slightly logic-programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor and a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller; examples include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20 and Silicon Labs C8051F320. The memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art will also appreciate that, in addition to implementing the controller as pure computer readable program code, the same functionality can be implemented entirely by logically programming the method steps, such that the controller takes the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller may thus be considered a hardware component, and the means included therein for performing the various functions may also be considered structures within the hardware component; or even the means for performing the functions may be regarded both as software modules for performing the method and as structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functions of the various elements may be implemented in the same one or more software and/or hardware implementations of the present description.
As will be appreciated by one skilled in the art, the present specification embodiments may be provided as a method, system, or computer program product. Accordingly, embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The description has been presented with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the description. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer readable media do not include transitory computer readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
This description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the embodiments of the apparatus, the device, and the nonvolatile computer storage medium, since they are substantially similar to the embodiments of the method, the description is simple, and for the relevant points, reference may be made to the partial description of the embodiments of the method.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The above description is merely one or more embodiments of the present disclosure and is not intended to limit the present disclosure. Various modifications and alterations to one or more embodiments of the present description will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement or the like made within the spirit and principle of one or more embodiments of the present specification should be included in the scope of the claims of the present specification.

Claims (19)

1. A verification method of a risk identification model is applied to a risk pool containing N types of risk sets, wherein the risk sets are generated by risk identification of the risk identification model for risk data, N is a natural number greater than 1, and the method comprises the following steps:
randomly extracting a plurality of risk data to be verified from the risk set aiming at any ith type of risk set in the risk pool, wherein i is more than or equal to 1 and less than or equal to N, and i is a natural number;
acquiring manual verification results of the plurality of risk data to be verified, and determining model identification results of the risk identification model for the plurality of risk data to be verified;
determining the consistency degree of the manual verification result and the model identification result;
and when the consistency degree of the manual verification result and the model identification result meets a preset condition, determining that the model identification result of the ith type of risk set of the risk identification model passes verification.
2. The method of claim 1, when the risk identification model has determined a model decision risk type for the plurality of risk data to be verified, wherein determining the degree of correspondence of the manual verification result and the model identification result comprises:
determining the same ratio of the same risk types in the manual judgment risk types of the plurality of risk data to be verified and the model judgment risk types;
and determining the consistency degree of the manual verification result and the model identification result according to the same ratio.
3. The method of claim 1, when the risk identification model does not determine the model decision risk type for the plurality of risk data to be verified, wherein determining the model identification result of the risk identification model for the plurality of risk data to be verified comprises:
acquiring the artificial risk types of the plurality of risk data to be verified determined by the artificial verification result;
determining the statistical risk types of the plurality of risk data to be verified according to the statistical information of the artificial risk types;
determining the statistical risk type as a model identification result of the risk identification model for a plurality of risk data to be verified;
correspondingly, determining the consistency degree of the manual verification result and the model identification result comprises the following steps: determining a degree of correspondence of the artificial risk type and the statistical risk type.
4. The method of claim 1, wherein randomly extracting a plurality of risk data to be verified from the risk set comprises:
determining a total quantity Si of risk data to be extracted from the ith type of risk set in the risk pool;
respectively obtaining extraction weights kj of k subsets contained in the risk set, wherein j is more than or equal to 1 and less than or equal to k, and j and k are natural numbers;
determining, according to the total quantity Si and the extraction weight kj, a quantity Sj of samples to be extracted from the jth subset.
5. The method of claim 2, wherein determining the total quantity Si of risk data to be extracted from the ith type of risk set in the risk pool comprises:
determining the quantity proportion of the risk data contained in each of the N types of risk sets in the risk pool;
determining, according to the quantity proportion, the total quantity Si of risk data to be extracted from the ith type of risk set in the risk pool.
6. The method of claim 1, further comprising:
when the consistency degree of the manual verification result and the model identification result does not meet a preset condition, determining to-be-verified risk data with inconsistent manual verification results and model identification results;
and sending the inconsistent risk data to be verified to a manual verifier for secondary manual verification.
7. The method of claim 6, further comprising:
obtaining feedback data of the inconsistent risk data to be verified, wherein the feedback data are used for representing that the risk identification model is inaccurate in using at least one feature contained in the risk data;
and returning the feedback data to a trainer of the risk identification model.
8. The method of claim 7, further comprising:
acquiring another risk data to be verified, wherein the another risk data to be verified is classified to the ith type risk set by an updated risk identification model, and the updated risk identification model is obtained by updating and training based on the feedback data;
and in the ith type of risk set, replacing the inconsistent risk data to be verified with the other risk data to be verified.
9. The method of claim 1, further comprising:
determining the stay time of the risk data to be verified in the risk pool, and removing the risk data to be verified, of which the stay time exceeds the preset time, from the risk pool.
10. A verification device of a risk identification model is applied to a risk pool containing N types of risk sets, wherein the risk sets are generated by risk identification of the risk identification model for risk data, N is a natural number greater than 1, and the device comprises:
the random extraction module is used for randomly extracting a plurality of risk data to be verified from the risk set aiming at any ith type of risk set in the risk pool, wherein i is more than or equal to 1 and less than or equal to N, and i is a natural number;
the acquisition module is used for acquiring the manual verification results of the plurality of risk data to be verified and determining the model identification results of the risk identification model for the plurality of risk data to be verified;
the consistency degree module is used for determining the consistency degree of the manual verification result and the model identification result;
and the verification module is used for determining that the model identification result of the ith type of risk set passes verification by the risk identification model when the consistency degree of the manual verification result and the model identification result meets the preset condition.
11. The apparatus of claim 10, wherein, when the risk identification model has determined model decision risk types for the plurality of risk data to be verified, the consistency degree module determines a same-type ratio between the manually determined risk types and the model decision risk types of the plurality of risk data to be verified, and determines the consistency degree of the manual verification result and the model identification result according to the same-type ratio.
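One straightforward reading of the consistency degree of claim 11 is the agreement ratio between the manually determined and model-determined risk types; a sketch under that assumption, with an illustrative pass threshold, follows.

```python
def consistency_degree(manual_types, model_types):
    """Fraction of sampled records whose manual risk type equals the model risk type."""
    if not manual_types:
        return 0.0
    matches = sum(m == p for m, p in zip(manual_types, model_types))
    return matches / len(manual_types)

# Verification of the i-th risk set passes when the ratio reaches a preset condition, e.g. 0.8.
passed = consistency_degree(["fraud", "fraud", "normal"], ["fraud", "normal", "normal"]) >= 0.8
```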
12. The apparatus of claim 10, wherein, when the risk identification model has not determined model decision risk types for the plurality of risk data to be verified, the obtaining module determines a statistical risk type of the plurality of risk data to be verified according to statistical information of the manually determined risk types, and determines the statistical risk type as the model identification result of the risk identification model for the plurality of risk data to be verified; correspondingly, the consistency degree module determines the consistency degree of the manually determined risk types and the statistical risk type.
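Where the model has produced no per-record decision type, claim 12 derives a statistical risk type from the manual labels; a plausible sketch treats it as the most frequent manually determined type and measures agreement against it. This is one reading only, not the patented computation.

```python
from collections import Counter

def statistical_risk_type(manual_types):
    """Most frequent manually determined risk type among the sampled records."""
    return Counter(manual_types).most_common(1)[0][0]

def consistency_with_statistical_type(manual_types):
    """Share of manual labels that agree with the statistical risk type."""
    top = statistical_risk_type(manual_types)
    return sum(t == top for t in manual_types) / len(manual_types)
```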
13. The apparatus of claim 10, wherein the random extraction module determines a total quantity Si of risk data to be extracted from the ith type of risk set in the risk pool; respectively obtains extraction weights kj of the k subsets contained in the risk set, wherein j is more than or equal to 1 and less than or equal to k, and j and k are natural numbers; and determines, according to the total quantity Si and the extraction weights kj, a quantity Sj of samples to be extracted from the jth subset.
14. The apparatus of claim 13, wherein the random extraction module determines a quantity proportion of the risk data contained in each of the N types of risk sets in the risk pool, and determines, according to the quantity proportion, the total quantity Si of the risk data to be extracted from the ith type of risk set in the risk pool.
15. The apparatus of claim 10, further comprising a secondary verification module that, when the consistency degree of the manual verification result and the model identification result does not meet the preset condition, determines the risk data to be verified for which the manual verification result is inconsistent with the model identification result, and sends the inconsistent risk data to be verified to a manual verifier for secondary manual verification.
16. The apparatus of claim 15, further comprising a feedback module that obtains feedback data of the inconsistent risk data to be verified, the feedback data indicating that the risk identification model uses at least one feature contained in the risk data inaccurately, and returns the feedback data to a trainer of the risk identification model.
17. The apparatus of claim 16, further comprising a replacement module that acquires further risk data to be verified, the further risk data to be verified being classified into the ith type of risk set by an updated risk identification model obtained by update training based on the feedback data, and that, in the ith type of risk set, replaces the inconsistent risk data to be verified with the further risk data to be verified.
18. The apparatus of claim 10, further comprising a risk pool management module that determines a dwell time of the risk data to be verified in the risk pool and removes, from the risk pool, the risk data to be verified whose dwell time exceeds a preset time.
19. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 9.
CN202111461036.6A 2021-12-02 2021-12-02 Risk identification model verification method, device and equipment Pending CN114240101A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111461036.6A CN114240101A (en) 2021-12-02 2021-12-02 Risk identification model verification method, device and equipment

Publications (1)

Publication Number Publication Date
CN114240101A true CN114240101A (en) 2022-03-25

Family

ID=80752990

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111461036.6A Pending CN114240101A (en) 2021-12-02 2021-12-02 Risk identification model verification method, device and equipment

Country Status (1)

Country Link
CN (1) CN114240101A (en)

Patent Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106778853A (en) * 2016-12-07 2017-05-31 中南大学 Unbalanced data sorting technique based on weight cluster and sub- sampling
CN107729403A (en) * 2017-09-25 2018-02-23 中国工商银行股份有限公司 Internet information indicating risk method and system
CN107749031A (en) * 2017-11-29 2018-03-02 南京甄视智能科技有限公司 Risk control system after the automatic update method of risk control system, loan after loan
CN108615190A (en) * 2018-04-27 2018-10-02 深圳市分期乐网络科技有限公司 Air control model verification method, device, equipment and storage medium
CN109241418A (en) * 2018-08-22 2019-01-18 中国平安人寿保险股份有限公司 Abnormal user recognition methods and device, equipment, medium based on random forest
CN109635110A (en) * 2018-11-30 2019-04-16 北京百度网讯科技有限公司 Data processing method, device, equipment and computer readable storage medium
CN109902721A (en) * 2019-01-28 2019-06-18 平安科技(深圳)有限公司 Outlier detection model verification method, device, computer equipment and storage medium
CN110298030A (en) * 2019-05-24 2019-10-01 平安科技(深圳)有限公司 Method of calibration, device, storage medium and the equipment of semantic analysis model accuracy
CN110222791A (en) * 2019-06-20 2019-09-10 杭州睿琪软件有限公司 Sample labeling information auditing method and device
CN110347566A (en) * 2019-06-25 2019-10-18 阿里巴巴集团控股有限公司 For carrying out the method and device of measures of effectiveness to registration air control model
CN113408558A (en) * 2020-03-17 2021-09-17 百度在线网络技术(北京)有限公司 Method, apparatus, device and medium for model verification
CN111414609A (en) * 2020-03-19 2020-07-14 腾讯科技(深圳)有限公司 Object verification method and device
CN113032862A (en) * 2020-07-27 2021-06-25 深圳市前海数字城市科技有限公司 Building information model checking method and device and terminal equipment
CN111931488A (en) * 2020-09-24 2020-11-13 北京百度网讯科技有限公司 Method, device, electronic equipment and medium for verifying accuracy of judgment result
CN112447167A (en) * 2020-11-17 2021-03-05 康键信息技术(深圳)有限公司 Voice recognition model verification method and device, computer equipment and storage medium
CN112599137A (en) * 2020-12-16 2021-04-02 康键信息技术(深圳)有限公司 Method and device for verifying voiceprint model recognition effect and computer equipment
CN112819595A (en) * 2021-01-13 2021-05-18 中国建设银行股份有限公司 Method and device for intelligent disposal of certificate risk
CN113076901A (en) * 2021-04-12 2021-07-06 深圳前海微众银行股份有限公司 Model stability interpretation method, device, equipment and storage medium
CN113077051A (en) * 2021-04-14 2021-07-06 广东博智林机器人有限公司 Network model training method and device, text classification model and network model
CN113268596A (en) * 2021-05-24 2021-08-17 康键信息技术(深圳)有限公司 Verification method, device and equipment of department classification model and storage medium
CN113420792A (en) * 2021-06-03 2021-09-21 阿波罗智联(北京)科技有限公司 Training method of image model, electronic equipment, road side equipment and cloud control platform
CN113326900A (en) * 2021-06-30 2021-08-31 深圳前海微众银行股份有限公司 Data processing method and device of federal learning model and storage medium

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115730233A (en) * 2022-10-28 2023-03-03 支付宝(杭州)信息技术有限公司 Data processing method and device, readable storage medium and electronic equipment
CN115730233B (en) * 2022-10-28 2023-07-11 支付宝(杭州)信息技术有限公司 Data processing method and device, readable storage medium and electronic equipment
CN115409290A (en) * 2022-10-31 2022-11-29 北京领雁科技股份有限公司 Business data risk model verification method and device, electronic equipment and medium
CN117579625A (en) * 2024-01-17 2024-02-20 中国矿业大学 Inspection task pre-distribution method for double prevention mechanism
CN117579625B (en) * 2024-01-17 2024-04-09 中国矿业大学 Inspection task pre-distribution method for double prevention mechanism

Similar Documents

Publication Publication Date Title
CN108629687B (en) Anti-money laundering method, device and equipment
CN108305158B (en) Method, device and equipment for training wind control model and wind control
CN114240101A (en) Risk identification model verification method, device and equipment
CN108596410B (en) Automatic wind control event processing method and device
CN105718490A (en) Method and device for updating classifying model
CN108764915B (en) Model training method, data type identification method and computer equipment
CN110674188A (en) Feature extraction method, device and equipment
CN112257114A (en) Application privacy compliance detection method, device, equipment and medium
CN112153426A (en) Content account management method and device, computer equipment and storage medium
CN107463935A (en) Application class methods and applications sorter
CN110069545A (en) A kind of behavioral data appraisal procedure and device
CN110782349A (en) Model training method and system
CN111047220A (en) Method, device, equipment and readable medium for determining condition of wind control threshold
CN115858774A (en) Data enhancement method and device for text classification, electronic equipment and medium
CN110033092B (en) Data label generation method, data label training device, event recognition method and event recognition device
CN114490786A (en) Data sorting method and device
CN109492401B (en) Content carrier risk detection method, device, equipment and medium
CN113010562B (en) Information recommendation method and device
CN113379528A (en) Wind control model establishing method and device and risk control method
EP3901789A1 (en) Method and apparatus for outputting information
CN110263817B (en) Risk grade classification method and device based on user account
CN113344695A (en) Elastic wind control method, device, equipment and readable medium
CN113159213A (en) Service distribution method, device and equipment
CN112634048A (en) Anti-money laundering model training method and device
CN111259975A (en) Method and device for generating classifier and method and device for classifying text

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination