CN114723547A - Collection urging method and apparatus, computer device, and computer program product - Google Patents

Collection urging method and apparatus, computer device, and computer program product

Info

Publication number
CN114723547A
CN114723547A (application CN202111469343.9A)
Authority
CN
China
Prior art keywords
repayment
type
collection
machine
call
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111469343.9A
Other languages
Chinese (zh)
Inventor
邹江华
颜谨
陶韬
温建兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial and Commercial Bank of China Ltd ICBC
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC filed Critical Industrial and Commercial Bank of China Ltd ICBC
Priority to CN202111469343.9A priority Critical patent/CN114723547A/en
Publication of CN114723547A publication Critical patent/CN114723547A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/03Credit; Loans; Processing thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • G06F40/35Discourse or dialogue representation
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/26Speech to text systems
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Finance (AREA)
  • General Health & Medical Sciences (AREA)
  • Accounting & Taxation (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Technology Law (AREA)
  • Databases & Information Systems (AREA)
  • Strategic Management (AREA)
  • Development Economics (AREA)
  • General Business, Economics & Management (AREA)
  • Child & Adolescent Psychology (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application relates to a collection urging method, a collection urging apparatus, a computer device and a computer program product. The method comprises the following steps: inputting repayment item information into a trained collection mode selection model and outputting a collection mode type; if the type is machine collection, initiating a machine collection call and verifying the identity and repayment item with the call object in the machine collection call; if the verification is correct, acquiring a first reply voice of the call object to the query voice, converting the first reply voice into a first reply text, inputting the first reply text into a trained emotion analysis model, and outputting a first repayment willingness type; and broadcasting a collection urging voice in the machine collection call based on the first repayment willingness type. During collection, the repayment willingness of an overdue customer can be recognized through technical means such as speech recognition and emotion analysis, and multiple dialogue strategies such as explanation, inquiry and pressure testing are prepared for different scenarios, so that the efficiency and success rate of intelligent collection can be improved.

Description

Collection urging method and apparatus, computer device, and computer program product
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a collection urging method, a collection urging apparatus, a computer device, and a computer program product.
Background
At present, users with overdue arrears are commonly urged to repay by telephone. In traditional telephone collection, collection staff set a fixed collection time, and when that time arrives the same preset collection voice is broadcast to the user over the telephone. This process mainly serves as an information reminder or notification, close in effect to an SMS reminder; the broadcast content is rigid, and it is difficult to achieve an effective collection result. In some cases it may also cause a large number of overdue customers to call customer service agents, resulting in additional labor costs.
Disclosure of Invention
In view of the above, it is desirable to provide a collection method, a collection device, a computer device and a computer program product for improving collection effect.
In a first aspect, the present application provides a collection urging method, comprising:
acquiring repayment item information to be collected, inputting the repayment item information into a trained collection mode selection model, and outputting a collection mode type, where the collection mode type is machine collection or manual collection;
if the collection mode type is machine collection, initiating a machine collection call, and verifying the identity and repayment item with the call object in the machine collection call;
if the verification is correct, broadcasting a query voice for confirming repayment willingness in the machine collection call, acquiring a first reply voice of the call object to the query voice in the machine collection call, converting the first reply voice into a first reply text, inputting the first reply text into a trained emotion analysis model, and outputting a first repayment willingness type, where the first repayment willingness type is positive, neutral or negative;
and broadcasting a collection urging voice for urging repayment in the machine collection call based on the first repayment willingness type.
In one embodiment, the process of training the emotion analysis model comprises the following steps:
acquiring a training text and a corresponding classification label, where the classification label is one of three classification results: positive, neutral or negative;
inputting the training text into the emotion analysis model, outputting a prediction classification result of the training text, adjusting parameters in the emotion analysis model based on the difference between the prediction classification result and the classification label until the training is stopped, and obtaining the trained emotion analysis model.
In one embodiment, the sentiment analysis model comprises a door structure, and the parameters are door weight parameters of the door structure; adjusting parameters in the emotion analysis model based on the differences between the predicted classification results and the classification labels, including:
for the t training process, randomly obtaining n groups of values of gate weight parameters and loss function values corresponding to the emotion analysis model under each group of values, wherein t is a positive integer not less than 1, and n is a positive integer not less than 2;
and adjusting the value of the gate weight parameter in the t-th training process according to the n groups of values and the loss function value corresponding to the emotion analysis model under each group of values.
In one embodiment, n is 3; according to loss function values corresponding to the emotion analysis models under the multiple groups of values and each group of values, the values of the gate weight parameters are adjusted in the t-th training process, and the method comprises the following steps:
determining a maximum loss function value from the loss function values corresponding to the emotion analysis models under each group of values, taking the maximum loss function value and the value corresponding to the maximum loss function value as a first space point, and taking each group of values and the loss function value corresponding to each group of values in the remaining two groups of values as a second space point and a third space point respectively;
determining a first gradient descending direction according to the first space point and the second space point, determining a second gradient descending direction according to the first space point and the third space point, and calculating a first step length and a second step length according to the loss function values corresponding to each group of values;
and adjusting the value of the gate weight parameter in the t training process according to the first gradient descending direction, the second gradient descending direction, the first step length and the second step length.
In one embodiment, adjusting the value of the gate weight parameter in the tth training process according to the first gradient descending direction, the second gradient descending direction, the first step length and the second step length includes:
integrating the first gradient descending direction and the second gradient descending direction to obtain the gradient descending direction of the gate weight parameter in the t training process;
and adjusting the value of the gate weight parameter in the t training process according to the gradient descending direction of the gate weight parameter in the t training process, the gradient descending direction of the gate weight parameter in the t-1 training process, the first step length and the second step length.
In one embodiment, the loss function used by the emotion analysis model is:
L = -(1/n)·Σ_{i=1..n} λ(y_i_true, y_i_pred)·[y_i_true·log(y_i_pred) + (1 - y_i_true)·log(1 - y_i_pred)]
where λ(y_i_true, y_i_pred) represents a correction term, y_i_true is determined based on the classification label of the i-th training sample, y_i_pred is determined based on the predicted classification result of the i-th training sample, and n represents the total number of training samples.
In a second aspect, the present application also provides a collection urging apparatus, comprising:
an obtaining module, configured to acquire repayment item information to be collected, input the repayment item information into a trained collection mode selection model, and output a collection mode type, where the collection mode type is machine collection or manual collection;
an initiating module, configured to initiate a machine collection call when the collection mode type is machine collection, and verify the identity and repayment item with the call object in the machine collection call;
a prediction module, configured to broadcast a query voice for confirming repayment willingness in the machine collection call when the verification is correct, acquire a first reply voice of the call object to the query voice in the machine collection call, convert the first reply voice into a first reply text, input the first reply text into a trained emotion analysis model, and output a first repayment willingness type, where the first repayment willingness type is positive, neutral or negative;
and a broadcasting module, configured to broadcast a collection urging voice for urging repayment in the machine collection call based on the first repayment willingness type.
In a third aspect, the present application further provides a computer device, including a memory and a processor, where the memory stores a computer program, and the processor implements the following steps when executing the computer program:
acquiring repayment item information to be collected, inputting the repayment item information into a trained collection mode selection model, and outputting a collection mode type, where the collection mode type is machine collection or manual collection;
if the collection mode type is machine collection, initiating a machine collection call, and verifying the identity and repayment item with the call object in the machine collection call;
if the verification is correct, broadcasting a query voice for confirming repayment willingness in the machine collection call, acquiring a first reply voice of the call object to the query voice in the machine collection call, converting the first reply voice into a first reply text, inputting the first reply text into a trained emotion analysis model, and outputting a first repayment willingness type, where the first repayment willingness type is positive, neutral or negative;
and broadcasting a collection urging voice for urging repayment in the machine collection call based on the first repayment willingness type.
In a fourth aspect, the present application further provides a computer readable storage medium having a computer program stored thereon, the computer program when executed by a processor implementing the steps of:
acquiring repayment item information to be collected, inputting the repayment item information into a trained collection mode selection model, and outputting a collection mode type, where the collection mode type is machine collection or manual collection;
if the collection mode type is machine collection, initiating a machine collection call, and verifying the identity and repayment item with the call object in the machine collection call;
if the verification is correct, broadcasting a query voice for confirming repayment willingness in the machine collection call, acquiring a first reply voice of the call object to the query voice in the machine collection call, converting the first reply voice into a first reply text, inputting the first reply text into a trained emotion analysis model, and outputting a first repayment willingness type, where the first repayment willingness type is positive, neutral or negative;
and broadcasting a collection urging voice for urging repayment in the machine collection call based on the first repayment willingness type.
In a fifth aspect, the present application further provides a computer program product. Computer program product comprising a computer program which, when executed by a processor, performs the steps of:
acquiring repayment item information to be collected, inputting the repayment item information into a trained collection mode selection model, and outputting a collection mode type, where the collection mode type is machine collection or manual collection;
if the collection mode type is machine collection, initiating a machine collection call, and verifying the identity and repayment item with the call object in the machine collection call;
if the verification is correct, broadcasting a query voice for confirming repayment willingness in the machine collection call, acquiring a first reply voice of the call object to the query voice in the machine collection call, converting the first reply voice into a first reply text, inputting the first reply text into a trained emotion analysis model, and outputting a first repayment willingness type, where the first repayment willingness type is positive, neutral or negative;
and broadcasting a collection urging voice for urging repayment in the machine collection call based on the first repayment willingness type.
According to the above collection urging method, collection urging apparatus, computer device and computer program product, the repayment willingness of an overdue customer can be recognized during collection through technical means such as speech recognition and emotion analysis, and multiple dialogue strategies such as explanation, inquiry and pressure testing are prepared for different scenarios, so that the efficiency and success rate of intelligent collection can be improved.
Drawings
FIG. 1 is a diagram of an application environment of a collection urging method in one embodiment;
FIG. 2 is a schematic flow diagram of a collection urging method in one embodiment;
FIG. 3 is a diagram illustrating the operation of the emotion analysis model in one embodiment;
FIG. 4 is a schematic diagram of a collection urging process in one embodiment;
FIG. 5 is a schematic diagram of a recurrent neural network in one embodiment;
FIG. 6 is a schematic flow diagram of a collection urging method in another embodiment;
FIG. 7 is a schematic diagram of a collection urging process in another embodiment;
FIG. 8 is a block diagram of a collection urging apparatus in one embodiment;
FIG. 9 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms unless otherwise specified. These terms are only used to distinguish one element from another. For example, the third preset threshold and the fourth preset threshold may be the same or different without departing from the scope of the present application.
At present, users with overdue arrears are commonly urged to repay by telephone. In traditional telephone collection, collection staff set a fixed collection time, and when that time arrives the same preset collection voice is broadcast to the user over the telephone. This process mainly provides information reminding or notification, close in effect to an SMS reminder; the broadcast content is rigid, the collection dialogue is relatively single, the strategy cannot be adjusted dynamically during communication with the customer, and it is difficult to achieve an effective collection result. For most customers it can only serve as a notification and cannot effectively improve their repayment willingness. In some cases it may also cause a large number of overdue customers to call customer service agents, resulting in additional labor costs.
Based on the above requirements, the embodiments of the present application provide a collection urging method, which may be applied between different servers and terminals, specifically in the application environment shown in FIG. 1. The server 102 may place a call to the terminal 104 for communication (e.g., a voice call). The data storage system may store data that the server 102 needs to process; it may be integrated on the server 102, or placed on a cloud or other network server. The server 102 acquires the collection voice used for collection and transmits it to the terminal 104, and the terminal 104 receives and broadcasts the collection voice. The server 102 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
In some embodiments, referring to FIG. 2, a collection urging method is provided. Taking the application of the method to the server 102 in FIG. 1 as an example, the method includes the following steps:
202. Acquire repayment item information to be collected, input the repayment item information into a trained collection mode selection model, and output a collection mode type, where the collection mode type is machine collection or manual collection.
The repayment item information is mainly used for verifying whether the repayment item of the subsequent call object is correct, and the repayment item may be a housing loan, a consumer loan or a business loan. It will be appreciated that some repayment items are suitable for machine collection, such as housing loans, while others are suitable for manual collection, such as certain short-term loans. Further, the repayment item information may also include the repayment status of the item, such as whether it is overdue and the outstanding amount. Therefore, in this step, the trained collection mode selection model may determine the collection mode type based on the repayment item information. The collection mode selection model can be obtained by training a deep learning model on repayment item sample information and corresponding classification labels.
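As an illustration of this step, the following is a minimal sketch, assuming a small PyTorch feed-forward classifier, of how a collection mode selection model could map repayment item features to a collection mode type; the feature names, dimensions and label mapping are illustrative assumptions, not the patent's specified model.

```python
# Minimal sketch of a collection-mode selection model: repayment-item features
# in, "machine collection" vs "manual collection" out. Dimensions are assumed.
import torch
import torch.nn as nn

class CollectionModeSelector(nn.Module):
    def __init__(self, num_features: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, 16),
            nn.ReLU(),
            nn.Linear(16, 2),  # class 0 -> machine collection, class 1 -> manual collection (assumed mapping)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Hypothetical repayment-item features: [item type code, overdue days,
# normalized outstanding amount, number of prior reminders]
features = torch.tensor([[1.0, 30.0, 0.45, 2.0]])
mode = CollectionModeSelector()(features).argmax(dim=1)  # 0 = machine, 1 = manual
```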
204. If the collection mode type is machine collection, initiate a machine collection call, and verify the identity and repayment item with the call object in the machine collection call.
If the collection mode type is manual collection, the call can be transferred directly to a human agent. The machine collection call can be dialed by the server 102 to the terminal 104, and a voice such as "Hello, is this XXX? Did you previously apply for an XXX loan?" can be broadcast in the machine collection call for the call object to confirm, so as to verify the identity and repayment item of the call object.
206. If the verification is correct, broadcast a query voice for confirming repayment willingness in the machine collection call, acquire a first reply voice of the call object to the query voice in the machine collection call, convert the first reply voice into a first reply text, input the first reply text into the trained emotion analysis model, and output a first repayment willingness type, where the first repayment willingness type is positive, neutral or negative.
If the verification is correct, the call object is the collection object corresponding to the repayment item information, and the repayment item information is also correct, so the call object can then be asked about their repayment willingness. The emotion analysis model may be implemented by modeling with a convolutional neural network. However, because a convolutional neural network has no memory function, it cannot effectively link context, and its fully-connected structure is redundant and inefficient, so it also falls short in feature understanding. Therefore, in this embodiment, a recurrent neural network may be employed.
A recurrent neural network takes sequence data as input, performs recursion in the direction of the sequence's evolution, and connects all nodes (recurrent units) in a chain. Further, a long short-term memory (LSTM) network may be employed. The LSTM network is a temporal recurrent neural network designed specifically to solve the long-term dependence problem of ordinary recurrent neural networks; like all recurrent neural networks, it has a chain of repeating neural network modules.
It can be understood that, based on an evaluation of the intelligent collection scenario, modeling with a long short-term memory network in the recurrent neural network family solves the long-term dependence problem of recurrent neural networks, supports sequences on the order of 100 steps well, provides a long-term memory function, and is suitable for the text emotion analysis problem. The process of inputting the first reply text into the emotion analysis model and outputting the first repayment willingness type may refer to FIG. 3. In FIG. 3, the deep learning model may be a long short-term memory network model; the call text generated in the intelligent collection call record may be the first reply text or a call text of the call object later in the call, and these reply texts are natural language texts; the emotion analysis result is the first repayment willingness type or a repayment willingness type obtained by subsequent analysis.
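As an illustration, the following is a minimal sketch of such an LSTM-based emotion analysis model, assuming a PyTorch implementation; the vocabulary size, embedding and hidden dimensions, tokenization and class-index mapping are illustrative assumptions.

```python
# Minimal sketch: reply text (as token ids) -> embedding -> LSTM -> logits for
# positive / neutral / negative repayment willingness.
import torch
import torch.nn as nn

class EmotionAnalysisModel(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256, num_classes=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        embedded = self.embedding(token_ids)      # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(embedded)         # h_n: (1, batch, hidden_dim)
        return self.classifier(h_n[-1])           # logits over the three classes

# Example: classify a (pre-tokenized) first reply text
reply_token_ids = torch.tensor([[12, 57, 801, 3, 0, 0]])   # hypothetical token ids
logits = EmotionAnalysisModel()(reply_token_ids)
repayment_will_type = logits.argmax(dim=1)  # 0=positive, 1=neutral, 2=negative (assumed mapping)
```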
208. Based on the first repayment willingness type, broadcast a collection urging voice for urging repayment in the machine collection call.
The first repayment willingness type reflects the call object's initial reply upon hearing the query voice confirming repayment willingness. It can be understood that if the first repayment willingness type is negative, the call object basically has no repayment willingness, and a collection voice that applies repayment pressure can then be broadcast. For the collection process in steps 202 to 208, refer to FIG. 4, where the batch emotion analysis mainly refers to determining, from the reply to the query voice confirming repayment willingness, the initial script to use for the initial reply; for the call object's subsequent replies in the machine collection call, the script used for real-time replies can be determined through real-time emotion analysis.
According to the method provided by the embodiment of the application, the repayment willingness of an overdue customer can be recognized during collection through technical means such as speech recognition and emotion analysis, and multiple dialogue strategies such as explanation, inquiry and pressure testing are prepared for different scenarios, so that the efficiency and success rate of intelligent collection can be improved.
In some embodiments, a method of training an emotion analysis model is provided. Taking the application of the method to the server 102 in FIG. 1 as an example, the method includes the following steps: acquire a training text and a corresponding classification label, where the classification label is one of three classification results: positive, neutral or negative; input the training text into the emotion analysis model, output a predicted classification result of the training text, and adjust parameters in the emotion analysis model based on the difference between the predicted classification result and the classification label until training stops, obtaining the trained emotion analysis model. The training stop condition may be convergence of the emotion analysis model, which is not specifically limited in this embodiment of the present application.
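A minimal sketch of this training loop is shown below, assuming the EmotionAnalysisModel sketched earlier and an ordinary cross-entropy objective; the embodiments that follow replace parts of this with a corrected loss and a pseudo-gradient update.

```python
# Minimal sketch: adjust model parameters from the difference between the
# predicted classification result and the classification label until stopping.
import torch
import torch.nn as nn

def train_emotion_model(model, data_loader, num_epochs=5, lr=1e-3):
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(num_epochs):                  # stand-in for "until training stops"
        for token_ids, labels in data_loader:    # labels: 0/1/2 for positive/neutral/negative
            logits = model(token_ids)            # predicted classification result
            loss = criterion(logits, labels)     # difference from the classification label
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```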
According to the method provided by the embodiment of the application, the repayment willingness of an overdue customer can be recognized during collection through technical means such as speech recognition and emotion analysis, and multiple dialogue strategies such as explanation, inquiry and pressure testing are prepared for different scenarios, so that the efficiency and success rate of intelligent collection can be improved.
In the related art, a gradient descent algorithm is generally used to adjust parameters; however, gradient descent suffers from uncertainties such as vanishing and exploding gradients, which can render parameter adjustment ineffective. Therefore, a pseudo-gradient descent algorithm can be adopted in this application to adjust the parameters of the long short-term memory network model. In some embodiments, the emotion analysis model includes a gate structure, and the parameter is a gate weight parameter of the gate structure. Accordingly, the embodiment of the present application does not specifically limit the way of adjusting the parameters in the emotion analysis model based on the difference between the predicted classification result and the classification label, which includes but is not limited to: for the t-th training process, randomly obtaining n groups of values of the gate weight parameters and the loss function value of the emotion analysis model under each group of values, where t is a positive integer not less than 1 and n is a positive integer not less than 2; and adjusting the value of the gate weight parameter in the t-th training process according to the n groups of values and the loss function value of the emotion analysis model under each group of values.
Specifically, if the emotion analysis model is a recurrent neural network model, its structure can be seen in FIG. 5. In FIG. 5, the emotion analysis model mainly consists of an output layer, a hidden layer, a recurrent layer and an input layer, where multiple nodes can be unrolled along the time axis to jointly form the hidden layer and the recurrent layer. The schematic of each node is shown in the right half of FIG. 5: each node has two inputs, namely the input value Xt of the network at the current time and the output value St-1 of the network at the previous time, and two outputs, namely the output value Ot at the current time and the state St of the network at the current time. The calculation of the different layers in FIG. 5 can refer to the following equations (1) to (4), respectively:
St=f(UxsXt+WssSt-1);(1)
Ot=g(WsoSt);(2)
Ut=UxsXt+WssSt-1;(3)
Vt=WsoSt;(4)
The long short-term memory network adds several gates on top of the recurrent neural network, where the gates are used to control magnitudes and the flow of information. The long short-term memory network introduces three gates in total: an input gate i, a forget gate f and an output gate o. The input gate determines how much information can flow into the cell node, the forget gate determines how much information in the cell node can be forgotten, and the output gate determines how much information the cell node outputs. The cell node is used to memorize the previous state, and a sigmoid function may be used so that the values of i, f and o lie in (0, 1).
Each node unrolled along the time axis in the long short-term memory network has three inputs, namely the input value Xt of the network at the current time, the hidden state output value ht-1 of the LSTM at the previous time and the cell state Ct-1 at the previous time, and each node has two outputs, namely the hidden state output value ht of the LSTM at the current time and the cell state Ct at the current time. Taking the input gate i, forget gate f, output gate o, cell state C and hidden state h as examples, the calculation of it, ft, Ot, Ct and ht can refer to the following formulas (5) to (9):
it=sigmoid(Wxixt+Whiht-1+bi);(5)
ft=sigmoid(Wxfxt+Whfht-1+bf);(6)
Ot=sigmoid(Wxoxt+Whoht-1+bo);(7)
Ct=ft⊙Ct-1+it⊙tanh(Wxcxt+Whcht-1+bc);(8)
ht=Ot⊙tanh(Ct);(9)
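For illustration, the following is a minimal NumPy sketch of one LSTM node's computation following formulas (5) to (9); the weight shapes, initialisation and dimensions are illustrative assumptions.

```python
# Minimal sketch of one LSTM step: input gate, forget gate, output gate,
# cell state and hidden state, mirroring formulas (5)-(9).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, W, b):
    i_t = sigmoid(W["xi"] @ x_t + W["hi"] @ h_prev + b["i"])        # (5) input gate
    f_t = sigmoid(W["xf"] @ x_t + W["hf"] @ h_prev + b["f"])        # (6) forget gate
    o_t = sigmoid(W["xo"] @ x_t + W["ho"] @ h_prev + b["o"])        # (7) output gate
    C_tilde = np.tanh(W["xc"] @ x_t + W["hc"] @ h_prev + b["c"])    # candidate cell state
    C_t = f_t * C_prev + i_t * C_tilde                              # (8) cell state
    h_t = o_t * np.tanh(C_t)                                        # (9) hidden state
    return h_t, C_t

# Example with assumed input size 3 and hidden size 4
rng = np.random.default_rng(0)
W = {k: rng.standard_normal((4, 3 if k.startswith("x") else 4))
     for k in ["xi", "hi", "xf", "hf", "xo", "ho", "xc", "hc"]}
b = {k: np.zeros(4) for k in ["i", "f", "o", "c"]}
h_t, C_t = lstm_step(rng.standard_normal(3), np.zeros(4), np.zeros(4), W, b)
```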
the gate weight parameter is denoted as w, and the values of the random w can be denoted as w1, w2, … and w 3. It will be appreciated that for each gate weight parameter value, a loss function value can be calculated. When the loss function value is denoted by L and w and L are combined based on the correspondence relationship, they can be denoted by (w0, L0), (w1, L1), …, (wn, Ln). It can be understood that, taking n as 3 as an example, each of the 3 sets of random values of the gate weight parameters corresponds to one loss function value, and there will be one maximum loss function value and two smaller values in the 3 loss function values. It can also be understood that adjusting the values of the gate weight parameters should make the loss function values as small as possible, so that the adjustment can be performed not in the direction of the value of the gate weight parameter corresponding to the maximum loss function value, but in the direction of the values of the gate weight parameters corresponding to the two smaller values. In conclusion, how the value of the gate weight parameter is adjusted can be guided.
According to the method provided by the embodiment of the application, since the model overcomes the problem that the temporal order of the input sequence cannot be preserved, deeper relationships between words can be mined, making subsequent prediction with the model more accurate. In addition, the pseudo-gradient descent method explores a reasonable classification threshold region in the labeled training texts and defines a new loss function based on that region, so the model is updated more selectively and efficiently during training; this avoids the failure of parameter adjustment caused by uncertainties such as vanishing and exploding gradients in the gradient descent method, and further stabilizes model training.
In some embodiments, n is 3. Accordingly, the embodiment of the present application does not specifically limit the manner of adjusting the values of the gate weight parameters in the t-th training process according to the loss function values of the emotion analysis model under each group of values, which includes but is not limited to: determining a maximum loss function value from the loss function values of the emotion analysis model under each group of values, taking the maximum loss function value and its corresponding values as a first space point, and taking each of the remaining two groups of values and its corresponding loss function value as a second space point and a third space point respectively; determining a first gradient descent direction according to the first and second space points, determining a second gradient descent direction according to the first and third space points, and calculating a first step length and a second step length according to the loss function values corresponding to each group of values; and adjusting the value of the gate weight parameter in the t-th training process according to the first gradient descent direction, the second gradient descent direction, the first step length and the second step length.
Specifically, (w0, L0), (w1, L1) and (w2, L2) are referred to as the 3 points x0, x1 and x2 respectively, where wi represents a value of the gate weight parameter. Which gate weight parameter it is is not specifically limited in this embodiment; it may belong to the input gate, the forget gate or the output gate. If the maximum loss function value is L0, the point with the maximum loss function value is x0, which is the first space point. Of the remaining two groups of values and their corresponding loss function values, namely x1 and x2, x1 may be taken as the second space point and x2 as the third space point.
Since the loss function values corresponding to x1 and x2 are both smaller than that corresponding to x0, the direction from x0 to x1 can be determined to be a direction in which the loss function value decreases, that is, the first gradient descent direction. Similarly, the direction from x0 to x2 is also a direction in which the loss function value decreases, that is, the second gradient descent direction. The step length is the adjustment magnitude of the gate weight parameter value: the first step length may be calculated from the maximum loss function value and the loss function value corresponding to the second space point, and the second step length from the maximum loss function value and the loss function value corresponding to the third space point, for example as the difference between the two; this is not specifically limited in this embodiment of the present application.
In summary, two adjustment directions and two step sizes of the gate weight parameter values can be determined. In the actual implementation process, one of the adjustment directions and one of the step lengths can be selected, and the value of the gate weight parameter in the t training process is adjusted. Alternatively, the first gradient descent direction and the first step length may be used as one direction vector, the second gradient descent direction and the second step length may be used as another direction vector, and the two direction vectors are combined to obtain a third gradient descent direction and a third step length, so that the value of the gate weight parameter in the t-th training process is adjusted based on the third gradient descent direction and the third step length, which is not specifically limited in this embodiment of the present application. It is understood that, since the first gradient descent direction and the second gradient descent direction are both directions in which the loss function value is caused to descend, the combined direction of the two is also a direction in which the loss function value is caused to descend, that is, the third gradient descent direction.
The more the loss function value decreases in a direction, the larger the corresponding step length should be set. Thus, the first step length and the second step length can be calculated by the following equations (10) and (11), respectively:
p1=(L0-L1)/((L0-L1)+(L0-L2));(10)
p2=(L0-L2)/((L0-L1)+(L0-L2));(11)
The first gradient descent direction is denoted d1 = x0x1, that is, the direction from x0 to x1. The second gradient descent direction is denoted d2 = x0x2, that is, the direction from x0 to x2. The merging process can refer to the following formula (12):
x0x3=d1×p1+d2×p2;(12)
where x0x3 is the merged third gradient descent direction and third step length, so that the adjusted gate weight parameter value x3 can be determined, along with its corresponding loss function value L3. The values of the gate weight parameters can be adjusted in each training process based on the above procedure until the training stop condition is reached, for example when the accuracy of the model becomes stable.
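The following is a minimal NumPy sketch of one pseudo-gradient descent step with n = 3 as described above; the loss function here is a toy stand-in, whereas in the patent it is the (corrected) loss of the emotion analysis model, and the quadratic example is purely illustrative.

```python
# Minimal sketch of one pseudo-gradient step: move the worst of three candidate
# gate-weight values toward the two better ones, following formulas (10)-(12).
import numpy as np

def pseudo_gradient_step(candidates, loss_fn):
    # candidates: array of shape (3, dim) holding three groups of gate weight values
    losses = np.array([loss_fn(w) for w in candidates])
    worst = int(np.argmax(losses))                      # first space point x0
    better = [i for i in range(3) if i != worst]        # second and third space points
    x0, x1, x2 = candidates[worst], candidates[better[0]], candidates[better[1]]
    L0, L1, L2 = losses[worst], losses[better[0]], losses[better[1]]
    d1, d2 = x1 - x0, x2 - x0                           # first / second descent directions
    p1 = (L0 - L1) / ((L0 - L1) + (L0 - L2))            # formula (10)
    p2 = (L0 - L2) / ((L0 - L1) + (L0 - L2))            # formula (11)
    return x0 + p1 * d1 + p2 * d2                       # formula (12): adjusted value x3

# Example with a toy quadratic loss over a 2-dimensional weight (assumption)
toy_loss = lambda w: float(np.sum(w ** 2))
rng = np.random.default_rng(1)
new_w = pseudo_gradient_step(rng.standard_normal((3, 2)), toy_loss)
```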
According to the method provided by the embodiment of the application, the pseudo-gradient descent method explores a reasonable classification threshold region in the labeled training texts and customizes a new loss function based on that region, so the model is updated more selectively and efficiently during training; this avoids the failure of parameter adjustment caused by uncertainties such as vanishing and exploding gradients in the gradient descent method, so model training becomes increasingly stable.
In some embodiments, the method for adjusting the value of the gate weight parameter in the tth training process according to the first gradient descent direction, the second gradient descent direction, the first step length, and the second step length is not specifically limited in the embodiments of the present application, and includes but is not limited to: integrating the first gradient descending direction and the second gradient descending direction to obtain the gradient descending direction of the gate weight parameter in the t training process; and adjusting the value of the gate weight parameter in the t training process according to the gradient descending direction of the gate weight parameter in the t training process, the gradient descending direction of the gate weight parameter in the t-1 training process, the first step length and the second step length.
The above process mainly considers the uncertainty of random values in each training process. Therefore, in the embodiment of the application, the gradient descending direction of the gate weight parameter in the t training process can be determined, and then the value of the gate weight parameter is adjusted by combining the gradient descending direction of the gate weight parameter in the t-1 training process. In an actual implementation process, the gradient descent direction of the gate weight parameter in the t-th training process may be obtained through the above integration process, or may be calculated based on a gradient descent method, which is not specifically limited in this embodiment of the present application.
Taking the example that the gradient descending direction of the gate weight parameter in the t-th training process is calculated based on the gradient descending method, correspondingly, the embodiment of the present application does not specifically limit the manner of adjusting the value of the gate weight parameter in the t-th training process according to the gradient descending direction of the gate weight parameter in the t-th training process, the gradient descending direction of the gate weight parameter in the t-1-th training process, the first step length, and the second step length, and includes but is not limited to: determining the opposite direction of the gradient descending direction of the gate weight parameter in the t training process; combining the opposite direction with the gradient descending direction of the gate weight parameter in the t-1 training process, and combining the first step length with the second step length; and adjusting the value of the gate weight parameter in the tth training process based on the combined result.
According to the method provided by the embodiment of the application, the pseudo-gradient descent method explores a reasonable classification threshold region in the labeled training texts and customizes a new loss function based on that region, so the model is updated more selectively and efficiently during training; this avoids the failure of parameter adjustment caused by uncertainties such as vanishing and exploding gradients in the gradient descent method, so model training becomes increasingly stable. In addition, the value of the gate weight parameter can be adjusted using the opposite of the gradient descent direction of the gate weight parameter in the t-th training process, so the descent direction can be corrected and the accuracy of the training process ensured.
In the related art, the loss function of cross entropy is generally expressed in the form of the following equation (13):
L = -(1/n)·Σ_{i=1..n} [y_i_true·log(y_i_pred) + (1 - y_i_true)·log(1 - y_i_pred)];(13)
It will be appreciated that with this loss function it is difficult to distinguish training texts whose repayment willingness type is "neutral". Thus, the loss function can be modified. In some embodiments, a method of calculating the loss function is provided; taking the application of the method to the server 102 in FIG. 1 as an example, the loss function is given by the following formula (14):
L = -(1/n)·Σ_{i=1..n} λ(y_i_true, y_i_pred)·[y_i_true·log(y_i_pred) + (1 - y_i_true)·log(1 - y_i_pred)];(14)
where λ(y_i_true, y_i_pred) represents a correction term, y_i_true is determined based on the classification label of the i-th training sample, y_i_pred is determined based on the predicted classification result of the i-th training sample, and n represents the total number of training samples.
The calculation process of the correction term can refer to the following formula (15):
λ(y_i_true, y_i_pred) = 1 - μ(y_i_pred - m) when y_i_true = 1, and λ(y_i_true, y_i_pred) = 1 - μ(1 - m)·μ(0.9 - y_i_pred - m) when y_i_true = 0;(15)
where m is a threshold set for distinguishing training texts of the "neutral" repayment willingness type, with 0 < m < 1, and μ(x) denotes a state update equation: when x > 0, μ(x) = 1; when x = 0, μ(x) = 0.5; when x < 0, μ(x) = 0.
For the i-th training text, if the classification label corresponding to the training text is positive, y_i_true is 1. In this case, λ(1, y_i_pred) = 1 - μ(y_i_pred - m). If y_i_pred, the probability of the positive class in the emotion analysis model's predicted classification result, is greater than m, the correction term is 0. The loss function value then reaches its minimum, the gate weight parameter values need no longer be updated, and training of the emotion analysis model can stop. When y_i_pred is not greater than m, the correction term λ(1, y_i_pred) = 1. The loss function value has not yet reached its minimum, and the gate weight parameter values can continue to be updated, that is, training of the emotion analysis model continues.
If the classification label corresponding to the training text is negative, y_i_true is 0. In this case, λ(0, y_i_pred) = 1 - μ(1 - m)·μ(0.9 - y_i_pred - m). If y_i_pred is less than 0.9 - m, the correction term is 0, the loss function value reaches its minimum, the gate weight parameter values need no longer be updated, and training of the emotion analysis model can stop. If y_i_pred is not less than 0.9 - m, the correction term λ(0, y_i_pred) = 1, the loss function value has not yet reached its minimum, and the gate weight parameter values can continue to be updated, that is, training of the emotion analysis model continues.
According to the method provided by the embodiment of the application, because the hard-to-distinguish neutral repayment willingness type is taken into account, a correction term is added to the loss function, and the preset threshold m is included in the correction term for distinguishing it, so the model training process can be updated more selectively and efficiently, reach the optimum as soon as possible, and improve training efficiency.
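For illustration, the following is a minimal NumPy sketch of the corrected loss in formulas (14) and (15), assuming y_i_pred is the predicted probability of the positive class and using the piecewise correction term described above; the threshold value m used in the example is an assumption.

```python
# Minimal sketch of the corrected cross-entropy with the correction term λ and
# state update equation μ, following formulas (14)-(15).
import numpy as np

def mu(x: float) -> float:
    # state update equation: 1 if x > 0, 0.5 if x == 0, 0 if x < 0
    return 1.0 if x > 0 else (0.5 if x == 0 else 0.0)

def correction(y_true: float, y_pred: float, m: float) -> float:
    if y_true == 1:
        return 1.0 - mu(y_pred - m)                      # 0 once y_pred exceeds m
    return 1.0 - mu(1.0 - m) * mu(0.9 - y_pred - m)      # 0 once y_pred falls below 0.9 - m

def corrected_cross_entropy(y_true, y_pred, m=0.6):
    eps = 1e-12
    terms = [
        correction(t, p, m) * (t * np.log(p + eps) + (1 - t) * np.log(1 - p + eps))
        for t, p in zip(y_true, y_pred)
    ]
    return -np.mean(terms)

# Example with hypothetical labels and predicted positive-class probabilities
loss = corrected_cross_entropy([1, 0, 1], [0.8, 0.2, 0.55], m=0.6)
```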
In some embodiments, referring to FIG. 6, another collection urging method is provided. Taking the application of the method to the server 102 in FIG. 1 as an example, the method includes the following steps:
602. Acquire repayment item information to be collected, input the repayment item information into a trained collection mode selection model, and output a collection mode type, where the collection mode type is machine collection or manual collection.
The process of this step can refer to the contents of the above embodiments, and is not described herein again.
604. If the collection mode type is machine collection, initiate a machine collection call, and verify the identity and repayment item with the call object in the machine collection call.
The process of this step can refer to the contents of the above embodiments, and is not described herein again.
606. If the verification is correct, broadcast a query voice for confirming repayment willingness in the machine collection call, acquire a first reply voice of the call object to the query voice in the machine collection call, convert the first reply voice into a first reply text, input the first reply text into the trained emotion analysis model, and output a first repayment willingness type, where the first repayment willingness type is positive, neutral or negative.
The process of this step can refer to the contents of the above embodiments, and is not described herein again.
608. If the first repayment willingness type is negative, broadcast a light-pressure collection voice in the machine collection call, acquire a second reply voice of the call object to the light-pressure collection voice, convert the second reply voice into a second reply text, input the second reply text into the emotion analysis model, and output a second repayment willingness type.
610. If the second repayment willingness type is negative, broadcast a heavy-pressure collection voice in the machine collection call, acquire a third reply voice of the call object to the heavy-pressure collection voice, convert the third reply voice into a third reply text, input the third reply text into the emotion analysis model, and output a third repayment willingness type.
612. If the third repayment willingness type is negative, broadcast a prompt voice indicating transfer to a human agent in the machine collection call, and transfer the call to a manual collection call.
The light-pressure collection voice may be, for example, "Repayment is an obligation you should fulfil; please repay on time", and the heavy-pressure collection voice may be "Please repay on time, otherwise you will bear the corresponding legal responsibility". It can be understood that after one inquiry, one light-pressure collection and one heavy-pressure collection, if the call object still responds negatively to repayment, it can be determined that automatic machine collection is unlikely to succeed for this call object. Therefore, the call can be transferred to a human agent for collection. Combining the above process, the actual collection flow can refer to FIG. 7.
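The following is a minimal sketch of this escalating dialogue flow (steps 606 to 612); the classify, broadcast and transfer_to_agent helpers are hypothetical placeholders for the emotion analysis model, the voice broadcasting system and the agent transfer, respectively.

```python
# Minimal sketch: escalate from inquiry to light pressure to heavy pressure,
# stopping as soon as the classified repayment willingness is no longer negative.
NEGATIVE = "negative"

def collection_dialogue(classify, broadcast, transfer_to_agent):
    stages = [
        "query voice confirming repayment willingness",   # step 606
        "light-pressure collection voice",                 # step 608
        "heavy-pressure collection voice",                 # step 610
    ]
    for stage in stages:
        reply_text = broadcast(stage)            # play the voice, return the reply as text
        if classify(reply_text) != NEGATIVE:     # positive or neutral: stop escalating
            return "continue machine collection"
    return transfer_to_agent()                   # still negative after heavy pressure (step 612)
```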
According to the method provided by the embodiment of the application, in the intelligent collection process, the customer's speech can be converted into text through speech conversion, text emotion analysis can be performed with the long short-term memory model, and the customer's repayment willingness can be obtained through scripts such as inquiry, broadcast and explanation. If the customer's repayment attitude is negative, further script adjustments such as light and heavy pressure can be adopted to urge the customer to repay. Because collection can be carried out progressively, the collection effect can be improved.
It should be understood that, although the steps in the flowcharts related to the embodiments are shown in sequence as indicated by the arrows, the steps are not necessarily executed in sequence as indicated by the arrows. The steps are not performed in the exact order shown and described, and may be performed in other orders, unless explicitly stated otherwise. Moreover, at least a part of the steps in the flowcharts related to the above embodiments may include multiple steps or multiple stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of performing the steps or stages is not necessarily sequential, but may be performed alternately or alternately with other steps or at least a part of the steps or stages in other steps.
Based on the same inventive concept, the embodiment of the application also provides a collection urging apparatus for implementing the above collection urging method. The solution to the problem provided by the apparatus is similar to that described in the above method, so for the specific limitations in one or more embodiments of the collection urging apparatus provided below, reference may be made to the limitations on the collection urging method above, which are not repeated here.
In one embodiment, as shown in FIG. 8, there is provided a collection urging apparatus, comprising: an obtaining module 802, an initiating module 804, a prediction module 806 and a broadcasting module 808, wherein:
an obtaining module 802, configured to acquire repayment item information to be collected, input the repayment item information into a trained collection mode selection model, and output a collection mode type, where the collection mode type is machine collection or manual collection;
an initiating module 804, configured to initiate a machine collection call when the collection mode type is machine collection, and verify the identity and repayment item with the call object in the machine collection call;
a prediction module 806, configured to broadcast a query voice for confirming repayment willingness in the machine collection call when the verification is correct, acquire a first reply voice of the call object to the query voice in the machine collection call, convert the first reply voice into a first reply text, input the first reply text into the trained emotion analysis model, and output a first repayment willingness type, where the first repayment willingness type is positive, neutral or negative;
and a broadcasting module 808, configured to broadcast a collection urging voice for urging repayment in the machine collection call based on the first repayment willingness type.
In some embodiments, the apparatus further comprises: the method comprises the steps of obtaining a submodule, an output submodule and an adjusting submodule;
the acquisition submodule is used for acquiring the training text and the corresponding classification label, where the classification label is one of three classification results: positive, neutral or negative;
the output submodule is used for inputting the training text into the emotion analysis model and outputting a prediction classification result of the training text;
and the adjusting submodule is used for adjusting parameters in the emotion analysis model based on the difference between the prediction classification result and the classification label until the training is stopped, so that the trained emotion analysis model is obtained.
In some embodiments, the sentiment analysis model includes a gate structure, and the parameter is a gate weight parameter of the gate structure; the adjusting submodule comprises an obtaining unit and an adjusting unit;
the obtaining unit is used for randomly obtaining n groups of values of the gate weight parameters and loss function values corresponding to the emotion analysis model under each group of values in the tth training process, wherein t is a positive integer not less than 1, and n is a positive integer not less than 2;
and the adjusting unit is used for adjusting the value of the gate weight parameter in the t training process according to the n groups of values and the loss function value corresponding to the emotion analysis model under each group of values.
In some embodiments, n is 3; the adjusting unit comprises a determining subunit, a calculating subunit and an adjusting subunit;
the determining subunit is used for determining a maximum loss function value from the loss function values corresponding to the emotion analysis models under each group of values, taking the maximum loss function value and the value corresponding to the maximum loss function value as a first space point, and taking each group of values and the loss function value corresponding to each group of values in the remaining two groups of values as a second space point and a third space point respectively;
the calculating subunit is configured to determine a first gradient descent direction according to the first spatial point and the second spatial point, determine a second gradient descent direction according to the first spatial point and the third spatial point, and calculate a first step length and a second step length according to the loss function value corresponding to each group of values;
and the adjusting subunit is used for adjusting the value of the gate weight parameter in the tth training process according to the first gradient descending direction, the second gradient descending direction, the first step length and the second step length.
In some embodiments, the adjusting subunit is configured to integrate the first gradient descent direction and the second gradient descent direction to obtain a gradient descent direction of the gate weight parameter in the t-th training process; and adjusting the value of the gate weight parameter in the t training process according to the gradient descending direction of the gate weight parameter in the t training process, the gradient descending direction of the gate weight parameter in the t-1 training process, the first step length and the second step length.
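The three-point adjustment described above can be illustrated with the following hedged NumPy sketch: in each round, n = 3 candidate value sets of the gate weight parameters are drawn, the worst (highest-loss) point serves as the first spatial point, two descent directions point from it toward the two better points, and the combined direction is blended with the direction from round t-1. The way candidates are sampled, the step-length rule and the blending weights are assumptions not specified in the text.

```python
import numpy as np

def pseudo_gradient_step(loss_fn, dim, prev_direction, rng, scale=0.1):
    """One round (the t-th training pass) of the three-point pseudo gradient update.

    loss_fn: maps a gate-weight vector to its loss value.
    dim:     number of gate weight parameters.
    prev_direction: combined direction from round t-1 (zeros for t = 1).
    """
    # Randomly draw n = 3 candidate value sets and evaluate the loss for each.
    candidates = [rng.normal(size=dim) for _ in range(3)]
    losses = [loss_fn(w) for w in candidates]

    # Worst point = first spatial point; the other two are the second and third points.
    order = np.argsort(losses)
    p1, l1 = candidates[order[-1]], losses[order[-1]]
    p2, l2 = candidates[order[0]], losses[order[0]]
    p3, l3 = candidates[order[1]], losses[order[1]]

    # Two descent directions, from the worst point toward each better point.
    d1, d2 = p2 - p1, p3 - p1
    # Step lengths derived from the loss gaps (illustrative choice only).
    s1, s2 = scale * (l1 - l2), scale * (l1 - l3)

    # Integrate the two directions, then blend with the previous round's direction.
    direction = s1 * d1 + s2 * d2
    direction = 0.9 * direction + 0.1 * prev_direction
    return p1 + direction, direction   # adjusted gate weights and the direction to reuse
```

A caller would keep the returned direction between rounds, so that round t can reuse the round t-1 direction as described above.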
In some embodiments, the loss function used by the emotion analysis model involved in the device is:
(The loss function is given in the original only as a formula image and is not reproduced here; it is a loss over n training samples that incorporates a correction term λ(y_i_true, y_i_pred).)
where λ(y_i_true, y_i_pred) denotes the correction term, y_i_true is determined from the classification label of the i-th training sample, y_i_pred is determined from the predicted classification result of the i-th training sample, and n denotes the total number of training samples.
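Because the formula itself survives only as an image, the following Python sketch shows merely one plausible shape consistent with the surrounding description: an average per-sample loss scaled by a correction term λ(y_i_true, y_i_pred) that applies the preset threshold m mentioned later. Both the cross-entropy base loss and the concrete form of λ are assumptions, not the patented formula.

```python
import numpy as np

def correction_term(y_true, y_pred, m=0.5):
    # λ(y_i_true, y_i_pred): an assumed form that up-weights a sample when the
    # predicted probability of its labelled class falls below the threshold m.
    return 2.0 if y_pred[int(y_true)] < m else 1.0

def corrected_loss(y_true_all, y_pred_all, m=0.5):
    """Average per-sample cross entropy, each term scaled by the correction term."""
    total = 0.0
    n = len(y_true_all)
    for y_true, y_pred in zip(y_true_all, y_pred_all):
        ce = -np.log(y_pred[int(y_true)] + 1e-12)
        total += correction_term(y_true, y_pred, m) * ce
    return total / n
```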
With the device provided by this embodiment, the repayment willingness of overdue customers can be recognized during collection through technical means such as speech recognition and emotion analysis, and multiple conversation scripts, such as explanation, inquiry and pressing scripts, are prepared for different scenarios, so the efficiency and success rate of intelligent collection can be improved.
Secondly, because the model overcomes the problem of losing the temporal order of the input sequence, deeper relations between words can be mined, making subsequent predictions of the model more accurate.
Moreover, the pseudo gradient descent method explores a reasonable classification threshold region in the labeled training text and defines a new loss function based on that region, so the model is updated more selectively and efficiently during training; this avoids parameter-adjustment failures caused by the uncertainties of ordinary gradient descent, such as vanishing or exploding gradients, and makes model training more stable.
Finally, because the hard-to-distinguish "neutral" repayment willingness type is taken into account, a correction term is added to the loss function, and a preset threshold m is introduced into the correction term to make this distinction; the training process is therefore updated more selectively and efficiently, the optimum is reached sooner, and training efficiency is improved.
For the specific limitations of the collection device, reference may be made to the limitations of the collection method above, which are not repeated here. All or part of the modules in the collection device may be implemented in software, in hardware, or in a combination of the two. The modules may be embedded in, or independent of, a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 9. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing variable data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a collection method.
It will be appreciated by those skilled in the art that the configuration shown in fig. 9 is a block diagram of only a portion of the configuration associated with the present application, and is not intended to limit the computing device to which the present application may be applied, and that a particular computing device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring information on a repayment item to be collected, inputting the repayment item information into a trained collection mode selection model, and outputting a collection mode type, wherein the collection mode type is machine collection or manual collection;
if the collection mode type is machine collection, initiating a machine collection call, and verifying the identity and the repayment item with the call object in the machine collection call;
if the answer is verified to be correct, broadcasting an inquiry voice for confirming repayment willingness in the machine collection call, acquiring a first reply voice of the call object to the inquiry voice, converting the first reply voice into a first reply text, inputting the first reply text into the trained emotion analysis model, and outputting a first repayment willingness type, wherein the first repayment willingness type is positive, neutral or negative;
and based on the first repayment willingness type, broadcasting a collection voice urging repayment in the machine collection call.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
acquiring a training text and its corresponding classification label, wherein the classification label is one of three classification results: positive, neutral or negative;
inputting the training text into the emotion analysis model, outputting a predicted classification result of the training text, and adjusting the parameters of the emotion analysis model based on the difference between the predicted classification result and the classification label until a training stop condition is met, thereby obtaining the trained emotion analysis model.
In one embodiment, the emotion analysis model comprises a gate structure, and the parameter is a gate weight parameter of the gate structure; the processor, when executing the computer program, further performs the steps of:
for the t training process, randomly obtaining n groups of values of gate weight parameters and loss function values corresponding to the emotion analysis model under each group of values, wherein t is a positive integer not less than 1, and n is a positive integer not less than 2;
and adjusting the value of the gate weight parameter in the t-th training process according to the n groups of values and the loss function value corresponding to the emotion analysis model under each group of values.
In one embodiment, n is 3; the processor, when executing the computer program, further performs the steps of:
determining a maximum loss function value from the loss function values corresponding to the emotion analysis models under each group of values, taking the maximum loss function value and the value corresponding to the maximum loss function value as a first space point, and taking each group of values and the loss function value corresponding to each group of values in the remaining two groups of values as a second space point and a third space point respectively;
determining a first gradient descending direction according to the first space point and the second space point, determining a second gradient descending direction according to the first space point and the third space point, and calculating a first step length and a second step length according to the loss function values corresponding to each group of values;
and adjusting the value of the gate weight parameter in the t training process according to the first gradient descending direction, the second gradient descending direction, the first step length and the second step length.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
integrating the first gradient descending direction and the second gradient descending direction to obtain the gradient descending direction of the gate weight parameter in the t training process;
and adjusting the value of the gate weight parameter in the t training process according to the gradient descending direction of the gate weight parameter in the t training process, the gradient descending direction of the gate weight parameter in the t-1 training process, the first step length and the second step length.
In one embodiment, when the processor executes the computer program, the loss function used by the emotion analysis model is:
(The loss function is given in the original only as a formula image and is not reproduced here; it is a loss over n training samples that incorporates a correction term λ(y_i_true, y_i_pred).)
where λ(y_i_true, y_i_pred) denotes the correction term, y_i_true is determined from the classification label of the i-th training sample, y_i_pred is determined from the predicted classification result of the i-th training sample, and n denotes the total number of training samples.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring information on a repayment item to be collected, inputting the repayment item information into a trained collection mode selection model, and outputting a collection mode type, wherein the collection mode type is machine collection or manual collection;
if the collection mode type is machine collection, initiating a machine collection call, and verifying the identity and the repayment item with the call object in the machine collection call;
if the answer is verified to be correct, broadcasting an inquiry voice for confirming repayment willingness in the machine collection call, acquiring a first reply voice of the call object to the inquiry voice, converting the first reply voice into a first reply text, inputting the first reply text into the trained emotion analysis model, and outputting a first repayment willingness type, wherein the first repayment willingness type is positive, neutral or negative;
and based on the first repayment willingness type, broadcasting a collection voice urging repayment in the machine collection call.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring a training text and its corresponding classification label, wherein the classification label is one of three classification results: positive, neutral or negative;
inputting the training text into the emotion analysis model, outputting a predicted classification result of the training text, and adjusting the parameters of the emotion analysis model based on the difference between the predicted classification result and the classification label until a training stop condition is met, thereby obtaining the trained emotion analysis model.
In one embodiment, the emotion analysis model comprises a gate structure, and the parameter is a gate weight parameter of the gate structure; the computer program when executed by the processor further realizes the steps of:
for the t training process, randomly obtaining n groups of values of gate weight parameters and loss function values corresponding to the emotion analysis model under each group of values, wherein t is a positive integer not less than 1, and n is a positive integer not less than 2;
and adjusting the value of the gate weight parameter in the t-th training process according to the n groups of values and the loss function value corresponding to the emotion analysis model under each group of values.
In one embodiment, n is 3; the computer program when executed by the processor further realizes the steps of:
determining a maximum loss function value from the loss function values corresponding to the emotion analysis models under each group of values, taking the maximum loss function value and the value corresponding to the maximum loss function value as a first space point, and taking each group of values and the loss function value corresponding to each group of values in the remaining two groups of values as a second space point and a third space point respectively;
determining a first gradient descending direction according to the first space point and the second space point, determining a second gradient descending direction according to the first space point and the third space point, and calculating a first step length and a second step length according to the loss function values corresponding to each group of values;
and adjusting the value of the gate weight parameter in the t training process according to the first gradient descending direction, the second gradient descending direction, the first step length and the second step length.
In one embodiment, the computer program when executed by the processor further performs the steps of:
integrating the first gradient descending direction and the second gradient descending direction to obtain the gradient descending direction of the gate weight parameter in the t training process;
and adjusting the value of the gate weight parameter in the t training process according to the gradient descending direction of the gate weight parameter in the t training process, the gradient descending direction of the gate weight parameter in the t-1 training process, the first step length and the second step length.
In one embodiment, when the computer program is executed by the processor, the loss function used by the emotion analysis model is:
(The loss function is given in the original only as a formula image and is not reproduced here; it is a loss over n training samples that incorporates a correction term λ(y_i_true, y_i_pred).)
where λ(y_i_true, y_i_pred) denotes the correction term, y_i_true is determined from the classification label of the i-th training sample, y_i_pred is determined from the predicted classification result of the i-th training sample, and n denotes the total number of training samples.
It should be noted that, the user information (including but not limited to user device information, user personal information, etc.) and data (including but not limited to data for analysis, stored data, presented data, etc.) referred to in the present application are information and data authorized by the user or sufficiently authorized by each party.
In an embodiment, a computer program product is provided, comprising a computer program which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods in the above embodiments may be implemented by a computer program instructing the relevant hardware; the computer program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, database or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory may include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the various embodiments provided herein may include at least one of relational and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, without limitation, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic devices based on quantum computing, and the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be regarded as falling within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the application. It should be noted that a person skilled in the art may make several variations and improvements without departing from the concept of the present application, and these all fall within the scope of protection of the application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A collection method, comprising:
acquiring information on a repayment item to be collected, inputting the repayment item information into a trained collection mode selection model, and outputting a collection mode type, wherein the collection mode type is machine collection or manual collection;
if the collection mode type is machine collection, initiating a machine collection call, and verifying the identity and the repayment item with a call object in the machine collection call;
if the answer is verified to be correct, broadcasting an inquiry voice for confirming repayment willingness in the machine collection call, acquiring a first reply voice of the call object to the inquiry voice, converting the first reply voice into a first reply text, inputting the first reply text into a trained emotion analysis model, and outputting a first repayment willingness type, wherein the first repayment willingness type is positive, neutral or negative;
and based on the first repayment willingness type, broadcasting a collection voice urging repayment in the machine collection call.
2. The method of claim 1, wherein the emotion analysis model training process comprises:
acquiring a training text and its corresponding classification label, wherein the classification label is one of three classification results: positive, neutral or negative;
inputting the training text into an emotion analysis model, outputting a predicted classification result of the training text, and adjusting parameters of the emotion analysis model based on the difference between the predicted classification result and the classification label until a training stop condition is met, thereby obtaining the trained emotion analysis model.
3. The method according to claim 2, wherein the emotion analysis model comprises a gate structure, and the parameter is a gate weight parameter of the gate structure; the adjusting parameters in the emotion analysis model based on the difference between the predicted classification result and the classification label comprises:
for the t training process, randomly obtaining n groups of values of the gate weight parameters and loss function values corresponding to the emotion analysis model under each group of values, wherein t is a positive integer not less than 1, and n is a positive integer not less than 2;
and adjusting the value of the gate weight parameter in the t-th training process according to the n groups of values and the loss function value corresponding to the emotion analysis model under each group of values.
4. The method of claim 3, wherein n is 3; and the adjusting the value of the gate weight parameter in the t-th training process according to the n groups of values and the loss function value corresponding to the emotion analysis model under each group of values comprises:
determining a maximum loss function value from the loss function values corresponding to the emotion analysis models under each group of values, taking the maximum loss function value and the value corresponding to the maximum loss function value as a first space point, and taking each group of values and the loss function value corresponding to each group of values in the remaining two groups of values as a second space point and a third space point respectively;
determining a first gradient descending direction according to the first space point and the second space point, determining a second gradient descending direction according to the first space point and the third space point, and calculating a first step length and a second step length according to the loss function value corresponding to each group of values;
and adjusting the value of the gate weight parameter in the t training process according to the first gradient descending direction, the second gradient descending direction, the first step length and the second step length.
5. The method of claim 4, wherein the adjusting the gate weight parameter value during the t-th training according to the first gradient descent direction, the second gradient descent direction, the first step size, and the second step size comprises:
integrating the first gradient descending direction and the second gradient descending direction to obtain the gradient descending direction of the gate weight parameter in the t training process;
and adjusting the value of the gate weight parameter in the t training process according to the gradient descending direction of the gate weight parameter in the t training process, the gradient descending direction of the gate weight parameter in the t-1 training process, the first step length and the second step length.
6. The method according to any of claims 3 to 5, characterized in that the loss function used by the emotion analysis model is:
(The loss function is given in the original only as a formula image and is not reproduced here; it is a loss over n training samples that incorporates a correction term λ(y_i_true, y_i_pred).)
where λ(y_i_true, y_i_pred) denotes the correction term, y_i_true is determined from the classification label of the i-th training sample, y_i_pred is determined from the predicted classification result of the i-th training sample, and n denotes the total number of training samples.
7. A collection device, comprising:
the obtaining module, configured to obtain information on a repayment item to be collected, input the repayment item information into a trained collection mode selection model, and output a collection mode type, wherein the collection mode type is machine collection or manual collection;
the initiating module, configured to initiate a machine collection call when the collection mode type is machine collection, and verify the identity and the repayment item with the call object in the machine collection call;
the prediction module, configured to, when the answer is verified to be correct, broadcast an inquiry voice for confirming repayment willingness in the machine collection call, acquire a first reply voice of the call object to the inquiry voice, convert the first reply voice into a first reply text, input the first reply text into a trained emotion analysis model, and output a first repayment willingness type, wherein the first repayment willingness type is positive, neutral or negative;
and the broadcasting module, configured to broadcast, based on the first repayment willingness type, a collection voice urging repayment in the machine collection call.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 6.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program realizes the steps of the method of any one of claims 1 to 6 when executed by a processor.
CN202111469343.9A 2021-12-03 2021-12-03 Urging collection method, urging collection device, computer equipment and computer program product Pending CN114723547A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111469343.9A CN114723547A (en) 2021-12-03 2021-12-03 Urging collection method, urging collection device, computer equipment and computer program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111469343.9A CN114723547A (en) 2021-12-03 2021-12-03 Urging collection method, urging collection device, computer equipment and computer program product

Publications (1)

Publication Number Publication Date
CN114723547A true CN114723547A (en) 2022-07-08

Family

ID=82234638

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111469343.9A Pending CN114723547A (en) 2021-12-03 2021-12-03 Urging collection method, urging collection device, computer equipment and computer program product

Country Status (1)

Country Link
CN (1) CN114723547A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116153330A (en) * 2023-04-04 2023-05-23 杭州度言软件有限公司 Intelligent telephone voice robot control method
CN116153330B (en) * 2023-04-04 2023-06-23 杭州度言软件有限公司 Intelligent telephone voice robot control method

Similar Documents

Publication Publication Date Title
US20210248993A1 (en) Systems and methods for providing automated natural language dialogue with customers
US9437215B2 (en) Predictive video analytics system and methods
CN108021934B (en) Method and device for recognizing multiple elements
CN111488433A (en) Artificial intelligence interactive system suitable for bank and capable of improving field experience
CN111695415A (en) Construction method and identification method of image identification model and related equipment
WO2020077874A1 (en) Method and apparatus for processing question-and-answer data, computer device, and storage medium
US10440187B1 (en) Bootstrapped predicative routing in CRM
CN112989046B (en) Real-time speech prejudging method, device, computer equipment and storage medium
CN116049360A (en) Intelligent voice dialogue scene conversation intervention method and system based on client image
CN109635079A (en) A kind of determination method, apparatus, computer equipment and storage medium that user is intended to
CN114723547A (en) Urging collection method, urging collection device, computer equipment and computer program product
CN115374266A (en) Interaction method, device, equipment and storage medium based on plot interaction node
CN113569017B (en) Model processing method and device, electronic equipment and storage medium
CN115525740A (en) Method and device for generating dialogue response sentence, electronic equipment and storage medium
CN115168554A (en) Callback object return visit method and device, storage medium and computer equipment
CN115146292A (en) Tree model construction method and device, electronic equipment and storage medium
CN113873087A (en) Outbound method, device, computer equipment and storage medium
US20220375468A1 (en) System method and apparatus for combining words and behaviors
Patel et al. Interactive voice response field classifiers
CN115934901A (en) Intelligent conversation method and device, electronic equipment and storage medium
CN117493658A (en) Training method of information push model, information push method and equipment
CN117236384A (en) Training and predicting method and device for terminal machine change prediction model and storage medium
CN114282643A (en) Data processing method and device and computing equipment
CN114969280A (en) Dialog generation method and device, and training method and device of dialog prediction model
Sergiu USING MACHINE LEARNING ALGORITHMS TO DETECT FRAUDS IN TELEPHONE NETWORKS.

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination