CN110263326A - User behavior prediction method, prediction apparatus, storage medium and terminal device - Google Patents
- Publication number
- CN110263326A (application number CN201910422647.6A)
- Authority
- CN
- China
- Prior art keywords: user, evaluation, label, behavior, result
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/205—Parsing
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/63—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
Abstract
The present invention relates to the technical field of information processing, and in particular to a user behavior prediction method, a prediction apparatus, a computer-readable storage medium, and a terminal device. The user behavior prediction method provided by the invention comprises: obtaining speech evaluation information in which a user evaluates a target service, and converting the speech evaluation information into corresponding text information; analyzing the text information to determine the evaluation object of the user's evaluation of the target service and the evaluation result corresponding to the evaluation object; extracting, from the speech evaluation information, the speech segment corresponding to the evaluation result; performing speech analysis on the speech segment to obtain the mood label and tone label corresponding to the evaluation result; and predicting the user's behavior according to the mood label, the tone label, the evaluation object, and the evaluation result. By comprehensively analyzing mood, tone, evaluation object, and evaluation content, the method predicts user behavior completely and accurately, avoids mis-prediction of user behavior, and improves the accuracy of user behavior prediction.
Description
Technical field
The present invention relates to the technical field of information processing, and more particularly to a user behavior prediction method, a prediction apparatus, a computer-readable storage medium, and a terminal device.
Background art
In every industry, customer churn frequently occurs for reasons such as poor service satisfaction. Analyzing the causes of churn in order to reduce or avoid it has therefore become key to the development of every enterprise. Existing churn analysis methods generally predict user behavior by matching user behavior data against behavioral indicators; that is, whether a user is a potential churn user is predicted solely from indicator matching. When the behavioral indicators are set unreasonably, this indicator-matching approach readily produces mis-predictions and significantly reduces the accuracy of user behavior prediction.
Summary of the invention
Embodiments of the present invention provide a user behavior prediction method, a prediction apparatus, a computer-readable storage medium, and a terminal device that can predict user behavior comprehensively and accurately, reduce or avoid mis-prediction, and improve the accuracy of user behavior prediction.
In a first aspect, an embodiment of the present invention provides a user behavior prediction method, comprising:
obtaining speech evaluation information in which a user evaluates a target service, and converting the speech evaluation information into corresponding text information;
analyzing the text information to determine the evaluation object of the user's evaluation of the target service and the evaluation result corresponding to the evaluation object;
extracting, from the speech evaluation information, the speech segment corresponding to the evaluation result;
performing speech analysis on the speech segment to obtain the mood label and tone label corresponding to the evaluation result; and
predicting the user's behavior according to the mood label, the tone label, the evaluation object, and the evaluation result.
In a second aspect, an embodiment of the present invention provides a user behavior prediction apparatus, comprising:
an information conversion module, configured to obtain speech evaluation information in which a user evaluates a target service and to convert the speech evaluation information into corresponding text information;
an evaluation result determining module, configured to analyze the text information and determine the evaluation object of the user's evaluation of the target service and the evaluation result corresponding to the evaluation object;
a speech segment extraction module, configured to extract, from the speech evaluation information, the speech segment corresponding to the evaluation result;
a speech segment analysis module, configured to perform speech analysis on the speech segment and obtain the mood label and tone label corresponding to the evaluation result; and
a behavior prediction module, configured to predict the user's behavior according to the mood label, the tone label, the evaluation object, and the evaluation result.
In a third aspect, an embodiment of the present invention provides a computer-readable storage medium storing computer-readable instructions which, when executed by a processor, implement the steps of the user behavior prediction method of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a terminal device comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, the processor implementing the following steps when executing the computer-readable instructions:
obtaining speech evaluation information in which a user evaluates a target service, and converting the speech evaluation information into corresponding text information;
analyzing the text information to determine the evaluation object of the user's evaluation of the target service and the evaluation result corresponding to the evaluation object;
extracting, from the speech evaluation information, the speech segment corresponding to the evaluation result;
performing speech analysis on the speech segment to obtain the mood label and tone label corresponding to the evaluation result; and
predicting the user's behavior according to the mood label, the tone label, the evaluation object, and the evaluation result.
As can be seen from the above technical solutions, the embodiments of the present invention have the following advantages: after the speech evaluation information in which a user evaluates a target service is obtained, it is first converted into text information, which is analyzed to determine the evaluation object targeted by the user's evaluation of the target service and the specific evaluation result. Next, the speech segment corresponding to the evaluation result is extracted and analyzed to determine the mood label and tone label at the time of the user's evaluation. The mood label, tone label, evaluation object, and evaluation result are then comprehensively analyzed to predict the user's behavior, that is, to determine whether the user is a potential churn user. By comprehensively analyzing mood, tone, evaluation object, and evaluation content, user behavior is predicted completely and accurately, mis-prediction is avoided, and the accuracy of user behavior prediction is improved.
Description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the embodiments or in the description of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flow chart of an embodiment of a user behavior prediction method in an embodiment of the present invention;
Fig. 2 is a schematic flow chart of obtaining a tone label with the user behavior prediction method in one application scenario;
Fig. 3 is a schematic flow chart of training a behavior prediction model with the user behavior prediction method in one application scenario;
Fig. 4 is a schematic flow chart of associating and storing data with the user behavior prediction method in one application scenario;
Fig. 5 is a structural diagram of an embodiment of a user behavior prediction apparatus in an embodiment of the present invention;
Fig. 6 is a schematic diagram of a terminal device provided by an embodiment of the present invention.
Detailed description of the embodiments
Embodiments of the present invention provide a user behavior prediction method, a prediction apparatus, a computer-readable storage medium, and a terminal device for predicting user behavior comprehensively and accurately, reducing or avoiding mis-prediction, and improving the accuracy of user behavior prediction.
To make the purposes, features, and advantages of the present invention clearer and easier to understand, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the embodiments described below are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Referring to Fig. 1, an embodiment of the present invention provides a user behavior prediction method comprising:
Step S101: obtain speech evaluation information in which a user evaluates a target service, and convert the speech evaluation information into corresponding text information.
In an embodiment of the present invention, after a telephone service with customer support is completed, the user may give a speech evaluation of the target service according to prompt information, or may give a speech evaluation of the target service by visiting the website or app corresponding to the target service, for example evaluating the customer-service quality of the target service or the product function of a related product, thereby producing the user's speech evaluation information. Once the user's speech evaluation information is obtained, it can be recognized through speech recognition technology and converted into corresponding text information.
Step S102: analyze the text information to determine the evaluation object of the user's evaluation of the target service and the evaluation result corresponding to the evaluation object.
It can be understood that, after the text information corresponding to the speech evaluation information is obtained, keywords can be extracted from the text information, and the evaluation object targeted by the user in the target service can be determined by matching the extracted keywords against predetermined keywords saved in a database, where the evaluation object may be the product type, product function, business process, customer-service quality, and so on, of the target service. After the targeted evaluation object is determined, the text information can further be segmented into words, sentiment words and their modifiers can be identified among the resulting segments, and the sentiment phrase composed of a sentiment word and its modifier can be taken as the evaluation result corresponding to the evaluation object.
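As a rough illustration of Step S102, the keyword matching and sentiment-phrase pairing described above might look like the following sketch. All keyword lists here are invented placeholders, not the contents of the patent's database.

```python
# Hypothetical sketch of Step S102: match predefined evaluation-object
# keywords against the word-cut transcript, then pair sentiment words
# with their modifiers to form the evaluation result.

# Illustrative evaluation-object keywords (product type, product
# function, business process, customer-service quality, ...).
OBJECT_KEYWORDS = {
    "app": "product function",
    "agent": "customer-service quality",
    "checkout": "business process",
}

# Illustrative sentiment words and modifiers.
SENTIMENT_WORDS = {"slow", "great", "poor", "helpful"}
MODIFIERS = {"very", "extremely", "somewhat"}

def analyze_text(tokens):
    """Return (evaluation_object, evaluation_result) from word-cut tokens."""
    evaluation_object = None
    for tok in tokens:
        if tok in OBJECT_KEYWORDS:
            evaluation_object = OBJECT_KEYWORDS[tok]
            break
    # Pair each sentiment word with an immediately preceding modifier.
    result_phrases = []
    for i, tok in enumerate(tokens):
        if tok in SENTIMENT_WORDS:
            if i > 0 and tokens[i - 1] in MODIFIERS:
                result_phrases.append(tokens[i - 1] + " " + tok)
            else:
                result_phrases.append(tok)
    return evaluation_object, result_phrases

obj, result = analyze_text(["the", "app", "is", "very", "slow"])
```

In this sketch the transcript "the app is very slow" yields the evaluation object "product function" and the evaluation result "very slow".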
Step S103: extract, from the speech evaluation information, the speech segment corresponding to the evaluation result.
In an embodiment of the present invention, because the speech evaluation information may contain much irrelevant content unrelated to the specific evaluation, the speech segment corresponding to the evaluation result can be cut out of the speech evaluation information after the evaluation result corresponding to the evaluation object is obtained, so as to filter out the irrelevant content. The user's mood label and tone label are then determined by analyzing that segment. Performing speech analysis only on the specific speech segment reduces the interference of invalid information and improves the efficiency and accuracy with which the mood label and tone label are determined.
Step S104: perform speech analysis on the speech segment to obtain the mood label and tone label corresponding to the evaluation result.
It can be understood that, after the speech segment corresponding to the evaluation result is cut out, speech analysis can be performed on it to obtain the user's mood and tone when evaluating the evaluation object, that is, the mood label and tone label corresponding to the evaluation object.
Further, as shown in Fig. 2, performing speech analysis on the speech segment to obtain the mood label and tone label corresponding to the evaluation result may include:
Step S201: perform mood analysis on the speech segment to obtain the mood label corresponding to the evaluation result.
It can be understood that, in an embodiment of the present invention, an overall speech analysis of the speech segment can be performed first, for example an overall mood analysis that determines the mood label at the time the user evaluated the evaluation object; that is, the mood analysis measures the user's mood during the speech evaluation from the perspective of the segment as a whole. The mood label may include a positive mood label and a negative mood label, where a positive mood label indicates that the user holds a positive, affirmative attitude toward the evaluation object, and a negative mood label indicates that the user holds a negative attitude toward it.
Step S202: extract, from the evaluation result, first keywords whose mood matches the mood label and second keywords whose mood is opposite to the mood label.
At different moments of the speech evaluation the user may be in different moods; that is, the speech segment corresponding to the evaluation result may contain multiple mood states. Therefore, in an embodiment of the present invention, after the mood label corresponding to the evaluation result is obtained, first keywords whose mood matches the mood label and second keywords whose mood is opposite to it can further be extracted from the evaluation result. For example, when the mood label is a positive mood label, the first keywords may be keywords characterizing a positive, affirmative mood and the second keywords may be keywords characterizing a negative mood; conversely, when the mood label is a negative mood label, the first keywords may be keywords characterizing a negative mood and the second keywords may be keywords characterizing a positive, affirmative mood.
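A minimal sketch of this polarity split in Step S202, assuming illustrative positive and negative word lists rather than any lexicon prescribed by the patent:

```python
# Sketch of Step S202 (assumed word lists): given the clip's overall
# mood label, split the sentiment keywords found in the evaluation
# result into "first" keywords (same polarity as the mood label) and
# "second" keywords (opposite polarity).

POSITIVE_WORDS = {"great", "helpful", "fast"}   # illustrative
NEGATIVE_WORDS = {"slow", "poor", "useless"}    # illustrative

def split_keywords(result_words, mood_label):
    same = POSITIVE_WORDS if mood_label == "positive" else NEGATIVE_WORDS
    opposite = NEGATIVE_WORDS if mood_label == "positive" else POSITIVE_WORDS
    first = [w for w in result_words if w in same]
    second = [w for w in result_words if w in opposite]
    return first, second

first, second = split_keywords(["slow", "helpful", "poor"], "negative")
```

With a negative mood label, "slow" and "poor" become first keywords and "helpful" becomes a second keyword.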
Step S203: perform volume analysis on the speech segment to determine the first decibel values corresponding to the first keywords and the second decibel values corresponding to the second keywords.
Here, after the first keywords and second keywords are extracted, the speech segment corresponding to the evaluation result can be analyzed again, this time with the volume analysis described above, to obtain the first decibel values corresponding to the first keywords and the second decibel values corresponding to the second keywords. In an embodiment of the present invention, the volume analysis mainly locates the first and second keywords within the speech segment and records the decibel value at the moment each keyword occurs.
Step S204: calculate the final decibel value corresponding to the speech segment according to the first decibel values and the second decibel values.
In an embodiment of the present invention, after the first decibel values corresponding to the first keywords and the second decibel values corresponding to the second keywords are obtained, the user's final decibel value can be determined by jointly weighing the decibel values of the user's positive and negative moods, so as to determine the user's final tone. Specifically, the final decibel value corresponding to the speech segment can be calculated according to the following formula:

DecibelSum = Quotiety1 × Σ(i=1..N) Decibel1_i + Quotiety2 × Σ(t=1..T) Decibel2_t

where DecibelSum is the final decibel value corresponding to the speech segment, Decibel1_i is the i-th first decibel value, Quotiety1 is the preset weight corresponding to the first decibel values, Decibel2_t is the t-th second decibel value, Quotiety2 is the preset weight corresponding to the second decibel values, N is the total number of first decibel values, and T is the total number of second decibel values.
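Reading the variable definitions above as a weighted sum, Step S204 might be sketched as follows. The weight values below are illustrative defaults, not values fixed by the patent.

```python
# Sketch of Step S204: the final decibel value as a weighted sum of the
# decibel values recorded for the first keywords and second keywords.
# w1 and w2 stand in for the preset weights Quotiety1 and Quotiety2.

def final_decibel(first_db, second_db, w1=1.0, w2=0.5):
    """DecibelSum = w1 * sum(first_db) + w2 * sum(second_db)."""
    return w1 * sum(first_db) + w2 * sum(second_db)

# Two same-polarity keywords at 60 dB and 70 dB, one opposite at 40 dB.
db = final_decibel([60.0, 70.0], [40.0])
```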
Step S205: obtain the tone label corresponding to the evaluation result according to the mood label and the final decibel value.
It can be understood that, after the final decibel value corresponding to the evaluation result is obtained, the tone label corresponding to the evaluation result can be obtained according to the mood label corresponding to the evaluation result and the final decibel value. For example, when the mood label is a positive mood label and the final decibel value falls within a first preset interval, a first tone label characterizing a good tone is obtained; when the mood label is a positive mood label and the final decibel value falls within a second preset interval, a second tone label characterizing an average tone is obtained; and when the mood label is a negative mood label and the final decibel value falls within a third preset interval, a third tone label characterizing a poor tone is obtained, and so on. The first, second, and third preset intervals can be configured as required; for example, the first preset interval may be set to 40 to 55 decibels, the second preset interval to 55 to 70 decibels, and the third preset interval to 70 decibels and above.
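Using the example intervals just given, the mood-plus-decibel mapping of Step S205 can be sketched as follows. The fallback branch is an assumption for input combinations the text does not cover.

```python
# Sketch of Step S205: map the mood label plus the final decibel value
# to a tone label using the example intervals from the text
# (40-55 dB, 55-70 dB, 70 dB and above).

def tone_label(mood_label, final_db):
    if mood_label == "positive" and 40 <= final_db < 55:
        return "tone-good"       # first tone label
    if mood_label == "positive" and 55 <= final_db < 70:
        return "tone-average"    # second tone label
    if mood_label == "negative" and final_db >= 70:
        return "tone-poor"       # third tone label
    return "tone-unclassified"   # fallback, not specified in the text

label = tone_label("negative", 85.0)
```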
Step S105: predict the user's behavior according to the mood label, the tone label, the evaluation object, and the evaluation result.
In an embodiment of the present invention, after the mood label, the tone label, the evaluation object, and the evaluation result corresponding to the evaluation object are obtained, the user's behavior can be predicted from them, that is, whether the user's behavior is potential churn behavior. For example, when the evaluation object is a product function, the evaluation result is that the function is very poor, the corresponding mood label is a negative mood label, and the corresponding tone label indicates a poor tone, the user is very likely to abandon the product; the user's behavior can then be predicted to be potential churn behavior, and the user is determined to be a potential churn user.
Further, in an embodiment of the present invention, predicting the user's behavior according to the mood label, the tone label, the evaluation object, and the evaluation result may include:
Step a: generate the vector groups corresponding to the mood label, the tone label, the evaluation object, and the evaluation result respectively using a preset vector generation model, and form the vector groups into an input matrix;
Step b: input the input matrix into a pre-trained behavior prediction model to obtain the prediction result output by the behavior prediction model, the prediction result including a user type and a corresponding probability value, the user type including potential churn user and non-potential churn user;
Step c: if the prediction result is potential churn user and the corresponding probability value is greater than a preset probability threshold, predict that the user's behavior is potential churn behavior;
Step d: if the prediction result is potential churn user but the corresponding probability value is less than or equal to the preset probability threshold, or the prediction result is non-potential churn user, predict that the user's behavior is non-potential churn behavior.
Regarding steps a to d, it can be understood that a behavior prediction model can be constructed in advance whose input is the input matrix formed from the vector groups corresponding to the mood label, tone label, evaluation object, and evaluation result, and whose output is a specific prediction result, so that the user's behavior can be predicted from the prediction result output by the model, thereby determining whether the user is a potential churn user. Therefore, after the mood label, tone label, evaluation object, and evaluation result are obtained, the corresponding vector groups are first generated with the preset vector generation model. The specific training of the behavior prediction model is described in detail later.
Specifically, the preset vector generation model may be a vector model constructed with word2vec technology, for example a vector model based on the CBOW (Continuous Bag of Words) model or on the Skip-gram model. In an embodiment of the present invention, after the mood label, tone label, evaluation object, and evaluation result are obtained, the corresponding vector groups can be generated through this preset vector model: a first vector corresponding to the mood label, a second vector corresponding to the tone label, a third vector corresponding to the evaluation object, and a fourth vector corresponding to the evaluation result. The first, second, third, and fourth vectors are then formed into an input matrix, for example WordMatrix = (WordVec1, WordVec2, WordVec3, WordVec4), where WordVec1 is the first vector, WordVec2 the second vector, WordVec3 the third vector, and WordVec4 the fourth vector. After the corresponding input matrix is obtained, it can be input into the pre-trained behavior prediction model to obtain the prediction result output by the model. The output prediction result may include a user type and a corresponding probability value, where the user type may include potential churn user and non-potential churn user. When the prediction result is potential churn user and the corresponding probability value is greater than the preset probability threshold, the user's behavior can be predicted to be potential churn behavior; when the prediction result is potential churn user but the corresponding probability value is less than or equal to the preset probability threshold, or the prediction result is non-potential churn user, the user's behavior can be predicted to be non-potential churn behavior.
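A sketch of assembling WordMatrix from the four vectors. A real implementation would look the vectors up in a trained word2vec (CBOW or Skip-gram) model; `fake_embed` below is a deterministic toy stand-in for that lookup.

```python
import numpy as np

# Sketch of Step a: build WordMatrix = (WordVec1..WordVec4) by stacking
# one vector each for the mood label, tone label, evaluation object,
# and evaluation result.

DIM = 8  # illustrative embedding size

def fake_embed(text):
    """Placeholder for a word2vec lookup (deterministic toy vector)."""
    seed = sum(ord(c) for c in text)
    rng = np.random.default_rng(seed)
    return rng.standard_normal(DIM)

def build_input_matrix(mood, tone, evaluation_object, evaluation_result):
    vectors = [fake_embed(x) for x in
               (mood, tone, evaluation_object, evaluation_result)]
    return np.stack(vectors)   # shape (4, DIM)

word_matrix = build_input_matrix("negative", "tone-poor",
                                 "product function", "very slow")
```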
Preferably, in an embodiment of the present invention, the prediction process of the behavior prediction model may include:
Step e: calculate the probability value of each preset result according to the following formula:

Prob_m = exp(WeightMatrix_m · WordMatrix) / Σ(j=1..M) exp(WeightMatrix_j · WordMatrix)

where Prob_m is the probability value of the m-th preset result, WeightMatrix_m is the weight matrix corresponding to the m-th preset result, WordMatrix is the input matrix, and M is the number of preset results.
Step f: take the preset result with the largest probability value, together with its probability value, as the prediction result corresponding to the input matrix.
Regarding steps e and f, it can be understood that, in determining the prediction result corresponding to the input matrix, the behavior prediction model first calculates the probability value of the input matrix for each preset result and then determines the prediction result from those probability values. The preset results may include the two results "potential churn user" and "non-potential churn user"; that is, a first probability value for "potential churn user" and a second probability value for "non-potential churn user" are calculated separately for the input matrix. For example, when the first probability value for "potential churn user" is calculated to be 0.7 and the second probability value for "non-potential churn user" is 0.3, "potential churn user" together with the first probability value 0.7 is taken as the prediction result corresponding to the input matrix.
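Under a softmax reading of steps e and f, with one weight matrix per preset result scored against the input matrix and normalised into probabilities, the prediction step might be sketched as follows. The weight matrices here are random stand-ins for trained parameters, and the scoring by an elementwise inner product is an assumption.

```python
import numpy as np

# Sketch of steps e and f: a softmax over the M preset results.

RESULTS = ["potential churn user", "non-potential churn user"]

def predict(word_matrix, weight_matrices):
    # Score each preset result with the elementwise (Frobenius) inner
    # product <WeightMatrix_m, WordMatrix>, then normalise via softmax.
    scores = np.array([np.sum(w * word_matrix) for w in weight_matrices])
    scores -= scores.max()                       # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    m = int(np.argmax(probs))                    # step f: take the max
    return RESULTS[m], float(probs[m])

rng = np.random.default_rng(0)
word_matrix = rng.standard_normal((4, 8))
weights = [rng.standard_normal((4, 8)) for _ in RESULTS]
result, prob = predict(word_matrix, weights)
```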
The training process of the behavior prediction model is described in detail below. Specifically, as shown in Fig. 3, in an embodiment of the present invention the behavior prediction model is trained through the following steps:
Step S301: select a preset number of training samples, each training sample including an input matrix and an expected output result, the expected output result including the standard user type corresponding to the training sample and a corresponding standard probability value;
Step S302: input each training sample into the initial behavior prediction model to obtain the training prediction result output by the initial behavior prediction model;
Step S303: calculate the global error of the current training round according to the training prediction results and the expected output results;
Step S304: judge whether the global error satisfies a first preset condition;
Step S305: if the global error satisfies the first preset condition, determine that training of the behavior prediction model is complete;
Step S306: if the global error does not satisfy the first preset condition, adjust the model parameters of the behavior prediction model, take the behavior prediction model with the adjusted model parameters as the initial behavior prediction model, and return to the step of inputting each training sample into the initial behavior prediction model to obtain the training prediction result output by the initial behavior prediction model, together with the subsequent steps.
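The training loop of steps S301 to S306 can be sketched as below. The linear model, the mean-squared global error, and the gradient update are illustrative stand-ins, and the "first preset condition" is assumed here to be an error threshold; the patent does not prescribe these internals.

```python
import numpy as np

# Sketch of S301-S306: run the samples through the model, compute a
# global error against the expected outputs, stop when the error meets
# the (assumed) threshold condition, otherwise adjust the parameters
# and repeat.

def train(samples, labels, dim, lr=0.1, tol=1e-3, max_rounds=500):
    rng = np.random.default_rng(1)
    w = rng.standard_normal(dim) * 0.1              # model parameters
    for _ in range(max_rounds):
        preds = samples @ w                          # S302: predict
        errors = preds - labels
        global_error = float(np.mean(errors ** 2))   # S303: global error
        if global_error < tol:                       # S304/S305: done
            return w, global_error
        w -= lr * samples.T @ errors / len(labels)   # S306: adjust
    return w, global_error

# S301: toy training samples and expected outputs.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 0.0, 1.0])
w, err = train(X, y, dim=2)
```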
Regarding step S301, before the behavior prediction model is trained, the training samples used for training need to be selected in advance; that is, a preset number of training samples is selected, each including an input matrix and an expected output result corresponding to that input matrix. The expected output result may include the standard user type corresponding to the training sample and a corresponding standard probability value, where the standard user type can be determined from the actual user behavior corresponding to the training sample, for example identified as potential churn user or non-potential churn user. It can be understood that the larger the volume of training samples, the better the training effect on the behavior prediction model; thus, in an embodiment of the present invention, as many training samples as possible may be selected.
Specifically, historical speech evaluation information may first be obtained; the mood label, tone label, evaluation object and evaluation result corresponding to each piece of historical speech evaluation information are then determined; the vector groups corresponding to the mood label, tone label, evaluation object and evaluation result of each piece are generated; and finally the vectors of each piece of historical speech evaluation information are composed into an input matrix, which serves as a training sample. In addition, each training sample may also include an expected output result determined from the actual behavior of the user after making the historical speech evaluation.
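As an illustrative sketch only (the patent does not specify its vector generation model), composing the four vector groups into an input matrix might look as follows; the 3-dimensional embedding values are invented for illustration:

```python
import numpy as np

# Hypothetical sketch: compose the input matrix from the vector groups of
# mood label, tone label, evaluation object and evaluation result. The
# embeddings below are made-up placeholders; a real vector generation
# model would supply them.
def build_input_matrix(mood_vec, tone_vec, object_vec, result_vec):
    # Each row of the input matrix is one vector group, in a fixed order.
    return np.vstack([mood_vec, tone_vec, object_vec, result_vec])

matrix = build_input_matrix(np.array([0.9, 0.1, 0.0]),   # e.g. "angry" mood
                            np.array([0.8, 0.2, 0.0]),   # e.g. "strong" tone
                            np.array([0.1, 0.7, 0.2]),   # evaluation object
                            np.array([0.0, 0.3, 0.7]))   # evaluation result
```

The fixed row order matters only in that training and prediction must compose the matrix the same way.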
Regarding step S302: it will be appreciated that after the preset number of training samples has been chosen, the samples can be input into the initial behavior prediction model to obtain the training prediction result for each sample, i.e. the initial prediction result corresponding to each input matrix. Because training of the initial behavior prediction model is not yet complete, a certain deviation, or error, will exist between the training prediction results it outputs and the expected output results.
Regarding steps S303 and S304: after each training prediction result is obtained, the global error of the current training round can be calculated from the training prediction results and the corresponding expected output results, and it is judged whether the global error satisfies the first preset condition, for example whether it is less than 5%. The first preset condition can be set when training the specific behavior prediction model; for example, it may require the global error to be less than a specific threshold, which can be expressed as a percentage. The smaller this threshold, the more stable the finally trained behavior prediction model and the higher its prediction accuracy.
Regarding step S305: it will be appreciated that when the global error of the current round satisfies the first preset condition, for example when it is less than 5%, training of the behavior prediction model can be determined to be complete.
Regarding step S306: when the global error of the current round does not satisfy the first preset condition, for example when it is 10%, the model parameters of the behavior prediction model are adjusted, the adjusted model is taken as the initial behavior prediction model, and training on the samples is performed again. By repeatedly adjusting the model parameters and performing multiple rounds of training, the global error calculated from the training prediction results and the expected output results is minimized, until the final global error satisfies the first preset condition.
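The training loop of steps S301 to S306 can be sketched as follows. This is a minimal toy stand-in, not the patent's model: the softmax scorer, the learning rate, the hand-made samples and the 5% first preset condition are all illustrative assumptions:

```python
import numpy as np

# Toy stand-in for the behavior prediction model: a softmax classifier over
# two user types (potential loss / non-potential loss).
def predict(weights, x):
    logits = weights @ x
    exp = np.exp(logits - logits.max())       # numerically stable softmax
    return exp / exp.sum()                    # probability per user type

def global_error(weights, samples):
    # Mean deviation between training predictions and expected outputs.
    return float(np.mean([np.abs(predict(weights, x) - y).sum()
                          for x, y in samples]))

# Step S301: preset number of training samples, each an input vector plus an
# expected output (one-hot standard user type). Values are invented.
samples = [(np.array([v, 1.0]), np.eye(2)[0 if v > 0 else 1])
           for v in (-2.0, -1.2, 1.1, 2.3)]

weights = np.zeros((2, 2))
THRESHOLD = 0.05                              # first preset condition: < 5%
for _ in range(10000):
    err = global_error(weights, samples)      # steps S302-S303
    if err < THRESHOLD:                       # step S304
        break                                 # step S305: training complete
    for x, y in samples:                      # step S306: adjust parameters
        p = predict(weights, x)
        weights -= 0.5 * np.outer(p - y, x) / len(samples)
```

The loop mirrors the described flow: compute the round's global error, stop when the first preset condition is met, otherwise adjust the parameters and train again.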
Optionally, as shown in Figure 4, in this embodiment of the invention, after the behavior of the user is predicted according to the mood label, the tone label, the evaluation object and the evaluation result, the method may further include:
Step S401: predict the behavior of the user according to the mood label, the tone label, the evaluation object and the evaluation result;
Step S402: judge whether the behavior of the user satisfies a second preset condition;
Step S403: if the behavior of the user satisfies the second preset condition, obtain the marker colors corresponding to the mood label, the tone label, the evaluation object and the evaluation result respectively according to a preset correspondence;
Step S404: color-mark the mood label, the tone label, the evaluation object and the evaluation result with the respective marker colors;
Step S405: store the color-marked mood label, tone label, evaluation object and evaluation result in a preset database in association with the user.
Regarding steps S401 to S405: in this embodiment of the invention, the second preset condition may be set to potential loss behavior; that is, when the prediction result output by the behavior prediction model is a potential loss user and the corresponding probability value exceeds the preset probability threshold, the behavior of the user is predicted to be potential loss behavior and satisfies the second preset condition. At this point, the mood label and tone label produced when the user made the speech evaluation, together with the evaluation object and evaluation result of the user's evaluation of the target service, can be marked with a conspicuous color, for example red or yellow. The color-marked mood label, tone label, evaluation object and evaluation result can then be stored in a preset database in association with the user, so that when the relevant evaluation content is retrieved in a later query, the marker colors help the relevant personnel quickly locate the reasons why the user may churn, facilitating targeted analysis and resolution of those reasons so as to reduce customer churn.
Here, the marker colors corresponding to the mood label, tone label, evaluation object and evaluation result can be determined according to a preset correspondence. For example, the marker color of the mood label may be preset to red, that of the tone label to yellow, that of the evaluation object to green and that of the evaluation result to blue; the user's mood label is then marked in red, the tone label in yellow, the evaluation object of the user's evaluation in green, and the evaluation result corresponding to that evaluation object in blue, making it convenient for the relevant personnel to quickly grasp the user's evaluation.
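Steps S401 to S405 can be sketched as follows, using the example correspondence above (red mood label, yellow tone label, green evaluation object, blue evaluation result). The field names, the 0.6 probability threshold and the in-memory dict standing in for the preset database are assumptions for illustration:

```python
# Example correspondence from the description: one preset marker color per
# field type. Field names are hypothetical.
PRESET_COLORS = {
    "mood_label": "red",
    "tone_label": "yellow",
    "evaluation_object": "green",
    "evaluation_result": "blue",
}

def mark_and_store(db, user_id, fields, prediction, prob, prob_threshold=0.6):
    # Step S402: second preset condition -- potential loss behavior, i.e.
    # a potential-loss prediction whose probability exceeds the threshold.
    if prediction == "potential_loss_user" and prob > prob_threshold:
        # Steps S403-S404: attach the preset marker color to each field.
        marked = {k: {"value": v, "color": PRESET_COLORS[k]}
                  for k, v in fields.items()}
        # Step S405: store the marked record in association with the user.
        db[user_id] = marked
    return db

db = mark_and_store({}, "user-001",
                    {"mood_label": "angry", "tone_label": "strong",
                     "evaluation_object": "after-sales service",
                     "evaluation_result": "poor attitude"},
                    "potential_loss_user", 0.7)
```

A real implementation would write to a database rather than a dict, but the marking logic is the same.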
In this embodiment of the invention, after the speech evaluation information in which the user evaluates the target service is obtained, the speech evaluation information can first be converted into text information, and the text information analyzed to determine the evaluation object of the user's evaluation of the target service and the specific evaluation result. Next, the speech segment corresponding to the evaluation result can be extracted and subjected to speech analysis to determine the mood label and tone label at the time of the user's evaluation. The mood label, tone label, evaluation object and evaluation result are then comprehensively analyzed to predict the behavior of the user, i.e. to determine whether the user is a potential loss user. By comprehensively analyzing mood, tone, evaluation object and evaluation content, user behavior is predicted comprehensively and accurately, mispredictions are avoided, and the accuracy of user behavior prediction is improved.
It should be understood that the magnitude of the serial numbers of the steps in the above embodiments does not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present invention.
A user behavior prediction method has been described above; a user behavior prediction device is now described in detail.
As shown in Figure 5, an embodiment of the present invention provides a user behavior prediction device, the user behavior prediction device comprising:
an information conversion module 501, configured to obtain speech evaluation information in which a user evaluates a target service and convert the speech evaluation information into corresponding text information;
an evaluation result determining module 502, configured to analyze the text information and determine the evaluation object of the user's evaluation of the target service and the evaluation result corresponding to the evaluation object;
a speech segment extraction module 503, configured to extract the speech segment corresponding to the evaluation result from the speech evaluation information;
a speech segment analysis module 504, configured to perform speech analysis on the speech segment to obtain the mood label and tone label corresponding to the evaluation result; and
a behavior prediction module 505, configured to predict the behavior of the user according to the mood label, the tone label, the evaluation object and the evaluation result.
Further, the behavior prediction module 505 may include:
a matrix composition unit, configured to generate the vector groups corresponding to the mood label, the tone label, the evaluation object and the evaluation result respectively using a preset vector generation model, and compose the vector groups into an input matrix;
a matrix input unit, configured to input the input matrix into a pre-trained behavior prediction model and obtain the prediction result output by the behavior prediction model, the prediction result including a user type and a corresponding probability value, the user type including potential loss user and non-potential loss user;
a first behavior prediction unit, configured to predict that the behavior of the user is potential loss behavior if the prediction result is a potential loss user and the corresponding probability value is greater than a preset probability threshold; and
a second behavior prediction unit, configured to predict that the behavior of the user is non-potential loss behavior if the prediction result is a potential loss user and the corresponding probability value is less than or equal to the preset probability threshold, or if the prediction result is a non-potential loss user.
Preferably, the matrix input unit may include:
a probability value calculation subunit, configured to calculate the probability value of each preset result according to the following formula:
where Prob_m is the probability value of the m-th preset result, WeightMatrix_m is the weight matrix corresponding to the m-th preset result, WordMatrix is the input matrix, and M is the number of preset results; and
a prediction result determining subunit, configured to determine the preset result with the largest probability value, together with its corresponding probability value, as the prediction result corresponding to the input matrix.
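The probability formula itself is reproduced only as an image in the original patent, so its exact expression is not recoverable here. One plausible softmax-style reading of the described symbols (Prob_m, WeightMatrix_m, WordMatrix, M), in which each preset result scores the input matrix with its own weight matrix, might look like:

```python
import numpy as np

# Assumed reading, not the patent's actual formula: each preset result m
# scores the input matrix via its weight matrix, and the scores are
# normalized into probabilities with a softmax.
def result_probabilities(weight_matrices, word_matrix):
    # score_m = elementwise product of WeightMatrix_m and WordMatrix, summed
    scores = np.array([np.sum(w * word_matrix) for w in weight_matrices])
    exp = np.exp(scores - scores.max())    # numerically stable softmax
    return exp / exp.sum()                 # Prob_m for m = 1..M

word_matrix = np.array([[0.9, 0.1], [0.2, 0.8]])  # illustrative input matrix
weights = [np.eye(2), np.ones((2, 2))]            # M = 2 preset results
probs = result_probabilities(weights, word_matrix)
best = int(np.argmax(probs))   # preset result with the largest probability
```

The last line mirrors the prediction result determining subunit: the maximum-probability preset result, with its probability value, becomes the prediction result.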
Optionally, the user behavior prediction device may further include:
a training sample selection module, configured to select a preset number of training samples, each training sample including an input matrix and an expected output result, the expected output result including the standard user type corresponding to the training sample and the corresponding standard probability value;
a training prediction result obtaining module, configured to input each training sample into an initial behavior prediction model and obtain the training prediction result output by the initial behavior prediction model;
a global error computing module, configured to calculate the global error of the current training round from the training prediction results and the expected output results;
a model parameter adjustment module, configured to adjust, if the global error does not satisfy the first preset condition, the model parameters of the behavior prediction model, take the adjusted model as the initial behavior prediction model, and return to the step of inputting each training sample into the initial behavior prediction model to obtain its training prediction result, together with the subsequent steps; and
a training completion determining module, configured to determine, if the global error satisfies the first preset condition, that training of the behavior prediction model is complete.
Further, the speech segment analysis module 504 may include:
a mood label acquiring unit, configured to perform mood analysis on the speech segment to obtain the mood label corresponding to the evaluation result;
a keyword extracting unit, configured to extract from the evaluation result first keywords whose mood matches the mood label and second keywords whose mood is opposite to the mood label;
a decibel value acquiring unit, configured to perform volume analysis on the speech segment to determine the first decibel values corresponding to the first keywords and the second decibel values corresponding to the second keywords;
a decibel value computing unit, configured to calculate the final decibel value corresponding to the speech segment from the first decibel values and the second decibel values; and
a tone label acquiring unit, configured to obtain the tone label corresponding to the evaluation result according to the mood label and the final decibel value.
Preferably, the decibel value computing unit is specifically configured to calculate the final decibel value corresponding to the speech segment according to the following calculation formula:
where DecibelSum is the final decibel value corresponding to the speech segment, Decibel1_i is the i-th first decibel value, Quotiety1 is the preset weight corresponding to the first decibel values, Decibel2_t is the t-th second decibel value, Quotiety2 is the preset weight corresponding to the second decibel values, N is the total number of first decibel values, and T is the total number of second decibel values.
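The calculation formula is likewise an image in the original patent and cannot be recovered exactly. One weighted-average reading consistent with the listed symbols (Decibel1_i, Decibel2_t, Quotiety1, Quotiety2, N, T) might be:

```python
# Assumed reading, not the patent's actual formula: a weighted average of
# the first and second decibel values over all N + T keywords. The weight
# values and decibel values below are hypothetical.
def final_decibel(first_decibels, second_decibels, q1=0.6, q2=0.4):
    n, t = len(first_decibels), len(second_decibels)
    weighted = q1 * sum(first_decibels) + q2 * sum(second_decibels)
    return weighted / (n + t)  # DecibelSum over all N + T keywords

loudness = final_decibel([62.0, 70.0], [55.0])
```

The intent either way is that keywords agreeing with the mood label contribute with one preset weight and opposing keywords with another, and the combined loudness then feeds the tone label decision.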
Optionally, the user's behavior prediction device can also include:
a behavior judgment module, configured to judge whether the behavior of the user satisfies the second preset condition;
a marker color obtaining module, configured to obtain, if the behavior of the user satisfies the second preset condition, the marker colors corresponding to the mood label, the tone label, the evaluation object and the evaluation result respectively according to a preset correspondence;
a color marking module, configured to color-mark the mood label, the tone label, the evaluation object and the evaluation result with the respective marker colors; and
an associated storage module, configured to store the color-marked mood label, tone label, evaluation object and evaluation result in a preset database in association with the user.
Fig. 6 is a schematic diagram of a terminal device provided by an embodiment of the present invention. As shown in Fig. 6, the terminal device 6 of this embodiment includes a processor 60, a memory 61 and computer-readable instructions 62 stored in the memory 61 and executable on the processor 60, for example a user behavior prediction program. When executing the computer-readable instructions 62, the processor 60 implements the steps of each of the user behavior prediction method embodiments described above, for example steps S101 to S105 shown in Fig. 1; alternatively, the processor 60 implements the functions of each module/unit in each of the device embodiments described above, for example the functions of modules 501 to 505 shown in Fig. 5.
Illustratively, the computer-readable instructions 62 may be divided into one or more modules/units, which are stored in the memory 61 and executed by the processor 60 to implement the present invention. The one or more modules/units may be a series of computer-readable instruction segments capable of completing specific functions; the instruction segments describe the execution process of the computer-readable instructions 62 in the terminal device 6.
The terminal device 6 may be a computing device such as a desktop computer, a notebook, a palmtop computer or a cloud server. The terminal device may include, but is not limited to, the processor 60 and the memory 61. Those skilled in the art will understand that Fig. 6 is only an example of the terminal device 6 and does not limit it; the terminal device may include more or fewer components than shown, combine certain components, or use different components; for example, it may further include input/output devices, a network access device, a bus, and the like.
The processor 60 may be a central processing unit (Central Processing Unit, CPU), another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 61 may be an internal storage unit of the terminal device 6, for example a hard disk or internal memory of the terminal device 6. The memory 61 may also be an external storage device of the terminal device 6, for example a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card or a flash card (Flash Card) fitted to the terminal device 6. Further, the memory 61 may include both an internal storage unit of the terminal device 6 and an external storage device. The memory 61 is used to store the computer-readable instructions and the other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is about to be output.
In addition, the functional units in the various embodiments of the present invention may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to execute all or part of the steps of the methods of the various embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disc.
The above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the various embodiments of the present invention.
Claims (10)
1. A user behavior prediction method, characterized by comprising:
obtaining speech evaluation information in which a user evaluates a target service, and converting the speech evaluation information into corresponding text information;
analyzing the text information to determine the evaluation object of the user's evaluation of the target service and the evaluation result corresponding to the evaluation object;
extracting the speech segment corresponding to the evaluation result from the speech evaluation information;
performing speech analysis on the speech segment to obtain the mood label and tone label corresponding to the evaluation result; and
predicting the behavior of the user according to the mood label, the tone label, the evaluation object and the evaluation result.
2. The user behavior prediction method according to claim 1, characterized in that predicting the behavior of the user according to the mood label, the tone label, the evaluation object and the evaluation result comprises:
generating the vector groups corresponding to the mood label, the tone label, the evaluation object and the evaluation result respectively using a preset vector generation model, and composing the vector groups into an input matrix;
inputting the input matrix into a pre-trained behavior prediction model to obtain the prediction result output by the behavior prediction model, the prediction result including a user type and a corresponding probability value, the user type including potential loss user and non-potential loss user;
if the prediction result is a potential loss user and the corresponding probability value is greater than a preset probability threshold, predicting that the behavior of the user is potential loss behavior; and
if the prediction result is a potential loss user and the corresponding probability value is less than or equal to the preset probability threshold, or the prediction result is a non-potential loss user, predicting that the behavior of the user is non-potential loss behavior.
3. The user behavior prediction method according to claim 2, characterized in that the prediction process of the behavior prediction model comprises:
calculating the probability value of each preset result according to the following formula:
where Prob_m is the probability value of the m-th preset result, WeightMatrix_m is the weight matrix corresponding to the m-th preset result, WordMatrix is the input matrix, and M is the number of preset results; and
determining the preset result with the largest probability value, together with its corresponding probability value, as the prediction result corresponding to the input matrix.
4. The user behavior prediction method according to claim 2, characterized in that the behavior prediction model is trained through the following steps:
selecting a preset number of training samples, each training sample including an input matrix and an expected output result, the expected output result including the standard user type corresponding to the training sample and the corresponding standard probability value;
inputting each training sample into an initial behavior prediction model to obtain the training prediction result output by the initial behavior prediction model;
calculating the global error of the current training round from the training prediction results and the expected output results;
if the global error does not satisfy a first preset condition, adjusting the model parameters of the behavior prediction model, taking the adjusted model as the initial behavior prediction model, and returning to the step of inputting each training sample into the initial behavior prediction model to obtain its training prediction result, together with the subsequent steps; and
if the global error satisfies the first preset condition, determining that training of the behavior prediction model is complete.
5. The user behavior prediction method according to claim 1, characterized in that performing speech analysis on the speech segment to obtain the mood label and tone label corresponding to the evaluation result comprises:
performing mood analysis on the speech segment to obtain the mood label corresponding to the evaluation result;
extracting from the evaluation result first keywords whose mood matches the mood label and second keywords whose mood is opposite to the mood label;
performing volume analysis on the speech segment to determine the first decibel values corresponding to the first keywords and the second decibel values corresponding to the second keywords;
calculating the final decibel value corresponding to the speech segment from the first decibel values and the second decibel values; and
obtaining the tone label corresponding to the evaluation result according to the mood label and the final decibel value.
6. The user behavior prediction method according to claim 5, characterized in that calculating the final decibel value corresponding to the speech segment from the first decibel values and the second decibel values comprises:
calculating the final decibel value corresponding to the speech segment according to the following calculation formula:
where DecibelSum is the final decibel value corresponding to the speech segment, Decibel1_i is the i-th first decibel value, Quotiety1 is the preset weight corresponding to the first decibel values, Decibel2_t is the t-th second decibel value, Quotiety2 is the preset weight corresponding to the second decibel values, N is the total number of first decibel values, and T is the total number of second decibel values.
7. The user behavior prediction method according to any one of claims 1 to 6, characterized by comprising, after predicting the behavior of the user according to the mood label, the tone label, the evaluation object and the evaluation result:
judging whether the behavior of the user satisfies a second preset condition;
if the behavior of the user satisfies the second preset condition, obtaining the marker colors corresponding to the mood label, the tone label, the evaluation object and the evaluation result respectively according to a preset correspondence;
color-marking the mood label, the tone label, the evaluation object and the evaluation result with the respective marker colors; and
storing the color-marked mood label, tone label, evaluation object and evaluation result in a preset database in association with the user.
8. A user behavior prediction device, characterized by comprising:
an information conversion module, configured to obtain speech evaluation information in which a user evaluates a target service and convert the speech evaluation information into corresponding text information;
an evaluation result determining module, configured to analyze the text information and determine the evaluation object of the user's evaluation of the target service and the evaluation result corresponding to the evaluation object;
a speech segment extraction module, configured to extract the speech segment corresponding to the evaluation result from the speech evaluation information;
a speech segment analysis module, configured to perform speech analysis on the speech segment to obtain the mood label and tone label corresponding to the evaluation result; and
a behavior prediction module, configured to predict the behavior of the user according to the mood label, the tone label, the evaluation object and the evaluation result.
9. A computer-readable storage medium storing computer-readable instructions, characterized in that the computer-readable instructions, when executed by a processor, implement the steps of the user behavior prediction method according to any one of claims 1 to 7.
10. A terminal device, comprising a memory, a processor and computer-readable instructions stored in the memory and executable on the processor, characterized in that the processor, when executing the computer-readable instructions, implements the following steps:
obtaining speech evaluation information in which a user evaluates a target service, and converting the speech evaluation information into corresponding text information;
analyzing the text information to determine the evaluation object of the user's evaluation of the target service and the evaluation result corresponding to the evaluation object;
extracting the speech segment corresponding to the evaluation result from the speech evaluation information;
performing speech analysis on the speech segment to obtain the mood label and tone label corresponding to the evaluation result; and
predicting the behavior of the user according to the mood label, the tone label, the evaluation object and the evaluation result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910422647.6A CN110263326B (en) | 2019-05-21 | 2019-05-21 | User behavior prediction method, prediction device, storage medium and terminal equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110263326A true CN110263326A (en) | 2019-09-20 |
CN110263326B CN110263326B (en) | 2022-05-03 |
Family
ID=67914925
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910422647.6A Active CN110263326B (en) | 2019-05-21 | 2019-05-21 | User behavior prediction method, prediction device, storage medium and terminal equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110263326B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110751553A (en) * | 2019-10-24 | 2020-02-04 | 深圳前海微众银行股份有限公司 | Identification method and device of potential risk object, terminal equipment and storage medium |
CN111898810A (en) * | 2020-07-16 | 2020-11-06 | 上海松鼠课堂人工智能科技有限公司 | User loss prediction system based on teacher-student communication |
WO2021057146A1 (en) * | 2019-09-23 | 2021-04-01 | 平安科技(深圳)有限公司 | Voice-based interviewee determination method and device, terminal, and storage medium |
CN113010784A (en) * | 2021-03-17 | 2021-06-22 | 北京十一贝科技有限公司 | Method, apparatus, electronic device, and medium for generating prediction information |
CN113657108A (en) * | 2021-08-24 | 2021-11-16 | 平安国际智慧城市科技股份有限公司 | Doctor-patient relationship monitoring method and device, computer readable storage medium and server |
CN113935803A (en) * | 2021-10-15 | 2022-01-14 | 易小武 | Application and management method and system based on big data of small and medium-sized micro-enterprises |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090292583A1 (en) * | 2008-05-07 | 2009-11-26 | Nice Systems Ltd. | Method and apparatus for predicting customer churn |
WO2015078395A1 (en) * | 2013-11-29 | 2015-06-04 | Tencent Technology (Shenzhen) Company Limited | Devices and methods for preventing user churn |
CN106024014A (en) * | 2016-05-24 | 2016-10-12 | 努比亚技术有限公司 | Voice conversion method and device, and mobile terminal |
CN106910512A (en) * | 2015-12-18 | 2017-06-30 | 株式会社理光 | Voice file analysis method, apparatus and system |
CN107609708A (en) * | 2017-09-25 | 2018-01-19 | 广州赫炎大数据科技有限公司 | Customer churn prediction method and system based on mobile game stores |
CN107885726A (en) * | 2017-11-06 | 2018-04-06 | 广州杰赛科技股份有限公司 | Customer service quality evaluation method and device |
CN109408809A (en) * | 2018-09-25 | 2019-03-01 | 天津大学 | Word-vector-based sentiment analysis method for automobile product reviews |
CN109473122A (en) * | 2018-11-12 | 2019-03-15 | 平安科技(深圳)有限公司 | Mood analysis method, device and terminal device based on detection model |
2019
- 2019-05-21: Application CN201910422647.6A filed (CN); granted as CN110263326B, legal status Active
Non-Patent Citations (1)
Title |
---|
Xia Guo'en et al.: "Research on online customer churn prediction incorporating customer value features and sentiment features", 《管理学报》 (Chinese Journal of Management) * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021057146A1 (en) * | 2019-09-23 | 2021-04-01 | 平安科技(深圳)有限公司 | Voice-based interviewee determination method and device, terminal, and storage medium |
CN110751553A (en) * | 2019-10-24 | 2020-02-04 | 深圳前海微众银行股份有限公司 | Identification method and device of potential risk object, terminal equipment and storage medium |
CN111898810A (en) * | 2020-07-16 | 2020-11-06 | 上海松鼠课堂人工智能科技有限公司 | User churn prediction system based on teacher-student communication |
CN111898810B (en) * | 2020-07-16 | 2021-06-01 | 上海松鼠课堂人工智能科技有限公司 | User churn prediction system based on teacher-student communication |
WO2022012605A1 (en) * | 2020-07-16 | 2022-01-20 | 上海松鼠课堂人工智能科技有限公司 | Pre-trained deep neural network model-based user churn prediction system |
CN113010784A (en) * | 2021-03-17 | 2021-06-22 | 北京十一贝科技有限公司 | Method, apparatus, electronic device, and medium for generating prediction information |
CN113010784B (en) * | 2021-03-17 | 2024-02-06 | 北京十一贝科技有限公司 | Method, apparatus, electronic device and medium for generating prediction information |
CN113657108A (en) * | 2021-08-24 | 2021-11-16 | 平安国际智慧城市科技股份有限公司 | Doctor-patient relationship monitoring method and device, computer readable storage medium and server |
CN113935803A (en) * | 2021-10-15 | 2022-01-14 | 易小武 | Application and management method and system based on big data of small and medium-sized micro-enterprises |
Also Published As
Publication number | Publication date |
---|---|
CN110263326B (en) | 2022-05-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110263326A (en) | User behavior prediction method, prediction apparatus, storage medium and terminal device | |
WO2020237869A1 (en) | Question intention recognition method and apparatus, computer device, and storage medium | |
WO2020143233A1 (en) | Method and device for building scorecard model, computer apparatus and storage medium | |
CN110442516B (en) | Information processing method, apparatus, and computer-readable storage medium | |
CN109299344A (en) | Generation method for ranking models, ranking method for search results, device and equipment | |
WO2020073714A1 (en) | Training sample obtaining method, account prediction method, and corresponding devices | |
CN108733644B (en) | Text emotion analysis method, computer-readable storage medium and terminal device | |
CN108491406B (en) | Information classification method and device, computer equipment and storage medium | |
US11803731B2 (en) | Neural architecture search with weight sharing | |
EP3138058A1 (en) | Method and apparatus for classifying object based on social networking service, and storage medium | |
CN110264038A (en) | Generation method and equipment for a product evaluation model | |
CN110362798B (en) | Information retrieval analysis and judgment method, apparatus, computer device and storage medium | |
CN110147926A (en) | Risk level calculation method for service types, storage medium and terminal device | |
CN103488782B (en) | Method for identifying music emotion using lyrics | |
CN111815169A (en) | Business approval parameter configuration method and device | |
CN109902157A (en) | Training sample validity checking method and device | |
CN110119880A (en) | Automatic scoring method, apparatus, storage medium and terminal device | |
CN110263328A (en) | Subject capability type labeling method, device, storage medium and terminal device | |
CN111178537A (en) | Feature extraction model training method and device | |
CN112307048A (en) | Semantic matching model training method, matching device, equipment and storage medium | |
CN111354354B (en) | Training method, training device and terminal equipment based on semantic recognition | |
CN114528391A (en) | Method, device and equipment for training question-answer pair scoring model and storage medium | |
CN113657773A (en) | Method and device for testing speech technology, electronic equipment and storage medium | |
CN113032524A (en) | Trademark infringement identification method, terminal device and storage medium | |
CN107071553A (en) | Method, device and computer readable storage medium for modifying video and voice |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||