CN109787821B - Intelligent prediction method for large-scale mobile client traffic consumption - Google Patents
- Publication number: CN109787821B (application CN201910006654.8A)
- Authority
- CN
- China
- Prior art keywords
- predictor
- value
- layer
- classification
- regression
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The invention discloses an intelligent prediction method for large-scale mobile client traffic consumption, which comprises the following steps: 1) collecting mobile-user attribute features and consumption-behavior data, then visualizing and preprocessing them; 2) constructing a classification predictor and a regression predictor and training them to obtain two prediction models of different granularity; 3) combining the classification predictor and the regression predictor into a trainable linear combination and performing a second stage of training to obtain a joint prediction model; 4) using the joint prediction model to predict each user's traffic consumption for the next month from the mobile user's attribute features and consumption behavior. By combining a classification predictor and a regression predictor and performing two-stage training on large-scale mobile-user data, the invention obtains a joint traffic prediction model with higher accuracy and robustness, providing a more precise and effective marketing approach for promoting mobile services.
Description
Technical Field
The invention relates to the technical field of data mining, in particular to an intelligent prediction method for large-scale mobile customer traffic consumption.
Background
With the spread of 4G mobile communication technology and the explosive growth of the mobile internet, users' lifestyles have gradually changed, and the marketing emphasis of telecommunications operators is shifting from traditional voice services to data traffic. Accurately predicting a user's future traffic consumption lets an operator promote traffic services more effectively, stimulate user consumption, and increase traffic revenue.
Traditional traffic prediction methods predict a user's traffic consumption value through regression alone; they are easily disturbed by noise in the data and lack accuracy and robustness. The present method exploits the numerical-categorical duality of the discretized traffic-consumption field to construct a prediction model that combines a classification predictor with a regression predictor. It mines the implicit rules of traffic consumption from massive mobile-user attribute features and consumption-behavior data, predicts each mobile user's future traffic consumption, and then promotes customized traffic packages, achieving precise marketing and improved traffic revenue.
Disclosure of Invention
The invention aims to overcome the shortcomings of the prior art by providing an intelligent prediction method for large-scale mobile client traffic consumption that combines a classification predictor and a regression predictor and performs two-stage training on large-scale mobile-user data, so that the resulting joint traffic prediction model attains higher accuracy and robustness, providing a more precise and effective marketing approach for promoting mobile services.
To achieve this purpose, the technical scheme provided by the invention is as follows. An intelligent prediction method for large-scale mobile customer traffic consumption comprises the following steps:
1) collecting mobile-user attribute features and consumption-behavior data, then visualizing and preprocessing them;
2) constructing a classification predictor and a regression predictor and training them to obtain two prediction models of different granularity;
3) combining the classification predictor and the regression predictor into a trainable linear combination and performing a second stage of training to obtain a joint prediction model;
4) using the joint prediction model to predict each user's traffic consumption for the next month from the mobile user's attribute features and consumption behavior.
In step 1), before preprocessing, the million-scale data are visualized to quickly reveal data characteristics and spot abnormal values. The visualization method comprises the following steps:
1.1) applying hash bucketing and binning to all feature fields of the mobile users;
1.2) taking any two feature fields, one as the X axis and the other as the Y axis, and plotting the data points in a Cartesian coordinate system;
1.3) data points with the same feature value do not overlap; instead, they are drawn in a closely packed, point-by-point arrangement.
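The visualization steps above can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation: the function names, the bucket count, and the offset step are assumptions. It shows hash bucketing of a feature field (step 1.1) and the closely packed, non-overlapping arrangement of identical points (step 1.3):

```python
import numpy as np

def hash_bucket(values, n_buckets=50):
    # step 1.1): hash-bucket a raw feature field into a small integer range
    return np.array([hash(str(v)) % n_buckets for v in values])

def spread_points(xs, ys, step=0.02):
    # step 1.3): points sharing the same (x, y) bucket pair receive small
    # ordered offsets, so they are drawn side by side instead of stacked
    counts = {}
    spread = []
    for x, y in zip(xs, ys):
        n = counts.get((x, y), 0)
        counts[(x, y)] = n + 1
        spread.append((x + step * n, y))
    return spread
```

The spread coordinates would then feed any 2-D scatter plot; plotting bucket indices instead of raw values keeps the axes bounded even with millions of points.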
In step 1), the preprocessing operation comprises extracting the user traffic-consumption field from the data as the label, then bucketing and binning it to give the traffic-consumption field a numerical-categorical duality suitable for the combined prediction of a classification predictor and a regression predictor.
In step 2), the classification predictor is a neural-network (DNN) classification predictor, constructed and trained as follows:
2.1.1) constructing the input-layer, output-layer, and hidden-layer neural units; there is one input layer and one output layer, the number of input units corresponds to the data dimension, and the number of output units corresponds to the number of classes; the number of hidden layers is arbitrary;
2.1.2) the neural network propagates forward layer by layer, from the input layer through the hidden layers to the output layer, the neural units of each layer being computed by the formula

$$z_k^{(l+1)} = \sum_{j=0}^{s_l} \Theta_{kj}^{(l)} a_j^{(l)}, \qquad a_k^{(l+1)} = g\left(z_k^{(l+1)}\right)$$

in the formula, $a$ is the neural-unit activation value, $z$ is the input to the neural-unit activation function, $g$ is the activation function, $l$ is the index of the network layer, $k$ is the index of a neural unit in layer $l+1$, $s_l$ is the number of neural units in layer $l$, and $\Theta^{(l)}$ is the parameter matrix of layer $l$; $a_k^{(l+1)}$ denotes the activation value of the $k$-th neural unit of layer $l+1$, and $z_k^{(l+1)}$ denotes the input to the activation function of the $k$-th neural unit of layer $l+1$;
2.1.3) training the neural network with the cross entropy as the cost function:

$$J(\Theta) = -\frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{K}\left[ y_k^{(i)}\log\left(h_\Theta(x^{(i)})\right)_k + \left(1-y_k^{(i)}\right)\log\left(1-\left(h_\Theta(x^{(i)})\right)_k\right)\right] + \frac{\lambda}{2m}\sum_{l=1}^{L-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}}\left(\Theta_{ji}^{(l)}\right)^2$$

the cost function is the sum of the two terms; in the first term, $m$ is the total number of samples, $K$ is the number of neural units in the output layer, $i$ indexes the samples, $k$ indexes the neural units, $y_k^{(i)}$ is the value of the $k$-th neural unit in the true label of the $i$-th sample, $x^{(i)}$ is the feature vector of the $i$-th sample, and $\left(h_\Theta(x^{(i)})\right)_k$ is the value of the $k$-th neural unit in the prediction for the $i$-th sample; the second term is the regularization term, where $\lambda$ is the regularization coefficient, $L$ is the number of network layers, and $\Theta_{ji}^{(l)}$ is the parameter in row $j$, column $i$ of the parameter matrix of layer $l$;
2.1.4) using the Adam adaptive optimizer during training, and saving the current model after training finishes.
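As a minimal sketch of the layer-by-layer forward propagation in step 2.1.2), assuming a sigmoid activation and parameter matrices with a prepended bias unit (both assumptions; the patent does not fix the activation function or the bias convention):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, thetas, g=sigmoid):
    # a^(1) is the input; each step computes z^(l+1) = Theta^(l) @ [1; a^(l)]
    # and a^(l+1) = g(z^(l+1)), matching the formula in step 2.1.2)
    a = np.asarray(x, dtype=float)
    for theta in thetas:
        a = g(theta @ np.concatenate(([1.0], a)))  # prepend bias unit a_0 = 1
    return a  # activations of the output layer
```

Under this convention, for a network with $s_l$ units in layer $l$, each $\Theta^{(l)}$ has shape $(s_{l+1}, s_l + 1)$.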
In step 2), the regression predictor is a decision-tree regression predictor, constructed and trained as follows:
2.2.1) constructing a CART regression tree that selects features by the Gini coefficient; each split bisects the values of one feature, forming a binary tree;
2.2.2) adding constraints to the tree, limiting the maximum depth, the minimum number of samples an internal node needs in order to split, and the minimum number of samples per leaf node;
2.2.3) evaluating the tree model with the mean square error as the cost function, and saving the model with the smallest error; the mean square error is computed as

$$\mathrm{MSE} = \frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2$$

where MSE is the mean square error, $m$ is the total number of samples, $y^{(i)}$ is the true value of the $i$-th sample, $x^{(i)}$ is the feature vector of the $i$-th sample, and $h_\theta(x^{(i)})$ is the predicted value of the $i$-th sample.
In step 3), the trained classification predictor and regression predictor are linearly combined and trained again to obtain the joint prediction model. The specific process is as follows:
3.1) taking the activation values of the last layer of the neural-network classification predictor; in order, they serve as the confidence values for the bucketed traffic-size interval classes;
3.2) taking the traffic-size prediction of the decision-tree regression predictor;
3.3) multiplying each class value of the classification predictor element-wise by its corresponding confidence to form a local linear combination, then combining the classification predictor and the regression predictor globally and linearly under the influence factors, according to the formula

$$h_\theta(x) = \alpha \cdot w^{T}\left(\mathrm{Vector\_classes} \odot \mathrm{Vector\_confidence}\right) + \beta \cdot h_{\mathrm{reg}}(x)$$

in the formula, $h_\theta(x)$ is the hypothesis function of the joint prediction model, Vector_classes is the vector of traffic-size interval class values, Vector_confidence is the corresponding confidence vector, $w^T$ is the weight vector of the classification predictor's local linear combination, $h_{\mathrm{reg}}(x)$ is the real-valued output of the regression predictor, and $\alpha$ and $\beta$ are the influence factors of the classification predictor and the regression predictor, respectively;
3.4) training the influence factors and the classification predictor's weight vector simultaneously, with the mean square error as the cost function and the Adam adaptive optimizer, and saving the current model after training.
In step 4), the joint prediction model predicts the mobile user's traffic consumption for the next month; compared with using a classification predictor or a regression predictor alone, it achieves higher accuracy and robustness. The specific process is as follows:
4.1) applying the preprocessing operation to the new mobile-user data: extracting the user traffic-consumption field from the data as the label, then bucketing and binning it to give the traffic-consumption field the numerical-categorical duality suitable for combined prediction by the classification predictor and the regression predictor;
4.2) running the joint prediction model with the preprocessed data as input;
4.3) the model outputting the predicted value.
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. The invention combines a regression predictor and a classification predictor and applies them to predicting the same label, achieving higher accuracy and robustness than either model alone.
2. The invention improves model quality and stability through the two-stage training used to combine the regression predictor and the classification predictor.
3. The invention realizes two-dimensional visualization of millions of data points, which greatly helps in observing data characteristics, spotting abnormal values, and interpreting the model.
4. The invention guides telecom operators in marketing traffic services and can provide customized traffic-package recommendations for mobile users, thereby improving the operator's traffic revenue and the users' experience.
Drawings
FIG. 1 is a logic flow diagram of the method of the present invention.
Detailed Description
The present invention will be further described with reference to the following specific examples.
As shown in fig. 1, the intelligent prediction method for large-scale mobile client traffic consumption provided by this embodiment includes the following steps:
1) Collect mobile-user attribute features and consumption-behavior data, then visualize and preprocess them.
1.1) The visualization procedure is as follows:
1.1.1) apply hash bucketing and binning to all feature fields of the mobile users;
1.1.2) take any two feature fields, one as the X axis and the other as the Y axis, and plot the data points in a Cartesian coordinate system;
1.1.3) data points with the same feature value do not overlap; instead, they are drawn in a closely packed, point-by-point arrangement.
1.2) Preprocess the data in light of the visualization results, as follows:
1.2.1) delete fields containing abnormal values, reducing noise in the data set;
1.2.2) delete fields in which NaN values account for more than 99 percent;
1.2.3) for high-cardinality feature fields, bin by the frequency of field values, mapping the original high-cardinality feature distribution to a new low-cardinality one while preserving the high-frequency portion of the original distribution as much as possible;
1.2.4) for fields of time type, choose a time scale of moderate variance according to the field values' range of variation, convert to time offsets, and standardize;
1.2.5) one-hot encode the categorical fields;
1.2.6) mean-fill fields with missing values;
1.2.7) extract the traffic-consumption field and apply bucketing and binning.
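A hypothetical sketch of the preprocessing pipeline in steps 1.2.1)-1.2.7), using pandas. The column names, the number of traffic buckets, and the function signature are illustrative assumptions; only the 99% NaN cutoff, mean filling, one-hot encoding, and the bucketing of the traffic field come from the text:

```python
import numpy as np
import pandas as pd

def preprocess(df, traffic_col="traffic_mb", nan_threshold=0.99, n_bins=5):
    df = df.copy()
    # 1.2.2): drop fields that are almost entirely NaN
    keep = [c for c in df.columns if df[c].isna().mean() <= nan_threshold]
    df = df[keep]
    # 1.2.6): mean-fill remaining numeric missing values
    for c in df.select_dtypes(include=[np.number]).columns:
        df[c] = df[c].fillna(df[c].mean())
    # 1.2.5): one-hot encode categorical fields
    df = pd.get_dummies(df)
    # 1.2.7): bucket the traffic field into interval classes (the label's
    # categorical form), keeping the raw value as the regression label
    df["traffic_class"] = pd.cut(df[traffic_col], bins=n_bins, labels=False)
    return df
```

Steps 1.2.1), 1.2.3), and 1.2.4) (outlier deletion, frequency binning of high-cardinality fields, time-offset conversion) are omitted here because they depend on field semantics the patent does not specify.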
2) Construct a classification predictor and a regression predictor, and train them to obtain two prediction models of different granularity.
2.1) the neural network classification predictor construction, training and storage processes are as follows:
2.1.1) Construct the input-layer, output-layer, and hidden-layer neural units. There is one input layer and one output layer; the number of input units corresponds to the data dimension and the number of output units to the number of classes. There are two hidden layers, with 32 and 64 units respectively.
2.1.2) The neural network propagates forward layer by layer, from the input layer through the hidden layers to the output layer, computing the neural units of each layer by the formula

$$z_k^{(l+1)} = \sum_{j=0}^{s_l} \Theta_{kj}^{(l)} a_j^{(l)}, \qquad a_k^{(l+1)} = g\left(z_k^{(l+1)}\right)$$

In the formula, $a$ is the neural-unit activation value, $z$ is the input to the neural-unit activation function, $g$ is the activation function, $l$ is the index of the network layer, $k$ is the index of a neural unit in layer $l+1$, $s_l$ is the number of neural units in layer $l$, and $\Theta^{(l)}$ is the parameter matrix of layer $l$. $a_k^{(l+1)}$ denotes the activation value of the $k$-th neural unit of layer $l+1$, and $z_k^{(l+1)}$ denotes the input to the activation function of the $k$-th neural unit of layer $l+1$.
2.1.3) Train the neural network with the cross entropy as the cost function:

$$J(\Theta) = -\frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{K}\left[ y_k^{(i)}\log\left(h_\Theta(x^{(i)})\right)_k + \left(1-y_k^{(i)}\right)\log\left(1-\left(h_\Theta(x^{(i)})\right)_k\right)\right] + \frac{\lambda}{2m}\sum_{l=1}^{L-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}}\left(\Theta_{ji}^{(l)}\right)^2$$

The cost function is the sum of two terms. In the first term, $m$ is the total number of samples, $K$ is the number of neural units in the output layer, $i$ indexes the samples, $k$ indexes the neural units, $y_k^{(i)}$ is the value of the $k$-th neural unit in the true label of the $i$-th sample, $x^{(i)}$ is the feature vector of the $i$-th sample, and $\left(h_\Theta(x^{(i)})\right)_k$ is the value of the $k$-th neural unit in the prediction for the $i$-th sample. The second term is the regularization term, where $\lambda$ is the regularization coefficient, $L$ is the number of network layers, and $\Theta_{ji}^{(l)}$ is the parameter in row $j$, column $i$ of the parameter matrix of layer $l$.
2.1.4) The training process uses the Adam adaptive optimizer with an initial learning rate of 0.0001; the training batch size and the maximum number of iterations are both set to 200.
2.1.5) In each training pass, data enter the network at the input layer, pass through the hidden-layer transformations, and yield a prediction at the output layer; the prediction is compared with the true value, the training loss and gradients are computed from the cost function, and the network parameters are updated. Training stops when the loss improves by less than 0.0001 over two consecutive iterations, or when 200 training epochs have completed. The current model is saved once training finishes.
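Since the patent names no framework, the classifier of steps 2.1.1)-2.1.5) could be approximated with scikit-learn's `MLPClassifier`, which likewise trains with cross-entropy and Adam. The hidden-layer sizes, learning rate, batch size, iteration cap, and early-stopping tolerance below mirror the stated hyperparameters; the ReLU activation is an assumption, since the patent leaves $g$ unspecified:

```python
from sklearn.neural_network import MLPClassifier

clf = MLPClassifier(
    hidden_layer_sizes=(32, 64),  # two hidden layers, 32 and 64 units (2.1.1)
    activation="relu",            # assumption: activation g is not specified
    solver="adam",                # Adam adaptive optimizer (2.1.4)
    learning_rate_init=0.0001,    # initial learning rate 0.0001
    batch_size=200,               # training batch size 200
    max_iter=200,                 # at most 200 training epochs
    tol=0.0001,                   # stop when loss improves < 0.0001 ...
    n_iter_no_change=2,           # ... over two consecutive iterations (2.1.5)
)
```

Fitting with `clf.fit(X, y)` on the preprocessed features and bucketed traffic classes then reproduces the training loop of step 2.1.5); the L2 regularization term corresponds to `MLPClassifier`'s `alpha` parameter.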
2.2) the construction, training and storage processes of the regression predictor of the decision tree are as follows:
2.2.1) Construct a CART regression tree that selects features by the Gini coefficient, bisecting the values of one feature at each split to form a binary tree.
2.2.2) Add constraints to the tree: the maximum depth is 10, an internal node must contain at least 1000 samples to split, and each leaf node must contain at least 50 samples.
2.2.3) Evaluate the tree model with the mean square error as the cost function, and save the model with the smallest error. The mean square error is computed as

$$\mathrm{MSE} = \frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2$$

where MSE is the mean square error, $m$ is the total number of samples, $y^{(i)}$ is the true value of the $i$-th sample, $x^{(i)}$ is the feature vector of the $i$-th sample, and $h_\theta(x^{(i)})$ is the predicted value of the $i$-th sample.
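A sketch of the constrained CART regression tree of steps 2.2.1)-2.2.3) using scikit-learn (an assumption; the patent names no library). Note that scikit-learn's regression trees split on squared error; its Gini criterion applies only to classification trees, so the patent's Gini-based feature selection is stated here as-is but not reproduced:

```python
from sklearn.tree import DecisionTreeRegressor

reg = DecisionTreeRegressor(
    max_depth=10,            # maximum tree depth (2.2.2)
    min_samples_split=1000,  # an internal node needs >= 1000 samples to split
    min_samples_leaf=50,     # each leaf node keeps >= 50 samples
)
```

After fitting candidate trees with `reg.fit(X, y)`, the model with the lowest mean square error on held-out data would be saved, per step 2.2.3).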
3) The combined classification predictor and the regression predictor are in trainable linear combination, and the second stage of training is carried out to obtain a combined prediction model, wherein the specific process is as follows:
and 3.1) taking the activation value of the last layer of the neural network classification predictor, and sequentially obtaining confidence degrees corresponding to the classification values of the flow size intervals after barrel division.
And 3.2) taking a flow size predicted value of the decision tree regression predictor.
3.3) Multiply each class value of the classification predictor element-wise by its corresponding confidence to form a local linear combination. Then, under the influence factors, combine the classification predictor and the regression predictor globally and linearly. The specific formula is

$$h_\theta(x) = \alpha \cdot w^{T}\left(\mathrm{Vector\_classes} \odot \mathrm{Vector\_confidence}\right) + \beta \cdot h_{\mathrm{reg}}(x)$$

In the formula, $h_\theta(x)$ is the hypothesis function of the joint prediction model, Vector_classes is the vector of traffic-size interval class values, Vector_confidence is the corresponding confidence vector, $w^T$ is the weight vector of the classification predictor's local linear combination, $h_{\mathrm{reg}}(x)$ is the real-valued output of the regression predictor, and $\alpha$ and $\beta$ are the influence factors of the classification predictor and the regression predictor, respectively.
3.4) Train the influence factors and the classification predictor's weight vector simultaneously, with the mean square error as the cost function and the Adam adaptive optimizer, and save the current model after training.
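The hypothesis function of the joint model in step 3.3) can be written directly in NumPy. The function name and argument layout are assumptions; the trainable quantities `w`, `alpha`, and `beta` are the ones fitted in the second training stage against the MSE cost:

```python
import numpy as np

def joint_predict(class_conf, class_values, reg_pred, w, alpha, beta):
    # local linear combination: w^T (Vector_classes ⊙ Vector_confidence)
    local = w @ (np.asarray(class_values) * np.asarray(class_conf))
    # global linear combination under the influence factors alpha and beta
    return alpha * local + beta * reg_pred
```

For example, with class values [1, 2, 3], confidences [0.2, 0.5, 0.3], unit weights, a regression prediction of 3.0, and alpha = beta = 0.5, the joint prediction is 0.5 * 2.1 + 0.5 * 3.0 = 2.55.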
4) Use the joint prediction model to predict each user's traffic consumption for the next month from the mobile user's attribute features and consumption behavior, as follows:
4.1) apply the same preprocessing operation as in step 1.2) to the new mobile-user data;
4.2) run the joint prediction model with the preprocessed data as input;
4.3) the model outputs the predicted value.
The embodiments above are preferred embodiments of the present invention, but the invention is not limited to them; any change, modification, substitution, combination, or simplification that does not depart from the spirit and principle of the invention is an equivalent replacement and is intended to fall within the scope of the invention.
Claims (2)
1. An intelligent prediction method for large-scale mobile customer traffic consumption, characterized by comprising the following steps:
1) collecting mobile-user attribute features and consumption-behavior data, then visualizing and preprocessing them, wherein the visualization method comprises the following steps:
1.1) applying hash bucketing and binning to all feature fields of the mobile users;
1.2) taking any two feature fields, one as the X axis and the other as the Y axis, and plotting the data points in a Cartesian coordinate system;
1.3) data points with the same feature value do not overlap but are drawn in a closely packed, point-by-point arrangement;
and the preprocessing operation comprises extracting the user traffic-consumption field from the data as the label, then bucketing and binning it to give the traffic-consumption field a numerical-categorical duality suitable for the combined prediction of a classification predictor and a regression predictor; the preprocessing, guided by the visualization results, proceeds as follows:
1.2.1) deleting fields containing abnormal values to reduce noise in the data set;
1.2.2) deleting fields in which NaN values account for more than 99 percent;
1.2.3) for high-cardinality feature fields, binning by the frequency of field values, mapping the original high-cardinality feature distribution to a new low-cardinality one while preserving the high-frequency portion of the original distribution as much as possible;
1.2.4) for fields of time type, choosing a time scale of moderate variance according to the field values' range of variation, converting to time offsets, and standardizing;
1.2.5) one-hot encoding the categorical fields;
1.2.6) mean-filling fields with missing values;
1.2.7) extracting the traffic-consumption field and applying bucketing and binning;
2) constructing a classification predictor and a regression predictor and training them to obtain two prediction models of different granularity;
the classification predictor is a neural-network (DNN) classification predictor, constructed and trained as follows:
2.1.1) constructing the input-layer, output-layer, and hidden-layer neural units; there is one input layer and one output layer, the number of input units corresponds to the data dimension, and the number of output units corresponds to the number of classes; the number of hidden layers is arbitrary;
2.1.2) the neural network propagates forward layer by layer, from the input layer through the hidden layers to the output layer, the neural units of each layer being computed by the formula

$$z_k^{(l+1)} = \sum_{j=0}^{s_l} \Theta_{kj}^{(l)} a_j^{(l)}, \qquad a_k^{(l+1)} = g\left(z_k^{(l+1)}\right)$$

in the formula, $a$ is the neural-unit activation value, $z$ is the input to the neural-unit activation function, $g$ is the activation function, $l$ is the index of the network layer, $k$ is the index of a neural unit in layer $l+1$, $s_l$ is the number of neural units in layer $l$, and $\Theta^{(l)}$ is the parameter matrix of layer $l$; $a_k^{(l+1)}$ denotes the activation value of the $k$-th neural unit of layer $l+1$, and $z_k^{(l+1)}$ denotes the input to the activation function of the $k$-th neural unit of layer $l+1$;
2.1.3) training the neural network with the cross entropy as the cost function:

$$J(\Theta) = -\frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{K}\left[ y_k^{(i)}\log\left(h_\Theta(x^{(i)})\right)_k + \left(1-y_k^{(i)}\right)\log\left(1-\left(h_\Theta(x^{(i)})\right)_k\right)\right] + \frac{\lambda}{2m}\sum_{l=1}^{L-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}}\left(\Theta_{ji}^{(l)}\right)^2$$

the cost function is the sum of the two terms; in the first term, $m$ is the total number of samples, $K$ is the number of neural units in the output layer, $i$ indexes the samples, $k$ indexes the neural units, $y_k^{(i)}$ is the value of the $k$-th neural unit in the true label of the $i$-th sample, $x^{(i)}$ is the feature vector of the $i$-th sample, and $\left(h_\Theta(x^{(i)})\right)_k$ is the value of the $k$-th neural unit in the prediction for the $i$-th sample; the second term is the regularization term, where $\lambda$ is the regularization coefficient, $L$ is the number of network layers, and $\Theta_{ji}^{(l)}$ is the parameter in row $j$, column $i$ of the parameter matrix of layer $l$;
2.1.4) using the Adam adaptive optimizer during training, and saving the current model after training;
the regression predictor is a decision-tree regression predictor, constructed and trained as follows:
2.2.1) constructing a CART regression tree that selects features by the Gini coefficient; each split bisects the values of one feature, forming a binary tree;
2.2.2) adding constraints to the tree, limiting the maximum depth, the minimum number of samples an internal node needs in order to split, and the minimum number of samples per leaf node;
2.2.3) evaluating the tree model with the mean square error as the cost function, and saving the model with the smallest error; the mean square error is computed as

$$\mathrm{MSE} = \frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2$$

where MSE is the mean square error, $m$ is the total number of samples, $y^{(i)}$ is the true value of the $i$-th sample, $x^{(i)}$ is the feature vector of the $i$-th sample, and $h_\theta(x^{(i)})$ is the predicted value of the $i$-th sample;
3) combining the classification predictor and the regression predictor into a trainable linear combination and performing a second stage of training to obtain a joint prediction model, the specific process being as follows:
3.1) taking the activation values of the last layer of the neural-network classification predictor; in order, they serve as the confidence values for the bucketed traffic-size interval classes;
3.2) taking the traffic-size prediction of the decision-tree regression predictor;
3.3) multiplying each class value of the classification predictor element-wise by its corresponding confidence to form a local linear combination, then combining the classification predictor and the regression predictor globally and linearly under the influence factors, according to the formula

$$h_\theta(x) = \alpha \cdot w^{T}\left(\mathrm{Vector\_classes} \odot \mathrm{Vector\_confidence}\right) + \beta \cdot h_{\mathrm{reg}}(x)$$

in the formula, $h_\theta(x)$ is the hypothesis function of the joint prediction model, Vector_classes is the vector of traffic-size interval class values, Vector_confidence is the corresponding confidence vector, $w^T$ is the weight vector of the classification predictor's local linear combination, $h_{\mathrm{reg}}(x)$ is the real-valued output of the regression predictor, and $\alpha$ and $\beta$ are the influence factors of the classification predictor and the regression predictor, respectively;
3.4) training the influence factors and the classification predictor's weight vector simultaneously, with the mean square error as the cost function and the Adam adaptive optimizer, and saving the current model after training;
4) using the joint prediction model to predict each user's traffic consumption for the next month from the mobile user's attribute features and consumption behavior.
2. The intelligent prediction method for large-scale mobile customer traffic consumption according to claim 1, characterized in that in step 4) the joint prediction model predicts the mobile user's traffic consumption for the next month and, compared with using a classification predictor or a regression predictor alone, achieves higher accuracy and robustness; the specific process is as follows:
4.1) applying the preprocessing operation to the new mobile-user data: extracting the user traffic-consumption field from the data as the label, then bucketing and binning it to give the traffic-consumption field the numerical-categorical duality suitable for combined prediction by the classification predictor and the regression predictor;
4.2) running the joint prediction model with the preprocessed data as input;
4.3) the model outputting the predicted value.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910006654.8A CN109787821B (en) | 2019-01-04 | 2019-01-04 | Intelligent prediction method for large-scale mobile client traffic consumption |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910006654.8A CN109787821B (en) | 2019-01-04 | 2019-01-04 | Intelligent prediction method for large-scale mobile client traffic consumption |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109787821A CN109787821A (en) | 2019-05-21 |
CN109787821B true CN109787821B (en) | 2020-06-19 |
Family
ID=66499919
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910006654.8A Expired - Fee Related CN109787821B (en) | 2019-01-04 | 2019-01-04 | Intelligent prediction method for large-scale mobile client traffic consumption |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109787821B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112330368B (en) * | 2020-11-16 | 2024-04-09 | 腾讯科技(深圳)有限公司 | Data processing method, system, storage medium and terminal equipment |
CN112926845A (en) * | 2021-02-18 | 2021-06-08 | 上海翰声信息技术有限公司 | Big data based outbound method, electronic device and computer readable storage medium |
CN113869750A (en) * | 2021-09-30 | 2021-12-31 | 中国计量大学 | Automatic elevator maintenance enterprise rating system based on big data |
CN117057852B (en) * | 2023-10-09 | 2024-01-26 | 头流(杭州)网络科技有限公司 | Internet marketing system and method based on artificial intelligence technology |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105302831A (en) * | 2014-07-18 | 2016-02-03 | 上海星红桉数据科技有限公司 | High-speed calculation analysis method based on mass user behavior data |
CN107972662A (en) * | 2017-10-16 | 2018-05-01 | 华南理工大学 | To anti-collision warning method before a kind of vehicle based on deep learning |
WO2018123051A1 (en) * | 2016-12-28 | 2018-07-05 | 株式会社日立製作所 | Information processing system and method |
CN108388954A (en) * | 2018-01-05 | 2018-08-10 | 上海电力学院 | A kind of cascade hydropower robust Optimization Scheduling based on random security domain |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150356576A1 (en) * | 2011-05-27 | 2015-12-10 | Ashutosh Malaviya | Computerized systems, processes, and user interfaces for targeted marketing associated with a population of real-estate assets |
US20160358290A1 (en) * | 2012-04-20 | 2016-12-08 | Humana Inc. | Health severity score predictive model |
CN102982373B (en) * | 2012-12-31 | 2015-04-22 | 山东大学 | OIN (Optimal Input Normalization) neural network training method for mixed SVM (Support Vector Machine) regression algorithm |
CN106096623A (en) * | 2016-05-25 | 2016-11-09 | 中山大学 | A kind of crime identifies and Forecasting Methodology |
- 2019-01-04 CN CN201910006654.8A patent/CN109787821B/en not_active Expired - Fee Related
Also Published As
Publication number | Publication date |
---|---|
CN109787821A (en) | 2019-05-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109787821B (en) | Intelligent prediction method for large-scale mobile client traffic consumption | |
CN109165664B (en) | Attribute-missing data set completion and prediction method based on generation of countermeasure network | |
CN113905391B (en) | Integrated learning network traffic prediction method, system, equipment, terminal and medium | |
CN111563706A (en) | Multivariable logistics freight volume prediction method based on LSTM network | |
CN112488395A (en) | Power distribution network line loss prediction method and system | |
CN110212528B (en) | Power distribution network measurement data missing reconstruction method | |
CN113298230B (en) | Prediction method based on unbalanced data set generated against network | |
CN112910690A (en) | Network traffic prediction method, device and equipment based on neural network model | |
CN110110372B (en) | Automatic segmentation prediction method for user time sequence behavior | |
CN113487855B (en) | Traffic flow prediction method based on EMD-GAN neural network structure | |
CN111798991A (en) | LSTM-based method for predicting population situation of new coronary pneumonia epidemic situation | |
CN111178585A (en) | Fault reporting amount prediction method based on multi-algorithm model fusion | |
CN112650933B (en) | Session recommendation method based on higher-order aggregation graph convolution fusion multi-head attention mechanism | |
CN105893669A (en) | Global simulation performance predication method based on data digging | |
CN112200375B (en) | Prediction model generation method, prediction model generation device, and computer-readable medium | |
CN109754122A (en) | A kind of Numerical Predicting Method of the BP neural network based on random forest feature extraction | |
CN116187835A (en) | Data-driven-based method and system for estimating theoretical line loss interval of transformer area | |
CN116522912B (en) | Training method, device, medium and equipment for package design language model | |
CN113886454A (en) | Cloud resource prediction method based on LSTM-RBF | |
CN115794805B (en) | Method for supplementing measurement data of medium-low voltage distribution network | |
CN117196033A (en) | Wireless communication network knowledge graph representation learning method based on heterogeneous graph neural network | |
CN114861739B (en) | Characteristic channel selectable multi-component system degradation prediction method and system | |
CN115496338A (en) | Electric power payment channel drainage method, system and medium based on big data technology | |
CN115456260A (en) | Customer service telephone traffic prediction method | |
CN112667394B (en) | Computer resource utilization rate optimization method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20200619 |