CN110427484A - A kind of Chinese natural language processing method based on deep learning - Google Patents
- Publication number
- CN110427484A (Application CN201810387340.2A)
- Authority
- CN
- China
- Prior art keywords: module, training, model, data, chinese
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Abstract
The present invention relates to a Chinese natural language processing method based on deep learning. The server comprises a model scheduling module, a data slicing module, a data cleaning module, a model training module, and a model database; the model scheduling module includes a human-computer interaction interface and an operation scheduling module, and the model training module includes a general mathematical algorithm module, a decoding algorithm module, a hybrid tuning algorithm module, and a depth sorting module. The process includes configuring parameters, inputting training data, training on the data, generating a customized model, and predicting on raw text. The present invention completes Chinese natural language processing tasks using a Chinese deep neural network model generated by machine learning training, and features intelligent machine learning.
Description
Technical field
The present invention relates to a Chinese natural language processing method, and in particular to a deep-learning-based Chinese natural language processing method for NLP word segmentation; it belongs to the field of Chinese natural language processing.
Background technique
Machine learning refers to a computer simulating or realizing human learning behavior in order to acquire new knowledge or skills, and to reorganize existing knowledge structures so as to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent; its applications span every field of artificial intelligence, and it mainly uses induction and synthesis rather than deduction. Machine learning is widely applied in fields such as data mining, natural language processing, biometric recognition, search engines, and medical diagnosis. In the field of natural language processing specifically, the goal is to realize a natural language processing pipeline based on deep learning, using a neural network model generated by training to process data and output analysis results.
Summary of the invention
The present invention discloses a new scheme for a Chinese natural language processing method based on deep learning: a Chinese deep neural network model generated by machine learning training completes Chinese natural language processing tasks, solving the problem that existing similar schemes lack an intelligent deep learning system.
The Chinese natural language processing method based on deep learning of the present invention is realized by a server. The server comprises a model scheduling module, a data slicing module, a data cleaning module, a model training module, and a model database. The model scheduling module includes a human-computer interaction interface and an operation scheduling module; the model training module includes a general mathematical algorithm module, a decoding algorithm module, a hybrid tuning algorithm module, and a depth sorting module. The method includes the following process: the user configures the model's operating parameters and training data set by calling the operation scheduling module through the human-computer interaction interface; according to the user-configured model parameters and training data set, the server calls the data slicing module, the data cleaning module, and the model training module to process the training data, generate a customized model, and store it in the model database. The data slicing module cuts the data into different dimensions; the data cleaning module filters out valid data; the model training module loads the data into the deep learning network and deploys the configured algorithms to perform training computations; the hybrid tuning algorithm module calls and coordinates the general mathematical algorithm module, the decoding algorithm module, and the depth sorting module to train on the data and generate the customized model. After the customized model predicts on the raw text data, the server reads and outputs the text sequence labeling result.
Further, the model training process of the method of this scheme includes unsupervised pre-training and supervised fine-tuning, both of which use the back-propagation algorithm. The back-propagation algorithm includes the following process: for each training example, compute the partial derivatives of the loss function with respect to the model parameters at the current parameter values on that example, and iterate the model parameters in the gradient-descent direction according to those partial derivatives; the gradients are computed by propagating the error backward layer by layer through the neural network.
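In conventional notation (the original states this update rule only in words, and the symbols below are our own, not the patent's), the per-example gradient-descent iteration just described can be written as:

```latex
\theta \leftarrow \theta - \eta \,
\frac{\partial L(\theta;\, x_i, y_i)}{\partial \theta}
```

where $\theta$ denotes the model parameters, $\eta$ the learning rate, and $L(\theta; x_i, y_i)$ the loss function evaluated on training example $(x_i, y_i)$ at the current parameter values.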
Further, the hybrid tuning algorithm module of the method of this scheme completes the model training process in a multi-task parallel manner; the multi-task parallel training process shares the common features of the tasks to improve the training effect.
The Chinese natural language processing method based on deep learning of the present invention completes Chinese natural language processing tasks using a Chinese deep neural network model generated by machine learning training, and features intelligent machine learning.
Detailed description of the invention
Fig. 1 is a module schematic diagram of the Chinese natural language processing method based on deep learning of the present invention.
Fig. 2 is the overall flowchart of the Chinese natural language processing method based on deep learning.
Specific embodiment
The Chinese natural language processing method based on deep learning of the present invention is realized by a server. The server comprises a model scheduling module, a data slicing module, a data cleaning module, a model training module, and a model database. The model scheduling module includes a human-computer interaction interface and an operation scheduling module; the model training module includes a general mathematical algorithm module, a decoding algorithm module, a hybrid tuning algorithm module, and a depth sorting module. The method includes the following process: the user configures the model's operating parameters and training data set by calling the operation scheduling module through the human-computer interaction interface; according to the user-configured model parameters and training data set, the server calls the data slicing module, the data cleaning module, and the model training module to process the training data, generate a customized model, and store it in the model database. The data slicing module cuts the data into different dimensions; the data cleaning module filters out valid data; the model training module loads the data into the deep learning network and deploys the configured algorithms to perform training computations; the hybrid tuning algorithm module calls and coordinates the general mathematical algorithm module, the decoding algorithm module, and the depth sorting module to train on the data and generate the customized model. After the customized model predicts on the raw text data, the server reads and outputs the text sequence labeling result. The model training process of the method of this scheme includes unsupervised pre-training and supervised fine-tuning, both of which use the back-propagation algorithm. The back-propagation algorithm includes the following process: for each training example, compute the partial derivatives of the loss function with respect to the model parameters at the current parameter values on that example, and iterate the model parameters in the gradient-descent direction according to those partial derivatives; the gradients are computed by propagating the error backward layer by layer through the neural network. Further, the hybrid tuning algorithm module of the method of this scheme completes the model training process in a multi-task parallel manner, and the multi-task parallel training process shares the common features of the tasks to improve the training effect.
This scheme discloses a Chinese natural language processing system based on deep learning that, through good word segmentation technology, allows the computer to understand semantics and to distinguish ambiguous grammatical readings well; it belongs to the field of NLP word segmentation. This scheme includes complete model configuration, parallel unsupervised pre-training and supervised fine-tuning modules, support for multi-task training, and functions such as label prediction and decoding. It can be used to train a (multi-task) Chinese NLP deep neural network model and to complete Chinese word segmentation, part-of-speech tagging, and named entity recognition tasks. As shown in Fig. 1, the model scheduling module is the entrance of the whole system: it is responsible for interacting with the user via the command line, coordinating the system's functions, and comprehensively configuring the Chinese NLP deep neural network model and its configuration information. The data slicing module cuts the data into different dimensions. The data cleaning module filters out valid data. The deep network model module realizes the abstract modeling of the sequence labeling problem and the input/output handling of the training data: it loads the data into the deep learning network and then deploys different algorithms to perform training computations. The general mathematics module realizes general mathematical computations, such as matrix operations, through abstractions over functions. The decoder module implements the sequence labeling decoding algorithms. The general mathematics module and the decoding algorithm module are implemented on top of a parallel stochastic-gradient-descent training algorithm based on Akka (an underlying framework). The hybrid tuning algorithm module realizes the training algorithm of the Chinese NLP deep neural network model by calling and coordinating the deep network model module, the three algorithm modules, and other related modules, and provides support for multi-task learning. The depth sorting module performs deep sorting of the computed results and the subdivided data.
As shown in Fig. 2, when realizing a Chinese natural language processing system with this scheme, the user first configures the parameters of the Chinese NLP deep neural network model, including the dimension of the word embedding layer, the size and number of hidden layers, the training data set, and whether to use multi-task learning. The system generates the Chinese NLP deep neural network model according to the user's concrete configuration, and uses the Chinese NLP deep neural network parallel training module in the system to first perform unsupervised pre-training on the model and then perform supervised fine-tuning. Since the entire training process is parallelized, a multi-CPU-core hardware environment can be used effectively, improving training efficiency. The model obtained after training is saved to a file specified by the user, and the user can use this model for prediction. At prediction time, the decoder (server) first reads the model file and the raw data, uses the trained Chinese NLP deep neural network model to predict on the raw data, then executes the decoding algorithm and outputs the final labeling result. This scheme thus realizes Chinese natural language processing, including word segmentation, part-of-speech tagging, and named entity recognition, with a model based on the deep learning technology in machine learning.
The quantitative computation over Chinese characters is completed through a text sequence labeling process. Text sequence labeling means treating a text as a linear sequence of characters and, given a tag set consisting of all possible labels, using a classifier to assign each character in the sequence a label from the tag set. Chinese word segmentation, part-of-speech tagging, and named entity recognition can all be viewed as text sequence labeling tasks, which means that each of these tasks can be completed by training a classifier that assigns a label to every Chinese character in a sentence. For example, to segment the sentence "你好，世界" ("Hello, world"), the classifier assigns each character a label, giving 你(B) 好(L) ，(U) 世(B) 界(L); from these labels the segmentation "你好 / ， / 世界" is obtained. The Chinese NLP deep neural network (model) proposed in this patent is exactly such a classifier for text sequence labeling tasks.
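The step from labels to segments in the example above is a simple deterministic decode. The sketch below assumes the tag meanings suggested by the example (B = word-begin, L = word-last, U = single-character word); a word-internal tag, if the full scheme has one, would be handled like B here.

```python
# Decode a character sequence plus per-character tags into segmented words,
# assuming tags B (begin), L (last), U (single-character word) as in the
# "你好，世界" example above. Tag semantics are inferred from that example.

def decode_segmentation(chars, tags):
    words, current = [], ""
    for ch, tag in zip(chars, tags):
        if tag == "U":           # a single character forms a word by itself
            if current:          # flush any unfinished word defensively
                words.append(current)
                current = ""
            words.append(ch)
        elif tag == "L":         # word-final character: close the current word
            words.append(current + ch)
            current = ""
        else:                    # B (or any word-internal tag): keep accumulating
            current += ch
    if current:                  # flush a dangling partial word at end of input
        words.append(current)
    return words

print(decode_segmentation(list("你好，世界"), ["B", "L", "U", "B", "L"]))
# → ['你好', '，', '世界']
```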
The model architecture of the Chinese NLP deep neural network is shown in Fig. 2. Its input is a text window of fixed size, and its output is the probability distribution over the labels of the Chinese character at the center of the text window. The deep neural network is divided into three modules: the embedding layer (Embedding Layer), the hidden layer (Hidden Layer), and the output layer (Output Layer). The Chinese dictionary is a list containing all Chinese characters the system can handle, plus a substitute symbol representing all other characters (for example digits, punctuation, and out-of-vocabulary Chinese characters). Each entry in the dictionary uses its position in the dictionary list as its serial number, and every character input to the Chinese NLP deep neural network corresponds to some entry in the dictionary. Chinese character embeddings are represented by a real-valued matrix in which each dictionary entry corresponds to one column vector of the Chinese character embedding matrix. The embedding layer directly processes the text window input to the Chinese NLP deep neural network: for each character in the text window, it first finds that character and its serial number in the dictionary, then constructs a one-dimensional vector whose dimension equals the dictionary size, sets the component corresponding to the current character's serial number to 1 and all other components to 0; this vector is called the one-hot representation of the character. Using the one-hot representation of the character, a matrix lookup is performed in the Chinese character embedding matrix to find, for each character in the input text window, its corresponding real-valued vector. Finally, all these real-valued vectors are concatenated end to end into one long vector, which is fed into the hidden layer. The hidden layer of a deep neural network can be regarded as a high-level feature extractor for constructing mid- and high-level feature representations of the data. Therefore, for the Chinese word segmentation, part-of-speech tagging, and named entity recognition tasks based on the Chinese NLP deep neural network, the embedding layer and hidden layer parts are all identical; the tasks differ only in the output layer.
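The embedding-layer lookup just described can be sketched in a few lines of NumPy. The toy dictionary, window, and embedding dimension below are illustrative assumptions; the `<UNK>` entry plays the role of the substitute symbol for digits, punctuation, and out-of-vocabulary characters.

```python
# Sketch of the embedding layer: one-hot encode each character in the window,
# look up its column in the embedding matrix, and concatenate the vectors.
import numpy as np

np.random.seed(0)
dictionary = ["<UNK>", "你", "好", "世", "界"]   # <UNK> stands in for OOV characters
index = {ch: i for i, ch in enumerate(dictionary)}
emb_dim = 4
embedding = np.random.randn(emb_dim, len(dictionary))  # one column per dictionary entry

def embed_window(window):
    """Map each character to its embedding column and concatenate end to end."""
    vectors = []
    for ch in window:
        one_hot = np.zeros(len(dictionary))
        one_hot[index.get(ch, 0)] = 1.0      # one-hot representation of the character
        vectors.append(embedding @ one_hot)  # equivalent to selecting that column
    return np.concatenate(vectors)

v = embed_window("你好界")  # a 3-character window → a vector of length 3 * emb_dim
```

In practice the matrix product with a one-hot vector would be replaced by direct column indexing; it is written out here to mirror the text's description.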
The training method of this system is set forth below. The Chinese NLP deep neural network of this scheme is trained by the back-propagation algorithm. As a parameterized model, its training process can be described as finding a group of parameters that makes a loss function involving the model attain its minimum value on a given data set; an equivalent description is finding a group of parameters that makes an objective function involving the model attain its maximum value on a given data set, where the objective function can simply be taken as the negative of the corresponding loss function. In the training of the Chinese NLP deep neural network model of this scheme, both the unsupervised pre-training and the supervised fine-tuning use the back-propagation algorithm; they differ in their output layer structure, training data, and objective function. The back-propagation algorithm used by this scheme is essentially a gradient-based optimization algorithm. It consists of two parts. The first is the stochastic gradient descent algorithm: for each training example, compute the partial derivatives of the loss function with respect to the model parameters at the current parameter values on that example, and iterate the model parameters in the gradient-descent direction according to those partial derivatives. Common loss functions in neural network training include the least-squares error (Least Square Error) and the log-likelihood error (Log-likelihood Error); the training data and the function represented by the neural network are both components of the loss function. The training process of the neural network thus searches for a minimum point of the loss function, but since the function represented by a neural network model is usually a highly nonlinear function, it is difficult to find the global minimum, so the training process usually terminates at a stationary point; these stationary points are local minima of the loss function. The basic steps of the stochastic gradient descent algorithm are shown in Table 1.
Table 1: Backpropagation training algorithm of the neural network
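Since the body of Table 1 is not reproduced in this text, the following is a hedged sketch of the standard stochastic-gradient-descent steps it describes, on a deliberately tiny model (a one-parameter sigmoid "network" with the log-likelihood loss mentioned above). The toy data and hyperparameters are our own.

```python
# Standard SGD loop: for each training example, compute the partial derivatives
# of the loss at the current parameters, then step the parameters in the
# negative gradient direction. The model here is a minimal sigmoid unit.
import math
import random

random.seed(0)
w, b, lr = 0.0, 0.0, 0.5
data = [(0.0, 0), (1.0, 1), (2.0, 1), (-1.0, 0)]  # toy 1-D binary task

def forward(x):
    """The 'network': a single sigmoid neuron."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

for epoch in range(200):
    random.shuffle(data)                 # visit training examples in random order
    for x, y in data:
        p = forward(x)
        # log-likelihood loss L = -(y*log p + (1-y)*log(1-p));
        # for the sigmoid, dL/dz = p - y, hence:
        dw, db = (p - y) * x, (p - y)
        w -= lr * dw                     # gradient-descent parameter update
        b -= lr * db
```

After training, the unit separates the two classes, illustrating the per-example update the text describes; a real network would compute `dw`, `db` by back-propagating the error layer by layer.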
In the gradient descent algorithm, the concrete procedure for computing the gradient is the second part of the back-propagation algorithm: the gradient is computed by propagating the error "backward" layer by layer through the neural network. When the activation functions of the network's neurons are differentiable, the error of the loss function can be "back-propagated" layer by layer through the neural network according to the chain rule of derivatives, and in this process the first-order partial derivatives of the training loss function with respect to all model parameters can be computed very quickly. This makes it feasible to use a first-derivative-based iterative method (the stochastic gradient descent algorithm) to rapidly search for stationary points of the neural network training objective function.
This scheme uses a multi-task learning model to build a joint model for Chinese word segmentation, part-of-speech tagging, and named entity recognition. These three tasks are interrelated and mutually influential: for example, the boundary of a named entity is obviously also a word segmentation boundary, and there is often a strong association between the class of a named entity and its part of speech. Traditional Chinese natural language processing methods often treat these three tasks completely separately, ignoring the relationships between them, or use a "pipeline" process of first segmenting, then part-of-speech tagging, and finally named entity recognition, which propagates the errors of earlier tasks into later ones. The pipeline process has another disadvantage: even if useful information is obtained in a later task, it cannot influence the results of the earlier tasks.
In the field of machine learning, multi-task learning refers to the method of placing multiple interrelated tasks in one model and learning them simultaneously by sharing feature representations to some extent. It often achieves better results than learning the different tasks independently with multiple models, because in multi-task learning the learner can effectively use the commonalities between the tasks as a form of regularization. The essence of multi-task learning is therefore a kind of inductive transfer (Inductive Transfer): such a machine learning method improves the generalization ability of the model by using the training signals of related tasks as an inductive bias. The means of reaching this goal is to use shared feature representation units in the learning process of the classifiers of multiple related tasks, so that what is learned in one task also helps the other tasks learn better.
The Chinese NLP deep neural network is highly suitable for multi-task learning. The hidden layer of a deep neural network can be regarded as a high-level feature extractor for constructing mid- and high-level feature representations of the data, so for the Chinese word segmentation, part-of-speech tagging, and named entity recognition tasks based on the Chinese NLP deep neural network, the embedding layer and hidden layer parts are completely identical and the tasks differ only in the output layer. Multi-task learning of Chinese word segmentation, part-of-speech tagging, and named entity recognition based on the deep neural network can therefore be realized by sharing the embedding layer and the hidden layer. Since the task-specific output layers are not involved, the pre-training of the multi-task learning model for the three tasks is identical to the pre-training process of a single-task model. In supervised fine-tuning, before each round of training, the current training task is first selected uniformly at random; then, according to the selection, one round of supervised stochastic gradient descent training is performed for that task, updating the embedding layer, the hidden layer, and that task's output layer, while the output layers of the other tasks remain unchanged. In this way the tasks are trained alternately, and the weight of each task in the training process is guaranteed to be equal.
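The alternating training loop just described can be sketched as follows. The "layers" here are stand-in counters rather than real parameters; the point is the control flow: shared layers update every round, but only the randomly chosen task's output layer updates.

```python
# Sketch of alternating multi-task fine-tuning: pick one task uniformly at
# random each round, update the shared embedding/hidden layers plus only that
# task's output layer. Structures are illustrative stand-ins for real layers.
import random

random.seed(0)
shared = {"updates": 0}                                   # shared embedding + hidden layers
tasks = {t: {"updates": 0} for t in ("segmentation", "pos_tagging", "ner")}

def train_one_round(task_name):
    """One supervised SGD round for the chosen task (training itself elided)."""
    shared["updates"] += 1                # shared layers are updated every round
    tasks[task_name]["updates"] += 1      # only this task's output layer is updated

for _ in range(300):
    task = random.choice(list(tasks))     # uniform random task selection per round
    train_one_round(task)
```

Because the selection is uniform, each task's output layer receives roughly a third of the rounds in expectation, matching the text's claim that the tasks are weighted equally in training.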
Another benefit of the multi-task learning model for Chinese word segmentation, part-of-speech tagging, and named entity recognition is that it speeds up prediction. Since the embedding layer and hidden layer are shared in the multi-task learning model, for a given text window the embedding layer and hidden layer need to be computed only once; the output layers of the different tasks are then computed separately, completing the label prediction of multiple tasks at the same time. In addition, at prediction time this multi-task model does not actually need to compute the output layer of the segmentation task, because the part-of-speech tagging task implies the segmentation task: segmentation and part-of-speech tags can be obtained simultaneously from the labels output by the part-of-speech tagging output layer. The segmentation output layer exists only so that, during training, the information contained in the segmentation data can be used to adjust the parameters of the hidden layer and the embedding layer of the deep neural network.
Based on the above, the Chinese natural language processing method based on deep learning of this scheme has prominent substantive features and represents significant progress compared with existing similar schemes.
The Chinese natural language processing method based on deep learning of this scheme is not limited to the content disclosed in the specific embodiments; the technical solutions appearing in the embodiments can be extended based on the understanding of those skilled in the art, and simple replacement schemes made by those skilled in the art from this scheme combined with common knowledge also belong to the scope of this scheme.
Claims (3)
1. A Chinese natural language processing method based on deep learning, the Chinese natural language processing method being realized by a server, the server comprising a model scheduling module, a data slicing module, a data cleaning module, a model training module, and a model database, the model scheduling module comprising a human-computer interaction interface and an operation scheduling module, and the model training module comprising a general mathematical algorithm module, a decoding algorithm module, a hybrid tuning algorithm module, and a depth sorting module, characterized by comprising the process: the user configures the model's parameters and training data set by calling the operation scheduling module through the human-computer interaction interface; according to the user-configured model parameters and training data set, the server calls the data slicing module, the data cleaning module, and the model training module for processing, and the training data generates a customized model that is stored in the model database; the data slicing module cuts the data into different dimensions; the data cleaning module filters out valid data; the model training module loads the data into the deep learning network and deploys the configured algorithms to perform training computations; the hybrid tuning algorithm module calls and coordinates the general mathematical algorithm module, the decoding algorithm module, and the depth sorting module to train on the data and generate the customized model; and after the customized model predicts on the raw text data, the server reads and outputs the text sequence labeling result.
2. The Chinese natural language processing method based on deep learning according to claim 1, characterized in that the model training process includes unsupervised pre-training and supervised fine-tuning, both of which use the back-propagation algorithm, the back-propagation algorithm comprising the process: for each training example, compute the partial derivatives of the loss function with respect to the model parameters at the current parameter values on that example, iterate the model parameters in the gradient-descent direction according to those partial derivatives, and compute the gradients by propagating the error backward layer by layer through the neural network.
3. The Chinese natural language processing method based on deep learning according to claim 2, characterized in that the hybrid tuning algorithm module completes the model training process in a multi-task parallel manner, and the multi-task parallel training process shares the common features of the tasks to improve the training effect.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810387340.2A CN110427484A (en) | 2018-04-26 | 2018-04-26 | A kind of Chinese natural language processing method based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110427484A true CN110427484A (en) | 2019-11-08 |
Family
ID=68408317
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810387340.2A Pending CN110427484A (en) | 2018-04-26 | 2018-04-26 | A kind of Chinese natural language processing method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110427484A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106649275A (en) * | 2016-12-28 | 2017-05-10 | 成都数联铭品科技有限公司 | Relation extraction method based on part-of-speech information and convolutional neural network |
CN106779069A (en) * | 2016-12-08 | 2017-05-31 | 国家电网公司 | An abnormal electricity consumption detection method based on neural networks |
CN107193801A (en) * | 2017-05-21 | 2017-09-22 | 北京工业大学 | A kind of short text characteristic optimization and sentiment analysis method based on depth belief network |
CN107291232A (en) * | 2017-06-20 | 2017-10-24 | 深圳市泽科科技有限公司 | A motion-sensing game interaction method and system based on deep learning and big data |
CN107662617A (en) * | 2017-09-25 | 2018-02-06 | 重庆邮电大学 | Vehicle-mounted interactive controlling algorithm based on deep learning |
CN107730002A (en) * | 2017-10-13 | 2018-02-23 | 国网湖南省电力公司 | A kind of communication network shutdown remote control parameter intelligent fuzzy comparison method |
- 2018-04-26: Chinese application CN201810387340.2A filed, published as CN110427484A; status: Pending
Non-Patent Citations (1)
Title |
---|
Wang Lei: "Research on Chinese Named Entity Recognition Based on Neural Networks", China Master's Theses Full-text Database, Information Science and Technology Series * |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI798513B (en) * | 2019-12-20 | 2023-04-11 | 國立清華大學 | Training method of natural language corpus for the decision making model of machine learning |
CN111241279A (en) * | 2020-01-07 | 2020-06-05 | 华东师范大学 | Natural language relation extraction method based on multi-task learning mechanism |
CN111241279B (en) * | 2020-01-07 | 2020-10-30 | 华东师范大学 | Natural language relation extraction method based on multi-task learning mechanism |
CN111079447A (en) * | 2020-03-23 | 2020-04-28 | 深圳智能思创科技有限公司 | Chinese-oriented pre-training method and system |
CN111651270A (en) * | 2020-05-19 | 2020-09-11 | 南京擎盾信息科技有限公司 | Visualization method and device for completing multitask semantic annotation on legal data |
CN111651271A (en) * | 2020-05-19 | 2020-09-11 | 南京擎盾信息科技有限公司 | Multi-task learning semantic annotation method and device based on legal data |
CN111651271B (en) * | 2020-05-19 | 2021-07-20 | 南京擎盾信息科技有限公司 | Multi-task learning semantic annotation method and device based on legal data |
CN111611808A (en) * | 2020-05-22 | 2020-09-01 | 北京百度网讯科技有限公司 | Method and apparatus for generating natural language model |
CN112434804A (en) * | 2020-10-23 | 2021-03-02 | 东南数字经济发展研究院 | Compression algorithm for deep transform cascade neural network model |
CN113807496A (en) * | 2021-05-31 | 2021-12-17 | 华为技术有限公司 | Method, apparatus, device, medium and program product for constructing neural network model |
CN117290429A (en) * | 2023-11-24 | 2023-12-26 | 山东焦易网数字科技股份有限公司 | Method for calling data system interface through natural language |
CN117290429B (en) * | 2023-11-24 | 2024-02-20 | 山东焦易网数字科技股份有限公司 | Method for calling data system interface through natural language |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110427484A (en) | A kind of Chinese natural language processing method based on deep learning | |
Alzubaidi et al. | A survey on deep learning tools dealing with data scarcity: definitions, challenges, solutions, tips, and applications | |
Wang et al. | Learning to combine: Knowledge aggregation for multi-source domain adaptation | |
He et al. | AutoML: A survey of the state-of-the-art | |
CN110134757B (en) | Event argument role extraction method based on multi-head attention mechanism | |
Muhammad et al. | SUPERVISED MACHINE LEARNING APPROACHES: A SURVEY. | |
CN110083700A | An enterprise public opinion sentiment classification method and system based on convolutional neural networks | |
CN109522942A (en) | A kind of image classification method, device, terminal device and storage medium | |
CN110362684A (en) | A kind of file classification method, device and computer equipment | |
CN111639679A (en) | Small sample learning method based on multi-scale metric learning | |
Gandhi et al. | Classification rule construction using particle swarm optimization algorithm for breast cancer data sets | |
CN107729497A | A knowledge-graph-based word embedding deep learning method | |
CN106447066A (en) | Big data feature extraction method and device | |
CN110175235A | Neural-network-based intelligent commodity tax classification code method and system | |
CN104966105A (en) | Robust machine error retrieving method and system | |
Raschka | Machine Learning Q and AI: 30 Essential Questions and Answers on Machine Learning and AI | |
CN109101579A (en) | customer service robot knowledge base ambiguity detection method | |
CN110263979A (en) | Method and device based on intensified learning model prediction sample label | |
CN110188195A (en) | A kind of text intension recognizing method, device and equipment based on deep learning | |
Fonnegra et al. | Performance comparison of deep learning frameworks in image classification problems using convolutional and recurrent networks | |
Sarraf et al. | A comprehensive review of deep learning architectures for computer vision applications | |
CN109214407A | Event detection method, apparatus, model, computing device and storage medium | |
CN103049490A (en) | Attribute generation system and generation method among knowledge network nodes | |
CN114692605A (en) | Keyword generation method and device fusing syntactic structure information | |
CN112732872A (en) | Biomedical text-oriented multi-label classification method based on subject attention mechanism |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20191108 |