CN109934260A - Image, text and data fusion sensibility classification method and device based on random forest - Google Patents

Image, text and data fusion sensibility classification method and device based on random forest Download PDF

Info

Publication number
CN109934260A
CN109934260A (application CN201910098349.6A)
Authority
CN
China
Prior art keywords
feature
text
picture
classification
random forest
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910098349.6A
Other languages
Chinese (zh)
Inventor
林政
耿悦
付鹏
王伟平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Information Engineering of CAS
Original Assignee
Institute of Information Engineering of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Information Engineering of CAS filed Critical Institute of Information Engineering of CAS
Priority to CN201910098349.6A priority Critical patent/CN109934260A/en
Publication of CN109934260A publication Critical patent/CN109934260A/en
Pending legal-status Critical Current

Links

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a random-forest-based sentiment classification method and device for fused image and text data. The method comprises the following steps: 1) extracting the image features and the text features from multi-modal data; 2) merging the extracted image and text features to obtain a joint image-text feature; 3) performing feature selection on the joint image-text feature via a corruption mechanism; 4) classifying the selected joint image-text feature with a random forest classifier to obtain the sentiment classification result. Preferably, the image features are extracted with a VGG-ISC network and the text features with a CNN-TSC network. The invention can effectively obtain the features of each single modality, combine the two feature vectors, and feed the combined feature as a whole into a random forest for classification learning and sentiment classification.

Description

Image, text and data fusion sensibility classification method and device based on random forest
Technical field
The invention belongs to the field of information technology, and in particular relates to a random-forest-based sentiment classification method and device for fused image and text data.
Background art
At present the internet contains massive amounts of fused image-text data, and sentiment classification of these data can effectively support business decision-making, public-opinion analysis, and the like. However, current sentiment classification research focuses mainly on data in a single modality; little work addresses multi-modal data. For text sentiment classification there are sentiment-lexicon-based methods and machine-learning-based methods; for image sentiment classification there are methods based on pixel-level color distribution, on visual bag-of-words features, and on deep neural networks. A small amount of work addresses sentiment classification across multiple modalities. For example, the cross-modality consistent regression model (Cross-modality Consistent Regression, CCR) completes the multi-modal sentiment classification task by coupling the classification losses of the individual modalities (YOU Q, LUO J, JIN H, et al. Cross-modality consistent regression for joint visual-textual sentiment analysis of social multimedia[C]//Proceedings of the Ninth ACM International Conference on Web Search and Data Mining. [S.l.]: ACM, 2016: 13-22.). Another approach unifies the features of image and text through a visual bag-of-words representation of the image and then performs sentiment classification (CAO D, JI R, LIN D, et al. A cross-media public sentiment analysis system for microblog[J]. Multimedia Systems, 2016, 22(4): 479-486.).
For single-modality sentiment classification, whether image-based or text-based, the overall sentiment of an image-text pair cannot be expressed effectively, so such methods are inherently limited. Although the cross-modality consistent regression model takes the common features of image and text into account, the model is difficult to train and places high quality demands on the training and test data. Kernel methods based on canonical correlation analysis can capture the correlation information of multi-modal data; however, although canonical correlation analysis and its kernel extensions can model associations between different features, they are limited in capturing correlations between high-level abstractions of those features.
Summary of the invention
The present invention aims to provide a method that effectively fuses images and text for sentiment classification, i.e., one that can effectively fuse multi-modal image-text information and perform sentiment classification on it jointly.
Extracting the features of the image and of the text separately, combining the extracted features, and using the combination as the input of the final classifier for sentiment classification is known as intermediate fusion. Common intermediate fusion, however, merely concatenates the features and cannot weigh multiple modal features against one another. The invention proposes a random-forest-based fusion method that can effectively obtain the features of each single modality, combine the two feature vectors, and feed the combined feature as a whole into a random forest for classification learning and sentiment classification.
The technical solution adopted by the invention is as follows:
A random-forest-based sentiment classification method for fused image and text data, comprising the following steps:
1) extracting the image features and the text features from multi-modal data;
2) merging the extracted image and text features to obtain a joint image-text feature;
3) performing feature selection on the joint image-text feature via a corruption mechanism;
4) classifying the selected joint image-text feature with a random forest classifier to obtain the sentiment classification result.
Further, step 1) extracts the image features from the multi-modal data with a VGG-ISC network and the text features with a CNN-TSC network.
Further, the VGG-ISC network is trained as follows: first pre-train a VGG-19 network on the ILSVRC-2012 data set; after the VGG-19 network is trained, modify its last two layers, i.e., set the dimension of the second fully connected layer to k and the softmax output to 2 or 3 classes; finally, freeze all convolutional-layer parameters and continue training the last three layers of the VGG-ISC network, using the images in the training data set as input and their sentiment categories as output. After the VGG-ISC network is trained, the output of the second fully connected layer is used as the image feature.
Further, the CNN-TSC network takes the word matrix of the segmented text as input: a pre-trained Word2Vec model maps each word after segmentation to a vector, the transposed word vectors are stacked into the input matrix of CNN-TSC, convolution kernels of several different sizes are then applied to the word matrix, and the text feature category is output after max-over-time pooling, flatten, Dropout, a fully connected layer, and a Softmax layer.
Further, the CNN-TSC and VGG-ISC networks are trained by means of domain transfer.
Further, the corruption mechanism rewrites each dimension of the joint image-text feature f to 0 with probability q and scales the other dimensions to 1/(1-q) times their original value.
Further, in the corruption mechanism the rewrite probability q takes a value from the sequence {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}.
Further, the random forest classifier obtains the training subset of each decision tree by sampling with replacement and randomly selects the features used as the input of each tree, so that every decision tree attends to different features; this makes the structure of each tree different and thereby improves the classification accuracy.
Further, in the random forest classifier the number of decision trees is 600, and the feature fraction used by each tree takes a value from the sequence {0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5}.
Corresponding to the above method, the present invention also provides a random-forest-based sentiment classification device for fused image and text data, comprising:
an image feature extraction module, responsible for extracting the image features from multi-modal data;
a text feature extraction module, responsible for extracting the text features from multi-modal data;
a feature merging module, responsible for merging the extracted image and text features to obtain the joint image-text feature;
a feature selection module, responsible for performing feature selection on the joint image-text feature via the corruption mechanism;
a classification module, responsible for classifying the selected joint image-text feature with the random forest classifier to obtain the sentiment classification result.
The beneficial effects of the present invention are as follows:
1) The invention can effectively obtain the features of each single modality, combine the two feature vectors, and feed the combined feature as a whole into a random forest for classification learning and sentiment classification.
2) Under both the two-class and the three-class setting, the sentiment classification network of the invention performs better than several feature-level fusion models. In two-class classification its accuracy reaches 83.42%, higher than the 80.19% of multiple deep convolutional neural networks and the 83.16% of the cross-modality consistent regression model; in three-class classification it reaches 74.21%, higher than the 71.69% of multiple deep convolutional neural networks and the 73.15% of the cross-modality consistent regression model.
Brief description of the drawings
Fig. 1 is a flow chart of the steps of the method of the present invention.
Fig. 2 is a structural schematic diagram of the CNN-TSC network.
Fig. 3 is a structural schematic diagram of the VGG-ISC network.
Detailed description of the embodiments
In order to make the above objectives, features, and advantages of the present invention clearer, the invention is described in further detail below through specific embodiments and the accompanying drawings.
Fig. 1 is a flow chart of the steps of the method of the present invention. The invention extracts the image feature vector with a VGG-ISC network based on VGG-19 and the text feature vector with a CNN-TSC network based on CNN. The structures of the CNN-TSC network (CNN Text Sentiment Classification, a CNN-based text sentiment classification network) and the VGG-ISC network (VGG Image Sentiment Classification, a VGG-based image sentiment classification network) are shown in Fig. 2 and Fig. 3, respectively. Additional large-scale data are used to pre-train the CNN-TSC and VGG-ISC networks in order to obtain better model parameters: the network is first trained on a large amount of sentiment-labeled text and is then fine-tuned on the texts in the Weibo training set so that it adapts to the distribution of Weibo text.
After the image feature and the text feature are combined, features are selected via the corruption mechanism (CHEN M. Efficient vector representation for documents through corruption[J]. arXiv preprint arXiv:1707.02377, 2017.). Corruption essentially rewrites each dimension of the merged joint image-text feature f to 0 with probability q and, in order to keep the result unbiased, scales the other dimensions to 1/(1-q) times their original value, where q denotes the probability of being rewritten to 0.
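The corruption step described above can be sketched in a few lines of NumPy (the function name `corrupt` and the use of NumPy are illustrative assumptions, not part of the patent):

```python
import numpy as np

def corrupt(f, q, seed=None):
    """Corruption mechanism: set each dimension of the joint image-text
    feature f to 0 with probability q, and rescale the surviving
    dimensions by 1/(1-q) so the expectation stays unbiased:
    E[corrupt(f)] = (1-q) * f/(1-q) + q * 0 = f."""
    rng = np.random.default_rng(seed)
    keep = rng.random(f.shape) >= q      # True where the dimension survives
    return np.where(keep, f / (1.0 - q), 0.0)

f = np.ones(256)                         # a 256-dimensional joint feature
fc = corrupt(f, q=0.3, seed=0)           # every entry is now 0 or 1/0.7
```

With q = 0.3, roughly 30% of the dimensions are zeroed in each pass, while the mean of the corrupted vector stays close to that of the original.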
The higher the classification accuracy of a model within a single modality, the better its feature extraction performance in that modality, while the overall sentiment category reflects the performance of the model as a whole. The present invention obtains the final overall sentiment category with a random forest; this is intermediate fusion, i.e., fusion based on the individual image and text features.
To obtain better classification results, the present invention uses a random forest classifier. A random forest consists of multiple decision trees. Under bootstrap sampling each sample has a probability of about 36.8% of never being drawn; these roughly 36.8% of un-sampled data are commonly called out-of-bag data, and since they do not participate in training the model, they can be used to measure the model's generalization ability. The model exploits the features differently depending on the number of decision trees, and the effect is best with 600 trees. To improve the performance of the model, a smaller feature fraction is used, and the corruption mechanism selects fewer features, which also speeds up model training.
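The 36.8% figure follows from bootstrap sampling: when n samples are drawn n times with replacement, a given sample is missed on every draw with probability (1 - 1/n)^n, which tends to 1/e as n grows. A quick check:

```python
import math

n = 10_000                      # number of training samples
p_oob = (1 - 1 / n) ** n        # probability a sample is never drawn
# (1 - 1/n)**n approaches 1/e as n grows, i.e. about 36.8%
print(f"{p_oob:.4f} vs 1/e = {1 / math.e:.4f}")
```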
Key problem in technology point of the invention is:
1) The invention obtains the fused features of image and text automatically with deep-neural-network methods.
2) Processing the merged feature with the corruption mechanism selects the needed features and prevents over-fitting. It is also an efficient dimension-reduction process: it reduces the number of parameters and penalizes similar features that occur repeatedly. This effective dimensionality reduction allows the features of each single modality to be obtained and combined reasonably for sentiment classification.
3) Classification uses a random forest, which introduces two kinds of randomness: random samples and random features. Sampling with replacement yields the training subset of each decision tree; within such a subset there are duplicate data as well as data absent from the training sets of other trees, and some data may never belong to the training subset of any tree. Randomly selecting the features used as the input of each tree makes every tree attend to different features, so the structure of each tree differs, which improves the classification accuracy.
The classification network provided by the invention has the following advantages:
The multi-modal sentiment classifier is trained mainly on labeled Weibo image-text data. The data set contains 10,269 annotated image-text Weibo posts, in which the image, the text, and the post as a whole each carry a three-class sentiment label, i.e., positive, neutral, or negative. Pre-training the VGG-19 network also uses other large-scale data sets. Training the distributed word representations additionally uses WeChat public-account articles and Weibo text data, a multi-domain balanced Chinese corpus comprising 8 million WeChat public-account articles and about 800,000 Weibo posts, about 65.2 billion words in total. Under both the two-class and the three-class setting, the sentiment classification network of the invention performs better than several feature-level fusion models. In two-class classification its accuracy reaches 83.42%, higher than the 80.19% of multiple deep convolutional neural networks and the 83.16% of the cross-modality consistent regression model; in three-class classification it reaches 74.21%, higher than the 71.69% of multiple deep convolutional neural networks and the 73.15% of the cross-modality consistent regression model.
A concrete example using the method of the present invention is given below, taking the two-class classification of Weibo data as an example. It comprises the following steps:
1) Image features are extracted with the VGG-ISC network, whose structure is shown in Fig. 3. The training procedure is as follows: first pre-train a VGG-19 network on the ILSVRC-2012 data set; after VGG-19 is trained, modify its last two layers, i.e., the dimension of the second fully connected layer and that of the softmax classification layer: the second fully connected layer is set to dimension k, and the softmax output is changed to 2 or 3 classes. Finally, freeze all convolutional-layer parameters and continue training the last three layers of the network, using the images in the Weibo data set as input and their sentiment categories as output. After the VGG-ISC network is trained, the output of the second fully connected layer is used as the image feature.
2) Text features are extracted with the CNN-based CNN-TSC network, whose structure is shown in Fig. 2. The word matrix of the segmented text is used as input; the words are represented with distributed word vectors: a pre-trained Word2Vec model maps each word after segmentation to a vector, and the transposed word vectors are stacked into the input matrix of CNN-TSC. Convolution kernels of several different sizes are then applied to this word matrix, and the text feature category is output after max-over-time pooling, flatten, Dropout, a fully connected layer, and a Softmax layer. CNN-TSC is trained by means of domain transfer. In the trained CNN-TSC network, the output of the fully connected layer can be regarded as the sentiment feature of the text. In the text feature extraction network, the kernel sizes are 2, 3, 4, and 5, with 256 kernels of each size, and Dropout is set to 0.5.
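A minimal PyTorch sketch of this text branch under the stated hyper-parameters (kernel sizes 2, 3, 4, 5 with 256 kernels each, Dropout 0.5); the class name, the 300-dimensional Word2Vec vectors, and the placement of the embedding lookup outside the module are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNNTSC(nn.Module):
    def __init__(self, emb_dim=300, feat_dim=128, num_classes=2):
        super().__init__()
        # one 1-D convolution per kernel size, 256 kernels each
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, 256, kernel_size=ks) for ks in (2, 3, 4, 5))
        self.dropout = nn.Dropout(0.5)
        self.fc = nn.Linear(4 * 256, feat_dim)   # its output is the text feature
        self.out = nn.Linear(feat_dim, num_classes)

    def forward(self, x):                        # x: (batch, emb_dim, seq_len)
        # convolution, max-over-time pooling per kernel size, then flatten
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        feat = F.relu(self.fc(self.dropout(torch.cat(pooled, dim=1))))
        return F.log_softmax(self.out(feat), dim=1)

# 8 segmented sentences of 50 words, each word a 300-dim Word2Vec vector
logits = CNNTSC()(torch.randn(8, 300, 50))
```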
3) text feature that aforementioned process is extracted and picture feature are done into after primary normalization direct splicing to get arriving The feature of picture and text entirety.Figure, literary feature are 128 dimensions, i.e., picture and text global feature is 256 dimensions.
4) To obtain more effective image-text features, the corruption mechanism is added to discard part of the acquired features. In the corruption mechanism, the rewrite probability q takes a value from the sequence {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}. In this way the selected features of the text and the image are obtained.
5) A random forest is constructed with 600 decision trees; the feature fraction used by each tree takes a value from the sequence {0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5}.
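Steps 5) through 7) can be sketched with scikit-learn's RandomForestClassifier (the library choice is an assumption; note that sklearn's `max_features` limits the feature fraction per split rather than per tree, a close but not identical notion):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(
    n_estimators=600,    # 600 decision trees, as in the patent
    max_features=0.2,    # feature fraction, from {0.05, ..., 0.5}
    bootstrap=True,      # per-tree sampling with replacement
    oob_score=True,      # evaluate on the ~36.8% out-of-bag data
    random_state=0,
)

# Stand-in data: 256-dim joint image-text features, two sentiment labels.
rng = np.random.default_rng(0)
X = rng.random((200, 256))
y = rng.integers(0, 2, 200)
clf.fit(X, y)

# Step 7): accuracy = fraction of predictions that match the labels.
acc = float((clf.predict(X) == y).mean())
```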
6) The image-text feature extracted above is used as the input of the random forest; that is, the extracted feature is fed as a whole into the random-forest sentiment classifier for sentiment classification.
7) The classifier output is compared with the labels; the ratio of the number of results consistent with the labels to the total number of results is the classification accuracy.
In the present invention, other network structures can also be used, e.g., CNN, RNN, LSTM, or GRU for text feature extraction, and structures such as VGG for image feature extraction.
Another embodiment of the present invention provides a random-forest-based sentiment classification device for fused image and text data, comprising:
an image feature extraction module, responsible for extracting the image features from multi-modal data;
a text feature extraction module, responsible for extracting the text features from multi-modal data;
a feature merging module, responsible for merging the extracted image and text features to obtain the joint image-text feature;
a feature selection module, responsible for performing feature selection on the joint image-text feature via the corruption mechanism;
a classification module, responsible for classifying the selected joint image-text feature with the random forest classifier to obtain the sentiment classification result.
The specific implementation of each module is described in the explanation of the method of the invention above.
The above embodiments merely illustrate the technical solution of the present invention and do not limit it. Those of ordinary skill in the art may modify the technical solution of the invention or replace it with equivalents without departing from the principle and scope of the invention; the protection scope of the invention shall be defined by the claims.

Claims (10)

1. A random-forest-based sentiment classification method for fused image and text data, characterized by comprising the following steps:
1) extracting the image features and the text features from multi-modal data;
2) merging the extracted image and text features to obtain a joint image-text feature;
3) performing feature selection on the joint image-text feature via a corruption mechanism;
4) classifying the selected joint image-text feature with a random forest classifier to obtain the sentiment classification result.
2. The method according to claim 1, characterized in that step 1) extracts the image features from the multi-modal data with a VGG-based image sentiment classification network and extracts the text features from the multi-modal data with a CNN-based text sentiment classification network.
3. The method according to claim 2, characterized in that the VGG-based image sentiment classification network is trained as follows: first pre-train a VGG-19 network on the ILSVRC-2012 data set; after the VGG-19 network is trained, modify its last two layers, i.e., set the dimension of the second fully connected layer to k and the softmax output to 2 or 3 classes; finally, freeze all convolutional-layer parameters and continue training the last three layers of the VGG-based image sentiment classification network, using the images in the training data set as input and their sentiment categories as output; after the VGG-based image sentiment classification network is trained, use the output of the second fully connected layer as the image feature.
4. The method according to claim 2, characterized in that the CNN-based text sentiment classification network takes the word matrix of the segmented text as input, maps each word after segmentation to a vector with a pre-trained Word2Vec model, stacks the transposed word vectors into a matrix as the input of the CNN-based text sentiment classification network, then applies convolution kernels of several different sizes to the word matrix, and outputs the text feature category after max-over-time pooling, flatten, Dropout, a fully connected layer, and a Softmax layer.
5. The method according to claim 1, characterized in that the CNN-based text sentiment classification network and the VGG-based image sentiment classification network are trained by means of domain transfer.
6. The method according to claim 1, characterized in that the corruption mechanism rewrites each dimension of the joint image-text feature f to 0 with probability q and scales the other dimensions to 1/(1-q) times their original value.
7. The method according to claim 6, characterized in that in the corruption mechanism the rewrite probability q takes a value from the sequence {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9}.
8. The method according to claim 1, characterized in that the random forest classifier obtains the training subset of each decision tree by sampling with replacement and randomly selects the features used as the input of each tree, so that every decision tree attends to different features; this makes the structure of each tree different and thereby improves the classification accuracy.
9. The method according to claim 8, characterized in that in the random forest classifier the number of decision trees is 600 and the feature fraction used by each tree takes a value from the sequence {0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5}.
10. A random-forest-based sentiment classification device for fused image and text data, characterized by comprising:
an image feature extraction module, responsible for extracting the image features from multi-modal data;
a text feature extraction module, responsible for extracting the text features from multi-modal data;
a feature merging module, responsible for merging the extracted image and text features to obtain the joint image-text feature;
a feature selection module, responsible for performing feature selection on the joint image-text feature via the corruption mechanism;
a classification module, responsible for classifying the selected joint image-text feature with the random forest classifier to obtain the sentiment classification result.
CN201910098349.6A 2019-01-31 2019-01-31 Image, text and data fusion sensibility classification method and device based on random forest Pending CN109934260A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910098349.6A CN109934260A (en) 2019-01-31 2019-01-31 Image, text and data fusion sensibility classification method and device based on random forest

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910098349.6A CN109934260A (en) 2019-01-31 2019-01-31 Image, text and data fusion sensibility classification method and device based on random forest

Publications (1)

Publication Number Publication Date
CN109934260A true CN109934260A (en) 2019-06-25

Family

ID=66985372

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910098349.6A Pending CN109934260A (en) 2019-01-31 2019-01-31 Image, text and data fusion sensibility classification method and device based on random forest

Country Status (1)

Country Link
CN (1) CN109934260A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110503090A (en) * 2019-07-09 2019-11-26 中国科学院信息工程研究所 Character machining network training method, character detection method and character machining device based on limited attention model
CN110516748A (en) * 2019-08-29 2019-11-29 泰康保险集团股份有限公司 Method for processing business, device, medium and electronic equipment
CN110597965A (en) * 2019-09-29 2019-12-20 腾讯科技(深圳)有限公司 Sentiment polarity analysis method and device of article, electronic equipment and storage medium
CN110781333A (en) * 2019-06-26 2020-02-11 杭州鲁尔物联科技有限公司 Method for processing unstructured monitoring data of cable-stayed bridge based on machine learning
CN111199426A (en) * 2019-12-31 2020-05-26 上海昌投网络科技有限公司 WeChat public number ROI estimation method and device based on random forest model
CN112800875A (en) * 2021-01-14 2021-05-14 北京理工大学 Multi-mode emotion recognition method based on mixed feature fusion and decision fusion
CN112884053A (en) * 2021-02-28 2021-06-01 江苏匠算天诚信息科技有限公司 Website classification method, system, equipment and medium based on image-text mixed characteristics
CN113033610A (en) * 2021-02-23 2021-06-25 河南科技大学 Multi-mode fusion sensitive information classification detection method
CN113627550A (en) * 2021-08-17 2021-11-09 北京计算机技术及应用研究所 Image-text emotion analysis method based on multi-mode fusion
CN113961710A (en) * 2021-12-21 2022-01-21 北京邮电大学 Fine-grained thesis classification method and device based on multi-mode layered fusion network
WO2022121163A1 (en) * 2020-12-11 2022-06-16 平安科技(深圳)有限公司 User behavior tendency identification method, apparatus, and device, and storage medium
CN116226702A (en) * 2022-09-09 2023-06-06 武汉中数医疗科技有限公司 Thyroid sampling data identification method based on bioelectrical impedance

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107066583A (en) * 2017-04-14 2017-08-18 华侨大学 A kind of picture and text cross-module state sensibility classification method merged based on compact bilinearity
CN107679580A (en) * 2017-10-21 2018-02-09 桂林电子科技大学 A kind of isomery shift image feeling polarities analysis method based on the potential association of multi-modal depth
CN108764268A (en) * 2018-04-02 2018-11-06 华南理工大学 A kind of multi-modal emotion identification method of picture and text based on deep learning

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107066583A (en) * 2017-04-14 2017-08-18 华侨大学 A kind of picture and text cross-module state sensibility classification method merged based on compact bilinearity
CN107679580A (en) * 2017-10-21 2018-02-09 桂林电子科技大学 A kind of isomery shift image feeling polarities analysis method based on the potential association of multi-modal depth
CN108764268A (en) * 2018-04-02 2018-11-06 华南理工大学 A kind of multi-modal emotion identification method of picture and text based on deep learning

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CHEN M, ZHANG LL, YU X, et al: "Weighted Co-Training for Cross-Domain Image Sentiment Classification", Journal of Computer Science and Technology *
MINMIN CHEN: "Efficient Vector Representation for Documents Through Corruption", arXiv *
WANG Zheli, YANG Pengfei, et al: "Deep network image semantic recognition method based on multi-feature fusion", Computer Engineering and Applications *
RAO Shaoqi: Encyclopedia of Chinese Medical Statistics, Genetic Statistics Volume, China Statistics Press, 31 May 2013 *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110781333A (en) * 2019-06-26 2020-02-11 杭州鲁尔物联科技有限公司 Method for processing unstructured monitoring data of cable-stayed bridge based on machine learning
CN110503090A (en) * 2019-07-09 2019-11-26 中国科学院信息工程研究所 Character detection network training method, character detection method, and character detection device based on limited attention model
CN110503090B (en) * 2019-07-09 2021-11-09 中国科学院信息工程研究所 Character detection network training method based on limited attention model, character detection method, and character detection device
CN110516748A (en) * 2019-08-29 2019-11-29 泰康保险集团股份有限公司 Business processing method and device, medium, and electronic device
CN110597965A (en) * 2019-09-29 2019-12-20 腾讯科技(深圳)有限公司 Sentiment polarity analysis method and device for articles, electronic device, and storage medium
CN110597965B (en) * 2019-09-29 2024-04-16 深圳市雅阅科技有限公司 Sentiment polarity analysis method and device for articles, electronic device, and storage medium
CN111199426A (en) * 2019-12-31 2020-05-26 上海昌投网络科技有限公司 WeChat official account ROI estimation method and device based on random forest model
CN111199426B (en) * 2019-12-31 2023-09-12 上海昌投网络科技有限公司 WeChat official account ROI estimation method and device based on random forest model
WO2022121163A1 (en) * 2020-12-11 2022-06-16 平安科技(深圳)有限公司 User behavior tendency identification method, apparatus, device, and storage medium
CN112800875A (en) * 2021-01-14 2021-05-14 北京理工大学 Multi-modal emotion recognition method based on hybrid feature fusion and decision fusion
CN113033610A (en) * 2021-02-23 2021-06-25 河南科技大学 Multi-modal fusion sensitive information classification and detection method
CN113033610B (en) * 2021-02-23 2022-09-13 河南科技大学 Multi-modal fusion sensitive information classification and detection method
CN112884053B (en) * 2021-02-28 2022-04-15 江苏匠算天诚信息科技有限公司 Website classification method, system, device, and medium based on mixed image-text features
CN112884053A (en) * 2021-02-28 2021-06-01 江苏匠算天诚信息科技有限公司 Website classification method, system, device, and medium based on mixed image-text features
CN113627550A (en) * 2021-08-17 2021-11-09 北京计算机技术及应用研究所 Image-text sentiment analysis method based on multi-modal fusion
CN113961710B (en) * 2021-12-21 2022-03-08 北京邮电大学 Fine-grained paper classification method and device based on multi-modal hierarchical fusion network
CN113961710A (en) * 2021-12-21 2022-01-21 北京邮电大学 Fine-grained paper classification method and device based on multi-modal hierarchical fusion network
CN116226702A (en) * 2022-09-09 2023-06-06 武汉中数医疗科技有限公司 Thyroid sampling data identification method based on bioelectrical impedance
CN116226702B (en) * 2022-09-09 2024-04-26 武汉中数医疗科技有限公司 Thyroid sampling data identification method based on bioelectrical impedance

Similar Documents

Publication Publication Date Title
CN109934260A (en) Image-text data fusion sentiment classification method and device based on random forest
Campos et al. From pixels to sentiment: Fine-tuning CNNs for visual sentiment prediction
CN104834729B (en) Topic recommendation method and topic recommendation device
CN107291795A (en) A text classification method combining dynamic word embeddings with part-of-speech tagging
CN107679580A (en) A heterogeneous transfer image sentiment polarity analysis method based on multi-modal deep latent correlation
CN104346440A (en) Neural-network-based cross-media hash indexing method
CN107729513A (en) Discrete supervised cross-modal hash retrieval method based on semantic alignment
CN106815369A (en) A text classification method based on the XGBoost classification algorithm
CN104142995B (en) Social event recognition method based on visual attributes
CN106934071A (en) Recommendation method and device based on heterogeneous information networks and Bayesian personalized ranking
CN107145573A (en) Question answering method and system for an artificial intelligence customer service robot
CN110196945A (en) A microblog user age prediction method based on fusing LSTM with LeNet
Yan et al. Data augmentation for deep learning of judgment documents
Madan et al. Synthetically trained icon proposals for parsing and summarizing infographics
CN113051914A (en) Enterprise implicit tag extraction method and device based on multi-feature dynamic profiling
CN107862322A (en) Method, apparatus, and system for image attribute classification combining images and text
Roy et al. Automated detection of substance use-related social media posts based on image and text analysis
CN107491782A (en) Image classification method using semantic space information for small amounts of training data
CN106777040A (en) A cross-media microblog public opinion analysis method based on a sentiment polarity perception algorithm
Smitha et al. Meme classification using textual and visual features
CN103942214B (en) Natural image classification method and device based on multi-modal matrix completion
CN111813939A (en) Text classification method based on representation enhancement and fusion
CN105701516A (en) Automatic image annotation method based on attribute discrimination
CN101561880A (en) Pattern recognition method based on immune antibody network
Zhai et al. Deep convolutional neural network for facial expression recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2019-06-25