CN114756763A - False news detection method and device for social network - Google Patents

False news detection method and device for social network

Publication number: CN114756763A
Application number: CN202210168205.5A
Authority: CN (China)
Prior art keywords: text, news, visual, features, vector
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN114756763B (en)
Inventors: 郭颖, 狄冲
Current assignee: North China University of Technology
Original assignee: North China University of Technology
Application filed by North China University of Technology; priority to CN202210168205.5A
Publication of CN114756763A; application granted, publication of CN114756763B
Classifications

    • G06F16/9536: Search customisation based on social or collaborative filtering
    • G06F16/3335: Syntactic pre-processing, e.g. stopword elimination, stemming
    • G06F16/3344: Query execution using natural language analysis
    • G06F16/35: Clustering; classification of unstructured textual data
    • G06F16/55: Clustering; classification of still image data
    • G06F16/583: Retrieval using metadata automatically derived from the content
    • G06F18/241: Classification techniques relating to the classification model
    • G06F18/2415: Classification based on parametric or probabilistic models
    • G06F40/289: Phrasal analysis, e.g. finite state techniques or chunking
    • G06N3/044: Recurrent networks, e.g. Hopfield networks
    • G06N3/047: Probabilistic or stochastic networks
    • G06N3/08: Learning methods
    • G06Q50/01: Social networking


Abstract

The invention discloses a method and a device for detecting false news on a social network, wherein the method comprises the following steps: obtaining the text elements and image elements of social network news data; extracting text features from the preprocessed news text elements with a Bi-LSTM neural network model; extracting visual features from the news image elements with a VGG-19 neural network model and post-processing them; fusing the text features and the visual features with each other through a dot-product attention mechanism to obtain corrected text features and corrected visual features in turn; performing time-level self-attention fusion on the corrected text features at different moments to obtain the final text feature; performing spatial-level self-attention fusion on the corrected visual features at different positions to obtain the final visual feature; and after splicing the final text feature and the final visual feature, inputting them into a fully connected neural network model with softmax, so that false news can be detected in time.

Description

Social network false news detection method and device
Technical Field
The invention relates to the technical field of social network false news detection, in particular to a social network false news detection method and device.
Background
This section is intended to provide a background or context to the embodiments of the invention that are recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
Social media, represented by microblogs, has become the primary way in which people currently obtain news information. However, because readers lack knowledge of the source reliability and content authority of news, false news appears from time to time. False news refers to news articles that are intentionally fabricated or verifiably false and that mislead readers.
Therefore, there is a need for a social network false news detection scheme that can detect false news in a timely manner and help prevent it from spreading widely.
Disclosure of Invention
The embodiment of the invention provides a social network false news detection method, which performs false news detection on a social network, detects false news in time and helps prevent such news from spreading widely. The method comprises the following steps:
obtaining social network news data, wherein the social network news data comprise news character elements and news image elements;
preprocessing the news character elements, wherein the preprocessing comprises the following steps: special symbol removing processing, word segmentation processing, stop word removing processing and word embedding coding processing;
extracting text features of the preprocessed news character elements by adopting a Bi-LSTM neural network model to obtain word segmentation text vectors;
performing visual feature extraction on the news image elements by adopting a VGG-19 neural network model to obtain layered visual vectors;
post-processing the layered visual vector, wherein post-processing comprises: performing dimension reduction processing and alignment processing;
taking the word segmentation text vector as a target, taking the correlation between the layered visual vector and the word segmentation text vector as a weight coefficient, and obtaining a corrected text characteristic by using a dot product type attention mechanism;
taking the layered visual vector as a target, taking the correlation between the word segmentation text vector and the layered visual vector as a weight coefficient, and obtaining a corrected visual characteristic by using a dot product type attention mechanism;
performing time-level self-attention fusion on the corrected text features at different moments to obtain the final text feature;
performing spatial-level self-attention fusion on the corrected visual features at different positions to obtain the final visual feature;
and after splicing the final text feature and the final visual feature, inputting them into a neural network model with a softmax fully connected layer to perform social network false news detection.
The embodiment of the invention provides a social network false news detection device, which performs false news detection on a social network, detects false news in time and helps prevent such news from spreading widely. The device comprises:
the news data acquisition module is used for acquiring social network news data, and the social network news data comprises news character elements and news image elements;
the text characteristic preprocessing module is used for preprocessing the news character elements, wherein the preprocessing comprises the following steps: special symbol removing processing, word segmentation processing, stop word removing processing and word embedding coding processing;
the text feature extraction module is used for extracting text features of the preprocessed news character elements by adopting a Bi-LSTM neural network model to obtain word segmentation text vectors;
the visual feature extraction module is used for extracting visual features of the news image elements by adopting a VGG-19 neural network model to obtain layered visual vectors;
a visual feature post-processing module for post-processing the layered visual vectors, wherein the post-processing comprises: dimension reduction processing and alignment processing;
the visual fusion text module is used for obtaining corrected text characteristics by taking the word segmentation text vector as a target and taking the correlation between the layered visual vector and the word segmentation text vector as a weight coefficient and utilizing a dot product type attention mechanism;
the text fusion visual module is used for obtaining corrected visual features by taking the layered visual vectors as targets and taking the correlation between the word segmentation text vectors and the layered visual vectors as weight coefficients and utilizing a dot product type attention mechanism;
the text self-fusion module is used for performing time-level self-attention fusion on the corrected text features at different moments to obtain final text features;
the vision self-fusion module is used for carrying out spatial level self-attention fusion on the corrected vision characteristics at different positions to obtain final vision characteristics;
and the false news detection module is used for splicing the final text feature and the final visual feature, inputting them into the neural network model with the softmax fully connected layer, and performing social network false news detection.
The embodiment of the invention also provides computer equipment which comprises a memory, a processor and a computer program which is stored on the memory and can run on the processor, wherein the processor realizes the social network false news detection method when executing the computer program.
The embodiment of the invention also provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the social network false news detection method is realized.
The embodiment of the invention also provides a computer program product, which comprises a computer program, and when the computer program is executed by a processor, the method for detecting the false news of the social network is realized.
The embodiment of the invention obtains social network news data, wherein the social network news data comprise news text elements and news image elements; preprocesses the news text elements, wherein the preprocessing comprises: special-symbol removal, word segmentation, stop-word removal and word-embedding encoding; extracts text features from the preprocessed news text elements with a Bi-LSTM neural network model to obtain word-segmentation text vectors; extracts visual features from the news image elements with a VGG-19 neural network model to obtain layered visual vectors; post-processes the layered visual vectors, wherein the post-processing comprises: dimension reduction and alignment; takes the word-segmentation text vector as the target and the correlation between the layered visual vector and the word-segmentation text vector as the weight coefficient, and obtains the corrected text features with a dot-product attention mechanism; takes the layered visual vector as the target and the correlation between the word-segmentation text vector and the layered visual vector as the weight coefficient, and obtains the corrected visual features with a dot-product attention mechanism; performs time-level self-attention fusion on the corrected text features at different moments to obtain the final text feature; performs spatial-level self-attention fusion on the corrected visual features at different positions to obtain the final visual feature; and after splicing the final text feature and the final visual feature, inputs them into a neural network model with a softmax fully connected layer to perform social network false news detection.
The embodiment of the invention extracts text features with a Bi-LSTM neural network model, extracts visual features with a VGG-19 neural network model, and corrects the text features and the visual features respectively with a dot-product attention mechanism; the hidden-layer visual features retain edge traces more easily and thus assist authenticity identification, and the idea of fusing images with text and fusing text with images fully realizes cross-modal interaction. In addition, intra-modal fusion is also considered: time-level self-attention fusion is performed on the corrected text features at different moments to obtain the final text feature, spatial-level self-attention fusion is performed on the corrected visual features at different positions to obtain the final visual feature, and after the final text feature and the final visual feature are spliced they are input into a fully connected neural network model with softmax. Deep fusion is realized by effectively combining the spatio-temporal effect, so that false news is detected in time, which helps prevent such news from spreading widely.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts. In the drawings:
FIG. 1 is a schematic diagram of a method for detecting false news in a social network according to an embodiment of the present invention;
FIG. 2 is a flow chart of social network false news detection in an embodiment of the present invention;
FIG. 3 is a block diagram of an apparatus for detecting false news in a social network according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the embodiments of the present invention are further described in detail below with reference to the accompanying drawings. The exemplary embodiments and descriptions of the present invention are provided to explain the present invention and not to limit the present invention.
As mentioned above, there is a strong need for automated fake news detection algorithms that can detect fake news as early as possible and help prevent this type of news from spreading widely. The development of artificial intelligence provides effective technical means for false news detection; existing methods fall mainly into the following two categories: methods based on manual features and methods based on neural networks. Methods based on manual features mostly extract key features, such as lexical features and semantic features, from the text content of news. Methods based on neural networks overcome the drawback of manual feature extraction and use a Recurrent Neural Network (RNN) or a Convolutional Neural Network (CNN) to learn and judge the authenticity of news end to end. However, these methods extract single-modality features, and in the current multi-source heterogeneous scenario of social media their generalization ability is greatly weakened.
In order to perform false news detection on a social network, detect false news in time and help prevent the serious spread of the news, an embodiment of the present invention provides a method for detecting false news on a social network, as shown in fig. 1, where the method may include:
step 101, obtaining social network news data, wherein the social network news data comprise news character elements and news image elements;
step 102, preprocessing the news character elements, wherein the preprocessing comprises the following steps: special symbol removing processing, word segmentation processing, stop word removing processing and word embedding coding processing;
103, extracting text features of the preprocessed news character elements by adopting a Bi-LSTM neural network model to obtain word segmentation text vectors;
104, extracting visual features of the news image elements by adopting a VGG-19 neural network model to obtain layered visual vectors;
step 105, performing post-processing on the layered visual vector, wherein the post-processing comprises: dimension reduction processing and alignment processing;
step 106, taking the word segmentation text vector as a target, taking the correlation between the layered visual vector and the word segmentation text vector as a weight coefficient, and obtaining a corrected text characteristic by using a dot product type attention mechanism;
step 107, taking the layered visual vector as a target, taking the correlation between the word segmentation text vector and the layered visual vector as a weight coefficient, and obtaining a corrected visual characteristic by using a dot product type attention mechanism;
step 108, performing time-level self-attention fusion on the corrected text features at different moments to obtain the final text feature;
step 109, performing spatial level self-attention fusion on the corrected visual features at different positions to obtain final visual features;
and step 110, after splicing the final text feature and the final visual feature, inputting them into a neural network model with a softmax fully connected layer to perform false news detection on the social network.
As shown in fig. 1, in the embodiment of the present invention, a Bi-LSTM neural network model is used to extract text features, a VGG-19 neural network model is used to extract visual features, and a dot product type attention mechanism is used to modify the text features and the visual features respectively, so that edge traces are more easily retained, and an auxiliary effect on authenticity identification is achieved. In addition, intra-modal fusion is considered, time-level self-attention fusion is carried out on corrected text features at different moments to obtain final text features, space-level self-attention fusion is carried out on corrected visual features at different positions to obtain final visual features, the final text features and the final visual features are spliced and then input into a full-connection-layer neural network model with softmax, and deep fusion is achieved by effectively combining the time-space effect, so that false news is timely detected, and the method is beneficial to preventing the great diffusion of the news.
The inventor finds that multi-modal features have better characteristics than single-modal features and are gradually becoming a research hotspot of false news detection. First, multi-modality presents multiple aspects or angles of the news content; second, the information acquired from different modalities can complement each other when identifying the authority of news; furthermore, different sources of information often involve different domain expertise, for example NLP experts are good at handling text and CV experts are good at handling images; most importantly, the real world is often constructed from complex cross-modal information such as text, images and speech. Therefore, the multi-modal idea is important for false news detection, and how to extract multi-modal features has become a technical difficulty of current false news detection. Multi-modal features involve two links, multi-modal feature representation and multi-modal feature fusion, and because of cross-modal interaction, how to effectively fuse the features is still a challenging problem. At present, most multi-modal fusion algorithms adopt a simple concatenation approach, which is too crude. The inventor finds that the attRNN model can fuse the representations of different modalities by means of an attention mechanism; specifically, it fuses the top-level text features by taking the top-level image features of a pre-trained model as weights. Visual elements whose semantics are similar to the text elements should be given greater weight, but there is no explicit mechanism to guarantee the validity of this association. The embodiment of the invention adopts an RNN to extract the text features of the text modality of the news at the time level, adopts a CNN to extract the visual features of the image modality of the news at the spatial level, and then adopts an attention mechanism to perform inter-modal fusion. Specifically, starting from the text features, the visual features are used as attention weights to obtain the corrected text features; similarly, starting from the visual features, the text features are used as attention weights to obtain the corrected visual features. On this basis, the corrected text features at different moments are subjected to attention-weighted summation, and the corrected visual features at different positions are subjected to attention-weighted summation. Finally, the two feature vectors are spliced as the cross-fused feature representation and input into a classifier to obtain the authenticity label of the news.
The embodiment of the invention considers the text feature stream of the news text modality at different moments and the visual feature stack of the image modality at different levels. Considering the (query, key, value) mapping of the attention mechanism, the text feature stream and the visual feature stack are respectively taken as the query, with the visual feature stack and the text feature stream taken as the key and value, to calculate the text features fused with vision and the visual features fused with text, thereby realizing inter-modal fusion; on this basis, self-attention is applied to the corrected text features at different moments and to the corrected visual features at different levels, thereby realizing intra-modal fusion and finally achieving the effect of cross-attention fusion.
Each step is analyzed in detail below.
In step 101, social network news data is obtained, which includes news text elements and news image elements.
In particular, the social network news data may be represented as (T, I), wherein the news text elements may be represented as T = {T_1, T_2, ..., T_S}, the news image elements may be represented as I = {I_1, I_2, ..., I_S}, S is the number of samples in the social network news data set, and the authenticity attribute label of a news item may be represented as y.
In step 102, preprocessing the news text element, wherein the preprocessing includes: special symbol removing processing, word segmentation processing, stop word removing processing and word embedding encoding processing.
In one embodiment, the word-embedding encoding process is performed on the news text element as follows:
and after special symbol removing processing, word segmentation processing and stop word removing processing are carried out on the news character elements, word embedding coding processing is carried out according to a word vector mapping table established in advance.
When the method is specifically implemented, firstly, special symbol removing processing is carried out on news character elements, then word segmentation processing is carried out on the news character elements subjected to the special symbol removing processing by utilizing a Jieba word segmentation algorithm, stop word removing processing is carried out on the news character elements subjected to the word segmentation processing, and word embedding coding processing is carried out according to the news character elements subjected to the stop word removing processing and a pre-established word vector mapping table.
For example, for the news text element of any news sample, the following preprocessing operation f(·) is performed: the text portion is first cleaned, that is, special symbols such as backslashes, slashes, hyphens, ampersands, '@' signs, brackets, '#', exclamation marks, colons and quotation marks are removed; then the cleaned text portion is segmented into words, for which the Jieba word segmentation algorithm may be adopted; and stop words such as '$', the digits 0-9, '?', '_' and ellipses are removed. Finally, according to the word vector mapping table (embedding mapping table) of each word, the low-dimensional text vector representation of the news, namely the preprocessed news text element, is obtained.
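The following is a minimal Python sketch of this preprocessing step f(·), assuming the Jieba segmenter is available; the special-symbol pattern, stop-word list, word-vector mapping table and padding length are illustrative placeholders rather than the exact ones used by the patent.

```python
# -*- coding: utf-8 -*-
import re
import jieba

# Illustrative placeholders: the exact symbol set, stop-word list and
# embedding mapping table are not fully specified in the text above.
SPECIAL_SYMBOLS = re.compile(r"[\\/@#!():&+\-…【】《》，。？！;]")
STOP_WORDS = set("0123456789$?_") | {"的", "了", "是", "在"}
WORD2ID = {"<pad>": 0, "<unk>": 1}          # pre-built word-vector (embedding) mapping table

def preprocess(text: str, max_len: int = 163) -> list:
    """f(.): remove special symbols -> Jieba word segmentation -> remove stop words
    -> map each word to its embedding id, padded/truncated to a fixed length."""
    cleaned = SPECIAL_SYMBOLS.sub("", text)
    tokens = [t for t in jieba.lcut(cleaned) if t.strip() and t not in STOP_WORDS]
    ids = [WORD2ID.get(t, WORD2ID["<unk>"]) for t in tokens][:max_len]
    return ids + [WORD2ID["<pad>"]] * (max_len - len(ids))
```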
In step 103, extracting text features of the preprocessed news character elements by adopting a Bi-LSTM neural network model to obtain word segmentation text vectors.
During specific implementation, the preprocessed news text elements, namely the low-dimensional text word vectors, are sequentially input into the Bi-LSTM neural network model for recurrent calculation to obtain the word-segmentation text vectors of the news, namely high-dimensional text word vectors. Assuming the sequence length is N, the features extracted by the Bi-LSTM neural network model can be expressed as [h_1, h_2, ..., h_N] = Bi-LSTM[f(T)], where h_i corresponds to the i-th word.
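A sketch of such a Bi-LSTM text encoder, assuming PyTorch; the vocabulary size, embedding dimension and the projection that maps the bidirectional output back to the hidden dimension are assumptions, while the sequence length 163 and hidden dimension 32 follow the embodiment described later.

```python
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    """Bi-LSTM text feature extractor: word ids -> [h_1, ..., h_N], one vector per word."""
    def __init__(self, vocab_size=50000, emb_dim=100, hid_dim=32):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.bilstm = nn.LSTM(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        # project the 2*hid_dim bidirectional output back to hid_dim so that the
        # text features match the hidden dimension of the aligned visual features
        self.proj = nn.Linear(2 * hid_dim, hid_dim)

    def forward(self, ids):                       # ids: (batch, N)
        h, _ = self.bilstm(self.embedding(ids))   # (batch, N, 2*hid_dim)
        return self.proj(h)                       # [h_1, ..., h_N]: (batch, N, hid_dim)

# usage: a batch of 8 news items, each padded to N = 163 words
feats = TextEncoder()(torch.randint(0, 50000, (8, 163)))   # shape (8, 163, 32)
```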
In step 104, visual feature extraction is performed on the news image elements by adopting a VGG-19 neural network model to obtain layered visual vectors.
In specific implementation, for a news image element of any news sample, a pre-trained VGG-19 neural network model is sequentially input, and a layered visual vector of the news is obtained.
It should be noted that the feature extraction is performed by using a pre-trained Bi-LSTM neural network model and a VGG-19 neural network model, the Bi-LSTM neural network model is pre-trained by using the historical news character elements, and the VGG-19 neural network model is pre-trained by using the historical news image elements.
In step 105, post-processing the layered visual vector, wherein post-processing comprises: dimension reduction processing and alignment processing.
In one embodiment, post-processing the layered visual vector comprises:
carrying out average pooling treatment on each level in the layered visual vectors to obtain the dimensionality-reduced vector of each level;
and inputting the vectors subjected to dimensionality reduction of each level into a full-connection layer neural network model to obtain the aligned layered visual vectors of hidden layer dimensionalities.
In specific implementation, the layered visual vector of the news is composed of tensors of different dimensions, and the size of each layer is <number of channels, width, height>. In order to map tensors of different sizes to vectors of the same dimension, the invention adopts two techniques: average pooling and dimension alignment. Specifically, the average value is first calculated within each channel; then the vector whose dimension is the number of channels is fed into a fully connected layer, and its dimension is converted into the number of hidden units of the Bi-LSTM in the text extractor. If this step is denoted as g(·), the result extracted from VGG-19 can be expressed as [c_1, c_2, ..., c_M] = g[VGG-19(V)].
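A sketch of the post-processing step g(·), assuming PyTorch; using one alignment layer per distinct channel count is an implementation assumption, the text above only requires that every level end up with the Bi-LSTM hidden dimension.

```python
import torch
import torch.nn as nn

class VisualPostProcess(nn.Module):
    """g(.): average-pool each level tensor <channels, width, height> over width/height,
    then align the channel-dimension vector to the Bi-LSTM hidden dimension."""
    def __init__(self, channel_sizes=(64, 128, 256, 512), hid_dim=32):
        super().__init__()
        # one alignment (fully connected) layer per distinct channel count in VGG-19
        self.align = nn.ModuleDict({str(c): nn.Linear(c, hid_dim) for c in channel_sizes})

    def forward(self, level_tensors):             # list of (batch, C, W, H) tensors
        aligned = []
        for t in level_tensors:
            pooled = t.mean(dim=(2, 3))           # average pooling -> (batch, C)
            aligned.append(self.align[str(pooled.size(1))](pooled))
        return torch.stack(aligned, dim=1)        # [c_1, ..., c_M]: (batch, M, hid_dim)

# usage with two dummy levels of different sizes
levels = [torch.randn(8, 64, 224, 224), torch.randn(8, 512, 7, 7)]
vis = VisualPostProcess()(levels)                 # shape (8, 2, 32)
```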
In step 106, the modified text features are obtained by using a dot-product attention mechanism with the segmented text vectors as targets and the correlations between the layered visual vectors and the segmented text vectors as weight coefficients.
In one embodiment, the modified text features are obtained using a dot-product attention mechanism as follows:
Q'_1 = softmax(Q_1 K_1^T / sqrt(d_k)) V_1

wherein Q'_1 is the corrected text feature, Q_1 is the word-segmentation text vector, K_1 and V_1 are the layered visual vector, d_k is the scaling factor, and T denotes the transpose.

In particular, the text (Q1) is re-expressed with the image (V1) using a dot-product attention mechanism. The specific idea is to relate the image (K1) and the text (Q1) to calculate their correlation, then take the calculated correlation as weights to weight and superpose the vectors in the image (V1), and finally generate the corrected text (Q1') with the text (Q1) as the target. According to the above calculation formula, the text feature set [h_1, h_2, ..., h_N] is regarded as Q1 and the visual feature set [c_1, c_2, ..., c_M] as K1 and V1, neglecting the scaling factor. The text features are input in turn, their attention weights over the visual features are calculated and aggregated, and a text feature representation aggregated from the visual features is obtained:

h'_i = Σ_{j=1..M} α_{ij} c_j

wherein α_{ij} = exp(h_i · c_j) / Σ_{j'=1..M} exp(h_i · c_j') is the attention weight of the text feature h_i relative to the visual feature c_j.
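A compact PyTorch sketch of this scaled dot-product fusion; the batch size and random tensors are placeholders, and the same function also yields the corrected visual features of step 107 below by swapping the roles of the two modalities.

```python
import math
import torch

def dot_product_attention(Q, K, V):
    """Corrected features: softmax(Q K^T / sqrt(d_k)) V, with Q as the target modality."""
    scores = Q @ K.transpose(-2, -1) / math.sqrt(Q.size(-1))   # correlation of Q with K
    return torch.softmax(scores, dim=-1) @ V                   # weighted superposition of V

text_feats = torch.randn(8, 163, 32)   # [h_1 .. h_N], N = 163 words
vis_feats = torch.randn(8, 37, 32)     # [c_1 .. c_M], M = 37 levels
corrected_text = dot_product_attention(text_feats, vis_feats, vis_feats)  # (8, 163, 32)
corrected_vis = dot_product_attention(vis_feats, text_feats, text_feats)  # (8, 37, 32)
```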
in step 107, the modified visual features are obtained by using a dot-product attention mechanism with the target of the layered visual vector and the correlation between the segmented text vector and the layered visual vector as a weight coefficient.
In one embodiment, the modified visual characteristics are obtained using a dot-product attention mechanism as follows:
Q'_2 = softmax(Q_2 K_2^T / sqrt(d_k)) V_2

wherein Q'_2 is the corrected visual feature, Q_2 is the layered visual vector, K_2 and V_2 are the word-segmentation text vector, d_k is the scaling factor, and T denotes the transpose.

In specific implementation, the text (K2) and the image (Q2) are related to calculate their correlation, then the calculated correlation is taken as weights to weight and superpose the vectors in the text (V2), and finally the corrected image (Q2') is generated with the image (Q2) as the target. According to the above calculation formula, the visual feature set [c_1, c_2, ..., c_M] is regarded as Q2 and the text feature set [h_1, h_2, ..., h_N] as K2 and V2. The visual features are input in turn, their attention weights over the text features are calculated and aggregated, and a visual feature representation aggregated from the text features is obtained:

c'_j = Σ_{i=1..N} β_{ji} h_i

wherein β_{ji} = exp(c_j · h_i) / Σ_{i'=1..N} exp(c_j · h_i') is the attention weight of the visual feature c_j relative to the text feature h_i.
in step 108, time-level self-attention fusion is performed on the corrected text features at different moments to obtain final text features.
In specific implementation, on the basis of the image-fused text, self-attention fusion at the time level is performed on the text vectors at different moments. Specifically, the correlation of words 1, 2, ..., N relative to the last word is calculated, and the top-level text features are weighted and superposed in turn to obtain the final text feature T.
In step 109, spatial level self-attention fusion is performed on the corrected visual features at different positions to obtain final visual features.
In specific implementation, on the basis of the text-fused image, the invention performs self-attention fusion at the spatial level on the visual vectors at different positions. Specifically, the correlation of levels 1, 2, ..., M relative to the last level is calculated, and the top-level visual features are weighted and superposed in turn to obtain the final visual feature I.
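A sketch of this intra-modal self-fusion, assuming PyTorch; treating the last word / last level as the attention query follows the description above, while the scaling by sqrt(d) is an assumption carried over from the dot-product attention formula.

```python
import torch

def self_fuse_to_last(feats):
    """Self-attention fusion of a sequence of corrected features into one vector,
    using the last element (last word in time / last level in space) as the query."""
    query = feats[:, -1, :]                                     # (batch, d)
    scores = (feats * query.unsqueeze(1)).sum(-1)               # correlation with the last element
    weights = torch.softmax(scores / feats.size(-1) ** 0.5, dim=1)
    return (weights.unsqueeze(-1) * feats).sum(dim=1)           # weighted superposition: (batch, d)

corrected_text = torch.randn(8, 163, 32)   # corrected text features at N moments
corrected_vis = torch.randn(8, 37, 32)     # corrected visual features at M levels
final_text = self_fuse_to_last(corrected_text)   # final text feature T: (8, 32)
final_vis = self_fuse_to_last(corrected_vis)     # final visual feature I: (8, 32)
```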
In step 110, after the final text feature and the final visual feature are spliced, they are input into a neural network model with a softmax fully connected layer to perform social network false news detection.
In specific implementation, after the final text feature T and the final visual feature I obtained through the two corrections are spliced, a fully connected layer with softmax is attached as the false news detector. Let x denote the feature and y the label; the detector outputs the probability of a false label, M_d(x; θ_d), where θ_d denotes the parameters of the detector (the neural network model with the softmax fully connected layer). The cross-entropy loss is calculated, and the loss function is continuously reduced by gradient descent so that the predicted label approaches the ground truth, namely:

L_d(θ_d) = -E_{(x,y)~(X,Y)} [ y log M_d(x; θ_d) + (1 - y) log(1 - M_d(x; θ_d)) ]

wherein x is the feature, y is the label, M_d(x; θ_d) is the output probability of a false label, and θ_d denotes the parameters of the neural network model with the softmax fully connected layer. It should be noted that a pre-trained neural network model with a softmax fully connected layer is adopted; it is pre-trained with spliced historical final text features and historical final visual features.
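A sketch of the detector head and of this loss, assuming PyTorch; the batch of random feature vectors is a placeholder, and applying binary cross-entropy to the softmax probability of the "false" class mirrors the formula for L_d(θ_d) above.

```python
import torch
import torch.nn as nn

class FakeNewsDetector(nn.Module):
    """Fully connected layer with softmax over the spliced final text and visual features."""
    def __init__(self, hid_dim=32, num_classes=2):
        super().__init__()
        self.fc = nn.Linear(2 * hid_dim, num_classes)

    def forward(self, final_text, final_vis):
        x = torch.cat([final_text, final_vis], dim=-1)   # splice: (batch, 2*hid_dim)
        return torch.softmax(self.fc(x), dim=-1)         # M_d(x; theta_d)

detector = FakeNewsDetector()
final_text, final_vis = torch.randn(8, 32), torch.randn(8, 32)
probs = detector(final_text, final_vis)                  # (batch, 2)
y = torch.randint(0, 2, (8,)).float()                    # authenticity labels
# cross-entropy on the probability of the "false" class, as in L_d(theta_d)
loss = nn.functional.binary_cross_entropy(probs[:, 1], y)
```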
In traditional multi-modal false news detection algorithms, the fusion between the text and image modalities mostly adopts simple fusion represented by concatenation, and interaction between modalities is not considered. To address this problem, Jin et al. proposed a framework that fuses text and images with an attention mechanism, but the image features are the top-level features of the news in a pre-trained VGG-19, and the attention is only a one-way fusion from text to image, so certain limitations exist. The embodiment of the invention addresses this drawback: the extracted image features are vector representations of multiple levels of VGG-19, and the attention mechanism includes cross fusion in four directions, text-to-image, image-to-text, text-to-text and image-to-image. On the one hand, pre-trained features suited to the image classification field cannot reflect authenticity characteristics; compared with the top-level features, hidden-layer features retain edge traces more easily and assist authenticity identification. On the other hand, the idea of fusing images with text and fusing text with images through an attention mechanism can fully realize cross-modal interaction. In addition, on the basis of inter-modal fusion, intra-modal fusion is also considered through the self-attention of text and image, and deep fusion is realized by effectively combining the spatio-temporal effect. The attention-cross-fusion-based multi-modal false news detection algorithm is superior to the traditional algorithm in accuracy, precision, recall and F1-score.
In the embodiment of the invention, a microblog data set is taken as an example. After the matching of text and accompanying images is completed, the training data contain 7531 image-text pairs, of which real news accounts for 3783 pairs and false news for 3748 pairs; the rest are test data. In the test data, there are 996 items of real news and 1000 items of false news, a total of 2996 items. With the method of the embodiment of the invention, the loss function is reduced to 0.3177 after 100 epochs of training. Compared with the loss of 0.4277 of attRNN, a traditional multi-modal false news detection algorithm based on attention-mechanism modal fusion, this is an improvement of 25.7%. The accuracy on the training set is approximately 0.9956 and the accuracy on the validation set approximately 0.7571; compared with the training-set and validation-set accuracies of 0.8847 and 0.6095 of the traditional algorithm, these are improvements of 12.5% and 24.2%, respectively. The accuracy on the test set is 0.780, which is 31.8% higher than the 0.592 of the traditional algorithm. Specifically, the precision, recall and F1-score for real news in the test set are 0.795, 0.772 and 0.783, respectively, while those of the traditional algorithm are 0.594, 0.667 and 0.628; the three indicators are improved by 33.8%, 15.7% and 24.7% in turn. For false news in the test set, the precision, recall and F1-score are 0.764, 0.772 and 0.783, respectively, while those of the traditional algorithm are 0.591, 0.513 and 0.549; the three indicators are improved by 29.3%, 50.5% and 42.6% in turn.
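For reference, a sketch of how such indicators can be computed with scikit-learn; the label convention (0 = real news, 1 = false news) and the toy inputs are assumptions.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def report(y_true, y_pred):
    """Accuracy plus per-class precision / recall / F1-score, as reported above."""
    acc = accuracy_score(y_true, y_pred)
    p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, labels=[0, 1])
    return {"accuracy": acc,
            "real news": {"precision": p[0], "recall": r[0], "F1-score": f1[0]},
            "false news": {"precision": p[1], "recall": r[1], "F1-score": f1[1]}}

print(report([0, 1, 1, 0], [0, 1, 0, 0]))   # toy example
```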
A specific embodiment is given below to illustrate a specific application of the social network false news detection in the embodiment of the present invention. As shown in fig. 2, the social network false news detection is performed as follows:
the method comprises the following steps: and (6) data processing. Finishing the data sorting work in sequence according to four paths of real news of training data, false news of training data, real news of test data and false news of test data, wherein the formats of all news are unified into id, text, image and label.
(1) Removing special characters;
(2) word segmentation;
(3) removing stop words;
(4) and (5) word embedding coding.
Step two: text feature extraction. The embedding vector representation of the news is sequentially input into the Bi-LSTM for recurrent calculation to obtain the high-dimensional text vector representation of the news. Assuming the sequence length is N, the features extracted by the Bi-LSTM can be expressed as [h_1, h_2, ..., h_N] = Bi-LSTM[f(T)], where h_i corresponds to the i-th word.
Step three: visual feature extraction. The image element of any news sample is sequentially input into the pre-trained VGG-19 to obtain the layered visual vector representation of the news.
Step four: visual feature post-processing. The layered visual vector representation of the news is composed of tensors of different dimensions, and the size of each layer is <number of channels, width, height>. In order to map tensors of different sizes to vectors of the same dimension, the invention adopts two techniques: average pooling and dimension alignment. Specifically, the average value is first calculated within each channel; then the vector whose dimension is the number of channels is fed into a fully connected layer, and its dimension is converted into the number of hidden units of the Bi-LSTM in the text extractor. If this step is denoted as g(·), the result extracted from VGG-19 can be expressed as [c_1, c_2, ..., c_M] = g[VGG-19(V)].
Step five: text feature extraction. The preprocessed text of each news item is sequentially input into the Bi-LSTM, and the hidden-state vectors of the neural network are stored respectively. The size of the final text feature vector is 163 × 32, where 163 denotes the maximum word number seq_len of the text and 32 denotes the hidden layer dimension hid_dim;
Step six: image feature extraction. Each news image is sequentially input into VGG-19, and the hidden-layer position vectors of the neural network are stored respectively.
(1) The input initial size is 224 × 224 × 3
(2) Level 0(3 × 3 × 64 convolution): the output size is 224 × 224 × 64;
(3) level 1 (relu): the output size is 224 × 224 × 64
(4) Level 2(3 × 3 × 64 convolution): the output size is 224 × 224 × 64
(5) Level 3 (relu): the output size is 224 × 224 × 64
(6) Level 4(2 × 2 pooling): output size of 112 × 112 × 64
(7) Level 5(3 × 3 × 128 convolution): the output size is 112 × 112 × 128
……
(37) Level 35 (relu): the output size is 14 × 14 × 512
(38) Level 36(2 × 2 pooling): the output size is 7 × 7 × 512
Step seven: image post-processing.
(1) Dimension reduction: the tensor of each level is average-pooled over the plane formed by width and height, reducing the <width, height, channels> tensor to a vector of the channel dimension; hidden-layer vectors with dimension 64, 128, 256 or 512 are output.
(2) Alignment: the vector of each level is sequentially fed into a fully connected layer, aligning the channel-dimension vectors to vectors of the hidden-layer dimension. The size of the final image feature is 37 × 32, where 37 is the maximum level number lay_len of the image and 32 is the hidden layer dimension hid_dim.
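A sketch covering steps six and seven together, assuming PyTorch and torchvision; loading without pre-trained weights and using randomly initialised alignment layers are simplifications for illustration, while the level count of 37 corresponds to the VGG-19 feature layers listed above.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19

def layered_visual_vectors(images, hid_dim=32):
    """Collect the output of every layer of the VGG-19 feature extractor (levels 0-36),
    average-pool each over width/height, and align it to hid_dim."""
    features = vgg19().features.eval()      # in practice, ImageNet pre-trained weights are loaded
    align = {c: nn.Linear(c, hid_dim) for c in (64, 128, 256, 512)}   # alignment layers
    levels, x = [], images                  # images: (batch, 3, 224, 224)
    with torch.no_grad():
        for layer in features:              # conv / ReLU / max-pool layers, level 0 .. level 36
            x = layer(x)
            pooled = x.mean(dim=(2, 3))     # average pooling -> (batch, channels)
            levels.append(align[pooled.size(1)](pooled))
    return torch.stack(levels, dim=1)       # (batch, 37, hid_dim)

vis_stack = layered_visual_vectors(torch.randn(2, 3, 224, 224))   # shape (2, 37, 32)
```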
step eight: the images are fused in text. Q1As an image vector (dimension 37X 32),K1And V1Is a text vector (dimension 136 x 32). According to the attention mechanism, the weight is a pair matrix K1Q1 tThe softmax line of (dimension 163 × 37) is normalized, and the dimension is 163. Finally, the image vector is multiplied by a weight coefficient in an overlapping manner to obtain a corrected image vector (dimension 37 × 32).
Step nine: fusing the image into the text. Q2 is the text vector (dimension 163 × 32), and K2 and V2 are the image vector (dimension 37 × 32). According to the attention mechanism, the weights are obtained by column-wise softmax normalization of the matrix K2·Q2^T (dimension 37 × 163), with dimension 163. Finally, the text vector is weighted and superposed with the weight coefficients to obtain the corrected text vector (dimension 163 × 32).
Step ten: text self-fusion. The attention mechanism is applied again, with Q1, K1 and V1 all taken as the corrected text vector obtained in step nine. The vectors at the individual moments are fused toward the last moment in time, and the final output is the vector of the last word of the text (dimension 32).
Step eleven: image self-fusion. The attention mechanism is applied again, with Q2, K2 and V2 all taken as the corrected image vector obtained in step eight. The vectors at the individual positions are fused toward the last position in space, and the final output is the vector of the last level of the image (dimension 32).
Step twelve: false news detection. The vectors from step ten and step eleven are spliced (dimension 64) and input into a fully connected layer with softmax, and the authenticity label of the news (dimension 2) is finally output.
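A compact sketch of the dimension flow of steps eight to twelve, assuming PyTorch; the random tensors and the untrained linear layer are placeholders used only to show the shapes.

```python
import math
import torch
import torch.nn as nn

def attend(Q, K, V):                                  # scaled dot-product attention
    w = torch.softmax(Q @ K.transpose(-2, -1) / math.sqrt(Q.size(-1)), dim=-1)
    return w @ V

text = torch.randn(1, 163, 32)                        # step five output: 163 x 32
image = torch.randn(1, 37, 32)                        # step seven output: 37 x 32
corr_image = attend(image, text, text)                # step eight: corrected image, 37 x 32
corr_text = attend(text, image, image)                # step nine: corrected text, 163 x 32
final_text = attend(corr_text[:, -1:, :], corr_text, corr_text).squeeze(1)      # step ten: dim 32
final_image = attend(corr_image[:, -1:, :], corr_image, corr_image).squeeze(1)  # step eleven: dim 32
fused = torch.cat([final_text, final_image], dim=-1)  # step twelve: spliced vector, dim 64
probs = torch.softmax(nn.Linear(64, 2)(fused), dim=-1)   # authenticity label probabilities, dim 2
```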
Step thirteen: for the training set of step one, a neural network is built according to the learning strategies of steps two to twelve, and the cross-entropy loss function is calculated. During learning, the Adam optimizer and stochastic mini-batch gradient descent with batch_size = 128 are adopted to reduce the loss, the weights are continuously adjusted, and training of 100 epochs is completed. Finally, testing is completed on the test set of step one, and indicators such as accuracy, precision, recall and F1-score are calculated.
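A minimal sketch of this training loop, assuming PyTorch; `model` is assumed to wrap the whole network of steps two to twelve and to output raw class scores, `train_loader` is assumed to yield (text_ids, image, label) batches of size 128, and the learning rate is an assumed value.

```python
import torch
import torch.nn as nn

def train(model, train_loader, epochs=100, lr=1e-3):
    """Step-thirteen training loop: Adam optimizer, mini-batch gradient descent,
    cross-entropy loss, 100 epochs."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()                 # cross-entropy loss function
    model.train()
    for epoch in range(epochs):
        for text_ids, image, label in train_loader:   # stochastic batches of size 128
            logits = model(text_ids, image)
            loss = criterion(logits, label)
            optimizer.zero_grad()
            loss.backward()                           # reduce the loss, adjust the weights
            optimizer.step()
```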
Based on the same inventive concept, the embodiment of the present invention further provides a device for detecting false news in a social network, as described in the following embodiments. Because the principles for solving the problems are similar to the social network false news detection method, the implementation of the social network false news detection device can be referred to the implementation of the method, and repeated parts are not described again.
Fig. 3 is a structural diagram of a social network false news detection apparatus according to an embodiment of the present invention, and as shown in fig. 3, the social network false news detection apparatus includes:
a news data obtaining module 301, configured to obtain social network news data, where the social network news data includes news text elements and news image elements;
a text feature preprocessing module 302, configured to preprocess the news text element, where the preprocessing includes: special symbol removing processing, word segmentation processing, stop word removing processing and word embedding coding processing;
the text feature extraction module 303 is configured to perform text feature extraction on the preprocessed news character elements by using a Bi-LSTM neural network model to obtain word segmentation text vectors;
the visual feature extraction module 304 is configured to perform visual feature extraction on the news image elements by using a VGG-19 neural network model to obtain layered visual vectors;
a visual feature post-processing module 305, configured to post-process the layered visual vector, where the post-processing includes: performing dimension reduction processing and alignment processing;
the visual fusion text module 306 is configured to obtain modified text features by using a dot product attention mechanism with the segmented text vectors as targets and the correlations between the layered visual vectors and the segmented text vectors as weight coefficients;
a text fusion vision module 307, configured to obtain a modified vision feature by using a dot product type attention mechanism with a layered vision vector as a target and a correlation between a word segmentation text vector and the layered vision vector as a weight coefficient;
the text self-fusion module 308 is configured to perform time-level self-attention fusion on the corrected text features at different moments to obtain the final text feature;
the vision self-fusion module 309 is configured to perform spatial level self-attention fusion on the corrected vision features at different positions to obtain final vision features;
and the false news detection module 310 is configured to splice the final text feature and the final visual feature, input them into the neural network model with the softmax fully connected layer, and perform false news detection on the social network.
In one embodiment, the visual fusion text module 306 is further configured to derive the modified text feature using a dot-product attention mechanism as follows:
Q'_1 = softmax(Q_1 K_1^T / sqrt(d_k)) V_1

wherein Q'_1 is the corrected text feature, Q_1 is the word-segmentation text vector, K_1 and V_1 are the layered visual vector, d_k is the scaling factor, and T denotes the transpose.
In one embodiment, the text fusion vision module 307 is further configured to obtain the modified visual features using a dot-product attention mechanism according to the following formula:
Q'_2 = softmax(Q_2 K_2^T / sqrt(d_k)) V_2

wherein Q'_2 is the corrected visual feature, Q_2 is the layered visual vector, K_2 and V_2 are the word-segmentation text vector, d_k is the scaling factor, and T denotes the transpose.
Based on the aforementioned inventive concept, as shown in fig. 4, an embodiment of the present invention further provides a computer device 400, which includes a memory 410, a processor 420, and a computer program 430 stored in the memory 410 and running on the processor 420, wherein the processor 420 executes the computer program 430 to implement the social network false news detection method.
Based on the foregoing inventive concept, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the method for detecting false news in social networks is implemented.
The embodiment of the invention also provides a computer program product, which comprises a computer program, and when the computer program is executed by a processor, the method for detecting the false news of the social network is realized.
The embodiment of the invention extracts the text features with a Bi-LSTM neural network model, extracts the visual features with a VGG-19 neural network model, and corrects the text features and the visual features respectively with a dot-product attention mechanism; the hidden-layer visual features retain edge traces more easily and thus assist authenticity identification, and the idea of fusing images with text and fusing text with images fully realizes cross-modal interaction. In addition, intra-modal fusion is also considered: time-level self-attention fusion is performed on the corrected text features at different moments to obtain the final text feature, spatial-level self-attention fusion is performed on the corrected visual features at different positions to obtain the final visual feature, and after the final text feature and the final visual feature are spliced they are input into a fully connected neural network model with softmax. Deep fusion is realized by effectively combining the spatio-temporal effect, so that false news is detected in time, which helps prevent such news from spreading widely.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention, and it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A social network false news detection method is characterized by comprising the following steps:
obtaining social network news data, wherein the social network news data comprise news text elements and news image elements;
preprocessing the news text elements, wherein the preprocessing comprises: special symbol removal, word segmentation, stop word removal, and word embedding encoding;
extracting text features from the preprocessed news text elements by using a Bi-LSTM neural network model to obtain word-segmented text vectors;
extracting visual features from the news image elements by using a VGG-19 neural network model to obtain layered visual vectors;
post-processing the layered visual vectors, wherein the post-processing comprises: dimension reduction and alignment;
taking the word-segmented text vectors as the target and the correlation between the layered visual vectors and the word-segmented text vectors as the weight coefficients, obtaining corrected text features by using a dot-product attention mechanism;
taking the layered visual vectors as the target and the correlation between the word-segmented text vectors and the layered visual vectors as the weight coefficients, obtaining corrected visual features by using a dot-product attention mechanism;
performing time-level self-attention fusion on the corrected text features at different moments to obtain final text features;
performing spatial-level self-attention fusion on the corrected visual features at different positions to obtain final visual features;
and concatenating the final text features with the final visual features, inputting the concatenated features into a fully-connected neural network model with softmax, and performing false news detection on the social network.
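A minimal sketch of the time-level and spatial-level self-attention fusion recited in the last steps of claim 1, built here on torch.nn.MultiheadAttention; the single attention head and the mean pooling over steps are assumptions made for illustration only.

```python
# Intra-modal self-attention fusion (assumed: one head, mean pooling over steps).
import torch
import torch.nn as nn


class SelfAttentionFusion(nn.Module):
    """Fuses a sequence of corrected features (over time steps for text, or
    over spatial positions for images) into one final feature vector."""

    def __init__(self, dim):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)

    def forward(self, x):                 # x: (batch, steps, dim)
        fused, _ = self.attn(x, x, x)     # self-attention across all steps
        return fused.mean(dim=1)          # pool to the final feature (batch, dim)


# Time-level fusion of T corrected text features and spatial-level fusion of
# P corrected visual features (T=30, P=49 and dim=512 are illustrative).
time_fusion, space_fusion = SelfAttentionFusion(512), SelfAttentionFusion(512)
final_text = time_fusion(torch.randn(8, 30, 512))      # -> (8, 512)
final_visual = space_fusion(torch.randn(8, 49, 512))   # -> (8, 512)
```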
2. The social network false news detection method according to claim 1, wherein the word embedding encoding of the news text elements is performed as follows:
after special symbol removal, word segmentation, and stop word removal are performed on the news text elements, word embedding encoding is performed according to a pre-established word vector mapping table.
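A minimal sketch of the preprocessing and word-embedding encoding of claim 2, assuming jieba as the word segmenter, a truncated stop-word list, and a small excerpt of the pre-established mapping table; none of these choices is mandated by the claim.

```python
# Illustrative preprocessing: strip special symbols, segment words, drop stop
# words, then encode via a pre-established word -> index mapping table.
import re
import jieba  # one possible Chinese word segmenter; any tokenizer could be used

STOP_WORDS = {"的", "了", "在", "是"}                 # assumed partial stop-word list
word2idx = {"<pad>": 0, "<unk>": 1, "新闻": 2}        # assumed mapping table excerpt


def preprocess(text, max_len=30):
    text = re.sub(r"[^\w\u4e00-\u9fff]+", " ", text)   # remove special symbols
    tokens = [t for t in jieba.lcut(text)
              if t.strip() and t not in STOP_WORDS]    # segmentation + stop words
    ids = [word2idx.get(t, word2idx["<unk>"]) for t in tokens][:max_len]
    return ids + [word2idx["<pad>"]] * (max_len - len(ids))   # pad to max_len


print(preprocess("这是一条社交网络新闻!"))   # prints a fixed-length list of indices
```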
3. The social network false news detection method according to claim 1, wherein the corrected text features are obtained by the dot-product attention mechanism according to the following formula:

$$Q_1' = \mathrm{softmax}\!\left(\frac{Q_1 K_1^{t}}{\sqrt{d_k}}\right) V_1$$

wherein $Q_1'$ denotes the corrected text features, $Q_1$ denotes the word-segmented text vectors, $K_1$ and $V_1$ denote the layered visual vectors, $d_k$ is the scaling factor, and the superscript $t$ denotes the transpose.
4. The social network false news detection method according to claim 1, wherein the corrected visual features are obtained by the dot-product attention mechanism according to the following formula:

$$Q_2' = \mathrm{softmax}\!\left(\frac{Q_2 K_2^{t}}{\sqrt{d_k}}\right) V_2$$

wherein $Q_2'$ denotes the corrected visual features, $Q_2$ denotes the layered visual vectors, $K_2$ and $V_2$ denote the word-segmented text vectors, $d_k$ is the scaling factor, and the superscript $t$ denotes the transpose.
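The formulas of claims 3 and 4 are two applications of the same scaled dot-product attention with the query, key, and value roles swapped, as the following sketch shows; the sqrt(d_k) scaling and the tensor shapes (T tokens, P image regions, dimension d) are illustrative assumptions.

```python
# Scaled dot-product attention shared by claims 3 and 4 (sqrt(d_k) scaling assumed).
import torch
import torch.nn.functional as F


def corrected_features(Q, K, V):
    d_k = Q.size(-1)
    weights = F.softmax(Q @ K.transpose(-2, -1) / d_k ** 0.5, dim=-1)
    return weights @ V


T, P, d = 30, 49, 512
text = torch.randn(T, d)       # word-segmented text vectors (Q1, also K2 and V2)
visual = torch.randn(P, d)     # layered, aligned visual vectors (Q2, also K1 and V1)

corrected_text = corrected_features(text, visual, visual)    # claim 3: shape (T, d)
corrected_visual = corrected_features(visual, text, text)    # claim 4: shape (P, d)
```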
5. An apparatus for detecting false news in a social network, comprising:
a news data acquisition module, configured to acquire social network news data, wherein the social network news data comprises news text elements and news image elements;
a text preprocessing module, configured to preprocess the news text elements, wherein the preprocessing comprises: special symbol removal, word segmentation, stop word removal, and word embedding encoding;
a text feature extraction module, configured to extract text features from the preprocessed news text elements by using a Bi-LSTM neural network model to obtain word-segmented text vectors;
a visual feature extraction module, configured to extract visual features from the news image elements by using a VGG-19 neural network model to obtain layered visual vectors;
a visual feature post-processing module, configured to post-process the layered visual vectors, wherein the post-processing comprises: dimension reduction and alignment;
a visual-to-text fusion module, configured to take the word-segmented text vectors as the target and the correlation between the layered visual vectors and the word-segmented text vectors as the weight coefficients, and obtain corrected text features by using a dot-product attention mechanism;
a text-to-visual fusion module, configured to take the layered visual vectors as the target and the correlation between the word-segmented text vectors and the layered visual vectors as the weight coefficients, and obtain corrected visual features by using a dot-product attention mechanism;
a text self-fusion module, configured to perform time-level self-attention fusion on the corrected text features at different moments to obtain final text features;
a visual self-fusion module, configured to perform spatial-level self-attention fusion on the corrected visual features at different positions to obtain final visual features;
and a false news detection module, configured to concatenate the final text features with the final visual features, input the concatenated features into a fully-connected neural network model with softmax, and perform false news detection on the social network.
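A minimal sketch of the false news detection module of claim 5: the final text and visual features are concatenated and passed through a fully-connected layer with softmax. The feature dimension, batch size, and label convention are assumptions for illustration.

```python
# Detection head: concatenate final text and visual features, classify with
# a fully-connected layer and softmax (dimensions and labels are assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F

final_text = torch.randn(8, 512)      # output of the text self-fusion module
final_visual = torch.randn(8, 512)    # output of the visual self-fusion module

classifier = nn.Linear(512 * 2, 2)    # two classes, e.g. real news vs. false news
probs = F.softmax(classifier(torch.cat([final_text, final_visual], dim=-1)), dim=-1)
prediction = probs.argmax(dim=-1)     # per-sample class index
```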
6. The social network false news detection apparatus of claim 5, wherein the visual-to-text fusion module is further configured to obtain the corrected text features by using the dot-product attention mechanism according to the following formula:

$$Q_1' = \mathrm{softmax}\!\left(\frac{Q_1 K_1^{t}}{\sqrt{d_k}}\right) V_1$$

wherein $Q_1'$ denotes the corrected text features, $Q_1$ denotes the word-segmented text vectors, $K_1$ and $V_1$ denote the layered visual vectors, $d_k$ is the scaling factor, and the superscript $t$ denotes the transpose.
7. The social network false news detection apparatus of claim 5, wherein the text-to-visual fusion module is further configured to obtain the corrected visual features by using the dot-product attention mechanism according to the following formula:

$$Q_2' = \mathrm{softmax}\!\left(\frac{Q_2 K_2^{t}}{\sqrt{d_k}}\right) V_2$$

wherein $Q_2'$ denotes the corrected visual features, $Q_2$ denotes the layered visual vectors, $K_2$ and $V_2$ denote the word-segmented text vectors, $d_k$ is the scaling factor, and the superscript $t$ denotes the transpose.
8. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 4 when executing the computer program.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the method of any of claims 1 to 4.
10. A computer program product, characterized in that the computer program product comprises a computer program which, when being executed by a processor, carries out the method of any one of claims 1 to 4.
CN202210168205.5A 2022-02-23 2022-02-23 Social network false news detection method and device Active CN114756763B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210168205.5A CN114756763B (en) 2022-02-23 2022-02-23 Social network false news detection method and device

Publications (2)

Publication Number Publication Date
CN114756763A true CN114756763A (en) 2022-07-15
CN114756763B CN114756763B (en) 2024-06-21

Family

ID=82325390

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210168205.5A Active CN114756763B (en) 2022-02-23 2022-02-23 Social network false news detection method and device

Country Status (1)

Country Link
CN (1) CN114756763B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170083623A1 (en) * 2015-09-21 2017-03-23 Qualcomm Incorporated Semantic multisensory embeddings for video search by text
CN111295669A (en) * 2017-06-16 2020-06-16 马克波尔公司 Image processing system
CN107688870A (en) * 2017-08-15 2018-02-13 中国科学院软件研究所 A kind of the classification factor visual analysis method and device of the deep neural network based on text flow input
US20210212651A1 (en) * 2020-01-09 2021-07-15 Ping An Technology (Shenzhen) Co., Ltd. Device and method for computer-aided diagnosis based on image
CN112966127A (en) * 2021-04-07 2021-06-15 北方民族大学 Cross-modal retrieval method based on multilayer semantic alignment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Qi; LI Fuhua; SUN Jinan: "Multimodal Learning Analytics: Learning Analytics Toward the Era of Computational Education" (多模态学习分析:走向计算教育时代的学习分析学), China Educational Technology (中国电化教育), no. 09, 10 September 2020 (2020-09-10) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115309860A (en) * 2022-07-18 2022-11-08 黑龙江大学 False news detection method based on pseudo twin network
CN115309860B (en) * 2022-07-18 2023-04-18 黑龙江大学 False news detection method based on pseudo twin network
CN116030295A (en) * 2022-10-13 2023-04-28 中电金信软件(上海)有限公司 Article identification method, apparatus, electronic device and storage medium
CN116052171A (en) * 2023-03-31 2023-05-02 国网数字科技控股有限公司 Electronic evidence correlation calibration method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN114756763B (en) 2024-06-21

Similar Documents

Publication Publication Date Title
CN110222140B (en) Cross-modal retrieval method based on counterstudy and asymmetric hash
CN114756763A (en) False news detection method and device for social network
Zhou et al. Salient object detection in stereoscopic 3D images using a deep convolutional residual autoencoder
CN113158862B (en) Multitasking-based lightweight real-time face detection method
CN110175986B (en) Stereo image visual saliency detection method based on convolutional neural network
CN111444881A (en) Fake face video detection method and device
CN112926396A (en) Action identification method based on double-current convolution attention
CN115438215B (en) Image-text bidirectional search and matching model training method, device, equipment and medium
CN112651940B (en) Collaborative visual saliency detection method based on dual-encoder generation type countermeasure network
CN111723841A (en) Text detection method and device, electronic equipment and storage medium
CN115116066A (en) Scene text recognition method based on character distance perception
CN115937655A (en) Target detection model of multi-order feature interaction, and construction method, device and application thereof
CN111461175A (en) Label recommendation model construction method and device of self-attention and cooperative attention mechanism
CN114282013A (en) Data processing method, device and storage medium
CN113360621A (en) Scene text visual question-answering method based on modal inference graph neural network
CN115223020A (en) Image processing method, image processing device, electronic equipment and readable storage medium
CN113870286A (en) Foreground segmentation method based on multi-level feature and mask fusion
Kumar et al. Pair wise training for stacked convolutional autoencoders using small scale images
CN116975350A (en) Image-text retrieval method, device, equipment and storage medium
CN114548274A (en) Multi-modal interaction-based rumor detection method and system
CN116933051A (en) Multi-mode emotion recognition method and system for modal missing scene
CN113254575B (en) Machine reading understanding method and system based on multi-step evidence reasoning
CN113159071A (en) Cross-modal image-text association anomaly detection method
CN117171746A (en) Malicious code homology analysis method and device, electronic equipment and storage medium
CN113780241B (en) Acceleration method and device for detecting remarkable object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant