CN115964626A - Community detection method based on dynamic multi-scale feature fusion network - Google Patents

Community detection method based on dynamic multi-scale feature fusion network

Info

Publication number
CN115964626A
Authority
CN
China
Prior art keywords
community
data
network
community detection
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211326522.1A
Other languages
Chinese (zh)
Inventor
赵雅靓
张春春
王金科
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan University
Original Assignee
Henan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan University filed Critical Henan University
Priority to CN202211326522.1A priority Critical patent/CN115964626A/en
Publication of CN115964626A publication Critical patent/CN115964626A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of community detection, and in particular to a community detection method based on a dynamic multi-scale feature fusion network, which comprises the following steps: acquiring a dataset to be subjected to community detection; preprocessing the dataset to be subjected to community detection to obtain target detection data features; and clustering the data in the dataset according to the target detection data features and a trained community detection network model to generate clustering result information. The invention adopts a dynamic multi-scale feature fusion network and uses two modules, TDCN-M and TDCN-S, to dynamically capture node information; it can be applied to dynamic and complex community detection scenarios and improves community detection accuracy.

Description

Community detection method based on dynamic multi-scale feature fusion network
Technical Field
The invention relates to the technical field of community detection, in particular to a community detection method based on a dynamic multi-scale feature fusion network.
Background
Community detection is a challenging network-analysis task that aims to partition the topology graph formed by a network into multiple disjoint subgraphs, thereby revealing hidden relationships inside the network. It is widely applied in fields such as interpersonal-relationship analysis and user-preference analysis. Current community detection techniques fall into two main categories: inferring the community structure by combining a probabilistic graphical model with prior knowledge, and using deep learning to convert complex network data into low-dimensional representations for representation learning. However, the conventional probabilistic graphical approach is difficult to apply to complex and dynamically changing community structures and struggles to fuse community feature information, so the accuracy of the community detection task is low. Deep learning, which has developed rapidly in recent years, converts high-dimensional data into low-dimensional representations; it can therefore adapt quickly to dynamically changing community structures and improve the accuracy of community detection by extracting both the features of community nodes and the structural features between nodes. Nevertheless, deep learning still has room for improvement in community detection. On the one hand, the deep learning architectures used for community detection are often shallow networks, so detection performance on large communities has not improved much; on the other hand, existing community detection techniques lack a flexible way to fuse network node features with the network topology. Exploring the application of a dynamic multi-scale feature fusion network to community detection is therefore worthwhile.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In order to solve the technical problem of low accuracy of community detection, the invention provides a community detection method based on a dynamic multi-scale feature fusion network.
The invention provides a community detection method based on a dynamic multi-scale feature fusion network, which comprises the following steps:
acquiring a dataset to be subjected to community detection;
preprocessing the dataset to be subjected to community detection to obtain target detection data features;
and clustering the data in the dataset to be subjected to community detection according to the target detection data features and a trained community detection network model to generate clustering result information.
Further, the preprocessing of the dataset to be subjected to community detection to obtain the target detection data features includes:
establishing a normalized adjacency matrix according to the dataset to be subjected to community detection;
and generating the target detection data features according to the dataset to be subjected to community detection and the normalized adjacency matrix, wherein the target detection data features are data features that can be further processed.
Further, the training process of the community detection network model includes:
constructing a community detection network model;
acquiring a sample community detection data set group, wherein data characteristics corresponding to a sample community detection data set in the sample community detection data set group are sample data class sets;
preprocessing each sample community detection data set in the sample community detection data set group to obtain sample data characteristics corresponding to the sample community detection data sets;
and training the community detection network model by using the sample data characteristics and the data characteristics corresponding to each sample community detection data set in the sample community detection data set group to obtain the trained community detection network model.
Furthermore, the community detection network model comprises a plurality of same-scale hybrid network modules, a multi-scale node information fusion module and a self-supervised clustering module.
Further, the training the community detection network model by using the sample data characteristics and the data characteristics corresponding to each sample community detection data set in the sample community detection data set group to obtain the trained community detection network model includes:
and learning and fusing the node characteristics of the hybrid network module under the same scale by using the sample data characteristics and the data characteristics corresponding to each sample community detection data set in the sample community detection data set group, performing multi-scale node information fusion, determining an automatic supervision network training loss function, and further obtaining a trained community detection network model.
Further, the learning and fusing the node characteristics of the hybrid network modules under the same scale includes:
and respectively inputting the sample data characteristics into two deep learning networks, fusing the characteristics learned by the two deep learning networks by using a dynamic attention mechanism, and finally performing further characteristic extraction with the structural information of the data.
Further, the step of inputting the sample data features into two deep learning networks respectively, fusing the features learned by the two deep learning networks by using a dynamic attention mechanism, and finally performing further feature extraction with the structural information of the data includes:
performing feature learning by using a self-coding network;
learning the data feature representation by using a Transformer network, wherein the Transformer network comprises three modules: the system comprises a data input module, an encoding module and a task-based multilayer perceptron module, wherein the data input module comprises a linear layer, the encoding module comprises a self-attention layer, a feedforward network layer and a residual connecting layer, and the task-based multilayer perceptron module comprises a plurality of linear layers and a plurality of nonlinear layers;
and performing dynamic hybrid learning by using a self-coding network and a Transformer network to represent data information, solving the weight through a self-attention mechanism, weighting the weight to a corresponding network, integrating the structural characteristics of the data, and finally learning the information of the data.
Further, the formula corresponding to the determined self-supervised network training loss function is:
ζ = αζ_R + βζ_C
where ζ is the self-supervised network training loss function, α and β are two weighting coefficients, ζ_C is the clustering loss, and ζ_R is the reconstruction loss.
Further, the formula for establishing the normalized adjacency matrix is as follows:
Ã = D^(-1/2)(A + I)D^(-1/2)
where Ã is the normalized adjacency matrix, D is the degree matrix, A is the adjacency matrix of a graph, the graph may be a graph constructed from the dataset to be subjected to community detection, and I is the identity matrix.
Further, the formula for generating the target detection data features is expressed as:
X′ = ÃX
where X′ is the target detection data feature, Ã is the normalized adjacency matrix, and X is the data feature corresponding to the dataset to be subjected to community detection.
The invention has the following beneficial effects:
the invention adopts a dynamic multi-scale feature fusion network, utilizes two modules TDCN-M and TDCNS to dynamically capture node information, can be applied to the dynamic and complex community detection field, and can improve the community detection accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative effort.
FIG. 1 is a flow chart of a method for community detection based on a dynamic multi-scale feature fusion network according to the present invention;
FIG. 2 is a schematic diagram of a single Transformer model framework according to the present invention;
FIG. 3 is a diagram illustrating learning of data features by a Transformer network according to the present invention;
FIG. 4 is a schematic diagram of a self-coding network and a Transformer network performing dynamic hybrid learning to represent data information according to the present invention.
Detailed Description
To further explain the technical means and effects of the present invention adopted to achieve the predetermined objects, the following detailed description of the embodiments, structures, features and effects of the technical solutions according to the present invention will be given with reference to the accompanying drawings and preferred embodiments. In the following description, different references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The invention provides a community detection method based on a dynamic multi-scale feature fusion network, which comprises the following steps:
acquiring a dataset to be subjected to community detection;
preprocessing the dataset to be subjected to community detection to obtain target detection data features;
and clustering the data in the dataset to be subjected to community detection according to the target detection data features and the trained community detection network model to generate clustering result information.
The following steps are detailed:
referring to fig. 1, a flow diagram of some embodiments of a dynamic multi-scale feature fusion network based community detection method according to the invention is shown. The community detection method based on the dynamic multi-scale feature fusion network comprises the following steps:
s1, acquiring a data set to be detected by the community.
In some embodiments, a set of data to be community detected may be obtained.
The data to be detected in the data set to be detected by the community may be data to be detected by the community. For example, the set of data to be community detected may include, but is not limited to: a target user's purchased commodity set, city population set, and shelf item set. Wherein the target user may be a user who purchases the goods.
As an example, when the data set to be community detected is a purchased commodity set of a target user, subsequent community detection is performed on the purchased commodity set of the target user, so that a preference of the target user for purchasing commodities can be obtained.
As yet another example, a PyTorch framework may be employed to read in the dataset to be subjected to community detection.
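As a non-limiting illustration, the read-in might look like the following minimal PyTorch sketch. The file layout (a node-feature text file plus an edge-list text file) and the helper name load_community_dataset are assumptions chosen only for this example; they are not specified by the present disclosure.

```python
# Hypothetical loading sketch; file names, array shapes and the helper name are assumptions.
import numpy as np
import torch

def load_community_dataset(feature_path: str, edge_path: str):
    """Load node features and an undirected edge list as dense tensors."""
    X = torch.from_numpy(np.loadtxt(feature_path, dtype=np.float32))   # N x d node features
    edges = np.loadtxt(edge_path, dtype=np.int64).reshape(-1, 2)       # E x 2 edge list
    n = X.shape[0]
    A = torch.zeros((n, n), dtype=torch.float32)
    A[edges[:, 0], edges[:, 1]] = 1.0
    A[edges[:, 1], edges[:, 0]] = 1.0                                  # symmetric adjacency matrix
    return X, A
```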
S2, preprocessing the dataset to be subjected to community detection to obtain target detection data features.
In some embodiments, the dataset to be subjected to community detection may be preprocessed to obtain the target detection data features.
The target detection data features are the preprocessed form of the dataset to be subjected to community detection, i.e., data features that can be further processed.
As an example, this step may include the steps of:
Firstly, a normalized adjacency matrix is established according to the dataset to be subjected to community detection.
For example, the formula for establishing the normalized adjacency matrix may be:
Ã = D^(-1/2)(A + I)D^(-1/2)
where Ã is the normalized adjacency matrix, D is the degree matrix, A is the adjacency matrix of a graph (the graph may be constructed from the dataset to be subjected to community detection), and I is the identity matrix.
Secondly, the target detection data features are generated according to the dataset to be subjected to community detection and the normalized adjacency matrix.
The formula for generating the target detection data features is expressed as:
X′ = ÃX
where X′ is the target detection data feature (the processed data feature), Ã is the normalized adjacency matrix, and X is the data feature corresponding to the dataset to be subjected to community detection.
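As a non-limiting illustration, a minimal sketch of this preprocessing step is given below. It uses the symmetric normalization Ã = D^(-1/2)(A + I)D^(-1/2) reconstructed above; since the exact normalization cannot be recovered from the published text, that choice is an assumption.

```python
# Sketch of the preprocessing step, assuming the symmetric normalization D^(-1/2)(A + I)D^(-1/2).
import torch

def preprocess(X: torch.Tensor, A: torch.Tensor):
    """Build the normalized adjacency matrix and the target detection data features X' = Ã X."""
    n = A.shape[0]
    A_hat = A + torch.eye(n)                      # add self-loops: A + I
    deg = A_hat.sum(dim=1)                        # diagonal of the degree matrix D
    d_inv_sqrt = torch.diag(deg.pow(-0.5))        # D^(-1/2)
    A_norm = d_inv_sqrt @ A_hat @ d_inv_sqrt      # Ã = D^(-1/2)(A + I)D^(-1/2)
    X_prime = A_norm @ X                          # X' = Ã X
    return A_norm, X_prime
```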
S3, clustering the data in the dataset to be subjected to community detection according to the target detection data features and the trained community detection network model to generate clustering result information.
In some embodiments, the data in the dataset to be subjected to community detection may be clustered according to the target detection data features and the trained community detection network model, so as to generate clustering result information.
The community detection network model can be used to cluster the data in the dataset to be subjected to community detection. The clustering result information can represent the result of community detection on that dataset.
As an example, clustering index calculation and community detection analysis may then be performed. The clustering indices may include: accuracy (ACC), Normalized Mutual Information (NMI), Adjusted Rand Index (ARI), and F1 score. The higher these four indices are, the better the clustering effect, and thus the better the community detection effect.
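As a non-limiting illustration, the four indices might be computed as in the following sketch; the Hungarian matching used for ACC and the macro-averaged F1 are common conventions assumed here rather than taken from the present disclosure.

```python
# Illustrative computation of ACC, NMI, ARI and F1; the Hungarian matching and the
# macro-averaged F1 are assumptions, not taken from the patent text.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score, f1_score

def clustering_indices(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    n = int(max(y_true.max(), y_pred.max())) + 1
    cost = np.zeros((n, n), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cost[t, p] += 1                                   # label / cluster co-occurrence counts
    rows, cols = linear_sum_assignment(-cost)             # best cluster-to-label matching
    mapping = {c: r for r, c in zip(rows, cols)}
    y_mapped = np.array([mapping[p] for p in y_pred])     # relabel predicted clusters
    return {
        "ACC": float((y_mapped == y_true).mean()),
        "NMI": normalized_mutual_info_score(y_true, y_pred),
        "ARI": adjusted_rand_score(y_true, y_pred),
        "F1": f1_score(y_true, y_mapped, average="macro"),
    }
```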
Optionally, the training process of the community detection network model may include the following steps:
firstly, a community detection network model is constructed.
The community detection network model comprises a plurality of same-scale hybrid network modules (TDCN-M), a multi-scale node information fusion module (TDCN-S) and a self-supervised clustering module. Both TDCN-M and TDCN-S are composed of an autoencoder (AE) architecture and a Transformer structure.
For example, parameters in the community detection network model may be initialized. The number of same-scale hybrid network modules may be set to a first preset number, the number of dataset clusters to a second preset number, the number of coding blocks in the Transformer network to a third preset number, and the number of multilayer perceptron modules in the Transformer network to a fourth preset number. A framework diagram of a single Transformer model is shown in FIG. 2. The first, second, third and fourth preset numbers are preset values and may be unequal; they may be denoted M, k, K and N, respectively. The dataset cluster number is the number of categories obtained by clustering with the community detection network model.
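As a non-limiting illustration, these hyperparameters might be collected as follows; only the roles of M, k, K and N come from the description above, while the concrete values are assumptions.

```python
# Illustrative hyperparameter initialization; the concrete values are assumptions.
config = {
    "M": 3,  # first preset number: same-scale hybrid network (TDCN-M) modules
    "k": 7,  # second preset number: number of dataset clusters
    "K": 2,  # third preset number: coding blocks per Transformer network
    "N": 2,  # fourth preset number: MLP modules per Transformer network
}
```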
And secondly, acquiring a sample community detection data set group.
The data type of the sample community detection data in the sample community detection dataset group may be the same as the data type of the data to be subjected to community detection. The sample data class set corresponding to a sample community detection dataset may be known; it may be a class set obtained by clustering and partitioning the sample community detection data in that dataset in advance, and it represents how the sample community detection data in the dataset are divided. The data characteristics corresponding to each sample community detection dataset in the group are the sample data class sets.
In practice, the more sample community detection datasets the group contains, the better the training effect of the community detection network model.
And thirdly, preprocessing each sample community detection data set in the sample community detection data set group to obtain sample data characteristics corresponding to the sample community detection data sets.
For the specific implementation of this step, refer to step S2: each sample community detection dataset is treated as the dataset to be subjected to community detection and step S2 is executed, so that the obtained target detection data features are the sample data features.
And fourthly, training the community detection network model by using the sample data characteristics and the data characteristics corresponding to each sample community detection data set in the sample community detection data set group to obtain the trained community detection network model.
For example, the node features of the same-scale hybrid network modules may be learned and fused by using the sample data features and the data characteristics corresponding to each sample community detection dataset in the sample community detection dataset group, multi-scale node information fusion may be performed, and a self-supervised network training loss function may be determined, thereby obtaining the trained community detection network model. This step may include the following sub-steps:
the first sub-step, learning and fusing the node characteristics of TDCN-M (Multi-Network Feature Fusion Module) under the same scale.
For example, the sample data features may be input into two deep learning networks respectively, the features learned by the two deep learning networks are fused by a dynamic attention mechanism, and finally, the feature extraction is further performed with the structural information of the data, which specifically includes the following steps:
Firstly, feature learning is performed using the self-coding network. The sample data features are input into a self-coding network with the first preset number of layers to obtain the features of each layer of the self-coding network.
For example, the formula for the features of each layer of the self-coding network may be:
H^(l) = σ(W^(l) H^(l-1) + b^(l))
where H^(l) is the feature of the l-th self-coding layer and H^(l-1) is the feature of the (l-1)-th layer; l is the layer index with value range {1, 2, 3, …, M}, and M is the first preset number. When l = 1, H^(0) is the sample data feature. σ is a non-linear activation function, such as the ReLU function. W^(l) and b^(l) are the learnable parameters of the l-th layer of the encoding model.
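As a non-limiting illustration, a minimal PyTorch sketch of such a stacked self-coding encoder is given below; the hidden-layer widths are assumptions, and the per-layer outputs are returned because the later fusion steps use H^(l) at every scale.

```python
# Minimal sketch of the stacked self-coding (autoencoder) encoder; hidden widths are assumptions.
import torch
import torch.nn as nn

class SelfCodingEncoder(nn.Module):
    """Returns every layer's H^(l), since later fusion steps use the per-scale features."""
    def __init__(self, in_dim: int, hidden_dims=(512, 256, 64)):
        super().__init__()
        dims = [in_dim, *hidden_dims]                       # M = len(hidden_dims) layers
        self.layers = nn.ModuleList(nn.Linear(i, o) for i, o in zip(dims[:-1], dims[1:]))

    def forward(self, x: torch.Tensor):
        hs, h = [], x                                       # H^(0) is the sample data feature
        for layer in self.layers:
            h = torch.relu(layer(h))                        # H^(l) = sigma(W^(l) H^(l-1) + b^(l))
            hs.append(h)
        return hs                                           # [H^(1), ..., H^(M)]
```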
Next, the data feature representation is learned using a Transformer network. The Transformer network mainly comprises three modules: a data input module, an encoding module and a task-based multilayer perceptron module. The data input module mainly comprises a linear layer. The encoding module comprises a self-attention layer, a feedforward network layer and a residual connection layer. The task-based multilayer perceptron module (MLP) comprises a plurality of linear layers and a plurality of non-linear layers. The detailed implementation, referring to FIG. 3, may comprise the following steps:
In step 301, the data input module learns a representation of the input data.
For example, the node features input to the l-th Transformer network may be denoted Z^(l), where l again indicates the index of the Transformer network. The node feature Z^(0) input to the first Transformer network is the sample data feature. Z^(l) is passed into the linear layer of the neural network, with the following calculation formula:
Z^(l) = W^(l) Z^(l) + b^(l)
In step 302, the encoding module further processes the data output by the data input module.
For example, the incoming data feature representation Z^(l) passes through K encoding modules to generate a new data representation Z^(l) for input into the multilayer perceptron module, where K is the third preset number. Further, the formulas of a single encoding module are as follows:
Z′^(p) = LN(MHA(Z^(p-1)) + Z^(p-1))
Z^(p) = LN(FFN(Z′^(p)) + Z′^(p))
where p ranges over {1, 2, 3, …, K}, MHA denotes the multi-head self-attention layer, FFN the feedforward network layer, and LN layer normalization.
In step 303, the multilayer perceptron module further processes the data.
For example, the data feature Z^(l) processed by the encoding modules is input into the MLP to obtain a new data representation for data information fusion. Further, the formulas of a single MLP layer are as follows:
Z′^(p) = W^(p) Z^(p) + b^(p)
Z^(p) = ELU(Z′^(p))
where p here ranges over {1, 2, 3, …, N}, N is the fourth preset number, and MLP denotes the multilayer perceptron.
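As a non-limiting illustration, a minimal PyTorch sketch of one coding block and of the task-based MLP is given below, with the node features treated as a single token sequence; the head count, hidden widths and the use of nn.MultiheadAttention are assumptions made for this example.

```python
# Minimal sketch of one coding block (self-attention, feed-forward, residuals, LayerNorm)
# and the task MLP; head count and widths are assumptions, input shape is (1, N, dim).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CodingBlock(nn.Module):
    def __init__(self, dim: int, heads: int = 4):           # dim must be divisible by heads
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
        self.ln1, self.ln2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        a, _ = self.attn(z, z, z)                            # MHA(Z)
        z = self.ln1(a + z)                                  # Z' = LN(MHA(Z) + Z)
        return self.ln2(self.ffn(z) + z)                     # Z  = LN(FFN(Z') + Z')

class TaskMLP(nn.Module):
    def __init__(self, dim: int, n_layers: int = 2):        # n_layers plays the role of N
        super().__init__()
        self.linears = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_layers))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        for lin in self.linears:
            z = F.elu(lin(z))                                # Z = ELU(W Z + b)
        return z
```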
Then, the data information is represented by dynamic hybrid learning using the self-coding network and the Transformer network. The fusion weights are solved through a self-attention mechanism, the weights are applied to the outputs of the corresponding networks, the structural features of the data are then integrated, and the information of the data is finally learned. The specific implementation, referring to FIG. 4, may include the following steps:
Step 401, calculate the weight of each of the two networks.
The calculation formula of this step is as follows:
Ψ_l = ℓ2(softmax(a^T (LeakyReLU([Z^(l) || H^(l)] W^(l)))))
where a is a parameter learned during training, ℓ2 denotes L2 normalization, Z^(l) is the feature learned by the Transformer module, H^(l) is the output of the self-coding module (AE), and W^(l) is a learnable parameter of the neural network. Ψ_l has dimension 2d × 2, where d is the output dimension of the hybrid network.
Step 402, apply the calculated weights to the outputs of the corresponding networks.
For example, the calculated weight Ψ_l is applied to the outputs of the corresponding networks as follows:
Z^(l) = ψ_{l,1} ⊙ Z^(l) + ψ_{l,2} ⊙ H^(l)
where ⊙ is the Hadamard product, ψ_{l,1} is the first column of the matrix Ψ_l, and ψ_{l,2} is the second column of Ψ_l.
Step 403, final calculation of the node features of the TDCN-M.
For example, the learned features are blended with the adjacency matrix of the data to obtain new data feature information, which is used as the input of the next TDCN-M. The calculation formula is as follows:
Z^(l+1) = Transformer(Ã Z^(l))
where Z^(l) is the feature learned by the l-th layer, Ã is the normalized adjacency matrix, and Transformer(·) denotes feeding the parenthesized term into the Transformer module.
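As a non-limiting illustration, the following sketch gives one plausible reading of steps 401 to 403 for a single TDCN-M module. The statement that Ψ_l has dimension 2d × 2 leaves the exact weighting scheme ambiguous, so the per-node two-way weight used here, and deferring the Transformer call to the next module, are assumptions.

```python
# One plausible sketch of the dynamic fusion inside a TDCN-M module: score the Transformer
# feature Z^(l) against the autoencoder feature H^(l), weight them via Hadamard products,
# then smooth with the normalized adjacency Ã before the next module. The per-node two-way
# weight (instead of the 2d x 2 matrix stated in the text) is a simplifying assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.W = nn.Linear(2 * dim, 2 * dim, bias=False)      # W^(l) applied to [Z || H]
        self.a = nn.Parameter(torch.randn(2 * dim, 2))        # attention vector a

    def forward(self, Z: torch.Tensor, H: torch.Tensor, A_norm: torch.Tensor) -> torch.Tensor:
        s = F.leaky_relu(self.W(torch.cat([Z, H], dim=1)))    # LeakyReLU([Z^(l) || H^(l)] W^(l))
        psi = F.normalize(torch.softmax(s @ self.a, dim=1), p=2, dim=1)  # softmax then L2 norm
        fused = psi[:, :1] * Z + psi[:, 1:] * H               # psi_1 ⊙ Z^(l) + psi_2 ⊙ H^(l)
        return A_norm @ fused                                 # Ã-smoothed input for the next TDCN-M
```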
The second sub-step is multi-scale node information fusion (TDCN-S). The TDCN-S (Multi-Scale Heterogeneous Fusion Module) performs a weighted fusion operation on the outputs Z^(l) generated by the L TDCN-M modules and on the output of the M-th layer of the self-coding network, so as to further explore the node features at multiple scales. The weighted fusion operation specifically comprises the following steps:
First, the output of each TDCN-M module and the output of the last layer of the self-coding network are concatenated to obtain the hidden state Z′. The calculation formula is as follows:
Z′ = [Z^(1) || Z^(2) || … || Z^(L) || H^(M)]
where l ranges over {1, 2, 3, …, L}.
Finally, the final representation Z is calculated. The attention calculation follows step 401 to obtain the weights Ψ; the hidden state Z′ is likewise weighted by Ψ in the form of Hadamard products, and the calculation of Z then follows step 403.
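As a non-limiting illustration, the concatenation step of TDCN-S might be implemented as in the following sketch; the attention-based weighting of Z′ is simplified to a learned linear projection, which is an assumption about details the published text leaves open.

```python
# Minimal sketch of the TDCN-S concatenation Z' = [Z^(1) || ... || Z^(L) || H^(M)];
# the Psi-based weighting is simplified to a learned projection, which is an assumption.
import torch
import torch.nn as nn

class MultiScaleFusion(nn.Module):
    def __init__(self, dims, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(sum(dims), out_dim)          # stands in for the Psi weighting

    def forward(self, z_list, h_last):
        z_cat = torch.cat([*z_list, h_last], dim=1)        # hidden state Z'
        return self.proj(z_cat)                            # final representation Z
```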
The third sub-step is determining the self-supervised network training loss function. The loss function consists of two parts, namely the clustering loss ζ_C and the reconstruction loss ζ_R. The formula corresponding to the self-supervised network training loss function is determined as:
ζ = αζ_R + βζ_C
where ζ is the self-supervised network training loss function, α and β are two weighting coefficients, ζ_C is the clustering loss, and ζ_R is the reconstruction loss.
Further, ζ_R is obtained from the decoding part of the self-coding network. The decoder reconstructs the feature H^(M) output by the encoder to obtain the predicted node features X̂, and a norm is taken between the original node features and the predicted features; the specific formula is as follows:
ζ_R = ||X − X̂||²
Further, the clustering loss ζ_C is composed of the KL divergences computed respectively from the final representation Z and the self-coding output H^(M).
The invention adopts a dynamic multi-scale feature fusion network and uses two modules, TDCN-M and TDCN-S, to dynamically capture node information; it can be applied to dynamic and complex community detection scenarios and improves community detection accuracy.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; the modifications or substitutions do not make the essence of the corresponding technical solutions deviate from the technical solutions of the embodiments of the present application, and are included in the protection scope of the present application.

Claims (10)

1. A community detection method based on a dynamic multi-scale feature fusion network is characterized by comprising the following steps:
acquiring a dataset to be subjected to community detection;
preprocessing the dataset to be subjected to community detection to obtain target detection data features;
and clustering the data in the dataset to be subjected to community detection according to the target detection data features and a trained community detection network model to generate clustering result information.
2. The method according to claim 1, wherein the preprocessing of the dataset to be subjected to community detection to obtain the target detection data features comprises:
establishing a normalized adjacency matrix according to the dataset to be subjected to community detection;
and generating the target detection data features according to the dataset to be subjected to community detection and the normalized adjacency matrix, wherein the target detection data features are data features that can be further processed.
3. The method for community detection based on the dynamic multi-scale feature fusion network as claimed in claim 1, wherein the training process of the community detection network model comprises:
constructing a community detection network model;
acquiring a sample community detection data set group, wherein data characteristics corresponding to a sample community detection data set in the sample community detection data set group are sample data class sets;
preprocessing each sample community detection data set in the sample community detection data set group to obtain sample data characteristics corresponding to the sample community detection data sets;
and training the community detection network model by using the sample data characteristics and the data characteristics corresponding to each sample community detection data set in the sample community detection data set group to obtain the trained community detection network model.
4. The method as claimed in claim 1, wherein the community detection network model comprises a plurality of same-scale hybrid network modules, a multi-scale node information fusion module and a self-supervised clustering module.
5. The method according to claim 3, wherein the training of the community detection network model by using the sample data features and data features corresponding to each sample community detection data set in the sample community detection data set group to obtain the trained community detection network model comprises:
and learning and fusing the node characteristics of the hybrid network module under the same scale by using the sample data characteristics and the data characteristics corresponding to each sample community detection data set in the sample community detection data set group, performing multi-scale node information fusion, determining an automatic supervision network training loss function, and further obtaining a trained community detection network model.
6. The community detection method based on the dynamic multi-scale feature fusion network of claim 5, wherein the learning and fusion of the node features of the hybrid network modules under the same scale comprises:
and respectively inputting the sample data characteristics into two deep learning networks, fusing the characteristics learned by the two deep learning networks by using a dynamic attention mechanism, and finally performing further characteristic extraction with the structural information of the data.
7. The community detection method based on the dynamic multi-scale feature fusion network according to claim 6, wherein the inputting of the sample data features into two deep learning networks respectively, fusing the features learned by the two deep learning networks by a dynamic attention mechanism, and finally performing further feature extraction with the structural information of the data comprises:
performing feature learning by using a self-coding network;
learning the data feature representation by using a Transformer network, wherein the Transformer network comprises three modules: the system comprises a data input module, an encoding module and a task-based multilayer perceptron module, wherein the data input module comprises a linear layer, the encoding module comprises a self-attention layer, a feedforward network layer and a residual connecting layer, and the task-based multilayer perceptron module comprises a plurality of linear layers and a plurality of nonlinear layers;
and performing dynamic hybrid learning by using a self-coding network and a Transformer network to represent data information, solving the weight through a self-attention mechanism, weighting the weight to a corresponding network, integrating the structural characteristics of the data, and finally learning the information of the data.
8. The community detection method based on the dynamic multi-scale feature fusion network according to claim 5, wherein the formula corresponding to the determined self-supervised network training loss function is:
ζ = αζ_R + βζ_C
where ζ is the self-supervised network training loss function, α and β are two coefficients, ζ_C is the clustering loss, and ζ_R is the reconstruction loss.
9. The community detection method based on the dynamic multi-scale feature fusion network according to claim 2, wherein the formula for establishing the normalized adjacency matrix is:
Ã = D^(-1/2)(A + I)D^(-1/2)
where Ã is the normalized adjacency matrix, D is the degree matrix, A is the adjacency matrix of a graph, the graph may be a graph constructed from the dataset to be subjected to community detection, and I is the identity matrix.
10. The method according to claim 2, wherein the formula for generating the target detection data features is expressed as:
X′ = ÃX
where X′ is the target detection data feature, Ã is the normalized adjacency matrix, and X is the data feature corresponding to the dataset to be subjected to community detection.
CN202211326522.1A 2022-10-27 2022-10-27 Community detection method based on dynamic multi-scale feature fusion network Pending CN115964626A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211326522.1A CN115964626A (en) 2022-10-27 2022-10-27 Community detection method based on dynamic multi-scale feature fusion network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211326522.1A CN115964626A (en) 2022-10-27 2022-10-27 Community detection method based on dynamic multi-scale feature fusion network

Publications (1)

Publication Number Publication Date
CN115964626A true CN115964626A (en) 2023-04-14

Family

ID=87353332

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211326522.1A Pending CN115964626A (en) 2022-10-27 2022-10-27 Community detection method based on dynamic multi-scale feature fusion network

Country Status (1)

Country Link
CN (1) CN115964626A (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011114135A1 (en) * 2010-03-16 2011-09-22 Bae Systems Plc Detecting at least one community in a network
CN107153713A (en) * 2017-05-27 2017-09-12 合肥工业大学 Overlapping community detection method and system based on similitude between node in social networks
US20210027182A1 (en) * 2018-03-21 2021-01-28 Visa International Service Association Automated machine learning systems and methods
US20210279260A1 (en) * 2018-06-22 2021-09-09 1Qb Information Technologies Inc. Method and system for identifying at least one community in a dataset comprising a plurality of elements
WO2022028249A1 (en) * 2020-08-05 2022-02-10 华中师范大学 Learning interest discovery method for online learning community
US20220051125A1 (en) * 2020-08-11 2022-02-17 Paypal, Inc. Intelligent clustering of account communities for account feature adjustment
CN113642596A (en) * 2021-06-02 2021-11-12 南京航空航天大学 Brain network classification method based on community detection and double-path self-coding
CN114020999A (en) * 2021-10-20 2022-02-08 山西大学 Community structure detection method and system for movie social network
CN114202035A (en) * 2021-12-16 2022-03-18 成都理工大学 Multi-feature fusion large-scale network community detection algorithm
CN114648635A (en) * 2022-03-15 2022-06-21 安徽工业大学 Multi-label image classification method fusing strong correlation among labels
CN114970684A (en) * 2022-05-05 2022-08-30 西安理工大学 Community detection method for extracting network core structure by combining VAE
CN115002159A (en) * 2022-06-06 2022-09-02 哈尔滨理工大学 Sparse crowd sensing system-oriented community classification and user selection method
CN114880538A (en) * 2022-06-08 2022-08-09 读者出版集团有限公司 Attribute graph community detection method based on self-supervision
CN115018663A (en) * 2022-06-20 2022-09-06 兰州大学 Seed expansion community detection method and system based on community clustering characteristics

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WEI SHI ET AL.: "Network Embedding via Community Based Variational Autoencoder", IEEE ACCESS, 31 December 2019 (2019-12-31) *
郑雅萍: "Research on Temporal Community Detection Methods Based on Heterogeneous Graph Convolutional Neural Networks" (in Chinese), China Masters' Theses Full-text Database, Information Science and Technology, 15 March 2022 (2022-03-15) *

Similar Documents

Publication Publication Date Title
CN112100369B (en) Semantic-combined network fault association rule generation method and network fault detection method
Castillo Functional networks
CN115269357A (en) Micro-service abnormity detection method based on call chain
CN112784929B (en) Small sample image classification method and device based on double-element group expansion
CN112138403B (en) Interactive behavior recognition method and device, storage medium and electronic equipment
CN112818764A (en) Low-resolution image facial expression recognition method based on feature reconstruction model
CN113239916B (en) Expression recognition and classroom state evaluation method, device and medium
CN113628059A (en) Associated user identification method and device based on multilayer graph attention network
Wu et al. Switchtab: Switched autoencoders are effective tabular learners
CN114861740B (en) Self-adaptive mechanical fault diagnosis method and system based on multi-head attention mechanism
Pise et al. Relational reasoning using neural networks: a survey
CN114863572A (en) Myoelectric gesture recognition method of multi-channel heterogeneous sensor
Fu et al. A unified framework for multi-domain ctr prediction via large language models
Jenny Li et al. Evaluating deep learning biases based on grey-box testing results
CN110851580B (en) Personalized task type dialog system based on structured user attribute description
Falahzadeh et al. A 3D tensor representation of speech and 3D convolutional neural network for emotion recognition
CN115964626A (en) Community detection method based on dynamic multi-scale feature fusion network
Wang et al. How to make a BLT sandwich? learning to reason towards understanding web instructional videos
CN117011219A (en) Method, apparatus, device, storage medium and program product for detecting quality of article
CN112801076B (en) Electronic commerce video highlight detection method and system based on self-attention mechanism
CN115830384A (en) Image fusion method and system for generating countermeasure network based on double discriminators
CN115545833A (en) Recommendation method and system based on user social information
CN115132181A (en) Speech recognition method, speech recognition apparatus, electronic device, storage medium, and program product
CN115100599A (en) Mask transform-based semi-supervised crowd scene abnormality detection method
CN115410000A (en) Object classification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination