CN117765530A - Multi-mode brain network classification method, system, electronic equipment and medium - Google Patents


Info

Publication number: CN117765530A
Application number: CN202410045207.4A
Authority: CN (China)
Other languages: Chinese (zh)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Prior art keywords: brain, network, global, local, region
Inventors: 朱旗, 李超君, 李胜荣, 张道强
Current and original assignee: Nanjing University of Aeronautics and Astronautics
Application filed by Nanjing University of Aeronautics and Astronautics

Classifications

  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention discloses a multi-modal brain network classification method, system, electronic device, and medium, relating to the technical field of image processing. The method comprises the following steps: acquiring multi-modal brain network data of a target to be classified; preprocessing the rs-fMRI data and the DTI data respectively to obtain regions of interest from the rs-fMRI data and fiber images from the DTI data; determining a multi-modal brain network; constructing a local graph attention network and a global graph attention network; inputting the multi-modal brain network into the local and global graph attention networks to obtain the corresponding local and global feature embedded representations; fusing the local and global feature embedded representations with an attention mechanism to obtain a single embedded representation; and inputting the embedded representation into a classifier optimized with a contrast loss function to obtain the classification result. The method and the device can improve the accuracy of multi-modal brain network classification.

Description

Multi-modal brain network classification method, system, electronic device and medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method, a system, an electronic device, and a medium for classifying a multi-modal brain network.
Background
Electroencephalography, although a low-cost noninvasive technology, suffers from low spatial resolution. Functional magnetic resonance imaging (fMRI), which has developed rapidly, offers high imaging precision with minimal harm and plays an important role in studying the brain's functional and structural connectivity networks. fMRI relies on measuring blood-oxygen-level-dependent (BOLD) changes in the MRI signal, which reflect functional connections between brain regions of interest (ROIs) and can be used to construct brain functional connectivity networks (FCNs). Meanwhile, diffusion tensor imaging (DTI) reflects structural connectivity by mapping white-matter fiber bundles and can be used to construct brain structural connectivity networks (SCNs); DTI can reveal structural brain abnormalities, and the different types of brain network together effectively represent brain activity states. Studying differences in brain networks between individuals can therefore provide auxiliary information for the analysis of brain diseases, including epilepsy, in which the FCNs and SCNs exhibit connection patterns different from those of healthy individuals.
Analysis of fMRI and DTI data has been widely studied, but single-modality analysis has limitations. Most previous approaches simply merge the feature spaces of the different modalities, possibly ignoring the complementarity and relative importance of each modality. Current multi-modal fusion methods have the following shortcomings: (1) converting the brain network into vector form destroys its topological structure and disregards the importance of different modal features for classification; (2) considering only local features of the brain network loses its global information.
Disclosure of Invention
The invention aims to provide a multi-modal brain network classification method, a system, electronic equipment and a medium, which can improve the accuracy of multi-modal brain network classification.
In order to achieve the above object, the present invention provides the following solutions:
a multi-modal brain network classification method, the classification method comprising:
acquiring multi-modal brain network data of a target to be classified; the multi-modal brain network data comprises rs-fMRI data and DTI data;
preprocessing the rs-fMRI data and the DTI data respectively to obtain an interested region of the rs-fMRI data and a fiber image of the DTI data;
determining a multi-modal brain network according to the region of interest of the rs-fMRI data and the fiber image of the DTI data; the multi-modal brain network comprises a brain region feature map and a brain connection feature map;
respectively constructing a local graph attention network and a global graph attention network; the local graph attention network comprises a local attention module and a local feature mapping module; the global graph attention network comprises a global attention module and a global feature mapping module;
inputting the multi-modal brain network into the local graph attention network to obtain a local feature embedded representation fused with local brain region features and local brain connection features;
inputting the multi-modal brain network into the global graph attention network to obtain a global feature embedded representation fused with global brain region features and global brain connection features;
fusing the local feature embedded representation and the global feature embedded representation by using an attention mechanism to obtain an embedded representation;
inputting the embedded representation into a classifier, and optimizing the classifier by applying a contrast loss function to obtain a classification result; the classification result comprises the probability that the multi-modal brain network is normal and the probability that it is abnormal.
Optionally, preprocessing the rs-fMRI data and the DTI data respectively to obtain a region of interest of the rs-fMRI data and a fiber image of the DTI data, which specifically includes:
dividing the rs-fMRI data into a plurality of rs-fMRI data fragments based on a time sequence;
correcting each rs-fMRI data segment by applying a plane echo sequence template to obtain a plurality of corrected rs-fMRI data segments;
performing trend removal processing on each corrected rs-fMRI data segment to obtain a plurality of processed rs-fMRI data segments;
dividing the region of interest of each processed rs-fMRI data segment by using an automatic anatomical marker map to obtain the region of interest of the rs-fMRI data;
performing distortion correction on the DTI data to obtain corrected DTI data;
acquiring a magnetic resonance T1 image of the target to be classified;
determining a standard automatic anatomical marker map of the target to be classified according to the magnetic resonance T1 image;
determining an anatomical region according to the standard automatic anatomical landmark map of the object to be classified;
and generating a fiber image of the DTI data according to the anatomical region.
Optionally, determining a multi-modal brain network according to the region of interest of the rs-fMRI data and the fiber image of the DTI data specifically includes:
constructing an fMRI feature matrix; the value in the fMRI feature matrix is the region of interest of the rs-fMRI data;
calculating pearson correlation coefficients between regions of interest in the fMRI feature matrix to obtain a pearson correlation matrix, and taking the pearson correlation matrix as brain region feature mapping;
constructing a DTI feature matrix, and taking the DTI feature matrix as brain connection feature mapping; the value in the DTI feature matrix is the number of white matter fiber bundles in the fiber image of the DTI data.
Optionally, the local graph attention network is:

$$h_i^{L} = \sigma\Big(\sum_{j \in N_i} (\alpha_{ij} + m_{ij})\, W_L h_j\Big);$$

$$\alpha_{ij} = \frac{\exp\big(\mathrm{LeakyReLU}(a^{T}[W_L h_i \,\|\, W_L h_j])\big)}{\sum_{k \in N_i} \exp\big(\mathrm{LeakyReLU}(a^{T}[W_L h_i \,\|\, W_L h_k])\big)};$$

$$m_{ij} = \frac{\exp\big(\tanh(w_L d_{ij} + b_L)\big)}{\sum_{k \in N_i} \exp\big(\tanh(w_L d_{ik} + b_L)\big)};$$

where $h_i^{L}$ is the local feature vector of brain region $i$ after passing through the local graph attention network; $\sigma$ is a nonlinear activation function; $a$ is the attention mechanism; LeakyReLU is a nonlinear activation function; $(\cdot)^{T}$ denotes the transpose operation; $\|$ is the concatenation operation; $W_L$ is the weight matrix of the local graph attention network; $h_i$, $h_j$, $h_k$ are the features of brain regions $i$, $j$, $k$; $\alpha_{ij}$ is the normalized attention coefficient; $m_{ij}$ is the local attention coefficient of the brain connection; $\tanh$ is a nonlinear activation function; $d_{ij}$ and $d_{ik}$ indicate whether there is a connection between brain region $i$ and brain regions $j$ and $k$, respectively; $b_L$ is the bias term of the output layer; $w_L$ is the weight matrix of the feature mapping module in the local graph attention network; $N_i$ is the set of all brain regions adjacent to brain region $i$.
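As a rough illustration, the local graph-attention update of this section can be sketched in NumPy. The function names, the LeakyReLU slope of 0.2, the choice of tanh as the outer activation σ, and the splitting of the attention vector a into source and target halves are all illustrative assumptions, not the patent's reference implementation:

```python
import numpy as np

def masked_softmax(scores, mask):
    """Row-wise softmax restricted to entries where mask is True."""
    s = np.where(mask, scores, -np.inf)
    s = s - s.max(axis=-1, keepdims=True)
    e = np.where(mask, np.exp(s), 0.0)
    return e / e.sum(axis=-1, keepdims=True)

def local_graph_attention(H, D, W_L, a, w_L, b_L):
    """One local graph-attention update (hedged sketch).

    H : (N, F) brain-region features h_i
    D : (N, N) binary structural adjacency d_ij
    W_L : (F, F') shared weight matrix; a : (2F',) attention vector
    w_L, b_L : scalar parameters of the feature-mapping module
    """
    Wh = H @ W_L                                   # W_L h_i for every region
    Fp = Wh.shape[1]
    # e_ij = LeakyReLU(a^T [W_L h_i || W_L h_j]); a split into two halves
    e = (Wh @ a[:Fp])[:, None] + (Wh @ a[Fp:])[None, :]
    e = np.where(e > 0, e, 0.2 * e)                # LeakyReLU, slope 0.2
    mask = D > 0
    alpha = masked_softmax(e, mask)                # normalized alpha_ij
    m = masked_softmax(np.tanh(w_L * D + b_L), mask)  # connection attention m_ij
    return np.tanh((alpha + m) @ Wh)               # sigma taken as tanh here
```

The two attention coefficients are summed before aggregation, mirroring the "added in sequence" wording used later in the description.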
Optionally, the global graph attention network is:

$$h_i^{G} = \sigma\Big(\sum_{j \in N_i} (\beta_{ij} + n_{ij})\, W_G h_j\Big);$$

$$\beta_{ij} = \frac{\exp(s_{ij})}{\sum_{k \in N_i} \exp(s_{ik})};$$

$$n_{ij} = \frac{\exp\big(\tanh(w_G h_{ij} + b_G)\big)}{\sum_{k \in N_i} \exp\big(\tanh(w_G h_{ik} + b_G)\big)};$$

where $h_i^{G}$ is the global feature vector of brain region $i$ after passing through the global graph attention network; $N_i$ is the set of all brain regions adjacent to brain region $i$; $\sigma$ is a nonlinear activation function; $\beta_{ij}$ is the normalized global brain connection attention of brain region $i$; $n_{ij}$ is the global attention coefficient of the brain region; $W_G$ is the weight matrix of the global graph attention network; $w_G$ is the weight matrix of the feature mapping module in the global graph attention network; $h_j$ is the feature of brain region $j$; $\tanh$ is a nonlinear activation function; $h_{ij}$ and $h_{ik}$ indicate whether there is a connection between brain region $i$ and brain regions $j$ and $k$, respectively; $b_G$ is the bias term of the output layer; $s_{ij}$ and $s_{ik}$ are the global brain connection attention coefficients between brain region $i$ and brain regions $j$ and $k$, respectively.
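The description later states that the global brain-connection attention coefficients are computed with the PageRank algorithm. A minimal sketch of one plausible reading follows, in which s_ij is taken from the PageRank score of region j, so that globally important regions receive more attention from every other region; this reading, the function names, and the normalization over all regions are assumptions:

```python
import numpy as np

def pagerank(A, damping=0.85, iters=100):
    """Power-iteration PageRank over adjacency matrix A of shape (N, N)."""
    N = A.shape[0]
    col_sum = A.sum(axis=0)
    # column-stochastic transition matrix; dangling columns spread uniformly
    M = np.where(col_sum > 0, A / np.maximum(col_sum, 1e-12), 1.0 / N)
    r = np.full(N, 1.0 / N)
    for _ in range(iters):
        r = (1 - damping) / N + damping * (M @ r)
    return r

def global_graph_attention(H, A, W_G, w_G, b_G):
    """One global graph-attention update (hedged sketch)."""
    pr = pagerank(A)
    s = np.tile(pr, (A.shape[0], 1))        # assumption: s_ij = PageRank(j)
    beta = np.exp(s) / np.exp(s).sum(axis=1, keepdims=True)
    n = np.exp(np.tanh(w_G * A + b_G))      # region attention n_ij
    n = n / n.sum(axis=1, keepdims=True)    # normalized over all regions here
    return np.tanh((beta + n) @ (H @ W_G))  # sigma taken as tanh here
```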
Optionally, the embedded representation is:

$$Z = \gamma_L \cdot Z_L + \gamma_G \cdot Z_G;$$

$$\gamma_L = \mathrm{diag}(w_l);\qquad \gamma_G = \mathrm{diag}(w_g);$$

where $Z$ is the embedded representation; $Z_L$ is the local feature embedding; $Z_G$ is the global feature embedding; $\gamma_L$ contains the attention values of the $N$ brain regions in the local feature embedding $Z_L$; $\gamma_G$ contains the attention values of the $N$ brain regions in the global feature embedding $Z_G$; $w_l$ is the learned local feature weight; $w_g$ is the learned global feature weight.
Optionally, the contrast loss function is:

$$\mathcal{L} = \mathcal{L}_{CE} + \lambda\, \mathcal{L}_{P};$$

$$\mathcal{L}_{CE} = -\big[y \log P(Z) + (1 - y)\log(1 - P(Z))\big];$$

$$\mathcal{L}_{P} = \frac{1}{2N} \sum_{i=1}^{N} \big[\ell(m_i, n_i) + \ell(n_i, m_i)\big];$$

$$\ell(m_i, n_i) = -\log \frac{e^{\theta(m_i, n_i)/\tau}}{\sum_{k=1}^{N} e^{\theta(m_i, n_k)/\tau} + \sum_{k=1}^{N} \mathbb{1}_{[k \neq i]}\, e^{\theta(m_i, m_k)/\tau}};$$

where $\mathcal{L}$ is the contrast loss; $\mathcal{L}_{CE}$ is the cross-entropy loss; $\mathcal{L}_{P}$ is the embedding loss; $\lambda$ is a hyper-parameter; $y$ is the label of the multi-modal brain network; $P(Z)$ is the probability that the embedded representation $Z$ is classified as a normal multi-modal brain network; $N$ is the total number of brain regions; $\mathbb{1}$ is the indicator function; $\theta(\cdot,\cdot)$ is a discriminator function; $\tau$ is a temperature parameter; $m_i$ and $n_i$ are the local and global feature embedded representations of brain region $i$; $m_k$ and $n_k$ are the local and global feature embedded representations of brain region $k$; $\ell(m_i, n_i)$ and $\ell(n_i, m_i)$ are the pairwise objective losses of the pairs $(m_i, n_i)$ and $(n_i, m_i)$, respectively.
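A symmetric InfoNCE-style embedding loss matching these symbols can be sketched as follows, with cosine similarity standing in for the discriminator θ (an assumption; the patent does not fix θ) and the other regions' embeddings in both views serving as negatives:

```python
import numpy as np

def pair_loss(M, Nv, tau=0.5):
    """Mean of l(m_i, n_i) over regions; theta = cosine similarity (assumed).

    M, Nv : (N, F) local / global embeddings, positives paired by row.
    """
    def unit(X):
        return X / np.linalg.norm(X, axis=1, keepdims=True)
    Mh, Nh = unit(M), unit(Nv)
    cross = np.exp(Mh @ Nh.T / tau)    # exp(theta(m_i, n_k) / tau)
    intra = np.exp(Mh @ Mh.T / tau)    # exp(theta(m_i, m_k) / tau)
    pos = np.diag(cross)
    # denominator: all cross-view pairs plus same-view pairs with k != i
    denom = cross.sum(axis=1) + intra.sum(axis=1) - np.diag(intra)
    return float(-np.log(pos / denom).mean())

def contrast_loss(M, Nv, p, y, lam=0.5, tau=0.5):
    """Total loss: cross-entropy plus lambda times the symmetric pair loss."""
    ce = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    lp = 0.5 * (pair_loss(M, Nv, tau) + pair_loss(Nv, M, tau))
    return ce + lam * lp
```

As a sanity check, aligned local/global views yield a smaller pair loss than mismatched ones, which is exactly the pull-together/push-apart behaviour the description attributes to the contrast loss.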
A multi-modal brain network classification system, to which the above-described multi-modal brain network classification method is applied, the classification system comprising:
the acquisition module is used for acquiring multi-modal brain network data of the target to be classified; the multi-modal brain network data comprises rs-fMRI data and DTI data;
the preprocessing module is used for respectively preprocessing the rs-fMRI data and the DTI data to obtain a region of interest of the rs-fMRI data and a fiber image of the DTI data;
the brain network determining module is used for determining a multi-mode brain network according to the region of interest of the rs-fMRI data and the fiber image of the DTI data; the multi-modal brain network comprises brain region feature mapping and brain connection feature mapping;
the building module is used for respectively building a local graph attention network and a global graph attention network; the local graph attention network comprises a local attention module and a local feature mapping module; the global graph attention network comprises a global attention module and a global feature mapping module;
the local embedding determining module is used for inputting the multi-modal brain network into the local graph attention network to obtain local feature embedding representation fusing local brain region features and local brain connection features;
the global embedding determining module is used for inputting the multi-modal brain network into the global graph attention network to obtain a global feature embedding representation fusing global brain region features and global brain connection features;
the fusion module is used for fusing the local feature embedded representation and the global feature embedded representation by using an attention mechanism to obtain an embedded representation;
the classification module is used for inputting the embedded representation into a classifier and optimizing the classifier by applying a contrast loss function to obtain a classification result; the classification result comprises the probability that the multi-modal brain network is normal and the probability that it is abnormal.
An electronic device comprising a memory for storing a computer program and a processor that runs the computer program to cause the electronic device to perform the multi-modal brain network classification method described above.
A computer readable storage medium storing a computer program which when executed by a processor implements the above-described multi-modal brain network classification method.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
While the local graph attention network keeps the local topological structure of the brain network intact, a local attention module and a feature mapping module are introduced to explore local brain-region features and local brain-connection features, respectively, and to organically fuse the local sub-graph information. The global graph attention network uses a global attention module to rank brain connections by global attention, while a feature mapping module computes the global attention coefficients of the brain regions and updates the brain network accordingly. The attention mechanism can mine the temporal uncertainty of the sequence data and further learn effective feature representations. The contrast loss refines the distances between features in the shared representation space, enhancing the expression of global and local key features. The attention mechanism dynamically integrates the local and global brain network features to obtain the final brain network embedding. The classifier predicts the type of the input brain network so as to optimize and update the model. In addition, the cross-entropy loss is used to train the classifier, improving the model's classification performance by reducing the distance between the predicted and true labels.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings that are needed in the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of the overall framework of the multi-modal brain network classification method of the present invention;
FIG. 2 is a schematic diagram of a partial view attention network architecture of the present invention;
FIG. 3 is a diagram of a global diagram attention network architecture according to the present invention;
FIG. 4 is a flow chart of data preprocessing according to the present invention;
FIG. 5 is a workflow diagram of a multimodal brain network analysis framework based on global and local graph attention of the present invention;
FIG. 6 is a flow chart of a method for classifying a multi-modal brain network according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention aims to provide a multi-modal brain network classification method, a system, electronic equipment and a medium, which can improve the accuracy of multi-modal brain network classification.
As shown in fig. 5, the invention provides a multi-modal brain network analysis framework based on global and local graph attention, which can both adaptively estimate the importance of brain regions and brain connections and effectively mine multi-level discriminative features of the brain network. First, the rs-fMRI and DTI data are preprocessed with the DPARSF and PANDA toolkits (corresponding to step S1). Second, Pearson correlation coefficients and feature transformations are computed for the fMRI and DTI data, respectively, and the multi-modal brain network is constructed (corresponding to step S2). Next, a local graph attention network composed of a local attention module and a feature mapping module is trained to extract local-level features of the brain network (corresponding to step S3). Then, a global graph attention network composed of a global attention module and a feature mapping module is trained to extract global-level features of the brain network (corresponding to step S4). Local and global feature extraction is then applied once more to the brain network already processed by both, to extract richer features (corresponding to step S5). The contrast loss is then used to measure the distance between the global and local features and to enhance the expression of key features, optimizing the model (corresponding to step S6). Based on the attention mechanism, the global and local features are then adaptively fused to obtain the final multi-modal feature representation (corresponding to step S7). Finally, the multi-modal representation is fed into a classifier consisting of a feed-forward network, a fully connected layer, and a SoftMax layer, and classification performance is optimized with the cross-entropy and contrast loss functions (corresponding to step S8).
The multi-modal brain network diagnosis model provided by the invention comprises a multi-modal brain network construction module (1), a feature extraction module (2), an optimization fusion module (3), and a classification module (4). The overall framework of the proposed method is shown in figs. 1-3. It consists of a local graph attention network, a global graph attention network, an attention mechanism, and a classifier. While the local graph attention network keeps the local topological structure of the brain network intact, the local attention module and feature mapping module explore the local brain-region features and local brain-connection features, respectively, and organically integrate the local sub-graph information. The global graph attention network uses the global attention module to rank brain connections by global attention, while the feature mapping module computes the global attention coefficients of the brain regions and updates the brain network accordingly. The attention mechanism can mine the temporal uncertainty of the sequence data and further learn effective feature representations. The contrast loss (L_P) refines the distances between features in the shared representation space, enhancing the expression of global and local key features. The attention mechanism dynamically integrates the local and global brain network features to obtain the final brain network embedding. The classifier predicts the type of the input brain network so as to optimize and update the model. In addition, the cross-entropy loss is used to train the classifier, improving the model's brain disease diagnosis performance by shortening the distance between the predicted and true labels.
In figs. 2 and 3, the Arabic numerals 1, 2, 3, … each denote brain regions, of which 90 are divided in total. d_11, d_12, d_13, d_14, d_15, d_16 denote the connections between brain region 1 and its adjacent brain regions in the local graph attention network; brain region 1, brain region 2, brain region 3, brain region 4, brain region 5, and brain region 6 are all adjacent brain regions of brain region 1. d_22, d_23, d_26, …, d_24, d_27, d_28 denote the connections between brain region 2 and brain regions 2, 3, 6, …, 4, 7, 8, respectively, in the global graph attention network; brain regions 2, 3, 6, …, 4, 7, 8 are not necessarily adjacent brain regions of brain region 2.
The framework provided by the invention is based on a graph neural network architecture. In the local graph attention network, the attention network computes the local brain-region attention coefficients while the feature mapping network computes the local connection attention coefficients; the two attention coefficients are added, a dropout layer (p = 0.5) is applied, and the brain network is updated so that the local feature dimension becomes 8100. In the global graph attention network, the global brain-connection attention coefficients are computed by the PageRank algorithm and the global brain-region attention coefficients by the feature mapping network; the two attention coefficients are added, a dropout layer (p = 0.5) is applied, and the brain network is updated so that the global feature dimension becomes 8100. The global and local features are then fused by the attention mechanism, and a fully connected neural network layer with softmax units reduces the dimension from 8100 to 2 for classification.
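The dimensions quoted here (90 AAL regions, 90 × 90 = 8100 flattened features, a fully connected layer down to 2 classes) can be checked with a small sketch of the classifier head; the random initialisation is purely for shape checking and is not the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

N_REGIONS = 90
FEAT_DIM = N_REGIONS * N_REGIONS   # 8100-dimensional flattened fused embedding

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(z, W_fc, b_fc):
    """Fully connected layer 8100 -> 2 followed by softmax.

    Returns [p_normal, p_abnormal] for one subject.
    """
    return softmax(z @ W_fc + b_fc)

# hypothetical randomly initialised parameters, for shape checking only
W_fc = rng.normal(scale=0.01, size=(FEAT_DIM, 2))
b_fc = np.zeros(2)
probs = classify(rng.normal(size=FEAT_DIM), W_fc, b_fc)
```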
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
Example 1
As shown in fig. 6, the present invention provides a multi-modal brain network classification method, the classification method comprising:
step 1: acquiring multi-modal brain network data of a target to be classified; the multi-modal brain network data includes rs-fMRI data and DTI data.
Step 2: and respectively preprocessing the rs-fMRI data and the DTI data to obtain an interested region of the rs-fMRI data and a fiber image of the DTI data.
The step 2 specifically comprises the following steps:
step 21: the rs-fMRI data is divided into a plurality of rs-fMRI data fragments based on a time sequence.
Step 22: and correcting each rs-fMRI data segment by applying a plane echo sequence template to obtain a plurality of corrected rs-fMRI data segments.
Step 23: and carrying out trending treatment on each corrected rs-fMRI data segment to obtain a plurality of treated rs-fMRI data segments.
Step 24: and dividing the region of interest of each processed rs-fMRI data segment by using an automatic anatomical marker map to obtain the region of interest of the rs-fMRI data.
Step 25: and carrying out distortion correction on the DTI data to obtain corrected DTI data.
Step 26: and acquiring a magnetic resonance T1 image of the target to be classified.
Step 27: and determining a standard automatic anatomical marker map of the target to be classified according to the magnetic resonance T1 image.
Step 28: and determining an anatomical region according to the standard automatic anatomical landmark map of the object to be classified.
Step 29: and generating a fiber image of the DTI data according to the anatomical region.
In practical application, as shown in fig. 4, the multi-modal brain network data is acquired and preprocessed to obtain a preprocessed synchronized multi-modal sequence matrix.
The data set used is preprocessed by means of the SPM8 software in the MATLAB-based DPARSF toolkit and the MATLAB-based PANDA toolkit. For the rs-fMRI data in the dataset, the initial image is divided into several segments and imported into the DPARSF toolkit's GUI. The EPI (echo planar imaging) template (here, the EPI template in the MATLAB toolkit) is aligned for correction and repositioning. Detrending is then applied to mitigate the effects of head movement and of cerebrospinal fluid (CSF) and white-matter disturbances. Finally, the rs-fMRI data is divided into 90 regions of interest (ROIs) at 240 time points in total using the Automated Anatomical Labeling (AAL) atlas.
Specifically, the image is segmented into grey matter, white matter, and cerebrospinal fluid using the Segment and New Segment + DARTEL operations in the toolbox, which achieves the division of the initial image into several segments.
For the DTI data in the dataset, the dataset is first imported into the PANDA toolkit's GUI. Second, the FSL toolbox is used to correct DTI distortion. The anatomical regions are then determined based on the AAL criteria derived from the subject's T1 image. Finally, fiber images are generated using TrackVis, with the number of fibers as the measure of structural connection. In the present invention, the targets to be classified include the subjects.
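The end product of the rs-fMRI preprocessing above is one time series per AAL region. Averaging the BOLD signal over the voxels of each labelled region is one common way to obtain these series; whether the toolkits average or use another summary is not stated here, so this sketch (with hypothetical names) is an assumption:

```python
import numpy as np

def roi_time_series(bold, atlas, n_rois=90):
    """Average the BOLD signal over the voxels of each atlas region.

    bold  : (V, T) array, V voxels at T time points (T = 240 here)
    atlas : (V,) integer label per voxel, 1..n_rois (0 = background)
    returns (n_rois, T) region-averaged time series
    """
    T = bold.shape[1]
    out = np.zeros((n_rois, T))
    for r in range(1, n_rois + 1):
        vox = atlas == r
        if vox.any():
            out[r - 1] = bold[vox].mean(axis=0)
    return out
```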
Step 3: determining a multi-mode brain network according to the region of interest of the rs-fMRI data and the fiber image of the DTI data; the multimodal brain network includes a brain region feature map and a brain connection feature map.
The step 3 specifically comprises the following steps:
step 31: constructing an fMRI feature matrix; the values in the fMRI feature matrix are regions of interest of the rs-fMRI data.
Step 32: and calculating the pearson correlation coefficient between the interested areas in the fMRI characteristic matrix to obtain a pearson correlation matrix, and taking the pearson correlation matrix as brain region characteristic mapping.
Step 33: constructing a DTI feature matrix, and taking the DTI feature matrix as brain connection feature mapping; the value in the DTI feature matrix is the number of white matter fiber bundles in the fiber image of the DTI data.
In practical application, based on the preprocessed data obtained in step 2, for each subject, a multi-modal brain network thereof is constructed so as to mine out disease-related features.
An fMRI feature matrix $X \in \mathbb{R}^{N \times M}$ is defined from the BOLD time-series signals obtained from the rs-fMRI data, where $N$ is the number of ROIs and $M$ is the number of consecutive time points acquired. The Pearson correlation coefficient between ROIs is computed to measure functional connectivity. The specific formula is as follows:

$$H_{ij} = \frac{\sum_{t=1}^{M} (x_i^{t} - \bar{x}_i)(x_j^{t} - \bar{x}_j)}{\sqrt{\sum_{t=1}^{M} (x_i^{t} - \bar{x}_i)^2}\,\sqrt{\sum_{t=1}^{M} (x_j^{t} - \bar{x}_j)^2}};$$

where $x_i$ and $x_j$ denote the time series of the $i$-th and $j$-th ROIs, and $\bar{x}_i$, $\bar{x}_j$ their means. The Pearson correlation matrix of each subject's whole brain is denoted $H \in \mathbb{R}^{N \times N}$. Furthermore, for each subject a DTI feature matrix $D \in \mathbb{R}^{N \times N}$ is defined, whose values reflect the strength of the connections between brain regions. To achieve a preliminary fusion of fMRI and DTI, the brain network $G = (H, D)$ of each subject is defined, with $H \in \mathbb{R}^{N \times N}$ as the brain region feature map and $D \in \mathbb{R}^{N \times N}$ as the brain connection feature map.
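The per-subject brain network G = (H, D) can be assembled in a few lines; `np.corrcoef` computes exactly the ROI-by-ROI Pearson correlation described in this section, and the DTI fiber counts are used directly as the connection map:

```python
import numpy as np

def build_brain_network(X, fiber_counts):
    """Construct the brain network G = (H, D) for one subject.

    X            : (N, M) ROI time series -> H, the Pearson correlation matrix
    fiber_counts : (N, N) white-matter fiber counts -> D, the connection map
    """
    H = np.corrcoef(X)                  # H_ij = Pearson correlation of ROIs i, j
    D = fiber_counts.astype(float)
    return H, D
```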
Step 4: respectively constructing a local graph attention network and a global graph attention network; the local graph attention network comprises a local attention module and a local feature mapping module; the global graph attention network includes a global attention module and a global feature mapping module.
In practical applications, the local and global graph attention networks in fig. 1 do not share the same feature mapping module: the local and global feature mapping modules have the same structure, but their parameters are updated independently.
The local graph attention network includes a first local attention network and a second local attention network; the first local attention network and the second local attention network each comprise a local attention module and a local feature mapping module; the first local attention network comprises a first local attention module and a first local feature mapping module; the second local attention network includes a second local attention module and a second local feature mapping module. The global graph attention network includes a first global graph attention network and a second global graph attention network; the first global graph attention network and the second global graph attention network both comprise a global attention module and a global feature mapping module; the first global graph attention network comprises a first global attention module and a first global feature mapping module; the second global graph attention network includes a second global attention module and a second global feature mapping module.
Step 5: and inputting the multi-modal brain network into the local graph attention network to obtain a local feature embedded representation fused with local brain region features and local brain connection features.
In practical application, the local graph attention network is constructed to explore local brain region characteristics and local brain connection characteristics, and local sub-graph information is organically fused.
The multi-modal brain network is input to a first local attention network, the results output from the first local attention network are input to a second local attention network, and a local feature embedded representation that merges local brain region features and local brain connection features is output from the second local attention network.
Specifically, the multi-modal brain network is input to a first local attention module and a first local feature mapping module; and outputting the results from the first local attention module and the first local feature mapping module, inputting the results to the second local attention module and the second local feature mapping module, and obtaining the local feature embedded representation fusing the local brain region features and the local brain connection features after outputting the results from the second local attention module and the second local feature mapping module.
The input to the local attention module is a series of brain region feature vectors, which can be expressed as H={h_1,h_2,…,h_N}, h_i∈R^N, wherein the number of brain regions and the feature dimension are both N. If there is an edge from brain region i to brain region j, the local brain region attention coefficient e_ij is calculated. The specific formula is as follows:
e ij =a(W L h i ,W L h j );
wherein W_L∈R^(N×N) is a weight matrix in the local graph attention network, h_i and h_j are the features of brain region i and brain region j, and a is a shared attention mechanism a: R^N×R^N→R. The present invention injects the graph structure into the mechanism by performing masked attention, and then applies a softmax function across all neighboring brain regions j∈N_i to normalize the local attention coefficients of brain region i. The specific formula is as follows:

α_ij = softmax_j(e_ij) = exp(e_ij) / Σ_{k∈N_i} exp(e_ik);
wherein e_ik is the local brain region attention coefficient between brain region i and brain region k, and α_ij is the normalized attention coefficient, representing the importance of brain region j to brain region i. The local attention coefficient of brain region i is calculated as follows:

e_ij = LeakyReLU(a^T [W_L h_i ∥ W_L h_j]);
wherein the attention mechanism a is a single-layer feedforward neural network, parameterized by a weight vector a∈R^(2N) and applying a LeakyReLU nonlinearity; the superscript T denotes a transpose operation and ∥ denotes a concatenation operation. Meanwhile, the invention introduces a feature mapping module, implemented as a multi-layer perceptron, to extract brain connection attention. The input of the feature mapping module is a series of brain connection feature vectors, which can be expressed as D={d_1,d_2,…,d_N}, d_i∈R^N, wherein the number of brain regions and the feature dimension are both N. If d_ij>0, there is a connection between brain region i and brain region j. The invention uses the feature mapping module to calculate the local attention coefficient m_ij of the brain connection. The specific formula is as follows:

m_ij = exp(tanh(w_L d_ij + b_L)) / Σ_{k∈N_i} exp(tanh(w_L d_ik + b_L));

wherein d_ik indicates whether there is a connection between brain region i and brain region k, w_L is the weight matrix of the feature mapping module in the local graph attention network, b_L is the bias term of the output layer, and tanh is a nonlinear activation function. In order to fully extract the local information of the brain network, the invention simultaneously considers the local attention coefficients of adjacent brain regions and brain connections to update the brain network. The specific formula is as follows:

h_i^L = σ( Σ_{j∈N_i} (α_ij + m_ij) W_L h_j );
wherein h_i^L is the local feature vector of brain region i after passing through the local graph attention network; σ is a nonlinear activation function; a is the attention mechanism; LeakyReLU is a nonlinear activation function; the superscript T denotes a transpose operation; ∥ is a concatenation operation; W_L is the weight matrix of the local graph attention network; h_i is the feature of brain region i; h_j is the feature of brain region j; h_k is the feature of brain region k; α_ij is the normalized attention coefficient; m_ij is the local attention coefficient of the brain connection; tanh is a nonlinear activation function; d_ij indicates whether there is a connection between brain region i and brain region j; d_ik indicates whether there is a connection between brain region i and brain region k; b_L is the bias term of the output layer; w_L is the weight matrix of the feature mapping module in the local graph attention network; and N_i denotes all neighboring brain regions of brain region i.
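The local update can be sketched as follows. This is a minimal NumPy illustration, not the patented implementation: it assumes a scalar feature-mapping weight `w_L` (the patent uses an MLP) and assumes the two coefficients are combined by summation, α_ij + m_ij:

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def masked_softmax(scores, mask):
    # Normalize only over neighbors (mask == True); non-neighbors get weight 0.
    scores = np.where(mask, scores, -np.inf)
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    e = np.where(mask, e, 0.0)
    return e / e.sum(axis=1, keepdims=True)

def local_gat_layer(H, D, W_L, a, w_L, b_L):
    """One local-attention update over brain regions H and connections D."""
    N = H.shape[0]
    Wh = H @ W_L                                  # projected region features
    mask = D > 0                                  # edges where d_ij > 0
    # e_ij = LeakyReLU(a^T [W_L h_i || W_L h_j]) for every pair (i, j)
    pairs = np.concatenate([np.repeat(Wh[:, None, :], N, axis=1),
                            np.repeat(Wh[None, :, :], N, axis=0)], axis=-1)
    alpha = masked_softmax(leaky_relu(pairs @ a), mask)   # region attention
    m = masked_softmax(np.tanh(w_L * D + b_L), mask)      # connection attention
    return np.tanh((alpha + m) @ Wh)                      # sigma = tanh here
```

Each row of the result is the updated feature vector h_i^L of one brain region.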
Step 6: and inputting the multi-modal brain network into the global graph attention network to obtain a global feature embedded representation integrating global brain region features and global brain connection features.
In practical application, a global graph attention network is constructed: the global attention module performs global attention ranking on brain connections, while the feature mapping module calculates the global attention coefficients of brain regions, so as to update the brain network.
The multi-modal brain network is input to a first global attention network, the results output from the first global attention network are input to a second global attention network, and a global feature embedded representation that merges global brain region features and global brain connection features is output from the second global attention network.
Specifically, the multi-modal brain network is input to a first global attention module and a first global feature mapping module; and outputting the result from the first global attention module and the first global feature mapping module to a second global attention module and a second global feature mapping module, and obtaining a global feature embedded representation fusing the global brain region features and the global brain connection features.
The invention introduces the PageRank algorithm to calculate global brain connection attention coefficients. PageRank is typically used as a network centrality metric, deriving the importance of each node from the overall graph structure. The invention treats the DTI feature matrix D as an adjacency matrix; each feature matrix D can thus be transformed into a brain network, so that the global importance distribution of the brain regions can be calculated by the PageRank algorithm. The specific formula is as follows:

s(i) = Σ_{j∈N_i} s(j) / D_j;
wherein i denotes a brain region, N_i denotes the set of brain regions connected to brain region i, s(i) and s(j) denote the importance scores of brain regions i and j, and D_j denotes the number of connections of brain region j. The larger the value of s(i), the more important brain region i and its connections are. Assume that the importance distribution over all brain regions is s∈R^N; at the same time, the importance of each brain region is initially the same, set to 1/N. The adjacency matrix D is converted into a connection matrix C∈{0,1}^(N×N); since D_ij≥0, the following transformation can be applied:

C_ij = 1 if D_ij > 0, and C_ij = 0 otherwise;
A migration matrix T is constructed from the converted connection matrix C by row normalization:

T_ij = C_ij / Σ_k C_ik;
the score vector s is then updated continuously with the migration matrix T until a condition is met to stop the iteration, where epsilon is a parameter that determines whether the iteration is complete:
s_k = s_{k-1} × T;

|s_k − s_{k-1}| < ε, k ≤ 100;
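The steps above amount to a standard power iteration and can be sketched as follows; the helper name is an assumption, and no damping factor is used since none is mentioned in the text:

```python
import numpy as np

def global_importance(D, eps=1e-8, max_iter=100):
    """Importance scores s of brain regions from the DTI matrix D."""
    N = D.shape[0]
    C = (D > 0).astype(float)            # connection matrix C in {0,1}^(N x N)
    deg = C.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                  # guard against isolated regions
    T = C / deg                          # row-normalized migration matrix
    s = np.full(N, 1.0 / N)              # uniform initial importance 1/N
    for _ in range(max_iter):
        s_next = s @ T                   # s_k = s_{k-1} T
        if np.abs(s_next - s).sum() < eps:
            s = s_next
            break
        s = s_next
    return s
```

On a connected graph the scores converge toward the stationary distribution of a random walk, so highly connected regions receive the largest s(i).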
The score vector s is the global importance distribution of the brain regions obtained from brain connections. The invention injects the global importance distribution of the brain regions into the adjacency matrix D by performing masked attention, and then applies a softmax function across all neighboring brain regions j∈N_i to normalize the global brain connection attention of brain region i. The specific formula is as follows:

β_ij = exp(s_ij) / Σ_{k∈N_i} exp(s_ik);
wherein s is ij Is the global brain connection attention coefficient between brain region i and brain region j, i.eMiddle-countA calculated importance score; s is(s) ik Is the global brain connection attention coefficient, beta, between brain region i and brain region k ij Is the normalized global brain connection attention coefficient, and represents the importance of brain connection between brain region i and brain region j. At the same time, the invention introduces a feature mapping module to extract the attention coefficient of the brain region, and the input is a series of feature vectors of the brain region, which can be expressed asWhere both the number of brain regions and the feature dimension are N,N i representing a set of brain regions connected to brain region i, N u Is a collection of brain regions connected to brain region u. The invention uses the feature mapping module to calculate the global attention coefficient m of the brain region ij . The specific formula is as follows:
wherein h_ik denotes the k-th feature of brain region i, w_G is the weight matrix of the feature mapping module in the global graph attention network, b_G is the bias term of the output layer, and tanh is a nonlinear activation function. In order to fully extract the global information of the brain network, the invention simultaneously considers all brain regions and brain connections to update the brain network. The specific formula is as follows:

h_i^G = σ( Σ_{j∈N_i} (β_ij + n_ij) W_G h_j );
wherein h_i^G is the global feature vector of brain region i after passing through the global graph attention network; N_i denotes all neighboring brain regions of brain region i; σ is a nonlinear activation function; β_ij is the normalized global brain connection attention of brain region i; n_ij is the global attention coefficient of the brain region; W_G is the weight matrix of the global graph attention network; h_j is the feature of brain region j; tanh is a nonlinear activation function; h_ij is the j-th feature of brain region i; h_ik is the k-th feature of brain region i; b_G is the bias term of the output layer; s_ij is the global brain connection attention coefficient between brain region i and brain region j; and s_ik is the global brain connection attention coefficient between brain region i and brain region k.
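The global update can be sketched analogously to the local one. This sketch makes several assumptions the text does not pin down: the pairwise score s_ij is taken as s_i·s_j, the feature-mapping weight `w_G` is a scalar instead of an MLP, and the two coefficients are combined by summation, mirroring the local layer:

```python
import numpy as np

def neighbor_softmax(x, mask):
    """Softmax over neighboring entries only (masked attention)."""
    x = np.where(mask, x, -np.inf)
    e = np.exp(x - x.max(axis=1, keepdims=True))
    e = np.where(mask, e, 0.0)
    return e / e.sum(axis=1, keepdims=True)

def global_gat_layer(H, D, s, W_G, w_G, b_G):
    """One global-attention update using the importance scores s."""
    mask = D > 0
    beta = neighbor_softmax(np.outer(s, s), mask)        # connection attention
    n = neighbor_softmax(np.tanh(w_G * H + b_G), mask)   # region attention
    return np.tanh((beta + n) @ (H @ W_G))               # sigma = tanh here
```

The importance scores bias the softmax toward connections between central regions, which is what the PageRank step was computed for.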
Step 7: and fusing the local feature embedded representation and the global feature embedded representation by using an attention mechanism to obtain an embedded representation.
Based on the step 5 and the step 6, the feature embedding obtained through the local graph attention network and the global graph attention network is dynamically fused by using an attention mechanism.
Local and global feature embeddings Z_L and Z_G are generated by the local graph attention network and the global graph attention network, respectively. Considering that the labels of the brain network are related to their pairwise combination, the invention uses the attention mechanism att(Z_L, Z_G) to fuse them. The specific formula is as follows:

(α_L, α_G) = att(Z_L, Z_G);
wherein α_L, α_G ∈ R^(N×1) respectively represent the attention values of the N brain regions in the embeddings Z_L and Z_G. For brain region i, its embedding in Z_L is z_i^L. The invention first transforms the embedding by a nonlinear transformation, and then uses a shared attention vector q to obtain the attention value ω_i^L. The specific formula is as follows:

ω_i^L = q^T · tanh(W · (z_i^L)^T + b);
wherein W is a weight matrix and b is a bias vector. Similarly, the invention can obtain the attention value ω_i^G of brain region i in the embedding Z_G. The invention then normalizes the attention values with the softmax function to obtain the final weights. The specific formula is as follows:

α_i^L = softmax(ω_i^L) = exp(ω_i^L) / (exp(ω_i^L) + exp(ω_i^G));
wherein α_i^L is the normalized weight of the local embedding and α_i^G is the normalized weight of the global embedding; the subscripts L and G denote Local and Global, respectively.
Similarly, α_i^G can be obtained; the larger the attention weight, the more important the corresponding embedding. For all N brain regions, the learned weight vectors are w_l, w_g ∈ R^(N×1), and γ_L = diag(w_l), γ_G = diag(w_g). The invention then combines the local and global feature embeddings to obtain the final embedding. The specific formula is as follows:
Z=γ L ·Z LG ·Z G
wherein Z is the embedded representation; Z_L is the local feature embedding; Z_G is the global feature embedding; γ_L is the diagonal matrix of attention values of the N brain regions for the local feature embedding Z_L; γ_G is the diagonal matrix of attention values of the N brain regions for the global feature embedding Z_G; w_l is the learned weight vector of the local features; and w_g is the learned weight vector of the global features.
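The fusion att(Z_L, Z_G) can be sketched as follows, assuming a shared attention vector `q` and a two-view softmax per brain region as described above (the function name is illustrative):

```python
import numpy as np

def fuse_embeddings(Z_L, Z_G, W, b, q):
    """Per-region attention weights for two views, then Z = g_L.Z_L + g_G.Z_G."""
    w_l = np.tanh(Z_L @ W + b) @ q            # omega_i^L for every region i
    w_g = np.tanh(Z_G @ W + b) @ q            # omega_i^G
    m = np.maximum(w_l, w_g)                  # stabilized two-way softmax
    a_l = np.exp(w_l - m) / (np.exp(w_l - m) + np.exp(w_g - m))
    a_g = 1.0 - a_l
    # Equivalent to gamma_L . Z_L + gamma_G . Z_G with gamma = diag(weights).
    return a_l[:, None] * Z_L + a_g[:, None] * Z_G
```

When both views are identical the weights are 0.5 each and the fusion returns the shared embedding unchanged, which is a useful sanity check.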
Step 8: inputting the embedded representation into a classifier, and optimizing the classifier by applying a contrast loss function to obtain a classification result; the classification result comprises the probability of normal multi-mode brain network and the probability of abnormal multi-mode brain network.
In practical applications, contrast loss functions are designed to refine the distance between features within the shared representation space, thereby enhancing global and local key feature expression.
Through the global-local graph attention network, global and local embedded representations of the same brain network can be obtained; the invention then uses a contrastive objective to distinguish the embedding of the same brain region in these two different embeddings from the embeddings of other brain regions. Formally, for any brain region i, the embedding m_i generated by the local graph attention network is regarded as the anchor, the embedding n_i generated by the global graph attention network is regarded as the positive sample, and the embeddings of brain regions other than i in either embedding are naturally regarded as negative samples. The invention defines a discriminator θ(m, n) = s(g(m), g(n)), where s is the cosine similarity and g is a nonlinear projection used to enhance the representational ability of the discriminator; the transformation g is implemented as a two-layer multi-layer perceptron (MLP). The pairwise objective of each positive pair (m_i, n_i) is defined as follows:

ℓ(m_i, n_i) = −log [ exp(θ(m_i, n_i)/τ) / ( exp(θ(m_i, n_i)/τ) + Σ_{k=1}^{N} 1_[k≠i] exp(θ(m_i, n_k)/τ) + Σ_{k=1}^{N} 1_[k≠i] exp(θ(m_i, m_k)/τ) ) ];
wherein 1_[k≠i] is an indicator function equal to 1 if and only if k≠i, and τ is a temperature parameter. The negative samples come from two sources: brain regions within the same embedding and brain regions in the other embedding. Since the two embeddings are symmetric, the loss ℓ(n_i, m_i) of the other embedding is defined similarly.
The overall objective to be minimized is then defined as the average over all positive pairs. The specific formula is as follows:

L_CL = (1/2N) · Σ_{i=1}^{N} [ ℓ(m_i, n_i) + ℓ(n_i, m_i) ];
Meanwhile, the invention uses a cross-entropy loss function to enhance the updating of the network parameters:
wherein P(Z) is the probability of classifying the embedding Z into a given class, Y is the label of the brain network, and λ is a hyper-parameter used to balance the loss terms. The final loss function is given by:

L = L_CE + λ · L_CL;
wherein L is the contrast loss; L_CE is the cross-entropy loss; L_CL is the embedding loss; λ is a hyper-parameter; Y is the label of the multi-modal brain network; P(Z) is the probability that the embedded representation Z is classified as a normal multi-modal brain network; N is the total number of brain regions; 1_[k≠i] is an indicator function; θ(·) is the discriminator function; τ is a temperature parameter; m_i is the local feature embedded representation of brain region i; n_i is the global feature embedded representation of brain region i; ℓ(m_i, n_i) is the pairwise objective loss of each positive pair (m_i, n_i); ℓ(n_i, m_i) is the pairwise objective loss of each positive pair (n_i, m_i); m_k is the local feature embedded representation of brain region k; and n_k is the global feature embedded representation of brain region k.
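The contrastive objective can be sketched as follows. For brevity the nonlinear projection g is omitted, so the discriminator θ reduces to plain cosine similarity; the function names are illustrative:

```python
import numpy as np

def cosine(a, b):
    """Pairwise cosine similarity between rows of a and rows of b."""
    return (a @ b.T) / (np.linalg.norm(a, axis=1, keepdims=True)
                        * np.linalg.norm(b, axis=1, keepdims=True).T)

def contrastive_loss(M, Nn, tau=0.5):
    """Symmetric InfoNCE-style loss over local (M) and global (Nn) embeddings."""
    def one_side(A, B):
        inter = np.exp(cosine(A, B) / tau)     # anchor vs. other view
        intra = np.exp(cosine(A, A) / tau)     # anchor vs. own view
        pos = np.diag(inter)                   # positive pair (a_i, b_i)
        # Denominator: positive + inter-view negatives + intra-view negatives.
        denom = inter.sum(axis=1) + intra.sum(axis=1) - np.diag(intra)
        return -np.log(pos / denom)
    return 0.5 * (one_side(M, Nn) + one_side(Nn, M)).mean()
```

Aligned views (M close to Nn) yield a smaller loss than anti-aligned ones, which is exactly the pressure that pulls the two embeddings of the same brain region together.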
The invention has the following characteristics:
(1) The invention provides a framework for fusing the fMRI time sequence signal of brain activity and the DTI image reflecting the brain physical structure, which can effectively capture the complementary information between modes and improve the accuracy of classification results.
(2) The classification result of the invention comprises the probability of normal multi-mode brain network and the probability of abnormal multi-mode brain network, and can be used as the auxiliary for the clinician to analyze the epilepsy according to the rs-fMRI data and the DTI data.
(3) The invention uses the attention coefficients of brain regions and brain connections as the basis for updating the brain network, extracts the final feature layer of the trained model, and identifies important brain regions and brain connections as key sub-networks using a biomarker method, for example, key sub-networks that facilitate epilepsy analysis.
(4) The invention takes the global and local graph attention network as backbone network, learns the global and local embedded representation of the brain network, more comprehensively excavates the brain network information, and enhances the expression of the key subnetwork in the embedding by utilizing the contrast loss function.
(5) The invention can provide auxiliary information for doctors in the analysis of brain diseases. The classification results of the proposed method on an epilepsy dataset are superior to those of current multi-modal fusion analysis methods.
Example two
In order to perform a corresponding method of the above embodiments to achieve the corresponding functions and technical effects, a multi-modal brain network classification system is provided below, the classification system comprising:
the acquisition module is used for acquiring multi-modal brain network data of the target to be classified; the multi-modal brain network data includes rs-fMRI data and DTI data.
The preprocessing module is used for respectively preprocessing the rs-fMRI data and the DTI data to obtain a region of interest of the rs-fMRI data and a fiber image of the DTI data.
The brain network determining module is used for determining a multi-mode brain network according to the region of interest of the rs-fMRI data and the fiber image of the DTI data; the multimodal brain network includes a brain region feature map and a brain connection feature map.
The building module is used for respectively building a local graph attention network and a global graph attention network; the local graph attention network comprises a local attention module and a local feature mapping module; the global graph attention network includes a global attention module and a global feature mapping module.
And the local embedding determining module is used for inputting the multi-mode brain network into the local graph attention network to obtain local feature embedding representation fusing local brain region features and local brain connection features.
And the global embedding determining module is used for inputting the multi-modal brain network into the global graph attention network to obtain a global feature embedding representation integrating global brain region features and global brain connection features.
And the fusion module is used for fusing the local feature embedded representation and the global feature embedded representation by using an attention mechanism to obtain the embedded representation.
And the classification module is used for inputting the embedded representation into a classifier, and optimizing the classifier by applying a contrast loss function to obtain a classification result.
Example III
The embodiment of the invention provides an electronic device, which comprises a memory and a processor, wherein the memory is used for storing a computer program, and the processor runs the computer program to enable the electronic device to execute the multi-mode brain network classification method in the first embodiment.
Alternatively, the electronic device may be a server.
In addition, the embodiment of the invention also provides a computer readable storage medium, which stores a computer program, and the computer program realizes the multi-modal brain network classification method of the first embodiment when being executed by a processor.
In the present specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different point from other embodiments, and identical and similar parts between the embodiments are all enough to refer to each other. For the system disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and the relevant points refer to the description of the method section.
The principles and embodiments of the present invention have been described herein with reference to specific examples, the description of which is intended only to assist in understanding the methods of the present invention and the core ideas thereof; also, it is within the scope of the present invention to be modified by those of ordinary skill in the art in light of the present teachings. In view of the foregoing, this description should not be construed as limiting the invention.

Claims (10)

1. A multi-modal brain network classification method, the classification method comprising:
Acquiring multi-modal brain network data of a target to be classified; the multi-modal brain network data comprises rs-fMRI data and DTI data;
preprocessing the rs-fMRI data and the DTI data respectively to obtain an interested region of the rs-fMRI data and a fiber image of the DTI data;
determining a multi-mode brain network according to the region of interest of the rs-fMRI data and the fiber image of the DTI data; the multi-modal brain network comprises brain region feature mapping and brain connection feature mapping;
respectively constructing a local graph attention network and a global graph attention network; the local graph attention network comprises a local attention module and a local feature mapping module; the global graph attention network comprises a global attention module and a global feature mapping module;
inputting the multi-modal brain network into the local graph attention network to obtain a local feature embedded representation fused with local brain region features and local brain connection features;
inputting the multi-modal brain network into the global graph attention network to obtain a global feature embedded representation fused with global brain region features and global brain connection features;
fusing the local feature embedded representation and the global feature embedded representation by using an attention mechanism to obtain an embedded representation;
Inputting the embedded representation into a classifier, and optimizing the classifier by applying a contrast loss function to obtain a classification result; the classification result comprises the probability of normal multi-mode brain network and the probability of abnormal multi-mode brain network.
2. The multi-modal brain network classification method according to claim 1, wherein preprocessing the rs-fMRI data and the DTI data respectively to obtain a region of interest of the rs-fMRI data and a fiber image of the DTI data, specifically comprises:
dividing the rs-fMRI data into a plurality of rs-fMRI data fragments based on a time sequence;
correcting each rs-fMRI data segment by applying a plane echo sequence template to obtain a plurality of corrected rs-fMRI data segments;
performing trend removal processing on each corrected rs-fMRI data segment to obtain a plurality of processed rs-fMRI data segments;
dividing the region of interest of each processed rs-fMRI data segment by using an automatic anatomical marker map to obtain the region of interest of the rs-fMRI data;
performing distortion correction on the DTI data to obtain corrected DTI data;
acquiring a magnetic resonance T1 image of the target to be classified;
Determining a standard automatic anatomical marker map of the target to be classified according to the magnetic resonance T1 image;
determining an anatomical region according to the standard automatic anatomical landmark map of the object to be classified;
and generating a fiber image of the DTI data according to the anatomical region.
3. The multi-modal brain network classification method according to claim 1, characterized in that determining a multi-modal brain network from the region of interest of the rs-fMRI data and the fiber image of the DTI data, in particular comprises:
constructing an fMRI feature matrix; the value in the fMRI feature matrix is the region of interest of the rs-fMRI data;
calculating pearson correlation coefficients between regions of interest in the fMRI feature matrix to obtain a pearson correlation matrix, and taking the pearson correlation matrix as brain region feature mapping;
constructing a DTI feature matrix, and taking the DTI feature matrix as brain connection feature mapping; the value in the DTI feature matrix is the number of white matter fiber bundles in the fiber image of the DTI data.
4. The multi-modal brain network classification method according to claim 1, wherein the local graph attention network is:
wherein h_i^L is the local feature vector of brain region i after passing through the local graph attention network; σ is a nonlinear activation function; a is the attention mechanism; LeakyReLU is a nonlinear activation function; the superscript T denotes a transpose operation; ∥ is a concatenation operation; W_L is a weight matrix in the local graph attention network; h_i is the feature of brain region i; h_j is the feature of brain region j; h_k is the feature of brain region k; α_ij is the normalized attention coefficient; m_ij is the local attention coefficient of the brain connection;
tanh is a nonlinear activation function; d_ij indicates whether there is a connection between brain region i and brain region j; d_ik indicates whether there is a connection between brain region i and brain region k; b_L is the bias term of the output layer; w_L is the weight matrix of the feature mapping module in the local graph attention network; and N_i denotes all neighboring brain regions of brain region i.
5. The multi-modal brain network classification method according to claim 1, wherein the global graph attention network is:
wherein h_i^G is the global feature vector of brain region i after passing through the global graph attention network; N_i denotes all neighboring brain regions of brain region i; σ is a nonlinear activation function; β_ij is the normalized global brain connection attention of brain region i; n_ij is the global attention coefficient of the brain region; W_G is a weight matrix in the global graph attention network; w_G is the weight matrix of the feature mapping module in the global graph attention network; h_j is the feature of brain region j; tanh is a nonlinear activation function; h_ij is the j-th feature of brain region i; h_ik is the k-th feature of brain region i; b_G is the bias term of the output layer; s_ij is the global brain connection attention coefficient between brain region i and brain region j; and s_ik is the global brain connection attention coefficient between brain region i and brain region k.
6. The multi-modal brain network classification method according to claim 1, characterized in that the embedding is represented as:
Z=γ L ·Z LG ·Z G
γ L =diag(w l );
γ G =diag(w g );
wherein Z is an embedded representation; z is Z L Embedding local features; z is Z G Embedding global features; gamma ray L Embedding Z for local features L Attention values of N brain regions; gamma ray G Embedding Z for global features G Attention values of N brain regions; w (w) l Learning weights for local features; w (w) g Learning weights for global features.
7. The multi-modal brain network classification method according to claim 1, characterized in that the contrast loss function is:
wherein L is the contrast loss; L_CE is the cross-entropy loss; L_CL is the embedding loss; λ is a hyper-parameter; Y is the label of the multi-modal brain network; P(Z) is the probability that the embedded representation Z is classified as a normal multi-modal brain network; N is the total number of brain regions; 1_[k≠i] is an indicator function; θ(·) is the discriminator function; τ is a temperature parameter; m_i is the local feature embedded representation of brain region i; n_i is the global feature embedded representation of brain region i; L(m_i, n_i) is the pairwise objective loss of each positive pair (m_i, n_i); L(n_i, m_i) is the pairwise objective loss of each positive pair (n_i, m_i); m_k is the local feature embedded representation of brain region k; and n_k is the global feature embedded representation of brain region k.
8. A multi-modal brain network classification system, the classification system comprising:
the acquisition module is used for acquiring multi-modal brain network data of the target to be classified; the multi-modal brain network data comprises rs-fMRI data and DTI data;
the preprocessing module is used for respectively preprocessing the rs-fMRI data and the DTI data to obtain a region of interest of the rs-fMRI data and a fiber image of the DTI data;
the brain network determining module is used for determining a multi-mode brain network according to the region of interest of the rs-fMRI data and the fiber image of the DTI data; the multi-modal brain network comprises brain region feature mapping and brain connection feature mapping;
the building module is used for respectively building a local graph attention network and a global graph attention network; the local graph attention network comprises a local attention module and a local feature mapping module; the global graph attention network comprises a global attention module and a global feature mapping module;
The local embedding determining module is used for inputting the multi-modal brain network into the local graph attention network to obtain local feature embedding representation fusing local brain region features and local brain connection features;
the global embedding determining module is used for inputting the multi-modal brain network into the global graph attention network to obtain a global feature embedding representation fusing global brain region features and global brain connection features;
the fusion module is used for fusing the local feature embedded representation and the global feature embedded representation by using an attention mechanism to obtain an embedded representation;
the classification module is used for inputting the embedded representation into a classifier, and optimizing the classifier by applying a contrast loss function to obtain a classification result; the classification result comprises the probability of normal multi-mode brain network and the probability of abnormal multi-mode brain network.
9. An electronic device comprising a memory for storing a computer program and a processor that runs the computer program to cause the electronic device to perform the multi-modal brain network classification method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that it stores a computer program which, when executed by a processor, implements the multi-modal brain network classification method according to any one of claims 1 to 7.
CN202410045207.4A 2024-01-11 2024-01-11 Multi-mode brain network classification method, system, electronic equipment and medium Pending CN117765530A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410045207.4A CN117765530A (en) 2024-01-11 2024-01-11 Multi-mode brain network classification method, system, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410045207.4A CN117765530A (en) 2024-01-11 2024-01-11 Multi-mode brain network classification method, system, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN117765530A true CN117765530A (en) 2024-03-26

Family

ID=90314646

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410045207.4A Pending CN117765530A (en) 2024-01-11 2024-01-11 Multi-mode brain network classification method, system, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN117765530A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118038231A (en) * 2024-04-12 2024-05-14 山东工商学院 Brain network construction and feature extraction method for fusing multidimensional information in small sample scene

Similar Documents

Publication Publication Date Title
CN113040715B (en) Human brain function network classification method based on convolutional neural network
Wang et al. Single slice based detection for Alzheimer’s disease via wavelet entropy and multilayer perceptron trained by biogeography-based optimization
CN109659033B (en) Chronic disease state of an illness change event prediction device based on recurrent neural network
WO2023077603A1 (en) Prediction system, method and apparatus for abnormal brain connectivity, and readable storage medium
Song et al. Auto-metric graph neural network based on a meta-learning strategy for the diagnosis of Alzheimer's disease
CN109242860B (en) Brain tumor image segmentation method based on deep learning and weight space integration
CN112735570B (en) Image-driven brain atlas construction method, device, equipment and storage medium
Turkson et al. Classification of Alzheimer’s disease using deep convolutional spiking neural network
CN113314205B (en) Efficient medical image labeling and learning system
CN109544518B (en) Method and system applied to bone maturity assessment
CN111967495B (en) Classification recognition model construction method
CN114242236A (en) Structure-function brain network bidirectional mapping model construction method and brain network bidirectional mapping model
CN111242233B (en) Alzheimer disease classification method based on fusion network
CN117765530A (en) Multi-mode brain network classification method, system, electronic equipment and medium
CN114299006A (en) Self-adaptive multi-channel graph convolution network for joint graph comparison learning
CN110136109B (en) MCI classification method based on expansion convolutional neural network
CN115272295A (en) Dynamic brain function network analysis method and system based on time domain-space domain combined state
Jung et al. Inter-regional high-level relation learning from functional connectivity via self-supervision
Baskar et al. An Accurate Prediction and Diagnosis of Alzheimer’s Disease using Deep Learning
Lonij et al. Open-world visual recognition using knowledge graphs
Zong et al. Multiscale autoencoder with structural-functional attention network for Alzheimer's disease prediction
CN116523839A (en) Parkinson's disease auxiliary analysis system
WO2023108418A1 (en) Brain atlas construction and neural circuit detection method and related product
CN113080847B (en) Device for diagnosing mild cognitive impairment based on bidirectional long-short term memory model of graph
CN114663696A (en) Category incremental learning method and system suitable for small sample medical image classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination