CN117689625A - CT image processing method driven by multitasking hybrid neural network and brain-like diagnosis system - Google Patents

CT image processing method driven by multitasking hybrid neural network and brain-like diagnosis system

Info

Publication number
CN117689625A
CN117689625A
Authority
CN
China
Prior art keywords
neural network
task
hybrid
hybrid neural
loss function
Prior art date
Legal status
Granted
Application number
CN202311626792.9A
Other languages
Chinese (zh)
Other versions
CN117689625B (en)
Inventor
Hu Bin
Guan Zhihong
Shan Liang
Current Assignee
Guangdong Provincial Laboratory Of Artificial Intelligence And Digital Economy Guangzhou
South China University of Technology SCUT
Original Assignee
Guangdong Provincial Laboratory Of Artificial Intelligence And Digital Economy Guangzhou
South China University of Technology SCUT
Priority date
Filing date
Publication date
Application filed by Guangdong Provincial Laboratory Of Artificial Intelligence And Digital Economy Guangzhou, South China University of Technology SCUT filed Critical Guangdong Provincial Laboratory Of Artificial Intelligence And Digital Economy Guangzhou
Priority to CN202311626792.9A
Publication of CN117689625A
Application granted
Publication of CN117689625B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30016 Brain
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30061 Lung


Abstract

The invention discloses a CT image processing method driven by a multi-task hybrid neural network, together with a brain-like diagnosis system, in the field of artificial-intelligence medical imaging: a common encoder and a multi-task decoder are generated, and multi-task result images are output. The system comprises a processor, a memory and a display, the memory storing at least one instruction that is loaded and executed by the processor to realize the following steps: acquiring a CT image; generating a common encoder built on the hybrid neural network and calling the encoder to encode the CT image into joint features; and generating a multi-task decoder built on the hybrid neural network and calling the decoder to perform classification and segmentation operations on the joint features to obtain multi-task diagnosis results. The invention improves the multi-task diagnosis efficiency of CT images and reduces the computational power consumption of the diagnosis system while guaranteeing multi-task accuracy.

Description

CT image processing method driven by multitasking hybrid neural network and brain-like diagnosis system
Technical Field
The invention relates to artificial-intelligence medical imaging, and in particular to a CT image processing method driven by a multi-task hybrid neural network and to a brain-like diagnosis system.
Background
CT (Computed Tomography) examination non-invasively acquires images of the internal tissues of the human body or of a body part, and is one of the most important clinical diagnostic methods. Early computer-aided CT examination methods were built as expert systems, which relied on medical professionals to guide image reading and gave diagnostic results based on expert experience. In recent years, image processing methods driven by artificial-intelligence technologies such as artificial neural networks and large models have been able to learn disease knowledge from big data, bringing objective, accurate and rapid diagnosis.
Accurate lung CT classification or segmentation methods greatly assist doctors in judging the type of disease and locating the lesions. However, on the one hand, the lungs of pneumonia patients exhibit abnormalities such as pulmonary consolidation, interstitial lesions or ground-glass opacities caused by alveolar-wall inflammation or alveolar exudates, presenting complex characteristics such as multiple infection regions, blurred edges and heterogeneously shaped infection regions; existing neural network methods such as U-Net and VGGNet therefore suffer from defects such as large data requirements and high computational energy consumption and are difficult to use for lung CT examination. On the other hand, CT examination involves multi-task medical image processing such as classification and segmentation, and existing multi-task neural networks such as multi-task CNNs and SNNs suffer from defects such as high training difficulty and low multi-task accuracy, which limits the application of artificial-intelligence medical imaging technology.
Therefore, developing a novel artificial neural network method to improve the processing efficiency and accuracy of multi-task CT images is one of the important ways to break through the bottleneck of existing medical imaging technology.
Disclosure of Invention
Aiming at the defects of low accuracy and high computational energy consumption of multi-task neural networks in CT image processing in the prior art, the present invention provides a CT image processing method driven by a multi-task hybrid neural network and a brain-like diagnosis system, which improve the multi-task diagnosis efficiency, guarantee the multi-task accuracy and reduce the computational power consumption of the diagnosis system.
In order to achieve the above purpose, the present invention provides the following technical solutions:
in a first aspect, the present invention provides a CT image processing method, which includes the steps of:
acquiring a CT image;
generating a hybrid pulse-convolution calculation layer, wherein the hybrid pulse-convolution calculation layer is formed by cascading a convolution operation with impulse neurons, and generating a hybrid neural network from the hybrid pulse-convolution calculation layer;
generating a common encoder built on the hybrid neural network, and calling the encoder to encode the CT image to generate a joint feature;
generating a multi-task decoder built on the hybrid neural network, and calling the decoder to perform a classification operation and multi-task segmentation operations on the joint feature;
and generating a multi-task hybrid neural network by connecting the common encoder and the multi-task decoder in series, and outputting a brain-like result image with the multi-task hybrid neural network.
In a second aspect, the present invention provides a CT brain-like diagnosis system comprising a processor, a memory and a display, the memory storing at least one instruction that is loaded and executed by the processor to implement the following steps:
acquiring a CT image;
generating a hybrid pulse-convolution calculation layer, wherein the hybrid pulse-convolution calculation layer is formed by cascading a convolution operation with impulse neurons, and generating a hybrid neural network from the hybrid pulse-convolution calculation layer;
generating a common encoder built on the hybrid neural network, and calling the encoder to encode the CT image to generate a joint feature;
generating a multi-task decoder built on the hybrid neural network, and calling the decoder to perform a classification operation and multi-task segmentation operations on the joint feature;
and generating a multi-task hybrid neural network by connecting the common encoder and the multi-task decoder in series, and outputting a brain-like result image with the multi-task hybrid neural network.
In a third aspect, the present invention also provides a computer-readable storage medium storing a program for execution by a processor to implement a method as described above.
In a fourth aspect, the present invention also provides a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions may be read from a computer-readable storage medium by a processor of a computer device, and executed by the processor, to cause the computer device to perform the method described previously.
Compared with the prior art, the invention has the beneficial effects that:
(1) High multi-task diagnosis accuracy: starting from multi-task prior knowledge and image association features, the proposed neural network integrates an encoding-decoding network structure with hybrid convolution-pulse computation units, improves on the existing single-type loss functions by constructing a coupled multi-loss function specific to the subtasks, trains the neural network by means of an improved back-propagation algorithm, and extracts the joint multi-task features of CT image classification and segmentation, thereby improving the accuracy of lung CT multi-task diagnosis.
(2) Low underlying computational power consumption: the proposed neural network introduces LIF impulse neurons in place of the continuous activation functions and multiply-accumulate operations of conventional convolutional neural networks, performing discrete information processing and addition-only computation through pulse sequences (valued 0 or 1), which reduces the computational energy consumption of the built network and the running cost of the CT diagnosis system.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. The drawings described below show only some embodiments of the present invention; a person of ordinary skill in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a frame diagram of the general technical route of an embodiment of the present invention;
FIG. 2 is a schematic diagram of the multi-task hybrid neural network; FIG. 2(a) is the encoder built on the hybrid neural network; FIG. 2(b) is a schematic diagram of the multi-task decoder built on the hybrid neural network;
FIG. 3 is a schematic diagram of the CT brain-like diagnosis system and apparatus;
FIG. 4 is an example lung CT of a novel-coronavirus-infected patient;
FIG. 5 shows the convergence on the training set of the loss functions of the three subtasks and of the multi-loss function of the built neural network; FIG. 5(a) is the convergence curve of the classification-task loss function of the hybrid neural network constructed in Example 1; FIG. 5(b) is the convergence curve of the lung-infection-region segmentation-task loss function; FIG. 5(c) is the convergence curve of the lung parenchyma segmentation-task loss function; FIG. 5(d) is the convergence curve of the total loss function;
FIG. 6 shows the results of the built neural network and the comparison methods for segmenting the lung infection regions;
FIG. 7 shows the results of independent segmentation of the lung parenchyma by the par-SEG module of the built neural network;
FIG. 8 shows the results of independent segmentation of the lung infection regions by the inf-SEG module of the built neural network;
FIG. 9 shows the classification metrics of the built neural network and the comparison methods on the test set;
FIG. 10 shows the sample-level ROC curves and AUC areas of the built neural network and the comparison methods on the test set;
FIG. 11 shows the segmentation metrics of the built neural network and the comparison methods on the test set;
FIG. 12 shows the computational energy-consumption ratios of the built neural network and the comparison method.
Detailed Description
The embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art without inventive effort based on the present disclosure fall within the protection scope of the present invention.
Examples:
It should be noted that the terms "first", "second" and the like in the description, the claims and the above drawings of the present invention are used to distinguish similar objects and do not necessarily describe a particular order or sequence. It is to be understood that data so used may be interchanged where appropriate, so that the embodiments of the invention described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprise", "include" and "have", and any variations thereof, are intended to cover non-exclusive inclusion: a process, method, system, article or apparatus that comprises a list of steps or elements is not necessarily limited to the steps or elements expressly listed, and may comprise other steps or elements inherent to such a process, method, article or apparatus.
For the multi-task joint classification and segmentation of CT images, the embodiments of the present invention adopt the technical concept of "one encoder connected in series with several parallel decoders" and "hybrid convolution-pulse computation" to construct a brain-like multi-task hybrid neural network, overcoming the limitations of existing neural network technologies such as U-Net, multi-task CNNs and SNNs in CT image processing, including large data requirements and high computational power consumption; combined with computer equipment, a brain-like diagnosis system with joint CT image classification and segmentation functions is built, improving the diagnosis efficiency of CT images and reducing the computational power consumption of the diagnosis system.
Example 1
Referring to FIGS. 1 and 2, the CT image processing method provided by this embodiment may include the following steps:
step 101: CT images are acquired.
In this step, the CT image may be obtained by computed tomography (CT) scanning.
Step 102: generating a hybrid pulse-convolution calculation layer, wherein the hybrid pulse-convolution calculation layer is formed by cascading a convolution operation with impulse neurons, and generating a hybrid neural network from the hybrid pulse-convolution calculation layer.
Step 103: generating a common encoder built on the hybrid neural network, and calling the encoder to encode the CT image to generate a joint feature.
In the above steps, the encoder comprises N1 hierarchical levels. Each level comprises two hybrid pulse-convolution calculation layers and one pooling layer: the hybrid pulse-convolution layers, each formed by cascading a convolution operation with impulse neurons, extract features from the CT image, and the pooling layer downsamples the feature-extracted CT image.
Specifically, the encoder E is constructed with a CT image (e.g., a lung CT) as input and the multi-task joint feature F as output. The encoder is built from hybrid pulse-convolution calculation layers and pooling layers: the input CT image undergoes feature extraction in the hybrid pulse-convolution layers and is then downsampled by the pooling layers, halving the feature-map size, until the joint feature F is finally obtained. As a feature-extraction network, the encoder comprises N1 hierarchical levels, each consisting of 2 hybrid pulse-convolution calculation layers and 1 pooling layer, the hybrid pulse-convolution layer being formed by cascading a convolution operation with impulse neurons. The formulas of the computation units are as follows:

Convolution unit: a convolution operation is adopted, i.e., a weighted average of the pixels in a small region of the input CT becomes the corresponding pixel of the output image, with the weights defined by a convolution kernel, satisfying

conv(x_l) = x_l * K(w_l, b)   (1)

where x_l denotes the layer-l input, K(w_l, b) denotes the convolution kernel, w_l denotes the layer-l weight parameters and b denotes the bias parameter; the weights w_l are determined by the training procedure described below. The layer-1 input x_1 is the original CT image; the inputs x_l of the other layers are defined by the pulse sequences emitted by the neurons.
Impulse neurons: described by the standard leaky integrate-and-fire (LIF) model. Let u(t) be the neuron membrane potential, I(t) the neuron input (mainly from the forward neurons), R the membrane resistance and τ the time constant; the LIF model reads

τ · du(t)/dt = -u(t) + R · I(t)   (2)

The mechanism by which LIF neurons emit pulses according to equation (2) is: when the membrane potential u reaches the threshold u_thr, the neuron emits a pulse (spike) and the membrane potential u is reset to the initial value u_0 = 0. The output of an impulse neuron is therefore a pulse train, defined as

s(t) = Σ_k δ(t - t_k)   (3)

where t_k denotes the pulse times and δ is the Dirac impulse function.
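For illustration, the discrete-time behaviour of equations (2) and (3) can be sketched in a few lines of Python/NumPy; the time step, time constant, resistance and threshold values below are illustrative assumptions rather than parameters fixed by the invention.

    import numpy as np

    def lif_neuron(I, dt=1.0, tau=2.0, R=1.0, u_thr=1.0):
        """Discrete-time LIF neuron of equation (2): integrates the input I(t),
        emits a pulse (1) when the membrane potential u reaches u_thr,
        then resets u to the initial value u0 = 0."""
        u = 0.0
        spikes = np.zeros_like(I)
        for t in range(len(I)):
            u += dt / tau * (-u + R * I[t])  # Euler step of tau*du/dt = -u + R*I
            if u >= u_thr:
                spikes[t] = 1.0  # pulse emitted, cf. the pulse train of (3)
                u = 0.0          # reset
        return spikes

    print(lif_neuron(np.full(10, 1.5)))  # constant drive -> regular pulse train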
Hybrid convolution-pulse layer: the forward convolution result is taken as the input of the LIF model, and the pulse sequence emitted by the neurons, computed as in (3), is taken as the output, i.e.

h_l = impul(conv(x_l))   (4)

Layer l is obtained by repeating formulas (1) and (4) with 2 cascaded hybrid pulse-convolution calculation units:

dh_l = impul(conv(h_l))   (5)
Pooling layer: downsampling is performed by a max-pooling operation, i.e., the forward input image is divided into rectangular regions and the maximum element of each sub-region is output:

z_l = pooling(dh_l)   (6)

where z_l denotes the layer-l output, l = 1, …, N1 - 1.
The N1-th level consists of only 2 hybrid convolution-pulse calculation layers, so its output is defined directly by the neuron membrane potential: u(T) is computed according to formula (2) and averaged over the time window T, i.e.

z_L = u(T) / T   (7)

In summary, through the forward computation of the N1-level hybrid neural network, the encoder E finally outputs the multi-task joint feature F.
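For illustration, one encoder level (equations (1), (4)-(6)) can be sketched as follows; PyTorch is an assumed framework (the embodiments only specify Python), and the kernel size, channel counts and LIF constants are illustrative.

    import torch
    import torch.nn as nn

    class HybridPulseConv(nn.Module):
        """Hybrid pulse-convolution layer: convolution cascaded with LIF
        neurons (equations (1), (4)). Tensors have shape (T, B, C, H, W),
        i.e. pulse (or image) frames over T forward time steps."""
        def __init__(self, in_ch, out_ch, tau=2.0, u_thr=1.0):
            super().__init__()
            self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
            self.tau, self.u_thr = tau, u_thr

        def forward(self, x_seq):
            u = torch.zeros_like(self.conv(x_seq[0]))   # membrane potential
            out = []
            for x in x_seq:                             # iterate time steps
                u = u + (self.conv(x) - u) / self.tau   # leaky integration
                spike = (u >= self.u_thr).float()       # pulse emission
                u = u * (1.0 - spike)                   # reset fired neurons
                out.append(spike)
            return torch.stack(out)

    class EncoderLevel(nn.Module):
        """One of the N1 encoder levels: 2 hybrid layers + 1 pooling layer."""
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.h1 = HybridPulseConv(in_ch, out_ch)
            self.h2 = HybridPulseConv(out_ch, out_ch)
            self.pool = nn.MaxPool2d(2)  # halves the feature map, eq. (6)

        def forward(self, x_seq):
            s = self.h2(self.h1(x_seq))
            return torch.stack([self.pool(s_t) for s_t in s])

    # A 256x256 single-channel CT slice repeated over T = 4 time steps:
    x = torch.rand(1, 1, 256, 256).repeat(4, 1, 1, 1, 1)
    print(EncoderLevel(1, 16)(x).shape)  # torch.Size([4, 1, 16, 128, 128])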
Step 104: generating a multi-task decoder built on the hybrid neural network, and calling the decoder to perform a classification operation and multi-task segmentation operations on the joint features.
Step 105: generating a multi-task hybrid neural network by connecting the common encoder and the multi-task decoder in series, and outputting brain-like result images with the multi-task hybrid neural network.
In the above steps, the decoder comprises a classification module for performing the classification operation and segmentation modules for performing the segmentation operations: the classification module is the CLS module, and the segmentation modules are the par-SEG module and the inf-SEG module, which implement different segmentation functions. Further, the classification process of the CLS module converts the joint feature F output by the encoder into a one-dimensional vector and feeds it into a classification network; the output of the CLS module is the pulse-firing frequency of the output-layer neurons, and the classification result is the index of the output-layer neuron with the highest pulse-firing frequency, which represents the CT category. The segmentation process of the par-SEG module is: the joint features delivered by the encoder are re-extracted through hybrid convolution-pulse layers, upsampled through transposed convolution layers to restore the feature-map size layer by layer, and finally the segmentation result of the desired region is output through a 1×1 convolution layer. The inf-SEG module mirrors the par-SEG module but is configured for the strong heterogeneity of the infection regions.
Specifically, the multi-task decoder D is constructed with M parallel modules: a classification module (CLS module for short), a lung parenchyma segmentation subtask module (par-SEG module for short), a lung-infection-region segmentation subtask module (inf-SEG module for short), and so on; the number M of sub-modules can be adjusted to the requirements of the diagnosis task. The input is the multi-task joint feature F of step 103, and each module outputs its own subtask result.
The CLS module is a classification network formed by N2 fully connected layers; each level is composed of impulse neurons following the LIF model of formulas (2) and (3), and the number of neurons is determined by the dimension of the joint feature map F. The classification process of the CT image is as follows: the joint feature map F output by the encoder is converted into a one-dimensional vector and fed into the classification network; the output of the CLS module is the pulse-firing frequency of the output-layer neurons, and the classification result is the index of the output-layer neuron with the highest pulse-firing frequency, which represents the CT category.
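A minimal sketch of this decision rule (the pulse counts are illustrative):

    import torch

    def cls_readout(pulse_counts):
        """CLS decision rule: the predicted CT category is the index of the
        output-layer neuron with the highest pulse-firing frequency."""
        return int(torch.argmax(pulse_counts))

    # Two output neurons (0 = negative, 1 = positive), counts over T steps:
    print(cls_readout(torch.tensor([3.0, 7.0])))  # -> 1 (positive)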
The par-SEG module adopts a network structure symmetric to the encoder E, built from hybrid pulse-convolution calculation layers and transposed convolution layers, and comprises N1 hierarchical levels, each consisting of 2 hybrid pulse-convolution layers and 1 transposed convolution layer. The segmentation process for the lung parenchyma in the CT image is: the joint feature map F delivered by the encoder E is input, features are re-extracted through the hybrid convolution-pulse layers, upsampling is performed through the transposed convolution layers to restore the feature-map size layer by layer, and finally the segmentation result of the desired region is output through a 1×1 convolution layer.
The calculation formula of the hybrid pulse-convolution layer is given in formula (5) above. The calculation formula of the transposed convolution layer is:

outsize_l = stride × (insize_l - 1) + kernel   (8)

where outsize_l and insize_l denote the output and input sizes of the layer respectively, l = 1, …, N1 - 1, and kernel is an odd number taking the value 3 or 5.
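Equation (8) can be checked in one line; the stride, kernel and input-size values below are illustrative.

    def transposed_conv_outsize(insize, stride, kernel):
        """Equation (8): output size of a transposed convolution layer."""
        return stride * (insize - 1) + kernel

    print(transposed_conv_outsize(128, stride=2, kernel=3))  # 257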
The inf-SEG module has a network structure and computation units similar to those of the par-SEG module, and the segmentation process for the CT infection regions matches the lung parenchyma segmentation process. The difference is that the lung parenchyma is a single structure distributed over both sides, whereas the lung infection regions are numerous, located at different positions, and vary in shape and size with strong heterogeneity; a correspondingly more complex loss function therefore effectively extracts the heterogeneous features of infection-region segmentation.
In some embodiments, the method further comprises the following steps of training the encoder and the decoder:
Step 201: generating the loss functions corresponding to the classification module and the multi-task segmentation modules in the multi-task hybrid neural network and an overall loss function, and inputting training samples to the encoder and the decoder;
Step 202: invoking the loss functions and adjusting the parameters related to the loss functions of the classification module and the multi-task segmentation modules so as to reduce the losses of the encoder and the decoder;
Step 203: calculating the losses of the classification module and the multi-task segmentation modules according to the loss-function parameters adjusted on the result images of the training samples;
Step 204: performing pulse-based back-propagation training according to the losses to optimize the multi-task hybrid neural network, the training being continued until a preset stopping condition is reached, so as to obtain the optimal multi-task hybrid neural network.
Specifically, the multi-task hybrid neural network is trained. According to the heterogeneity of the subtasks (classification, segmentation, etc.), an effective training loss function and back-propagation algorithm are designed to ensure that the built neural network learns the key weight parameters.
The scheme trains the multi-task hybrid neural network with a multi-loss function, defined as follows:

For the classification task, the loss function takes the MSE (mean squared error) form, i.e.

L_i = (1/R) · Σ_r (p_r - y_r)²   (9)

where p_r is the output category of the CLS module for the r-th sample, y_r is the true category label, the subscript i denotes the i-th task (here the classification task), and r indexes the R training samples.
For the two segmentation tasks, lung parenchyma and lung-infection-region segmentation, note that the region to be segmented is only a small part of the whole lung: most pixels of a CT image are positive samples, and the small remainder are the negative samples to be segmented, giving an imbalanced sample distribution. The loss function of this scheme therefore adopts the Focal Loss form:

L_i = -α · (1 - p̂)^γ · log(p̂)   (10)

which increases the attention the training network pays to the foreground region with few negative samples and thereby strengthens the extraction of segmentation features; here p̂ denotes the proximity of the output category to the true label, α ∈ (0, 1) is an adjustment factor, γ ≥ 0 is the focusing exponent standard to the Focal Loss form, and log denotes the natural logarithm.
To improve the multi-task training effect, the scheme adopts a regularized weighted multi-loss function composed of the losses of the M subtask modules, defined as:

mL(p, y) = Σ_{i=1}^{M} λ_i · L_i   (11)

where λ_i > 0 denotes the weighting factor of subtask i and M denotes the number of subtasks.
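A sketch of the weighted multi-loss of equation (11); the subtask losses and weights below are placeholders, not values disclosed by the patent.

    def multi_loss(losses, lambdas):
        """Regularized weighted multi-loss of equation (11):
        mL = sum_i lambda_i * L_i over the M subtask modules."""
        assert len(losses) == len(lambdas)
        return sum(lam * L for lam, L in zip(lambdas, losses))

    # Example 1 uses M = 3 subtasks: CLS, par-SEG and inf-SEG
    print(multi_loss([0.12, 0.45, 0.80], [1.0, 1.0, 1.0]))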
Further, the present solution adopts a pulse back-propagation algorithm to learn the optimal weight parameters w by minimizing the multi-loss function (11), namely:

w* = argmin_w mL(p, y)

According to equations (4) and (5), mL(p, y) is a function of the weight parameters w and the bias parameters b; to optimize mL(p, y), the gradients of mL(p, y) with respect to w and b are needed. The scheme keeps the bias parameters b fixed, so only the gradient of mL with respect to w needs to be computed. The built multi-task hybrid neural network has a hierarchical structure, and according to the gradient back-propagation rule, gradients can be propagated backwards from the final output layer through the hidden layers to the input layer. This yields the following chain rule:

∂mL/∂w_l = (∂mL/∂z_l) · (∂z_l/∂net_l) · (∂net_l/∂w_l)   (12)

where ∂mL/∂z_l denotes the derivative of the multi-loss function with respect to the layer output z_l, net_l denotes the input of the impulse neurons of layer l, and so on.
In feed-forward propagation, convolution and pulse emission are integrated into a hybrid neuron model, where the computation of the impulse neurons is determined by the original image and the preceding convolution. The number n of neurons per layer is determined by the forward convolution size, and the input to the impulse neurons differs between layers. For layer 1, the input vector of the impulse neurons is defined as:

net_1(t) = conv(x_0) = K * x_0

For layer l, l = 2, …, N, the input of the impulse neurons is defined as:

net_l(t) = W_{l-1} * x_{l-1}(t)

where W_{l-1} denotes the weights of the previous layer and x_{l-1}(t) denotes the pulse sequence emitted by the neurons of the previous layer, defined as in formula (3).
Because of the discontinuity of the pulse sequences in equations (4) and (5) above, the built multi-task hybrid neural network cannot be trained directly with classical gradient descent, so a pulse-based back-propagation algorithm is employed. The layer-by-layer calculation of the gradient ∂mL/∂z_l is given below.

For the final output layer, l = N, substituting (7), the output z_L = u(T)/T is a continuous function of the membrane potential, so its gradient can be computed by direct differentiation (13).

For a hidden layer l, l = 1, 2, …, N - 1, the pulse output z_l is discontinuous and therefore not differentiable, so a pseudo-derivative method is adopted to estimate the derivatives in equation (12). Specifically, according to the LIF model (2), the activation of a neuron is approximated by a continuous activation function aIF(t) of its input (14); further adding the leak term gives the corrected activation (15). Here s(t) denotes the pulse sequence of equation (3), and aIF(t) accumulates the number of output pulses of the neuron over the total forward-propagation time. Using the activation function aIF(t) in place of the original discontinuous output z_l, the derivative ∂z_l/∂net_l becomes well-defined; reviewing the chain rule (12) and substituting this derivative back in yields the hidden-layer gradients.
Summarizing formulas (14) and (15): analogously to the back-propagated gradients in convolutional neural networks such as VGG, the back-propagated gradients in the multi-task hybrid neural network can train all weight parameters w in all layers. The weight update rule is:

w_l ← w_l - η · ∂mL/∂w_l

where 0 < η < 1 denotes the learning rate and l = 1, 2, …, N.
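The pulse-based back-propagation can be sketched in PyTorch with a custom autograd function; the box-shaped surrogate derivative below is one common choice and an assumption here, not necessarily the exact pseudo-derivative of the patent.

    import torch

    class SpikeFn(torch.autograd.Function):
        """Forward: the discontinuous pulse threshold of equation (4).
        Backward: a surrogate that passes gradient only near u_thr."""
        u_thr = 1.0

        @staticmethod
        def forward(ctx, u):
            ctx.save_for_backward(u)
            return (u >= SpikeFn.u_thr).float()

        @staticmethod
        def backward(ctx, grad_out):
            (u,) = ctx.saved_tensors
            near = (torch.abs(u - SpikeFn.u_thr) < 0.5).float()
            return grad_out * near  # pseudo-derivative of the pulse output

    # With the surrogate in place, one step is ordinary gradient descent,
    # matching the update rule w <- w - eta * dmL/dw (eta = 0.1 illustrative):
    w = torch.randn(4, requires_grad=True)
    loss = (SpikeFn.apply(2.0 * w) - torch.tensor([1.0, 0.0, 1.0, 0.0])).pow(2).mean()
    loss.backward()
    with torch.no_grad():
        w -= 0.1 * w.grad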
In summary, the image processing method of the present invention has the following advantages:
(1) High multi-task diagnosis accuracy: starting from multi-task prior knowledge and image association features, the proposed neural network integrates an encoding-decoding network structure with hybrid convolution-pulse computation units, improves on the existing single-type loss functions by constructing a coupled multi-loss function specific to the subtasks, trains the neural network by means of an improved back-propagation algorithm, and extracts the joint multi-task features of CT image classification and segmentation, thereby improving the accuracy of lung CT multi-task diagnosis.
(2) Low underlying computational power consumption: the proposed neural network introduces LIF impulse neurons in place of the continuous activation functions and multiply-accumulate operations of conventional convolutional neural networks, performing discrete information processing and addition-only computation through pulse sequences (valued 0 or 1), which reduces the computational energy consumption of the built network and the running cost of the CT diagnosis system.
To demonstrate the advantage of the invention in computational energy consumption, the invention also provides a method for measuring the energy consumption of an artificial neural network. For a general convolution layer, the energy measurement formula is:

E_conv = #FP_conv × E_mac

For a general impulse-neuron layer, the energy measurement formula is:

E_impu = #FP_impu × E_add = R_fire × #FP_conv × E_add

For the hybrid convolution-pulse neuron layer provided by the invention, the energy measurement formula is:

E_hybrid = #FP_layer1 × E_mac + (#FP - #FP_layer1) × R_fire × E_add   (16)

where #FP denotes the number of floating-point (FP) operations in the network; the energy consumption of one 32-bit MAC (multiply-accumulate) operation is E_mac = 4.6 pJ and that of one 32-bit ADD (accumulate) operation is E_add = 0.9 pJ, the former being 5.1 times the latter. R_fire denotes the pulse-firing frequency of the neurons, satisfying

R_fire = C[T] / #Neurons

where C[k] denotes the total pulse count over k time steps and #Neurons denotes the number of neurons in the layer; R_fire takes values from 0 to 1, and R_fire = 1 means that every neuron of the layer fires exactly once within the T time steps, in which case the number of FP operations of the impulse-neuron layer equals that of the corresponding convolution layer. Combining the four formulas above gives

E_hybrid ≤ E_conv   (17)

which proves that the built hybrid network has a lower computational power consumption than the corresponding convolutional network.
Example 2
Referring to FIG. 3, based on the same inventive concept, an embodiment of the present invention further provides a CT brain-like diagnosis system comprising a processor, a memory and a display, the memory storing at least one instruction that is loaded and executed by the processor to implement the following steps:
acquiring a CT image;
generating a hybrid pulse-convolution calculation layer, wherein the hybrid pulse-convolution calculation layer is formed by cascading a convolution operation with impulse neurons, and generating a hybrid neural network from the hybrid pulse-convolution calculation layer;
generating a common encoder built on the hybrid neural network, and calling the encoder to encode the CT image to generate a joint feature;
generating a multi-task decoder built on the hybrid neural network, and calling the decoder to perform a classification operation and multi-task segmentation operations on the joint feature;
and generating a multi-task hybrid neural network by connecting the common encoder and the multi-task decoder in series, and outputting a brain-like result image with the multi-task hybrid neural network.
Specifically, the diagnosis system is driven by the multi-task neural network technology, automatically realizing negative/positive classification of lung CT and accurate segmentation of heterogeneous lung lesions without requiring a doctor to read the images, while greatly reducing the computational energy consumption of the diagnosis process; it is therefore called a CT brain-like diagnosis system. It can be built according to the following steps:
Step 301: an application (APP), denoted "CT brain-like diagnosis", is set up on an electronic device having a memory, a processor, a display, a network connection and other components, and the built multi-task hybrid neural network is written in the Python language.
Step 302: a new CT image is acquired through the memory and input to the "CT brain-like diagnosis" APP; the APP is run, and the display shows the output of the multi-task hybrid neural network according to the electronic command, including the CT category, the lung parenchyma segmentation map and the lung infection regions, together with a prediction of the patient's degree of infection.
Note that the electronic device may be a desktop computer, a portable computer, a mobile phone, etc., provided with the pulse-network computing module and the visualization interface.
Since this system corresponds to the CT image processing method of the embodiments of the present invention and solves the problem on a similar principle, its implementation may refer to the implementation of the method embodiment above, and the repetition is omitted.
Example 3
In the following, a preferred embodiment based on lung CT images of novel-coronavirus-infected patients, as shown in FIG. 4, is given to describe the solution of the present invention in detail and to validate it specifically, thereby solving the two key technical problems mentioned in the present invention. It should be understood that the example is given solely for the purpose of illustration and is not intended to limit the present invention.
The implementation of the preferred embodiment comprises the following steps: first, acquiring the electronic computing equipment; second, collecting the CT images; third, constructing the multi-task hybrid neural network; fourth, displaying the CT multi-task diagnosis results.
Step 401: prepare the electronic computer. The hardware of this embodiment comprises 1 portable notebook configured with a GeForce RTX 3080 Ti graphics card, a 17-inch display, 1 power-supply set and 1 keyboard-and-mouse set.
Step 402: prepare the CT data set iData. The CT data set used in this example comes from novel-coronavirus-infected patients and a control group and comprises 9,701 lung CT images of size 256×256, denoted I, drawn from the data set COVID-19 CT scans (https://www.kaggle.com/data/andrewmvd/COVID19-CT-scans) and the data set iCFCT (document [1]). COVID-19 CT scans collects the CT images of 20 patients with confirmed coronavirus infection, providing 1,194 lung CT images labeled positive, each with radiologist-annotated segmentation labels for the lung parenchyma and the infection regions; iCFCT collects the lung CT of 1,521 patients, providing 8,507 CT images labeled negative.
Step 403: following the technical solution above, this embodiment builds the multi-task hybrid neural network formed by cascading the encoder E and the multi-task decoder D. The specific parameters are as follows:
s1) constructing an encoder E having 5 hierarchical structures. In terms of network architecture, each hierarchy consists of 2 hash pulse-convolution computation layers and 1 pooling layer, with hierarchy 5 containing 2 hash pulse-convolution computation layers. In terms of convolution size and selection of impulse neurons, first, considering that an input image I is a 256×256 gray scale map, a layer 1 convolution size is selected as; secondly, the number of the layer 1 impulse neurons is obtained according to the forward convolution size, and an LIF model (2) is adopted as a model of the impulse neurons; for other levels, the convolution size, the specification of the impulse neurons, and so on.
S2) Construct the multi-task decoder D, composed of 3 parallel sub-modules: the CLS module, the par-SEG module and the inf-SEG module.
The CLS module judges the category of the CT image, i.e., positive or negative. Specific parameters: a classifier formed by 2 fully connected layers is adopted, each layer composed of impulse neurons; layer 1 of the CLS module contains as many neurons as the dimension of the joint feature F, and layer 2 contains 2 neurons corresponding to the 2 categories: positive or negative. The classification result is the index of the layer-2 neuron with the highest firing frequency.
The par-SEG module segments the lung parenchyma, i.e., separates the lung parenchyma from the thoracic background by pixel differences. Specific parameters: in network architecture, 5 hierarchical levels are used, each consisting of 2 hybrid convolution-pulse units and 1 transposed convolution layer. For the convolution sizes and neuron specification, layer 1 of the par-SEG module adopts a 256×256 convolution size, the number of layer-1 neurons is 256×256 according to the convolution size, and the transposed convolution scale is selected as 256×256; the parameters of the other layers follow by analogy. Level 5 of the par-SEG module contains only 2 hybrid convolution-pulse units, whose output is a 256×256 pixel matrix representing the lung parenchyma segmentation result.
The inf-SEG module segments the lung infection regions; its network structure and its convolution and impulse-neuron parameter configurations are similar to those of the par-SEG module. The difference is that the 256×256 pixel matrix output by level 5 of the inf-SEG module represents the segmentation results of infection regions of different positions and sizes in the lungs.
S3) Train the multi-task hybrid impulse neural network with the data set iData. The specific flow: first, for each training epoch, 20% of the negative CT images are randomly sampled, giving 1,701 negative samples, which together with the 1,194 positive samples form a data set of 2,895 CT images; this data set is then divided into training, validation and test sets in the ratio 8:1:1.
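This per-epoch sampling and 8:1:1 split can be sketched as follows (the index sets and seed are illustrative):

    import random

    def make_split(neg_ids, pos_ids, neg_frac=0.2, ratios=(8, 1, 1), seed=0):
        """Step S3) sampling: draw 20% of the negative CT images at random,
        keep all positives, then split 8:1:1 into train/val/test."""
        rng = random.Random(seed)
        sampled = rng.sample(list(neg_ids), int(neg_frac * len(neg_ids)))
        sampled += list(pos_ids)
        rng.shuffle(sampled)
        n, s = len(sampled), sum(ratios)
        a, b = n * ratios[0] // s, n * (ratios[0] + ratios[1]) // s
        return sampled[:a], sampled[a:b], sampled[b:]

    train, val, test = make_split(range(8507), range(1194))
    print(len(train), len(val), len(test))  # 2316 289 290 (total 2,895)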
Second, following step S3) of the technical solution, the loss functions of the three subtasks, L1, L2 and L3, are constructed on the training set for each epoch; the multi-loss function used in this embodiment is defined as follows:

mL(p, y) = λ1·L1 + λ2·L2 + λ3·L3

In addition, according to the pulse back-propagation algorithm, the gradient of the multi-loss function with respect to the weights w is computed, and the weight parameters w are optimized by back-propagating the gradient through the multi-task hybrid neural network, thereby optimizing the multi-loss function.
For each epoch, all training samples are propagated feed-forward and backward through the built neural network.
FIG. 5 shows the convergence of the three subtask loss functions and of the multi-loss function: after 5 epochs of training, the loss functions all approach 0, i.e., the weight parameters w approach their optimal values, meaning that the built neural network has learned the multi-task features of CT diagnosis.
Step 404: display the CT multi-task diagnosis results on the display and compute the image-task indexes, which illustrate the advantages of the built multi-task hybrid neural network over the prior art. The specific flow is as follows: samples are randomly selected from the validation and test sets and input to the multi-task hybrid neural network, and the output of each module is obtained by feed-forward computation. For the input test-set CT samples, the outputs of the three subtasks are given in FIGS. 6, 7 and 8.
To illustrate the advantages of the present invention over the prior art, this embodiment further computes various indexes related to image classification and image segmentation, including classification accuracy, specificity, sensitivity, the ROC curve, the AUC area, the Dice coefficient, the IoU measure and the computational energy consumption of the neural network, to evaluate the effect of the built multi-task hybrid neural network.
For the CT classification task, the built neural network achieves 89.9% accuracy, 90.2% specificity and 88.9% sensitivity on the test set, an improvement of at least 12.1% over the existing U-Net. FIG. 9 shows the specific comparison techniques and results. To further evaluate classification performance, FIG. 10 shows the sample-level ROC (receiver operating characteristic) curves of the built network and of the comparison techniques on the test set; the area under the ROC curve (AUC) of the built network is 0.99, larger than that of the comparison methods, and these numerical indexes show that the built network is more robust.
For the lung parenchyma segmentation task, the built neural network obtains a Dice coefficient of 0.915 and an IoU measure of 0.844 on the test set, an improvement of at least 23% over the existing U-Net technology. In the lung-infection-region segmentation task, the built network obtains a Dice coefficient of 0.532 and an IoU of 0.366, an improvement of 15% over the existing U-Net technology. FIG. 11 shows the specific comparison techniques and results; these numerical indexes show that the segmentation effect of the built multi-task hybrid neural network surpasses prior single-task techniques such as the single-task U-Net and the single-task impulse neural network. Combined with the classification indexes given in FIG. 9, this shows that when learning multi-task image features such as classification and segmentation, the built multi-task hybrid neural network can integrate the relatedness of the classification and segmentation tasks and improve the single-task effects of classification and segmentation.
Finally, to illustrate the computational-energy advantage of the built neural network, the energy-consumption ratio E_hybrid/E_conv is computed according to formulas (16) and (17) given under advantage (2) of the technical solution. This embodiment compares the computational energy consumption of the built network with that of the control network using normalization, i.e., setting E_conv = 1; FIG. 12 shows that the computational power consumption of the built neural network is reduced by at least 80%.
In summary, the embodiment of FIG. 4 and the comparative indexes of FIGS. 9, 10, 11 and 12 show that the built neural network can learn multi-task CT image features, particularly key features such as the CT image category and the segmentation of different regions, with better effect than the prior art, while its computational energy consumption is far lower than that of the contrast convolutional networks. The lung CT diagnosis system driven by the multi-task hybrid neural network therefore offers high multi-task accuracy and low underlying computational energy consumption, and promises to assist doctors in treating patients with novel coronavirus pneumonia better and more economically.
Other embodiments can use different types of medical image data, such as breast ultrasound imaging and brain magnetic resonance imaging, applying the built brain-like multi-task hybrid neural network method to the auxiliary diagnosis of diseases such as breast cancer, depression and brain tumors.
Example 4
Based on the same inventive concept, the embodiments of the present invention also provide a computer-readable storage medium having stored therein at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the CT image processing method as described above.
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods of the above embodiments may be implemented by a program that instructs associated hardware, the program may be stored in a computer readable storage medium including Read-Only Memory (ROM), random access Memory (Random Access Memory, RAM), programmable Read-Only Memory (Programmable Read-Only Memory, PROM), erasable programmable Read-Only Memory (Erasable Programmable Read Only Memory, EPROM), one-time programmable Read-Only Memory (OTPROM), electrically erasable programmable Read-Only Memory (EEPROM), compact disc Read-Only Memory (Compact Disc Read-Only Memory, CD-ROM) or other optical disk Memory, magnetic disk Memory, tape Memory, or any other medium that can be used for carrying or storing data that is readable by a computer.
Since the storage medium corresponds to the CT image processing method of the embodiments of the present invention and solves the problem on a similar principle, its implementation may refer to the implementation of the method embodiment, and the repetition is omitted.
Example 5
In some possible implementations, aspects of the method of the embodiments of the present invention may also be implemented in the form of a program product comprising program code which, when the program product is run on a computer device, causes the computer device to carry out the steps of the CT image processing method according to the various exemplary embodiments of the present application described above. Executable computer program code, or "code", for performing the various embodiments may be written in a high-level programming language such as C, C++, C#, Smalltalk, Java, JavaScript, Visual Basic, a structured query language (e.g., Transact-SQL), Perl, or in various other programming languages.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
The above embodiments are only for illustrating the technical concept and features of the present invention, and are intended to enable those skilled in the art to understand the content of the present invention and implement the same, and are not intended to limit the scope of the present invention. All equivalent changes or modifications made in accordance with the essence of the present invention are intended to be included within the scope of the present invention.

Claims (9)

1. A method of CT image processing, comprising the steps of:
acquiring a CT image;
generating a hybrid pulse-convolution calculation layer, wherein the hybrid pulse-convolution calculation layer is formed by cascade connection of convolution operation and pulse neurons, and generating a hybrid neural network according to the hybrid pulse-convolution calculation layer;
generating a common encoder established based on the hybrid neural network, and calling the encoder to encode the CT image to generate a joint feature;
generating a multi-task decoder established based on a hybrid neural network, and calling the decoder to perform classification operation and multi-task segmentation operation on the joint characteristics;
and generating a multi-task hybrid neural network connected in series by the common encoder and the multi-task decoder, and outputting a brain-like result image by using the multi-task hybrid neural network.
2. The CT image processing method of claim 1, further comprising the following steps of training the multi-task hybrid neural network:
generating a loss function and an overall loss function corresponding to a classification module and a multi-task segmentation module in the multi-task hybrid neural network, and inputting training samples to the encoder and the decoder;
invoking the loss function and adjusting parameters related to the loss function of the classifier and the multi-tasking module for reducing the loss of the encoder and the decoder;
according to the relevant parameters of the loss function adjusted by the result image of the training sample, calculating the losses of the classification model and the multi-task segmentation module;
and performing pulse-based back propagation training according to the loss to optimize the multi-task hybrid neural network, wherein the training is optimized until a preset training stopping condition is reached, so as to obtain the optimal multi-task hybrid neural network.
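The claim does not fix how gradients cross the non-differentiable spiking nonlinearity; a common realization of pulse-based back-propagation replaces the derivative of the hard threshold with a surrogate in the backward pass. A minimal sketch, assuming PyTorch and a rectangular surrogate window (both assumptions, not the patented rule):

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; a rectangular surrogate
    derivative in the backward pass so the loss gradient can flow
    through the spiking neurons during training."""
    @staticmethod
    def forward(ctx, v: torch.Tensor, v_threshold: float = 1.0):
        ctx.save_for_backward(v)
        ctx.v_threshold = v_threshold
        return (v >= v_threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Pass the gradient only in a unit-width window around the threshold.
        window = (torch.abs(v - ctx.v_threshold) < 0.5).float()
        return grad_output * window, None  # no gradient w.r.t. the threshold

spike_fn = SurrogateSpike.apply  # drop-in replacement for the hard threshold
```

With this substitution inside the spiking neuron, the whole multi-task hybrid network can be optimized with an ordinary gradient-descent loop until the preset stopping condition (e.g., a maximum epoch count or a loss plateau, neither specified in the claim) is reached.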
3. The CT image processing method of claim 1, wherein the loss functions are configured according to the heterogeneity of the multi-task operations: the loss function of the classification operation employs an MSE loss function; the loss functions of the multi-task segmentation operations employ Focal Loss functions; and the overall loss function employs a regularized weighted multi-loss function built from the loss functions of the individual operations.
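A hedged sketch of the loss configuration recited in claim 3: MSE for the classification operation, Focal Loss per segmentation operation, and a regularized weighted sum as the overall loss. The alpha/gamma values, weighting scheme, and L2 regularizer below are assumptions; the claim only fixes the loss families.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor,
               alpha: float = 0.25, gamma: float = 2.0) -> torch.Tensor:
    """Binary Focal Loss for one segmentation task; alpha and gamma follow
    values common in the literature, not values stated in the patent."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)  # model's probability of the true class
    return (alpha * (1.0 - p_t) ** gamma * bce).mean()

def overall_loss(cls_out, cls_target, seg_outs, seg_targets,
                 weights, params=None, l2: float = 1e-4) -> torch.Tensor:
    """Regularized weighted multi-loss: weights[0] scales the MSE
    classification loss, weights[1:] scale the per-task Focal Losses,
    and an optional L2 term regularizes the parameters."""
    loss = weights[0] * F.mse_loss(cls_out, cls_target)  # cls_target e.g. one-hot
    for w, out, tgt in zip(weights[1:], seg_outs, seg_targets):
        loss = loss + w * focal_loss(out, tgt)
    if params is not None:
        loss = loss + l2 * sum(p.pow(2).sum() for p in params)
    return loss
```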
4. The CT image processing method of claim 1, wherein the hybrid neural network is configured with the hybrid pulse-convolution computation layer formed by cascading a convolution operation with spiking neurons, the spiking neurons acting as the activation function to achieve low-power computation.
5. A CT brain-like diagnostic system comprising a processor, a memory and a display, the memory having stored therein at least one instruction, the instruction being loaded and executed by the processor to implement the steps of:
acquiring a CT image;
generating a hybrid pulse-convolution computation layer, wherein the hybrid pulse-convolution computation layer is formed by cascading a convolution operation with spiking neurons, and generating a hybrid neural network from the hybrid pulse-convolution computation layer;
generating a common encoder built on the hybrid neural network, and invoking the encoder to encode the CT image into a joint feature;
generating a multi-task decoder built on the hybrid neural network, and invoking the decoder to perform a classification operation and multi-task segmentation operations on the joint feature;
and generating a multi-task hybrid neural network by connecting the common encoder and the multi-task decoder in series, and outputting a brain-like result image using the multi-task hybrid neural network.
6. The CT brain-like diagnostic system of claim 5, wherein the steps further comprise training the multi-task hybrid neural network:
generating loss functions corresponding to a classification module and a multi-task segmentation module in the multi-task hybrid neural network, together with an overall loss function, and inputting training samples to the encoder and the decoder;
invoking the loss functions and adjusting the loss-related parameters of the classification module and the multi-task segmentation module to reduce the loss of the encoder and the decoder;
calculating the losses of the classification module and the multi-task segmentation module according to the loss-function parameters adjusted from the result images of the training samples;
and performing pulse-based back-propagation training according to the losses to optimize the multi-task hybrid neural network until a preset training stop condition is reached, thereby obtaining the optimal multi-task hybrid neural network.
7. The CT brain-like diagnostic system of claim 5, wherein the loss functions are configured according to the heterogeneity of the multi-task operations: the loss function of the classification operation employs an MSE loss function; the loss functions of the multi-task segmentation operations employ Focal Loss functions; and the overall loss function employs a regularized weighted multi-loss function built from the loss functions of the individual operations.
8. The CT brain-like diagnostic system of claim 5, wherein the hybrid neural network is configured with the hybrid pulse-convolution computation layer formed by cascading a convolution operation with spiking neurons, the spiking neurons acting as the activation function to achieve low-power computation.
9. A computer readable storage medium having stored therein at least one instruction, at least one program, a code set, or an instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by a processor to implement the CT image processing method of any one of claims 1 to 4.
CN202311626792.9A 2023-11-30 2023-11-30 CT image processing method driven by multitasking hybrid neural network and brain-like diagnosis system Active CN117689625B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311626792.9A CN117689625B (en) 2023-11-30 2023-11-30 CT image processing method driven by multitasking hybrid neural network and brain-like diagnosis system

Publications (2)

Publication Number Publication Date
CN117689625A true CN117689625A (en) 2024-03-12
CN117689625B CN117689625B (en) 2024-07-23

Family

ID=90134456

Country Status (1)

Country Link
CN (1) CN117689625B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3712849A1 (en) * 2019-03-18 2020-09-23 Siemens Healthcare GmbH Automated uncertainty estimation of lesion segmentation
US20210397966A1 (en) * 2020-06-18 2021-12-23 Shanghai United Imaging Intelligence Co., Ltd. Systems and methods for image segmentation
US20230162353A1 (en) * 2021-11-23 2023-05-25 City University Of Hong Kong Multistream fusion encoder for prostate lesion segmentation and classification
CN115409870A (en) * 2022-09-06 2022-11-29 西安电子科技大学 Target tracking method and electronic equipment based on pulse coding learnable SNN
CN115482230A (en) * 2022-09-27 2022-12-16 西北师范大学 Pulmonary tuberculosis assistant decision-making system based on deep convolution pulse neural network
CN116030078A (en) * 2023-03-29 2023-04-28 之江实验室 Attention-combined lung lobe segmentation method and system under multitask learning framework
CN116797609A (en) * 2023-06-12 2023-09-22 西安电子科技大学 Global-local feature association fusion lung CT image segmentation method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIANG SHAN et al.: "Detecting COVID-19 on CT Images with Impulsive-Backpropagation Neural Networks", 2022 34th Chinese Control and Decision Conference (CCDC), 14 February 2023, pages 2797-2803 *
LAI Ce; WEI Xiaoqin: "Simulation of an Image Classification Algorithm Based on Convolutional Spiking Neural Networks" (in Chinese), Information Technology and Informatization, no. 04, 28 April 2020, pages 149-151 *

Also Published As

Publication number Publication date
CN117689625B (en) 2024-07-23

Similar Documents

Publication Publication Date Title
CN111539930B (en) Dynamic ultrasonic breast nodule real-time segmentation and identification method based on deep learning
US20190065897A1 (en) Medical image analysis method, medical image analysis system and storage medium
CN111627019A (en) Liver tumor segmentation method and system based on convolutional neural network
CN107203989A (en) End-to-end chest CT image dividing method based on full convolutional neural networks
CN114419020B (en) Medical image segmentation method, medical image segmentation device, computer equipment and storage medium
Wazir et al. HistoSeg: Quick attention with multi-loss function for multi-structure segmentation in digital histology images
CN112991346B (en) Training method and training system for learning network for medical image analysis
Popescu et al. Retinal blood vessel segmentation using pix2pix gan
CN113256592B (en) Training method, system and device of image feature extraction model
CN112819831B (en) Segmentation model generation method and device based on convolution Lstm and multi-model fusion
CN112614133A (en) Three-dimensional pulmonary nodule detection model training method and device without anchor point frame
CN111524109A (en) Head medical image scoring method and device, electronic equipment and storage medium
CN113744275A (en) Feature transformation-based three-dimensional CBCT tooth image segmentation method
CN116091412A (en) Method for segmenting tumor from PET/CT image
Sirjani et al. Automatic cardiac evaluations using a deep video object segmentation network
CN113850796A (en) Lung disease identification method and device based on CT data, medium and electronic equipment
CN117689625B (en) CT image processing method driven by multitasking hybrid neural network and brain-like diagnosis system
US12033365B2 (en) Image processing method and apparatus and storage medium
CN116309346A (en) Medical image detection method, device, equipment, storage medium and program product
CN115546089A (en) Medical image segmentation method, pathological image processing method, device and equipment
Pavithra et al. Systemic Lupus Erythematosus Detection using Deep Learning with Auxiliary Parameters
Demin et al. Semantic segmentation of lung radiographs using U-net type neural network
US20240104719A1 (en) Multi-task learning framework for fully automated assessment of coronary arteries in angiography images
EP4343782A1 (en) A multi-task learning framework for fully automated assessment of coronary arteries in angiography images
Sharma et al. Pneumothorax Segmentation from Chest X-Rays Using U-Net/U-Net++ Architectures

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant