CN116258730B - Semi-supervised medical image segmentation method based on consistency loss function - Google Patents

Semi-supervised medical image segmentation method based on consistency loss function

Info

Publication number: CN116258730B (application number CN202310545669.8A)
Authority: CN (China)
Original language: Chinese (zh)
Other versions: CN116258730A
Inventors: 贺阿龙 (He Along), 李涛 (Li Tao)
Assignees: Haihe Laboratory of Advanced Computing and Key Software (Xinchuang); Nankai University
Application filed by Haihe Laboratory of Advanced Computing and Key Software (Xinchuang) and Nankai University
Priority to CN202310545669.8A
Publication of CN116258730A (application publication)
Application granted; publication of CN116258730B (granted patent)
Legal status: Active


Classifications

    • G06T7/10 — Image analysis; segmentation; edge detection
    • G06N3/02, G06N3/04 — Neural networks; architecture, e.g. interconnection topology
    • G06N3/08, G06N3/084 — Learning methods; backpropagation, e.g. using gradient descent
    • G06T7/0012 — Biomedical image inspection
    • G06T2207/20052 — Transform-domain processing; discrete cosine transform [DCT]
    • G06T2207/20081 — Training; learning
    • G06T2207/20084 — Artificial neural networks [ANN]
    • Y02T10/40 — Engine management systems


Abstract

The invention provides a semi-supervised medical image segmentation method based on a consistency loss function, belonging to the technical field of neural networks. A segmentation network is trained with consistency constraints based on the frequency domain and on multi-granularity region similarity, so that medical images are segmented efficiently using a limited number of annotated samples together with a large number of unannotated samples. The frequency-domain and region-level multi-granularity consistency constraints provide supervision signals for the unannotated data, so the model can be trained on annotated and unannotated data simultaneously: the frequency-domain consistency transforms images into the frequency domain with the discrete cosine transform, while the multi-scale region consistency exploits region-level consistency information and provides the model with rich regional semantic information. The method reduces the dependence of fully supervised deep-learning segmentation models on annotated data, cutting the annotation cost by 90 percent, and lets the model exploit a large amount of unannotated data under the guidance of limited annotated samples.

Description

Semi-supervised medical image segmentation method based on consistency loss function
Technical Field
The invention belongs to the technical field of neural networks, and particularly relates to a semi-supervised medical image segmentation method based on a consistency loss function.
Background
Deep convolutional neural networks, as deep learning models, have achieved state-of-the-art performance on many computer vision tasks such as image classification, object detection and object segmentation. In many practical applications, however, collecting enough annotated data is not always feasible, especially for pixel-level tasks. The fully supervised setting therefore hinders, to some extent, the deployment of deep models in many clinical applications. In clinical practice there is a large number of unlabeled medical images; if these images can be used effectively, the dependence of deep models on large-scale annotated data can be relieved. Semi-supervised segmentation algorithms have been proposed to address these problems, and such studies have shown good performance on different tasks using both labeled and unlabeled data. The Chinese patent application with publication number CN114332135A discloses a semi-supervised medical image segmentation method and device based on dual-model interactive learning, introducing cross-entropy and Dice supervision constraints to learn knowledge from labeled data effectively. As another example, the Chinese patent with application publication number CN115359029A discloses a semi-supervised medical image segmentation method based on a heterogeneous cross pseudo-supervised network, in which a Unet and a Swin-Unet are combined in an HCPS network model for cross pseudo-supervised learning, improving the training efficiency and segmentation quality of the network. Two issues still deserve consideration. First, current approaches focus mainly on the RGB domain but ignore the frequency domain. Identifying potential lesions from the perception of RGB regions alone is very challenging, so frequency-domain information can offer the model an additional perspective for exploring the rich information hidden in frequency space.
Secondly, most conventional methods attend to consistency at the pixel level and lack semantic consistency at the region level; methods that focus only on pixel-level features therefore ultimately yield suboptimal results. In view of the above problems, no effective solution has yet been proposed.
Disclosure of Invention
Aiming at the above technical problems in the prior art, the invention provides a semi-supervised medical image segmentation method based on a consistency loss function, which reduces the dependence of a fully supervised deep-learning segmentation model on annotated data, thereby cutting the annotation cost by 90 percent and allowing the model to exploit a large amount of unannotated data under the guidance of limited annotated samples.
The technical scheme adopted by the invention is as follows: a semi-supervised medical image segmentation method based on a consistency loss function, comprising the steps of:
step 1: preprocessing a medical image data set, and dividing the medical image data set into a training set and a testing set; the training set comprises marked images and unmarked images; data enhancement is carried out on the training set;
step 2: initializing weights of a student network and a teacher network; the student network and the teacher network adopt the same network structure, each comprising a UNet branch, a frequency domain branch and a multi-granularity region similarity branch; the UNet branch consists of an Encoder module and a Decoder module, the frequency domain branch is an FEM module, and the multi-granularity region similarity branch is an MRSM module; the Encoder module is connected to the Decoder module and the FEM module respectively, and the Decoder module is connected to the MRSM module;
step 3: randomly selecting marked images and unmarked images from the training set, inputting the marked images and unmarked images into a student network and a teacher network, and propagating forward;
step 4: calculating the forward-propagation losses; the loss function L is as follows:

L = L_sup + λ(L_f + L_m + L_p)

wherein L_sup is the supervision loss, L_f is the frequency-domain consistency loss, L_m is the multi-granularity region similarity consistency loss, L_p is the prediction-result loss, and λ is a coefficient;
step 5: calculating respective gradient information for each sample in the marked image and the unmarked image selected in the step 3;
step 6: gradient back propagation of the loss layer, updating weight of the student network; updating the weight of the teacher network according to the weight of the student network;
step 7: if the student network is not converged or the maximum iteration number is not reached, returning to the step 3; otherwise, the student network training is finished;
step 8: and dividing the unmarked images on the test set by using the student network, and calculating the division index according to the division result.
Further, in step 1, data enhancement is performed on the training set by random cropping and random rotation; the image size in the training set is 512×512 pixels.
Further, in step 1, in the training set, the number of marked images is 10%, and the number of unmarked images is 90%.
Furthermore, the FEM module transforms the input RGB features into the frequency domain through the discrete cosine transform (DCT), then separates the high-frequency and low-frequency components and adds positional encodings, models the high-frequency and low-frequency information in the frequency domain separately with a Transformer structure, concatenates the results, and finally outputs them after a further Transformer structure.
Further, the MRSM module applies pooling operations of different granularities to an input feature of size C×H×W, obtaining region features of size C×h_g×w_g for each granularity g; each region feature is unfolded into a two-dimensional matrix, and similarity calculation is carried out, yielding a similarity matrix of size (h_g·w_g)×(h_g·w_g).
Further, in step 3, the number of marked images and unmarked images randomly selected from the training set is equal.
Further, the marked images are input into the student network to calculate the forward-propagation loss L_sup; the unmarked images are input into the student network and the teacher network simultaneously to calculate the forward-propagation losses L_f, L_m and L_p.
Further, L_sup is calculated by a cross-entropy loss, and L_f, L_m and L_p by a squared-difference loss.
Further, L_sup = −y log p, where p is the prediction result of the student network UNet branch and y is the labeling result of the student network UNet branch;
L_f = Σ_{x_u∈D_u} ‖f_t(x_u) − f_s(x_u)‖², where x_u is an unmarked image, D_u is the unmarked data set, f_t(x_u) is the prediction result of the teacher network frequency-domain branch, and f_s(x_u) is the prediction result of the student network frequency-domain branch;
L_m = Σ_{g=1}^{G} ‖S_t^g − S_s^g‖², where S_t^g and S_s^g are the output results of the multi-granularity region similarity branches of the teacher network and the student network respectively, and G is the total number of different granularities;
L_p = ‖p_t − p_s‖², where p_t and p_s are the prediction results of the UNet branches of the teacher network and the student network, respectively.
Further, in step 6,

θ_t^T = α·θ_{t−1}^T + (1 − α)·θ_t^S

wherein θ_t^T is the weight of the teacher network at time t, θ_{t−1}^T is the weight of the teacher network at time t−1, α is a coefficient, and θ_t^S is the weight of the student network at time t.
Compared with the prior art, the invention has the following beneficial effects. The invention trains a segmentation network with consistency constraints based on the frequency domain and multi-granularity similarity, so that medical images are segmented efficiently using limited annotated samples and a large number of unannotated samples. The frequency-domain and region-level multi-granularity consistency constraints provide supervision signals for the unannotated data, so the model can be trained on annotated and unannotated data simultaneously: the frequency-domain consistency transforms images into the frequency domain with the discrete cosine transform, while the multi-scale region consistency exploits region consistency information and provides the model with rich regional semantic information. The method reduces the dependence of fully supervised deep-learning models on large amounts of annotated data, segmenting medical images with 10% annotated samples and a large number of unannotated samples; it thereby lowers the annotation requirement and cost while achieving segmentation results similar to those of a fully supervised model, making it suitable for medical image segmentation tasks where annotated data are scarce. It relieves the fully supervised segmentation algorithm's demand for annotated data, reducing the annotated-data requirement by 90 percent, and improves the deep-learning model's utilization of both annotated and unannotated data.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention;
FIG. 2 is a schematic diagram of a network according to an embodiment of the present invention;
fig. 3 is a network schematic diagram of a FEM module according to an embodiment of the present invention;
fig. 4 is a network schematic diagram of an MRSM module according to an embodiment of the present invention;
fig. 5 is a graph of segmentation results according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the drawings and the specific embodiments, so that those skilled in the art can better understand the technical solutions of the present invention.
The embodiment of the invention provides a semi-supervised medical image segmentation method based on a consistency loss function, which is shown in fig. 1 and comprises the following steps:
Step 1: preprocessing a medical image data set, and dividing it into a training set and a testing set; the training set comprises marked images and unmarked images, the unmarked images serving as unlabeled samples; the marked images account for 10% and the unmarked images for 90%; the image size is 512×512 pixels.
Data enhancement is performed on the training set by random cropping and random rotation.
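As an illustrative sketch (not part of the patented embodiment itself), the random-crop-and-rotate enhancement can be written as follows; the crop size of 448 and the restriction to 90-degree rotations are assumptions, since the embodiment only states "random cutting and random rotation":

```python
import numpy as np

def augment(img, rng, crop=448):
    """Random crop followed by a random 90-degree rotation.

    `crop` and the 90-degree rotation set are illustrative choices; the
    embodiment does not specify either.
    """
    h, w = img.shape[:2]
    top = int(rng.integers(0, h - crop + 1))
    left = int(rng.integers(0, w - crop + 1))
    patch = img[top:top + crop, left:left + crop]
    return np.rot90(patch, k=int(rng.integers(0, 4)))
```

Applied to a 512×512 training image this yields a 448×448 randomly placed, randomly rotated patch.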
The hyper-parameters of the student network and the teacher network are set.
Step 2: the weights of the student network and the teacher network are initialized.
As shown in FIG. 2, the student network and the teacher network adopt the same network structure, and each of the student network and the teacher network comprises a UNet branch, a frequency domain branch and a multi-granularity area similarity branch, wherein the UNet branch consists of an Encoder module and a Decoder module, the frequency domain branch is an FEM module, the multi-granularity area similarity branch is an MRSM module, the Encoder module is respectively connected with the Decoder module and the FEM module, and the Decoder module is connected with the MRSM module. The parameters of the student network and the teacher network are different.
As shown in fig. 3, the FEM module transforms the input RGB features into the frequency domain through the discrete cosine transform (DCT), then separates the high-frequency and low-frequency components and adds positional encodings, models the high-frequency and low-frequency information in the frequency domain separately with a Transformer structure, concatenates the results, and finally outputs them after a further Transformer structure.
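The frequency-domain step of the FEM module can be sketched as follows. This is a minimal single-channel illustration; the `cutoff` used to separate low from high frequencies is an assumed parameter, since the embodiment does not specify how the band boundary is chosen:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] *= 1.0 / np.sqrt(n)
    m[1:] *= np.sqrt(2.0 / n)
    return m

def split_frequency_bands(feat, cutoff):
    """Transform an H x W feature map to the frequency domain with a 2-D DCT
    and split the coefficients into low- and high-frequency parts."""
    h, w = feat.shape
    dh, dw = dct_matrix(h), dct_matrix(w)
    coeffs = dh @ feat @ dw.T                        # 2-D DCT-II
    mask = np.add.outer(np.arange(h), np.arange(w)) < cutoff
    low = np.where(mask, coeffs, 0.0)                # low-frequency band
    high = np.where(mask, 0.0, coeffs)               # high-frequency band
    return low, high
```

Because the basis is orthonormal, `dh.T @ (low + high) @ dw` recovers the input exactly; in the FEM module the two bands would then be processed by the separate Transformer branches.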
As shown in fig. 4, the MRSM module applies pooling operations of different granularities to an input feature of size C×H×W, obtaining region features of size C×h_g×w_g for each granularity g; each region feature is unfolded into a two-dimensional matrix, and similarity calculation is carried out, yielding a similarity matrix of size (h_g·w_g)×(h_g·w_g).
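The per-granularity computation of the MRSM module can be sketched as follows; this is an illustrative reimplementation, and cosine similarity is an assumed choice of similarity measure since the embodiment does not name one:

```python
import numpy as np

def region_similarity(feat, grain):
    """Average-pool a C x H x W feature map into grain x grain regions,
    unfold to a 2-D matrix, and return the (grain*grain) x (grain*grain)
    region-to-region cosine similarity matrix."""
    c, h, w = feat.shape
    bh, bw = h // grain, w // grain
    # non-overlapping average pooling to C x grain x grain region features
    pooled = feat[:, :grain * bh, :grain * bw] \
        .reshape(c, grain, bh, grain, bw).mean(axis=(2, 4))
    flat = pooled.reshape(c, grain * grain)          # unfold to 2-D matrix
    flat = flat / (np.linalg.norm(flat, axis=0, keepdims=True) + 1e-8)
    return flat.T @ flat                             # similarity matrix
```

Calling this for several values of `grain` (e.g. 2, 4, 8) yields the G similarity matrices whose consistency between teacher and student is enforced by the loss.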
Step 3: randomly selecting equal numbers of marked images and unmarked images from the training set, inputting them into the student network and the teacher network, and propagating forward; the marked images are input only into the student network, while the unmarked images are input into the student network and the teacher network simultaneously.
Step 4: calculating the forward-propagation losses; the loss function L is as follows:

L = L_sup + λ(L_f + L_m + L_p)

wherein L_sup is the supervision loss, L_f is the frequency-domain consistency loss, L_m is the multi-granularity region similarity consistency loss, L_p is the prediction-result loss, and λ is a coefficient.
the marked image is input into a student network to calculate the loss of forward propagation;/>Calculated by cross entropy loss.
= -y log p, where p is the prediction result of the student network UNet branch and y is the labeling result of the student network UNet branch.
The unlabeled image is input into the student network and the teacher network simultaneously, and the forward propagation loss is calculated、/> and />。/> and />By a square difference loss.
, wherein ,/>Is a non-annotated image, is->Is label-free data, < >>Is the prediction result of the teacher network frequency domain branch, +.>Is the prediction result of the frequency domain branch of the student network;
, wherein ,/>The output results of the multi-granularity area similarity branches of the teacher network and the student network are respectively, and G is the total number of different granularities;
, wherein ,/> and />The predicted outcome of UNet branches of the teacher network and the student network, respectively.
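The loss terms of step 4 can be combined as in the following sketch. The single coefficient λ weighting all three consistency terms is an assumption consistent with the formula above, and plain arrays stand in for the network outputs:

```python
import numpy as np

def total_loss(p_stu, y, f_stu, f_tea, sims_stu, sims_tea, u_stu, u_tea, lam):
    """L = L_sup + lam * (L_f + L_m + L_p): cross entropy on marked data
    plus squared-difference consistency terms on unmarked data."""
    eps = 1e-8
    l_sup = -np.mean(y * np.log(p_stu + eps))        # supervision loss
    l_f = np.mean((f_tea - f_stu) ** 2)              # frequency-domain consistency
    l_m = np.mean([np.mean((st - ss) ** 2)           # multi-granularity similarity
                   for st, ss in zip(sims_tea, sims_stu)])
    l_p = np.mean((u_tea - u_stu) ** 2)              # UNet prediction consistency
    return l_sup + lam * (l_f + l_m + l_p)
```

When teacher and student agree exactly and the supervised prediction matches the label, every term vanishes and the total loss approaches zero.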
Step 5: respective gradient information is calculated for each sample among the marked images and unmarked images selected in step 3.
Step 6: gradient back propagation of the loss layer updates the weights of the student network.
The weight of the teacher network is updated according to the weight of the student network by an exponential moving average (EMA):

θ_t^T = α·θ_{t−1}^T + (1 − α)·θ_t^S

wherein θ_t^T is the weight of the teacher network at time t, θ_{t−1}^T is the weight of the teacher network at time t−1, α is a coefficient, typically 0.99, and θ_t^S is the weight of the student network at time t.
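The EMA update of step 6 can be sketched in a few lines; plain lists of scalars stand in for the network parameter tensors:

```python
def ema_update(teacher_w, student_w, alpha=0.99):
    """theta_t = alpha * theta_{t-1} + (1 - alpha) * theta_stu, applied
    parameter-wise; alpha = 0.99 as in the embodiment."""
    return [alpha * tw + (1 - alpha) * sw
            for tw, sw in zip(teacher_w, student_w)]
```

The teacher network receives no gradients of its own; its weights are only this moving average of the student's weights.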
Step 7: if the student network is not converged or the maximum iteration number is not reached, returning to the step 3; otherwise, the student network training is finished.
Step 8: after the student network training is finished, the unmarked images on the test set are segmented with the student network, and segmentation indices are calculated from the segmentation results. The results are shown in fig. 5, where fig. 5 (a) is the expert annotation, fig. 5 (b) is the original image, and fig. 5 (c) is the segmentation result of the student network.
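The embodiment does not name the segmentation index computed in step 8; as an illustrative choice, the commonly used Dice coefficient can be computed as:

```python
import numpy as np

def dice_score(pred, gt, eps=1e-8):
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
```

Here `pred` is the binarized student-network segmentation and `gt` the expert annotation, both of the same shape.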
With 10% marked data, the image segmentation results of the method and of the prior art are shown in Table 1; using only 10% of the data, the method shows a clear advantage in segmentation performance over the prior art. The method trains the semi-supervised segmentation network with frequency-domain and multi-scale region consistency loss functions, so that with only 10% marked data the network achieves results similar to those obtained with 100% marked data, improving the network's utilization of unmarked data and saving annotation cost.
Table 1: comparison of results of semi-supervised segmentation methods
The present invention has been described in detail by way of examples, but the description is merely an example of the invention and should not be construed as limiting its scope. The scope of the invention is defined by the claims. Similar technical solutions designed according to, or inspired by, the technical solution of the invention to achieve the same technical effects, as well as equivalent changes and improvements within its scope of application, remain within the protection scope of this patent.

Claims (5)

1. A semi-supervised medical image segmentation method based on a consistency loss function, characterized by comprising the following steps:
step 1: preprocessing a medical image data set, and dividing the medical image data set into a training set and a testing set; the training set comprises marked images and unmarked images; data enhancement is carried out on the training set;
step 2: initializing weights of a student network and a teacher network; the student network and the teacher network adopt the same network structure, each comprising a UNet branch, a frequency domain branch and a multi-granularity region similarity branch; the UNet branch consists of an Encoder module and a Decoder module, the frequency domain branch is an FEM module, and the multi-granularity region similarity branch is an MRSM module; the Encoder module is connected to the Decoder module and the FEM module respectively, and the Decoder module is connected to the MRSM module;
the FEM module transforms the input RGB features into the frequency domain through the discrete cosine transform (DCT), then separates the high-frequency and low-frequency components and adds positional encodings, models the high-frequency and low-frequency information in the frequency domain separately with a Transformer structure, concatenates the results, and outputs them after a further Transformer structure;
the MRSM module outputs the input size ofIs subjected to pooling operations>Obtaining region features of different granularities, the size of which is +.>Respectively go through->After the unfolding operation is changed into a two-dimensional matrix, similarity calculation is carried out, and the obtained similarity matrix with the size of +.>
Step 3: randomly selecting marked images and unmarked images from the training set, inputting the marked images and unmarked images into a student network and a teacher network, and propagating forward;
step 4: calculating the forward-propagation losses; the loss function L is as follows:

L = L_sup + λ(L_f + L_m + L_p)

wherein L_sup is the supervision loss, L_f is the frequency-domain consistency loss, L_m is the multi-granularity region similarity consistency loss, L_p is the prediction-result loss, and λ is a coefficient;
the marked images are input into the student network to calculate the forward-propagation loss L_sup; the unmarked images are input into the student network and the teacher network simultaneously to calculate the forward-propagation losses L_f, L_m and L_p;
L_sup is calculated by a cross-entropy loss, and L_f, L_m and L_p by a squared-difference loss;
L_sup = −y log p, where p is the prediction result of the student network UNet branch and y is the labeling result of the student network UNet branch;
L_f = Σ_{x_u∈D_u} ‖f_t(x_u) − f_s(x_u)‖², where x_u is an unmarked image, D_u is the unmarked data set, f_t(x_u) is the prediction result of the teacher network frequency-domain branch, and f_s(x_u) is the prediction result of the student network frequency-domain branch;
L_m = Σ_{g=1}^{G} ‖S_t^g − S_s^g‖², where S_t^g and S_s^g are the output results of the multi-granularity region similarity branches of the teacher network and the student network respectively, and G is the total number of different granularities;
L_p = ‖p_t − p_s‖², where p_t and p_s are the prediction results of the UNet branches of the teacher network and the student network, respectively;
step 5: calculating respective gradient information for each sample in the marked image and the unmarked image selected in the step 3;
step 6: gradient back propagation of the loss layer, updating weight of the student network; updating the weight of the teacher network according to the weight of the student network;
step 7: if the student network is not converged or the maximum iteration number is not reached, returning to the step 3; otherwise, the student network training is finished;
step 8: and dividing the unmarked images on the test set by using the student network, and calculating the division index according to the division result.
2. The semi-supervised medical image segmentation method based on a consistency loss function as set forth in claim 1, wherein in step 1, data enhancement is performed on the training set by random cropping and random rotation; the image size in the training set is 512×512 pixels.
3. The method for segmenting a semi-supervised medical image based on a consistency loss function as recited in claim 1, wherein in step 1, the number of labeled images in the training set is 10% and the number of unlabeled images is 90%.
4. The method of claim 1, wherein in step 3, the number of labeled images and unlabeled images randomly selected from the training set is equal.
5. The semi-supervised medical image segmentation method based on a consistency loss function as recited in claim 1, wherein in step 6,

θ_t^T = α·θ_{t−1}^T + (1 − α)·θ_t^S

wherein θ_t^T is the weight of the teacher network at time t, θ_{t−1}^T is the weight of the teacher network at time t−1, α is a coefficient, and θ_t^S is the weight of the student network at time t.
CN202310545669.8A 2023-05-16 2023-05-16 Semi-supervised medical image segmentation method based on consistency loss function Active CN116258730B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310545669.8A CN116258730B (en) 2023-05-16 2023-05-16 Semi-supervised medical image segmentation method based on consistency loss function

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310545669.8A CN116258730B (en) 2023-05-16 2023-05-16 Semi-supervised medical image segmentation method based on consistency loss function

Publications (2)

Publication Number Publication Date
CN116258730A (en) 2023-06-13
CN116258730B (en) 2023-08-11

Family

ID=86682924

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310545669.8A Active CN116258730B (en) 2023-05-16 2023-05-16 Semi-supervised medical image segmentation method based on consistency loss function

Country Status (1)

Country Link
CN (1) CN116258730B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117253044B (en) * 2023-10-16 2024-05-24 安徽农业大学 Farmland remote sensing image segmentation method based on semi-supervised interactive learning
CN117390454A (en) * 2023-11-16 2024-01-12 整数智能信息技术(杭州)有限责任公司 Data labeling method and system based on multi-domain self-adaptive data closed loop

Citations (5)

Publication number Priority date Publication date Assignee Title
CN114187308A (en) * 2021-12-16 2022-03-15 中国人民解放军陆军工程大学 HRNet self-distillation target segmentation method based on multi-scale pooling pyramid
CN114332554A (en) * 2021-11-10 2022-04-12 腾讯科技(深圳)有限公司 Training method of image segmentation model, image segmentation method, device and equipment
CN114757273A (en) * 2022-04-07 2022-07-15 南京工业大学 Electroencephalogram signal classification method based on collaborative contrast regularization average teacher model
CN115331009A (en) * 2022-08-17 2022-11-11 西安理工大学 Medical image segmentation method based on multitask MeanTeacher
CN115470863A (en) * 2022-09-30 2022-12-13 南京工业大学 Domain generalized electroencephalogram signal classification method based on double supervision

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN111179961B (en) * 2020-01-02 2022-10-25 腾讯科技(深圳)有限公司 Audio signal processing method and device, electronic equipment and storage medium

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
CN114332554A (en) * 2021-11-10 2022-04-12 腾讯科技(深圳)有限公司 Training method of image segmentation model, image segmentation method, device and equipment
CN114187308A (en) * 2021-12-16 2022-03-15 中国人民解放军陆军工程大学 HRNet self-distillation target segmentation method based on multi-scale pooling pyramid
CN114757273A (en) * 2022-04-07 2022-07-15 南京工业大学 Electroencephalogram signal classification method based on collaborative contrast regularization average teacher model
CN115331009A (en) * 2022-08-17 2022-11-11 西安理工大学 Medical image segmentation method based on multitask MeanTeacher
CN115470863A (en) * 2022-09-30 2022-12-13 南京工业大学 Domain generalized electroencephalogram signal classification method based on double supervision

Non-Patent Citations (1)

Title
Asymmetric cross mean teacher: a semi-supervised semantic segmentation algorithm and its application to substation meter reading recognition; Teng Guolong et al.; Proceedings of the CSEE; Vol. 43, No. 8, pp. 2979-2989 *

Also Published As

Publication number Publication date
CN116258730A (en) 2023-06-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant