CN112472136B - Cooperative analysis method based on twin neural network

Info

Publication number: CN112472136B
Application number: CN202011448450.9A
Authority: CN (China)
Prior art keywords: network module, twin, layer, metric learning, coding
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN112472136A
Inventors: 陈芳, 叶浩然, 谢彦廷
Current Assignee: Nanjing University of Aeronautics and Astronautics
Original Assignee: Nanjing University of Aeronautics and Astronautics
Application filed by Nanjing University of Aeronautics and Astronautics
Priority to CN202011448450.9A
Publication of CN112472136A
Application granted
Publication of CN112472136B

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/08 Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A61B 8/0833 Detecting organic movements or changes involving detecting or locating foreign bodies or organic structures
    • A61B 8/085 Detecting organic movements or changes involving detecting or locating foreign bodies or organic structures for locating body or organic structures, e.g. tumours, calculi, blood vessels, nodules
    • A61B 8/52 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/5215 Devices using data or image processing involving processing of medical diagnostic data
    • A61B 8/5223 Devices using data or image processing involving processing of medical diagnostic data for extracting a diagnostic or physiological parameter
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks


Abstract

The invention discloses a cooperative analysis method based on a twin neural network, comprising a twin encoding and decoding network module, a twin metric learning network module and a decision network module. A pair of pictures is input to the twin encoding and decoding network module, which extracts the features of the picture pair and feeds them to the twin metric learning network module. After vectorizing the features of the picture pair, the twin metric learning network module calculates the distance between the two vectors and passes the result to the decision network module, which judges whether the picture pair belongs to the same class and returns the result to the twin encoding and decoding network module. Tests show that the method analyzes ultrasonic sequence images more effectively and has broad application prospects.

Description

Cooperative analysis method based on twin neural network
Technical Field
The invention belongs to the technical field of ultrasonic sequence image analysis, and particularly relates to a cooperative analysis method based on a twin neural network.
Background
The ultrasonic sequence image plays an important role in clinical medical diagnosis; it has greatly changed the mode of clinical diagnosis and promoted the development of clinical medicine. As shown in FIG. 1, ultrasound sequence images are acquired either by scanning the same target from multiple free directions or by continuous scanning. Sequence images are suitable not only for examining congenital heart disease, thrombosis, internal tumors and other conditions, but also for diagnosing visceral calculi simply, conveniently and accurately. They can likewise reveal space-occupying lesions or trauma of the pancreas, kidney, spleen, bladder, adrenal glands and other organs. Ultrasonic sequence image analysis technology is therefore especially important for the future development of clinical medicine.
Since ultrasound image data sets are acquired continuously, or from different angles of the same object, the images are correlated. Most existing image analysis methods, however, rely on the information of a single image and ignore the correlation between pictures. A collaborative analysis method is therefore proposed: collaborative analysis is applied to ultrasound images, and the correlated information between ultrasound sequence images is fully exploited to analyze the images better.
Disclosure of Invention
The invention provides a cooperative analysis method based on a twin neural network, which aims to solve the problems in the prior art.
In order to achieve the purpose, the invention adopts the following technical scheme:
a cooperative analysis method based on a twin neural network comprises a twin coding and decoding network module, a twin metric learning network module and a decision network module, wherein a group of picture pairs (a pair of ultrasonic sequence images) are input to the twin coding and decoding network module, the twin coding and decoding network module extracts the characteristics of the picture pairs and inputs the characteristics of the picture pairs into the twin metric learning network module, after the twin metric learning network module vectorizes the characteristics of the picture pairs, the distance between two vectors is calculated, the result is input into the decision network module, and the decision network module judges whether the picture pairs are in the same class or not and transmits the result to the twin coding and decoding network module.
Further, the twin encoding and decoding network module comprises a five-layer encoder and a five-layer decoder. The encoder is a feature extraction convolutional neural network based on VGG16; each layer comprises 2 to 3 convolutional layers and a max-pooling layer with a 2x2 kernel, and the encoder converts a 224x224x3 input picture (length 224, width 224, depth 3) into a 7x7x512 semantic feature map (length 7, width 7, depth 512), which serves as the input of the decoder. The decoder comprises five layers, each containing 2 to 3 transposed convolution layers and 1 nearest-neighbor interpolation layer. Nearest-neighbor interpolation enlarges the feature map but also increases the error of picture feature analysis, so a transposed convolution layer is added after each interpolation to reduce this error. A ReLU operation is performed once at the end of the transposed convolution layers inside each layer except the 5th; the 5th layer converts the feature map into a co-segmentation prediction result M through a Sigmoid function.
Further, the twin metric learning network module comprises two fully connected layers of 128 and 64 dimensions respectively, the first followed by the nonlinear activation function ReLU. The module first applies global average pooling to the pair of feature maps output by the transposed convolution layer cT9 of the twin encoding and decoding network module, obtaining a pair of vectors fu1 and fu2; fu1 and fu2 are then fed through the fully connected layers, and the resulting pair of vectors fs1, fs2 is output to the decision network module.
Further, when the decision network module judges that the picture pair is of the same class, the decision result is 1; when it judges that the pair is not of the same class, the decision result is 0.
Furthermore, the decision network module splices the two vectors output by the twin metric learning network module into a 128-dimensional input vector, which passes through two fully connected layers in sequence: the first is 32-dimensional and followed by the nonlinear activation function ReLU, and the second is 1-dimensional and followed by a Sigmoid function that converts the prediction result into a probability between 0 and 1, from which it is inferred whether the pair of pictures is of the same class; a probability value close to 1 indicates the same class, otherwise not.
Further, in performing the segmentation task, the loss function used is:

L_final = w1*L1 + w2*L2 + w3*L3

wherein: L_final is the total loss of the model; w1, w2 and w3 are weights; L1 is the binary cross entropy, the loss function for training the twin encoding and decoding network; L2 is the triplet loss, the loss function for training the twin metric learning network:

L2 = d(fu1, fu2)              if y = 1
L2 = max(0, a - d(fu1, fu2))  if y = -1

wherein: a is a margin (error value); fu1 and fu2 are the two input vectors and d(fu1, fu2) is the distance between them; y indicates whether the labels of the two vectors are the same: y = 1 when they are the same and y = -1 when they are not.

The loss function L3 of the decision network is the BCE loss:

L3 = -(1/N) * sum_i [ y_r * log(ŷ_i) + (1 - y_r) * log(1 - ŷ_i) ]

wherein: i indexes the samples (N in total); y_r equal to 0 or 1 indicates whether the input sample pair is of the same class; ŷ_i is the output of the decision network.

When the detection task is executed, L3 is changed to the smooth L1 loss:

smooth_L1(x) = 0.5 * x^2   if |x| < 1
smooth_L1(x) = |x| - 0.5   otherwise

wherein: x is the element-wise difference between the prediction box and the ground truth.
Compared with the prior art, the invention has the following beneficial effects:
the invention can more effectively analyze the ultrasonic sequence image and has wide future application prospect.
Drawings
FIG. 1 is a diagram of ultrasound sequence image acquisition;
FIG. 2 is a framework flow diagram of the present invention;
FIG. 3 is a block diagram of twin encoding and decoding network modules in the present invention;
FIG. 4 is a block diagram of a twin metric learning network module of the present invention;
FIG. 5 is a block diagram of a decision network module in the present invention;
FIG. 6 is the network flow diagram of the present invention;
FIG. 7 is an ultrasound image of a bone;
FIG. 8 is a segmentation result;
FIG. 9 shows the results of the detection.
Detailed Description
The present invention will be further described with reference to the following examples.
Example 1
In order to realize cooperative analysis of ultrasonic sequence images, the invention provides a cooperative analysis method based on a twin neural network, comprising a twin encoding and decoding network module, a twin metric learning network module and a decision network module. As shown in FIG. 2, a pair of pictures (a pair of ultrasonic sequence images) is input to the twin encoding and decoding network module, which extracts the features of the picture pair and feeds them to the twin metric learning network module. After vectorizing the features of the picture pair, the twin metric learning network module calculates the distance between the two vectors and passes the result to the decision network module, which judges whether the picture pair belongs to the same class and returns the result to the twin encoding and decoding network module.
The twin encoding and decoding network module is shown in FIG. 3, in which I1 is the input picture, f1 the picture features extracted by the encoder, and M1 the prediction result. The module comprises a five-layer encoder and a five-layer decoder. The encoder is a feature extraction convolutional neural network based on VGG16; each layer comprises 2 to 3 convolutional layers and a max-pooling layer with a 2x2 kernel, and the encoder converts a 224x224x3 input picture (length 224, width 224, depth 3) into a 7x7x512 semantic feature map (length 7, width 7, depth 512), which serves as the input of the decoder. The decoder comprises five layers, each containing 2 to 3 transposed convolution layers and 1 nearest-neighbor interpolation layer. Nearest-neighbor interpolation enlarges the feature map but also increases the error of picture feature analysis, so a transposed convolution layer is added after each interpolation to reduce this error. A ReLU operation is performed once at the end of the transposed convolution layers inside each layer except the 5th; the 5th layer converts the feature map into a co-segmentation prediction result M through a Sigmoid function.
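As an illustration of this structure, the following is a minimal PyTorch sketch of the twin encoding and decoding module. The decoder channel widths, the use of torchvision's VGG16 feature extractor, and the choice of which intermediate map stands in for cT9 are assumptions for the sketch, not details fixed by the invention.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class DecoderBlock(nn.Module):
    """One decoder layer: nearest-neighbor interpolation doubles the map
    size, then transposed convolutions reduce the interpolation error;
    one ReLU (or Sigmoid in the 5th layer) is applied at the end."""
    def __init__(self, in_ch, out_ch, n_deconv=2, last=False):
        super().__init__()
        convs, ch = [], in_ch
        for _ in range(n_deconv):
            convs.append(nn.ConvTranspose2d(ch, out_ch, kernel_size=3, padding=1))
            ch = out_ch
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.convs = nn.Sequential(*convs)
        self.act = nn.Sigmoid() if last else nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.convs(self.up(x)))

class TwinCodec(nn.Module):
    """Weight-sharing encoder-decoder: the same instance is applied to
    both pictures of a pair, which is what makes the branches 'twin'."""
    def __init__(self):
        super().__init__()
        # VGG16 convolutional part: 224x224x3 -> 7x7x512 semantic features
        self.encoder = vgg16(weights=None).features
        self.decoder = nn.ModuleList([
            DecoderBlock(512, 512),
            DecoderBlock(512, 256),
            DecoderBlock(256, 128),
            DecoderBlock(128, 64),
            DecoderBlock(64, 1, last=True),   # co-segmentation map M
        ])

    def forward(self, x):
        h = self.encoder(x)
        feat = None
        for i, block in enumerate(self.decoder):
            h = block(h)
            if i == 3:        # stand-in for the cT9 feature map (assumed)
                feat = h
        return feat, h        # features for metric learning, prediction M
```

Because both pictures of a pair pass through the same `TwinCodec` instance, the weight sharing shown in FIG. 3 is obtained without any extra mechanism.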
The twin metric learning network module is shown in FIG. 4. The module comprises two fully connected layers of 128 and 64 dimensions respectively, the first followed by the nonlinear activation function ReLU. It first applies global average pooling to the pair of feature maps output by the transposed convolution layer cT9 of the twin encoding and decoding network module, obtaining a pair of vectors fu1 and fu2; fu1 and fu2 are then fed through the fully connected layers, and the resulting pair of vectors fs1, fs2 is output to the decision network module.
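A corresponding sketch of the twin metric learning module might look as follows; the input channel count depends on which decoder feature map is pooled and is an assumption here.

```python
import torch.nn as nn

class TwinMetric(nn.Module):
    """Global average pooling followed by two fully connected layers
    (128 then 64 dimensions, ReLU after the first), turning a feature
    map into a 64-d embedding vector fs."""
    def __init__(self, in_ch=64):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)    # global average pooling
        self.fc = nn.Sequential(
            nn.Linear(in_ch, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, 64),
        )

    def forward(self, feat):
        fu = self.pool(feat).flatten(1)        # pooled vector fu
        return self.fc(fu)                     # embedding fs
```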
The decision network module is shown in FIG. 5 and determines whether a pair of input pictures belongs to the same class: when it judges the pair to be of the same class the decision result is 1, otherwise 0. Specifically, the module splices the two vectors output by the twin metric learning network module into a 128-dimensional input vector, which passes through two fully connected layers in sequence: the first is 32-dimensional and followed by the nonlinear activation function ReLU, and the second is 1-dimensional and followed by a Sigmoid function that converts the prediction result into a probability between 0 and 1, from which it is inferred whether the pair of pictures is of the same class; a probability value close to 1 indicates the same class, otherwise not.
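The decision network can be sketched in the same vein; apart from the 128-32-1 layer sizes stated above, the details are illustrative.

```python
import torch
import torch.nn as nn

class DecisionNet(nn.Module):
    """Splices two 64-d embeddings into a 128-d vector and maps it to a
    same-class probability in (0, 1) via FC(32)+ReLU, FC(1)+Sigmoid."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(128, 32),
            nn.ReLU(inplace=True),
            nn.Linear(32, 1),
            nn.Sigmoid(),
        )

    def forward(self, fs1, fs2):
        # probability close to 1 -> same class, close to 0 -> different
        return self.fc(torch.cat([fs1, fs2], dim=1)).squeeze(1)
```

A predicted probability can then be thresholded (e.g. at 0.5) to the decision value 1 (same class) or 0 (different class) described above.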
The loss function used by the network depends on the analysis task. When the segmentation task is executed, the loss function used is:

L_final = w1*L1 + w2*L2 + w3*L3

wherein: L_final is the total loss of the model; w1, w2 and w3 are weights; L1 is the binary cross entropy, the loss function for training the twin encoding and decoding network; L2 is the triplet loss, the loss function for training the twin metric learning network:

L2 = d(fu1, fu2)              if y = 1
L2 = max(0, a - d(fu1, fu2))  if y = -1

wherein: a is a margin (error value); fu1 and fu2 are the two input vectors and d(fu1, fu2) is the distance between them; y indicates whether the labels of the two vectors are the same: y = 1 when they are the same and y = -1 when they are not.

The loss function L3 of the decision network is the BCE loss:

L3 = -(1/N) * sum_i [ y_r * log(ŷ_i) + (1 - y_r) * log(1 - ŷ_i) ]

wherein: i indexes the samples (N in total); y_r equal to 0 or 1 indicates whether the input sample pair is of the same class; ŷ_i is the output of the decision network.

When the detection task is executed, L3 is changed to the smooth L1 loss:

smooth_L1(x) = 0.5 * x^2   if |x| < 1
smooth_L1(x) = |x| - 0.5   otherwise

wherein: x is the element-wise difference between the prediction box and the ground truth.
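Under the reconstruction of the loss formulas given above (the margin-based form of L2 is inferred from the stated variables, since the original equations survive only as images), the combined segmentation loss could be written as the following sketch; the weights w are placeholders.

```python
import torch
import torch.nn.functional as F

def metric_loss(fu1, fu2, y, a=1.0):
    """Pairwise margin loss with label y in {+1, -1} and margin a:
    same-class vectors are pulled together, different-class vectors are
    pushed at least a apart (form assumed from the variable definitions)."""
    d = F.pairwise_distance(fu1, fu2)
    return torch.where(y == 1, d, F.relu(a - d)).mean()

def segmentation_loss(m_pred, m_gt, fu1, fu2, y_pair, p_same, y_same,
                      w=(1.0, 1.0, 1.0)):
    """L_final = w1*L1 + w2*L2 + w3*L3 for the segmentation task."""
    l1 = F.binary_cross_entropy(m_pred, m_gt)    # codec loss L1
    l2 = metric_loss(fu1, fu2, y_pair)           # metric loss L2
    l3 = F.binary_cross_entropy(p_same, y_same)  # decision BCE loss L3
    return w[0] * l1 + w[1] * l2 + w[2] * l3

# For the detection task, the BCE term L3 is replaced by smooth L1 over
# the predicted and ground-truth boxes, e.g.:
#   l3 = F.smooth_l1_loss(box_pred, box_gt)
```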
The flow of the entire network cooperative analysis method is shown in FIG. 6. The input is a pair of pictures (a pair of ultrasonic sequence images I1, I2), which is passed through the twin encoding and decoding network with shared weights. The feature maps obtained during decoding are input into the twin metric learning network, vectorized, and passed to the decision network, which judges whether the two inputs are samples of the same class. For same-class samples the complete network is trained, outputting the prediction results M1, M2.
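Putting the modules together, one training step over a picture pair could look like the sketch below. The optimizer, learning rate and the single-mask loss term are illustrative assumptions, and the metric loss is applied here to the output embeddings, whereas the formulas above write it over the pooled vectors fu — a further simplification.

```python
import torch

codec, metric, decide = TwinCodec(), TwinMetric(in_ch=64), DecisionNet()
params = (list(codec.parameters()) + list(metric.parameters())
          + list(decide.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)

def train_step(i1, i2, m1_gt, same):
    """i1, i2: a pair of ultrasound images (Nx3x224x224); m1_gt: ground-
    truth mask for i1; same: float tensor in {0, 1} marking same-class pairs."""
    f1, m1 = codec(i1)            # shared weights: one codec, two passes
    f2, m2 = codec(i2)
    fs1, fs2 = metric(f1), metric(f2)
    p = decide(fs1, fs2)          # same-class probability
    y_pair = same * 2 - 1         # {0, 1} -> {+1, -1} for the metric loss
    loss = segmentation_loss(m1, m1_gt, fs1, fs2, y_pair, p, same)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item(), m1.detach(), m2.detach()
```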
Using this network, the image shown in FIG. 7 is segmented; the expected segmentation result is shown in FIG. 8. When the network is used for the detection task, the predicted analysis result is shown in FIG. 9.
The above description covers only the preferred embodiments of the present invention. It should be noted that various modifications and adaptations may be made by those skilled in the art without departing from the principles of the invention, and these are also intended to fall within the scope of the invention.

Claims (2)

1. A cooperative analysis method based on a twin neural network is characterized in that: the cooperative analysis method comprises a twin coding and decoding network module, a twin metric learning network module and a decision network module, wherein a group of picture pairs are input to the twin coding and decoding network module, the twin coding and decoding network module extracts the characteristics of the picture pairs and inputs the characteristics of the picture pairs into the twin metric learning network module, after the twin metric learning network module vectorizes the characteristics of the picture pairs, the distance between two vectors is calculated, the result is input into the decision network module, and the decision network module judges whether the picture pairs are in the same class or not and transmits the result to the twin coding and decoding network module;
the twin encoding and decoding network module comprises a five-layer encoder and a five-layer decoder, wherein the encoder is a feature extraction convolutional neural network based on VGG16, each layer comprises 3 convolutional layers and a max-pooling layer with a 2x2 kernel, and the encoder converts a 224x224x3 input picture into a 7x7x512 semantic feature map which serves as the input of the decoder; the decoder comprises five layers, each comprising 3 transposed convolution layers and 1 nearest-neighbor interpolation layer, a transposed convolution layer being added after each nearest-neighbor interpolation; a ReLU operation is performed once at the end of the transposed convolution layers inside each layer except the 5th layer;
the twin metric learning network module comprises two fully connected layers of 128 and 64 dimensions respectively, with the nonlinear activation function ReLU connected after the first layer; the twin metric learning network module first performs global average pooling on a pair of feature maps output by the transposed convolution layer cT9 of the twin encoding and decoding network module to obtain a pair of vectors fu1 and fu2, inputs fu1 and fu2 into the fully connected layers, and outputs a pair of vectors fs1 and fs2 to the decision network module;
the decision network module obtains a 128-dimensional vector as input by splicing the two vectors output by the twin metric learning network module; the 128-dimensional vector passes through two fully connected layers in sequence, the first fully connected layer being 32-dimensional and followed by the nonlinear activation function ReLU, and the second fully connected layer being 1-dimensional and followed by a Sigmoid function that converts the prediction result into a probability between 0 and 1, from which it is inferred whether the pair of pictures is of the same class, a probability value close to 1 indicating the same class and otherwise not;
the 5th layer of the decoder converts the feature map into a co-segmentation prediction result M through a Sigmoid function;
in performing the segmentation task, the loss function used is:

L_final = w1*L1 + w2*L2 + w3*L3

wherein: L_final is the total loss of the model; w1, w2 and w3 are weights; L1 is the binary cross entropy, the loss function for training the twin encoding and decoding network; L2 is the triplet loss, the loss function for training the twin metric learning network:

L2 = d(fu1, fu2)              if y = 1
L2 = max(0, a - d(fu1, fu2))  if y = -1

wherein: a is a margin (error value); fu1 and fu2 are the pair of vectors derived in the twin metric learning network module and d(fu1, fu2) is the distance between them; y indicates whether the labels of the two vectors are the same: y = 1 when they are the same and y = -1 when they are not;

the loss function L3 of the decision network is the BCE loss:

L3 = -(1/N) * sum_i [ y_r * log(ŷ_i) + (1 - y_r) * log(1 - ŷ_i) ]

wherein: i indexes the samples (N in total); y_r equal to 0 indicates that the input sample pair is not of the same class and y_r equal to 1 indicates that it is; ŷ_i is the output result of the decision network;

when the detection task is performed, L3 is changed to the smooth L1 loss:

smooth_L1(x) = 0.5 * x^2   if |x| < 1
smooth_L1(x) = |x| - 0.5   otherwise

wherein: x is the element-wise difference between the prediction box and the ground truth.
2. The twin neural network-based cooperative analysis method according to claim 1, wherein: when the decision network module judges that the picture pair is of the same class, the judgment result is 1, and when the decision network module judges that the picture pair is not of the same class, the judgment result is 0.
CN202011448450.9A (filed 2020-12-09, priority 2020-12-09) — Cooperative analysis method based on twin neural network — granted as CN112472136B (Active)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202011448450.9A | 2020-12-09 | 2020-12-09 | Cooperative analysis method based on twin neural network

Publications (2)

Publication Number | Publication Date
CN112472136A | 2021-03-12
CN112472136B | 2022-06-17

Family

ID=74941561

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202011448450.9A (granted as CN112472136B, Active) | Cooperative analysis method based on twin neural network | 2020-12-09 | 2020-12-09

Country Status (1)

Country | Link
CN | CN112472136B

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113792653B (en) * 2021-09-13 2023-10-20 山东交通学院 Method, system, equipment and storage medium for cloud detection of remote sensing image


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10671855B2 (en) * 2018-04-10 2020-06-02 Adobe Inc. Video object segmentation by reference-guided mask propagation

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110136101A (en) * 2019-04-17 2019-08-16 杭州数据点金科技有限公司 A kind of tire X-ray defect detection method compared based on twin distance
CN111127503A (en) * 2019-12-31 2020-05-08 上海眼控科技股份有限公司 Method, device and storage medium for detecting the pattern of a vehicle tyre
CN111242173A (en) * 2019-12-31 2020-06-05 四川大学 RGBD salient object detection method based on twin network
CN111368729A (en) * 2020-03-03 2020-07-03 河海大学常州校区 Vehicle identity discrimination method based on twin neural network
CN111797716A (en) * 2020-06-16 2020-10-20 电子科技大学 Single target tracking method based on Siamese network
CN111833334A (en) * 2020-07-16 2020-10-27 上海志唐健康科技有限公司 Fundus image feature processing and analyzing method based on twin network architecture

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Hsin-Tzu Wang et al.; "Deep-Learning-Based Block Similarity Evaluation for Image Forensics"; 2020 IEEE International Conference on Consumer Electronics; 2020-11-23; pp. 1-2 *


Similar Documents

Publication Publication Date Title
CN112150425B (en) Unsupervised intravascular ultrasound image registration method based on neural network
CN110889853B (en) Tumor segmentation method based on residual error-attention deep neural network
CN110889852B (en) Liver segmentation method based on residual error-attention deep neural network
JP2023550844A (en) Liver CT automatic segmentation method based on deep shape learning
CN111985538A (en) Small sample picture classification model and method based on semantic auxiliary attention mechanism
CN111325750A (en) Medical image segmentation method based on multi-scale fusion U-shaped chain neural network
CN115512103A (en) Multi-scale fusion remote sensing image semantic segmentation method and system
CN112472136B (en) Cooperative analysis method based on twin neural network
CN116228792A (en) Medical image segmentation method, system and electronic device
CN114897094A (en) Esophagus early cancer focus segmentation method based on attention double-branch feature fusion
CN112381846A (en) Ultrasonic thyroid nodule segmentation method based on asymmetric network
CN114299006A (en) Self-adaptive multi-channel graph convolution network for joint graph comparison learning
CN115496720A (en) Gastrointestinal cancer pathological image segmentation method based on ViT mechanism model and related equipment
CN115631183A (en) Method, system, device, processor and storage medium for realizing classification and identification of X-ray image based on double-channel decoder
Ye et al. Adjacent-level feature cross-fusion with 3D CNN for remote sensing image change detection
CN116503668A (en) Medical image classification method based on small sample element learning
CN111275103A (en) Multi-view information cooperation type kidney benign and malignant tumor classification method
CN117611601A (en) Text-assisted semi-supervised 3D medical image segmentation method
CN113706546B (en) Medical image segmentation method and device based on lightweight twin network
CN115688234A (en) Building layout generation method, device and medium based on conditional convolution
CN109064403A (en) Fingerprint image super-resolution method based on classification coupling dictionary rarefaction representation
CN114283301A (en) Self-adaptive medical image classification method and system based on Transformer
CN109919162B (en) Model for outputting MR image feature point description vector symbol and establishing method thereof
CN112133366A (en) Face type prediction method based on gene data and generation of anti-convolution neural network
Li et al. A deep learning feature fusion algorithm based on Lensless cell detection system

Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant