CN116310535A - Multi-scale multi-region thyroid nodule prediction method - Google Patents

Multi-scale multi-region thyroid nodule prediction method

Info

Publication number
CN116310535A
CN116310535A (application CN202310215158.XA)
Authority
CN
China
Prior art keywords
feature extraction
model
scale
nodule
thyroid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310215158.XA
Other languages
Chinese (zh)
Inventor
于林韬
曲朔欧
杨絮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun University of Science and Technology
Original Assignee
Changchun University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun University of Science and Technology
Priority to CN202310215158.XA
Publication of CN116310535A
Legal status: Pending

Classifications

    • G06V 10/764: image or video recognition or understanding using machine-learning classification, e.g. of video objects
    • G06V 10/765: classification using rules for classification or partitioning the feature space
    • G06N 3/082: learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G06V 10/40: extraction of image or video features
    • G06V 10/774: generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/776: validation; performance evaluation
    • G06V 10/778: active pattern-learning, e.g. online learning of image or video features
    • G06V 10/806: fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
    • G06V 10/82: recognition or understanding using neural networks
    • G06V 2201/032: recognition of patterns in medical or anatomical images of protuberances, polyps, nodules, etc.
    • Y02A 90/10: information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The invention discloses a multi-scale, multi-region thyroid nodule prediction method in the technical field of medical image processing and deep learning. The method comprises the following steps: preparing a training data set; constructing a feature extraction model; constructing a feature reconstruction model; and determining a prediction model. Ultrasound images of different regions at different scales are fed into a combination of a high/low-frequency module and a multi-scale feature extraction module: during network training, the feature map is divided along the channel dimension into high-frequency and low-frequency features, features of the thyroid nodule are extracted, and feature reconstruction is then performed to obtain discriminative features. The thyroid nodule features extracted by the network are more comprehensive, and the benign/malignant prediction results are more accurate.

Description

Multi-scale multi-region thyroid nodule prediction method
Technical Field
The invention belongs to the technical field of medical image processing and deep learning, and particularly relates to a multi-scale multi-region thyroid nodule prediction method.
Background
Noninvasive diagnosis of thyroid nodules relies mainly on imaging examinations, among which ultrasound examination has the advantages of being economical, harmless, noninvasive, simple to perform and reasonably accurate. Conventional ultrasound examination, however, is affected by factors such as operator expertise and the standard of the medical environment, which limits diagnostic accuracy. With the continuing fusion and development of computing and imaging technology, deep learning has been widely applied to image-aided diagnosis of clinical diseases thanks to its low cost, high efficiency, high consistency and high accuracy. Assisted by deep learning, different physicians can analyze ultrasound images quantitatively and produce a uniform diagnostic report, reducing the influence of external factors such as physician skill and differences between imaging devices. At present, however, incomplete extraction of whole-nodule features from thyroid nodule ultrasound images remains the main cause of low classification accuracy.
Chinese patent application CN201911271119, "Thyroid nodule classification method based on multiscale feature fusion", mainly cleans the data, performs emphasis processing on the data set, adds a high-resolution channel based on a residual network, and replaces the residual module with a multi-scale information-fusion module. That method inputs only a single-scale image and has only a high-resolution channel, so its feature extraction is not sufficiently comprehensive.
Disclosure of Invention
Aiming at the current problems that the accuracy of benign/malignant thyroid nodule prediction results is low and whole-nodule features are ignored during feature extraction, the invention provides a multi-scale, multi-region thyroid nodule prediction method. Ultrasound images of different scales and different regions are taken as input, and a high/low-frequency module is combined with a multi-scale feature extraction module: during network training, the feature map is divided along the channel dimension into high-frequency and low-frequency features, thyroid nodule features are extracted, and feature reconstruction is then performed to obtain discriminative features. The thyroid nodule features extracted by the network are more comprehensive, and the benign/malignant prediction results are more accurate.
The invention is realized by the following technical scheme:
a multi-scale multi-region thyroid nodule prediction method comprises the following specific steps:
step 1: preparing a training data set: collecting a set of thyroid ultrasound images from thyroid patients and performing image preprocessing to construct the training data set;
step 2: constructing a feature extraction model: the feature extraction model comprises a high/low-frequency feature extraction module and a multi-scale feature extraction module; thyroid nodule images of different scales and different regions are input to the high/low-frequency feature extraction module, features are extracted with respect to three aspects, namely nodule contour, nodule shape and nodule edge, and the output is used as the input of the multi-scale feature extraction module for further feature extraction;
step 3: constructing a feature reconstruction model: the feature reconstruction model performs feature reconstruction on the multi-scale feature extraction result obtained in step 2, optimizing the network model and improving classification accuracy;
step 4: determining a prediction model: training the network model with the training set; after the number of training iterations reaches a preset threshold, testing the trained model with the test set; when the accuracy of the network model stabilizes within a set range, training is considered complete and the network model parameters are saved; after training, the parameters are fixed and the network model is determined as the final nodule prediction model, which is then used to predict the type of thyroid nodule.
Further, in step 1 the image set consists of thyroid nodule ultrasound images acquired during hospital thyroid ultrasound examinations and confirmed benign or malignant by puncture pathology; the image preprocessing comprises cutting off invalid information, unifying image size, normalizing the images and enhancing ultrasound contrast, and the data set is divided into a training set and a test set in a 7:3 ratio.
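As a rough illustration, the preprocessing and 7:3 split described above can be sketched as follows, using the image sizes given in embodiment 2 (1440×1080 originals, 640×640 after processing, 3516 images). The crop strategy, augmentation parameters and function names are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def center_crop(img, size=640):
    """Crop the central size×size region (a stand-in for 'cutting off invalid information')."""
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def normalize(img):
    """Scale pixel intensities into [0, 1]."""
    img = img.astype(np.float32)
    return (img - img.min()) / (img.max() - img.min() + 1e-8)

def augment(img, rng):
    """Random horizontal flip plus brightness jitter, echoing the data-expansion step."""
    if rng.random() < 0.5:
        img = img[:, ::-1]
    return np.clip(img * rng.uniform(0.8, 1.2), 0.0, 1.0)

def split_7_3(items, rng):
    """Shuffle and divide into a 70% training set and a 30% test set."""
    idx = rng.permutation(len(items))
    cut = int(0.7 * len(items))
    return [items[i] for i in idx[:cut]], [items[i] for i in idx[cut:]]

rng = np.random.default_rng(0)
raw = np.arange(1080 * 1440, dtype=np.float32).reshape(1080, 1440)  # a 1440×1080 'image'
img = augment(normalize(center_crop(raw)), rng)
train, test = split_7_3(list(range(3516)), rng)
print(img.shape, len(train), len(test))
```

With 3516 images, this split yields 2461 training samples and 1055 test samples.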
Further, in step 2 the high/low-frequency feature extraction module comprises a high-frequency module and a low-frequency module, where the high-frequency module performs convolution and pooling operations on the data and the low-frequency module performs average pooling and up-sampling operations on the data; the multi-scale feature extraction module comprises four different convolution blocks and a shortcut mechanism.
Further, the four different convolution blocks contain one, two, three and four convolution layers, respectively.
Further, in step 3 the feature reconstruction model comprises three skip modules and one convolution layer, where each skip module comprises four convolution layers, four normalization layers and four activation-function layers.
Compared with the prior art, the invention has the following advantages:
1. By extracting features from the low-frequency and high-frequency channels simultaneously, both deep and shallow features of the ultrasound image are extracted more comprehensively, which effectively improves prediction accuracy and better assists physicians' diagnosis in clinical medicine.
2. The feature extraction part takes multi-scale thyroid nodule maps as input and extracts features from different regions at different scales, so the nodule condition is described more comprehensively and prediction accuracy is improved.
3. The feature reconstruction part of the invention adopts a skip-connection structure with an added convolution layer, which effectively increases information flow, learns the discriminative features of the image, avoids the loss of important information caused by stacking convolution layers, and improves prediction accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. Like elements or portions are generally identified by like reference numerals throughout the several figures. In the drawings, elements or portions thereof are not necessarily drawn to scale.
FIG. 1 is a flow chart of a multi-scale and multi-region thyroid nodule prediction method of the present invention;
FIG. 2 is a network structure diagram of the multi-scale multi-region thyroid nodule prediction method of the present invention;
FIG. 3 is a block diagram of a high and low frequency feature extraction module according to the present invention;
FIG. 4 is a block diagram of a multi-scale feature extraction module according to the present invention;
FIG. 5 is a block diagram of a feature reconstruction portion according to the present invention;
fig. 6 is a block diagram of a skip module according to the present invention.
Detailed Description
For a clear and complete description of the technical scheme and the specific working process thereof, the following specific embodiments of the invention are provided with reference to the accompanying drawings in the specification:
example 1
As shown in fig. 1, a flow chart of a multi-scale multi-region thyroid nodule prediction method according to the present embodiment is shown, and the method specifically includes the following steps:
Step 1, prepare the training data set: perform image preprocessing on the thyroid ultrasound image set provided by the hospital and construct the training data set. The image set consists of thyroid nodule ultrasound images acquired during hospital thyroid ultrasound examinations and confirmed benign or malignant by puncture pathology; preprocessing comprises cutting off invalid information, unifying image size, normalizing the images and enhancing ultrasound contrast, and the data are divided into a training set and a test set in a 7:3 ratio.
Step 2, construct the feature extraction model: the feature extraction model comprises a high/low-frequency feature extraction module and a multi-scale feature extraction module. Thyroid nodule images of different scales and different regions are input to the high/low-frequency feature extraction module, features are extracted with respect to nodule contour, nodule shape and nodule edge, and the output is passed to the multi-scale feature extraction module for further feature extraction. The high/low-frequency feature extraction module comprises a high-frequency module (convolution and pooling operations) and a low-frequency module (average pooling, up-sampling and convolution operations); the multi-scale feature extraction module comprises four convolution blocks and a shortcut mechanism, the four blocks containing one, two, three and four convolution layers, respectively.
Step 3, construct the feature reconstruction model: the feature reconstruction model performs feature reconstruction on the multi-scale feature extraction result, optimizing the network model and improving classification accuracy. The feature reconstruction model comprises three skip modules and one convolution layer; each skip module comprises four convolution layers, four normalization layers and four activation-function layers.
Step 4, determine the prediction model: train the network model with the training set; after the number of training iterations reaches a preset threshold, test the trained model with the test set; when the accuracy of the network model stabilizes within a set range, training is considered complete and the model parameters are saved. After training, the parameters are fixed and the network model is determined as the final nodule prediction model. To predict whether a thyroid nodule ultrasound image is benign or malignant, the image is simply input to the network model to obtain the final prediction result.
Example 2
As shown in fig. 1, a multi-scale multi-region thyroid nodule prediction method specifically includes the following steps:
Step 1, prepare the training data: the resolution of the original images provided by the hospital is 1440×1080 pixels. The data set is processed by cutting off invalid information, unifying size and normalizing the images; the processed images have a resolution of 640×640 pixels. The data are then expanded by increasing brightness and contrast, flipping, rotation and similar methods, giving a final data set of 3516 ultrasound images in JPG format, which is divided into a training set and a test set in a 7:3 ratio.
Step 2, construct the feature extraction model. Its structure is shown in the feature extraction part of fig. 2: the model comprises a high/low-frequency feature extraction module and a multi-scale feature extraction module. From the input thyroid ultrasound image, three thyroid nodule images of different scales (640×640, 320×320 and 160×160) are input to the high/low-frequency feature extraction module, corresponding to the contour, shape and edge of the nodule in the morphological characteristics of the nodule ultrasound image. The output is passed to the multi-scale feature extraction module, which performs further feature extraction on the images of different scales.
the high-low frequency feature extraction module comprises a high-frequency module and a low-frequency module, as shown in fig. 3, the high-frequency module comprises a convolution operation and a pooling operation, and the low-frequency module comprises an average pooling operation, an up-sampling operation and a convolution operation, wherein the convolution kernel size in the convolution operation is 2×2. Dividing an input image into high frequencies X H And low frequency X L And the resolution of the low-frequency image is reduced to half of that of the high-frequency image, and the high-frequency module outputs the result as Y H The output result of the low-frequency module is Y L The final output result can be expressed as:
Y=Y H +Y L
wherein Y is H And Y L The expression of (2) is:
Y H =f(X H ;W H→H )+upsample[f(X L ;W L→H ),2]
Y L =f(X L ;W L→L )+f[pool(X H ,2);W H→L ]
wherein: upsampling operation is represented by upsampling operation, pooling operation is represented by pool, and convolution operation with convolution kernel W is represented by f (X, W).
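The high/low-frequency equations above can be exercised numerically. In this sketch, f(X; W) is replaced by multiplication with a scalar weight so the example stays runnable, and Y_L is up-sampled before the final sum Y = Y_H + Y_L (an assumption made here for shape compatibility; the patent does not spell out that adjustment):

```python
import numpy as np

def avg_pool2(x):
    """2×2 average pooling: pool(·, 2)."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(x):
    """Nearest-neighbour ×2 up-sampling: upsample(·, 2)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def f(x, w):
    """Scalar stand-in for the convolution f(X; W)."""
    return w * x

x_h = np.ones((8, 8))   # high-frequency part X_H
x_l = np.ones((4, 4))   # low-frequency part X_L at half resolution
w_hh, w_lh, w_ll, w_hl = 1.0, 0.5, 1.0, 0.5

y_h = f(x_h, w_hh) + upsample2(f(x_l, w_lh))   # Y_H stays at full resolution
y_l = f(x_l, w_ll) + f(avg_pool2(x_h), w_hl)   # Y_L stays at half resolution
y = y_h + upsample2(y_l)                       # Y = Y_H + Y_L after resolution matching
print(y_h.shape, y_l.shape, y.shape)
```

The point of the structure is visible in the shapes: the high-frequency branch keeps full resolution while the low-frequency branch works at half resolution, and each branch receives a contribution from the other.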
As shown in fig. 4, the multi-scale feature extraction module comprises four convolution blocks and a shortcut mechanism, the shortcut connecting the input directly to the output. The four convolution blocks are convolution layers 1 to 4. Convolution layer 1 is a 3×3 convolution with dilation rate 1. Convolution layer 2 comprises a 3×3 convolution with dilation rate 1 and a 1×1 convolution with dilation rate 1. Convolution layer 3 comprises a 3×3 convolution with dilation rate 1, a 3×3 convolution with dilation rate 3 and a 1×1 convolution with dilation rate 1. Convolution layer 4 comprises a 3×3 convolution with dilation rate 1, a 3×3 convolution with dilation rate 3, a 3×3 convolution with dilation rate 7 and a 1×1 convolution with dilation rate 1.
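One way to see why the four blocks cover multiple scales is to compute the receptive field of each stack of dilated convolutions using the standard relation k_eff = k + (k - 1)(d - 1) for a k×k kernel with dilation rate d. The block compositions below follow the description above; the arithmetic is an illustrative sketch, not part of the patent:

```python
def effective_kernel(k, d):
    """Effective extent of a k×k kernel with dilation rate d."""
    return k + (k - 1) * (d - 1)

def stacked_receptive_field(layers):
    """Receptive field of stride-1 convolutions applied in sequence."""
    rf = 1
    for k, d in layers:
        rf += effective_kernel(k, d) - 1
    return rf

# (kernel size, dilation rate) for convolution layers 1-4 as described above
block1 = [(3, 1)]
block2 = [(3, 1), (1, 1)]
block3 = [(3, 1), (3, 3), (1, 1)]
block4 = [(3, 1), (3, 3), (3, 7), (1, 1)]
print([stacked_receptive_field(b) for b in (block1, block2, block3, block4)])
```

The four blocks therefore see neighbourhoods of roughly 3, 3, 9 and 23 pixels, which is what makes their concatenated outputs multi-scale.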
Step 3, construct the feature reconstruction model: the feature reconstruction part performs feature reconstruction on the multi-scale feature extraction result. As shown in fig. 5, the feature reconstruction model comprises three skip modules and one convolution layer, where the convolution layer has a 1×1 kernel.
the jump module is shown in fig. 6 and comprises four convolution layers, four standardized operation layers and four activation function layers. The convolution kernel sizes of the convolution layers are 1×1 and 3×3, the step sizes are 1, and the linear rectification function is used as the activation function.
The linear rectification function is defined as
f(x) = max(0, x)
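A minimal numeric sketch of one skip module follows, with scalar stand-ins for the 1×1/3×3 convolutions and inference-style normalization without learned scale or shift. The real module learns convolution weights; this only illustrates the conv → normalize → ReLU → skip data flow:

```python
import numpy as np

def relu(x):
    """Linear rectification: f(x) = max(0, x)."""
    return np.maximum(0.0, x)

def standardize(x, eps=1e-5):
    """Normalization-layer stand-in: zero mean, unit variance."""
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def conv_stub(x, w):
    """Scalar stand-in for a convolution layer."""
    return w * x

def skip_module(x, weights):
    """Four conv→normalize→ReLU stages, with the input added back at the end."""
    y = x
    for w in weights:
        y = relu(standardize(conv_stub(y, w)))
    return y + x  # the skip connection preserves the incoming information

x = np.linspace(-1.0, 1.0, 16).reshape(4, 4)
out = skip_module(x, [1.0, 0.8, 1.2, 1.0])
print(out.shape)
```

Because the last stage ends in a ReLU, the stack's contribution is non-negative, so the skip guarantees the output never falls below the input: the "information flow" advantage claimed above.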
Step 4, determine the prediction model. During network training, the number of training iterations is set to 500 and the learning rate to 10^-3; training is considered complete when the prediction accuracy reaches 95% or above. After training, the network model parameters are fixed, and the network model with these parameters is determined as the final prediction model.
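The stopping rule of this step can be sketched as a loop that runs for at most 500 iterations and stops early once accuracy reaches the 95% target; the toy accuracy curve below merely stands in for real training:

```python
def train_until(step_fn, max_iters=500, target_acc=0.95):
    """Run training iterations until accuracy reaches the target or iterations run out."""
    acc = 0.0
    for it in range(1, max_iters + 1):
        acc = step_fn(it)  # one training iteration, returning current accuracy
        if acc >= target_acc:
            return it, acc
    return max_iters, acc

# hypothetical accuracy curve: climbs steadily, capped below 1.0
it, acc = train_until(lambda i: min(i / 50, 0.99))
print(it, acc)
```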
The prediction performance of the multi-scale, multi-region thyroid nodule prediction model is then analyzed: the test set divided in step 1 is input to the prediction model, with accuracy, recall and precision as evaluation indices. As the experimental results in Table 1 show, the network model provided by the invention performs best.
Table 1 network model experimental results
(Table 1 is reproduced as an image in the original publication.)
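The three evaluation indices named above can be computed from binary confusion counts as follows; the counts are hypothetical and do not reproduce the patent's Table 1:

```python
def metrics(tp, fp, tn, fn):
    """Accuracy, recall and precision from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    recall = tp / (tp + fn)        # fraction of malignant nodules correctly flagged
    precision = tp / (tp + fp)     # fraction of malignant calls that are correct
    return accuracy, recall, precision

# hypothetical counts for a 1055-image test set
acc, rec, prec = metrics(tp=480, fp=20, tn=510, fn=45)
print(round(acc, 3), round(rec, 3), round(prec, 3))
```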
Convolution, activation functions, pooling and up-sampling are implemented with algorithms well known to those skilled in the art; the specific procedures and methods can be found in the corresponding textbooks or technical literature.
The preferred embodiments of the present invention have been described in detail above with reference to the accompanying drawings, but the present invention is not limited to the specific details of the above embodiments, and various simple modifications can be made to the technical solution of the present invention within the scope of the technical concept of the present invention, and all the simple modifications belong to the protection scope of the present invention.
In addition, the specific features described in the above embodiments may be combined in any suitable manner, and in order to avoid unnecessary repetition, various possible combinations are not described further.
Moreover, any combination of the various embodiments of the invention can be made without departing from the spirit of the invention, which should also be considered as disclosed herein.

Claims (5)

1. A multi-scale multi-region thyroid nodule prediction method is characterized by comprising the following specific steps:
step 1: preparing a training data set: collecting a set of thyroid ultrasound images from thyroid patients and performing image preprocessing to construct the training data set;
step 2: constructing a feature extraction model: the feature extraction model comprises a high/low-frequency feature extraction module and a multi-scale feature extraction module; thyroid nodule images of different scales and different regions are input to the high/low-frequency feature extraction module, features are extracted with respect to three aspects, namely nodule contour, nodule shape and nodule edge, and the output is used as the input of the multi-scale feature extraction module for further feature extraction;
step 3: constructing a feature reconstruction model: the feature reconstruction model performs feature reconstruction on the multi-scale feature extraction result obtained in step 2, optimizing the network model and improving classification accuracy;
step 4: determining a prediction model: training the network model with the training set; after the number of training iterations reaches a preset threshold, testing the trained model with the test set; when the accuracy of the network model stabilizes within a set range, training is considered complete and the network model parameters are saved; after training, the parameters are fixed and the network model is determined as the final nodule prediction model, which is then used to predict the type of thyroid nodule.
2. The multi-scale multi-region thyroid nodule prediction method according to claim 1, characterized in that in step 1 the image set consists of thyroid nodule ultrasound images acquired during hospital thyroid ultrasound examinations and confirmed benign or malignant by puncture pathology; the image preprocessing comprises cutting off invalid information, unifying image size, normalizing the images and enhancing ultrasound contrast, and the training data set is divided into a training set and a test set in a 7:3 ratio.
3. The multi-scale multi-region thyroid nodule prediction method according to claim 1, characterized in that in step 2 the high/low-frequency feature extraction module comprises a high-frequency module and a low-frequency module, where the high-frequency module performs convolution and pooling operations on the data and the low-frequency module performs average pooling and up-sampling operations on the data; the multi-scale feature extraction module comprises four different convolution blocks and a shortcut mechanism.
4. The multi-scale multi-region thyroid nodule prediction method according to claim 3, characterized in that the four different convolution blocks contain one, two, three and four convolution layers, respectively.
5. The multi-scale multi-region thyroid nodule prediction method according to claim 1, characterized in that in step 3 the feature reconstruction model comprises three skip modules and one convolution layer, where each skip module comprises four convolution layers, four normalization layers and four activation-function layers.
CN202310215158.XA 2023-03-07 2023-03-07 Multi-scale multi-region thyroid nodule prediction method Pending CN116310535A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310215158.XA CN116310535A (en) 2023-03-07 2023-03-07 Multi-scale multi-region thyroid nodule prediction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310215158.XA CN116310535A (en) 2023-03-07 2023-03-07 Multi-scale multi-region thyroid nodule prediction method

Publications (1)

Publication Number Publication Date
CN116310535A true CN116310535A (en) 2023-06-23

Family

ID=86819843

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310215158.XA Pending CN116310535A (en) 2023-03-07 2023-03-07 Multi-scale multi-region thyroid nodule prediction method

Country Status (1)

Country Link
CN (1) CN116310535A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117541586A (en) * 2024-01-10 2024-02-09 长春理工大学 Thyroid nodule detection method based on deformable YOLO


Similar Documents

Publication Publication Date Title
Shamshad et al. Transformers in medical imaging: A survey
CN107748900B (en) Mammary gland tumor classification device and storage medium based on discriminative convolutional neural network
Chan et al. Texture-map-based branch-collaborative network for oral cancer detection
CN111489327A (en) Cancer cell image detection and segmentation method based on Mask R-CNN algorithm
CN113223005B (en) Thyroid nodule automatic segmentation and grading intelligent system
Tan et al. Automated vessel segmentation in lung CT and CTA images via deep neural networks
CN109215035B (en) Brain MRI hippocampus three-dimensional segmentation method based on deep learning
CN116310535A (en) Multi-scale multi-region thyroid nodule prediction method
Hou et al. Anomaly detection of calcifications in mammography based on 11,000 negative cases
CN112634231A (en) Image classification method and device, terminal equipment and storage medium
CN112614093A (en) Breast pathology image classification method based on multi-scale space attention network
CN113764101B (en) Novel auxiliary chemotherapy multi-mode ultrasonic diagnosis system for breast cancer based on CNN
Shi et al. Automatic detection of pulmonary nodules in CT images based on 3D Res-I network
CN113538209A (en) Multi-modal medical image registration method, registration system, computing device and storage medium
CN114565601A (en) Improved liver CT image segmentation algorithm based on DeepLabV3+
Kumar et al. A Novel Approach for Breast Cancer Detection by Mammograms
CN112967254A (en) Lung disease identification and detection method based on chest CT image
CN116309647B (en) Method for constructing craniocerebral lesion image segmentation model, image segmentation method and device
CN116934722A (en) Small intestine micro-target detection method based on self-correction coordinate attention
CN114764855A (en) Intelligent cystoscope tumor segmentation method, device and equipment based on deep learning
Liu et al. Prior-based 3D U-Net: A model for knee-cartilage segmentation in MRI images
Chen et al. A new classification method in ultrasound images of benign and malignant thyroid nodules based on transfer learning and deep convolutional neural network
CN113936006A (en) Segmentation method and device for processing high-noise low-quality medical image
CN113796850A (en) Parathyroid MIBI image analysis system, computer device, and storage medium
Ahmed et al. An appraisal of the performance of AI tools for chronic stroke lesion segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination