CN113159223A - Carotid artery ultrasonic image identification method based on self-supervision learning - Google Patents
- Publication number
- CN113159223A (application CN202110532794.6A)
- Authority
- CN
- China
- Prior art keywords
- carotid artery
- ultrasound image
- data set
- category
- label
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING; G06F18/00—Pattern recognition; G06F18/20—Analysing; G06F18/24—Classification techniques; G06F18/243—Classification techniques relating to the number of classes
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS; G06N3/00—Computing arrangements based on biological models; G06N3/02—Neural networks; G06N3/04—Architecture, e.g. interconnection topology; G06N3/045—Combinations of networks
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS; G06N3/00—Computing arrangements based on biological models; G06N3/02—Neural networks; G06N3/08—Learning methods
Abstract
The invention discloses a carotid artery ultrasound image identification method based on self-supervised learning. The method comprises the following steps: (1) collecting and preprocessing carotid artery ultrasound images; (2) expanding the data set and generating corresponding pseudo labels from the preprocessed ultrasound image data set using an auxiliary (pretext) task of self-supervised learning; (3) loading the new ultrasound image data set obtained in step (2) into a neural network for training, and saving the learned optimal network weight parameters; (4) transferring the network weight parameters to a target neural network, training it on the preprocessed ultrasound image data set obtained in step (1) to obtain the optimal model of the target neural network, and evaluating it on the test set to obtain the final test accuracy. The method applies self-supervised learning to carotid artery ultrasound images to extract both surface variations and internal features, and provides a quantitative analysis method for predicting lesion areas.
Description
Technical Field
The invention belongs to the intersection of computer technology and medical imaging, and particularly relates to a method for classifying and identifying ultrasound images.
Background
Ultrasound imaging originated in the 20th century and plays a role in medical diagnosis that is difficult to replace, owing to advantages such as real-time image generation, absence of ionizing radiation, and being non-invasive and painless. In a medical ultrasound examination, echoes of sound waves reflected back to the probe from the interfaces between different tissues are processed by the ultrasound host into digital images for clinical judgment. In clinical practice, medical ultrasound such as echocardiography, breast ultrasound, abdominal ultrasound, and carotid ultrasound is widely used in specialized examinations. As one of the most commonly used imaging modalities, ultrasound has been recognized as a widespread and effective screening and diagnostic tool by clinicians and radiologists.
Ultrasound image classification and identification is one of the most important basic tasks in the field of medical image analysis. At present, traditional ultrasound diagnosis mainly relies on manual judgment and depends strongly on the clinical experience of doctors. With the popularization of medical imaging, more and more medical images need to be read, the number of clinicians is far smaller than the amount of image data, and the pressure on doctors to process image data grows day by day. In addition, the appearances of benign and malignant nodules overlap in ultrasound images, and the ultrasound image itself suffers from high noise and low resolution, all of which easily cause misdiagnosis and missed diagnosis. With the development of multidisciplinary applications, computer-aided diagnosis has begun to enter the medical imaging industry, making diagnostic evaluation and treatment guidance more objective, accurate, and intelligent through automatic ultrasound image analysis.
In recent years, machine learning and artificial intelligence techniques have developed rapidly and play an important role in medical fields such as computer-aided diagnosis and image-guided therapy. Compared with traditional methods, deep learning removes manual steps such as target detection, target segmentation, and feature extraction, and directly learns a complex parametric model from input images and image labels, which is then used to judge newly input images. Although deep learning has accelerated medical image analysis, ultrasound images have poor imaging quality, low resolution and contrast, severe speckle noise and artifacts, and lesion areas with complex and changeable shapes, all of which make classification and diagnosis harder. In addition, no large-scale labeled ultrasound image data set has been made public, which hampers research on classification algorithms. The progress of deep learning in computer vision tasks depends to a great extent on massive training samples, so the recognition accuracy of supervised deep learning algorithms on small-sample data sets remains low. Due to the particularity and privacy of medical ultrasound images, most current ultrasound image data sets are private; the public ones contain only about 200 to 500 images, and noise-free annotated data for deep learning training is lacking. This is a bottleneck for applying deep learning to small-sample medical ultrasound images.
For small-sample ultrasound data sets, to save the cost of collecting and annotating large-scale data sets, a self-supervised learning approach is adopted to solve this problem. Self-supervised learning is a form of unsupervised learning: it can learn effective feature representations from unlabeled data without any manually annotated label information. To learn visual features from unlabeled data, one solution is to design various pretext (auxiliary) tasks for the network to solve, so that the network is trained on the objective functions of these tasks and learns features in the process. Researchers have proposed a variety of self-supervised pretext tasks, including colorizing grayscale images, image inpainting, and jigsaw puzzles. These pretext tasks share two characteristics: the convolutional layers must capture the visual features of the image or video to solve the task, and the pseudo labels for the task can be generated automatically from attributes of the image or video. Such self-supervised methods can expand the original ultrasound data set and assign pseudo labels, and the training of the neural network improves as the amount of data increases.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a carotid artery ultrasound image identification method based on self-supervised learning.
The technical scheme of the invention is a carotid artery ultrasound image identification method based on self-supervised learning, comprising the following steps:
step 1: acquiring carotid artery ultrasonic image data, and preprocessing the data to obtain a preprocessed carotid artery ultrasonic image data set and a corresponding label set;
step 2: for the preprocessed carotid artery ultrasound image data set obtained in step 1, use an auxiliary task of self-supervised learning to partition each ultrasound image of the data set into blocks along the height and width directions, shuffle the order of the partitioned ultrasound image sample blocks, recombine the shuffled blocks into new ultrasound images, merge the recombined ultrasound image data set with the unshuffled ultrasound image data set into a new expanded data set, and label the expanded data set with a corresponding label set;
step 3: load the expanded carotid artery ultrasound image data set obtained in step 2 into a ResNeXt network and train it on the binary classification task of judging whether a carotid artery ultrasound sample is correct or scrambled; the loss value of the loss function is updated continuously over multiple training iterations, and the optimal ResNeXt network weight parameters are obtained and saved after training;
step 4: transfer the optimal network weight parameters obtained in step 3 into the ResNeXt network of the target neural network to initialize its weights; train it on the preprocessed carotid artery ultrasound image data set obtained in step 1, continuously updating the loss value of the target network's loss function during training; save the optimal weight parameters of the target ResNeXt network after training; and load the saved weight parameters to classify and identify the test ultrasound image samples, obtaining the final ultrasound image identification result.
Technical effects: even with a small number of samples, the self-supervised learning method can increase the number of samples and assign pseudo labels, so that after neural network training the feature information of carotid artery ultrasound images is learned better. Compared with the prior art, the carotid artery ultrasound image identification method based on self-supervised learning effectively improves the accuracy of carotid artery ultrasound image identification when only a small number of labeled samples are available.
Drawings
Fig. 1 is a flowchart of a carotid artery ultrasound image identification method based on self-supervised learning according to an embodiment of the present invention.
FIG. 2 is a graph comparing the accuracy of a test on a carotid artery ultrasound image data set in accordance with an embodiment of the present invention.
Detailed Description
The invention provides a carotid artery ultrasound image identification method based on self-supervised learning, which mainly addresses the small data volume of small-sample ultrasound image data sets and the cost of manual annotation. The method doubles the size of the ultrasound image data set through self-supervised learning, generates corresponding pseudo labels, and trains on them to obtain the image recognition accuracy. The results obtained by this method are more scientific and accurate.
The process of the invention can be implemented with computer software. Referring to fig. 1, the embodiment takes a carotid artery ultrasound image as an example to describe the process of the invention in detail, as follows:
step 1: obtain original carotid artery ultrasound images and, through preprocessing, obtain the preprocessed carotid artery ultrasound image data set and the corresponding label set;
the specific implementation of the examples is illustrated below:
the original carotid artery ultrasound image data set in step 1 is:
X = [x_1, …, x_l, …, x_m, …, x_n]
wherein X represents the original carotid ultrasound image data set; the first l samples are ultrasound samples of the first category (calcified hard plaque), the next m−l are of the second category (soft plaque), the last n−m are of the third category (mixed plaque), and n is the total number of carotid ultrasound image samples;
the label set of the original carotid artery ultrasound image in the step 1 is as follows:
Y = [y_1, …, y_l, …, y_m, …, y_n]
y_i ∈ {0, 1, 2}
i ∈ [1, n]
wherein Y represents the label set corresponding to the original carotid artery ultrasound image data set; the first l labels denote the first category (calcified hard plaque), the next m−l labels the second category (soft plaque), and the last n−m labels the third category (mixed plaque); y_i is the label of the ith carotid ultrasound image sample, with y_i = 0 denoting the first category (calcified hard plaque), y_i = 1 the second category (soft plaque), and y_i = 2 the third category (mixed plaque); n is the total number of labels;
The preprocessing in step 1 comprises: cropping the ROI (region of interest) from the original carotid artery ultrasound image and saving it; the saved image is the ultrasound image representing the carotid artery lesion area;
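The ROI cropping step can be sketched with NumPy array slicing. The function name `crop_roi` and the box coordinates below are illustrative assumptions, since the patent does not specify how the ROI is delineated (in practice the box would come from a clinician's annotation):

```python
import numpy as np

def crop_roi(image: np.ndarray, top: int, left: int, height: int, width: int) -> np.ndarray:
    """Crop and return the region of interest (the lesion area) from one frame."""
    return image[top:top + height, left:left + width].copy()

# Toy stand-in for a grayscale carotid ultrasound frame.
frame = np.arange(100 * 120, dtype=np.float32).reshape(100, 120)
# Hypothetical ROI box; real coordinates would come from manual annotation.
roi = crop_roi(frame, top=10, left=20, height=64, width=64)
```

The `.copy()` keeps the saved ROI independent of the original frame, matching the "cut and store" wording of the preprocessing step.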
the preprocessed carotid artery ultrasonic image data set in the step 1 is as follows:
X′ = [x′_1, …, x′_l, …, x′_m, …, x′_n]
wherein X′ is the preprocessed carotid artery ultrasound image data set of lesion regions; the first l samples are preprocessed lesion-region ultrasound samples of the first category (calcified hard plaque), the next m−l of the second category (soft plaque), the last n−m of the third category (mixed plaque), and n is the total number of samples.
the specific implementation of the examples is illustrated below:
the partitioned carotid artery ultrasound image sample block in the step 2 is represented as follows:
z_i(α, β) = x′_i(h_α, w_β)
wherein x′_i is the preprocessed lesion-region carotid ultrasound sample i from step 1; height and width are the height and width of the ultrasound sample; a is the number of blocks in the height direction and b the number of blocks in the width direction; h_α denotes the index segment of the α-th block along the height of x′_i, and w_β the index segment of the β-th block along the width of x′_i; z_i(α, β) denotes the small ultrasound image sample block at the α-th position along the height and the β-th position along the width of x′_i, i.e., the sample block in row α, column β of the grid;
In step 2, the original order of the partitioned ultrasound image sample blocks is shuffled:
α ∈ random([1, a])
β ∈ random([1, b])
where random(·) denotes randomly permuting the elements of the given list;
Each shuffled and recombined new carotid artery ultrasound image sample in step 2 is:
r′_i = {z_i(α, β)}, α ∈ [1, a], β ∈ [1, b]
where r′_i is the recombined carotid ultrasound sample of the ith lesion region;
A new augmented data set is composed from x′_i and r′_i:
X″ = [x′_1, …, x′_l, …, x′_m, …, x′_n, r′_1, …, r′_l, …, r′_m, …, r′_n]
wherein X″ is the augmented carotid artery ultrasound image data set of step 2; [x′_1, …, x′_n] are the correct ultrasound image samples in the augmented data set, of which the first l belong to the first category (calcified hard plaque), the next m−l to the second category (soft plaque), and the last n−m to the third category (mixed plaque); [r′_1, …, r′_n] are the erroneous (scrambled) ultrasound image samples, with the same category division: the first l of the first category, the next m−l of the second, and the last n−m of the third;
The label of a correct preprocessed carotid artery ultrasound sample is set to 1, and the label of an erroneous, shuffled-and-recombined sample is set to 0, forming the label set of the expanded data set:
Y′ = [y′_1, …, y′_2n], with y′_i = 1 for i ∈ [1, n] and y′_i = 0 for i ∈ [n+1, 2n]
where Y′ represents the label set of the augmented data set, y′_i = 1 is the label of a correct ultrasound image sample, and y′_i = 0 is the label of an erroneous ultrasound image sample.
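The whole of step 2 (partition each image into an a×b grid, permute the blocks, reassemble, and pseudo-label the result) can be sketched in NumPy as follows. The function `scramble` and the 3×3 grid are illustrative assumptions, since the patent does not fix a and b:

```python
import numpy as np

def scramble(image: np.ndarray, a: int, b: int, rng: np.random.Generator) -> np.ndarray:
    """Split an image into an a-by-b grid of blocks, permute the blocks
    randomly, and reassemble them into a new image of the same size."""
    h, w = image.shape
    assert h % a == 0 and w % b == 0, "image must divide evenly into blocks"
    blocks = [image[i * (h // a):(i + 1) * (h // a),
                    j * (w // b):(j + 1) * (w // b)]
              for i in range(a) for j in range(b)]
    order = rng.permutation(len(blocks))          # random block order
    rows = [np.hstack([blocks[order[i * b + j]] for j in range(b)])
            for i in range(a)]
    return np.vstack(rows)

rng = np.random.default_rng(0)
x = np.arange(36, dtype=float).reshape(6, 6)      # toy 6x6 "ultrasound" sample
r = scramble(x, a=3, b=3, rng=rng)

# Augmented set: original labelled 1 (correct), scrambled copy labelled 0 (wrong).
images = [x, r]
pseudo_labels = [1, 0]
```

The scrambled image keeps exactly the same pixels as the original, only rearranged, which is what makes the correct/wrong binary pretext task non-trivial for the network.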
the specific implementation of the examples is illustrated below:
The data set used by the ResNeXt101 neural network for the auxiliary task of self-supervised learning in step 3 is the expanded carotid artery ultrasound image data set, with the corresponding labels set to 0 and 1:
X″ = [x′_1, …, x′_l, …, x′_m, …, x′_n, r′_1, …, r′_l, …, r′_m, …, r′_n]
The loss function of the ResNeXt101 neural network in step 3 is the cross-entropy loss:
Loss = −(1/2n) · Σ_{i=1}^{2n} [ y′_i · log σ(W·f(x″_i) + B) + (1 − y′_i) · log(1 − σ(W·f(x″_i) + B)) ]
where f(·) is the convolutional feature function, representing the sample data after passing through the convolution layers, before it enters the fully-connected layer; W is the weight parameter of the fully-connected layer and B its bias parameter; y′_i = 1 is the label of a correct ultrasound image sample and y′_i = 0 the label of an erroneous one;
The specific network parameters in step 3 are as follows: the expanded data set is divided into a training set and a test set in the ratio 8:2; the network is a 101-layer ResNeXt neural network; the optimizer is Adam; the learning rate (learn_rate) is set to 0.0001, the training batch size (batch_size) to 32, and the number of training epochs (epoch) to 30; the optimal network weight parameters obtained after iterative training are the optimal weight parameters learned from the auxiliary task of self-supervised learning.
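The 8:2 division of the expanded data set can be sketched as a shuffled split; `split_dataset`, the sample names, and the seed are illustrative, not part of the patent:

```python
import random

def split_dataset(samples, labels, train_ratio=0.8, seed=42):
    """Shuffle (sample, label) pairs and split them into train/test lists."""
    pairs = list(zip(samples, labels))
    random.Random(seed).shuffle(pairs)  # deterministic shuffle for reproducibility
    cut = int(len(pairs) * train_ratio)
    return pairs[:cut], pairs[cut:]

# Toy stand-ins for the expanded ultrasound set and its 0/1 pseudo labels.
samples = [f"img_{i}" for i in range(10)]
labels = [i % 2 for i in range(10)]
train, test = split_dataset(samples, labels)
```

Shuffling before cutting keeps the correct/scrambled samples mixed across both splits rather than concentrated at one end of the list.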
Step 4: transfer the optimal weight parameters learned from the auxiliary task of self-supervised learning into a ResNeXt101 network to initialize its weights; construct the loss function of the ResNeXt101 network from the label set corresponding to the preprocessed carotid artery ultrasound image data set obtained in step 1; train on the preprocessed carotid artery ultrasound image data set obtained in step 1, continuously updating the loss value of the target neural network's loss function; save the optimal weight parameters of the ResNeXt101 network after training; and load the saved weight parameters to classify and identify the test ultrasound image samples, obtaining the final ultrasound image identification result.
The specific implementation of the examples is illustrated below:
The preprocessed carotid artery ultrasound image data set from step 1, loaded into the target neural network (the ResNeXt101 network) in step 4, is
X′ = [x′_1, …, x′_l, …, x′_m, …, x′_n]
wherein X′ is the preprocessed carotid artery ultrasound image data set of lesion regions; the first l samples are preprocessed lesion-region ultrasound samples of the first category (calcified hard plaque), the next m−l of the second category (soft plaque), the last n−m of the third category (mixed plaque), and n is the total number of samples;
the label set corresponding to the preprocessed carotid artery ultrasound image in the step 4 is as follows:
Y = [y_1, …, y_l, …, y_m, …, y_n]
y_i ∈ {0, 1, 2}
i ∈ [1, n]
wherein Y represents the label set corresponding to the preprocessed carotid artery ultrasound image data set; the first l labels denote the first category (calcified hard plaque), the next m−l the second category (soft plaque), and the last n−m the third category (mixed plaque); y_i is the label of the ith carotid ultrasound image sample, with y_i = 0 denoting the first category, y_i = 1 the second, and y_i = 2 the third; n is the total number of labels;
The specific network parameters in step 4 are as follows: the preprocessed carotid artery ultrasound data set obtained in step 1 is divided into a training set, a validation set, and a test set in the ratio 6:2:2; the target neural network is a 101-layer ResNeXt classification network; the optimizer is Adam; the learning rate (learn_rate) is set to 0.0001, the training batch size (batch_size) to 16, and the number of training epochs (epoch) to 100;
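The transfer-learning initialization of step 4 amounts to copying the pretext-task weights into the target network while keeping the target's own classification head (the pretext head is binary, the target head has three plaque classes). A minimal framework-agnostic sketch, with hypothetical parameter names standing in for real layer tensors:

```python
def transfer_weights(pretext_weights: dict, target_weights: dict,
                     skip_prefix: str = "fc") -> dict:
    """Initialize the target network from the pretext-task weights, but keep
    the target's own head (keys starting with skip_prefix), since the pretext
    head is 2-class while the target head is 3-class."""
    out = dict(target_weights)
    for name, value in pretext_weights.items():
        if name in out and not name.startswith(skip_prefix):
            out[name] = value          # copy backbone weights
    return out

# Hypothetical parameter dictionaries (stand-ins for real weight tensors).
pretext = {"conv1.weight": "w_pretext_conv", "fc.weight": "w_pretext_fc_2cls"}
target = {"conv1.weight": "w_random", "fc.weight": "w_random_fc_3cls"}
init = transfer_weights(pretext, target)
```

In a PyTorch implementation the same effect is typically achieved by filtering a `state_dict` before loading it; the dictionary form above just makes the copy-backbone/keep-head rule explicit.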
The loss function used by the target neural network in step 4 is the cross-entropy loss:
Loss = −(1/n) · Σ_{i=1}^{n} log softmax(W·f(x′_i) + B)[y_i]
where f(·) is the convolutional feature function, representing the sample data after passing through the convolution layers, before it enters the fully-connected layer; W is the weight parameter of the fully-connected layer and B its bias parameter; y_i is the label from the label set corresponding to the preprocessed lesion-region carotid artery ultrasound image data set;
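The cross-entropy loss above can be checked numerically with a small NumPy sketch; this is a generic softmax cross-entropy, with precomputed logits standing in for W·f(x′_i) + B:

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Mean cross-entropy between softmax(logits) and integer labels in {0, 1, 2}."""
    probs = softmax(logits)
    return float(-np.mean(np.log(probs[np.arange(len(labels)), labels])))

# Two samples, three plaque classes; both predictions confident and correct,
# so the loss should be small.
logits = np.array([[4.0, 0.0, 0.0],
                   [0.0, 4.0, 0.0]])
labels = np.array([0, 1])
loss = cross_entropy(logits, labels)
```

A useful sanity check: uniform logits over 3 classes give a loss of exactly log 3, the entropy of a random guess.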
the optimizer used by the objective neural network ResNeXt101 layer classification network in the step 4 is an Adam optimizer for updating parameters, optimal network weight parameters of the objective neural network are obtained after multiple iterative training, the optimized network weight parameters are loaded into the test set for classification test, and the final classification accuracy of the carotid artery ultrasound image is obtained, referring to FIG. 2, the embodiment takes the carotid artery ultrasound image as an example for specifically displaying the test result of the invention.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments or alternatives may be employed by those skilled in the art without departing from the spirit or ambit of the invention as defined in the appended claims.
Claims (5)
1. A carotid artery ultrasonic image identification method based on self-supervision learning is characterized by comprising the following steps:
step 1: acquiring carotid artery ultrasonic image data, and preprocessing the data to obtain a preprocessed carotid artery ultrasonic image data set and a corresponding label set;
step 2: for the preprocessed carotid artery ultrasound image data set obtained in step 1, use an auxiliary task of self-supervised learning to partition each ultrasound image of the data set into blocks along the height and width directions, shuffle the order of the partitioned ultrasound image sample blocks, recombine the shuffled blocks into new ultrasound images, merge the recombined ultrasound image data set with the unshuffled ultrasound image data set into a new expanded data set, and label the expanded data set with a corresponding label set;
step 3: load the expanded carotid artery ultrasound image data set obtained in step 2 into a ResNeXt network and train it on the binary classification task of judging whether a carotid artery ultrasound sample is correct or scrambled; the loss value of the loss function is updated continuously over multiple training iterations, and the optimal ResNeXt network weight parameters are obtained and saved after training;
step 4: transfer the optimal network weight parameters obtained in step 3 into the ResNeXt network of the target neural network to initialize its weights; train it on the preprocessed carotid artery ultrasound image data set obtained in step 1, continuously updating the loss value of the target network's loss function during training; save the optimal weight parameters of the target ResNeXt network after training; and load the saved weight parameters to classify and identify the test ultrasound image samples, obtaining the final ultrasound image identification result.
2. The method for ultrasound image feature identification based on self-supervised learning as claimed in claim 1, wherein the original carotid artery ultrasound image data set of step 1 is:
X = [x_1, …, x_l, …, x_m, …, x_n]
wherein X represents the original carotid ultrasound image data set; the first l samples are ultrasound samples of the first category (calcified hard plaque), the next m−l are of the second category (soft plaque), the last n−m are of the third category (mixed plaque), and n is the total number of carotid ultrasound image samples;
the label set of the original carotid artery ultrasound image in the step 1 is as follows:
Y = [y_1, …, y_l, …, y_m, …, y_n]
y_i ∈ {0, 1, 2}
i ∈ [1, n]
wherein Y represents the label set corresponding to the original carotid artery ultrasound image data set; the first l labels denote the first category (calcified hard plaque), the next m−l labels the second category (soft plaque), and the last n−m labels the third category (mixed plaque); y_i is the label of the ith carotid ultrasound image sample, with y_i = 0 denoting the first category, y_i = 1 the second, and y_i = 2 the third; n is the total number of labels;
The preprocessing in step 1 comprises: cropping the ROI (region of interest) from the original carotid artery ultrasound image and saving it; the saved image is the ultrasound image representing the carotid artery lesion area;
the preprocessed carotid artery ultrasonic image data set in the step 1 is as follows:
X′ = [x′_1, …, x′_l, …, x′_m, …, x′_n]
wherein X′ is the preprocessed carotid artery ultrasound image data set of lesion regions; the first l samples are preprocessed lesion-region ultrasound samples of the first category (calcified hard plaque), the next m−l of the second category (soft plaque), the last n−m of the third category (mixed plaque), and n is the total number of samples.
3. The method for ultrasound image feature identification based on self-supervised learning as claimed in claim 1, wherein the sample blocks of the carotid artery ultrasound image in step 2 are represented as:
z_i(α, β) = x′_i(h_α, w_β)
wherein x′_i is the preprocessed carotid artery ultrasound sample of the i-th lesion region from step 1, height is the height of the ultrasound sample, width is the width of the ultrasound sample, a is the number of blocks in the height direction, b is the number of blocks in the width direction, h_α represents the index segment of the α-th small sample block of x′_i in the height direction, w_β represents the index segment of the β-th small sample block of x′_i in the width direction, and z_i(α, β) represents the small ultrasound image sample block at the α-th position in the height direction and the β-th position in the width direction of the preprocessed carotid artery ultrasound sample of the i-th lesion region, i.e. the sample block in row α, column β of the i-th lesion-region carotid ultrasound sample;
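The block indexing z_i(α, β) = x′_i(h_α, w_β) can be sketched minimally as below, assuming height and width divide evenly by a and b (the grid size and image values are illustrative):

```python
import numpy as np

def partition_blocks(img, a, b):
    """Split img (height x width) into an a x b grid of sample blocks.
    blocks[alpha][beta] corresponds to z_i(alpha+1, beta+1) in the claim's 1-based indices."""
    height, width = img.shape
    bh, bw = height // a, width // b  # assumes even divisibility
    return [[img[al * bh:(al + 1) * bh, be * bw:(be + 1) * bw]
             for be in range(b)] for al in range(a)]

img = np.arange(36).reshape(6, 6)       # toy 6x6 "ultrasound sample"
blocks = partition_blocks(img, a=3, b=3)  # 3x3 grid of 2x2 blocks
```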
in step 2, the original order of the partitioned ultrasound image sample blocks is disturbed:
α∈random([1,a])
β∈random([1,b])
wherein the random function randomly permutes the elements of the list passed to it;
each out-of-order recombined new carotid artery ultrasound image sample in step 2 is:
r′_i = {z_i(α, β)}, α ∈ [1, a], β ∈ [1, b]
wherein r′_i is the recombined carotid ultrasound sample of the i-th lesion region;
x′_i and r′_i compose a new augmented data set:
X″ = [x′_1, …, x′_l, …, x′_m, …, x′_n, r′_1, …, r′_l, …, r′_m, …, r′_n]
wherein X″ is the augmented carotid artery ultrasound image data set of step 2; [x′_1, …, x′_l, …, x′_m, …, x′_n] represents the correct ultrasound image samples in the augmented data set, of which the first l are ultrasound samples of the first category (calcified hard plaque), the next m-l of the second category (soft plaque), and the last n-m of the third category (mixed plaque); [r′_1, …, r′_l, …, r′_m, …, r′_n] represents the erroneous ultrasound image samples in the augmented data set, of which the first l are ultrasound samples of the first category (calcified hard plaque), the next m-l of the second category (soft plaque), and the last n-m of the third category (mixed plaque);
the labels of the correct preprocessed carotid artery ultrasound image samples are set to 1, and the labels of the erroneous out-of-order recombined carotid artery ultrasound image samples are set to 0, forming the label set of the augmented data set: Y″ = [1, …, 1, 0, …, 0], where the first n labels are 1 and the last n labels are 0;
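The shuffle-and-recombine augmentation above can be sketched as follows; the label convention (1 = correct sample, 0 = shuffled sample) follows the claim, while the array sizes and the 2x2 grid are illustrative assumptions:

```python
import random
import numpy as np

rng = np.random.default_rng(0)

def shuffle_blocks(img, a, b):
    """Partition img into a x b blocks, randomly permute them, and reassemble."""
    h, w = img.shape
    bh, bw = h // a, w // b
    blocks = [img[al * bh:(al + 1) * bh, be * bw:(be + 1) * bw]
              for al in range(a) for be in range(b)]
    random.shuffle(blocks)  # disturb the original block order
    rows = [np.hstack(blocks[r * b:(r + 1) * b]) for r in range(a)]
    return np.vstack(rows)

n, H, W = 4, 8, 8
X_prime = rng.random((n, H, W))                                  # correct samples x'_i
R_prime = np.stack([shuffle_blocks(x, 2, 2) for x in X_prime])   # shuffled samples r'_i
X_double = np.concatenate([X_prime, R_prime])                    # augmented set X''
Y_double = np.array([1] * n + [0] * n)                           # 1 = correct, 0 = shuffled
```

Because the blocks are only reordered, each shuffled sample keeps exactly the same pixel values as its source image, which is what makes the "correct vs. shuffled" pretext task non-trivial.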
4. The ultrasound image feature recognition method based on self-supervised learning as claimed in claim 1, wherein the data set used by the ResNeXt neural network in the auxiliary task of self-supervised learning in step 3 is the augmented data set, with the labels set to 0 and 1:
X″ = [x′_1, …, x′_l, …, x′_m, …, x′_n, r′_1, …, r′_l, …, r′_m, …, r′_n]
the loss function of the ResNeXt101 neural network in step 3 is a cross-entropy loss function:
Loss = -(1/2n) · Σ_{i=1}^{2n} [ ŷ_i · log σ(W·f(x″_i) + B) + (1 - ŷ_i) · log(1 - σ(W·f(x″_i) + B)) ]
wherein f(·) is the convolution calculation function, representing the sample data after multiple convolution-layer operations, before input into the fully connected layer; W is the weight parameter of the fully connected layer; B is the bias parameter of the fully connected layer; ŷ_i = 1 is the label of a correct ultrasound image sample and ŷ_i = 0 is the label of an erroneous ultrasound image sample;
the specific network parameters in step 3 are: the augmented data set is divided into a training set and a test set at a ratio of 8:2; the network is a ResNeXt101-layer neural network, the optimizer is an Adam optimizer, the learning rate (learn_rate) is set to 0.0001, the training batch size (batch_size) is set to 32, and the number of training epochs (epoch) is set to 30; after multiple iterations of training, the optimal network weight parameters are obtained, i.e. the optimal weight parameters of the auxiliary task of self-supervised learning.
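The 8:2 split and the hyperparameters fixed by this claim can be sketched framework-agnostically; the ResNeXt101 model itself is not reproduced here, and the toy data sizes are assumptions:

```python
import numpy as np

def train_test_split_8_2(X, Y, seed=0):
    """Split the augmented data set 8:2 into training and test subsets."""
    idx = np.random.default_rng(seed).permutation(len(X))
    cut = int(0.8 * len(X))
    tr, te = idx[:cut], idx[cut:]
    return X[tr], Y[tr], X[te], Y[te]

# Hyperparameters fixed by the claim (auxiliary / pretext task)
config = {"network": "ResNeXt101", "optimizer": "Adam",
          "learn_rate": 1e-4, "batch_size": 32, "epoch": 30}

X = np.random.rand(100, 4)             # toy stand-in for the augmented set X''
Y = np.random.randint(0, 2, 100)       # binary pretext labels
Xtr, Ytr, Xte, Yte = train_test_split_8_2(X, Y)
```

With a real framework, `config` would parameterize the optimizer and data loaders; shuffling before the cut keeps both subsets representative of the correct and shuffled samples.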
5. The method as claimed in claim 1, wherein the lesion-region carotid artery ultrasound image data set loaded into the target task network in step 4 is:
X′ = [x′_1, …, x′_l, …, x′_m, …, x′_n]
wherein X′ is the preprocessed lesion-region carotid artery ultrasound image data set, the first l are preprocessed lesion-region ultrasound samples of the first category (calcified hard plaque), the next m-l are preprocessed lesion-region ultrasound samples of the second category (soft plaque), the last n-m are preprocessed lesion-region ultrasound samples of the third category (mixed plaque), and n is the total number of samples in the carotid artery ultrasound image data set;
the label set corresponding to the preprocessed carotid artery ultrasound image in step 4 is:
Y = [y_1, …, y_l, …, y_m, …, y_n]
y_i ∈ {0, 1, 2}
i ∈ [1, n]
wherein Y represents the label set corresponding to the original carotid artery ultrasound image data set, the first l labels represent the first category (calcified hard plaque), the next m-l labels the second category (soft plaque), and the last n-m labels the third category (mixed plaque); y_i represents the label of the i-th carotid ultrasound image sample, where y_i = 0 denotes the first category (calcified hard plaque) label, y_i = 1 denotes the second category (soft plaque) label, y_i = 2 denotes the third category (mixed plaque) label, and n is the total number of labels in the carotid artery ultrasound image label set;
the specific network parameters in step 4 are: the preprocessed carotid artery ultrasound data set obtained in step 1 is divided into a training set, a validation set and a test set at a ratio of 6:2:2; the target neural network is a ResNeXt101-layer classification network, the optimizer is an Adam optimizer, the learning rate (learn_rate) is set to 0.0001, the training batch size (batch_size) is set to 16, and the number of training epochs (epoch) is set to 100;
the loss function used by the target neural network in step 4 is a cross-entropy loss function:
Loss = -(1/n) · Σ_{i=1}^{n} log softmax(W·f(x′_i) + B)[y_i]
wherein f(·) is the convolution calculation function, representing the sample data after multiple convolution-layer operations, before input into the fully connected layer; W is the weight parameter of the fully connected layer; B is the bias parameter of the fully connected layer; and y_i is the corresponding label from the label set of the preprocessed lesion-region carotid artery ultrasound image data set;
in step 4, the optimizer of the target ResNeXt101-layer classification network is used to update the parameters; after multiple iterations of training, the optimal network weight parameters of the target neural network are obtained; the optimized network weight parameters are then loaded and the test set is classified, yielding the final classification accuracy for the carotid artery ultrasound images.
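The target task's three-class cross-entropy loss can be sketched in numpy, with `features` standing in for the convolutional output f(x′_i) before the fully connected layer (W, B as in the claim; all shapes are illustrative assumptions):

```python
import numpy as np

def cross_entropy(features, W, B, labels):
    """Cross-entropy of softmax(features @ W + B) against integer labels y_i in {0, 1, 2}."""
    logits = features @ W + B                       # fully connected layer
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

rng = np.random.default_rng(0)
feats = rng.random((6, 8))                  # f(x'_i): assumed 8-dim features, 6 samples
W, B = rng.random((8, 3)), np.zeros(3)      # fully connected weights and bias, 3 classes
y = np.array([0, 1, 2, 0, 1, 2])            # plaque-category labels
loss = cross_entropy(feats, W, B, y)
```

The loss approaches zero as the logit of the correct class dominates, which is the signal the Adam optimizer descends during fine-tuning.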
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110532794.6A CN113159223A (en) | 2021-05-17 | 2021-05-17 | Carotid artery ultrasonic image identification method based on self-supervision learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113159223A true CN113159223A (en) | 2021-07-23 |
Family
ID=76876384
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110532794.6A Pending CN113159223A (en) | 2021-05-17 | 2021-05-17 | Carotid artery ultrasonic image identification method based on self-supervision learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113159223A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108647741A (en) * | 2018-05-18 | 2018-10-12 | 湖北工业大学 | A kind of image classification method and system based on transfer learning |
CN109086836A (en) * | 2018-09-03 | 2018-12-25 | 淮阴工学院 | A kind of automatic screening device of cancer of the esophagus pathological image and its discriminating method based on convolutional neural networks |
CN109858563A (en) * | 2019-02-22 | 2019-06-07 | 清华大学 | Self-supervisory representative learning method and device based on transformation identification |
CN110852350A (en) * | 2019-10-21 | 2020-02-28 | 北京航空航天大学 | Pulmonary nodule benign and malignant classification method and system based on multi-scale migration learning |
CN111401320A (en) * | 2020-04-15 | 2020-07-10 | 支付宝(杭州)信息技术有限公司 | Privacy-protecting biological characteristic image processing method and device |
CN111898696A (en) * | 2020-08-10 | 2020-11-06 | 腾讯云计算(长沙)有限责任公司 | Method, device, medium and equipment for generating pseudo label and label prediction model |
CN112613502A (en) * | 2020-12-28 | 2021-04-06 | 深圳壹账通智能科技有限公司 | Character recognition method and device, storage medium and computer equipment |
CN112651916A (en) * | 2020-12-25 | 2021-04-13 | 上海交通大学 | Method, system and medium for pre-training of self-monitoring model |
Non-Patent Citations (2)
Title |
---|
WEI MA 等: "Plaque Recognition of Carotid Ultrasound Images Based on Deep Residual Network", 《2019 IEEE 8TH JOINT INTERNATIONAL INFORMATION TECHNOLOGY AND ARTIFICIAL INTELLIGENCE CONFERENCE (ITAIC)》 * |
ZHAO YUAN et al.: "A Carotid Plaque Ultrasound Image Recognition Method Based on Deep Learning", 《CHINA MEDICAL DEVICE INFORMATION》 * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023061104A1 (en) * | 2021-10-13 | 2023-04-20 | 山东大学 | Carotid artery ultrasound report generation system based on multi-modal information |
CN113962995A (en) * | 2021-12-21 | 2022-01-21 | 北京鹰瞳科技发展股份有限公司 | Cataract model training method and cataract identification method |
CN113962995B (en) * | 2021-12-21 | 2022-04-19 | 北京鹰瞳科技发展股份有限公司 | Cataract model training method and cataract identification method |
CN114882301A (en) * | 2022-07-11 | 2022-08-09 | 四川大学 | Self-supervision learning medical image identification method and device based on region of interest |
CN114882301B (en) * | 2022-07-11 | 2022-09-13 | 四川大学 | Self-supervision learning medical image identification method and device based on region of interest |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11101033B2 (en) | Medical image aided diagnosis method and system combining image recognition and report editing | |
CN109166133B (en) | Soft tissue organ image segmentation method based on key point detection and deep learning | |
CN113159223A (en) | Carotid artery ultrasonic image identification method based on self-supervision learning | |
CN107993221B (en) | Automatic identification method for vulnerable plaque of cardiovascular Optical Coherence Tomography (OCT) image | |
CN113902761B (en) | Knowledge distillation-based unsupervised segmentation method for lung disease focus | |
Li et al. | Automatic lumbar spinal MRI image segmentation with a multi-scale attention network | |
CN1934589A (en) | Systems and methods providing automated decision support for medical imaging | |
CN105760874A (en) | CT image processing system and method for pneumoconiosis | |
CN106529188A (en) | Image processing method applied to surgical navigation | |
CN112820399A (en) | Method and device for automatically diagnosing benign and malignant thyroid nodules | |
CN114494215A (en) | Transformer-based thyroid nodule detection method | |
Du et al. | Boosting dermatoscopic lesion segmentation via diffusion models with visual and textual prompts | |
CN114519705A (en) | Ultrasonic standard data processing method and system for medical selection and identification | |
CN112686932B (en) | Image registration method for medical image, image processing method and medium | |
Yu et al. | Convolutional neural network design for breast cancer medical image classification | |
Wulaning Ayu et al. | Pixel Classification Based on Local Gray Level Rectangle Window Sampling for Amniotic Fluid Segmentation. | |
CN115409812A (en) | CT image automatic classification method based on fusion time attention mechanism | |
CN113837273A (en) | Deep semi-supervised chest X-ray image classification method based on inter-sample relation of inner product measurement | |
CN113255794A (en) | Medical image classification method based on GoogLeNet network | |
Song et al. | Abdominal multi-organ segmentation using multi-scale and context-aware neural networks | |
CN116385814B (en) | Ultrasonic screening method, system, device and medium for detection target | |
CN116309593B (en) | Liver puncture biopsy B ultrasonic image processing method and system based on mathematical model | |
CN118015021B (en) | Active domain self-adaptive cross-modal medical image segmentation method based on sliding window | |
CN114708236B (en) | Thyroid nodule benign and malignant classification method based on TSN and SSN in ultrasonic image | |
Suresh et al. | Revolutionizing Medical Imaging: The Vital Role of Diffusion Models in Modern Image Augmentation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20210723 |