CN112086197A - Mammary nodule detection method and system based on ultrasonic medicine - Google Patents

Mammary nodule detection method and system based on ultrasonic medicine

Info

Publication number
CN112086197A
Authority
CN
China
Prior art keywords
video
breast
classification
ultrasonic
module
Prior art date
Legal status
Granted
Application number
CN202010924386.0A
Other languages
Chinese (zh)
Other versions
CN112086197B (en)
Inventor
张国君
李卫斌
陈敏
王连生
陈云超
徐辉雄
Current Assignee
Xiang'an Hospital Of Xiamen University
Original Assignee
Xiang'an Hospital Of Xiamen University
Priority date
Filing date
Publication date
Application filed by Xiang'an Hospital Of Xiamen University filed Critical Xiang'an Hospital Of Xiamen University
Priority to CN202010924386.0A priority Critical patent/CN112086197B/en
Publication of CN112086197A publication Critical patent/CN112086197A/en
Application granted granted Critical
Publication of CN112086197B publication Critical patent/CN112086197B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/50 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/02 - Affine transformations
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30068 - Mammography; Breast
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30096 - Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Data Mining & Analysis (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Quality & Reliability (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The invention discloses a breast nodule detection method and system based on ultrasonic medicine, comprising the following steps: acquiring breast ultrasound video data and preprocessing the data; training the preprocessed data by a transfer learning method to establish a breast nodule detection model; and inputting the video data to be tested into the breast nodule detection model to obtain a result. The invention provides a breast nodule detection method and system based on ultrasonic medicine that take ultrasonic breast video image examination as the main means, aiming to establish an accurate method for judging whether breast tumors are benign or malignant and for pathological typing, and to solve the difficult clinical problem of dynamic identification and differentiation of breast tumors. The establishment of a highly accurate breast tumor characterization system has great clinical significance for the diagnosis and treatment of breast tumor patients.

Description

Mammary nodule detection method and system based on ultrasonic medicine
Technical Field
The invention relates to the technical field of video image processing, in particular to a breast nodule detection method and system based on ultrasonic medicine.
Background
Ultrasonic computer-aided diagnosis (CAD) can detect internal information of a tumor that cannot be perceived by the human eye, such as texture information and edge information, and provide reliable auxiliary diagnostic opinions for doctors. This not only helps relieve doctors' workload and reduce misdiagnosis caused by insufficient experience, visual fatigue and the like, but also reduces the biopsy rate of patients and relieves their pain. A breast ultrasound artificial intelligence real-time detection module used in research can automatically help the doctor find nodules during ultrasound scanning, which is significant for reducing missed diagnoses and also helps doctors with self-training of their ultrasound diagnostic ability. Combining breast ultrasound artificial intelligence equipment with telemedicine equipment is expected to bring the support of remote ultrasound experts. This has very important social value for improving the technical level of ultrasound doctors in primary hospitals and for redistributing high-quality medical resources.
The invention mainly develops a method and system for detecting the malignancy of BI-RADS category 4 and 5 breast lesions based on deep learning of ultrasonic medical video images. Combining clinical expert experience and clinicopathological diagnosis, it studies the relationship between image appearance and the benign or malignant nature and different pathological types of breast tumors, constructs an intelligent breast nodule diagnosis model with high accuracy and interpretable results, finds nodules in real time during routine ultrasound examination, diagnoses the nodules, and provides guidance for the diagnosis and treatment of patients.
Disclosure of Invention
The invention provides a breast nodule detection method and system based on ultrasonic medicine that take ultrasonic breast video image examination as the main means, aiming to establish an accurate method for judging whether breast tumors are benign or malignant and for pathological typing, and to solve the difficult clinical problem of dynamic identification and differentiation of breast tumors. The establishment of a highly accurate breast tumor characterization system has great clinical significance for the diagnosis and treatment of breast tumor patients.
In order to achieve the purpose, the invention adopts the following technical scheme:
a breast nodule detection method based on ultrasonic medicine comprises the following steps:
s1, acquiring breast ultrasound video data and preprocessing the data;
s2, training the preprocessed data by a transfer learning method, and establishing a breast nodule detection model;
s21, feature extraction: the preprocessed video data is subjected to a feature extraction network ResNet to obtain a video feature vector;
s22, linear classification: detecting the classification probability of each frame of the video by the video feature vector through a linear classification network;
s23, attention selection: obtaining the weight of each frame of the video by the video feature vector through an attention selection network;
s24, video detection: combining the classification probability given by linear classification and each frame weight provided by the attention selection module to obtain a breast nodule detection model;
and S3, inputting the video data to be tested into the breast nodule detection model, and obtaining a result.
Preferably, the feature extraction network ResNet is ResNet18 with a network depth of 18 layers. The network structure of ResNet18 can be divided into five stages: the first stage uses a convolutional layer with a 7 × 7 convolution kernel to capture a large receptive field, followed by a max-pooling layer with a stride of 2, and the last four stages use convolutional layers with 3 × 3 convolution kernels for feature extraction.
Preferably, the feature extraction in step S21 is specifically: the preprocessed video data is V = {f_t | t ∈ [1, T], T ∈ N*}, where f_t represents the t-th frame image of the video; each frame of the video is encoded by the feature extraction network ResNet to obtain the video feature vector sequence F = {s_t | t ∈ [1, T], T ∈ N*}, where s_t represents the feature vector corresponding to the t-th frame image of the video.
Preferably, the linear classification in step S22 is specifically: the linear classification network comprises a fully connected layer and a softmax function layer. The fully connected layer performs feature fusion on each dimension of the extracted feature vectors, so that the classification concentrates on effective features through weights and ignores useless features; the softmax function layer normalizes the output to provide valid probabilities for classification. Given the extracted video feature vectors F and a learnable weight W_p ∈ R, the classification probability of the video is P = softmax(W_p·F + b), where b is a constant.
Preferably, the attention selection in step S23 is specifically: the attention selection network comprises a fully connected layer and a softmax function layer. The fully connected layer performs feature fusion on each dimension of the extracted feature vectors, so that the classification concentrates on effective features through weights and ignores useless features; the softmax function layer normalizes the output to provide valid probabilities for classification. Given the extracted video feature vectors F and a learnable weight W_a ∈ R, the attention selection weight of the video is A = softmax(W_a·F + b), where b is a constant.
Preferably, the video detection in step S24 is specifically: the detection result of the breast nodule detection model is P̂ = Σ_{t=1..T} A_t·P_t, where P̂ is the detection probability, P_t is the classification probability of the t-th frame, and A_t is the attention selection weight of the t-th frame.
Preferably, step S2 further includes step S25, loss function: each module of the model is optimized by using the common cross entropy loss function and center loss function. According to the detection probability ŷ_i of each video and the true value y_i, the cross entropy loss L_ce and the center loss L_c are calculated respectively:

L_ce = -(1/N)·Σ_{i=1..N} [y_i·log(ŷ_i) + (1 - y_i)·log(1 - ŷ_i)]

L_c = (1/2)·Σ_{i=1..N} ‖x_i - c_{y_i}‖²

L = L_ce + λ·L_c

where N denotes the size of the training data set, x_i denotes the features before the fully connected layer, c_{y_i} denotes the feature center of the y_i-th class, and λ controls the proportion between the two losses.
Preferably, the preprocessing step of step S1 includes:
s11, intercepting the largest rectangle of the ultrasound imaging area from the original breast ultrasound video data and uniformly scaling it to 256 × 256, with 20 frames sparsely sampled at equal intervals from the middle portion of the video;
s12, digital image processing: classifying the intercepted ultrasound video images, selecting clear image slices containing a complete nodule region, and deleting redundant images without a region of interest;
and S13, data enhancement: the classified ultrasound video images are randomly cropped to 224 × 224 pixels, so that one original image can be augmented hundreds of times; spatial geometric transformation is used to apply affine transformations to the ultrasound video images, and interpolation-based padding keeps the image sizes consistent.
A breast nodule detection system based on ultrasonic medicine comprises a feature extraction module, a linear classification module, an attention selection module, a video detection module and a loss function module, wherein:
a feature extraction module: the preprocessed video data is subjected to a feature extraction network ResNet to obtain a video feature vector;
a linear classification module: detecting the classification probability of each frame of the video by the video feature vector through a linear classification network;
an attention selection module: obtaining the weight of each frame of the video by the video feature vector through an attention selection network;
the video detection module: combining the classification probability given by linear classification and each frame weight provided by the attention selection module to obtain a breast nodule detection model;
a loss function module: and optimizing each module of the model by adopting a common cross entropy loss function and a common central loss function.
After adopting the technical scheme, compared with the background technology, the invention has the following advantages:
1. The invention provides a breast nodule detection method and system based on ultrasonic medicine that take ultrasonic breast video image examination as the main means, aiming to establish an accurate method for judging whether breast tumors are benign or malignant and for pathological typing, and to solve the difficult clinical problem of dynamic identification and differentiation of breast tumors. The establishment of a highly accurate breast tumor characterization system has great clinical significance for the diagnosis and treatment of breast tumor patients.
2. The invention provides a breast nodule detection method and system based on ultrasonic medicine that adopt the feature extraction network ResNet18, which effectively alleviates the vanishing-gradient problem of deep networks during back-propagation and solves the degradation problem caused by the difficulty of optimizing deep networks. When the data set is small, transfer learning can be performed with weights pre-trained on a large data set. The pre-trained weights provide better initial values for the network, making early training more stable and allowing the network to extract relevant feature information better than training from random initialization.
Drawings
FIG. 1 is a schematic diagram of the steps of the construction method of the present invention;
FIG. 2 is a flow chart of a breast nodule detection model experiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the present invention, it should be noted that the terms "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are all based on the orientation or positional relationship shown in the drawings, and are only for convenience of describing the present invention and simplifying the description, but do not indicate or imply that the apparatus or element of the present invention must have a specific orientation, and thus, should not be construed as limiting the present invention.
Example 1
As shown in fig. 1, the invention discloses a breast nodule detection method based on ultrasonic medicine, which comprises the following steps:
s1, acquiring breast ultrasound video data and preprocessing the data;
s11, intercepting the largest rectangle of the ultrasound imaging area from the original breast ultrasound video data and uniformly scaling it to 256 × 256, with 20 frames sparsely sampled at equal intervals from the middle portion of the video;
s12, digital image processing: classifying the intercepted ultrasound video images, selecting clear image slices containing a complete nodule region, and deleting redundant images without a region of interest;
and S13, data enhancement: the classified ultrasound video images are randomly cropped to 224 × 224 pixels, so that one original image can be augmented hundreds of times; spatial geometric transformation is used to apply affine transformations to the ultrasound video images, and interpolation-based padding keeps the image sizes consistent.
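As an illustrative sketch only (not the claimed implementation), steps S11 to S13 can be realized roughly as follows; the use of OpenCV/NumPy, the middle-of-video sampling window and the augmentation ranges are assumptions made for illustration:

```python
# Sketch of the preprocessing in steps S11-S13: sparse frame sampling, resizing,
# random cropping and a small random affine transform.
import cv2
import numpy as np

def preprocess_video(frames, n_samples=20, resize=256, crop=224):
    """frames: list of images already cropped to the ultrasound imaging rectangle."""
    # S11: sparsely sample n_samples frames at equal intervals from the middle of the video
    start, end = len(frames) // 4, 3 * len(frames) // 4
    idx = np.linspace(start, end - 1, n_samples).astype(int)
    sampled = [cv2.resize(frames[i], (resize, resize)) for i in idx]

    # S13: one random 224x224 crop shared across the clip, plus a random affine transform
    top = np.random.randint(0, resize - crop + 1)
    left = np.random.randint(0, resize - crop + 1)
    angle = np.random.uniform(-10, 10)  # assumed augmentation range
    M = cv2.getRotationMatrix2D((crop / 2, crop / 2), angle, 1.0)

    out = []
    for f in sampled:
        patch = f[top:top + crop, left:left + crop]
        # border filled by interpolation so all frames keep the same size
        patch = cv2.warpAffine(patch, M, (crop, crop), flags=cv2.INTER_LINEAR,
                               borderMode=cv2.BORDER_REPLICATE)
        out.append(patch)
    return np.stack(out)  # shape (20, 224, 224, channels)
```

In practice the crop position and affine parameters would be redrawn for every augmented copy, which is how one original clip can yield hundreds of augmented samples.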
S2, training the preprocessed data by a transfer learning method, and establishing a breast nodule detection model;
s21, feature extraction: the preprocessed video data is subjected to a feature extraction network ResNet to obtain a video feature vector;
The feature extraction network ResNet is ResNet18 with a network depth of 18 layers. The network structure of ResNet18 can be divided into five stages, and the resolution of the feature map is halved after each stage. The first stage uses a convolutional layer with a 7 × 7 convolution kernel to capture a larger receptive field, followed by a max-pooling layer with a stride of 2; the last four stages use convolutional layers with 3 × 3 convolution kernels for feature extraction. As the network goes deeper through the stages, the resolution of the feature map is successively halved while the number of feature channels is doubled.
The preprocessed video data is V = {f_t | t ∈ [1, T], T ∈ N*}, where f_t represents the t-th frame image of the video; each frame of the video is encoded by the feature extraction network ResNet to obtain the video feature vector sequence F = {s_t | t ∈ [1, T], T ∈ N*}, where s_t represents the feature vector corresponding to the t-th frame image of the video.
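A minimal sketch of the frame-wise feature extraction with a pretrained ResNet18 used for transfer learning is given below; treating the T frames of a clip as a batch and using torchvision's ImageNet weights are assumptions for illustration, with the 512-dimensional output per frame playing the role of s_t:

```python
# Sketch: ResNet18 pretrained weights reused as a per-frame encoder (transfer learning).
import torch
import torchvision

backbone = torchvision.models.resnet18(pretrained=True)
backbone.fc = torch.nn.Identity()  # drop the ImageNet classifier, keep 512-d features
backbone.eval()

def extract_features(clip):
    """clip: float tensor of shape (T, 3, 224, 224), already normalized."""
    with torch.no_grad():
        return backbone(clip)      # F: tensor of shape (T, 512), one vector s_t per frame
```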
S22, linear classification: detecting the classification probability of each frame of the video by the video feature vector through a linear classification network;
The linear classification network comprises a fully connected layer and a softmax function layer. The fully connected layer performs feature fusion on each dimension of the extracted feature vectors, so that the classification concentrates on effective features through weights and ignores useless features; the softmax function layer normalizes the output to provide valid probabilities for classification. Given the extracted video feature vectors F and a learnable weight W_p ∈ R, the classification probability of the video is P = softmax(W_p·F + b), where b is a constant.
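The linear classification head can be sketched as a single fully connected layer followed by softmax, as below; the two-class (benign/malignant) output dimension is an assumption:

```python
# Sketch of the linear classification network: fully connected layer + softmax,
# producing a per-frame classification probability P_t.
import torch.nn as nn

class FrameClassifier(nn.Module):
    def __init__(self, feat_dim=512, n_classes=2):
        super().__init__()
        self.fc = nn.Linear(feat_dim, n_classes)  # learnable weight W_p and bias b

    def forward(self, F):                          # F: (T, 512)
        return nn.functional.softmax(self.fc(F), dim=-1)  # P: (T, n_classes)
```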
S23, attention selection: obtaining the weight of each frame of the video by the video feature vector through an attention selection network;
The attention selection network comprises a fully connected layer and a softmax function layer. The fully connected layer performs feature fusion on each dimension of the extracted feature vectors, so that the classification concentrates on effective features through weights and ignores useless features; the softmax function layer normalizes the output to provide valid probabilities for classification. Given the extracted video feature vectors F and a learnable weight W_a ∈ R, the attention selection weight of the video is A = softmax(W_a·F + b), where b is a constant.
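The attention selection head can be sketched analogously: one score per frame, normalized with softmax over time so that the frame weights sum to 1. This is an illustrative sketch, not the exact claimed network:

```python
# Sketch of the attention selection network: fully connected layer + softmax over frames.
import torch.nn as nn

class FrameAttention(nn.Module):
    def __init__(self, feat_dim=512):
        super().__init__()
        self.fc = nn.Linear(feat_dim, 1)           # learnable weight W_a and bias b

    def forward(self, F):                          # F: (T, 512)
        scores = self.fc(F).squeeze(-1)            # (T,)
        return nn.functional.softmax(scores, dim=0)  # A: (T,), one weight per frame
```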
S24, video detection: combining the classification probability given by linear classification and each frame weight provided by the attention selection module to obtain a breast nodule detection model;
The detection result of the breast nodule detection model is P̂ = Σ_{t=1..T} A_t·P_t, where P̂ is the detection probability, P_t is the classification probability of the t-th frame, and A_t is the attention selection weight of the t-th frame.
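A sketch of the video-level combination is given below; the attention-weighted sum of per-frame probabilities is an assumed reading of the combination described in step S24:

```python
# Sketch of step S24: per-frame probabilities weighted by attention and summed over time.
import torch

def video_detection(P, A):
    """P: (T, n_classes) frame probabilities, A: (T,) attention weights summing to 1."""
    return torch.einsum("t,tc->c", A, P)   # video-level detection probability P_hat
```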
Step S25, loss function: each module of the model is optimized by using the common cross entropy loss function and center loss function. According to the detection probability ŷ_i of each video and the true value y_i, the cross entropy loss L_ce and the center loss L_c are calculated respectively:

L_ce = -(1/N)·Σ_{i=1..N} [y_i·log(ŷ_i) + (1 - y_i)·log(1 - ŷ_i)]

L_c = (1/2)·Σ_{i=1..N} ‖x_i - c_{y_i}‖²

L = L_ce + λ·L_c

where N denotes the size of the training data set, x_i denotes the features before the fully connected layer, c_{y_i} denotes the feature center of the y_i-th class, and λ controls the proportion between the two losses.
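A sketch of the joint objective of step S25 follows; the center-update rule is omitted for brevity, and the small probability-clamping constant is an assumption added for numerical stability:

```python
# Sketch of step S25: cross entropy on the video-level prediction plus a center loss
# on the pre-classifier features, weighted by lambda.
import torch
import torch.nn.functional as Fn

def joint_loss(pred, target, features, centers, lam=0.01):
    """pred: (N, n_classes) detection probabilities, target: (N,) class indices,
    features: (N, feat_dim) features before the fully connected layer,
    centers: (n_classes, feat_dim) per-class feature centers."""
    ce = Fn.nll_loss(torch.log(pred + 1e-8), target)              # cross entropy loss
    center = 0.5 * ((features - centers[target]) ** 2).sum(dim=1).mean()  # center loss
    return ce + lam * center
```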
And S3, inputting the video data to be tested into the breast nodule detection model, and obtaining a result, namely whether the patient of the video data has a breast nodule, and if so, whether the breast nodule is benign or nausea.
After the method of this embodiment is adopted, the network model is evaluated through ten-fold cross-validation, and the evaluation results are shown in Table 1 below. The evaluation indexes include accuracy, precision, recall, F1 value, average precision, AUC, and the like.
Table 1 Evaluation results of the method of this embodiment
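A sketch of the ten-fold cross-validation evaluation with the listed metrics is given below; the scikit-learn helpers and the 0.5 decision threshold are assumptions, and train_and_predict stands in for training the model of this embodiment on each fold:

```python
# Sketch of ten-fold cross-validation computing accuracy, precision, recall, F1,
# average precision and AUC for the video-level malignancy probability.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, average_precision_score, roc_auc_score)

def evaluate_ten_fold(videos, labels, train_and_predict):
    """train_and_predict(train_idx, test_idx) -> malignant-class probabilities for test_idx."""
    labels = np.asarray(labels)
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    fold_metrics = []
    for train_idx, test_idx in skf.split(videos, labels):
        prob = train_and_predict(train_idx, test_idx)
        pred = (prob >= 0.5).astype(int)          # assumed decision threshold
        y = labels[test_idx]
        fold_metrics.append([accuracy_score(y, pred), precision_score(y, pred),
                             recall_score(y, pred), f1_score(y, pred),
                             average_precision_score(y, prob), roc_auc_score(y, prob)])
    return np.mean(fold_metrics, axis=0)  # accuracy, precision, recall, F1, AP, AUC
```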
The ultrasound video images adopt the following standards: the edge morphology, peripheral acoustic halo, internal echoes, posterior attenuation and other key features of the mass at the lesion site are observed; standard color blood-flow images and unmarked original images are collected for computer analysis; all image data include the relevant case information of the patient; if the same patient is examined multiple times, the earlier examination data are used; the images that best show the characteristics of the breast disease carry no measurement marks (measurement marks affect training; retrospective data may carry measurement marks, while prospective data must be unmarked). Format: uncompressed DICOM (prospective data must be in DICOM format); the dynamic video images exported from the ultrasound machine or the "cloud" are clear; the video conforms to the breast contrast-enhanced ultrasound guideline and shows the complete contrast imaging process of the lesion. In the process of manual classification and labeling, qualitative diagnosis is completed by the attending physician; when a special case is encountered, the attending physician is asked to diagnose it; if opinions are inconsistent, the dynamic images are replayed and observed together until agreement is reached.
Example 2
The invention discloses a breast nodule detection system based on ultrasonic medicine, which is characterized by comprising a feature extraction module, a linear classification module, an attention selection module, a video detection module and a loss function module, wherein:
a feature extraction module: the preprocessed video data is subjected to a feature extraction network ResNet to obtain a video feature vector;
a linear classification module: detecting the classification probability of each frame of the video by the video feature vector through a linear classification network;
an attention selection module: obtaining the weight of each frame of the video by the video feature vector through an attention selection network;
the video detection module: combining the classification probability given by linear classification and each frame weight provided by the attention selection module to obtain a breast nodule detection model;
a loss function module: and optimizing each module of the model by adopting a common cross entropy loss function and a common central loss function.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (9)

1. The breast nodule detection method based on ultrasonic medicine is characterized by comprising the following steps of:
s1, acquiring breast ultrasound video data and preprocessing the data;
s2, training the preprocessed data by a transfer learning method, and establishing a breast nodule detection model;
s21, feature extraction: the preprocessed video data is subjected to a feature extraction network ResNet to obtain a video feature vector;
s22, linear classification: detecting the classification probability of each frame of the video by the video feature vector through a linear classification network;
s23, attention selection: obtaining the weight of each frame of the video by the video feature vector through an attention selection network;
s24, video detection: combining the classification probability given by linear classification and each frame weight provided by the attention selection module to obtain a breast nodule detection model;
and S3, inputting the video data to be tested into the breast nodule detection model, and obtaining a result.
2. The breast nodule detection method based on ultrasonic medicine of claim 1, wherein: the feature extraction network ResNet is ResNet18 with a network depth of 18 layers; the network structure of ResNet18 can be divided into five stages, the first stage uses a convolutional layer with a 7 × 7 convolution kernel to capture a large receptive field, followed by a max-pooling layer with a stride of 2, and the last four stages use convolutional layers with 3 × 3 convolution kernels for feature extraction.
3. The breast nodule detection method based on ultrasonic medicine of claim 1, wherein the feature extraction in step S21 is specifically: the preprocessed video data is V = {f_t | t ∈ [1, T], T ∈ N*}, where f_t represents the t-th frame image of the video; each frame of the video is encoded by the feature extraction network ResNet to obtain the video feature vector sequence F = {s_t | t ∈ [1, T], T ∈ N*}, where s_t represents the feature vector corresponding to the t-th frame image of the video.
4. The breast nodule detection method based on ultrasonic medicine of claim 3, wherein the linear classification in step S22 is specifically: the linear classification network comprises a fully connected layer and a softmax function layer; the fully connected layer performs feature fusion on each dimension of the extracted feature vectors, so that the classification concentrates on effective features through weights and ignores useless features; the softmax function layer normalizes the output to provide valid probabilities for classification; given the extracted video feature vectors F and a learnable weight W_p ∈ R, the classification probability of the video is P = softmax(W_p·F + b), where b is a constant.
5. The breast nodule detection method based on ultrasonic medicine of claim 4, wherein the attention selection in step S23 is specifically: the attention selection network comprises a fully connected layer and a softmax function layer; the fully connected layer performs feature fusion on each dimension of the extracted feature vectors, so that the classification concentrates on effective features through weights and ignores useless features; the softmax function layer normalizes the output to provide valid probabilities for classification; given the extracted video feature vectors F and a learnable weight W_a ∈ R, the attention selection weight of the video is A = softmax(W_a·F + b), where b is a constant.
6. The breast nodule detection method based on ultrasonic medicine of claim 5, wherein the video detection in step S24 is specifically: the detection result of the breast nodule detection model is P̂ = Σ_{t=1..T} A_t·P_t, where P̂ is the detection probability, P_t is the classification probability of the t-th frame, and A_t is the attention selection weight of the t-th frame.
7. The breast nodule detection method based on ultrasonic medicine of claim 6, wherein step S2 further includes step S25, loss function: each module of the model is optimized by using the common cross entropy loss function and center loss function; according to the detection probability ŷ_i of each video and the true value y_i, the cross entropy loss L_ce = -(1/N)·Σ_{i=1..N} [y_i·log(ŷ_i) + (1 - y_i)·log(1 - ŷ_i)] and the center loss L_c = (1/2)·Σ_{i=1..N} ‖x_i - c_{y_i}‖² are calculated respectively, and the total loss is L = L_ce + λ·L_c, where N denotes the size of the training data set, x_i denotes the features before the fully connected layer, c_{y_i} denotes the feature center of the y_i-th class, and λ controls the proportion between the two losses.
8. The method for breast nodule detection based on ultrasound medicine of claim 1, wherein the preprocessing step of step S1 includes:
s11, intercepting the largest rectangle of the ultrasound imaging area from the original breast ultrasound video data and uniformly scaling it to 256 × 256, with 20 frames sparsely sampled at equal intervals from the middle portion of the video;
s12, digital image processing: classifying the intercepted ultrasound video images, selecting clear image slices containing a complete nodule region, and deleting redundant images without a region of interest;
and S13, data enhancement: the classified ultrasound video images are randomly cropped to 224 × 224 pixels, so that one original image can be augmented hundreds of times; spatial geometric transformation is used to apply affine transformations to the ultrasound video images, and interpolation-based padding keeps the image sizes consistent.
9. A breast nodule detection system based on ultrasonic medicine, characterized by comprising a feature extraction module, a linear classification module, an attention selection module, a video detection module and a loss function module, wherein:
a feature extraction module: the preprocessed video data is subjected to a feature extraction network ResNet to obtain a video feature vector;
a linear classification module: detecting the classification probability of each frame of the video by the video feature vector through a linear classification network;
an attention selection module: obtaining the weight of each frame of the video by the video feature vector through an attention selection network;
the video detection module: combining the classification probability given by linear classification and each frame weight provided by the attention selection module to obtain a breast nodule detection model;
a loss function module: and optimizing each module of the model by adopting a common cross entropy loss function and a common central loss function.
CN202010924386.0A 2020-09-04 2020-09-04 Breast nodule detection method and system based on ultrasonic medicine Active CN112086197B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010924386.0A CN112086197B (en) 2020-09-04 2020-09-04 Breast nodule detection method and system based on ultrasonic medicine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010924386.0A CN112086197B (en) 2020-09-04 2020-09-04 Breast nodule detection method and system based on ultrasonic medicine

Publications (2)

Publication Number Publication Date
CN112086197A true CN112086197A (en) 2020-12-15
CN112086197B CN112086197B (en) 2022-05-10

Family

ID=73731488

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010924386.0A Active CN112086197B (en) 2020-09-04 2020-09-04 Breast nodule detection method and system based on ultrasonic medicine

Country Status (1)

Country Link
CN (1) CN112086197B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040086161A1 (en) * 2002-11-05 2004-05-06 Radhika Sivaramakrishna Automated detection of lung nodules from multi-slice CT image data
US20050171409A1 (en) * 2004-01-30 2005-08-04 University Of Chicago Automated method and system for the detection of lung nodules in low-dose CT image for lung-cancer screening
CN108596195A (en) * 2018-05-09 2018-09-28 福建亿榕信息技术有限公司 A kind of scene recognition method based on sparse coding feature extraction
CN110391022A (en) * 2019-07-25 2019-10-29 东北大学 A kind of deep learning breast cancer pathological image subdivision diagnostic method based on multistage migration
CN111243730A (en) * 2020-01-17 2020-06-05 视隼智能科技(上海)有限公司 Mammary gland focus intelligent analysis method and system based on mammary gland ultrasonic image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
邓迪: "基于自匹配注意力机制的命名实体关系识别模型", 《中国优秀硕士学位论文全文数据库 信息科技辑》 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112884707A (en) * 2021-01-15 2021-06-01 复旦大学附属妇产科医院 Cervical precancerous lesion detection system, equipment and medium based on colposcope
CN113053523A (en) * 2021-04-23 2021-06-29 广州易睿智影科技有限公司 Continuous self-learning multi-model fusion ultrasonic breast tumor precise identification system
CN113065533A (en) * 2021-06-01 2021-07-02 北京达佳互联信息技术有限公司 Feature extraction model generation method and device, electronic equipment and storage medium
CN113256605A (en) * 2021-06-15 2021-08-13 四川大学 Breast cancer image identification and classification method based on deep neural network
CN113256605B (en) * 2021-06-15 2021-11-02 四川大学 Breast cancer image identification and classification method based on deep neural network
CN114360695A (en) * 2021-12-24 2022-04-15 上海杏脉信息科技有限公司 Mammary gland ultrasonic scanning analysis auxiliary system, medium and equipment
CN114842238A (en) * 2022-04-01 2022-08-02 苏州视尚医疗科技有限公司 Embedded mammary gland ultrasonic image identification method
CN114842238B (en) * 2022-04-01 2024-04-16 苏州视尚医疗科技有限公司 Identification method of embedded breast ultrasonic image
CN116416381A (en) * 2023-03-31 2023-07-11 脉得智能科技(无锡)有限公司 Mammary gland nodule three-dimensional reconstruction method, device and medium based on mammary gland ultrasonic image
CN116563216A (en) * 2023-03-31 2023-08-08 河北大学 Endoscope ultrasonic scanning control optimization system and method based on standard site intelligent recognition
CN116416381B (en) * 2023-03-31 2023-09-29 脉得智能科技(无锡)有限公司 Mammary gland nodule three-dimensional reconstruction method, device and medium based on mammary gland ultrasonic image
CN116563216B (en) * 2023-03-31 2024-02-20 河北大学 Endoscope ultrasonic scanning control optimization system and method based on standard site intelligent recognition
CN116705252A (en) * 2023-06-16 2023-09-05 脉得智能科技(无锡)有限公司 Construction method, image classification method, device and medium for prostate cancer diagnosis model
CN116705252B (en) * 2023-06-16 2024-05-31 脉得智能科技(无锡)有限公司 Construction method, image classification method, device and medium for prostate cancer diagnosis model

Also Published As

Publication number Publication date
CN112086197B (en) 2022-05-10

Similar Documents

Publication Publication Date Title
CN112086197B (en) Breast nodule detection method and system based on ultrasonic medicine
US11101033B2 (en) Medical image aided diagnosis method and system combining image recognition and report editing
CN106372390B (en) A kind of self-service healthy cloud service system of prevention lung cancer based on depth convolutional neural networks
CN113781440B (en) Ultrasonic video focus detection method and device
CN111429474B (en) Mammary gland DCE-MRI image focus segmentation model establishment and segmentation method based on mixed convolution
CN111227864A (en) Method and apparatus for lesion detection using ultrasound image using computer vision
TW202032577A (en) Medical image dividing method, device, and system, and image dividing method
CN112529834A (en) Spatial distribution of pathological image patterns in 3D image data
CN114782307A (en) Enhanced CT image colorectal cancer staging auxiliary diagnosis system based on deep learning
US20170262584A1 (en) Method for automatically generating representations of imaging data and interactive visual imaging reports (ivir)
CN107767362A (en) A kind of early screening of lung cancer device based on deep learning
CN112071418B (en) Gastric cancer peritoneal metastasis prediction system and method based on enhanced CT image histology
JPWO2020027228A1 (en) Diagnostic support system and diagnostic support method
CN112508884A (en) Comprehensive detection device and method for cancerous region
CN110738633B (en) Three-dimensional image processing method and related equipment for organism tissues
Cheng et al. Dr. Pecker: A Deep Learning-Based Computer-Aided Diagnosis System in Medical Imaging
CN112967254A (en) Lung disease identification and detection method based on chest CT image
CN113539476A (en) Stomach endoscopic biopsy Raman image auxiliary diagnosis method and system based on artificial intelligence
CN108960305A (en) A kind of scope scope interpretation system and method
CN112862752A (en) Image processing display method, system electronic equipment and storage medium
CN112002407A (en) Breast cancer diagnosis device and method based on ultrasonic video
CN117275677A (en) Method for effectively identifying benign and malignant breast ultrasound image tumor
CN110827275A (en) Liver nuclear magnetic artery phase image quality grading method based on raspberry group and deep learning
CN114359194A (en) Multi-mode stroke infarct area image processing method based on improved U-Net network
CN116415649B (en) Breast micro cancer analysis method based on multi-mode ultrasonic image self-supervision learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant