CN114742761A - Device for judging benign or malignant nodules based on a neural network - Google Patents

Device for judging benign or malignant nodules based on a neural network

Info

Publication number
CN114742761A
CN114742761A (application CN202210246887.7A)
Authority
CN
China
Prior art keywords
unit
cbl
feature
nodule
edge frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210246887.7A
Other languages
Chinese (zh)
Inventor
张淦钧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shiwei Xinzhi Medical Technology Shanghai Co ltd
Original Assignee
Shiwei Xinzhi Medical Technology Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shiwei Xinzhi Medical Technology Shanghai Co ltd filed Critical Shiwei Xinzhi Medical Technology Shanghai Co ltd
Priority to CN202210246887.7A priority Critical patent/CN114742761A/en
Publication of CN114742761A publication Critical patent/CN114742761A/en
Granted legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10132 Ultrasound image

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The invention relates to a device for judging whether a nodule is benign or malignant based on a neural network, comprising: an image acquisition module for acquiring an ultrasound image containing a nodule; and a benign-malignant detection module for detecting whether the nodule in the ultrasound image is benign or malignant through a neural network. The neural network comprises a feature extraction module connected to a feature classification module: the feature extraction module extracts image features, and the feature classification module obtains feature matrices with different receptive fields based on the image features and predicts results from these feature matrices. The constructed neural network can accurately judge whether a nodule in an ultrasound image is benign or malignant.

Description

Device for judging benign or malignant nodules based on a neural network
Technical Field
The invention relates to the technical field of computer-aided medical diagnosis, in particular to a device for judging whether a nodule is benign or malignant based on a neural network.
Background
Automatic nodule detection based on ultrasound images can effectively assist the medical diagnosis process and has become an important research direction in the medical field. With the continuous development of artificial-intelligence technology, nodule detection based on deep-learning target-detection networks is increasingly common, but in practical applications neither the accuracy nor the detection speed is satisfactory. A new network needs to be developed that guarantees both accuracy and detection speed.
Disclosure of Invention
The invention aims to provide a device for judging whether a nodule is benign or malignant based on a neural network, which can accurately judge, through the constructed neural network, whether a nodule in an ultrasound image is benign or malignant.
The technical scheme adopted by the invention to solve the technical problem is as follows: provided is a device for judging whether a nodule is benign or malignant based on a neural network, comprising:
an image acquisition module: for acquiring an ultrasound image containing a nodule;
a benign-malignant detection module: for detecting whether the nodule in the ultrasound image is benign or malignant through a neural network, wherein the neural network comprises a feature extraction module connected to a feature classification module; the feature extraction module is used for extracting image features, and the feature classification module is used for obtaining feature matrices with different receptive fields based on the image features and predicting results from these feature matrices.
The feature extraction module comprises a first CBL unit and three Block units connected in sequence, wherein the output of each Block unit is connected to a shortcut; the first CBL unit is also connected in sequence to three 1×1 Conv layers corresponding to the three Block units, and the output of each 1×1 Conv is connected to the shortcut of the corresponding Block unit;
wherein the Block unit comprises a first branch and a second branch fused by a concatenate, the first branch comprising a sequentially connected 1×1 Conv, 3×3 Conv and 1×1 Conv, and the second branch comprising a sequentially connected 3×3 Conv and 1×1 Conv; the first CBL unit comprises a 3×3 Conv, a batch normalization layer BN and a LeakyReLU activation function.
The shortcut in the feature extraction module satisfies y = βF(x, {W_i}) + (1 − β)x, where x represents the input features, F(·) represents the feature extraction operation of the neural network, W_i represents the parameter weights learned by the neural network, y represents the fused features, and β represents an adjustment parameter with β ∈ (0, 1).
The feature classification module comprises a second CBL unit, a third CBL unit, a fourth CBL unit and a first convolution unit connected in sequence, and outputs a first feature matrix through the first convolution unit;
the output of the second CBL unit is also concatenated with the upsampled output of the third CBL unit and then connected in sequence to a fifth CBL unit and a second convolution unit, which outputs a second feature matrix;
that concatenated output is also connected to a sixth CBL unit; the upsampled output of the sixth CBL unit is concatenated with the output of the feature extraction module and then connected in sequence to a seventh CBL unit and a third convolution unit, which outputs a third feature matrix;
wherein the first, second and third convolution units are all 3×3 Conv, and the second to seventh CBL units each comprise a 3×3 Conv, a batch normalization layer BN and a LeakyReLU activation function.
The neural network calculates its loss function as L = L_CIoU + L_Class + L_p, where L_CIoU is the CIoU loss function for the position of the edge frame:

L_CIoU = 1 − IoU(R_pred, R_gt) + d²/a² + αv,
v = (4/π²) · (arctan(w_gt/h_gt) − arctan(w_pred/h_pred))²,
α = v / ((1 − IoU) + v),

where R_pred represents the predicted edge frame, R_gt represents the actual edge frame, and IoU represents the overlap of the predicted and actual edge frames; w_gt and h_gt represent the width and height of the actual edge frame, w_pred and h_pred represent the width and height of the predicted edge frame, and v measures the consistency of the aspect ratios of the predicted and actual edge frames; α represents a weighting parameter, d represents the distance between the centers of the predicted and actual edge frames, and a represents the diagonal length of their minimum circumscribed rectangle. L_Class is the cross-entropy loss function for whether a nodule exists:

L_Class = −w_c[C log Ĉ + (1 − C) log(1 − Ĉ)],

where C indicates whether a nodule actually exists, Ĉ indicates the predicted probability that a nodule exists, and w_c is the weight of the nodule-existence classification. L_p is the cross-entropy loss function for the benign-malignant classification:

L_p = −w_p[P log P̂ + (1 − P) log(1 − P̂)],

where P represents the actual benign or malignant label, P̂ represents the predicted one, and w_p is the weight of the benign-malignant classification.
Advantageous effects
Due to the adoption of the above technical scheme, compared with the prior art, the invention has the following advantages and positive effects: the constructed neural network can detect the positions of nodules in real time while maintaining high accuracy, and adds a benign-malignant classification function to the nodule target-detection network, so that benign or malignant status is judged at the same time as target detection; the designed network preserves the diversity of the extracted features without impairing gradient propagation; the invention has important reference value for the clinical diagnosis process and can help doctors focus more attention on the analysis of nodule characteristics.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention;
FIG. 2 is a structural diagram of depth-based and width-based blocks according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a feature fusion mode according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a neural network architecture according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of neural network training parameters according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of actual nodule detection in accordance with an embodiment of the present invention.
Detailed Description
The invention will be further illustrated with reference to the following specific examples. It should be understood that these examples are for illustrative purposes only and are not intended to limit the scope of the present invention. Further, it should be understood that various changes or modifications of the present invention may be made by those skilled in the art after reading the teaching of the present invention, and such equivalents may fall within the scope of the present invention as defined in the appended claims.
An embodiment of the present invention relates to a device for judging whether a nodule is benign or malignant based on a neural network, and referring to fig. 1, the device includes:
an image acquisition module: for acquiring an ultrasound image containing a nodule;
a benign-malignant detection module: for detecting whether the nodule in the ultrasound image is benign or malignant through a neural network, wherein the neural network comprises a feature extraction module connected to a feature classification module; the feature extraction module is used for extracting image features, and the feature classification module is used for obtaining feature matrices with different receptive fields based on the image features and predicting results from these feature matrices.
The present embodiment is described in detail below:
1. Feature extraction module in the neural network
In a deep learning network, the feature extraction part extracts a large number of features from each receptive field of the input image through operations such as convolution, pooling and normalization, and finally combines the extracted features for further computation according to the task (e.g. classification, target detection or segmentation). With the continuous improvement of hardware computing power, deep learning networks have grown ever larger, and feature extraction no longer uses the convolutional layer as its basic unit but instead uses a block. In the feature extraction stage, common blocks follow two main directions: depth-based and width-based.
Referring to fig. 2, the main function of a depth-based block is to let the network learn "residual" features, which alleviates the vanishing-gradient problem in deep networks. The main function of a width-based block is to increase the information density of each layer; combined with cross-channel separable convolutional layers, it can effectively reduce the number of computational parameters. Considering that a nodule occupies only a small part of the whole ultrasound image, a block that can extract deeper features is used here.
In a block composed of multiple convolutions, the way features are fused plays an important connecting role. There are two main fusion modes, shortcut and concatenate; the shortcut fusion mode comes from ResNet and can be expressed as:
y = F(x, {W_i}) + x
where x represents the input feature, F(·) represents a feature extraction method (i.e. a learning method, which should be non-linear; this embodiment uses a convolutional neural network as the feature extraction network), W_i represents the parameter weights during learning, and y represents the fused feature. The shortcut has two main advantages: first, it does not increase the dimensionality of the features but only changes the parameter weights, so it adds little computation; second, it reduces gradient information loss. Existing shortcuts are mainly used in deeper feature extraction networks, because a shortcut used in a shallow network can affect gradient propagation. This embodiment shortens the feature network, but in order to preserve the diversity of features without affecting gradient propagation, a weighted-shortcut is proposed, expressed as:
y = βF(x, {W_i}) + (1 − β)x
where β is an adjustment parameter with β ∈ (0, 1); to avoid affecting gradient propagation, 0 < β < 0.5 is used. The feature extraction module also uses the concatenate feature-fusion mode: referring to fig. 3, (a) shows the shortcut mode and (b) shows the concatenate mode, from which it can be seen that shortcut can lose feature information, so a shallow network needs the concatenate mode for feature fusion.
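For illustration, the weighted-shortcut fusion can be sketched as a small PyTorch module (the class name, the wrapped sub-network, and the default β = 0.3 are illustrative assumptions, not details given in this embodiment):

```python
import torch
import torch.nn as nn

class WeightedShortcut(nn.Module):
    """Weighted-shortcut fusion: y = beta * F(x) + (1 - beta) * x."""

    def __init__(self, block: nn.Module, beta: float = 0.3):
        super().__init__()
        # beta in (0, 1); the text takes 0 < beta < 0.5 so that the identity
        # path dominates and gradient propagation is not impaired.
        assert 0.0 < beta < 1.0
        self.block = block  # any shape-preserving sub-network F(.)
        self.beta = beta

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.beta * self.block(x) + (1.0 - self.beta) * x
```

For example, WeightedShortcut(nn.Conv2d(64, 64, 3, padding=1)) would fuse the output of a 3×3 convolution with its input at the chosen weighting.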
2. Feature classification module in neural network
After the features are extracted, the edge frames are predicted and classified. The feature vectors obtained in the previous stage are processed through two rounds of upsampling, yielding feature vectors of three different sizes, 13 × 13, 26 × 26 and 52 × 52, corresponding to large, medium and small targets. A smaller feature vector represents a deeper level of the convolution process and a larger receptive field, so using three different sizes yields more accurate results. The three feature vectors of different sizes are then post-processed separately, i.e. by maximum-confidence filtering and non-maximum suppression (NMS), to obtain a more accurate detection result.
In the benign-malignant detection module, this embodiment classifies the nodule as benign or malignant after target detection. The benign-malignant classification is designed with the following considerations:
(1) the already extracted features are reused, saving the time and memory of feature extraction;
(2) the nodule is very small relative to the whole ultrasound image, so the features of the whole image should not be used as the features for the benign-malignant classification;
(3) in practice benign and malignant features are often confounded, so in order not to affect the accuracy of nodule target recognition, the benign-malignant result is not merged directly into the target classification result.
In summary, this embodiment adjusts the original target-detection output (x, y, w, h, confidence, class) to (x, y, w, h, confidence, class, pathology) (one box of the neural network output), where x represents the center abscissa of the edge frame, y the center ordinate, w the width, and h the height of the edge frame; confidence represents the confidence of nodule detection, class represents whether it is a nodule, and pathology represents the benign-malignant classification. Each feature vector thus detects an edge frame; all detections are sorted by confidence from high to low, those below a threshold are removed, and non-maximum suppression is applied to the sorted detections, so that each remaining vector contains the (x, y, w, h, confidence, class, pathology) information.
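A minimal sketch of this post-processing (the (N, 7) tensor layout with a pathology column, the threshold values, and the use of torchvision's NMS are illustrative assumptions):

```python
import torch
from torchvision.ops import nms

def postprocess(pred: torch.Tensor, conf_thresh: float = 0.5,
                iou_thresh: float = 0.45) -> torch.Tensor:
    """Filter by confidence, sort, and apply non-maximum suppression.

    pred: (N, 7) rows of (x, y, w, h, confidence, class, pathology),
    with boxes in centre format.
    """
    pred = pred[pred[:, 4] >= conf_thresh]            # drop low confidence
    if pred.numel() == 0:
        return pred
    pred = pred[pred[:, 4].argsort(descending=True)]  # high to low
    # Convert centre format to corner format for torchvision's NMS.
    boxes = torch.stack((pred[:, 0] - pred[:, 2] / 2,
                         pred[:, 1] - pred[:, 3] / 2,
                         pred[:, 0] + pred[:, 2] / 2,
                         pred[:, 1] + pred[:, 3] / 2), dim=1)
    keep = nms(boxes, pred[:, 4], iou_thresh)
    return pred[keep]
```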
The loss function of the neural network of this embodiment includes a CIoU loss function for the position of the edge frame and cross-entropy loss functions for the classifications. The CIoU loss can be expressed as:

L_CIoU = 1 − IoU(R_pred, R_gt) + d²/a² + αv,
v = (4/π²) · (arctan(w_gt/h_gt) − arctan(w_pred/h_pred))²,
α = v / ((1 − IoU) + v),

where R_pred represents the predicted edge frame, R_gt represents the actual edge frame, and IoU represents the overlap of the predicted and actual edge frames; w_gt and h_gt represent the width and height of the actual edge frame, w_pred and h_pred the width and height of the predicted edge frame, and v measures the consistency of the aspect ratios of the predicted and actual edge frames; α represents a weighting parameter, d represents the distance between the centers of the two edge frames, and a represents the diagonal length of their minimum circumscribed rectangle. The cross-entropy function that computes the classification loss can be expressed as:

L_Class = −w_c[C log Ĉ + (1 − C) log(1 − Ĉ)],

where C indicates whether a nodule actually exists, Ĉ indicates the predicted probability that a nodule exists, and w_c is the weight of the nodule-existence classification.

In addition, this embodiment adds a cross-entropy function as the loss for the benign-malignant classification:

L_p = −w_p[P log P̂ + (1 − P) log(1 − P̂)],

where P represents the actual benign or malignant label, P̂ represents the predicted one, and w_p is the weight of the benign-malignant classification. The final overall loss is the sum of these losses:

L = L_CIoU + L_Class + L_p
by increasing the loss calculation of benign and malignant nodules, the prediction of the benign and malignant nodules can be increased on the basis of not influencing the nodule detection, the extracted features are recycled, the time is fully saved, and the efficiency is improved.
4. Overall network structure
Having analyzed the characteristics and structures of the feature extraction module and the feature classification module, referring to fig. 4, the two modules are combined to produce the final prediction result.
Further, the feature extraction module comprises a first CBL unit and three Block units connected in sequence, wherein the output of each Block unit is connected to a shortcut; the first CBL unit is also connected in sequence to three 1×1 Conv layers (stride s = 2) corresponding to the three Block units, and the output of each 1×1 Conv is connected to the shortcut of the corresponding Block unit;
wherein the Block unit comprises a first branch and a second branch fused by a concatenate, the first branch comprising a sequentially connected 1×1 Conv (s = 1), 3×3 Conv (s = 1, separable) and 1×1 Conv (s = 2), and the second branch comprising a sequentially connected 3×3 Conv (s = 1, separable) and 1×1 Conv (s = 2);
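Under these stride settings, the Block unit could be sketched as follows (the channel widths, the even split between branches, and the depthwise-plus-pointwise realization of the "separable" 3×3 convolution are illustrative assumptions):

```python
import torch
import torch.nn as nn

def separable_conv3x3(ch: int, stride: int = 1) -> nn.Sequential:
    """Depthwise 3x3 followed by pointwise 1x1 (a 'separable' 3x3 Conv)."""
    return nn.Sequential(
        nn.Conv2d(ch, ch, 3, stride=stride, padding=1, groups=ch, bias=False),
        nn.Conv2d(ch, ch, 1, bias=False),
    )

class Block(nn.Module):
    """Two-branch Block unit fused by channel concatenation."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        half = out_ch // 2
        # First branch: 1x1 Conv (s=1) -> separable 3x3 (s=1) -> 1x1 Conv (s=2).
        self.branch1 = nn.Sequential(
            nn.Conv2d(in_ch, half, 1, stride=1, bias=False),
            separable_conv3x3(half, stride=1),
            nn.Conv2d(half, half, 1, stride=2, bias=False),
        )
        # Second branch: separable 3x3 (s=1) -> 1x1 Conv (s=2).
        self.branch2 = nn.Sequential(
            separable_conv3x3(in_ch, stride=1),
            nn.Conv2d(in_ch, half, 1, stride=2, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate fusion along the channel axis; both branches halve
        # the spatial size through their final stride-2 convolution.
        return torch.cat([self.branch1(x), self.branch2(x)], dim=1)
```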
further, the feature classification module comprises a second CBL unit, a third CBL unit, a fourth CBL unit and a first convolution unit connected in sequence, and outputs a 13 × 13 × 24 feature matrix through the first convolution unit;
the output of the second CBL unit is also concatenated with the upsampled output of the third CBL unit and then connected in sequence to a fifth CBL unit and a second convolution unit, which outputs a 26 × 26 × 24 feature matrix;
that concatenated output is also connected to a sixth CBL unit; the upsampled output of the sixth CBL unit is concatenated with the output of the feature extraction module and then connected in sequence to a seventh CBL unit and a third convolution unit, which outputs a 52 × 52 × 24 feature matrix;
wherein the first, second and third convolution units are all 3 × 3 Conv (s = 1), and the first to seventh CBL units each comprise a 3 × 3 Conv (s = 1 or s = 2), a batch normalization layer BN and a LeakyReLU activation function.
Referring to fig. 4, the input image is in RGB format with a size of 416 × 416; it passes in sequence through the two main parts, the "feature extraction module" and the "feature classification module", producing prediction results at three sizes, each with 24 channels. In the network structure diagram, the CBL unit, the Block unit, etc. have been analyzed above; the Conv unit denotes a 3×3 convolutional layer, with the 1×1 convolutional layers explicitly labeled; BN denotes a batch normalization layer; LeakyReLU denotes the activation function layer. In this embodiment, downsampling is performed by convolutional layers with a stride of 2, so a stride of 2 indicates that the input is downsampled and a stride of 1 indicates that the feature size is unchanged. In the Block unit, two 3×3 separable convolutions are used, which were shown in Xception to reduce the parameters of the convolution operation without affecting network performance. "A" denotes the shortcut feature-fusion mode introduced above, and "C" denotes the concatenate feature-fusion mode.
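A CBL unit as described (3×3 Conv, BN, LeakyReLU) might be written as follows (the 0.1 negative slope is a common default, not stated in the embodiment):

```python
import torch.nn as nn

def cbl(in_ch: int, out_ch: int, stride: int = 1) -> nn.Sequential:
    """CBL unit: 3x3 Conv -> BatchNorm -> LeakyReLU.

    stride=2 doubles as the downsampling step described in the text;
    stride=1 leaves the feature size unchanged.
    """
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride,
                  padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.1, inplace=True),
    )
```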
5. Training process and results
In training the neural network, the parameters were set as follows (a sketch of the learning-rate schedule follows this list):
(1) Initial learning rate: 0.00261, with stepwise decay: the learning rate decreases by 0.0003 every 500 training epochs and is held constant once it falls below 0.0005.
(2) Data enhancement: random image flipping, random brightness changes and random Gaussian noise.
(3) Optimization algorithm: SGD with momentum 0.9.
(4) Training set: 2700 images; validation set: 300 images.
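One reading of this schedule, sketched in Python (holding at the 0.0005 floor is an assumption about the "kept unchanged" clause):

```python
def learning_rate(epoch: int, base_lr: float = 0.00261,
                  decay: float = 0.0003, step: int = 500,
                  floor: float = 0.0005) -> float:
    """Stepwise decay: drop by `decay` every `step` epochs,
    holding once the rate would fall below `floor`."""
    return max(base_lr - (epoch // step) * decay, floor)

# e.g. torch.optim.SGD(model.parameters(), lr=learning_rate(0), momentum=0.9)
```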
Referring to fig. 5, which plots the overall training process: the lower dense curve shows the change in the training-set loss, and the upper curve shows the change in the validation-set accuracy.
Please refer to fig. 6, which shows the results of nodule detection and benign-malignant classification on ultrasound images after the training of this embodiment: (a) a benign nodule (Benign), with a benign-classification confidence of 95.354% and a predicted edge-frame confidence of 95.951%; (b) a malignant nodule (Malignant), with a malignant-classification confidence of 92.82% and a predicted edge-frame confidence of 98.627%.
Analysis of the experimental results:
1. Accuracy
In tests, the accuracy of nodule target detection in this embodiment reaches 85%, the MIoU reaches 75%, and the accuracy of the benign-malignant judgment is 82%. This accuracy is no lower than that of other algorithms making these judgments independently.
2. Time efficiency
In the running environment of a Core i7-10700 processor, nodule detection plus benign-malignant prediction takes 95 ms per image, with a video detection frame rate of 10.5 FPS; in a GPU running environment it takes 20 ms per image, with a video detection frame rate of 50 FPS, achieving real-time detection.
The foregoing description of specific exemplary embodiments of the invention has been presented for the purposes of illustration and description. It is not intended to limit the invention to the precise form disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiments were chosen and described in order to explain certain principles of the invention and its practical application to enable one skilled in the art to make and use various exemplary embodiments of the invention and various alternatives and modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims and their equivalents.

Claims (5)

1. A device for judging whether a nodule is benign or malignant based on a neural network, comprising:
an image acquisition module: for acquiring an ultrasound image containing a nodule;
a benign-malignant detection module: for detecting whether the nodule in the ultrasound image is benign or malignant through a neural network, wherein the neural network comprises a feature extraction module connected to a feature classification module; the feature extraction module is used for extracting image features, and the feature classification module is used for obtaining feature matrices with different receptive fields based on the image features and predicting results from these feature matrices.
2. The device according to claim 1, wherein the feature extraction module comprises a first CBL unit and three Block units connected in sequence, wherein the output of each Block unit is connected to a shortcut; the first CBL unit is also connected in sequence to three 1×1 Conv layers corresponding to the three Block units, and the output of each 1×1 Conv is connected to the shortcut of the corresponding Block unit;
wherein the Block unit comprises a first branch and a second branch fused by a concatenate, the first branch comprising a sequentially connected 1×1 Conv, 3×3 Conv and 1×1 Conv, and the second branch comprising a sequentially connected 3×3 Conv and 1×1 Conv; the first CBL unit comprises a 3×3 Conv, a batch normalization layer BN and a LeakyReLU activation function.
3. The device according to claim 2, wherein the shortcut in the feature extraction module satisfies y = βF(x, {W_i}) + (1 − β)x, where x represents the input features, F(·) represents the feature extraction operation of the neural network, W_i represents the parameter weights learned by the neural network, y represents the fused features, and β represents an adjustment parameter with β ∈ (0, 1).
4. The device according to claim 1, wherein the feature classification module comprises a second CBL unit, a third CBL unit, a fourth CBL unit and a first convolution unit connected in sequence, and outputs a first feature matrix through the first convolution unit;
the output of the second CBL unit is also concatenated with the upsampled output of the third CBL unit and then connected in sequence to a fifth CBL unit and a second convolution unit, which outputs a second feature matrix;
that concatenated output is also connected to a sixth CBL unit; the upsampled output of the sixth CBL unit is concatenated with the output of the feature extraction module and then connected in sequence to a seventh CBL unit and a third convolution unit, which outputs a third feature matrix;
wherein the first, second and third convolution units are all 3×3 Conv, and the second to seventh CBL units each comprise a 3×3 Conv, a batch normalization layer BN and a LeakyReLU activation function.
5. The device according to claim 1, wherein the neural network calculates its loss function as L = L_CIoU + L_Class + L_p, where L_CIoU is the CIoU loss function for the position of the edge frame:

L_CIoU = 1 − IoU(R_pred, R_gt) + d²/a² + αv,
v = (4/π²) · (arctan(w_gt/h_gt) − arctan(w_pred/h_pred))²,
α = v / ((1 − IoU) + v),

where R_pred represents the predicted edge frame, R_gt represents the actual edge frame, and IoU represents the overlap of the predicted and actual edge frames; w_gt and h_gt represent the width and height of the actual edge frame, w_pred and h_pred represent the width and height of the predicted edge frame, and v measures the consistency of the aspect ratios of the predicted and actual edge frames; α represents a weighting parameter, d represents the distance between the centers of the predicted and actual edge frames, and a represents the diagonal length of their minimum circumscribed rectangle;

L_Class is the cross-entropy loss function for whether a nodule exists:

L_Class = −w_c[C log Ĉ + (1 − C) log(1 − Ĉ)],

where C indicates whether a nodule actually exists, Ĉ indicates the predicted probability that a nodule exists, and w_c is the weight of the nodule-existence classification;

L_p is the cross-entropy loss function for the benign-malignant classification:

L_p = −w_p[P log P̂ + (1 − P) log(1 − P̂)],

where P represents the actual benign or malignant label, P̂ represents the predicted one, and w_p is the weight of the benign-malignant classification.
CN202210246887.7A 2022-03-14 2022-03-14 Device for judging benign or malignant nodules based on a neural network Granted CN114742761A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210246887.7A CN114742761A (en) Device for judging benign or malignant nodules based on a neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210246887.7A CN114742761A (en) Device for judging benign or malignant nodules based on a neural network

Publications (1)

Publication Number Publication Date
CN114742761A true CN114742761A (en) 2022-07-12

Family

ID=82275151

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210246887.7A Granted CN114742761A (en) Device for judging benign or malignant nodules based on a neural network

Country Status (1)

Country Link
CN (1) CN114742761A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101071506A (en) * 2006-01-25 2007-11-14 Siemens Medical Solutions USA, Inc. System and method for local pulmonary structure classification for computer-aided nodule detection
CN111243042A (en) * 2020-02-28 2020-06-05 浙江德尚韵兴医疗科技有限公司 Ultrasonic thyroid nodule benign and malignant characteristic visualization method based on deep learning
CN111598875A (en) * 2020-05-18 2020-08-28 北京小白世纪网络科技有限公司 Method, system and device for building thyroid nodule automatic detection model
CN113450320A (en) * 2021-06-17 2021-09-28 浙江德尚韵兴医疗科技有限公司 Ultrasonic nodule grading and benign and malignant prediction method based on deeper network structure
CN113469987A (en) * 2021-07-13 2021-10-01 山东大学 Dental X-ray image lesion area positioning system based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jiang Xuheng et al.: "CT image lung parenchyma segmentation method based on U-net++ convolutional neural network", Electronic Technology & Software Engineering, 15 December 2021 (2021-12-15), pages 114-117 *

Similar Documents

Publication Publication Date Title
CN112949572B (en) Slim-YOLOv 3-based mask wearing condition detection method
CN111738363B (en) Alzheimer disease classification method based on improved 3D CNN network
CN106875395B (en) Super-pixel-level SAR image change detection method based on deep neural network
CN111783819A (en) Improved target detection method based on region-of-interest training on small-scale data set
CN116824307A (en) Image labeling method and device based on SAM model and related medium
Fan et al. A novel sonar target detection and classification algorithm
CN115564983A (en) Target detection method and device, electronic equipment, storage medium and application thereof
CN116452966A (en) Target detection method, device and equipment for underwater image and storage medium
CN113538458A (en) U-Net image segmentation method based on FTL loss function and attention
CN115880523A (en) Image classification model, model training method and application thereof
CN117456167A (en) Target detection algorithm based on improved YOLOv8s
CN113536896B (en) Insulator defect detection method and device based on improved Faster RCNN and storage medium
CN116563285B (en) Focus characteristic identifying and dividing method and system based on full neural network
CN113139549B (en) Parameter self-adaptive panoramic segmentation method based on multitask learning
CN113313678A (en) Automatic sperm morphology analysis method based on multi-scale feature fusion
CN117274355A (en) Drainage pipeline flow intelligent measurement method based on acceleration guidance area convolutional neural network and parallel multi-scale unified network
CN116993661A (en) Clinical diagnosis method for potential cancerous polyps based on feature fusion and attention mechanism
CN116229228A (en) Small target detection method based on center surrounding mechanism
CN114742761A (en) Device for judging benign or malignant nodules based on a neural network
CN112906707B (en) Semantic segmentation method and device for surface defect image and computer equipment
CN116129417A (en) Digital instrument reading detection method based on low-quality image
CN112927250B (en) Edge detection system and method based on multi-granularity attention hierarchical network
CN113177599A (en) Enhanced sample generation method based on GAN
CN114187301A (en) X-ray image segmentation and classification prediction model based on deep neural network
Wang et al. Sonar Objective Detection Based on Dilated Separable Densely Connected CNNs and Quantum‐Behaved PSO Algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant