CN111210445A - Prostate ultrasound image segmentation method and equipment based on Mask R-CNN - Google Patents


Info

Publication number
CN111210445A
CN111210445A
Authority
CN
China
Prior art keywords
prostate
cnn
mask
segmentation
ultrasonic image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010014967.0A
Other languages
Chinese (zh)
Inventor
卢旭
刘志勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Polytechnic Normal University
Original Assignee
Guangdong Polytechnic Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Polytechnic Normal University filed Critical Guangdong Polytechnic Normal University
Priority to CN202010014967.0A priority Critical patent/CN111210445A/en
Publication of CN111210445A publication Critical patent/CN111210445A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30081Prostate

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The embodiment of the invention discloses a prostate ultrasound image segmentation method based on Mask R-CNN, which comprises the steps of establishing a Mask R-CNN + ResNet-101 network model; inputting a prostate ultrasound image to be segmented into the Mask R-CNN + ResNet-101 network model for segmentation; and outputting the segmented prostate ultrasound image. Prostate ultrasound image segmentation equipment based on Mask R-CNN is also provided, comprising a model building module, an input module and an output module. Compared with the prior art, the invention solves the problems of insufficient precision and low positioning accuracy in existing prostate ultrasound image segmentation.

Description

Prostate ultrasound image segmentation method and equipment based on Mask R-CNN
Technical Field
The invention relates to the technical field of medical image processing, in particular to a prostate ultrasonic image segmentation method and equipment based on Mask R-CNN.
Background
Prostate ultrasound image segmentation is an important research direction in the fields of computer vision and medical imaging, and the segmentation technology has application value for the medical detection and treatment of the prostate.
Because prostate ultrasound images suffer from severe speckle noise, a low signal-to-noise ratio and similar problems, and most prostate ultrasound image segmentation algorithms do not perform pixel-level segmentation, current prostate ultrasound image segmentation is insufficiently precise and has low positioning accuracy, which brings considerable trouble and pressure to doctors' subsequent judgment and work.
Therefore, how to provide a prostate ultrasound image segmentation algorithm and equipment capable of improving positioning accuracy is an urgent problem to be solved by those skilled in the art.
Disclosure of Invention
The embodiment of the invention provides a prostate ultrasonic image segmentation method and equipment based on Mask R-CNN, which can realize accurate segmentation of a prostate ultrasonic image and further reduce the workload of doctors.
In order to solve the problems in the prior art, the prostate ultrasound image segmentation method based on Mask R-CNN provided by the invention has the specific scheme that:
a prostate ultrasonic image segmentation method based on Mask R-CNN comprises the following steps:
establishing a Mask R-CNN + ResNet-101 network model;
inputting a prostate ultrasonic image to be segmented into the Mask R-CNN + ResNet-101 network model for segmentation;
and outputting the segmented prostate ultrasonic image.
Preferably, the step of establishing a Mask R-CNN + ResNet-101 network model comprises the following steps:
constructing a prostate ultrasonic image data set with segmentation marking information;
calling a Mask R-CNN + ResNet-101 network;
inputting the prostate ultrasonic image in the prostate ultrasonic image data set with the segmentation marking information into the Mask R-CNN + ResNet-101 network;
training the Mask R-CNN + ResNet-101 network according to the segmentation marking information of the prostate ultrasound image to obtain a training result;
and establishing a Mask R-CNN + ResNet-101 network model according to the training result.
Preferably, the step of training the Mask R-CNN + ResNet-101 network according to the segmentation marking information of the prostate ultrasound image to obtain a training result includes:
inputting the prostate ultrasonic image with the segmentation marking information into a Mask R-CNN + ResNet-101 network model;
extracting features of the prostate ultrasound image through a convolutional neural network to obtain a corresponding feature map of the prostate ultrasound image;
rapidly generating candidate regions on the feature map through a region proposal network, preserving floating-point coordinates on the candidate regions through a bilinear interpolation algorithm, and obtaining a fixed-size prostate ultrasound segmentation feature image through pooling;
detecting the obtained prostate ultrasound segmentation feature image to obtain target positioning and/or classification of the prostate ultrasound segmentation feature image;
and drawing a corresponding binary mask for the positioned and/or classified prostate ultrasound segmentation feature image through a fully convolutional network to realize segmentation, and outputting a predicted image of the prostate ultrasound image.
Preferably, the bilinear interpolation algorithm comprises the steps of:
First, linear interpolation is carried out in the x direction:

$$f(R_1) \approx \frac{x_2 - x}{x_2 - x_1} f(Q_{11}) + \frac{x - x_1}{x_2 - x_1} f(Q_{21}), \qquad R_1 = (x, y_1)$$

$$f(R_2) \approx \frac{x_2 - x}{x_2 - x_1} f(Q_{12}) + \frac{x - x_1}{x_2 - x_1} f(Q_{22}), \qquad R_2 = (x, y_2)$$

then linear interpolation is carried out in the y direction:

$$f(x, y) \approx \frac{y_2 - y}{y_2 - y_1} f(R_1) + \frac{y - y_1}{y_2 - y_1} f(R_2)$$

where $f(x, y)$ is the pixel value of the point $P$ to be solved; the pixel values at the four known points $Q_{11} = (x_1, y_1)$, $Q_{12} = (x_1, y_2)$, $Q_{21} = (x_2, y_1)$ and $Q_{22} = (x_2, y_2)$ are $f(Q_{11})$, $f(Q_{12})$, $f(Q_{21})$ and $f(Q_{22})$; interpolating in the x direction yields the pixel values $f(R_1)$ and $f(R_2)$.
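As a concrete illustration, the bilinear interpolation above can be written out in a few lines of Python (a minimal sketch; the function and variable names are ours, not from the patent):

```python
def bilinear_interpolate(x, y, q11, q12, q21, q22, x1, y1, x2, y2):
    """Bilinear interpolation of the pixel value at floating-point (x, y).

    q11..q22 are the known pixel values at the four corner points
    Q11=(x1, y1), Q12=(x1, y2), Q21=(x2, y1) and Q22=(x2, y2).
    """
    # Interpolate in the x direction to obtain f(R1) and f(R2)
    f_r1 = (x2 - x) / (x2 - x1) * q11 + (x - x1) / (x2 - x1) * q21  # R1 = (x, y1)
    f_r2 = (x2 - x) / (x2 - x1) * q12 + (x - x1) / (x2 - x1) * q22  # R2 = (x, y2)
    # Interpolate in the y direction to obtain f(P)
    return (y2 - y) / (y2 - y1) * f_r1 + (y - y1) / (y2 - y1) * f_r2
```

At a corner the result reduces to that corner's pixel value, and at the centre of the cell it is the mean of the four corner values, as expected of a bilinear scheme.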
Another object of the present invention is to further provide a Mask R-CNN based prostate ultrasound image segmentation apparatus, including: the model establishing module is used for establishing a Mask R-CNN + ResNet-101 network model; the input module is used for inputting the prostate ultrasonic image to be segmented into the Mask R-CNN + ResNet-101 network model; and the output module is used for outputting the segmented prostate ultrasonic image.
Preferably, the model building module comprises:
an acquisition unit for acquiring a prostate ultrasound image of a patient;
the identification unit is used for marking the dividing boundary line of the prostate ultrasonic image and constructing a prostate ultrasonic image data set with dividing marking information;
the calling unit is used for calling a Mask R-CNN + ResNet-101 network;
the input unit is used for inputting the prostate ultrasonic image in the prostate ultrasonic image data set with the segmentation marking information into the Mask R-CNN + ResNet-101 network;
the training unit is used for training the Mask R-CNN + ResNet-101 network according to the segmentation marking information of the prostate ultrasound image to obtain a training result;
and the model establishing unit is used for establishing a Mask R-CNN + ResNet-101 network model according to the training result.
Preferably, the training unit comprises:
the characteristic extraction subunit is used for extracting the characteristics of the prostate ultrasonic image through the convolutional neural network to obtain a corresponding prostate ultrasonic image characteristic diagram;
the detection subunit is used for positioning and classifying the targets in the prostate ultrasonic image;
and the segmentation subunit generates a binary mask by using a full convolution network to realize the segmentation of the prostate ultrasonic image.
According to the technical scheme, the embodiment of the invention has the following advantages: the invention collects, classifies and standardizes prostate ultrasound images based on a Mask R-CNN + ResNet-101 network and thereby establishes a Mask R-CNN + ResNet-101 network model; a large number of prostate ultrasound images are input into the convolutional neural network to extract image features; a bilinear interpolation algorithm is then applied at the ROIAlign layer to preserve floating-point coordinates; the prostate ultrasound image is pooled to obtain a fixed-size feature image, and a fully convolutional network is used to generate a binary mask, thereby realizing the segmentation of the prostate ultrasound image and the output of the segmented result.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of a prostate ultrasound image segmentation method based on Mask R-CNN according to the present invention;
FIG. 2 is a schematic diagram of an embodiment of a prostate ultrasound image segmentation method based on Mask R-CNN according to the present invention;
FIG. 3 is a schematic diagram of the structure of the ResNet-101 network in the present invention;
FIG. 4 is a block diagram schematically illustrating the structure of an embodiment of a prostate image segmentation device based on Mask R-CNN in the present invention;
FIG. 5 is a schematic view illustrating the processing effect of an embodiment of the prostate ultrasound image segmentation method based on Mask R-CNN according to the present invention;
FIG. 6 is a schematic diagram of an embodiment of ROIAlign algorithm processing of a prostate ultrasound image segmentation device based on Mask R-CNN in the present invention;
FIG. 7 is a block diagram schematically illustrating the structure of another embodiment of the prostate image segmentation device based on Mask R-CNN in the present invention;
fig. 8 is a schematic block diagram of the structure of an embodiment of a training unit in a prostate image segmentation device based on Mask R-CNN in the present invention.
Detailed Description
The embodiment of the invention provides a prostate ultrasonic image segmentation method and equipment based on Mask R-CNN, which can realize accurate segmentation of a prostate ultrasonic image and reduce the workload of doctors.
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Referring to fig. 1 to 8, there are shown schematic diagrams of an embodiment of a method, an apparatus or a principle for segmenting a prostate ultrasound image based on Mask R-CNN according to the present invention.
Example 1
The prostate ultrasound image segmentation method based on Mask R-CNN provided by the embodiment of the present invention includes the following steps:
101. establishing a Mask R-CNN + ResNet-101 network model;
102. inputting a prostate ultrasonic image to be segmented into the Mask R-CNN + ResNet-101 network model for segmentation;
103. and outputting the segmented prostate ultrasonic image.
It should be noted that there are many ways for establishing the Mask R-CNN + ResNet-101 network model, and the preferred ways and steps of the present invention are:
constructing a prostate ultrasonic image data set with segmentation marking information;
calling a Mask R-CNN + ResNet-101 network;
inputting the prostate ultrasound image in the prostate ultrasound image data set with the segmentation marking information into a Mask R-CNN + ResNet-101 network;
training a Mask R-CNN + ResNet-101 network according to the segmentation marking information of the prostate ultrasonic image to obtain a training result;
and establishing a Mask R-CNN + ResNet-101 network model according to the training result.
Meanwhile, the Mask R-CNN + ResNet-101 network is trained according to the segmentation marking information of the prostate ultrasound image to obtain a training result; the specific processing is as follows:
inputting the prostate ultrasound image with segmentation marking information, that is, with marked segmentation boundaries, into the Mask R-CNN + ResNet-101 network model;
extracting features of the prostate ultrasound image through a convolutional neural network to obtain a corresponding feature map of the prostate ultrasound image;
rapidly generating candidate regions on the feature map through a region proposal network, preserving floating-point coordinates on the candidate regions through a bilinear interpolation algorithm, and obtaining a fixed-size prostate ultrasound segmentation feature image through pooling;
detecting the obtained prostate ultrasound segmentation feature image to obtain target positioning and/or classification of the prostate ultrasound segmentation feature image;
and drawing a corresponding binary mask for the positioned and/or classified prostate ultrasound segmentation feature image through a fully convolutional network to realize segmentation, and outputting a predicted image of the prostate ultrasound image.
When detecting small, pixel-level targets in the prostate ultrasound image, a bilinear interpolation algorithm is adopted and floating-point coordinates are preserved, so that prostate ultrasound image pixels can be placed in correspondence with the feature image and the segmentation precision is improved. The bilinear interpolation algorithm is as follows:
First, linear interpolation is carried out in the x direction:

$$f(R_1) \approx \frac{x_2 - x}{x_2 - x_1} f(Q_{11}) + \frac{x - x_1}{x_2 - x_1} f(Q_{21}), \qquad R_1 = (x, y_1)$$

$$f(R_2) \approx \frac{x_2 - x}{x_2 - x_1} f(Q_{12}) + \frac{x - x_1}{x_2 - x_1} f(Q_{22}), \qquad R_2 = (x, y_2)$$

then linear interpolation is carried out in the y direction:

$$f(x, y) \approx \frac{y_2 - y}{y_2 - y_1} f(R_1) + \frac{y - y_1}{y_2 - y_1} f(R_2)$$

where $f(x, y)$ is the pixel value of the point $P$ to be solved; the pixel values at the four known points $Q_{11} = (x_1, y_1)$, $Q_{12} = (x_1, y_2)$, $Q_{21} = (x_2, y_1)$ and $Q_{22} = (x_2, y_2)$ are $f(Q_{11})$, $f(Q_{12})$, $f(Q_{21})$ and $f(Q_{22})$; interpolating in the x direction yields the pixel values $f(R_1)$ and $f(R_2)$.
FIG. 2 is a schematic diagram of an embodiment of the prostate ultrasound image segmentation method based on Mask R-CNN in the present invention. In the specific implementation, a prostate ultrasound image dataset is first obtained from Guangzhou Huaqiao Hospital and the Third Affiliated Hospital of Sun Yat-sen University; 1200 images are randomly selected from the positive and negative prostate ultrasound image data and labeled to construct a new dataset in which positive and negative images each account for an appropriate proportion. A professional manually labels the boundary region of each prostate ultrasound image using the labelme software; the Mask R-CNN + ResNet-101 network is called; the large number of manually labeled prostate ultrasound images is input into the Mask R-CNN + ResNet-101 network; the Mask R-CNN + ResNet-101 network is trained on the manually labeled prostate ultrasound segmentation regions; and the Mask R-CNN + ResNet-101 network model is established according to the training result.
The Mask R-CNN + ResNet-101 network model is trained on the segmentation boundary regions of the large number of prostate ultrasound images, as follows: a ResNet-101 network is used as the feature extractor; a COCO pre-trained model is used to transfer the parameters, so that the model starts with feature extraction capability and the training time is further reduced; model training is then performed with the labeled data, and the relevant parameters are set.
Processing the large number of manually labeled prostate images to obtain a large number of prostate segmentation feature images comprises the following steps: performing sliding-window processing on the feature map to rapidly generate candidate regions; pooling the generated candidate regions at the ROIAlign layer, so that feature maps of different scales are pooled into fixed-scale feature maps; and applying a bilinear interpolation algorithm at ROIAlign to the generated ROI feature map, preserving floating-point coordinates, to obtain a plurality of prostate segmentation feature images.
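The ROIAlign idea described above — sampling each output bin at a floating-point location instead of rounding it — can be sketched in pure Python (single channel, one sample point per output bin; the function names are ours):

```python
def bilinear_sample(fmap, x, y):
    """Sample feature map `fmap` (a list of rows) at floating-point (x, y)."""
    x1, y1 = int(x), int(y)
    x2 = min(x1 + 1, len(fmap[0]) - 1)
    y2 = min(y1 + 1, len(fmap) - 1)
    dx, dy = x - x1, y - y1
    top = (1 - dx) * fmap[y1][x1] + dx * fmap[y1][x2]
    bottom = (1 - dx) * fmap[y2][x1] + dx * fmap[y2][x2]
    return (1 - dy) * top + dy * bottom

def roi_align(fmap, roi, out_size):
    """Pool an ROI (x0, y0, x1, y1, float coordinates) into an
    out_size x out_size grid, keeping floating-point bin centres
    instead of rounding them (the source of quantization error)."""
    x0, y0, x1, y1 = roi
    bin_w = (x1 - x0) / out_size
    bin_h = (y1 - y0) / out_size
    return [
        [bilinear_sample(fmap, x0 + (j + 0.5) * bin_w, y0 + (i + 0.5) * bin_h)
         for j in range(out_size)]
        for i in range(out_size)
    ]
```

A production ROIAlign additionally averages several sample points per bin and runs over all channels, but the absence of any rounding step is the same.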
Example 2
The embodiment of the invention also provides prostate ultrasonic image segmentation equipment based on Mask R-CNN, which comprises: the model establishing module 1 is used for establishing a Mask R-CNN + ResNet-101 network model; the input module 2 is used for inputting the prostate ultrasonic image to be segmented into the Mask R-CNN + ResNet-101 network model; and the output module 3 is used for outputting the segmented prostate ultrasonic image.
Wherein, the model building module 1 comprises:
an acquisition unit 11 for acquiring an ultrasound image of a prostate of a patient;
the identification unit 12 is used for marking the dividing boundary line of the prostate ultrasonic image and constructing a prostate ultrasonic image data set with dividing marking information;
a calling unit 13, configured to call a Mask R-CNN + ResNet-101 network;
the input unit 14 is used for inputting the prostate ultrasound image in the prostate ultrasound image dataset with the segmentation marking information into the Mask R-CNN + ResNet-101 network;
the training unit 15 is used for training the Mask R-CNN + ResNet-101 network according to the segmentation marking information of the prostate ultrasound image to obtain a training result;
and the model establishing unit 16 is used for establishing a Mask R-CNN + ResNet-101 network model according to the training result.
The training unit 15 includes:
the feature extraction subunit 150 is configured to perform feature extraction on the prostate ultrasound image through the convolutional neural network to obtain a corresponding prostate ultrasound image feature map, so that the purpose of extracting a fixed-size prostate ultrasound image feature map is achieved;
a detection subunit 151, configured to perform positioning and classification on targets in the ultrasound image of the prostate;
the segmentation subunit 152 generates a binary mask by using a full convolution network, so as to segment the prostate ultrasound image and complete pixel-level differentiation.
In the embodiment of the invention, the following mode is selected according to the specific operation preference of the equipment:
the prostate ultrasound image to be detected is transmitted into the Mask R-CNN + ResNet-101 network model, a feature extraction subunit 150 of the prostate ultrasound image performs feature extraction on the prostate ultrasound image through a convolutional neural network to obtain a corresponding feature map, a candidate region is rapidly generated on the feature map by using a region suggestion network, a bilinear interpolation algorithm is fully utilized on a candidate region matching layer, floating point type coordinates are reserved, and pooling processing is performed to obtain a feature map with a fixed size to be output; then, the detection subunit 151 passes through a full connection layer on the prostate ultrasonic image characteristic image to realize the positioning and classification of the image target; and the segmentation subunit 152 generates a corresponding binary mask through a full convolution network to implement segmentation, and finally outputs a predicted image of the ultrasound image of the prostate.
In the operation process of the segmentation equipment, the area suggestion network performs sliding window on the feature map through windows with different length-width ratios so as to quickly generate candidate areas.
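The sliding window with different aspect ratios described above is commonly realized with anchor boxes: one candidate box per (scale, ratio) pair at every feature-map position. A minimal sketch follows (the stride, scale and ratio values are illustrative, not taken from the patent):

```python
def generate_anchors(fmap_h, fmap_w, stride, scales=(64,), ratios=(0.5, 1.0, 2.0)):
    """Slide over each feature-map cell and emit one anchor box per
    (scale, aspect ratio) pair, centred on that cell, in image coordinates."""
    anchors = []
    for i in range(fmap_h):
        for j in range(fmap_w):
            cx = (j + 0.5) * stride  # anchor centre in image coordinates
            cy = (i + 0.5) * stride
            for s in scales:
                for r in ratios:
                    w = s * r ** 0.5  # width and height chosen so that
                    h = s / r ** 0.5  # w * h == s * s and w / h == r
                    anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return anchors
```

The region proposal network then scores each anchor and regresses its offsets, keeping the top-scoring boxes as candidate regions.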
The prostate ultrasound image segmentation equipment based on the Mask R-CNN + ResNet-101 network model inputs a large number of prostate ultrasound images into the convolutional neural network to extract image features; a bilinear interpolation algorithm is then applied at the ROIAlign layer, preserving floating-point coordinates; and the prostate ultrasound image is pooled to obtain a fixed-size feature image. The goal of pooling at the candidate region matching layer is to pool prostate ultrasound images of different scales into fixed-scale features. This avoids the quantization error that some current segmentation methods introduce through two or more rounding operations during pooling, which causes errors in prostate ultrasound image segmentation and positioning accuracy.
The segmentation subunit 152 replaces the convolutional neural network with a fully convolutional network. Its advantage is that deconvolution is applied to the feature map of the final convolutional layer for upsampling, restoring the output to the original image size; a softmax classifier then performs per-pixel prediction, thereby classifying the class of each pixel.
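The per-pixel softmax prediction described above can be sketched in plain Python (the function names are ours, and a real implementation operates on tensors rather than nested lists):

```python
import math

def softmax(logits):
    """Numerically stable softmax over one pixel's class scores."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify_pixels(score_maps):
    """score_maps[c][i][j] is the network's score for class c at pixel (i, j);
    return the arg-max class per pixel, as the softmax classifier would."""
    n_classes = len(score_maps)
    h, w = len(score_maps[0]), len(score_maps[0][0])
    return [
        [max(range(n_classes),
             key=lambda c: softmax([score_maps[k][i][j] for k in range(n_classes)])[c])
         for j in range(w)]
        for i in range(h)
    ]
```

With two classes (background and prostate), the resulting per-pixel class map is exactly the binary mask the segmentation subunit outputs.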
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (7)

1. A prostate ultrasonic image segmentation method based on Mask R-CNN is characterized by comprising the following steps:
establishing a Mask R-CNN + ResNet-101 network model;
inputting a prostate ultrasonic image to be segmented into the Mask R-CNN + ResNet-101 network model for segmentation;
and outputting the segmented prostate ultrasonic image.
2. The method for prostate ultrasound image segmentation based on Mask R-CNN as claimed in claim 1, wherein the step of establishing a Mask R-CNN + ResNet-101 network model comprises:
constructing a prostate ultrasonic image data set with segmentation marking information;
calling a Mask R-CNN + ResNet-101 network;
inputting the prostate ultrasonic image in the prostate ultrasonic image data set with the segmentation marking information into the Mask R-CNN + ResNet-101 network;
training the Mask R-CNN + ResNet-101 network according to the segmentation marking information of the prostate ultrasonic image to obtain a training result;
and establishing a Mask R-CNN + ResNet-101 network model according to the training result.
3. The prostate ultrasound image segmentation method based on Mask R-CNN according to claim 2, wherein the step of training the Mask R-CNN + ResNet-101 network according to the segmentation labeling information of the prostate ultrasound image to obtain the training result comprises:
inputting the prostate ultrasonic image with the segmentation marking information into a Mask R-CNN + ResNet-101 network model;
extracting features of the prostate ultrasound image through a convolutional neural network to obtain a corresponding feature map of the prostate ultrasound image;
rapidly generating candidate regions on the feature map through a region proposal network, preserving floating-point coordinates on the candidate regions through a bilinear interpolation algorithm, and obtaining a fixed-size prostate ultrasound segmentation feature image through pooling;
detecting the obtained prostate ultrasound segmentation feature image to obtain target positioning and/or classification of the prostate ultrasound segmentation feature image;
and drawing a corresponding binary mask for the positioned and/or classified prostate ultrasound segmentation feature image through a fully convolutional network to realize segmentation, and outputting a predicted image of the prostate ultrasound image.
4. The Mask R-CNN-based prostate ultrasound image segmentation method according to claim 3, wherein the bilinear interpolation algorithm comprises the steps of:
$$f(R_1) \approx \frac{x_2 - x}{x_2 - x_1} f(Q_{11}) + \frac{x - x_1}{x_2 - x_1} f(Q_{21}), \qquad \text{when } R_1 = (x, y_1)$$

$$f(R_2) \approx \frac{x_2 - x}{x_2 - x_1} f(Q_{12}) + \frac{x - x_1}{x_2 - x_1} f(Q_{22}), \qquad \text{when } R_2 = (x, y_2)$$

then linear interpolation is carried out in the y direction:

$$f(x, y) \approx \frac{y_2 - y}{y_2 - y_1} f(R_1) + \frac{y - y_1}{y_2 - y_1} f(R_2)$$

where $f(x, y)$ is the pixel value of the point $P$ to be solved; the pixel values at the four known points $Q_{11} = (x_1, y_1)$, $Q_{12} = (x_1, y_2)$, $Q_{21} = (x_2, y_1)$ and $Q_{22} = (x_2, y_2)$ are $f(Q_{11})$, $f(Q_{12})$, $f(Q_{21})$ and $f(Q_{22})$; interpolating in the x direction yields the pixel values $f(R_1)$ and $f(R_2)$.
5. A prostate ultrasound image segmentation device based on Mask R-CNN is characterized by comprising:
the model establishing module is used for establishing a Mask R-CNN + ResNet-101 network model;
the input module is used for inputting the prostate ultrasonic image to be segmented into the Mask R-CNN + ResNet-101 network model;
and the output module is used for outputting the segmented prostate ultrasonic image.
6. The Mask R-CNN-based prostate ultrasound image segmentation apparatus according to claim 5, wherein the model building module comprises:
an acquisition unit for acquiring a prostate ultrasound image of a patient;
the identification unit is used for marking the dividing boundary line of the prostate ultrasonic image and constructing a prostate ultrasonic image data set with dividing marking information;
the calling unit is used for calling a Mask R-CNN + ResNet-101 network;
the input unit is used for inputting the prostate ultrasonic image in the prostate ultrasonic image data set with the segmentation marking information into the Mask R-CNN + ResNet-101 network;
the training unit is used for training the Mask R-CNN + ResNet-101 network according to the segmentation marking information of the prostate ultrasonic image to obtain a training result;
and the model establishing unit is used for establishing a Mask R-CNN + ResNet-101 network model according to the training result.
7. The Mask R-CNN-based prostate ultrasound image segmentation apparatus according to claim 6, wherein the training unit comprises:
the characteristic extraction subunit is used for extracting the characteristics of the prostate ultrasonic image through the convolutional neural network to obtain a corresponding prostate ultrasonic image characteristic diagram;
the detection subunit is used for positioning and classifying the targets in the prostate ultrasonic image;
and the segmentation subunit generates a binary mask by using a full convolution network to realize the segmentation of the prostate ultrasonic image.
CN202010014967.0A 2020-01-07 2020-01-07 Prostate ultrasound image segmentation method and equipment based on Mask R-CNN Pending CN111210445A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010014967.0A CN111210445A (en) 2020-01-07 2020-01-07 Prostate ultrasound image segmentation method and equipment based on Mask R-CNN

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010014967.0A CN111210445A (en) 2020-01-07 2020-01-07 Prostate ultrasound image segmentation method and equipment based on Mask R-CNN

Publications (1)

Publication Number Publication Date
CN111210445A true CN111210445A (en) 2020-05-29

Family

ID=70789597

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190102878A1 (en) * 2017-09-30 2019-04-04 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for analyzing medical image
CN109859184A * 2019-01-29 2019-06-07 Niu Qi Real-time detection and decision fusion method for continuously scanned breast ultrasound images
CN110059589A * 2019-03-21 2019-07-26 Duke Kunshan University Method for segmenting the iris region in an iris image based on a Mask R-CNN neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CSDN: "Detailed explanation of the Mask R-CNN algorithm and its implementation", 《HTTPS://BLOG.CSDN.NET/REMANENTED/ARTICLE/DETAILS/79564045》 *
Zhihu: "[Object Detection] Mask R-CNN", 《HTTPS://ZHUANLAN.ZHIHU.COM/P/62492064/》 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111754530A * 2020-07-02 2020-10-09 Guangdong Polytechnic Normal University Prostate ultrasound image segmentation and classification method
CN111754530B * 2020-07-02 2023-11-28 Guangdong Polytechnic Normal University Prostate ultrasound image segmentation and classification method

Similar Documents

Publication Publication Date Title
CN109978839B (en) Method for detecting wafer low-texture defects
CN111524137B (en) Cell identification counting method and device based on image identification and computer equipment
WO2022100034A1 (en) Detection method for malignant region of thyroid cell pathological section based on deep learning
CN111145209B (en) Medical image segmentation method, device, equipment and storage medium
CN109583345B (en) Road recognition method, device, computer device and computer readable storage medium
CN106529537A (en) Digital meter reading image recognition method
CN111932482A (en) Method and device for detecting target object in image, electronic equipment and storage medium
CN108830149B (en) Target bacterium detection method and terminal equipment
CN111539330B (en) Transformer substation digital display instrument identification method based on double-SVM multi-classifier
CN105740872B (en) Image feature extraction method and device
CN110689525A (en) Method and device for recognizing lymph nodes based on neural network
CN111462140B (en) Real-time image instance segmentation method based on block stitching
CN115409990B (en) Medical image segmentation method, device, equipment and storage medium
WO2020173024A1 (en) Multi-gesture precise segmentation method for smart home scenario
CN110689518A (en) Cervical cell image screening method and device, computer equipment and storage medium
CN115841669A (en) Pointer instrument detection and reading identification method based on deep learning technology
CN113706562A (en) Image segmentation method, device and system and cell segmentation method
CN110807416A (en) Digital instrument intelligent recognition device and method suitable for mobile detection device
CN113160175B (en) Tumor lymphatic vessel infiltration detection method based on cascade network
CN112489053B (en) Tongue image segmentation method and device and storage medium
CN111210445A (en) Prostate ultrasound image segmentation method and equipment based on Mask R-CNN
CN112927215A (en) Automatic analysis method for digestive tract biopsy pathological section
JP3223384B2 (en) Pattern matching device for grayscale images
CN115345895A (en) Image segmentation method and device for visual detection, computer equipment and medium
CN113033593B (en) Text detection training method and device based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination