CN115222675A - Hysteromyoma automatic typing method and device based on deep learning

Hysteromyoma automatic typing method and device based on deep learning

Info

Publication number: CN115222675A
Application number: CN202210765824.2A
Authority: CN (China)
Prior art keywords: hysteromyoma, type, area, uterine, region
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 毛丽, 李秀丽, 薛华丹, 何泳蓝, 金征宇, 俞益洲, 李一鸣, 乔昕
Current and original assignees (the listed assignees may be inaccurate): Beijing Shenrui Bolian Technology Co Ltd; Shenzhen Deepwise Bolian Technology Co Ltd
Filing and priority date: 2022-07-01 (application filed by Beijing Shenrui Bolian Technology Co Ltd and Shenzhen Deepwise Bolian Technology Co Ltd)
Publication date: 2022-10-21 (publication of CN115222675A)

Classifications

    • G06T 7/0012: image analysis; inspection of images; biomedical image inspection
    • G06N 3/08: computing arrangements based on biological models; neural networks; learning methods
    • G06T 7/11: image analysis; segmentation and edge detection; region-based segmentation
    • G06V 10/764: image or video recognition or understanding using pattern recognition or machine learning; classification, e.g. of video objects
    • G06V 10/82: image or video recognition or understanding using pattern recognition or machine learning; neural networks
    • G06T 2207/20081: indexing scheme for image analysis or enhancement; training or learning
    • G06T 2207/30096: indexing scheme for image analysis or enhancement; biomedical image processing; tumor or lesion

Landscapes

Engineering & Computer Science; Theoretical Computer Science; Physics & Mathematics; General Physics & Mathematics; Evolutionary Computation; Health & Medical Sciences; General Health & Medical Sciences; Computer Vision & Pattern Recognition; Medical Informatics; Software Systems; Computing Systems; Artificial Intelligence; Databases & Information Systems; Multimedia; Biomedical Technology; Life Sciences & Earth Sciences; Biophysics; Computational Linguistics; Data Mining & Analysis; Molecular Biology; General Engineering & Computer Science; Mathematical Physics; Nuclear Medicine, Radiotherapy & Molecular Imaging; Radiology & Medical Imaging; Quality & Reliability; Image Analysis

Abstract

The invention provides a method and a device for automatic FIGO typing of uterine fibroids (hysteromyoma) based on deep learning. The method comprises the following steps: constructing a semantic segmentation model and dividing an input pelvic medical image into 3 regions: intramucosal, the muscular layer, and extraserosal; constructing an instance segmentation model and segmenting the uterine fibroid to obtain its volume V_T; extracting the contour of the uterine fibroid and calculating the volume V_O belonging to the extraserosal region and the volume V_N belonging to the intramucosal region; and performing FIGO typing of the uterine fibroid based on V_O, V_N and V_T. The whole typing process requires no manual participation, reduces the analysis burden on imaging physicians, and helps improve the accuracy and consistency of FIGO typing.

Description

Hysteromyoma automatic typing method and device based on deep learning
Technical Field
The invention belongs to the technical field of medical imaging, and particularly relates to a method and a device for automatic typing of hysteromyoma (uterine fibroids) based on deep learning.
Background
Uterine fibroids (hysteromyoma), also called fibroids or myomas, are common benign tumors and among the most common tumors of the human body. Because they are formed mainly by the proliferation of uterine smooth muscle cells, with a small amount of fibrous connective tissue present as supporting tissue, they are also called uterine leiomyomas. Based on the relationship of a fibroid to the serosa and the mucosa, fibroids can be typed according to the FIGO (The International Federation of Gynecology and Obstetrics) classification. Fibroids of different FIGO types affect patients differently and call for different treatments. The FIGO typing of fibroids is as follows. Type 0: submucosal fibroid completely inside the uterine cavity. Type 1: fibroid mostly located in the uterine cavity, with the intramural part ≤ 50%. Type 2: fibroid protruding toward the mucosa, with the intramural part > 50%. Types 2-5: hybrid type. Type 3: fibroid completely intramural but abutting the mucosa. Type 4: fibroid completely intramural, protruding toward neither the serosal nor the mucosal layer. Type 5: fibroid protruding toward the serosa, with the intramural part ≥ 50%. Type 6: fibroid protruding toward the serosa, with the intramural part < 50%. Type 7: pedunculated subserosal fibroid. Type 8: other, without myometrial involvement (special sites such as the cervix or a broad-ligament fibroid).
At present, FIGO typing relies mainly on the subjective judgment of physicians and consumes a great deal of analysis time, and no automatic FIGO typing algorithm or device exists. Diagnosis depends on a physician's seniority and diagnostic experience, and is therefore strongly subjective and poorly consistent. The invention therefore provides an automatic FIGO typing method based on deep learning segmentation models: the serosal and mucosal regions are first determined by semantic segmentation, the uterine fibroid region is then obtained by instance segmentation on the medical image, and the FIGO type is derived automatically after the relevant features are extracted.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides an automatic hysteromyoma typing method and device based on deep learning.
In order to achieve the above object, the present invention adopts the following technical solutions.
In a first aspect, the invention provides a deep learning-based uterine fibroid automatic typing method, which comprises the following steps:
constructing a semantic segmentation model, inputting a pelvic medical image into the trained model, and dividing the image into 3 regions: intramucosal; the muscular layer, i.e. the region between the mucosa and the serosa; extraserosal;
constructing an instance segmentation model, inputting the pelvic medical image into the trained model, and segmenting the uterine fibroid to obtain the fibroid volume V_T;
extracting the contour of the uterine fibroid, determining the region adjacent to each pixel point on the contour, and calculating the fibroid volume V_O belonging to the extraserosal region and the volume V_N belonging to the intramucosal region;
performing FIGO typing of the uterine fibroid based on V_O, V_N and V_T.
Further, the semantic segmentation model outputs two classes, serosa and mucosa, and the activation function of the last layer is a sigmoid; at prediction time, regions where the sigmoid output is greater than 0.5 take the predicted class; if a pixel is predicted to be both serosa and mucosa, its final class is mucosa.
Further, the method for determining the region adjacent to each pixel point on the contour comprises:
selecting a neighborhood centered on any pixel point A on the contour;
if the non-fibroid pixel points in the neighborhood all belong to one of the 3 regions, that region is the region adjacent to pixel point A;
if the non-fibroid pixel points in the neighborhood belong to several of the 3 regions, selecting any one of those regions as the region adjacent to pixel point A.
Further, V_O and V_N are calculated as follows:
on each layer of the fibroid contour, connecting respectively the pixel points whose adjacent region is the extraserosal region and those whose adjacent region is the intramucosal region, and combining them with the contours of the extraserosal and intramucosal regions to obtain the two parts of each fibroid layer that belong to the extraserosal and intramucosal regions, respectively;
calculating from these two parts the volume V_Oi of each fibroid layer belonging to the extraserosal region and the volume V_Ni belonging to the intramucosal region, i = 1, 2, …, n, where n is the number of layers of the pelvic medical image;
and calculating:
V_O = Σ_{i=1,…,n} V_Oi and V_N = Σ_{i=1,…,n} V_Ni.
Further, performing FIGO typing of the uterine fibroid based on V_O, V_N and V_T comprises:
S0, calculating: ratio_N = V_N/V_T, ratio_O = V_O/V_T,
ratio_max_O = max_{i=1,…,n}(V_Oi/V_Ti) and ratio_max_N = max_{i=1,…,n}(V_Ni/V_Ti), where V_Ti denotes the fibroid volume in layer i;
S1, if V_N = V_O = 0, the uterine fibroid is type 4; otherwise, go to step S2;
S2, if V_T = V_N, the uterine fibroid is type 0; otherwise, go to step S3;
S3, if V_T = V_O, the uterine fibroid is type 8; otherwise, go to step S4;
S4, if V_N > 0 and V_O > 0, the uterine fibroid is type 2-5; otherwise, go to step S5;
S5, if V_O > 0, go to step S61; otherwise, go to step S71;
S61, if ratio_O is greater than 0.98, the uterine fibroid is type 7; otherwise, go to step S62;
S62, if ratio_max_O is greater than 0.5, the uterine fibroid is type 6; otherwise, go to step S63;
S63, if ratio_max_O is greater than 0.1, the uterine fibroid is type 5; otherwise, the uterine fibroid is type 4;
S71, if ratio_max_N is less than 0.1, the uterine fibroid is type 3; otherwise, go to step S72;
S72, if ratio_max_N is less than 0.5, the uterine fibroid is type 2; otherwise, the uterine fibroid is type 1.
In a second aspect, the present invention provides an automatic hysteromyoma typing device based on deep learning, comprising:
the first modeling module, used for constructing a semantic segmentation model, inputting the pelvic medical image into the trained model, and dividing the image into 3 regions: intramucosal; the muscular layer, i.e. the region between the mucosa and the serosa; extraserosal;
the second modeling module, used for constructing an instance segmentation model, inputting the pelvic medical image into the trained model, and segmenting the uterine fibroid to obtain the fibroid volume V_T;
the volume calculation module, used for extracting the contour of the uterine fibroid, determining the region adjacent to each pixel point on the contour, and calculating the fibroid volume V_O belonging to the extraserosal region and the volume V_N belonging to the intramucosal region;
the myoma typing module, used for performing FIGO typing of the uterine fibroid based on V_O, V_N and V_T.
Further, the semantic segmentation model outputs two classes, serosa and mucosa, and the activation function of the last layer is a sigmoid; at prediction time, regions where the sigmoid output is greater than 0.5 take the predicted class; if a pixel is predicted to be both serosa and mucosa, its final class is mucosa.
Further, the method for determining the region adjacent to each pixel point on the contour comprises:
selecting a neighborhood centered on any pixel point A on the contour;
if the non-fibroid pixel points in the neighborhood all belong to one of the 3 regions, that region is the region adjacent to pixel point A;
if the non-fibroid pixel points in the neighborhood belong to several of the 3 regions, selecting any one of those regions as the region adjacent to pixel point A.
Further, V_O and V_N are calculated as follows:
on each layer of the fibroid contour, connecting respectively the pixel points whose adjacent region is the extraserosal region and those whose adjacent region is the intramucosal region, and combining them with the contours of the extraserosal and intramucosal regions to obtain the two parts of each fibroid layer that belong to the extraserosal and intramucosal regions, respectively;
calculating from these two parts the volume V_Oi of each fibroid layer belonging to the extraserosal region and the volume V_Ni belonging to the intramucosal region, i = 1, 2, …, n, where n is the number of layers of the pelvic medical image;
and calculating:
V_O = Σ_{i=1,…,n} V_Oi and V_N = Σ_{i=1,…,n} V_Ni.
Further, performing FIGO typing of the uterine fibroid based on V_O, V_N and V_T comprises:
S0, calculating: ratio_N = V_N/V_T, ratio_O = V_O/V_T,
ratio_max_O = max_{i=1,…,n}(V_Oi/V_Ti) and ratio_max_N = max_{i=1,…,n}(V_Ni/V_Ti), where V_Ti denotes the fibroid volume in layer i;
S1, if V_N = V_O = 0, the uterine fibroid is type 4; otherwise, go to step S2;
S2, if V_T = V_N, the uterine fibroid is type 0; otherwise, go to step S3;
S3, if V_T = V_O, the uterine fibroid is type 8; otherwise, go to step S4;
S4, if V_N > 0 and V_O > 0, the uterine fibroid is type 2-5; otherwise, go to step S5;
S5, if V_O > 0, go to step S61; otherwise, go to step S71;
S61, if ratio_O is greater than 0.98, the uterine fibroid is type 7; otherwise, go to step S62;
S62, if ratio_max_O is greater than 0.5, the uterine fibroid is type 6; otherwise, go to step S63;
S63, if ratio_max_O is greater than 0.1, the uterine fibroid is type 5; otherwise, the uterine fibroid is type 4;
S71, if ratio_max_N is less than 0.1, the uterine fibroid is type 3; otherwise, go to step S72;
S72, if ratio_max_N is less than 0.5, the uterine fibroid is type 2; otherwise, the uterine fibroid is type 1.
Compared with the prior art, the invention has the following beneficial effects.
The invention constructs a semantic segmentation model to divide the medical image into 3 regions, constructs an instance segmentation model to segment the uterine fibroid and obtain the fibroid volume V_T, extracts the contour of the fibroid, determines the region adjacent to each pixel point on the contour, calculates the fibroid volume V_O belonging to the extraserosal region and the volume V_N belonging to the intramucosal region, and performs FIGO typing of the fibroid based on V_O, V_N and V_T, thereby realizing automatic typing of uterine fibroids. The whole typing process requires no manual participation, greatly reduces the analysis burden on imaging physicians, and helps improve the accuracy and consistency of FIGO typing.
Drawings
Fig. 1 is a flowchart of an automatic uterine fibroid typing method based on deep learning according to an embodiment of the present invention.
Fig. 2 is a schematic illustration of uterine fibroids, mucosa and serosa labeling.
Fig. 3 is a schematic diagram of a method for determining the region to which the hysteromyoma contour point belongs.
Fig. 4 is a schematic view of the distribution of the uterine fibroid volume over the 3 regions.
FIG. 5 is a flowchart of FIGO typing of the uterine fibroid based on V_O, V_N and V_T.
Fig. 6 is a block diagram of an automatic hysteromyoma typing device based on deep learning according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer and more obvious, the present invention is further described below with reference to the accompanying drawings and the detailed description. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart of an automatic uterine fibroid typing method based on deep learning according to an embodiment of the present invention, including the following steps:
step 101, constructing a semantic segmentation model, inputting a pelvic medical image into the trained model, and dividing the image into 3 regions: intramucosal; the muscular layer, i.e. the region between the mucosa and the serosa; extraserosal;
step 102, constructing an instance segmentation model, inputting the pelvic medical image into the trained model, and segmenting the uterine fibroid to obtain the fibroid volume V_T;
step 103, extracting the contour of the uterine fibroid, determining the region adjacent to each pixel point on the contour, and calculating the fibroid volume V_O belonging to the extraserosal region and the volume V_N belonging to the intramucosal region;
step 104, performing FIGO typing of the uterine fibroid based on V_O, V_N and V_T.
In this embodiment, step 101 serves mainly to divide the input pelvic medical image into 3 regions: intramucosal, the muscular layer (the mucosa lies inside the serosa; the muscular layer is the region between mucosa and serosa) and extraserosal, as shown in fig. 4. A semantic segmentation model is constructed and the pelvic medical image is fed into the trained model, which divides the image into the 3 regions. The semantic segmentation model classifies each pixel of the input image. The nnU-Net framework can be used to train the semantic segmentation model, yielding a 3D U-Net segmentation model. Model training requires building a training set, a validation set and a test set from collected medical images of the pelvic region. Imaging sequences commonly used to observe fibroids, serosa and mucosa can be selected, such as sagittal T2WI and axial T2WI pelvic images. The whole data collection process must follow confidentiality principles: the collected image data are anonymized and all patient personal information is removed. The collected images must also be annotated: under the guidance of a radiologist with years of experience in female pelvic imaging diagnosis, experienced annotators delineate the contours of the uterus and endometrium on each patient's images, delineate all uterine fibroids, and label the FIGO type according to the guidelines. As shown in fig. 2, when two fibroids are present, each is labeled separately. To avoid the adverse effect of differences in image resolution and size on model performance, the collected image sequences are normalized in spatial resolution. First, the resolution and image size of all sequences are tallied, and the medians are chosen as the target resolution and target size. All images are resampled to the target resolution with an interpolation algorithm and cropped to the target size; the physicians' delineations are processed in the same way, i.e. resampled to the target resolution and cropped to the target size. Finally, all data are split by patient into the training, validation and test sets; the split ratio of the 3 sets can be chosen according to the data volume, for example 6:2:2. The pelvic medical images of this embodiment include, but are not limited to, MR images.
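To make the preprocessing concrete, the following is a minimal Python sketch of the resampling, cropping and patient-level splitting described above; the function names, the use of scipy for interpolation, and the 6:2:2 split are illustrative assumptions, not taken from the patent.

```python
# Minimal preprocessing sketch; names, scipy-based interpolation and the
# 6:2:2 split are assumptions for illustration, not the patent's prescription.
import numpy as np
from scipy.ndimage import zoom

def resample_to_target(volume, spacing, target_spacing, order=1):
    """Resample a (layers, H, W) volume to the target voxel spacing.
    order=1 gives linear interpolation for images; use order=0
    (nearest neighbour) for delineation masks so labels are preserved."""
    factors = [s / t for s, t in zip(spacing, target_spacing)]
    return zoom(volume, factors, order=order)

def center_crop_or_pad(volume, target_shape):
    """Symmetrically crop (or zero-pad) each axis to the target size."""
    out = np.zeros(target_shape, dtype=volume.dtype)
    src, dst = [], []
    for cur, tgt in zip(volume.shape, target_shape):
        if cur >= tgt:
            off = (cur - tgt) // 2
            src.append(slice(off, off + tgt)); dst.append(slice(0, tgt))
        else:
            off = (tgt - cur) // 2
            src.append(slice(0, cur)); dst.append(slice(off, off + cur))
    out[tuple(dst)] = volume[tuple(src)]
    return out

def split_by_patient(patient_ids, ratios=(0.6, 0.2, 0.2), seed=0):
    """Patient-level train/validation/test split (6:2:2 assumed)."""
    ids = np.random.default_rng(seed).permutation(patient_ids)
    n_tr = int(ratios[0] * len(ids)); n_va = int(ratios[1] * len(ids))
    return ids[:n_tr], ids[n_tr:n_tr + n_va], ids[n_tr + n_va:]
```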
In this embodiment, step 102 serves mainly to segment the uterine fibroid: an instance segmentation model is constructed and the pelvic medical image is fed into the trained model. Image segmentation can be divided into semantic segmentation and instance segmentation. Semantic segmentation, as mentioned above, classifies each pixel of the input image; instance segmentation combines object detection and semantic segmentation, first detecting the objects in an image (object detection) and then labeling each of their pixels (semantic segmentation). An instance segmentation task therefore not only decides whether a pixel belongs to an object of interest but also distinguishes different objects of the same class. Any instance segmentation model that accomplishes the fibroid segmentation task can be chosen; Mask R-CNN is one option. Since pelvic MR scans are generally thick-layer data, segmentation can be performed in a pseudo-3D manner: the target layer and the 1 layer before and after it are taken as input, and a 2D Mask R-CNN model predicts the segmentation of the target layer. For example, to predict layer k, the input images are the 3 layers k-1, k and k+1, combined as the 3 channels of the input tensor, and the output is the segmentation result of layer k. This mimics the RGB three-channel format of natural images, so pretraining parameters from natural-image datasets such as ImageNet can be fully exploited and model overfitting avoided. Note that Mask R-CNN is only an example; other existing instance segmentation models can also be used, and the method is not limited to the pseudo-3D approach: a 3D instance segmentation algorithm can be applied directly. Once the fibroid is segmented, its volume can be obtained.
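The pseudo-3D input assembly described above can be sketched as follows; the clamping of layer indices at the volume boundary is an assumption, since the patent does not specify edge handling.

```python
# Pseudo-3D input: target layer k plus its neighbours as 3 channels.
# Index clamping at the top/bottom of the stack is an assumption.
import numpy as np

def pseudo3d_input(volume, k):
    """Stack layers k-1, k, k+1 of a (layers, H, W) volume into one
    (3, H, W) input, mimicking the RGB format of natural images."""
    n = volume.shape[0]
    idx = (max(k - 1, 0), k, min(k + 1, n - 1))
    return np.stack([volume[i] for i in idx], axis=0)
```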
In this embodiment, step 103 serves mainly to calculate the fibroid volume V_O belonging to the extraserosal region and the volume V_N belonging to the intramucosal region; V_O and V_N are computed so that step 104 can perform the typing based on V_O, V_N and V_T. The fibroid contour is first extracted with an image contour-retrieval algorithm. Since the lesion contains no holes, only the outermost contour is retrieved, yielding all contour points of the fibroid in the current layer. Then, based on the 3 regions into which the input pelvic image was divided in step 101, the region adjacent to each pixel point on the fibroid contour is determined; on that basis, the parts of the fibroid belonging to the extraserosal and intramucosal regions are determined, and the volumes V_O and V_N are calculated. As shown in fig. 4, V_Oi and V_Ni are respectively the volume of the i-th fibroid layer belonging to the extraserosal region and the volume belonging to the intramucosal region; summing V_Oi and V_Ni over i yields V_O and V_N. Note that V_Oi and V_Ni are volumes rather than areas, because each layer image has a certain thickness, the layer thickness (a medical-image parameter giving the actual thickness of the body section that the layer represents, readable from storage media such as DICOM). The volume is obtained as the product of the area and the layer thickness.
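A sketch of the outer-contour retrieval and the area-times-thickness volume computation follows; the use of OpenCV's findContours with RETR_EXTERNAL is one plausible realization of the "only the outermost contour" retrieval, an assumption rather than something the patent prescribes.

```python
# Outer-contour retrieval and per-layer volume; OpenCV (>= 4) is one
# possible backend, an assumption rather than the patent's prescription.
import cv2
import numpy as np

def outer_contour_points(mask_2d):
    """All outermost-contour pixels of a binary lesion mask.
    RETR_EXTERNAL retrieves only the outer contour, matching the
    'lesion contains no holes' observation in the text."""
    contours, _ = cv2.findContours(mask_2d.astype(np.uint8),
                                   cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    return [c.reshape(-1, 2) for c in contours]  # (x, y) pairs

def layer_volume(mask_2d, pixel_spacing, layer_thickness):
    """Volume of one layer = pixel count x pixel area x layer thickness."""
    pixel_area = pixel_spacing[0] * pixel_spacing[1]
    return float(mask_2d.sum()) * pixel_area * layer_thickness
```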
In this embodiment, step 104 serves mainly to type the fibroid based on V_O, V_N and V_T. The background section gives the FIGO typing of uterine fibroids: types 0-8 plus a hybrid type, types 2-5, together with the characteristics of each type. According to those characteristics, the fibroid type can be identified from the values of V_O, V_N and V_T (or simple functions of them). A concrete typing method is given in the embodiment below.
As an optional embodiment, the semantic segmentation model outputs two classes, serosa and mucosa, and the activation function of the last layer is a sigmoid; at prediction time, regions where the sigmoid output is greater than 0.5 take the predicted class; if a pixel is predicted to be both serosa and mucosa, its final class is mucosa.
This embodiment gives a concrete design of the semantic segmentation model, which improves on existing segmentation models. In a common multi-class segmentation model, the number of output classes equals the number of prediction targets + 1 (background); for example, the original model here would output 3 channels, serosa, mucosa and background, transform them into prediction probabilities with the softmax activation, and take the class with maximum probability as the pixel's prediction, so the predictions are mutually exclusive. In this embodiment, however, the serosal and mucosal regions are in a containment relationship (the mucosa lies inside the serosa), so some pixels may lie within both. The output layer is therefore changed to a 2-channel convolutional layer that predicts serosa and mucosa separately, and the final nonlinearity is changed from softmax to sigmoid. At prediction time, regions whose sigmoid output exceeds 0.5 are taken as the prediction target. Since the mucosa lies within the serosa, mucosa has higher priority than serosa: if a pixel is predicted to be both serosa and mucosa, its final result is mucosa.
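As a sketch, the modified head and the mucosa-priority rule could look like the following in PyTorch; the channel count of the incoming feature map is an arbitrary placeholder.

```python
# Sketch of the 2-channel sigmoid head with mucosa priority (PyTorch).
# The 32-channel input feature map is an arbitrary placeholder.
import torch
import torch.nn as nn

head = nn.Conv2d(in_channels=32, out_channels=2, kernel_size=1)  # serosa, mucosa

def predict_regions(features):
    """Return boolean serosa/mucosa masks; where both sigmoid outputs
    exceed 0.5, the pixel is resolved to mucosa (higher priority)."""
    probs = torch.sigmoid(head(features))   # (B, 2, H, W), not mutually exclusive
    serosa = probs[:, 0] > 0.5
    mucosa = probs[:, 1] > 0.5
    serosa = serosa & ~mucosa                # mucosa wins on overlap
    return serosa, mucosa
```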
As an alternative embodiment, the method for determining the region adjacent to each pixel point on the contour comprises:
selecting a neighborhood centered on any pixel point A on the contour;
if the non-fibroid pixel points in the neighborhood all belong to one of the 3 regions, that region is the region adjacent to pixel point A;
if the non-fibroid pixel points in the neighborhood belong to several of the 3 regions, selecting any one of those regions as the region adjacent to pixel point A.
This embodiment gives a scheme for determining the region adjacent to any pixel point on the fibroid contour, described above using pixel point A as the example. To aid understanding, fig. 3 illustrates a 4-neighborhood determination: taking any contour pixel as the center of a 3x3 grid, the regions of the 4 adjacent pixels above, below, left and right of the center are examined, ignoring pixels that belong to the fibroid. In fig. 3, the 2 pixels on the left and below belong to the fibroid, while the 2 pixels above and to the right belong to the extraserosal region, so the region adjacent to the center point is the extraserosal region, marked O. The regions adjacent to the other pixels are obtained in the same way; in fig. 3 the mark J denotes the muscular layer and N the intramucosal region.
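A sketch of this 4-neighborhood rule; the encoding of the region map and the fibroid mask as arrays, and the 'O'/'J'/'N' label codes, are illustrative assumptions.

```python
# 4-neighbourhood rule of Fig. 3; array encodings ('O', 'J', 'N' codes
# in region_map, boolean fibroid_mask) are illustrative assumptions.
def adjacent_region(region_map, fibroid_mask, y, x):
    """Region adjacent to contour pixel (y, x): examine the 4 pixels
    above, below, left and right, ignore fibroid pixels, and return
    any one of the region labels found (as the text allows)."""
    h, w = len(region_map), len(region_map[0])
    labels = set()
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ny, nx = y + dy, x + dx
        if 0 <= ny < h and 0 <= nx < w and not fibroid_mask[ny][nx]:
            labels.add(region_map[ny][nx])
    return labels.pop() if labels else None
```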
As an alternative embodiment, V_O and V_N are calculated as follows:
on each layer of the fibroid contour, connecting respectively the pixel points whose adjacent region is the extraserosal region and those whose adjacent region is the intramucosal region, and combining them with the contours of the extraserosal and intramucosal regions to obtain the two parts of each fibroid layer that belong to the extraserosal and intramucosal regions, respectively;
calculating from these two parts the volume V_Oi of each fibroid layer belonging to the extraserosal region and the volume V_Ni belonging to the intramucosal region, i = 1, 2, …, n, where n is the number of layers of the pelvic medical image;
and calculating:
V_O = Σ_{i=1,…,n} V_Oi and V_N = Σ_{i=1,…,n} V_Ni.
this example presents the calculation of V O 、V N The technical scheme of (1). This example calculates V layer by layer Oi 、V Ni Then summing the layers to obtain V O 、V N . In this embodiment, the pixel points of which the adjacent region is the serosa outer region and the pixel points of which the adjacent region is the mucosal inner region are connected to each layer of hysteromyoma contour, for example, the pixel points marked as O and the pixel points marked as N in fig. 3 are connected, and then the region where the hysteromyoma protrudes the serosa outward and the region where the hysteromyoma protrudes the mucosal inward can be obtained according to the contours of the serosa outer region and the mucosal inner region, so as to obtain V Oi 、V Ni As shown in fig. 4.
As an alternative embodiment, performing FIGO typing of the uterine fibroid based on V_O, V_N and V_T comprises:
S0, calculating: ratio_N = V_N/V_T, ratio_O = V_O/V_T,
ratio_max_O = max_{i=1,…,n}(V_Oi/V_Ti) and ratio_max_N = max_{i=1,…,n}(V_Ni/V_Ti), where V_Ti denotes the fibroid volume in layer i;
S1, if V_N = V_O = 0, the uterine fibroid is type 4; otherwise, go to step S2;
S2, if V_T = V_N, the uterine fibroid is type 0; otherwise, go to step S3;
S3, if V_T = V_O, the uterine fibroid is type 8; otherwise, go to step S4;
S4, if V_N > 0 and V_O > 0, the uterine fibroid is type 2-5; otherwise, go to step S5;
S5, if V_O > 0, go to step S61; otherwise, go to step S71;
S61, if ratio_O is greater than 0.98, the uterine fibroid is type 7; otherwise, go to step S62;
S62, if ratio_max_O is greater than 0.5, the uterine fibroid is type 6; otherwise, go to step S63;
S63, if ratio_max_O is greater than 0.1, the uterine fibroid is type 5; otherwise, the uterine fibroid is type 4;
S71, if ratio_max_N is less than 0.1, the uterine fibroid is type 3; otherwise, go to step S72;
S72, if ratio_max_N is less than 0.5, the uterine fibroid is type 2; otherwise, the uterine fibroid is type 1.
This embodiment gives the scheme for the FIGO typing of the fibroid based on V_O, V_N and V_T. First the 4 ratios ratio_N, ratio_O, ratio_max_O and ratio_max_N are computed from V_O, V_N and V_T; the FIGO type of the fibroid is then decided from the relations and value ranges of these parameters. The concrete typing procedure follows the flowchart in fig. 5 and is not described again here.
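The S0-S72 decision tree transcribes directly into code; here is a sketch, with thresholds taken from the text and the ratio_max arguments assumed precomputed per the formulas in S0.

```python
# Direct transcription of steps S0-S72 (Fig. 5); thresholds are from the
# text, and ratio_max_O / ratio_max_N are assumed precomputed per S0.
def figo_type(V_T, V_N, V_O, ratio_max_N, ratio_max_O):
    ratio_O = V_O / V_T
    if V_N == 0 and V_O == 0:        # S1
        return "4"
    if V_T == V_N:                   # S2: entirely intramucosal
        return "0"
    if V_T == V_O:                   # S3: entirely extraserosal
        return "8"
    if V_N > 0 and V_O > 0:          # S4: hybrid
        return "2-5"
    if V_O > 0:                      # S5 -> S61..S63
        if ratio_O > 0.98:           # S61
            return "7"
        if ratio_max_O > 0.5:        # S62
            return "6"
        return "5" if ratio_max_O > 0.1 else "4"   # S63
    if ratio_max_N < 0.1:            # S71
        return "3"
    return "2" if ratio_max_N < 0.5 else "1"       # S72
```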
Fig. 6 is a schematic composition diagram of an automatic hysteromyoma typing device based on deep learning according to an embodiment of the present invention, where the device includes:
the first modeling module 11 is configured to construct a semantic segmentation model, input the pelvic medical image into the trained model, and divide the image into 3 regions: intramucosal; the muscular layer, i.e. the region between the mucosa and the serosa; extraserosal;
the second modeling module 12 is configured to construct an instance segmentation model, input the pelvic medical image into the trained model, and segment the uterine fibroid to obtain the fibroid volume V_T;
the volume calculation module 13 is configured to extract the contour of the uterine fibroid, determine the region adjacent to each pixel point on the contour, and calculate the fibroid volume V_O belonging to the extraserosal region and the volume V_N belonging to the intramucosal region;
the myoma typing module 14 is configured to perform FIGO typing of the uterine fibroid based on V_O, V_N and V_T.
The apparatus of this embodiment may be used to implement the technical solution of the method embodiment shown in fig. 1, and the implementation principle and the technical effect are similar, which are not described herein again. The same applies to the following embodiments, which are not further described.
As an optional embodiment, the semantic segmentation model outputs two classes, serosa and mucosa, and the activation function of the last layer is a sigmoid; at prediction time, regions where the sigmoid output is greater than 0.5 take the predicted class; if a pixel is predicted to be both serosa and mucosa, its final class is mucosa.
As an alternative embodiment, the method for determining the region adjacent to each pixel point on the contour comprises:
selecting a neighborhood centered on any pixel point A on the contour;
if the non-fibroid pixel points in the neighborhood all belong to one of the 3 regions, that region is the region adjacent to pixel point A;
if the non-fibroid pixel points in the neighborhood belong to several of the 3 regions, selecting any one of those regions as the region adjacent to pixel point A.
As an alternative embodiment, V_O and V_N are calculated as follows:
on each layer of the fibroid contour, connecting respectively the pixel points whose adjacent region is the extraserosal region and those whose adjacent region is the intramucosal region, and combining them with the contours of the extraserosal and intramucosal regions to obtain the two parts of each fibroid layer that belong to the extraserosal and intramucosal regions, respectively;
calculating from these two parts the volume V_Oi of each fibroid layer belonging to the extraserosal region and the volume V_Ni belonging to the intramucosal region, i = 1, 2, …, n, where n is the number of layers of the pelvic medical image;
and calculating:
V_O = Σ_{i=1,…,n} V_Oi and V_N = Σ_{i=1,…,n} V_Ni.
As an alternative embodiment, performing FIGO typing of the uterine fibroid based on V_O, V_N and V_T comprises:
S0, calculating: ratio_N = V_N/V_T, ratio_O = V_O/V_T,
ratio_max_O = max_{i=1,…,n}(V_Oi/V_Ti) and ratio_max_N = max_{i=1,…,n}(V_Ni/V_Ti), where V_Ti denotes the fibroid volume in layer i;
S1, if V_N = V_O = 0, the uterine fibroid is type 4; otherwise, go to step S2;
S2, if V_T = V_N, the uterine fibroid is type 0; otherwise, go to step S3;
S3, if V_T = V_O, the uterine fibroid is type 8; otherwise, go to step S4;
S4, if V_N > 0 and V_O > 0, the uterine fibroid is type 2-5; otherwise, go to step S5;
S5, if V_O > 0, go to step S61; otherwise, go to step S71;
S61, if ratio_O is greater than 0.98, the uterine fibroid is type 7; otherwise, go to step S62;
S62, if ratio_max_O is greater than 0.5, the uterine fibroid is type 6; otherwise, go to step S63;
S63, if ratio_max_O is greater than 0.1, the uterine fibroid is type 5; otherwise, the uterine fibroid is type 4;
S71, if ratio_max_N is less than 0.1, the uterine fibroid is type 3; otherwise, go to step S72;
S72, if ratio_max_N is less than 0.5, the uterine fibroid is type 2; otherwise, the uterine fibroid is type 1.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. An automatic uterine fibroid typing method based on deep learning is characterized by comprising the following steps:
constructing a semantic segmentation model, inputting a pelvic medical image into the trained model, and dividing the image into 3 regions: intramucosal; the muscular layer, i.e. the region between the mucosa and the serosa; extraserosal;
constructing an instance segmentation model, inputting the pelvic medical image into the trained model, and segmenting the uterine fibroid to obtain the fibroid volume V_T;
extracting the contour of the uterine fibroid, determining the region adjacent to each pixel point on the contour, and calculating the fibroid volume V_O belonging to the extraserosal region and the volume V_N belonging to the intramucosal region;
performing FIGO typing of the uterine fibroid based on V_O, V_N and V_T.
2. The deep learning-based uterine fibroid automatic typing method according to claim 1, wherein the semantic segmentation model outputs two classes, serosa and mucosa, and the activation function of the last layer is a sigmoid; at prediction time, regions where the sigmoid output is greater than 0.5 take the predicted class; if a pixel is predicted to be both serosa and mucosa, its final class is mucosa.
3. The deep learning-based uterine fibroid automatic typing method according to claim 1, wherein the method for determining the region adjacent to each pixel point on the contour comprises:
selecting a neighborhood centered on any pixel point A on the contour;
if the non-fibroid pixel points in the neighborhood all belong to one of the 3 regions, that region is the region adjacent to pixel point A;
if the non-fibroid pixel points in the neighborhood belong to several of the 3 regions, selecting any one of those regions as the region adjacent to pixel point A.
4. The deep learning-based uterine fibroid automatic typing method according to claim 3, wherein V_O and V_N are calculated as follows:
on each layer of the fibroid contour, connecting respectively the pixel points whose adjacent region is the extraserosal region and those whose adjacent region is the intramucosal region, and combining them with the contours of the extraserosal and intramucosal regions to obtain the two parts of each fibroid layer that belong to the extraserosal and intramucosal regions, respectively;
calculating from these two parts the volume V_Oi of each fibroid layer belonging to the extraserosal region and the volume V_Ni belonging to the intramucosal region, i = 1, 2, …, n, where n is the number of layers of the pelvic medical image;
and calculating:
V_O = Σ_{i=1,…,n} V_Oi and V_N = Σ_{i=1,…,n} V_Ni.
5. The deep learning-based uterine fibroid automatic typing method according to claim 4, wherein performing FIGO typing of the uterine fibroid based on V_O, V_N and V_T comprises:
S0, calculating: ratio_N = V_N/V_T, ratio_O = V_O/V_T,
ratio_max_O = max_{i=1,…,n}(V_Oi/V_Ti) and ratio_max_N = max_{i=1,…,n}(V_Ni/V_Ti), where V_Ti denotes the fibroid volume in layer i;
S1, if V_N = V_O = 0, the uterine fibroid is type 4; otherwise, go to step S2;
S2, if V_T = V_N, the uterine fibroid is type 0; otherwise, go to step S3;
S3, if V_T = V_O, the uterine fibroid is type 8; otherwise, go to step S4;
S4, if V_N > 0 and V_O > 0, the uterine fibroid is type 2-5; otherwise, go to step S5;
S5, if V_O > 0, go to step S61; otherwise, go to step S71;
S61, if ratio_O is greater than 0.98, the uterine fibroid is type 7; otherwise, go to step S62;
S62, if ratio_max_O is greater than 0.5, the uterine fibroid is type 6; otherwise, go to step S63;
S63, if ratio_max_O is greater than 0.1, the uterine fibroid is type 5; otherwise, the uterine fibroid is type 4;
S71, if ratio_max_N is less than 0.1, the uterine fibroid is type 3; otherwise, go to step S72;
S72, if ratio_max_N is less than 0.5, the uterine fibroid is type 2; otherwise, the uterine fibroid is type 1.
6. An automatic hysteromyoma typing device based on deep learning, the device comprising:
a first modeling module for constructing a semantic segmentation model, inputting the pelvic medical image into the trained model, and dividing the image into 3 regions: intramucosal; the muscular layer, i.e. the region between the mucosa and the serosa; extraserosal;
a second modeling module for constructing an instance segmentation model, inputting the pelvic medical image into the trained model, and segmenting the uterine fibroid to obtain the fibroid volume V_T;
a volume calculation module for extracting the contour of the uterine fibroid, determining the region adjacent to each pixel point on the contour, and calculating the fibroid volume V_O belonging to the extraserosal region and the volume V_N belonging to the intramucosal region;
a myoma typing module for performing FIGO typing of the uterine fibroid based on V_O, V_N and V_T.
7. The deep learning-based hysteromyoma automatic typing device according to claim 6, wherein the semantic segmentation model outputs two classes, serosa and mucosa, and the activation function of the last layer is a sigmoid; at prediction time, regions where the sigmoid output is greater than 0.5 take the predicted class; if a pixel is predicted to be both serosa and mucosa, its final class is mucosa.
8. The apparatus of claim 6, wherein the method for determining the region adjacent to each pixel point on the contour comprises:
selecting a neighborhood centered on any pixel point A on the contour;
if the non-fibroid pixel points in the neighborhood all belong to one of the 3 regions, that region is the region adjacent to pixel point A;
if the non-fibroid pixel points in the neighborhood belong to several of the 3 regions, selecting any one of those regions as the region adjacent to pixel point A.
9. The deep learning-based hysteromyoma automatic typing device according to claim 8, wherein V_O and V_N are calculated as follows:
on each layer of the fibroid contour, connecting respectively the pixel points whose adjacent region is the extraserosal region and those whose adjacent region is the intramucosal region, and combining them with the contours of the extraserosal and intramucosal regions to obtain the two parts of each fibroid layer that belong to the extraserosal and intramucosal regions, respectively;
calculating from these two parts the volume V_Oi of each fibroid layer belonging to the extraserosal region and the volume V_Ni belonging to the intramucosal region, i = 1, 2, …, n, where n is the number of layers of the pelvic medical image;
and calculating:
V_O = Σ_{i=1,…,n} V_Oi and V_N = Σ_{i=1,…,n} V_Ni.
10. The deep learning-based hysteromyoma automatic typing device according to claim 9, wherein performing FIGO typing of the uterine fibroid based on V_O, V_N and V_T comprises:
S0, calculating: ratio_N = V_N/V_T, ratio_O = V_O/V_T,
ratio_max_O = max_{i=1,…,n}(V_Oi/V_Ti) and ratio_max_N = max_{i=1,…,n}(V_Ni/V_Ti), where V_Ti denotes the fibroid volume in layer i;
S1, if V_N = V_O = 0, the uterine fibroid is type 4; otherwise, go to step S2;
S2, if V_T = V_N, the uterine fibroid is type 0; otherwise, go to step S3;
S3, if V_T = V_O, the uterine fibroid is type 8; otherwise, go to step S4;
S4, if V_N > 0 and V_O > 0, the uterine fibroid is type 2-5; otherwise, go to step S5;
S5, if V_O > 0, go to step S61; otherwise, go to step S71;
S61, if ratio_O is greater than 0.98, the uterine fibroid is type 7; otherwise, go to step S62;
S62, if ratio_max_O is greater than 0.5, the uterine fibroid is type 6; otherwise, go to step S63;
S63, if ratio_max_O is greater than 0.1, the uterine fibroid is type 5; otherwise, the uterine fibroid is type 4;
S71, if ratio_max_N is less than 0.1, the uterine fibroid is type 3; otherwise, go to step S72;
S72, if ratio_max_N is less than 0.5, the uterine fibroid is type 2; otherwise, the uterine fibroid is type 1.
CN202210765824.2A (priority date 2022-07-01, filing date 2022-07-01): Hysteromyoma automatic typing method and device based on deep learning. Status: Pending. Publication: CN115222675A.

Priority Applications (1)

CN202210765824.2A (priority date 2022-07-01, filing date 2022-07-01): Hysteromyoma automatic typing method and device based on deep learning


Publications (1)

CN115222675A, published 2022-10-21

Family

ID=83610779

Family Applications (1)

CN202210765824.2A (pending): CN115222675A

Country Status (1)

CN: CN115222675A

Cited By (1)

* Cited by examiner, † Cited by third party
CN116029999A * (priority 2022-12-28, published 2023-04-28; 北京优创新港科技股份有限公司): Smoke and flame segmentation method and system based on u2net



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination