WO2020164493A1 - Medical image region filtering method, apparatus and storage medium - Google Patents


Info

Publication number
WO2020164493A1
WO2020164493A1 (PCT/CN2020/074782)
Authority
WO
WIPO (PCT)
Prior art keywords
area
medical image
image
tissue
target
Prior art date
Application number
PCT/CN2020/074782
Other languages
English (en)
French (fr)
Inventor
江铖
田宽
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Priority to EP20755919.6A (published as EP3926581A4)
Publication of WO2020164493A1
Priority to US17/367,316 (published as US11995821B2)

Classifications

    • G06T7/0012 Biomedical image inspection
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06T5/73 Deblurring; Sharpening
    • G06T7/11 Region-based segmentation
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06T2207/10116 X-ray image
    • G06T2207/20036 Morphological image processing
    • G06T2207/20076 Probabilistic image processing
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20132 Image cropping
    • G06T2207/30068 Mammography; Breast
    • G06T2207/30096 Tumor; Lesion

Definitions

  • This application relates to the field of image recognition, and in particular to a method, apparatus, and storage medium for filtering medical image regions.
  • A mass is an important local feature for judging whether the tissue of a living body is normal.
  • Filtering out suspected malignant masses gives doctors a better basis for distinguishing benign from malignant masses, and is an important basis for cancer diagnosis.
  • At present, suspected malignant masses in medical images are commonly filtered manually to determine the disease.
  • the embodiments of the present application provide a medical image region filtering method, device, and storage medium, which can improve the accuracy of medical image region filtering.
  • the embodiment of the present application provides a medical image region filtering method, including:
  • an embodiment of the present application also provides a medical image region filtering device, including:
  • an acquisition module, configured to acquire a medical image of a target part of biological tissue;
  • a segmentation module, configured to segment part tissue regions of multiple tissue types from the target-part medical image;
  • a retention module, configured to select, based on the shooting position type of the target-part medical image, a reserved area to be retained from the tissue regions of the multiple tissue types;
  • a position relationship acquisition module, configured to acquire the positional relationship between the reserved area and a predicted lesion area in the target-part medical image;
  • a filtering module, configured to filter the predicted lesion area in the target-part medical image based on the positional relationship to obtain a target lesion area.
  • an embodiment of the present application further provides a storage medium storing instructions which, when executed by a processor, implement the steps of the medical image region filtering method provided in any embodiment of the present application.
  • FIG. 1 is a schematic diagram of a scene of a medical image region filtering system provided by an embodiment of the present application;
  • FIG. 2 is a schematic diagram of a first flow of a medical image region filtering method provided by an embodiment of the present application;
  • FIG. 3 is a schematic diagram of a second flow of a medical image region filtering method provided by an embodiment of the present application;
  • FIG. 4 is a schematic diagram of a third flow of a medical image region filtering method provided by an embodiment of the present application;
  • FIG. 5 is a schematic diagram of an application scenario provided by an embodiment of the present application;
  • FIG. 6 is a schematic diagram of a first filtered lesion area provided by an embodiment of the present application;
  • FIG. 7 is a schematic diagram of a second filtered lesion area provided by an embodiment of the present application;
  • FIG. 8 is a schematic diagram of a third filtered lesion area provided by an embodiment of the present application;
  • FIG. 9 is a schematic diagram of a process provided by an embodiment of the present application;
  • FIG. 10 is a schematic diagram of a first structure of a medical image region filtering device provided by an embodiment of the present application;
  • FIG. 11 is a schematic diagram of a second structure of a medical image region filtering device provided by an embodiment of the present application;
  • FIG. 12 is a schematic diagram of a third structure of a medical image region filtering device provided by an embodiment of the present application;
  • FIG. 13 is a schematic structural diagram of a network device provided by an embodiment of the present application.
  • the embodiments of the present application provide a method, device, and storage medium for filtering medical image regions.
  • the embodiment of the present application provides a medical image area filtering method.
  • the medical image region filtering method may be executed by the medical image region filtering device provided in the embodiments of this application, or by a network device integrating that device, where the device can be implemented in hardware or software.
  • the network device can be a smart phone, a tablet computer, a palmtop computer, a notebook computer, or a desktop computer.
  • FIG. 1 is a schematic diagram of an application scenario of a medical image region filtering method provided by an embodiment of the present application.
  • the network device can obtain a medical image of a target part of biological tissue, segment part tissue regions of multiple tissue types from the target-part medical image, select a reserved area to be retained from those regions based on the shooting position type of the image, obtain the positional relationship between the reserved area and the predicted lesion area in the image, and filter the predicted lesion area based on that positional relationship to obtain the target lesion area.
  • FIG. 2 is a schematic diagram of the first flow of the medical image region filtering method provided by an embodiment of the present application.
  • the specific process of the medical image region filtering method provided in the embodiment of the application may be as follows:
  • the biological tissue may be a tissue part of a living object.
  • for example, the biological tissue can be muscle tissue, subcutaneous tissue, and so on.
  • biological tissue may also be tissue that makes up a body part of a living object, such as breast tissue or eye tissue.
  • it may likewise be tissue within such a part, such as the pupil tissue in the eye or the mammary gland tissue in the breast.
  • living objects are objects that can respond to external stimuli and have life forms, such as humans, cats, and dogs.
  • the target-part medical image is an image obtained by imaging the biological tissue, from which the pathological condition of the tissue can be judged.
  • for example, the target-part medical image may be a mammography (molybdenum-target) image.
  • mammography images are produced by mammography X-ray machines and are used in breast examinations; mammography is currently the first-choice and the simplest, most reliable non-invasive method for diagnosing breast diseases.
  • the mammography image can be acquired by a mammography X-ray machine, obtained locally from the network device, or downloaded over the Internet.
  • the tissue type refers to the types of the different tissues included in the target-part medical image.
  • for example, when the target-part medical image is a mammography image,
  • the breast part can be regarded as one tissue type,
  • and the muscle part in the mammography image can be regarded as another tissue type, and so on.
  • thus the tissue regions segmented from the mammography image can include a muscle tissue type and a breast tissue type, and so on.
  • without such segmentation, non-target lesion areas such as lymph nodes and the nipple may be recognized as the target lesion area, resulting in misjudgment.
  • after the regions of the muscle tissue type and the breast tissue type are segmented from the mammography image, lymph nodes and nipples misjudged as the target lesion area can be removed, thereby improving the accuracy of identifying the target lesion area.
  • a network model may be used to segment the image.
  • the step of "segmenting multiple tissue types from the medical image of the target part" may include:
  • based on a region segmentation network model, tissue regions of multiple tissue types are segmented from the target-part medical image, where the region segmentation network model is trained on a plurality of sample region images.
  • the region segmentation network model is a deep learning segmentation network model.
  • the region segmentation network model may be a fully convolutional network (FCN) model.
  • the fully convolutional network model can classify images at the pixel level, thereby solving the problem of semantic image segmentation.
  • the fully convolutional network model can accept input images of any size; a deconvolution sub-network up-samples the feature map of the last convolutional layer to restore it to the size of the input image, so that a prediction can be generated for each pixel while preserving the spatial information of the original input, after which pixel-by-pixel classification is performed on the up-sampled feature map.
  • the fully convolutional network model is equivalent to replacing the last fully connected layers of a convolutional neural network with convolutional layers, and its output is an already-labeled image.
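  • The up-sampling and per-pixel classification described above can be illustrated with a minimal numpy sketch; this is not the patent's actual network: the coarse score map, the sizes, and the nearest-neighbour up-sampling are illustrative stand-ins for a learned deconvolution layer.

```python
import numpy as np

def upsample_and_classify(score_map, out_h, out_w):
    """Nearest-neighbour up-sampling of a coarse per-class score map back to
    the input resolution, followed by per-pixel argmax classification.
    score_map: (C, h, w) array of class scores from the last conv layer."""
    c, h, w = score_map.shape
    rows = np.arange(out_h) * h // out_h          # source row for each output row
    cols = np.arange(out_w) * w // out_w          # source col for each output col
    up = score_map[:, rows[:, None], cols[None, :]]  # (C, out_h, out_w)
    return up.argmax(axis=0)                         # per-pixel class labels

# A 2-class score map at low resolution, restored to 8x8 and classified.
coarse = np.zeros((2, 2, 2))
coarse[0, :, :1] = 1.0   # class 0 scores high on the left half
coarse[1, :, 1:] = 1.0   # class 1 scores high on the right half
labels = upsample_and_classify(coarse, 8, 8)
```

In a real FCN the up-sampling weights are learned (transposed convolution), but the size restoration and pixel-wise argmax are the same idea.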
  • the region segmentation network model can also be replaced by a similar network model (such as U-Net network model, etc.).
  • the U-Net network model is an image segmentation network model applied in the medical field.
  • when processing targets with larger receptive fields, the U-Net network model can freely deepen its structure according to the selected data set, and it fuses shallow features by concatenation.
  • the step of "segmenting multiple tissue types from the target region medical image based on the region segmentation network model" may include:
  • convolution is performed on the image to extract features, the features are up-sampled to restore the feature map size, and the restored feature maps are classified to obtain the tissue regions of multiple tissue types.
  • the region segmentation network model may include a convolution sub-network and a deconvolution sub-network.
  • the convolution sub-network may include a convolution layer and a pooling layer.
  • the convolution layer is composed of several convolution units and performs convolution operations.
  • the purpose of the convolution operation is to extract different features of the input.
  • the first convolutional layer may extract only low-level features such as edges, lines, and corners; deeper networks can iteratively extract more complex features from these low-level features.
  • the pooling layer compresses the input feature map: on the one hand it makes the feature map smaller and reduces the computational complexity of the network; on the other hand it compresses features and extracts the main ones.
  • the deconvolution sub-network may include a deconvolution layer for performing deconvolution operations.
  • Deconvolution is also called transposed convolution.
  • the forward propagation process of the convolutional layer corresponds to the back propagation process of the deconvolution sub-network, and vice versa; therefore, the feature map size can be restored through the deconvolution sub-network.
  • for example, the mammography image can be input into the fully convolutional network model and convolved by the convolution sub-network containing multiple convolutional layers to obtain image features; the obtained features are then up-sampled through the deconvolution layer in the deconvolution sub-network to restore the feature map size, and the restored feature maps are classified to obtain the tissue regions of the muscle tissue type and the breast tissue type.
  • the medical image region filtering method may further include a training step of a region segmentation network model.
  • for example, the initialization weights of the fully convolutional network model can be obtained by training on the PASCAL VOC segmentation data set; transfer learning is then performed with the public mammography data set DDSM, and again with mammography data of 3000 cases from domestic hospitals annotated with muscle tissue type and breast tissue type.
  • for example, the batch size during training can be 4, the learning rate can be 0.00001, the maximum number of iterations can be 20000, and so on.
  • the trained region segmentation network model can then be used to segment tissue regions of multiple tissue types.
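  • The hyperparameters quoted above could be collected in a configuration fragment such as the following; the key names and the pretraining label are illustrative, not from the patent.

```python
# Hypothetical training configuration collecting the hyperparameters given in
# the text; key names and the "pascal_voc_pretrained" label are illustrative.
train_config = {
    "batch_size": 4,
    "learning_rate": 1e-5,       # 0.00001 in the text
    "max_iterations": 20000,
    "init_weights": "pascal_voc_pretrained",  # then DDSM, then hospital data
}
```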
  • PASCAL VOC: pattern analysis, statistical modelling and computational learning visual object classes.
  • the Digital Database for Screening Mammography (DDSM) database is a database established by medical institutions to store breast cancer images.
  • the DDSM database stores case types such as malignant, normal, and benign.
  • many studies on breast cancer are based on the DDSM database.
  • transfer learning transfers the parameters of a trained model to a new model to help train the new model.
  • through transfer learning, learned model parameters can be shared with the new model, thereby speeding up and optimizing its learning, instead of learning from scratch as most networks do.
  • here, network model training uses parameter transfer: a model trained on task A is used to initialize the model parameters of task B, so that task B converges faster in learning and training.
  • the shooting position types of the target-part medical image are the different view types produced by different shooting positions when the image is taken.
  • for example, the shooting position types can include the CC view (craniocaudal or axial view, with the X-ray beam projected from top to bottom) and the MLO view (lateral oblique view, which can be divided into the mediolateral oblique and lateromedial oblique positions).
  • in the mediolateral oblique position, the film is placed at the outer lower part of the breast and the X-ray beam is projected from the inner upper part of the breast outward and downward at 45 degrees; the lateromedial oblique position is the opposite, and so on.
  • the reserved areas are the different part tissue regions, among the multiple tissue types, that need to be retained for target-part medical images of different shooting position types; for different shooting position types, different tissue types can be selected and retained according to the actual situation, to improve the accuracy of region filtering.
  • for example, suppose the shooting position types include the CC view and the MLO view,
  • and the part tissue regions of multiple tissue types include the muscle tissue type and the breast tissue type.
  • when the mammography image is in the CC view,
  • only the part tissue region of the breast tissue type is retained;
  • when the mammography image is in the MLO view, the part tissue regions of both the muscle tissue type and the breast tissue type are retained.
  • tissue regions of different tissue types can also be retained for different shooting position types according to actual conditions; for example, the muscle and breast tissue regions can both be retained for the CC view as well as the MLO view, and so on.
  • the step "based on the shooting position type of the target-part medical image, select the reserved area to be retained from the tissue regions of the multiple tissue types" can include:
  • a mapping relationship set is acquired, which includes preset mapping relationships between the shooting position types of the target-part medical image and the tissue types;
  • for example, when the mammography image is in the CC view, the tissue region corresponding to the breast tissue type is retained; when the mammography image is in the MLO view, the tissue regions corresponding to the muscle tissue type and the breast tissue type are retained; according to the mapping relationship, the reserved area to be retained is selected from the multiple tissue types.
  • the mapping relationship can also be adjusted according to actual conditions; for example, when the mammography image is in the CC view, the tissue regions corresponding to both the muscle tissue type and the breast tissue type may be retained, and likewise for the MLO view, and so on.
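  • The mapping relationship set described above can be sketched as a simple lookup; the tissue-type names and the helper function are hypothetical.

```python
# Hypothetical mapping from shooting-position type to the tissue types whose
# regions are retained; the tissue-type names are illustrative.
RETENTION_MAP = {
    "CC":  {"breast"},
    "MLO": {"breast", "muscle"},
}

def select_reserved_types(position_type, segmented_types):
    """Keep only those segmented tissue types that the mapping relationship
    associates with the given shooting-position type."""
    wanted = RETENTION_MAP.get(position_type, set())
    return set(segmented_types) & wanted
```

Adjusting the mapping for other conventions is then a one-line change to the dictionary.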
  • the positional relationship is the relative position of the predicted lesion area with respect to the reserved area, and it may be defined according to actual conditions.
  • the positional relationship between the reserved area and the predicted lesion area may include: predicting that the lesion area is all located in the reserved area, predicting that part of the lesion area is located in the reserved area, predicting that the lesion area is not located in the reserved area, and so on.
  • the positional relationship between the reserved area and the predicted lesion area may also be that the predicted lesion area is at the upper left, upper right, etc. of the reserved area.
  • for example, the reserved area and the target-part medical image containing the predicted lesion area can be superimposed to obtain the positional relationship.
  • specifically, the positions of the location points of the reserved area can be determined in the target-part medical image containing the predicted lesion area (for example, the two vertices on a diagonal of the reserved area can be taken as its location points), and the reserved area and the image are superimposed to obtain the positional relationship between the reserved area and the predicted lesion area.
  • the coordinate information of the reserved area and the predicted lesion area can also be obtained to obtain the positional relationship between the reserved area and the predicted lesion area.
  • the coordinate information of the predicted lesion area and the coordinate information of the reserved area can be compared to obtain the position information of the predicted lesion area in the reserved area, and so on.
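  • A coordinate-based comparison like the one just described might look as follows; the bounding-box representation and the three relation names are assumptions, not the patent's notation.

```python
def position_relation(lesion_box, reserved_box):
    """Classify a predicted-lesion bounding box against a reserved-area
    bounding box as 'inside', 'partial', or 'outside'.
    Boxes are (x1, y1, x2, y2) in the same image coordinates (illustrative)."""
    lx1, ly1, lx2, ly2 = lesion_box
    rx1, ry1, rx2, ry2 = reserved_box
    # intersection rectangle
    ix1, iy1 = max(lx1, rx1), max(ly1, ry1)
    ix2, iy2 = min(lx2, rx2), min(ly2, ry2)
    if ix1 >= ix2 or iy1 >= iy2:
        return "outside"        # no overlap at all
    if lx1 >= rx1 and ly1 >= ry1 and lx2 <= rx2 and ly2 <= ry2:
        return "inside"         # lesion fully contained in the reserved area
    return "partial"            # overlaps but not fully contained
```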
  • the predicted lesion area is the area where the lesion may occur detected in the medical image of the target part.
  • the predicted lesion area includes an area that has been lesioned, and may also include an unlesioned area that is misjudged as a lesion area, and so on.
  • the predicted lesion area can be obtained by manually annotating the target-part medical image in advance, or by detecting the target-part medical image with a network model.
  • the specific process can refer to Figure 3 below.
  • the predicted lesion area is a suspected lesion area that has been located in the medical image of the target part.
  • the medical image of the target part is a mammography image
  • the predicted lesion area can be a suspected malignant mass located in the mammography image.
  • the target lesion area is the predicted lesion area after filtering.
  • the target lesion area may be a malignant mass.
  • otherwise, non-target lesion areas such as lymph nodes and nipples would be recognized as the target lesion area, resulting in misjudgment.
  • since the reserved area and the predicted lesion area are obtained by detecting the same target-part medical image, the positional relationship between them can be obtained, so that the predicted lesion area can be filtered against the reserved area to obtain the target lesion area, thereby improving the accuracy of medical image region filtering.
  • the predicted lesion area in the medical image of the target part is filtered based on the position relationship to obtain the target lesion area.
  • for example, when the reserved area includes the part tissue regions of the muscle tissue type and the breast tissue type,
  • the positional relationship between the reserved area and the predicted lesion area can be obtained, and according to this positional relationship,
  • predicted lesion areas falling in the muscle region or the nipple region of the reserved area are filtered out, while the remaining predicted lesion areas are recognized as the target lesion area.
  • the predicted lesion area in the preset location area in the reserved area may be filtered according to the positional relationship between the reserved area and the predicted lesion area. For example, according to the actual situation, the predicted lesion area included in the preset size area in the upper left corner of the reserved area can be filtered, and so on.
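  • One way to realize this filtering, assuming the reserved area is given as a binary mask and the predicted lesions as boxes (both representations and the overlap threshold are assumptions), is:

```python
import numpy as np

def filter_lesions(reserved_mask, lesion_boxes, min_overlap=1.0):
    """Keep a predicted lesion only if the fraction of its box covered by the
    reserved-area mask is at least min_overlap (threshold is illustrative).
    reserved_mask: 2-D 0/1 array; boxes are (x1, y1, x2, y2) pixel coords."""
    kept = []
    for (x1, y1, x2, y2) in lesion_boxes:
        patch = reserved_mask[y1:y2, x1:x2]
        if patch.size and patch.mean() >= min_overlap:
            kept.append((x1, y1, x2, y2))
    return kept

reserved = np.zeros((10, 10), dtype=np.uint8)
reserved[:, :5] = 1                      # e.g. the retained breast region
kept = filter_lesions(reserved, [(0, 0, 4, 4), (6, 0, 9, 4)])
```

Lowering `min_overlap` would also keep lesions that only partially fall in the reserved area, matching the "partial" case discussed above.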
  • the predicted lesion area may be obtained by acquiring an already marked image, or may be obtained in the following manner, and the process of obtaining the predicted lesion area may be as follows:
  • the candidate recognition area is an area that is likely to include the predicted lesion area.
  • the candidate recognition area is an area that may include a malignant mass. Segmenting the candidate recognition area can improve the efficiency of area filtering.
  • the accuracy of region filtering can be improved.
  • the step of "segmenting multiple candidate recognition regions from the medical image of the target part” may include:
  • a sub-part image is extracted from the target-part medical image, and a plurality of candidate recognition regions are segmented from the sub-part image.
  • the sub-part image may be an image of a certain part cropped from the target-part medical image.
  • the medical image of the target part is a mammogram image
  • the sub-part image may be an image of the region where the breast is located.
  • the image of the breast area can be extracted from the mammography target image, and multiple candidate recognition areas can be segmented from the image of the breast area.
  • the accuracy of the area filtering can be improved.
  • the step of "extracting a sub-part image from the medical image of the target part” may include:
  • Preprocessing is the processing before the feature extraction, segmentation and matching of the input image.
  • the main purpose of preprocessing is to eliminate irrelevant information in the image, restore useful real information, enhance the detectability of relevant information and simplify data to the greatest extent, thereby improving the reliability of feature extraction, image segmentation, matching and recognition.
  • preprocessing the image can reduce the noise in the image, improve the homogeneity of regions and the robustness of subsequent processing, and yields the preprocessed sub-part image.
  • gray-scale stretching is a gray-scale transformation method, which improves the dynamic range of gray-scale levels during image processing through a piecewise linear transformation function.
  • the image histogram is an image displayed by image data in the form of a distribution curve with dark left and bright right.
  • image histograms can be computed efficiently and have many advantages, such as invariance to image translation, rotation, and scaling; they are used in many areas of image processing, such as threshold segmentation of gray-scale images, color-based image retrieval, and image classification.
  • histogram equalization is a method in the field of image processing that uses image histograms to adjust contrast.
  • the histogram equalization effectively expands the commonly used brightness, so that the brightness can be better distributed on the histogram, which is used to enhance the local contrast without affecting the overall contrast.
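  • Histogram equalization as described can be sketched in numpy; this is an integer-arithmetic variant and assumes an 8-bit, non-constant image.

```python
import numpy as np

def equalize_hist(img):
    """Histogram equalization of an 8-bit image via the cumulative
    distribution function (assumes a non-constant image)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]            # first non-zero cdf value
    # map each gray level so output brightness is spread over 0..255
    lut = np.clip(cdf - cdf_min, 0, None) * 255 // (img.size - cdf_min)
    return lut.astype(np.uint8)[img]

ramp = np.arange(256, dtype=np.uint8).reshape(16, 16)
eq = equalize_hist(ramp)
```

An image whose histogram is already flat, like the ramp above, is left unchanged, which is the expected fixed point of equalization.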
  • the filtering operation can remove the noise in the image and improve the regional homogeneity.
  • the gray scale range of the mammography image can be stretched to 0-255 by linear stretching.
  • the robustness of subsequent processing can be improved by gray-scale stretching of the mammography target image.
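  • A minimal linear gray-scale stretch to the 0-255 range, as mentioned above, could be:

```python
import numpy as np

def stretch_to_0_255(img):
    """Linear gray-scale stretch of an image to the full 0-255 range."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                          # constant image: nothing to stretch
        return np.zeros_like(img, dtype=np.uint8)
    return ((img - lo) / (hi - lo) * 255).astype(np.uint8)

stretched = stretch_to_0_255(np.array([[10, 20], [30, 40]]))
```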
  • the morphological opening operation (erosion followed by dilation) can be applied to the stretched mammography image to remove fine tissue and noise, separate objects at thin connections, and smooth the boundaries of larger objects without significantly changing their area.
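  • The opening operation can be sketched for binary masks in plain numpy; the 3x3 square kernel is an illustrative choice, and real pipelines would typically use a library routine.

```python
import numpy as np

def binary_erode(mask, k=3):
    """k x k binary erosion: a pixel stays 1 only if its whole window is 1."""
    p = k // 2
    padded = np.pad(mask, p, mode="constant")
    out = np.ones_like(mask)
    for dy in range(k):
        for dx in range(k):
            out &= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def binary_dilate(mask, k=3):
    """k x k binary dilation: a pixel becomes 1 if any window pixel is 1."""
    p = k // 2
    padded = np.pad(mask, p, mode="constant")
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def binary_open(mask, k=3):
    """Opening = erosion then dilation; removes specks smaller than the kernel
    while leaving larger objects essentially unchanged."""
    return binary_dilate(binary_erode(mask, k), k)

speckled = np.zeros((7, 7), dtype=int)
speckled[1:6, 1:6] = 1        # a solid 5x5 object
speckled[0, 0] = 1            # an isolated noise pixel
opened = binary_open(speckled)
```

The isolated pixel disappears while the 5x5 block survives, which is exactly the denoising behaviour the text attributes to the opening operation.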
  • the Otsu segmentation method (Otsu's method) is used for segmentation, that is, the gray image is converted into a binary image.
  • the Otsu segmentation method assumes, based on a bimodal histogram, that the image contains two classes of pixels (foreground and background), and computes the optimal threshold separating the two classes so that their intra-class variance is minimal; since the sum of pairwise squared distances is constant, this is equivalent to maximizing their inter-class variance.
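  • Otsu's method can be implemented directly from the gray-level histogram; the following sketch returns the threshold that maximizes the between-class variance.

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method on an 8-bit gray image: choose the threshold that
    maximizes the between-class variance (equivalently, minimizes the
    within-class variance) of the two pixel classes."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    sum_all = float(np.dot(np.arange(256), hist))
    best_t, best_var = 0, -1.0
    w0 = 0.0    # cumulative background weight
    sum0 = 0.0  # cumulative background gray-level sum
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue
        m0 = sum0 / w0                          # background mean
        m1 = (sum_all - sum0) / (total - w0)    # foreground mean
        var_between = w0 * (total - w0) * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t  # pixels with value > best_t count as foreground

bimodal = np.array([10] * 50 + [200] * 50, dtype=np.uint8).reshape(10, 10)
t_opt = otsu_threshold(bimodal)
```

Binarizing with `img > t_opt` then yields the two-class image described in the text.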
  • the image of the breast area can be obtained.
  • for example, when the initial sub-part image is the image of the region where the breast is located,
  • the image histogram of that region can be obtained and then equalized, yielding the equalized image of the breast region.
  • bilateral filtering can then be used to filter the equalized image of the breast region to obtain the final sub-part image.
  • bilateral filtering is a non-linear filtering method that combines the spatial proximity of the image with the similarity of pixel values; by considering both spatial information and gray-scale similarity, it achieves edge-preserving denoising without destroying segmentation edges in the image.
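  • A brute-force sketch of bilateral filtering on a gray-scale image follows; the window radius and the two sigmas are illustrative parameters, and production code would use an optimized library implementation.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=20.0):
    """Brute-force bilateral filter: each output pixel is a weighted mean of
    its neighbours, weighted by both spatial distance (sigma_s) and gray-value
    similarity (sigma_r), so smoothing stops at strong edges."""
    img = img.astype(np.float64)
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    weights = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy:radius + dy + h,
                          radius + dx:radius + dx + w]
            wgt = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2)
                         - (shifted - img) ** 2 / (2 * sigma_r ** 2))
            out += wgt * shifted
            weights += wgt
    return out / weights

step = np.zeros((8, 8))
step[:, 4:] = 100.0            # a sharp vertical edge
smoothed = bilateral_filter(step)
```

On the step image the flat regions stay flat and the edge stays sharp, since neighbours across the edge receive a near-zero range weight.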
  • multiple candidate recognition regions can be segmented from the sub-part image, thereby improving the accuracy of region filtering.
  • the step of "segmenting multiple candidate recognition regions from the sub-part image" may include:
  • a plurality of candidate recognition regions are selected from the plurality of segmented images after the morphological operation.
  • image segmentation is the technique and process of dividing an image into a number of specific regions with unique properties and proposing objects of interest. From a mathematical point of view, image segmentation is the process of dividing a digital image into disjoint areas. The process of image segmentation is also a marking process, that is, assigning the same numbers to the images belonging to the same area. In order to make it easier to train and classify images, the sub-part images can be segmented, and candidate recognition regions can be selected from multiple regions obtained by segmentation.
  • the genetic algorithm is a computational model that simulates the biological evolution process of natural selection and the genetic mechanisms of Darwin's theory of biological evolution, and is a method of searching for the optimal solution by simulating the natural evolutionary process.
  • after the first-generation population is generated, according to the principle of survival of the fittest, successive generations evolve to produce better and better approximate solutions.
  • in each generation, individuals are selected according to their fitness in the problem domain, and with the help of genetic operators borrowed from natural genetics, crossover and mutation are combined to generate a population representing a new solution set.
  • a two-dimensional wavelet transform can be performed on the sub-part image first to reduce the dimensionality.
  • the image can be segmented based on the image histogram of the image.
  • the genetic algorithm can realize the mapping from phenotype to genotype through binary coding.
  • the length of the binary coding sequence can be the number of gray levels of the image, so that a bit value of 0 indicates that the corresponding gray level is not a segmentation threshold, and a bit value of 1 indicates that the level is a segmentation threshold.
  • the genetic algorithm fitness function is the method used to measure the quality of chromosomes; here the fitness function takes maximum between-class variance and minimum within-class variance as its criterion.
  • the three processes of selection, crossover and mutation are iterated until convergence; for example, the initial population size can be 30, the number of iterations can be 40, the selection rate can be 10%, the crossover rate can be 80%, the mutation rate can be 10%, and so on.
  • the segmentation threshold is then output, and the sub-part image is segmented according to the segmentation threshold to obtain multiple segmented images.
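The genetic-algorithm thresholding above can be sketched as follows. This is a deliberately simplified illustration, not the patent's implementation: the chromosome here encodes a single candidate threshold rather than a full gray-level bit string, while reusing the stated population size of 30, 40 iterations, 80% crossover rate and 10% mutation rate; the fitness is the Otsu-style between-class variance.

```python
import random

def between_class_variance(histogram, t):
    """Fitness: between-class variance when thresholding at gray level t."""
    total = sum(histogram)
    w0 = sum(histogram[:t + 1])
    w1 = total - w0
    if w0 == 0 or w1 == 0:
        return 0.0
    sum0 = sum(g * h for g, h in enumerate(histogram[:t + 1]))
    sum_all = sum(g * h for g, h in enumerate(histogram))
    mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
    return w0 * w1 * (mu0 - mu1) ** 2

def ga_threshold(histogram, pop_size=30, generations=40,
                 crossover_rate=0.8, mutation_rate=0.1, seed=0):
    """Binary-coded GA: each chromosome is one candidate threshold."""
    rng = random.Random(seed)
    levels = len(histogram)
    bits = max(1, (levels - 1).bit_length())
    pop = [rng.randrange(levels) for _ in range(pop_size)]
    for _ in range(generations):
        # selection: keep the fitter half (survival of the fittest)
        pop.sort(key=lambda t: between_class_variance(histogram, t),
                 reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = a
            if rng.random() < crossover_rate:     # single-point crossover
                point = rng.randrange(bits)
                mask = (1 << point) - 1
                child = (a & ~mask) | (b & mask)
            if rng.random() < mutation_rate:      # flip one random bit
                child ^= 1 << rng.randrange(bits)
            children.append(child % levels)
        pop = parents + children
    return max(pop, key=lambda t: between_class_variance(histogram, t))

hist = [10, 40, 10, 0, 0, 10, 40, 10]
print(ga_threshold(hist))
```

On this bimodal histogram the GA converges to a threshold between the two peaks, matching the exhaustive Otsu search.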
  • the segmented image can be eroded and then dilated (a morphological opening) to remove fine tissue and noise from the image, separate objects joined at delicate points, smooth the boundaries of larger objects without significantly changing their areas, disconnect glands, and so on, to facilitate subsequent region extraction.
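The erosion-then-dilation step can be illustrated on a small binary image; the sketch below is a generic pure-Python illustration with a 3x3 structuring element, not the patent's implementation:

```python
def erode(img, k=1):
    """Binary erosion: a pixel stays 1 only if its whole (2k+1)x(2k+1)
    neighbourhood is 1 -- this removes fine tissue and noise."""
    h, w = len(img), len(img[0])
    return [[1 if all(img[y][x]
                      for y in range(max(0, i - k), min(h, i + k + 1))
                      for x in range(max(0, j - k), min(w, j + k + 1)))
             else 0
             for j in range(w)] for i in range(h)]

def dilate(img, k=1):
    """Binary dilation: a pixel becomes 1 if any neighbour is 1 --
    this restores the area of surviving objects."""
    h, w = len(img), len(img[0])
    return [[1 if any(img[y][x]
                      for y in range(max(0, i - k), min(h, i + k + 1))
                      for x in range(max(0, j - k), min(w, j + k + 1)))
             else 0
             for j in range(w)] for i in range(h)]

def opening(img, k=1):
    """Morphological opening = erosion followed by dilation."""
    return dilate(erode(img, k), k)

# A 3x3 blob plus an isolated noise pixel: opening keeps the blob
# at its original area and removes the noise pixel.
img = [[0, 0, 0, 0, 0, 0],
       [0, 1, 1, 1, 0, 0],
       [0, 1, 1, 1, 0, 1],
       [0, 1, 1, 1, 0, 0],
       [0, 0, 0, 0, 0, 0]]
print(opening(img))
```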
  • multiple candidate recognition regions are then selected from the multiple segmented images after the opening operation. For example, regions with higher gray levels can be extracted from the opened segmented images, e.g. the five regions with the highest gray levels (Top-5), and then, based on the extracted regions, the 10 regions with the largest areas in each mammography target image can be selected as candidate recognition regions.
  • the area classification network model is a network model that can classify candidate recognition areas and determine whether the candidate recognition areas include the predicted lesion area.
  • the regional classification network model can be the Inception V3 network model.
  • Inception V3 is a type of convolutional neural network.
  • a convolutional neural network is a feed-forward neural network whose artificial neurons respond to surrounding units within a receptive field, making it well suited to large-scale image processing.
  • Convolutional neural networks include convolutional layers and pooling layers. Inception V3 optimizes the network by increasing the width of the single-layer convolutional layer, that is, using different scale convolution kernels on the single-layer convolutional layer.
  • the lesion prediction probability is the probability of predicting that the candidate recognition area includes the predicted lesion area.
  • the candidate recognition area can be classified by a network model, and the lesion prediction probability corresponding to the candidate recognition area can be obtained, and the accuracy of area filtering can be improved.
  • the step of "classifying the candidate recognition area based on the area classification network model to obtain the lesion prediction probability corresponding to the candidate recognition area" may include:
  • the feature of the region is classified based on the fully connected layer, and the lesion prediction probability corresponding to the candidate recognition region is obtained.
  • the regional classification network model is a network model that identifies the lesion prediction probability corresponding to the candidate recognition area, for example, it can be a deep learning convolutional neural network GoogLeNet (Inception V3), a fully convolutional neural network, a Resnet network model, and so on.
  • the regional classification network model can include a convolutional layer and a fully connected layer.
  • the Inception V3 network model approximates the optimal local sparse structure through dense components, making more efficient use of computing resources and extracting more features for the same amount of computation, thereby improving training results.
  • the Inception network model has two characteristics: one is using 1x1 convolutions to increase and decrease dimensionality; the other is performing convolution at multiple scales simultaneously and then aggregating the results.
  • the convolutional layer is composed of several convolution units, and the parameters of each convolution unit are optimized through the back propagation algorithm.
  • the purpose of image convolution operation is to extract different features of the input image.
  • the first convolutional layer may only extract some low-level features such as edges, lines, and corners; deeper networks can iteratively extract more complex features from these low-level features.
  • the fully connected layer can integrate the category-discriminatory local information in the convolutional layer.
  • the output value of the last fully connected layer is passed to an output, which can be classified using softmax logistic regression.
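The softmax step that turns the last fully connected layer's output into class probabilities can be sketched as follows; this is an illustrative example with the two output categories used here (the logit values are made up), not the patent's implementation:

```python
import math

def softmax(logits):
    """Convert fully connected layer outputs into class probabilities."""
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Two output categories: [no lesion, lesion]; the lesion prediction
# probability is the second component.
probs = softmax([0.3, 1.8])
print(round(probs[1], 3))
```

A candidate recognition area whose lesion component exceeds the 0.5 cut-off mentioned later would be kept as a predicted lesion area.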
  • the candidate recognition area can be input into the network model, convolution processing is performed through the convolutional layer to obtain regional features, and the features are then classified through the fully connected layer to obtain the lesion prediction probability corresponding to the candidate recognition area.
  • the regional classification network model is the Inception V3 network model
  • convolution kernels of various sizes allow the network model to maintain the sparsity of the network structure while exploiting the high computing performance of dense matrices.
  • the medical image region filtering method may further include a training process of a region classification network model.
  • the medical image region filtering method may further include:
  • the positive sample area includes the predicted lesion area
  • the area classification network model is updated.
  • the positive sample area includes the predicted lesion area
  • the positive and negative sample area is the sample area in the medical image of the sample target part marked by the physician.
  • the positive sample may be the mammogram of the breast including the suspected malignant mass area marked by the physician.
  • the positive sample includes the entire mass area and a small background area surrounding it
  • the negative sample can be a mammogram of the obvious benign mass area and the background area marked by the physician.
  • data enhancement can allow limited data to generate more equivalent data, which can increase the number of samples and enhance the samples.
  • transfer learning is a new machine learning method that uses existing knowledge to solve problems in different but related fields. Transfer learning relaxes the two basic assumptions in traditional machine learning. The purpose is to transfer existing knowledge to solve learning problems in the target field where there is only a small amount of labeled sample data or even none.
  • data enhancement is performed on the positive and negative sample regions to obtain the enhanced positive and negative sample regions.
  • the data enhancement of the positive and negative sample areas mainly consists of flipping and cropping; there is no need to perform data enhancement in the color space, and multiple enhanced positive and negative sample areas are obtained.
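The flip-and-crop enhancement can be sketched on a toy grayscale patch; this is an illustrative example, not the patent's pipeline:

```python
def horizontal_flip(img):
    """Mirror each row -- one of the two enhancements used here."""
    return [row[::-1] for row in img]

def crop(img, top, left, height, width):
    """Take a sub-window -- the other enhancement. No color-space
    transforms are needed for grayscale mammogram patches."""
    return [row[left:left + width] for row in img[top:top + height]]

patch = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
augmented = [patch, horizontal_flip(patch), crop(patch, 0, 0, 2, 2)]
print(augmented[1])  # → [[3, 2, 1], [6, 5, 4], [9, 8, 7]]
```

Each original sample area thus yields several equivalent training samples, increasing the effective sample count.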
  • the number of output categories of the model can be set to 2.
  • the weight initialization of the model can first use the ImageNet data set (a standard computer vision data set), and then use the public data set DDSM. The public data set DDSM is a database established by medical institutions in the United States for storing breast cancer images.
  • the batch size can be 64, the initial learning rate can be 0.01, and the maximum number of iterations can be 100,000.
  • the trained regional classification network model is obtained.
  • the regional classification network model is updated to obtain the trained regional classification network model.
  • multiple candidate recognition regions can be segmented from the mammography target image, and the multiple candidate recognition regions can be input into the region classification network model for classification to obtain the lesion prediction probability corresponding to the candidate recognition region.
  • the predicted lesion area is screened out from the candidate recognition areas according to the lesion prediction probability. For example, a candidate recognition area with a lesion prediction probability greater than 0.5 can be determined as a predicted lesion area.
  • the method of non-maximum suppression can also be used to remove the overlap area of the predicted lesion area.
  • the overlap threshold can be set to 50%, which can reduce the false alarm rate and improve the accuracy of locating the predicted lesion area.
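Non-maximum suppression with the 50% overlap threshold can be sketched as follows; this is a generic illustration (the box coordinates and scores are made up), not the patent's implementation:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, overlap_threshold=0.5):
    """Keep the highest-scoring box, drop the boxes that overlap it by
    more than the threshold, then repeat on the remainder."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order
                 if iou(boxes[best], boxes[i]) <= overlap_threshold]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(non_max_suppression(boxes, scores))  # → [0, 2]
```

The second box overlaps the first by about 68% IoU and is suppressed; the distant third box survives.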
  • the medical image area filtering method can realize the recognition of the target lesion area from the mammography target image, and mark the target lesion area, for example, by a box mark.
  • the embodiment of the present application obtains the medical image of the target part of the biological tissue and segments part tissue areas of multiple tissue types from the medical image. Based on the shooting position type of the medical image, a reserved area to be retained is selected from the part tissue areas of the multiple tissue types; the positional relationship between the reserved area and the predicted lesion area in the medical image is obtained, and the predicted lesion area in the medical image is filtered based on the positional relationship to obtain the target lesion area.
  • the solution uses two neural networks in parallel to extract diverse image features, reducing manual workload and improving automation, accuracy, efficiency, and ease of deployment. Based on the shooting position type of the target part medical image, the area that needs to be preserved is retained from the part tissue areas of multiple tissue types, and the predicted lesion area corresponding to the medical image is filtered based on the reserved area. This can effectively filter out the lymph nodes and nipples that interfere with the judgment of the target lesion area, reducing misjudgment of the target lesion area and improving the accuracy of region filtering.
  • the medical image area filtering device is specifically integrated in a network device as an example for description.
  • the network device obtains an image of the mammography target.
  • as shown in FIG. 9, there are many ways for the network device to obtain mammography images: for example, they can be captured by a mammography machine, read from local storage, or downloaded over the Internet.
  • the network device segments part tissue areas of multiple tissue types from the mammography target image.
  • otherwise, non-target lesion areas such as lymph nodes and nipples may be recognized as the target lesion area, resulting in misjudgment. By segmenting part tissue areas of multiple tissue types from the mammography image, that is, segmenting the muscle tissue type and breast tissue type areas, the lymph nodes and nipples that could be misjudged as the target lesion area can be removed, thereby improving the accuracy of identifying the target lesion area.
  • the network device can segment multiple tissue types of part tissue regions from the mammography target image based on the region segmentation network model.
  • the region segmentation network model is trained from multiple sample part images.
  • the mammography target image can be convolved based on the convolution sub-network to obtain image features, the features can then be up-sampled based on the deconvolution sub-network to obtain feature images restored to the original size, and finally the restored feature images are classified to obtain part tissue areas of multiple tissue types.
  • specifically, the mammography target image is convolved based on the convolution sub-network to obtain image features, the features are up-sampled based on the deconvolution sub-network to obtain feature images of restored size, and the feature images are classified to obtain the tissue areas of multiple tissue types.
  • for example, the mammography target image can be input into the fully convolutional network model, convolution is performed through the convolution sub-network comprising multiple convolution layers to obtain image features, the deconvolution sub-network is then used to up-sample the obtained features and restore the feature image size, and the restored feature image is classified to obtain tissue areas of the muscle tissue type and the breast tissue type.
  • the medical image region filtering method may further include a training step of a region segmentation network model.
  • the region segmentation network model is a fully convolutional network model
  • the initialization weights of the fully convolutional network model can be obtained through training on the segmentation data set PASCAL VOC, after which the public medical mammography data set DDSM can be used for transfer learning, and then 3000 cases of mammography data from domestic hospitals, annotated with muscle tissue type and breast tissue type, can be used for further transfer learning.
  • the batch size during network model training can be 4, the learning rate can be 0.00001, the maximum number of iterations can be 20000, and so on.
  • the trained region segmentation network model can then be obtained, which can be used to segment part tissue areas of multiple tissue types.
  • the network device selects a reserved area that needs to be reserved from multiple tissue types based on the shooting position type of the mammogram image.
  • for mammography images of different shooting position types, the areas that need to be retained among the part tissue areas of the multiple tissue types are different.
  • for example, when the shooting position types include the CC position and the MLO position and the part tissue areas include the muscle tissue type and the breast tissue type: when the mammography target image is a CC-position image, only the breast tissue type area is reserved; when the mammography target image is an MLO-position image, both the muscle tissue type and breast tissue type areas are reserved.
  • the network device may obtain a set of mapping relationships, and then select the reserved area that needs to be retained from the multiple tissue types according to the mapping relationship. For example, when the mammography target image is a CC-position image, the tissue area corresponding to the breast tissue type is reserved; when the mammography target image is an MLO-position image, the tissue areas corresponding to the muscle tissue type and breast tissue type are reserved.
  • the network device segments multiple candidate recognition regions from the mammography target image.
  • a breast image can be extracted from a mammography target image, and multiple candidate recognition regions can be segmented from the breast image.
  • for example, the network device may perform gray-scale stretching on the mammography target image to obtain a stretched mammography target image, extract the initial breast image from the stretched image, perform histogram equalization on the image histogram of the initial breast image to obtain an equalized breast image, and filter the equalized breast image to obtain the breast image.
  • the gray scale range of the mammography target image can be stretched to 0-255 by linear stretching, and the robustness of subsequent processing can be improved by gray scale stretching of the mammography target image.
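The linear gray-scale stretch to 0-255 can be sketched as follows; this is an illustrative one-dimensional example (the input values are made up), not the patent's implementation:

```python
def gray_stretch(pixels, low=0, high=255):
    """Linearly map the image's gray range onto [low, high]."""
    mn, mx = min(pixels), max(pixels)
    if mx == mn:                     # flat image: nothing to stretch
        return [low] * len(pixels)
    scale = (high - low) / (mx - mn)
    return [round(low + (p - mn) * scale) for p in pixels]

# Raw mammography gray values in a narrow band stretched to 0-255
print(gray_stretch([100, 110, 120, 130]))  # → [0, 85, 170, 255]
```

Spreading the narrow raw range over the full 8-bit range makes the subsequent Otsu segmentation and histogram equalization more robust.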
  • the Otsu segmentation method (Otsu's method) is used for segmentation, that is, the gray image is converted into a binary image.
  • the breast image can be obtained.
  • the image histogram corresponding to the breast image can be obtained, and then the histogram equalization can be performed to obtain the equalized breast image.
  • bilateral filtering can be used to filter the equalized breast image to obtain a breast image.
  • the breast image can be segmented based on the genetic algorithm to obtain multiple segmented images, use the morphological open operation on the segmented images to obtain the segmented image after the operation, and select multiple candidates from the segmented images after multiple operations Identify the area.
  • a two-dimensional wavelet transform may be performed on the breast image first to reduce the dimensionality.
  • the genetic algorithm can realize the mapping from phenotype to genotype through binary coding.
  • the length of the binary coding sequence can be the number of gray levels of the image, so that a bit value of 0 indicates that the corresponding gray level is not a segmentation threshold, and a bit value of 1 indicates that the level is a segmentation threshold.
  • the genetic algorithm fitness function is the method used to measure the quality of chromosomes; here the fitness function takes maximum between-class variance and minimum within-class variance as its criterion.
  • the three processes of selection, crossover and mutation are iterated until convergence; for example, the initial population size can be 30, the number of iterations can be 40, the selection rate can be 10%, the crossover rate can be 80%, the mutation rate can be 10%, and so on.
  • the segmentation threshold is then output, and the breast image is segmented according to the segmentation threshold to obtain multiple segmented images.
  • multiple candidate recognition regions are then selected from the multiple segmented images after the opening operation. For example, regions with higher gray levels can be extracted from the opened segmented images, e.g. the five regions with the highest gray levels (Top-5), and then, based on the extracted regions, the 10 regions with the largest areas in each mammogram image can be selected as candidate recognition regions.
  • the network device classifies the candidate recognition area based on the area classification network model, and obtains the lesion prediction probability corresponding to the candidate recognition area.
  • the network device can perform convolution processing on the candidate recognition area based on the convolution layer to obtain the characteristics of the area, and then classify the characteristics of the area based on the fully connected layer to obtain the lesion prediction corresponding to the candidate recognition area Probability.
  • the region classification network model is the Inception V3 network model
  • for example, the candidate recognition region is input into the network model, convolution processing is performed through the convolution layers to obtain regional features, and the features are then classified through the fully connected layer to obtain the lesion prediction probability corresponding to the candidate recognition region.
  • the regional classification network model is the Inception V3 network model
  • using convolution kernels of various sizes, the 22-layer-deep neural network allows the network model to maintain the sparsity of the network structure while exploiting the high computing performance of dense matrices.
  • the medical image region filtering method may further include a training process of a region classification network model.
  • the model weights in the regional classification network model are updated to obtain the trained area classification network model, and finally the area classification network model is updated based on the trained model.
  • there are many ways to obtain the positive and negative sample areas: for example, hospital data can be used with experts hired to annotate it, or the positive and negative sample areas can be obtained locally or downloaded from the Internet, and so on. Data enhancement is then performed on the positive and negative sample areas to obtain the enhanced areas. For example, since the image containing the positive and negative sample areas is a mammogram, the data enhancement mainly consists of flipping and cropping; no data enhancement in the color space is needed, and multiple enhanced positive and negative sample areas are obtained.
  • the number of output categories of the model can be set to 2.
  • the weight initialization of the model can first use the ImageNet data set (computer vision standard data set), and then use the public data set DDSM.
  • the public data set DDSM is a database established by medical institutions in the United States to store breast cancer images.
  • the enhanced positive and negative sample regions are used for migration learning to update the weights in the region classification network model.
  • RMSprop (root mean square propagation) is an adaptive learning rate method based on the root mean square.
  • the batch size can be 64
  • the initial learning rate can be 0.01
  • the maximum number of iterations can be 100,000.
  • the trained regional classification network model is obtained.
  • the area classification network model is updated to obtain the area classification network model.
  • the network device screens out the predicted lesion area from the candidate identification areas according to the lesion prediction probability.
  • the network device can screen out the predicted lesion area from the candidate recognition area according to the lesion prediction probability. For example, a candidate recognition area with a lesion prediction probability greater than 0.5 can be determined as a predicted lesion area.
  • the non-maximum value suppression method can also be used to remove the overlap area of the predicted lesion area.
  • the overlap threshold can be set to 50%, which can reduce the false alarm rate and improve the accuracy of locating the predicted lesion area.
  • the network device acquires the positional relationship between the reserved area and the predicted lesion area in the mammography target image.
  • the reserved area and the medical image of the target part including the predicted lesion area can be overlapped by image overlap to obtain the positional relationship.
  • the location of the location point of the reserved area can be determined in the medical image of the target part including the predicted lesion area (for example, two vertices on a diagonal line of the reserved area can be determined as the location points of the reserved area), and the reserved area and The medical images of the target part including the predicted lesion area are overlapped to obtain the positional relationship between the reserved area and the predicted lesion area.
  • the coordinate information of the reserved area and the predicted lesion area may also be obtained to obtain the positional relationship between the reserved area and the predicted lesion area.
  • the coordinate information of the predicted lesion area and the coordinate information of the reserved area can be compared to obtain the position information of the predicted lesion area in the reserved area, and so on.
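The coordinate-comparison approach can be sketched as follows; the boxes, the center-point containment test, and the helper names are illustrative assumptions for the example, not the patent's exact method:

```python
def center(box):
    """Center point of a box given as (x1, y1, x2, y2)."""
    return ((box[0] + box[2]) / 2, (box[1] + box[3]) / 2)

def falls_inside(lesion_box, area_box):
    """True if the predicted lesion box's center lies inside the area box
    (assumed containment criterion for the sketch)."""
    cx, cy = center(lesion_box)
    return area_box[0] <= cx <= area_box[2] and area_box[1] <= cy <= area_box[3]

def filter_lesions(predicted, muscle_area, nipple_area):
    """Drop predicted lesion boxes located in the muscle or nipple area;
    the remainder are kept as target lesion areas."""
    return [box for box in predicted
            if not falls_inside(box, muscle_area)
            and not falls_inside(box, nipple_area)]

muscle = (0, 0, 40, 40)          # hypothetical muscle-area coordinates
nipple = (90, 45, 100, 55)       # hypothetical nipple-area coordinates
predicted = [(10, 10, 20, 20), (60, 60, 80, 80), (92, 48, 98, 54)]
print(filter_lesions(predicted, muscle, nipple))  # → [(60, 60, 80, 80)]
```

Comparing box coordinates directly avoids the cost of an explicit image-overlap step while yielding the same positional relationship.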
  • the network device filters the predicted lesion area in the mammography target image based on the position relationship to obtain the target lesion area.
  • since the reserved area and the predicted lesion area are obtained by detecting the same target part medical image, the positional relationship between the reserved area and the predicted lesion area in the medical image can be obtained, so that the predicted lesion area can be filtered against the reserved area to obtain the target lesion area, thereby improving the accuracy of medical image region filtering.
  • the predicted lesion area in the medical image of the target part is filtered based on the position relationship to obtain the target lesion area.
  • the reserved area includes muscle tissue type part tissue area and breast tissue type part tissue area
  • the positional relationship between the reserved area and the predicted lesion area can be obtained; according to the positional relationship, a predicted lesion area that falls into the muscle area or nipple area of the reserved area is filtered out, and otherwise the predicted lesion area is recognized as the target lesion area.
  • Figures 6 and 7 are MLO-position images, and Figure 8 is a CC-position image. The lesion area corresponding to the mammography target image that falls into a specific area of the reserved area is filtered to obtain the target lesion area. For example, for the muscle area and nipple area in the reserved area, when the predicted lesion area falls into the muscle area (as shown in Figure 6) or the nipple area (as shown in Figure 7), the predicted lesion area is filtered out; otherwise, the predicted lesion area is identified as the target lesion area (as shown in Figures 6 and 8).
  • in the embodiment of the present application, the network device obtains the medical image of the target part of the biological tissue, segments part tissue areas of multiple tissue types from the medical image, selects the reserved area to be retained from the part tissue areas of the multiple tissue types based on the shooting position type of the medical image, obtains the positional relationship between the reserved area and the predicted lesion area in the medical image, and filters the predicted lesion area in the medical image based on the positional relationship to obtain the target lesion area.
  • the solution uses two neural networks in parallel to extract diverse image features, reducing manual workload and improving automation, accuracy, efficiency, and ease of deployment. Based on the shooting position type of the target part medical image, the area that needs to be preserved is retained from the part tissue areas of multiple tissue types, and the predicted lesion area corresponding to the medical image is filtered based on the reserved area. This can effectively filter out the lymph nodes and nipples that interfere with the judgment of the target lesion area, reducing misjudgment of the target lesion area and improving the accuracy of region filtering.
  • an embodiment of the present application also provides a medical image area filtering device, which may be suitable for network equipment.
  • the medical image area filtering device may include: an acquisition module 101, a segmentation module 102, a retention module 103, a position relationship acquisition module 104, and a filtering module 105, as follows:
  • the obtaining module 101 is used to obtain a medical image of a target part of a biological tissue
  • the segmentation module 102 is configured to segment a plurality of tissue types of parts and tissue regions from the target part medical image;
  • the retention module 103 is configured to select a retention area that needs to be retained from the multiple tissue types based on the shooting position type of the medical image of the target part;
  • the position relationship obtaining module 104 is configured to obtain the position relationship between the reserved area and the predicted lesion area in the medical image of the target part;
  • the filtering module 105 is configured to filter the predicted lesion area in the medical image of the target part based on the position relationship to obtain the target lesion area.
  • the segmentation module 102 may be specifically used for:
  • the feature images after the restored size are classified to obtain the tissue regions of multiple tissue types.
  • the reservation module 103 may include a first acquisition sub-module 1031, a second acquisition sub-module 1032, and a reservation sub-module 1033, as follows:
  • the first acquisition sub-module 1031 is configured to acquire a mapping relationship set, the mapping relationship set including the mapping relationship between the preset shooting position type and the tissue type of the medical image of the target part;
  • the second acquisition submodule 1032 is configured to acquire the tissue type corresponding to the shooting location type according to the mapping relationship
  • the reservation sub-module 1033 is used to select the reserved area to be reserved from the tissue areas of the multiple tissue types.
  • the medical image region filtering device may further include a second segmentation module 106, a classification module 107, and a screening module 108, as follows:
  • the second segmentation module 106 is configured to segment multiple candidate recognition regions from the medical image of the target part
  • the classification module 107 is configured to classify the candidate recognition area based on the area classification network model to obtain the lesion prediction probability corresponding to the candidate recognition area;
  • the screening module 108 is configured to screen out the predicted lesion area from the candidate recognition area according to the lesion prediction probability, and obtain the predicted lesion area corresponding to the medical image of the target part.
  • the second segmentation module 106 may be specifically used for:
  • a plurality of candidate recognition regions are segmented from the sub-part image.
  • the classification module 107 may be specifically used to:
  • the feature of the region is classified based on the fully connected layer, and the lesion prediction probability corresponding to the candidate recognition region is obtained.
  • in the embodiment of the present application, the acquisition module 101 acquires the medical image of the target part of the biological tissue, the segmentation module 102 segments part tissue areas of multiple tissue types from the medical image, the retention module 103 selects the reserved area to be retained from the part tissue areas of the multiple tissue types based on the shooting position type of the medical image, the position relationship acquisition module 104 obtains the positional relationship between the reserved area and the predicted lesion area in the medical image, and the filtering module 105 filters the predicted lesion area in the medical image based on the positional relationship to obtain the target lesion area.
  • the solution uses the parallel connection of two neural networks to extract the diverse features of the image, reduces the manual workload, and improves the degree of automation, accuracy, efficiency, and ease of application promotion. Based on the shooting position type of the medical image of the target part, the areas that need to be preserved are retained from the part tissue areas of the multiple tissue types, and the predicted lesion areas corresponding to the medical image are filtered based on the reserved areas. This can effectively filter out the lymph nodes, nipple, and other structures that interfere with the judgment of the target lesion area, reducing misjudgment of the target lesion area and improving the accuracy of region filtering.
  • the embodiment of the present application also provides a network device, which may be a device such as a server or a terminal, which integrates any medical image area filtering device provided in the embodiment of the present application.
  • Figure 13 is a schematic structural diagram of a network device provided by an embodiment of the present application. Specifically:
  • the network device may include a processor 131 with one or more processing cores, a memory 132 with one or more computer-readable storage media, a power supply 133, an input unit 134, and other components.
  • the network device structure shown in FIG. 13 does not constitute a limitation on the network device, and may include more or fewer components than shown in the figure, or a combination of some components, or a different component arrangement. among them:
  • the processor 131 is the control center of the network device. It connects the various parts of the entire network device through various interfaces and lines, and performs the various functions of the network device and processes data by running or executing the software programs and/or modules stored in the memory 132 and calling the data stored in the memory 132, thereby monitoring the network device as a whole.
  • the processor 131 may include one or more processing cores; the processor 131 may integrate an application processor and a modem processor.
  • the application processor mainly handles the operating system, user interface, application programs, and so on.
  • the modem processor mainly handles wireless communication. It can be understood that the foregoing modem processor may alternatively not be integrated into the processor 131.
  • the memory 132 can be used to store software programs and modules.
  • the processor 131 executes various functional applications and data processing by running the software programs and modules stored in the memory 132.
  • the memory 132 may mainly include a program storage area and a data storage area.
  • the program storage area may store the operating system and the application programs required by at least one function (such as a sound playback function, an image playback function, etc.); the data storage area may store data created from the use of the network device, etc.
  • the memory 132 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage devices.
  • the memory 132 may further include a memory controller to provide the processor 131 with access to the memory 132.
  • the network device also includes a power supply 133 for supplying power to various components.
  • the power supply 133 may be logically connected to the processor 131 through a power management system, so that functions such as charging, discharging, and power consumption management can be managed through the power management system.
  • the power supply 133 may also include one or more DC or AC power supplies, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and any other components.
  • the network device may further include an input unit 134, which can be used to receive inputted digital or character information and generate keyboard, mouse, joystick, optical or trackball signal input related to user settings and function control.
  • the network device may also include a display unit, etc., which will not be repeated here.
  • specifically, in this embodiment, the processor 131 in the network device loads the executable files corresponding to the processes of one or more application programs into the memory 132 according to the following instructions, and the processor 131 runs the application programs stored in the memory 132 to realize various functions, as follows:
  • Obtain the medical image of the target part of the biological tissue, segment the part tissue areas of multiple tissue types from the medical image of the target part, select the reserved area to be retained from the part tissue areas of the multiple tissue types based on the shooting position type of the medical image, obtain the position relationship between the reserved area and the predicted lesion area in the medical image, and filter the predicted lesion area in the medical image based on the position relationship to obtain the target lesion area.
  • the processor 131 may also run an application program stored in the memory 132, thereby implementing the following functions:
  • Obtain the medical image of the target part of the biological tissue, segment the part tissue areas of multiple tissue types from the medical image of the target part, select the reserved area to be retained from the part tissue areas of the multiple tissue types based on the shooting position type of the medical image, obtain the position relationship between the reserved area and the predicted lesion area in the medical image, and filter the predicted lesion area in the medical image based on the position relationship to obtain the target lesion area.
  • in the embodiment of the present application, the medical image of the target part of the biological tissue is obtained, the part tissue areas of multiple tissue types are segmented from the medical image, the reserved area to be retained is selected from the part tissue areas of the multiple tissue types based on the shooting position type of the medical image, the position relationship between the reserved area and the predicted lesion area in the medical image is obtained, and the predicted lesion area in the medical image is filtered based on the position relationship to obtain the target lesion area.
  • the solution uses the parallel connection of two neural networks to extract the diverse features of the image, reduces the manual workload, and improves the degree of automation, accuracy, efficiency, and ease of application promotion. Based on the shooting position type of the medical image of the target part, the areas that need to be preserved are retained from the part tissue areas of the multiple tissue types, and the predicted lesion areas corresponding to the medical image are filtered based on the reserved areas. This can effectively filter out the lymph nodes, nipple, and other structures that interfere with the judgment of the target lesion area, reducing misjudgment of the target lesion area and improving the accuracy of region filtering.
  • All or part of the steps in the various methods of the foregoing embodiments may be completed by instructions, or by instructions controlling related hardware; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
  • an embodiment of the present application provides a storage medium in which multiple instructions are stored, and the instructions can be loaded by a processor to execute the steps in any medical image region filtering method provided in the embodiments of the present application.
  • the instruction can perform the following steps:
  • Obtain the medical image of the target part of the biological tissue, segment the part tissue areas of multiple tissue types from the medical image of the target part, select the reserved area to be retained from the part tissue areas of the multiple tissue types based on the shooting position type of the medical image, obtain the position relationship between the reserved area and the predicted lesion area in the medical image, and filter the predicted lesion area in the medical image based on the position relationship to obtain the target lesion area.
  • the storage medium may include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, etc.
  • Since the instructions stored in the storage medium can execute the steps of any medical image region filtering method provided in the embodiments of the present application, they can achieve the beneficial effects achievable by any such method; for details, refer to the preceding embodiments, which are not repeated here.


Abstract

A medical image region filtering method, apparatus, and storage medium. The method acquires a medical image of a target part of biological tissue (S201), segments part-tissue regions of multiple tissue types from the medical image of the target part (S202), selects, based on the shooting position type of the medical image of the target part, the reserved region to be retained from the part-tissue regions of the multiple tissue types (S203), obtains the positional relationship between the reserved region and the predicted lesion region in the medical image of the target part (S204), and filters the predicted lesion region in the medical image of the target part based on the positional relationship to obtain the target lesion region (S205).

Description

Medical image region filtering method, apparatus, and storage medium
This application claims priority to Chinese Patent Application No. 201910115522.9, entitled "Medical image region filtering method, apparatus, and storage medium", filed on February 14, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of image recognition, and in particular to a medical image region filtering method, apparatus, and storage medium.
Background
Medical images are used in the early diagnosis of cancer. Masses are an important local feature for judging whether living tissue is normal; filtering out suspected malignant masses provides physicians with a sound basis for judging whether a mass is benign or malignant, and is therefore an important basis for cancer diagnosis.
At present, the common way of filtering suspected malignant masses in medical images is mainly manual filtering, from which the patient's condition is then determined.
Summary
Embodiments of this application provide a medical image region filtering method, apparatus, and storage medium that can improve the accuracy of medical image region filtering.
An embodiment of this application provides a medical image region filtering method, including:
acquiring a medical image of a target part of biological tissue;
segmenting part-tissue regions of multiple tissue types from the medical image of the target part;
selecting, based on the shooting position type of the medical image of the target part, a reserved region to be retained from the part-tissue regions of the multiple tissue types;
obtaining the positional relationship between the reserved region and a predicted lesion region in the medical image of the target part;
filtering the predicted lesion region in the medical image of the target part based on the positional relationship to obtain a target lesion region.
Correspondingly, an embodiment of this application further provides a medical image region filtering apparatus, including:
an acquisition module, configured to acquire a medical image of a target part of biological tissue;
a segmentation module, configured to segment part-tissue regions of multiple tissue types from the medical image of the target part;
a retention module, configured to select, based on the shooting position type of the medical image of the target part, a reserved region to be retained from the part-tissue regions of the multiple tissue types;
a positional relationship acquisition module, configured to obtain the positional relationship between the reserved region and a predicted lesion region in the medical image of the target part;
a filtering module, configured to filter the predicted lesion region in the medical image of the target part based on the positional relationship to obtain a target lesion region.
Correspondingly, an embodiment of this application further provides a storage medium storing instructions which, when executed by a processor, implement the steps of any medical image region filtering method provided in the embodiments of this application.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of this application, and a person skilled in the art can derive other drawings from them without creative effort.
Figure 1 is a schematic diagram of a scenario of the medical image region filtering system provided by an embodiment of this application;
Figure 2 is a first schematic flowchart of the medical image region filtering method provided by an embodiment of this application;
Figure 3 is a second schematic flowchart of the medical image region filtering method provided by an embodiment of this application;
Figure 4 is a third schematic flowchart of the medical image region filtering method provided by an embodiment of this application;
Figure 5 is a schematic diagram of an application scenario provided by an embodiment of this application;
Figure 6 is a schematic diagram of a first filtered lesion region provided by an embodiment of this application;
Figure 7 is a schematic diagram of a second filtered lesion region provided by an embodiment of this application;
Figure 8 is a schematic diagram of a third filtered lesion region provided by an embodiment of this application;
Figure 9 is a schematic flowchart provided by an embodiment of this application;
Figure 10 is a first schematic structural diagram of the medical image region filtering apparatus provided by an embodiment of this application;
Figure 11 is a second schematic structural diagram of the medical image region filtering apparatus provided by an embodiment of this application;
Figure 12 is a third schematic structural diagram of the medical image region filtering apparatus provided by an embodiment of this application;
Figure 13 is a schematic structural diagram of the network device provided by an embodiment of this application.
Detailed Description
The technical solutions in the embodiments of this application will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this application. All other embodiments obtained by a person skilled in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
Embodiments of this application provide a medical image region filtering method, apparatus, and storage medium.
An embodiment of this application provides a medical image region filtering method. The method may be executed by the medical image region filtering apparatus provided in the embodiments of this application, or by a network device integrating that apparatus, where the apparatus may be implemented in hardware or software. The network device may be a smartphone, tablet computer, palmtop computer, notebook computer, desktop computer, or similar device.
Referring to Figure 1, Figure 1 is a schematic diagram of an application scenario of the medical image region filtering method provided by an embodiment of this application. Taking the apparatus integrated in a network device as an example, the network device may acquire a medical image of a target part of biological tissue, segment part-tissue regions of multiple tissue types from the medical image, select the reserved region to be retained from those regions based on the shooting position type of the medical image, obtain the positional relationship between the reserved region and a predicted lesion region in the medical image, and filter the predicted lesion region based on that relationship to obtain a target lesion region.
Referring to Figure 2, Figure 2 is a first schematic flowchart of the medical image region filtering method provided by an embodiment of this application. The specific flow may be as follows:
201. Acquire a medical image of a target part of biological tissue.
Biological tissue may be a tissue part of a living subject. For example, when the living subject is a human body, the biological tissue may be muscle tissue, subcutaneous tissue, and so on. Biological tissue may also be tissue composed of a body part, such as breast tissue or eye tissue, or tissue composed of part of a body part, such as the pupil of the eye or the glandular tissue of the breast.
A living subject is an object that can respond to external stimuli and has a life form, such as a human, a cat, or a dog.
The medical image of the target part is an image captured of living tissue, from which the pathology of the tissue can be judged. For example, the medical image of the target part may be a mammography (breast molybdenum-target X-ray) image. Mammography images are acquired by a molybdenum-target X-ray machine and used in mammographic examination, which is currently the first-choice, simplest, and most reliable non-invasive means of diagnosing breast diseases.
There are many ways to acquire the medical image of the target part. For example, when it is a mammography image, the image may be acquired by a molybdenum-target X-ray machine, obtained locally from the network device, downloaded over a network, and so on.
202. Segment part-tissue regions of multiple tissue types from the medical image of the target part.
Tissue type refers to the types of different tissues included in the medical image of the target part. For example, in a mammography image, the nipple may be regarded as one tissue type and the muscle as another. The part-tissue regions segmented from a mammography image may thus include a muscle tissue type and a nipple tissue type, and so on.
During recognition of target lesion regions in mammography images, non-target lesion regions such as lymph nodes and the nipple may be identified as target lesion regions, producing misjudgments. To reduce such misjudgments and improve accuracy, part-tissue regions of multiple tissue types are segmented from the mammography image, i.e., the muscle and nipple tissue regions, so that lymph nodes, the nipple, and other structures misjudged as target lesion regions can be removed, improving the accuracy of target lesion region recognition.
In one embodiment, to improve the accuracy of segmenting part-tissue regions from the medical image, a network model may be used for segmentation. Specifically, the step "segmenting part-tissue regions of multiple tissue types from the medical image of the target part" may include:
segmenting part-tissue regions of multiple tissue types from the medical image of the target part based on a region segmentation network model, where the region segmentation network model is trained on multiple sample part images.
The region segmentation network model is a deep-learning segmentation network model, for example a Fully Convolutional Network (FCN). An FCN classifies an image at the pixel level, thereby solving semantic-level image segmentation. An FCN accepts input images of arbitrary size; a deconvolution sub-network upsamples the feature map of the last convolutional layer to restore it to the size of the input image, so that a prediction is produced for every pixel while the spatial information of the original input is preserved, and pixel-wise classification is then performed on the upsampled feature map. An FCN is essentially a convolutional neural network whose final fully connected layers are replaced by convolutional layers; its output is an already-annotated image.
In one embodiment, the region segmentation network model may also be replaced by a similar network model (such as the U-Net network model).
The U-Net network model is an image segmentation network model applied in the medical field. When handling targets with larger receptive fields, U-Net can freely deepen the network structure according to the chosen dataset, and when fusing shallow features, U-Net can use a stacking (concatenation) approach.
In one embodiment, to improve the accuracy of region filtering, the step "segmenting part-tissue regions of multiple tissue types from the medical image of the target part based on a region segmentation network model" may specifically include:
performing convolution on the medical image of the target part based on a convolution sub-network to obtain image features;
upsampling the features based on a deconvolution sub-network to obtain a size-restored feature image;
classifying the size-restored feature image to obtain part-tissue regions of multiple tissue types.
The region segmentation network model may include a convolution sub-network and a deconvolution sub-network.
The convolution sub-network may include convolutional layers and pooling layers. A convolutional layer consists of several convolution units that perform convolution operations, whose purpose is to extract different features of the input. The first convolutional layer may only extract low-level features such as edges, lines, and corners; networks with more layers can iteratively extract more complex features from the low-level ones. The pooling layer compresses the input feature map: on the one hand it makes the feature map smaller and simplifies the computational complexity of the network, and on the other hand it compresses features and extracts the main ones.
The deconvolution sub-network may include deconvolution layers for deconvolution operations. Deconvolution is also called transposed convolution. The forward pass of a convolutional layer is the backward pass of the deconvolution sub-network, and the backward pass of a convolutional layer is the forward pass of the deconvolution sub-network; the deconvolution sub-network can therefore restore the size of the feature image.
In practice, for example, when the region segmentation network model is a fully convolutional network model, the mammography image may be input to the FCN, convolved by a convolution sub-network comprising multiple convolutional layers to obtain image features; the deconvolution layers of the deconvolution sub-network then upsample the obtained features to restore the feature image to its original size, and the size-restored feature image is classified to obtain part-tissue regions of the muscle and nipple tissue types.
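The convolution-then-deconvolution flow above can be sketched minimally in pure Python. This is an illustrative toy, not the patent's FCN: a 2x2 max-pool stands in for the convolution sub-network's downsampling, nearest-neighbour upsampling stands in for the transposed convolution, and a fixed threshold stands in for the learned pixel-wise classifier. It only shows how the feature map is restored to the input size before per-pixel classification:

```python
def downsample(img):
    # 2x2 max-pooling over a list-of-lists image: stand-in for the
    # convolution sub-network's downsampling.
    return [[max(img[i][j], img[i][j + 1], img[i + 1][j], img[i + 1][j + 1])
             for j in range(0, len(img[0]), 2)]
            for i in range(0, len(img), 2)]

def upsample(feat):
    # Nearest-neighbour upsampling back to double size: stand-in for the
    # deconvolution (transposed convolution) sub-network.
    out = []
    for row in feat:
        wide = [v for v in row for _ in range(2)]
        out.append(wide)
        out.append(list(wide))
    return out

def pixelwise_classify(feat, threshold=0.5):
    # Per-pixel two-class decision on the size-restored feature map.
    return [[1 if v > threshold else 0 for v in row] for row in feat]

img = [[(i * 8 + j) / 64 for j in range(8)] for i in range(8)]  # toy 8x8 image
feat = downsample(img)        # 4x4 feature map
restored = upsample(feat)     # restored to 8x8, as the deconvolution layers do
mask = pixelwise_classify(restored)
```

The key property shown is that the restored feature image has the same size as the input, so each input pixel receives a class label.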
In one embodiment, the medical image region filtering method may also include a step of training the region segmentation network model.
For example, when the region segmentation network model is a fully convolutional network model, its initial weights may be obtained by training on the segmentation dataset PASCAL VOC, followed by transfer learning on the public mammography dataset DDSM, and then further transfer learning on 3000 mammography cases from domestic hospitals annotated with the muscle and nipple tissue regions. For example, the training batch size may be 4, the learning rate 0.00001, and the maximum number of iterations 20000; the trained region segmentation network model is then obtained and can be used to segment part-tissue regions of multiple tissue types.
The segmentation dataset PASCAL VOC (pattern analysis, statistical modelling and computational learning visual object classes) is a standardized image dataset for object class recognition, together with a public toolset for accessing the dataset and annotations.
The public mammography dataset DDSM (Digital Database for Screening Mammography) is a database of breast cancer images established by medical institutions. It contains malignant, normal, and benign data types; much current breast cancer research is based on DDSM.
Transfer learning migrates already-trained model parameters to a new model to help train it. Since most data or tasks are correlated, the learned model parameters can be shared with the new model in some way, accelerating and optimizing its learning efficiency instead of learning from scratch as most networks do. In the embodiments of this application, training with parameter transfer allows a model trained on task A to initialize the model parameters of task B, so that task B can learn and converge faster.
203. Based on the shooting position type of the medical image of the target part, select the reserved region to be retained from the part-tissue regions of the multiple tissue types.
The shooting position type of the medical image reflects the different projection positions used when capturing the image. For example, for mammography images, the shooting position types may include the CC view (craniocaudal: the X-ray beam is projected from top to bottom), the MLO view (mediolateral oblique, divisible into medio-lateral and latero-medial oblique: in the medio-lateral oblique the film is placed below and lateral to the breast and the X-ray beam is projected at 45 degrees from the upper-inner side toward the lower-outer side; the latero-medial oblique is the reverse), the lateral view, spot compression, and magnification views, among others.
The reserved region is the region of the part-tissue regions of the multiple tissue types that needs to be retained for a given shooting position type of the medical image. For different shooting position types, part-tissue regions of different tissue types can be selected for retention according to the actual situation, to improve the accuracy of region filtering.
In practice, for example, when the medical image of the target part is a mammography image, the shooting position types include the CC and MLO views, and the part-tissue regions include the muscle and nipple tissue types: when the mammography image is a CC view, only the nipple tissue region is retained; when it is an MLO view, both the muscle and nipple tissue regions are retained.
In one embodiment, depending on the actual situation, part-tissue regions of different tissue types may also be retained for different shooting position types; for example, both the muscle and nipple tissue regions could be retained for both the CC and MLO views, and so on.
In one embodiment, to reduce misjudgment of target lesion regions and improve the accuracy of region filtering, the step "selecting, based on the shooting position type of the medical image of the target part, the reserved region to be retained from the part-tissue regions of the multiple tissue types" may specifically include:
obtaining a mapping relationship set, the set including mappings between preset shooting position types of the medical image of the target part and tissue types;
obtaining the tissue type corresponding to the shooting position type according to the mapping;
selecting the reserved region to be retained from the part-tissue regions of the multiple tissue types.
The mapping relationship set includes mappings between shooting position types of the medical image and tissue types. For example, when the mammography image is a CC view, the mapping retains the nipple tissue region; when it is an MLO view, the mapping retains the muscle and nipple tissue regions. The reserved region to be retained is selected from the part-tissue regions of the multiple tissue types according to this mapping.
In one embodiment, the mapping may also be adjusted according to the actual situation; for example, a CC-view mammography image could also map to retaining both the muscle and nipple tissue regions, an MLO-view image to retaining both as well, and so on.
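The CC/MLO mapping relationship set described above can be expressed as a simple lookup table. This is a minimal sketch; the view names and tissue labels are illustrative placeholders, not identifiers from the patent:

```python
# Mapping relationship set: shooting position type -> tissue types to retain.
RETAIN_BY_VIEW = {
    "CC":  {"nipple"},            # CC view: keep only the nipple tissue region
    "MLO": {"muscle", "nipple"},  # MLO view: keep muscle and nipple regions
}

def select_reserved_regions(view_type, regions):
    """regions: dict mapping tissue type -> segmented region.

    Keep only the regions whose tissue type is mapped to this view type."""
    keep = RETAIN_BY_VIEW[view_type]
    return {tissue: region for tissue, region in regions.items() if tissue in keep}

regions = {"muscle": "muscle-mask", "nipple": "nipple-mask"}
assert select_reserved_regions("CC", regions) == {"nipple": "nipple-mask"}
assert set(select_reserved_regions("MLO", regions)) == {"muscle", "nipple"}
```

Adjusting the mapping for other view types then only requires editing the table, matching the embodiment where the mapping is tuned to the actual situation.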
204. Obtain the positional relationship between the reserved region and the predicted lesion region in the medical image of the target part.
The positional relationship may be the regional relationship between the reserved region and the predicted lesion region and can be defined according to the actual situation. For example, the positional relationship may include: the predicted lesion region lies entirely within the reserved region; the predicted lesion region lies partially within the reserved region; the predicted lesion region lies outside the reserved region; and so on.
In one embodiment, the positional relationship may also be, for example, that the predicted lesion region is to the upper left or upper right of the reserved region.
There are many ways to obtain the positional relationship. For example, the reserved region may be overlaid on the medical image containing the predicted lesion region. For instance, anchor points of the reserved region may be determined in the medical image containing the predicted lesion region (for example, the two vertices on one diagonal of the reserved region may be taken as its anchor points), and the reserved region and the image containing the predicted lesion region may then be overlaid to obtain the positional relationship between the two.
In one embodiment, the positional relationship may also be obtained from the coordinate information of the reserved region and the predicted lesion region, for example by comparing their coordinate information to determine the position of the predicted lesion region within the reserved region, and so on.
The predicted lesion region is a region detected in the medical image of the target part where a lesion may occur. It includes regions that are already lesioned and may also include non-lesioned regions misjudged as lesion regions, and so on.
The predicted lesion region can be obtained in many ways, for example by prior manual recognition of the medical image, or by detecting predicted lesion regions in the medical image with a network model; for the specific flow, see Figure 3 below.
205. Filter the predicted lesion region in the medical image of the target part based on the positional relationship to obtain the target lesion region.
The predicted lesion region is a suspected lesion region already localized in the medical image of the target part. For example, when the medical image is a mammography image, the predicted lesion region may be a suspected malignant mass localized in the mammography image.
The target lesion region is the predicted lesion region after filtering. For example, when the medical image is a mammography image, the target lesion region may be a malignant mass.
Because the recognition of target lesion regions in mammography images may identify non-target regions such as lymph nodes and the nipple as target lesion regions and thus produce misjudgments, the predicted lesion regions can be filtered to obtain the target lesion regions, removing the lymph nodes, nipple, and other structures misjudged as target lesion regions and improving the accuracy of target lesion region recognition.
In one embodiment, since the reserved region and the predicted lesion region are both obtained by detection on the same medical image of the target part, the positional relationship between the reserved region and the predicted lesion region in the medical image can be obtained, so that the predicted lesion region can be filtered with respect to the reserved region to obtain the target lesion region, improving the accuracy of medical image region filtering.
In practice, the predicted lesion region in the medical image is filtered based on the positional relationship to obtain the target lesion region. For example, when the reserved region includes the muscle tissue region and the nipple tissue region, the positional relationship between the reserved region and the predicted lesion region can be obtained, and according to that relationship, predicted lesion regions falling into the muscle or nipple areas of the reserved region are filtered out; otherwise the predicted lesion region is recognized as a target lesion region.
In one embodiment, predicted lesion regions within preset position areas of the reserved region may also be filtered according to the positional relationship between the reserved region and the predicted lesion region; for example, depending on the actual situation, predicted lesion regions included in a preset-size area at the upper-left corner of the reserved region may be filtered out, and so on.
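One way to realise the coordinate-comparison variant of steps 204-205 is to represent both the reserved regions and the predicted lesions as axis-aligned boxes and drop any prediction whose centre falls inside a reserved (muscle or nipple) box. This is a hedged sketch of one possible geometry, not the patent's exact implementation; all coordinates below are illustrative:

```python
def center(box):
    # Centre point of an (x1, y1, x2, y2) axis-aligned box.
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def inside(point, box):
    # True if the point lies within the box (inclusive).
    x, y = point
    x1, y1, x2, y2 = box
    return x1 <= x <= x2 and y1 <= y <= y2

def filter_lesions(predicted, reserved):
    """Keep predicted lesion boxes whose centre lies in no reserved region."""
    return [p for p in predicted
            if not any(inside(center(p), r) for r in reserved)]

reserved = [(0, 0, 30, 100)]       # e.g. a muscle region along the image edge
predicted = [(5, 10, 25, 30),      # centre falls in the muscle region -> filtered
             (50, 50, 80, 90)]     # outside every reserved region -> kept
assert filter_lesions(predicted, reserved) == [(50, 50, 80, 90)]
```

The surviving boxes correspond to the target lesion regions; the centre-in-box test is one of several reasonable containment criteria (full containment or partial overlap would also fit the positional relationships listed above).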
In one embodiment, referring to Figure 3, the predicted lesion region may be obtained from already-annotated images, or in the following way; the flow for obtaining the predicted lesion region may be as follows:
301. Segment multiple candidate recognition regions from the medical image of the target part.
A candidate recognition region is a region that may contain a predicted lesion region. For example, when the medical image is a mammography image, a candidate recognition region is a region that may contain a malignant mass. Segmenting out the candidate recognition regions improves the efficiency of region filtering.
In one embodiment, segmenting multiple candidate recognition regions from the medical image can improve the accuracy of region filtering. Specifically, the step "segmenting multiple candidate recognition regions from the medical image of the target part" may include:
extracting a sub-part image from the medical image of the target part;
segmenting multiple candidate recognition regions from the sub-part image.
The sub-part image may be an image of a certain part cropped from the medical image of the target part. For example, when the medical image is a mammography image, the sub-part image may be the image of the region where the breast is located.
In practice, for example, the breast-region image may be extracted from the mammography image, and multiple candidate recognition regions segmented from the breast-region image. Extracting the breast region and the candidate recognition regions improves the accuracy of region filtering.
In one embodiment, extracting a sub-part image from the medical image can improve the accuracy of region filtering. Specifically, the step "extracting a sub-part image from the medical image of the target part" may include:
performing grayscale stretching on the medical image of the target part to obtain a stretched medical image;
extracting an initial sub-part image from the stretched medical image;
performing histogram equalization on the image histogram of the initial sub-part image to obtain an equalized sub-part image;
filtering the equalized sub-part image to obtain the sub-part image.
Grayscale stretching, histogram equalization, and filtering are all preprocessing operations. Preprocessing is performed before feature extraction, segmentation, and matching of the input image. Its main purposes are to eliminate irrelevant information in the image, recover useful real information, enhance the detectability of relevant information, and simplify the data as much as possible, thereby improving the reliability of feature extraction, image segmentation, matching, and recognition. Preprocessing the image reduces its noise, improves regional homogeneity and the robustness of subsequent processing, and yields the preprocessed sub-part image.
Grayscale stretching is a grayscale transformation method that uses a piecewise linear transformation function to increase the dynamic range of gray levels during image processing.
An image histogram displays image data as a distribution curve, dark on the left and bright on the right. Image histograms can be computed algorithmically on proportionally scaled-down images and have many advantages such as invariance to image translation, rotation, and scaling, so they are applied in many areas of image processing, such as threshold segmentation of grayscale images, color-based image retrieval, and image classification.
Histogram equalization is a method in image processing that adjusts contrast using the image histogram. By effectively spreading out the commonly used brightness values, histogram equalization distributes brightness better on the histogram, enhancing local contrast without affecting overall contrast.
The filtering operation removes noise from the image and improves regional homogeneity.
In practice, for example, when the medical image is a mammography image, the gray range of the mammography image may be linearly stretched to 0-255. Grayscale stretching of the mammography image improves the robustness of subsequent processing.
In practice, for example, when the initial sub-part image is the breast-region image, a morphological opening operation may be applied to the stretched mammography image: erosion followed by dilation, which removes fine fragmentary tissue and noise, separates objects at thin points, and smooths the boundaries of larger objects without noticeably changing their area. Otsu's method is then used for segmentation, converting the grayscale image into a binary image. Otsu's method assumes that the image contains two classes of pixels (foreground and background) following a bimodal histogram and computes the optimal threshold separating the two classes so that their intra-class variance is minimal; since the pairwise squared distance is constant, this is equivalent to maximizing their inter-class variance. After the morphological opening operation and Otsu segmentation, the breast-region image is obtained.
In practice, for example, when the initial sub-part image is the breast-region image, the image histogram corresponding to the breast-region image may be obtained and histogram equalization applied to obtain the equalized breast-region image.
In practice, for example, when the initial sub-part image is the breast-region image, bilateral filtering may be applied to the equalized breast-region image to obtain the final breast-region image. Bilateral filtering is a nonlinear filtering method that is a compromise combining the spatial proximity and pixel-value similarity of an image; it considers spatial information and gray-level similarity simultaneously to achieve edge-preserving denoising without destroying segmentation edges.
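Two of the preprocessing steps above, linear grayscale stretching to 0-255 and Otsu thresholding, plus histogram equalization, can be sketched in pure Python on a flat list of pixel values. This is an illustrative sketch of the standard algorithms named in the text, not the patent's implementation (real pipelines would typically use a library such as OpenCV):

```python
def gray_stretch(pixels):
    # Linearly stretch pixel values to the full 0-255 range.
    lo, hi = min(pixels), max(pixels)
    return [round((p - lo) * 255 / (hi - lo)) for p in pixels]

def equalize(pixels):
    # Histogram equalization via the cumulative distribution of gray levels.
    n = len(pixels)
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    cdf, acc = [0] * 256, 0
    for g in range(256):
        acc += hist[g]
        cdf[g] = acc
    return [round(cdf[p] * 255 / n) for p in pixels]

def otsu_threshold(pixels):
    # Otsu's method: pick the threshold maximizing inter-class variance.
    n = len(pixels)
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total_sum = sum(g * hist[g] for g in range(256))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        if w0 == 0 or w0 == n:
            continue
        m0 = sum0 / w0                       # mean of class below threshold
        m1 = (total_sum - sum0) / (n - w0)   # mean of class above threshold
        var = w0 * (n - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

pixels = [10, 12, 14, 200, 210, 220, 15, 205]   # toy bimodal "image"
stretched = gray_stretch(pixels)
equalized = equalize(stretched)
t = otsu_threshold(stretched)
binary = [1 if p > t else 0 for p in stretched]  # binarized, as after Otsu
```

On this toy bimodal input the dark and bright pixel clusters end up on opposite sides of the Otsu threshold, mirroring the foreground/background split used to isolate the breast region.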
In one embodiment, multiple candidate recognition regions may be segmented from the sub-part image to improve the accuracy of region filtering. Specifically, the step "segmenting multiple candidate recognition regions from the sub-part image" may include:
segmenting the sub-part image based on a genetic algorithm to obtain multiple segmented images;
applying a morphological opening operation to the segmented images to obtain post-operation segmented images;
selecting multiple candidate recognition regions from the multiple post-operation segmented images.
Image segmentation is the technique and process of dividing an image into a number of specific regions with unique properties and extracting targets of interest. From a mathematical point of view, image segmentation is the process of partitioning a digital image into mutually disjoint regions. Segmentation is also a labeling process: pixels belonging to the same region are given the same label. To make the image easier to train on and classify, the sub-part image can be segmented and candidate recognition regions selected from the multiple resulting regions.
A genetic algorithm is a computational model of the biological evolutionary process that simulates the natural selection and genetic mechanisms of Darwinian evolution; it is a method of searching for an optimal solution by simulating natural evolution. After the initial population is generated, better and better approximate solutions evolve generation by generation according to the principles of survival of the fittest: in each generation, individuals are selected according to their fitness in the problem domain and combined through crossover and mutation using operators borrowed from natural genetics, producing a population representing a new solution set.
In practice, for example, a two-dimensional wavelet transform may first be applied to the sub-part image to reduce dimensionality; for low-detail images, segmentation can be based on the image histogram. The histogram may be segmented with a genetic algorithm, using binary encoding to map phenotype to genotype; the binary sequence length may equal the number of image gray levels, with a bit value of 0 meaning that the corresponding gray level is a segmentation threshold. The fitness (value) function of the genetic algorithm measures how good a chromosome is, taking maximal inter-class variance and minimal intra-class variance as the criterion; after population initialization, the three processes of selection, crossover, and mutation are iterated until convergence. For example, the initial population size may be 30, the number of iterations 40, the selection rate 10%, the crossover rate 80%, and the mutation rate 10%; finally the segmentation thresholds are output, and the sub-part image is segmented accordingly to obtain multiple segmented images.
Afterwards, a morphological opening operation may be applied to the segmented images to obtain the post-operation segmented images: erosion followed by dilation removes fine fragmentary tissue and noise, separates objects at thin points, smooths the boundaries of larger objects without noticeably changing their area, and disconnects glandular connections, facilitating subsequent region extraction.
In practice, multiple candidate recognition regions are selected from the multiple post-operation segmented images. For example, regions of higher gray level may be extracted from the post-operation segmented images, say the regions of the top-5 gray levels; then, based on the extracted regions, the 10 largest-area regions in each mammography image are selected as candidate recognition regions.
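The final selection step, keeping the brightest gray levels and then the largest regions, can be sketched as follows. The region representation here is hypothetical (each region is reduced to a gray level, a pixel area, and an id), chosen only to show the two-stage top-k filter:

```python
def select_candidates(regions, top_gray=5, top_area=10):
    """regions: list of (gray_level, area, region_id) tuples.

    Keep regions whose gray level is among the top-N distinct levels,
    then keep the M largest-area regions of those."""
    gray_levels = sorted({g for g, _, _ in regions}, reverse=True)[:top_gray]
    bright = [r for r in regions if r[0] in gray_levels]
    return sorted(bright, key=lambda r: r[1], reverse=True)[:top_area]

regions = [(250, 40, "a"), (249, 120, "b"), (10, 500, "c"), (248, 60, "d")]
picked = select_candidates(regions, top_gray=3, top_area=2)
assert [r[2] for r in picked] == ["b", "d"]   # large dim region "c" is excluded
```

Note how region "c", despite being the largest, is excluded because its gray level is not among the brightest levels; the gray-level cut runs before the area cut, matching the order described above.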
302. Classify the candidate recognition regions based on a region classification network model to obtain the lesion prediction probability corresponding to each candidate recognition region.
The region classification network model is a network model that classifies candidate recognition regions and determines whether a candidate recognition region contains a predicted lesion region. For example, the region classification network model may be the Inception V3 network model.
Inception V3 is a kind of convolutional neural network. A convolutional neural network is a feed-forward neural network whose artificial neurons respond to surrounding units and which can process large images; it includes convolutional layers and pooling layers. Inception V3 optimizes the network by increasing the width of a single convolutional layer, i.e., using convolution kernels of different scales on one convolutional layer.
The lesion prediction probability is the predicted probability that the candidate recognition region contains a predicted lesion region.
In one embodiment, the candidate recognition regions may be classified by a network model to obtain their corresponding lesion prediction probabilities, improving the region filtering accuracy. Specifically, the step "classifying the candidate recognition regions based on a region classification network model to obtain the lesion prediction probability corresponding to each candidate recognition region" may include:
performing convolution on the candidate recognition region based on convolutional layers to obtain region features;
classifying the region features based on a fully connected layer to obtain the lesion prediction probability corresponding to the candidate recognition region.
The region classification network model recognizes the lesion prediction probability corresponding to a candidate recognition region; it may be, for example, the deep convolutional neural network GoogLeNet (Inception V3), a fully convolutional neural network, a ResNet network model, and so on. The region classification network model may include convolutional layers and fully connected layers.
The Inception V3 network model approximates the optimal local sparse structure with dense components, thereby using computing resources more efficiently and extracting more features for the same amount of computation, improving training results. The Inception network model has two characteristics: first, it uses 1x1 convolutions to raise and lower dimensionality; second, it convolves at multiple scales simultaneously and then aggregates.
A convolutional layer consists of several convolution units whose parameters are optimized by the back-propagation algorithm. The purpose of image convolution is to extract different features of the input image; the first convolutional layer may only extract low-level features such as edges, lines, and corners, while networks with more layers can iteratively extract more complex features from the low-level ones.
The fully connected layer integrates the class-discriminative local information in the convolutional layers. The output value of the last fully connected layer is passed to an output, which may be classified with softmax logistic regression.
In practice, for example, when the region classification network model is the Inception V3 network model, a candidate recognition region may be input to the model, convolved by the convolutional layers to obtain region features, which are then classified by the fully connected layer to obtain the lesion prediction probability corresponding to the candidate recognition region. For example, with convolution kernels of multiple sizes and a 22-layer-deep neural network, the model maintains the sparsity of the network structure while exploiting the high computational performance of dense matrices.
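The final classification stage, region features in, two-class probability out, reduces to a fully connected (linear) layer followed by softmax. A minimal pure-Python sketch of just that last stage (the weights, bias, and feature vector below are illustrative, not trained values, and the convolutional feature extractor is omitted):

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def fc_classify(features, weights, bias):
    """One fully connected layer: logits = W @ features + b, then softmax.

    Returns [p_non_lesion, p_lesion]; the second entry plays the role of
    the lesion prediction probability of a candidate recognition region."""
    logits = [sum(w * f for w, f in zip(row, features)) + b
              for row, b in zip(weights, bias)]
    return softmax(logits)

features = [0.8, 0.2, 0.5]                       # pooled region features (toy)
weights = [[0.1, 0.4, -0.2], [0.9, -0.3, 0.6]]   # 2 output classes x 3 features
bias = [0.0, 0.1]
probs = fc_classify(features, weights, bias)
```

With the two-class output used here, screening a candidate region then amounts to comparing `probs[1]` against the 0.5 threshold mentioned in step 303.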
In one embodiment, the medical image region filtering method may also include the training process of the region classification network model.
Specifically, the medical image region filtering method may further include:
obtaining positive and negative sample regions, the positive sample regions containing predicted lesion regions;
performing data augmentation on the positive and negative sample regions to obtain augmented positive and negative sample regions;
updating the model weights of the region classification network model according to the augmented positive and negative sample regions to obtain a trained region classification network model;
updating the region classification network model based on the trained region classification network model.
The positive sample regions contain predicted lesion regions; positive and negative sample regions are sample regions in physician-annotated sample medical images of the target part. For example, a positive sample may be a physician-annotated mammography image containing a suspected malignant mass region, with the positive sample including the entire mass region surrounded by a small amount of background; a negative sample may be a physician-annotated mammography image of a clearly benign mass region or a background region.
Data augmentation lets limited data generate more equivalent data, increasing the number of samples and enhancing them.
Transfer learning is a machine-learning method that applies existing knowledge to solve problems in different but related domains. It relaxes two basic assumptions of traditional machine learning, aiming to transfer existing knowledge to solve learning problems in a target domain where labeled sample data are scarce or even absent.
In practice, positive and negative sample regions may be obtained in many ways: for example, domestic hospital data may be used with hired experts annotating the data, or the sample regions may be obtained locally or downloaded from the network, and so on.
In practice, data augmentation is performed on the positive and negative sample regions to obtain the augmented regions. For example, since the images containing the sample regions are mammography images, the augmentation mainly consists of flipping and cropping; color-space augmentation is unnecessary. Multiple augmented positive and negative sample regions are obtained.
In practice, for example, when the region classification network model is the Inception V3 network model, the number of output classes of the model may be set to 2. The model weights may be initialized first with the ImageNet dataset (a standard computer-vision dataset) and then with the public DDSM dataset, a database of breast cancer images established by US medical institutions. Finally, transfer learning with the augmented positive and negative sample regions updates the weights of the region classification network model. For example, RMSprop (root mean square propagation, an adaptive learning-rate method based on the root mean square) may be used as the descent algorithm, with a batch size of 64, an initial learning rate of 0.01, and a maximum of 100000 iterations. After training, the trained region classification network model is obtained, and the region classification network model is updated based on it.
303. Screen predicted lesion regions from the candidate recognition regions according to the lesion prediction probability to obtain the predicted lesion regions corresponding to the medical image of the target part.
In practice, for example, multiple candidate recognition regions may be segmented from the mammography image and input to the region classification network model for classification to obtain their lesion prediction probabilities; predicted lesion regions are then screened from the candidate recognition regions according to those probabilities. For example, candidate recognition regions with a lesion prediction probability greater than 0.5 may be determined as predicted lesion regions.
In one embodiment, for example, non-maximum suppression may also be used to remove overlapping predicted lesion regions, with an overlap threshold of, say, 50%; this reduces the false-positive rate and also improves the localization accuracy of the predicted lesion regions.
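The non-maximum-suppression step with a 50% overlap threshold can be sketched as the standard greedy algorithm over scored boxes. This is the generic textbook procedure, not code from the patent; boxes and scores below are illustrative:

```python
def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix = max(0, min(ax2, bx2) - max(ax1, bx1))
    iy = max(0, min(ay2, by2) - max(ay1, by1))
    inter = ix * iy
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: visit boxes in descending score
    order, keeping a box only if it overlaps no kept box above thresh."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]          # lesion prediction probabilities
assert nms(boxes, scores, thresh=0.5) == [0, 2]  # near-duplicate box 1 removed
```

The second box overlaps the highest-scoring box with IoU above 0.5 and is therefore suppressed, which is exactly how duplicate detections of the same suspected mass are removed.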
Referring to Figure 5, the medical image region filtering method can recognize target lesion regions from a mammography image and mark them, for example with boxes.
It can be seen from the above that the embodiment of this application acquires a medical image of a target part of biological tissue, segments part-tissue regions of multiple tissue types from it, selects the reserved region to be retained from those regions based on the shooting position type of the medical image, obtains the positional relationship between the reserved region and the predicted lesion region in the medical image, and filters the predicted lesion region based on that relationship to obtain the target lesion region. By connecting two neural networks in parallel, the solution extracts diverse features of the image, reduces the manual workload, and improves automation, accuracy, efficiency, and applicability; retaining the needed regions from the multi-tissue-type part-tissue regions based on the shooting position type, and filtering the predicted lesion regions corresponding to the medical image based on the reserved region, can effectively filter out the lymph nodes, nipple, and other structures that interfere with determining the target lesion region, reducing misjudgment of the target lesion region and improving the accuracy of region filtering.
The method described in the above embodiments is further illustrated in detail below by way of example.
Referring to Figure 4, in this embodiment the medical image region filtering apparatus is specifically integrated in a network device, taken as an example for description.
401. The network device acquires a mammography image.
In practice, referring to Figure 9, the network device may acquire the mammography image in many ways: for example, from a molybdenum-target X-ray machine, locally, or by downloading over a network, and so on.
402. The network device segments part-tissue regions of multiple tissue types from the mammography image.
Because the recognition of target lesion regions in mammography images may identify non-target regions such as lymph nodes and the nipple as target lesion regions and thus produce misjudgments, part-tissue regions of multiple tissue types, i.e., the muscle and nipple tissue regions, are segmented from the mammography image so that the lymph nodes, nipple, and other structures misjudged as target lesion regions can be removed, improving the accuracy of target lesion region recognition.
In practice, referring to Figure 9, the network device may segment part-tissue regions of multiple tissue types from the mammography image based on a region segmentation network model trained on multiple sample part images.
Specifically, convolution may be performed on the mammography image based on a convolution sub-network to obtain image features; the features are upsampled based on a deconvolution sub-network to obtain a size-restored feature image; finally the size-restored feature image is classified to obtain part-tissue regions of multiple tissue types.
In practice, for example, when the region segmentation network model is a fully convolutional network model, the mammography image may be input to the FCN, convolved by a convolution sub-network comprising multiple convolutional layers to obtain image features, upsampled by the deconvolution sub-network to restore the feature image to its original size, and the restored feature image classified to obtain the muscle and nipple tissue regions.
In one embodiment, the medical image region filtering method may also include a step of training the region segmentation network model. For example, when the model is a fully convolutional network model, its initial weights may be trained on the segmentation dataset PASCAL VOC, followed by transfer learning on the public mammography dataset DDSM, and then further transfer learning on 3000 mammography cases from domestic hospitals annotated with the muscle and nipple tissue regions. For example, the training batch size may be 4, the learning rate 0.00001, and the maximum number of iterations 20000; the trained region segmentation network model is then obtained and can segment part-tissue regions of multiple tissue types.
403. The network device selects, based on the shooting position type of the mammography image, the reserved region to be retained from the part-tissue regions of the multiple tissue types.
In practice, referring to Figure 9, the regions to be retained differ with the shooting position type of the mammography image. For example, when the shooting position types include the CC and MLO views and the part-tissue regions include the muscle and nipple tissue types: when the mammography image is a CC view, only the nipple tissue region is retained; when it is an MLO view, both the muscle and nipple tissue regions are retained.
Specifically, the network device may obtain a mapping relationship set and then select the reserved region from the part-tissue regions of the multiple tissue types according to the mapping. For example, a CC-view mammography image maps to retaining the nipple tissue region, and an MLO-view image maps to retaining the muscle and nipple tissue regions; the reserved region is selected from the part-tissue regions of the multiple tissue types according to this mapping.
404. The network device segments multiple candidate recognition regions from the mammography image.
In practice, referring to Figure 9, the breast image may be extracted from the mammography image, and multiple candidate recognition regions segmented from the breast image.
Specifically, the network device may perform grayscale stretching on the mammography image to obtain a stretched mammography image, extract the initial breast image from the stretched image, perform histogram equalization on the image histogram of the initial breast image to obtain the equalized breast image, and filter the equalized breast image to obtain the breast image. The gray range of the mammography image may be linearly stretched to 0-255, improving the robustness of subsequent processing. A morphological opening operation may then be applied to the stretched mammography image: erosion followed by dilation removes fine fragmentary tissue and noise, separates objects at thin points, and smooths the boundaries of larger objects without noticeably changing their area. Otsu's method then performs segmentation, converting the grayscale image into a binary image. After the morphological opening and Otsu segmentation, the breast image is obtained; the image histogram corresponding to the breast image may then be obtained and histogram equalization applied to obtain the equalized breast image. Finally, bilateral filtering may be applied to the equalized breast image to obtain the final breast image.
In practice, the breast image may be segmented based on a genetic algorithm to obtain multiple segmented images; a morphological opening operation is applied to the segmented images to obtain post-operation segmented images, from which multiple candidate recognition regions are selected.
Specifically, a two-dimensional wavelet transform may first be applied to the breast image to reduce dimensionality. The image histogram may be segmented with a genetic algorithm, using binary encoding to map phenotype to genotype; the binary sequence length may equal the number of image gray levels, with a bit value of 0 meaning that gray level is a segmentation threshold. The fitness (value) function of the genetic algorithm measures how good a chromosome is, taking maximal inter-class variance and minimal intra-class variance as the criterion; after population initialization, selection, crossover, and mutation are iterated until convergence. For example, the initial population size may be 30, the number of iterations 40, the selection rate 10%, the crossover rate 80%, and the mutation rate 10%; finally the segmentation thresholds are output, and the sub-part image is segmented accordingly to obtain multiple segmented images.
Afterwards, a morphological opening operation may be applied to the segmented images to obtain the post-operation segmented images. Finally, multiple candidate recognition regions are selected from the post-operation segmented images: for example, regions of higher gray level may be extracted, say the regions of the top-5 gray levels; then, based on the extracted regions, the 10 largest-area regions in each mammography image are selected as candidate recognition regions.
405. The network device classifies the candidate recognition regions based on the region classification network model to obtain the lesion prediction probabilities corresponding to the candidate recognition regions.
In practice, referring to Figure 9, the network device may convolve the candidate recognition regions based on convolutional layers to obtain region features, then classify the features based on a fully connected layer to obtain the lesion prediction probabilities corresponding to the candidate recognition regions.
Specifically, when the region classification network model is the Inception V3 network model, a candidate recognition region is input to the model, convolved by the convolutional layers to obtain region features, and the features are classified by the fully connected layer to obtain the lesion prediction probability. For example, with convolution kernels of multiple sizes and a 22-layer-deep neural network, the model maintains the sparsity of the network structure while exploiting the high computational performance of dense matrices.
In one embodiment, the medical image region filtering method may also include the training process of the region classification network model.
In practice, positive and negative sample regions may be obtained and augmented to obtain augmented positive and negative sample regions; the model weights of the region classification network model are then updated according to the augmented regions to obtain the trained region classification network model, and finally the region classification network model is updated based on the trained model.
Specifically, the positive and negative sample regions may be obtained in many ways: for example, hospital data may be used with hired experts annotating the data, or the sample regions may be obtained locally or downloaded from the network, and so on. Data augmentation is then performed on the sample regions. For example, since the images containing the sample regions are mammography images, the augmentation mainly consists of flipping and cropping, with no color-space augmentation, yielding multiple augmented positive and negative sample regions.
When the region classification network model is the Inception V3 network model, the number of output classes may be set to 2. The model weights may be initialized first with the ImageNet dataset (a standard computer-vision dataset) and then with the public DDSM dataset, a database of breast cancer images established by US medical institutions. Finally, transfer learning with the augmented positive and negative sample regions updates the model weights. For example, RMSprop (root mean square propagation, an adaptive learning-rate method based on the root mean square) may be used as the descent algorithm, with a batch size of 64, an initial learning rate of 0.01, and a maximum of 100000 iterations. After training, the trained region classification network model is obtained and used to update the region classification network model.
406. The network device screens predicted lesion regions from the candidate recognition regions according to the lesion prediction probability.
In practice, referring to Figure 9, the network device may screen predicted lesion regions from the candidate recognition regions according to the lesion prediction probability. For example, candidate recognition regions with a lesion prediction probability greater than 0.5 may be determined as predicted lesion regions.
In one embodiment, non-maximum suppression may also be used to remove overlapping predicted lesion regions, with an overlap threshold of, say, 50%; this reduces the false-positive rate and improves the localization accuracy of the predicted lesion regions.
407. The network device obtains the positional relationship between the reserved region and the predicted lesion regions in the mammography image.
There are many ways to obtain the positional relationship. For example, the reserved region may be overlaid on the medical image containing the predicted lesion region. For instance, anchor points of the reserved region may be determined in that image (for example, the two vertices on one diagonal of the reserved region may be taken as its anchor points), and the reserved region and the image containing the predicted lesion region may then be overlaid to obtain the positional relationship between the two.
In one embodiment, the positional relationship may also be obtained from the coordinate information of the reserved region and the predicted lesion region, for example by comparing their coordinate information to determine the position of the predicted lesion region within the reserved region, and so on.
408. The network device filters the predicted lesion regions in the mammography image based on the positional relationship to obtain the target lesion regions.
In one embodiment, since the reserved region and the predicted lesion region are both obtained by detection on the same medical image of the target part, the positional relationship between them can be obtained, so that the predicted lesion regions can be filtered with respect to the reserved region to obtain the target lesion regions, improving the accuracy of medical image region filtering.
In practice, the predicted lesion regions in the medical image are filtered based on the positional relationship to obtain the target lesion regions. For example, when the reserved region includes the muscle tissue region and the nipple tissue region, the positional relationship between the reserved region and the predicted lesion region can be obtained, and according to that relationship, predicted lesion regions falling into the muscle or nipple areas of the reserved region are filtered out; otherwise the predicted lesion region is recognized as a target lesion region.
In practice, referring to Figures 6, 7, and 8 (Figures 6 and 7 are MLO-view images, Figure 8 a CC-view image), the predicted lesion regions included in specific areas of the reserved region in the mammography image are filtered to obtain the target lesion regions. For example, for the muscle and nipple areas in the reserved region: when a predicted lesion region falls into the muscle area (as shown in Figure 6) or the nipple area (as shown in Figure 7) of the reserved region, it is filtered out; otherwise the predicted lesion region is recognized as a target lesion region (as shown in Figures 6 and 8).
It can be seen from the above that this embodiment acquires the medical image of the target part of biological tissue through the network device, segments part-tissue regions of multiple tissue types from the medical image, selects the reserved region to be retained from those regions based on the shooting position type, obtains the positional relationship between the reserved region and the predicted lesion region in the medical image, and filters the predicted lesion region based on that relationship to obtain the target lesion region. By connecting two neural networks in parallel, the solution extracts diverse features of the image, reduces the manual workload, and improves automation, accuracy, efficiency, and applicability; retaining the needed regions based on the shooting position type and filtering the predicted lesion regions based on the reserved region can effectively filter out the lymph nodes, nipple, and other structures that interfere with determining the target lesion region, reducing misjudgment of the target lesion region and improving the accuracy of region filtering.
To better implement the above method, an embodiment of this application further provides a medical image region filtering apparatus, which may be applied to a network device. As shown in Figure 10, the apparatus may include an acquisition module 101, a segmentation module 102, a retention module 103, a positional relationship acquisition module 104, and a filtering module 105, as follows:
the acquisition module 101, configured to acquire a medical image of a target part of biological tissue;
the segmentation module 102, configured to segment part-tissue regions of multiple tissue types from the medical image of the target part;
the retention module 103, configured to select, based on the shooting position type of the medical image of the target part, the reserved region to be retained from the part-tissue regions of the multiple tissue types;
the positional relationship acquisition module 104, configured to obtain the positional relationship between the reserved region and the predicted lesion region in the medical image of the target part;
the filtering module 105, configured to filter the predicted lesion region in the medical image of the target part based on the positional relationship to obtain a target lesion region.
In one embodiment, the segmentation module 102 may specifically be configured to:
perform convolution on the medical image of the target part based on a convolution sub-network to obtain image features;
upsample the features based on a deconvolution sub-network to obtain a size-restored feature image;
classify the size-restored feature image to obtain part-tissue regions of multiple tissue types.
In one embodiment, referring to Figure 11, the retention module 103 may include a first acquisition sub-module 1031, a second acquisition sub-module 1032, and a retention sub-module 1033, as follows:
the first acquisition sub-module 1031, configured to obtain a mapping relationship set, the set including mappings between preset shooting position types of the medical image of the target part and tissue types;
the second acquisition sub-module 1032, configured to obtain the tissue type corresponding to the shooting position type according to the mapping;
the retention sub-module 1033, configured to select the reserved region to be retained from the part-tissue regions of the multiple tissue types.
In one embodiment, referring to Figure 12, the medical image region filtering apparatus may further include a second segmentation module 106, a classification module 107, and a screening module 108, as follows:
the second segmentation module 106, configured to segment multiple candidate recognition regions from the medical image of the target part;
the classification module 107, configured to classify the candidate recognition regions based on a region classification network model to obtain the lesion prediction probabilities corresponding to the candidate recognition regions;
the screening module 108, configured to screen predicted lesion regions from the candidate recognition regions according to the lesion prediction probabilities to obtain the predicted lesion regions corresponding to the medical image of the target part.
In one embodiment, the second segmentation module 106 may specifically be configured to:
extract a sub-part image from the medical image of the target part;
segment multiple candidate recognition regions from the sub-part image.
In one embodiment, the classification module 107 may specifically be configured to:
perform convolution on the candidate recognition regions based on convolutional layers to obtain region features;
classify the region features based on a fully connected layer to obtain the lesion prediction probabilities corresponding to the candidate recognition regions.
It can be seen from the above that in this embodiment the acquisition module 101 acquires the medical image of the target part of biological tissue, the segmentation module 102 segments part-tissue regions of multiple tissue types from the medical image, the retention module 103 selects the reserved region to be retained from those regions based on the shooting position type, the positional relationship acquisition module 104 obtains the positional relationship between the reserved region and the predicted lesion region in the medical image, and the filtering module 105 filters the predicted lesion region based on that relationship to obtain the target lesion region. By connecting two neural networks in parallel, the solution extracts diverse features of the image, reduces the manual workload, and improves automation, accuracy, efficiency, and applicability; retaining the needed regions based on the shooting position type and filtering the predicted lesion regions based on the reserved region can effectively filter out the lymph nodes, nipple, and other structures that interfere with determining the target lesion region, reducing misjudgment of the target lesion region and improving the accuracy of region filtering.
An embodiment of this application further provides a network device, which may be a server, terminal, or similar device integrating any medical image region filtering apparatus provided in the embodiments of this application. As shown in Figure 13, Figure 13 is a schematic structural diagram of the network device provided by an embodiment of this application; specifically:
the network device may include a processor 131 with one or more processing cores, a memory 132 with one or more computer-readable storage media, a power supply 133, an input unit 134, and other components. The network device structure shown in Figure 13 does not constitute a limitation on the network device, which may include more or fewer components than shown, combine certain components, or arrange components differently. Among them:
the processor 131 is the control center of the network device, connecting all parts of the entire network device through various interfaces and lines. By running or executing the software programs and/or modules stored in the memory 132 and calling the data stored in the memory 132, it performs the various functions of the network device and processes data, thereby monitoring the network device as a whole. Optionally, the processor 131 may include one or more processing cores; the processor 131 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, and application programs, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 131.
The memory 132 may be used to store software programs and modules; the processor 131 executes various functional applications and data processing by running the software programs and modules stored in the memory 132. The memory 132 may mainly include a program storage area and a data storage area: the program storage area may store the operating system and the application programs required by at least one function (such as a sound playback function or an image playback function), while the data storage area may store data created from the use of the network device. In addition, the memory 132 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage device. Correspondingly, the memory 132 may further include a memory controller to provide the processor 131 with access to the memory 132.
The network device further includes a power supply 133 supplying power to its components. The power supply 133 may be logically connected to the processor 131 through a power management system, so that charging, discharging, and power-consumption management are handled by that system. The power supply 133 may also include one or more DC or AC power sources, a recharging system, a power-failure detection circuit, a power converter or inverter, a power status indicator, or any other component.
The network device may further include an input unit 134, which may be used to receive input digital or character information and to generate keyboard, mouse, joystick, optical, or trackball signal input related to user settings and function control.
Although not shown, the network device may also include a display unit and the like, which will not be repeated here. Specifically, in this embodiment, the processor 131 in the network device loads the executable files corresponding to the processes of one or more application programs into the memory 132 according to the following instructions, and the processor 131 runs the application programs stored in the memory 132 to realize various functions, as follows:
acquiring a medical image of a target part of biological tissue, segmenting part-tissue regions of multiple tissue types from the medical image, selecting the reserved region to be retained from those regions based on the shooting position type of the medical image, obtaining the positional relationship between the reserved region and the predicted lesion region in the medical image, and filtering the predicted lesion region based on that relationship to obtain the target lesion region.
The processor 131 may also run the application programs stored in the memory 132 to realize the following functions:
acquiring a medical image of a target part of biological tissue, segmenting part-tissue regions of multiple tissue types from the medical image, selecting the reserved region to be retained from those regions based on the shooting position type of the medical image, obtaining the positional relationship between the reserved region and the predicted lesion region in the medical image, and filtering the predicted lesion region based on that relationship to obtain the target lesion region.
For the specific implementation of each of the above operations, see the preceding embodiments; details are not repeated here.
It can be seen from the above that the embodiment of this application acquires a medical image of a target part of biological tissue, segments part-tissue regions of multiple tissue types from it, selects the reserved region to be retained based on the shooting position type, obtains the positional relationship between the reserved region and the predicted lesion region, and filters the predicted lesion region accordingly to obtain the target lesion region. By connecting two neural networks in parallel, the solution extracts diverse features of the image, reduces the manual workload, and improves automation, accuracy, efficiency, and applicability; retaining the needed regions based on the shooting position type and filtering the predicted lesion regions based on the reserved region can effectively filter out the lymph nodes, nipple, and other structures that interfere with determining the target lesion region, reducing misjudgment and improving the accuracy of region filtering.
All or part of the steps in the various methods of the foregoing embodiments may be completed by instructions, or by instructions controlling related hardware; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of this application provides a storage medium storing multiple instructions that can be loaded by a processor to execute the steps of any medical image region filtering method provided in the embodiments of this application. For example, the instructions may perform the following steps:
acquiring a medical image of a target part of biological tissue, segmenting part-tissue regions of multiple tissue types from the medical image, selecting the reserved region to be retained from those regions based on the shooting position type of the medical image, obtaining the positional relationship between the reserved region and the predicted lesion region in the medical image, and filtering the predicted lesion region based on that relationship to obtain the target lesion region.
For the specific implementation of each of the above operations, see the preceding embodiments; details are not repeated here.
The storage medium may include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.
Since the instructions stored in the storage medium can execute the steps of any medical image region filtering method provided in the embodiments of this application, they can achieve the beneficial effects achievable by any such method; for details, see the preceding embodiments, which are not repeated here.
The medical image region filtering method, apparatus, and storage medium provided in the embodiments of this application have been described in detail above. Specific examples are used herein to explain the principles and implementations of this application, and the description of the above embodiments is only intended to help understand the method of this application and its core idea. Meanwhile, a person skilled in the art may make changes to the specific implementations and application scope according to the ideas of this application. In summary, the content of this specification should not be construed as limiting this application.

Claims (14)

  1. A medical image region filtering method, comprising:
    acquiring a medical image of a target part of biological tissue;
    segmenting part tissue regions of multiple tissue types from the target-part medical image;
    selecting, based on a shooting position type of the target-part medical image, retained regions to be kept from the part tissue regions of the multiple tissue types;
    obtaining a positional relationship between the retained regions and predicted lesion regions in the target-part medical image; and
    filtering the predicted lesion regions in the target-part medical image based on the positional relationship to obtain target lesion regions.
  2. The medical image region filtering method according to claim 1, wherein segmenting the part tissue regions of multiple tissue types from the target-part medical image comprises:
    segmenting the part tissue regions of the multiple tissue types from the target-part medical image based on a region segmentation network model, wherein the region segmentation network model is trained from a plurality of sample part images.
  3. The medical image region filtering method according to claim 2, wherein the region segmentation network model comprises a convolutional sub-network and a deconvolutional sub-network; and
    segmenting the part tissue regions of multiple tissue types from the target-part medical image based on the region segmentation network model comprises:
    performing convolution processing on the target-part medical image based on the convolutional sub-network to obtain image features;
    upsampling the features based on the deconvolutional sub-network to obtain a feature image restored to the original size; and
    classifying the restored feature image to obtain the part tissue regions of the multiple tissue types.
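The convolution, upsampling, and per-pixel classification flow of claim 3 can be illustrated schematically. The sketch below is only a toy stand-in for the data flow: average pooling replaces the convolutional sub-network, nearest-neighbour repetition replaces the deconvolutional sub-network, and a linear score per tissue class replaces the learned classifier; none of these stand-ins are the claimed network:

```python
import numpy as np

def conv_downsample(img):
    """Stand-in for the convolutional sub-network: stride-2
    2x2 average pooling that halves the feature-map size."""
    h, w = img.shape[0] // 2, img.shape[1] // 2
    return img[:h * 2, :w * 2].reshape(h, 2, w, 2).mean(axis=(1, 3))

def upsample(feat, size):
    """Stand-in for the deconvolutional sub-network: nearest-neighbour
    upsampling that restores the feature map to the input size."""
    ry, rx = size[0] // feat.shape[0], size[1] // feat.shape[1]
    return np.repeat(np.repeat(feat, ry, axis=0), rx, axis=1)

def segment(img, class_weights):
    """Per-pixel classification of the restored feature image:
    one linear score per tissue class, then argmax over classes."""
    restored = upsample(conv_downsample(img), img.shape)
    scores = np.stack([restored * w for w in class_weights])
    return scores.argmax(axis=0)
```

Calling `segment` on an image returns a label map of the same spatial size, one tissue-class index per pixel, which is the shape of output the claim describes.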
  4. The medical image region filtering method according to claim 1, wherein selecting, based on the shooting position type of the target-part medical image, the retained regions to be kept from the part tissue regions of the multiple tissue types comprises:
    obtaining a mapping relationship set, the mapping relationship set comprising mapping relationships between preset shooting position types of target-part medical images and tissue types;
    obtaining the tissue type corresponding to the shooting position type according to the mapping relationships; and
    selecting the retained regions to be kept from the part tissue regions of the multiple tissue types.
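One natural reading of claim 4 is that the mapping relationship set is a lookup table from shooting position (view) type to the tissue types whose regions should be retained. The view and tissue names below are purely illustrative placeholders, not values from the application:

```python
# Hypothetical mapping set: preset shooting-position types -> tissue types.
MAPPING_SET = {
    "view_A": {"tissue_1"},
    "view_B": {"tissue_1", "tissue_2"},
}

def select_retained(view_type, regions_by_tissue):
    """regions_by_tissue maps tissue type -> its segmented region(s).
    Keep only the tissue types that the mapping set associates with
    this image's shooting position type."""
    wanted = MAPPING_SET.get(view_type, set())
    return {t: r for t, r in regions_by_tissue.items() if t in wanted}
```

An unknown view type simply retains nothing, which keeps the lookup total without special-casing.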
  5. The medical image region filtering method according to claim 1, further comprising:
    segmenting a plurality of candidate identification regions from the target-part medical image;
    classifying the candidate identification regions based on a region classification network model to obtain lesion prediction probabilities corresponding to the candidate identification regions; and
    screening predicted lesion regions from the candidate identification regions according to the lesion prediction probabilities, to obtain the predicted lesion regions corresponding to the target-part medical image.
  6. The medical image region filtering method according to claim 5, wherein segmenting the plurality of candidate identification regions from the target-part medical image comprises:
    extracting a sub-part image from the target-part medical image; and
    segmenting the plurality of candidate identification regions from the sub-part image.
  7. The medical image region filtering method according to claim 5, wherein the region classification network model comprises a convolutional layer and a fully connected layer; and
    classifying the candidate identification regions based on the region classification network model to obtain the lesion prediction probabilities corresponding to the candidate identification regions comprises:
    performing convolution processing on the candidate identification regions based on the convolutional layer to obtain region features; and
    classifying the region features based on the fully connected layer to obtain the lesion prediction probabilities corresponding to the candidate identification regions.
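The two stages of claim 7 can be sketched with a single valid-mode 2-D correlation standing in for the convolutional layer and a one-output fully connected layer plus sigmoid producing the lesion probability. The kernel, weight shapes, and single-layer structure are illustrative simplifications, not the claimed model:

```python
import numpy as np

def conv_features(region, kernel):
    """Valid-mode 2-D correlation: stand-in for the convolutional layer
    that turns a candidate region into region features."""
    kh, kw = kernel.shape
    h, w = region.shape[0] - kh + 1, region.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (region[i:i + kh, j:j + kw] * kernel).sum()
    return out

def lesion_probability(region, kernel, fc_w, fc_b):
    """Fully connected layer over the flattened features, squashed by a
    sigmoid: the candidate region's lesion prediction probability."""
    feat = conv_features(region, kernel).ravel()
    return 1.0 / (1.0 + np.exp(-(feat @ fc_w + fc_b)))
```

The returned value is a probability in (0, 1), which is what the screening step of claim 5 thresholds on.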
  8. The medical image region filtering method according to any one of claims 1 to 7, further comprising:
    obtaining positive and negative sample regions, the positive sample regions comprising predicted lesion regions;
    performing data augmentation on the positive and negative sample regions to obtain augmented positive and negative sample regions;
    updating model weights in a region classification network model according to the augmented positive and negative sample regions, to obtain a trained region classification network model; and
    updating the region classification network model based on the trained region classification network model.
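The training loop of claim 8 (augment the positive/negative sample regions, then update the classification model's weights on them) can be sketched with horizontal flips as the augmentation and one logistic-regression gradient step as the weight update; both are simplifications chosen for illustration, not the augmentations or optimizer used by the application:

```python
import numpy as np

def augment(samples):
    """Data augmentation: extend the sample set with horizontal flips."""
    return samples + [np.fliplr(s) for s in samples]

def update_weights(w, x, y, lr=0.1):
    """One gradient step of logistic regression, standing in for updating
    the region classification network's model weights.
    x: (n, d) flattened sample features; y: (n,) labels (1 = positive)."""
    p = 1.0 / (1.0 + np.exp(-(x @ w)))   # predicted lesion probabilities
    grad = x.T @ (p - y) / len(y)        # gradient of the log loss
    return w - lr * grad
```

Iterating `update_weights` over batches of augmented positive and negative regions yields the trained model that then replaces the previous one.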
  9. A medical image region filtering apparatus, comprising:
    an acquisition module, configured to acquire a medical image of a target part of biological tissue;
    a segmentation module, configured to segment part tissue regions of multiple tissue types from the target-part medical image;
    a retention module, configured to select, based on a shooting position type of the target-part medical image, retained regions to be kept from the part tissue regions of the multiple tissue types;
    a positional-relationship acquisition module, configured to obtain a positional relationship between the retained regions and predicted lesion regions in the target-part medical image; and
    a filtering module, configured to filter the predicted lesion regions in the target-part medical image based on the positional relationship to obtain target lesion regions.
  10. A medical image processing device, comprising a medical image acquisition unit, a processor, and a memory, wherein:
    the medical image acquisition unit is configured to acquire a medical image of a target part of biological tissue;
    the memory is configured to store medical image data and a plurality of instructions; and
    the processor is configured to read the plurality of instructions stored in the memory to perform the following steps:
    acquiring a medical image of a target part of biological tissue;
    segmenting part tissue regions of multiple tissue types from the target-part medical image;
    selecting, based on a shooting position type of the target-part medical image, retained regions to be kept from the part tissue regions of the multiple tissue types;
    obtaining a positional relationship between the retained regions and predicted lesion regions in the target-part medical image; and
    filtering the predicted lesion regions in the target-part medical image based on the positional relationship to obtain target lesion regions.
  11. The medical image processing device according to claim 10, wherein, when performing the step of segmenting the part tissue regions of multiple tissue types from the target-part medical image, the processor specifically performs the following steps:
    segmenting the part tissue regions of the multiple tissue types from the target-part medical image based on a region segmentation network model, wherein the region segmentation network model is trained from a plurality of sample part images.
  12. The medical image processing device according to claim 11, wherein the region segmentation network model comprises a convolutional sub-network and a deconvolutional sub-network, and when performing the step of segmenting the part tissue regions of multiple tissue types from the target-part medical image based on the region segmentation network model, the processor specifically performs the following steps:
    performing convolution processing on the target-part medical image based on the convolutional sub-network to obtain image features;
    upsampling the features based on the deconvolutional sub-network to obtain a feature image restored to the original size; and
    classifying the restored feature image to obtain the part tissue regions of the multiple tissue types.
  13. The medical image processing device according to claim 10, wherein, when performing the step of selecting, based on the shooting position type of the target-part medical image, the retained regions to be kept from the part tissue regions of the multiple tissue types, the processor specifically performs the following steps:
    obtaining a mapping relationship set, the mapping relationship set comprising mapping relationships between preset shooting position types of target-part medical images and tissue types;
    obtaining the tissue type corresponding to the shooting position type according to the mapping relationships; and
    selecting the retained regions to be kept from the part tissue regions of the multiple tissue types.
  14. A storage medium storing instructions that, when executed by a processor, implement the steps of the medical image region filtering method according to any one of claims 1 to 8.
PCT/CN2020/074782 2019-02-14 2020-02-12 Medical image region filtering method and apparatus, and storage medium WO2020164493A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20755919.6A EP3926581A4 (en) 2019-02-14 2020-02-12 MEDICAL IMAGE AREA FILTERING METHOD AND APPARATUS, AND STORAGE MEDIA
US17/367,316 US11995821B2 (en) 2019-02-14 2021-07-02 Medical image region screening method and apparatus and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910115522.9 2019-02-14
CN201910115522.9A CN110009600A (zh) 2019-02-14 2019-02-14 Medical image region filtering method and apparatus, and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/367,316 Continuation US11995821B2 (en) 2019-02-14 2021-07-02 Medical image region screening method and apparatus and storage medium

Publications (1)

Publication Number Publication Date
WO2020164493A1 true WO2020164493A1 (zh) 2020-08-20

Family

ID=67165772

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/074782 WO2020164493A1 (zh) 2019-02-14 2020-02-12 Medical image region filtering method and apparatus, and storage medium

Country Status (4)

Country Link
US (1) US11995821B2 (zh)
EP (1) EP3926581A4 (zh)
CN (2) CN110009600A (zh)
WO (1) WO2020164493A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111986217A (zh) * 2020-09-03 2020-11-24 北京大学口腔医学院 Image processing method, apparatus, and device
CN112712508A (zh) * 2020-12-31 2021-04-27 杭州依图医疗技术有限公司 Method and apparatus for determining pneumothorax
CN113744801B (zh) * 2021-09-09 2023-05-26 首都医科大学附属北京天坛医院 Method, apparatus, and system for determining tumor category, electronic device, and storage medium
CN116386043A (zh) * 2023-03-27 2023-07-04 北京市神经外科研究所 Method and system for rapidly labeling glioma regions in neurological medical images
WO2023138190A1 (zh) * 2022-01-24 2023-07-27 上海商汤智能科技有限公司 Training method for a target detection model and corresponding detection method
TWI832671B (zh) * 2023-01-13 2024-02-11 國立中央大學 Method for automatically detecting breast cancer lesions from *** X-ray images using machine learning

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110009600A (zh) * 2019-02-14 2019-07-12 腾讯科技(深圳)有限公司 Medical image region filtering method and apparatus, and storage medium
CN110414539A (zh) * 2019-08-05 2019-11-05 腾讯科技(深圳)有限公司 Method and related apparatus for extracting feature description information
CN110533672B (zh) * 2019-08-22 2022-10-28 杭州德适生物科技有限公司 Chromosome sorting method based on band recognition
CN110689525B (zh) * 2019-09-09 2023-10-13 上海中医药大学附属龙华医院 Method and apparatus for recognizing *** based on a neural network
CN110991443A (zh) * 2019-10-29 2020-04-10 北京海益同展信息科技有限公司 Keypoint detection and image processing method, apparatus, electronic device, and storage medium
JP7382240B2 (ja) * 2020-01-30 2023-11-16 富士フイルムヘルスケア株式会社 Medical image processing apparatus and medical image processing method
CN111462067B (zh) * 2020-03-30 2021-06-29 推想医疗科技股份有限公司 Image segmentation method and apparatus
CN111863232B (zh) * 2020-08-06 2021-02-19 深圳市柯尼达巨茂医疗设备有限公司 Intelligent remote disease diagnosis system based on blockchain and medical imaging
CN112241954B (zh) * 2020-10-22 2024-03-15 上海海事大学 Full-field adaptive segmentation network configuration method based on differentiated mass classification
CN114049937A (zh) * 2021-11-22 2022-02-15 上海商汤智能科技有限公司 Image evaluation method, related apparatus, electronic device, and storage medium
US20230196081A1 (en) * 2021-12-21 2023-06-22 International Business Machines Corporation Federated learning for training machine learning models
CN114511887B (zh) * 2022-03-31 2022-07-05 北京字节跳动网络技术有限公司 Tissue image recognition method and apparatus, readable medium, and electronic device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107749061A (zh) * 2017-09-11 2018-03-02 天津大学 Brain tumor image segmentation method and apparatus based on an improved fully convolutional neural network
US20180253839A1 (en) * 2015-09-10 2018-09-06 Magentiq Eye Ltd. A system and method for detection of suspicious tissue regions in an endoscopic procedure
CN108615237A (zh) * 2018-05-08 2018-10-02 上海商汤智能科技有限公司 Lung image processing method and image processing device
CN108765409A (zh) * 2018-06-01 2018-11-06 电子科技大学 Screening method for candidate nodules based on CT images
CN109146899A (zh) * 2018-08-28 2019-01-04 众安信息技术服务有限公司 Method and apparatus for segmenting organs at risk in CT images
CN110009600A (zh) * 2019-02-14 2019-07-12 腾讯科技(深圳)有限公司 Medical image region filtering method and apparatus, and storage medium

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7736313B2 (en) 2004-11-22 2010-06-15 Carestream Health, Inc. Detecting and classifying lesions in ultrasound images
CN101401730A (zh) * 2008-11-14 2009-04-08 南京大学 Rapid detection method for suspicious breast-mass regions based on a hierarchical structure
US20100158332A1 (en) * 2008-12-22 2010-06-24 Dan Rico Method and system of automated detection of lesions in medical images
US8064677B2 (en) * 2009-11-25 2011-11-22 Fujifilm Corporation Systems and methods for measurement of objects of interest in medical images
KR20130012297A (ko) * 2011-07-25 2013-02-04 삼성전자주식회사 Lesion detection apparatus, lesion detection method, and lesion diagnosis apparatus
US9430829B2 (en) * 2014-01-30 2016-08-30 Case Western Reserve University Automatic detection of mitosis using handcrafted and convolutional neural network features
KR20150098119A (ko) 2014-02-19 2015-08-27 삼성전자주식회사 System and method for removing false-positive lesion candidates from medical images
CN104732213B (zh) * 2015-03-23 2018-04-20 中山大学 Computer-aided mass detection method based on breast magnetic resonance images
CN106339591B (zh) * 2016-08-25 2019-04-02 汤一平 Self-service breast cancer prevention health cloud service system based on a deep convolutional neural network
JP6955303B2 (ja) * 2017-04-12 2021-10-27 富士フイルム株式会社 Medical image processing apparatus, method, and program
GB201705911D0 (en) * 2017-04-12 2017-05-24 Kheiron Medical Tech Ltd Abstracts
CN107240102A (zh) * 2017-04-20 2017-10-10 合肥工业大学 Computer-aided early diagnosis method for malignant tumors based on deep learning algorithms
EP3432263B1 (en) * 2017-07-17 2020-09-16 Siemens Healthcare GmbH Semantic segmentation for cancer detection in digital breast tomosynthesis
CN107424152B (zh) * 2017-08-11 2020-12-18 联想(北京)有限公司 Organ lesion detection apparatus, method for training a neural network, and electronic device
CN107563123A (zh) 2017-09-27 2018-01-09 百度在线网络技术(北京)有限公司 Method and apparatus for annotating medical images
US11126914B2 (en) * 2017-10-11 2021-09-21 General Electric Company Image generation using machine learning
CN107977956B (zh) * 2017-11-17 2021-10-29 深圳蓝韵医学影像有限公司 Method, apparatus, and computer storage medium for detecting tissue regions in X-ray images
CN107886514B (zh) * 2017-11-22 2021-04-23 浙江中医药大学 Semantic segmentation method for masses in mammography images based on a deep residual network
CN108052977B (zh) * 2017-12-15 2021-09-14 福建师范大学 Deep-learning classification method for mammography images based on a lightweight neural network
CN108464840B (zh) * 2017-12-26 2021-10-19 安徽科大讯飞医疗信息技术有限公司 Automatic breast mass detection method and system
CN108109144A (zh) * 2017-12-29 2018-06-01 广州柏视医疗科技有限公司 Method for automatically detecting the *** position in mammography images
CN108109152A (zh) * 2018-01-03 2018-06-01 深圳北航新兴产业技术研究院 Medical image classification and segmentation method and apparatus
CN108510482B (zh) * 2018-03-22 2020-12-04 姚书忠 Apparatus for detecting the *** based on ***scope images
CN108550150B (zh) * 2018-04-17 2020-11-13 上海联影医疗科技有限公司 Method, device, and readable storage medium for obtaining breast density
CN108765387A (zh) * 2018-05-17 2018-11-06 杭州电子科技大学 Automatic mass detection method for breast DBT images based on Faster R-CNN
CN109190540B (zh) * 2018-06-06 2020-03-17 腾讯科技(深圳)有限公司 Biopsy region prediction method, image recognition method, apparatus, and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180253839A1 (en) * 2015-09-10 2018-09-06 Magentiq Eye Ltd. A system and method for detection of suspicious tissue regions in an endoscopic procedure
CN107749061A (zh) * 2017-09-11 2018-03-02 天津大学 Brain tumor image segmentation method and apparatus based on an improved fully convolutional neural network
CN108615237A (zh) * 2018-05-08 2018-10-02 上海商汤智能科技有限公司 Lung image processing method and image processing device
CN108765409A (zh) * 2018-06-01 2018-11-06 电子科技大学 Screening method for candidate nodules based on CT images
CN109146899A (zh) * 2018-08-28 2019-01-04 众安信息技术服务有限公司 Method and apparatus for segmenting organs at risk in CT images
CN110009600A (zh) * 2019-02-14 2019-07-12 腾讯科技(深圳)有限公司 Medical image region filtering method and apparatus, and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3926581A4 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111986217A (zh) * 2020-09-03 2020-11-24 北京大学口腔医学院 Image processing method, apparatus, and device
CN111986217B (zh) * 2020-09-03 2024-01-16 北京大学口腔医学院 Image processing method, apparatus, and device
CN112712508A (zh) * 2020-12-31 2021-04-27 杭州依图医疗技术有限公司 Method and apparatus for determining pneumothorax
CN112712508B (zh) * 2020-12-31 2024-05-14 杭州依图医疗技术有限公司 Method and apparatus for determining pneumothorax
CN113744801B (zh) * 2021-09-09 2023-05-26 首都医科大学附属北京天坛医院 Method, apparatus, and system for determining tumor category, electronic device, and storage medium
WO2023138190A1 (zh) * 2022-01-24 2023-07-27 上海商汤智能科技有限公司 Training method for a target detection model and corresponding detection method
TWI832671B (zh) * 2023-01-13 2024-02-11 國立中央大學 Method for automatically detecting breast cancer lesions from *** X-ray images using machine learning
CN116386043A (zh) * 2023-03-27 2023-07-04 北京市神经外科研究所 Method and system for rapidly labeling glioma regions in neurological medical images

Also Published As

Publication number Publication date
CN110490850A (zh) 2019-11-22
EP3926581A1 (en) 2021-12-22
US11995821B2 (en) 2024-05-28
US20210343021A1 (en) 2021-11-04
CN110009600A (zh) 2019-07-12
EP3926581A4 (en) 2022-04-27
CN110490850B (zh) 2021-01-08

Similar Documents

Publication Publication Date Title
WO2020164493A1 (zh) Medical image region filtering method and apparatus, and storage medium
CN110033456B (zh) Medical image processing method, apparatus, device, and system
Kallenberg et al. Unsupervised deep learning applied to breast density segmentation and mammographic risk scoring
US10402979B2 (en) Imaging segmentation using multi-scale machine learning approach
Roth et al. A new 2.5 D representation for lymph node detection using random sets of deep convolutional neural network observations
Zhu et al. Adversarial deep structural networks for mammographic mass segmentation
Cheng et al. Microcalcification detection using fuzzy logic and scale space approaches
Mayya et al. Automated microaneurysms detection for early diagnosis of diabetic retinopathy: A Comprehensive review
TW202014984A (zh) Image processing method, electronic device, and storage medium
CN110459319A (zh) Auxiliary diagnosis system for mammography images based on artificial intelligence
CN110276741B (zh) Nodule detection and model training method and apparatus, and electronic device
WO2021136368A1 (zh) Method and apparatus for automatically detecting the pectoralis major region in mammography images
CN111507932B (zh) High-specificity diabetic retinopathy feature detection method and storage device
WO2019184851A1 (zh) Image processing method and apparatus, and neural network model training method
WO2024016812A1 (zh) Microscopic image processing method and apparatus, computer device, and storage medium
Isfahani et al. Presentation of novel hybrid algorithm for detection and classification of breast cancer using growth region method and probabilistic neural network
Sambyal et al. Modified residual networks for severity stage classification of diabetic retinopathy
Mathew et al. Deep learning‐based automated mitosis detection in histopathology images for breast cancer grading
Shen et al. Multicontext multitask learning networks for mass detection in mammogram
Aguirre Nilsson et al. Classification of ulcer images using convolutional neural networks
Wang et al. Optic disc detection based on fully convolutional neural network and structured matrix decomposition
Sharma et al. Solving image processing critical problems using machine learning
Sharma et al. Advancement in diabetic retinopathy diagnosis techniques: automation and assistive tools
Wang System designs for diabetic foot ulcer image assessment
Oprea et al. A self organizing map approach to breast cancer detection

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20755919

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020755919

Country of ref document: EP

Effective date: 20210914