WO2020215985A1 - Medical image segmentation method and apparatus, electronic device, and storage medium - Google Patents
Medical image segmentation method and apparatus, electronic device, and storage medium
- Publication number
- WO2020215985A1 (PCT/CN2020/081660)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- slice
- feature information
- level feature
- pair
- segmentation
- Prior art date
Classifications
- G06T7/11—Region-based segmentation
- G06F18/253—Fusion techniques of extracted features
- G06N3/045—Combinations of networks
- G06T7/0012—Biomedical image inspection
- G06T7/174—Segmentation; Edge detection involving the use of two or more images
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
- G06V10/267—Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
- G06V10/42—Global feature extraction by analysis of the whole pattern
- G06V10/44—Local feature extraction by analysis of parts of the pattern
- G06V10/811—Fusion of classification results, the classifiers operating on different input data, e.g. multi-modal recognition
- G06V10/82—Image or video recognition or understanding using neural networks
- G06V20/64—Three-dimensional objects
- G06N3/08—Learning methods
- G06T2207/10081—Computed x-ray tomography [CT]
- G06T2207/10088—Magnetic resonance imaging [MRI]
- G06T2207/20016—Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
- G06T2207/20021—Dividing image into blocks, subimages or windows
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/20112—Image segmentation details
- G06T2207/20221—Image fusion; Image merging
- G06T2207/30056—Liver; Hepatic
- G06V2201/031—Recognition of patterns in medical or anatomical images of internal organs
Definitions
- This application relates to the field of artificial intelligence (AI) technology, and specifically to a medical image processing technology.
- In the related art, a two-dimensional (2D) convolutional neural network can be pre-trained to segment a liver image slice by slice: the three-dimensional (3D) liver image to be segmented, such as a computed tomography (CT) image, is sliced, the slices are separately imported into the trained 2D convolutional neural network for segmentation, and segmentation results such as the liver region are obtained.
- the embodiments of the present application provide a medical image segmentation method, device, and storage medium, which can improve the accuracy of segmentation.
- An embodiment of the application provides a medical image segmentation method, which is executed by an electronic device, and the method includes:
- acquiring a slice pair, the slice pair including two slices sampled from the medical image to be segmented;
- generating a segmentation result of the slice pair.
- an embodiment of the present application also provides a medical image segmentation device, including:
- An extraction unit configured to use different receptive fields to perform feature extraction on each slice in the slice pair to obtain high-level feature information and low-level feature information of each slice in the slice pair;
- a segmentation unit configured to, for each slice in the slice pair, segment the target object in the slice according to the low-level feature information and the high-level feature information of the slice to obtain an initial segmentation result of the slice;
- a fusion unit configured to fuse the low-level feature information and the high-level feature information of each slice in the slice pair;
- a determining unit configured to determine the association information between the slices in the slice pair according to the fused feature information; and
- a generating unit configured to generate a segmentation result of the slice pair based on the association information and the initial segmentation result of each slice in the slice pair.
- In addition, this application also provides an electronic device, including a memory and a processor; the memory stores an application program, and the processor is configured to run the application program in the memory to perform the operations in any of the medical image segmentation methods provided in the embodiments of this application.
- An embodiment of the present application also provides a storage medium that stores multiple instructions, where the instructions are suitable for being loaded by a processor to perform the steps in any of the medical image segmentation methods provided in the embodiments of the present application.
- The embodiments of the present application also provide a computer program product, including instructions, which, when run on a computer, cause the computer to perform the steps in any of the medical image segmentation methods provided in the embodiments of the present application.
- The embodiments of the application can use different receptive fields to perform feature extraction on each slice of the slice pair to obtain the high-level feature information and low-level feature information of each slice. On the one hand, for each slice, the target object in the slice is segmented according to the low-level feature information and high-level feature information of the slice to obtain the initial segmentation result of the slice. On the other hand, the low-level feature information and high-level feature information of each slice in the slice pair are fused, the association information between the slices is determined according to the fused feature information, and the segmentation result of the slice pair is then generated based on the association information and the initial segmentation results.
- In this way, the method provided in the embodiments of the present application segments two slices (a slice pair) simultaneously and uses the correlation between the slices to further adjust the segmentation result; therefore, the shape information of the target object (such as the liver) can be captured more accurately, and the segmentation accuracy is higher.
- FIG. 1 is a schematic diagram of a scene of a medical image segmentation method provided by an embodiment of the present application
- FIG. 2 is a flowchart of a medical image segmentation method provided by an embodiment of the present application.
- FIG. 3 is a schematic diagram of the receptive field in the medical image segmentation method provided by an embodiment of the present application.
- FIG. 4 is a schematic diagram of the structure of the residual network in the image segmentation model provided by an embodiment of the present application.
- FIG. 5 is a schematic structural diagram of an image segmentation model provided by an embodiment of the present application.
- FIG. 6 is a schematic diagram of signal components in a medical image segmentation method provided by an embodiment of the present application.
- FIG. 7 is a schematic structural diagram of a channel attention module in an image segmentation model provided by an embodiment of the present application.
- FIG. 8 is another schematic structural diagram of an image segmentation model provided by an embodiment of the present application.
- FIG. 9 is a schematic diagram of the association relationship in the medical image segmentation method provided by the embodiment of the present application.
- FIG. 10 is another schematic diagram of the association relationship in the medical image segmentation method provided by the embodiment of the present application.
- FIG. 11 is another flowchart of a medical image segmentation method provided by an embodiment of the present application.
- FIG. 12 is an exemplary diagram of overlapping squares in a medical image segmentation method provided by an embodiment of the present application.
- FIG. 13 is a schematic structural diagram of a medical image segmentation device provided by an embodiment of the present application.
- FIG. 14 is a schematic diagram of another structure of the medical image segmentation device provided by an embodiment of the present application.
- FIG. 15 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
- Artificial intelligence is a theory, method, technology and application system that uses digital computers or machines controlled by digital computers to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results.
- Artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can react in a way similar to human intelligence.
- Artificial intelligence is to study the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision-making.
- Artificial intelligence technology is a comprehensive discipline, covering a wide range of fields, including both hardware-level technology and software-level technology.
- Basic artificial intelligence technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, and mechatronics.
- Artificial intelligence software technology mainly includes computer vision technology, speech processing technology, natural language processing technology, and machine learning/deep learning.
- Computer vision is a science that studies how to make machines "see". More specifically, it means using cameras and computers in place of human eyes to perform machine vision tasks such as identifying, tracking, and measuring targets, and further performing graphics processing so that the result is more suitable for human eyes to observe or for transmission to an instrument for detection.
- Computer vision technology usually includes technologies such as image segmentation, image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, and simultaneous localization and mapping, and also includes common biometric recognition technologies such as facial recognition and fingerprint recognition.
- Machine learning is a multi-disciplinary field involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other subjects. It specializes in studying how computers simulate or realize human learning behaviors in order to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve performance.
- Machine learning is the core of artificial intelligence, the fundamental way to make computers intelligent, and its applications cover all fields of artificial intelligence.
- Machine learning and deep learning usually include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from instruction.
- The medical image segmentation method provided by the embodiments of the present application involves the computer vision technology and machine learning technology of artificial intelligence, which are specifically described through the following embodiments.
- the embodiments of the present application provide a medical image segmentation method, device, electronic equipment, and storage medium.
- the medical image segmentation device can be integrated in an electronic device, and the electronic device can be a server or a terminal or other equipment.
- The so-called image segmentation refers to the technology and process of dividing an image into a number of specific regions with unique properties and extracting objects of interest. In the embodiments of the present application, it mainly refers to segmenting a three-dimensional medical image to find the required target object.
- For example, the 3D medical image is divided into multiple single-frame slices (referred to as slices) along the z-axis, and the liver region or the like is then segmented from each slice; after the segmentation results of all slices of the 3D medical image are obtained, these segmentation results are combined along the z-axis to obtain the 3D segmentation result corresponding to the 3D medical image, that is, the 3D shape of the target object such as the liver region.
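The slice-and-recombine flow described above can be sketched in plain Python; the `segment_slice` thresholding stub and the toy volume are hypothetical stand-ins for the trained 2D segmentation network and a real CT volume:

```python
# Sketch of the slice-and-recombine pipeline. segment_slice is a hypothetical
# placeholder for the trained 2D segmentation network described in the text.

def segment_slice(slice_2d):
    """Placeholder 2D segmenter: marks values above a threshold as target (1)."""
    return [[1 if v > 0.5 else 0 for v in row] for row in slice_2d]

def segment_volume(volume):
    """Split a z-stacked 3D volume into 2D slices, segment each, and restack."""
    return [segment_slice(s) for s in volume]  # recombining along the z-axis

# Tiny 2x2x2 toy volume (z, y, x) with made-up intensities.
volume = [
    [[0.9, 0.1], [0.6, 0.2]],
    [[0.3, 0.8], [0.1, 0.7]],
]
mask = segment_volume(volume)  # 3D segmentation result, one mask per slice
```

A real pipeline would operate on full-resolution CT slices, but the split/segment/restack structure is the same.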
- the segmented target object can subsequently be analyzed by medical staff or other medical experts for further operations.
- For example, the electronic device can acquire a slice pair (the slice pair includes two slices sampled from the medical image to be segmented) and use different receptive fields to perform feature extraction on each slice in the slice pair to obtain the high-level feature information and low-level feature information of each slice; then, on the one hand, for each slice in the slice pair, the target object in the slice is segmented according to the low-level feature information and high-level
- feature information of the slice to obtain the initial segmentation result of the slice; on the other hand, the low-level feature information and high-level feature information of each slice in the slice pair are fused, and the association information between the slices in the slice pair is determined according to the fused feature information;
- finally, the segmentation result of the slice pair is generated based on the association information and the initial segmentation result of each slice in the slice pair.
- the medical image segmentation device may be specifically integrated in an electronic device.
- the electronic device may be a server or a terminal.
- The terminal may include a tablet computer, a notebook computer, a personal computer (PC), medical image acquisition equipment, or other electronic medical equipment, etc.
- A medical image segmentation method includes: acquiring a slice pair, the slice pair including two slices sampled from a medical image to be segmented; using different receptive fields to perform feature extraction on each slice in the slice pair to obtain the high-level feature information and low-level feature information of each slice in the slice pair; for each slice in the slice pair, segmenting the target object in the slice according to the low-level feature information and high-level feature information of the slice to obtain the initial segmentation result of the slice; fusing the low-level feature information and high-level feature information of each slice in the slice pair, and determining the association information between the slices in the slice pair based on the fused feature information; and generating the segmentation result of the slice pair based on the association information and the initial segmentation results of each slice in the slice pair.
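As a rough illustration, the steps above can be strung together with stub functions in plain Python. Every function name and the arithmetic inside it is an illustrative placeholder for a trained network component, not an operation taken from the patent:

```python
# High-level sketch of the claimed flow for one slice pair. All functions are
# hypothetical stand-ins for the trained encoder, decoder, and fusion network.

def extract_features(slc):
    """Stand-in feature extractor: fake low/high-level features per element."""
    return {"low": [v * 0.5 for v in slc], "high": [v * 2.0 for v in slc]}

def initial_segmentation(feats):
    """Stand-in per-slice decoder: threshold the 'high-level' features."""
    return [1 if h > 1.0 else 0 for h in feats["high"]]

def fuse(f1, f2):
    """Element-by-element fusion of the two slices' feature information."""
    return {k: [a + b for a, b in zip(f1[k], f2[k])] for k in ("low", "high")}

def association_info(fused):
    """Stand-in for association information derived from the fused features."""
    return [l + h for l, h in zip(fused["low"], fused["high"])]

def segment_slice_pair(slice1, slice2):
    f1, f2 = extract_features(slice1), extract_features(slice2)
    seg1, seg2 = initial_segmentation(f1), initial_segmentation(f2)
    assoc = association_info(fuse(f1, f2))
    # The final result combines the association info with both initial results.
    return list(zip(assoc, seg1, seg2))

result = segment_slice_pair([0.2, 0.8], [0.3, 0.9])
```

The point of the sketch is the data flow: two slices are processed in parallel, their features are fused into association information, and that information is combined with the per-slice initial results.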
- the specific process of the medical image segmentation method can be as follows:
- a medical image to be segmented can be acquired, and two slices can be sampled from the medical image to be segmented.
- the set of these two slices is called a slice pair.
- The medical image to be segmented can be provided to the medical image segmentation device after a medical image acquisition device performs image acquisition on biological tissue (such as the heart or liver).
- The medical image acquisition equipment may include electronic equipment such as a magnetic resonance imaging (MRI) device, a computed tomography (CT) device, a colposcope, or an endoscope.
- The receptive field determines the size of the area of the input layer that corresponds to an element in the output of a certain layer. That is, the receptive field is the size of the region on the input image onto which an element of the output of a certain layer of the convolutional neural network (i.e., the feature map, also called feature information) is mapped; for example, see FIG. 3.
- The receptive field of an element of the output feature map of the first convolutional layer (such as C1) is equal to the size of the convolution kernel (filter size), while the receptive field of a higher convolutional layer (such as C4)
- is related to the convolution kernel sizes and strides of all the layers before it. Therefore, different levels of information can be captured with different receptive fields, achieving the purpose of extracting feature information at different scales; that is, after feature extraction is performed on a slice using different receptive fields, high-level feature information at multiple scales and low-level feature information at multiple scales can be obtained for the slice.
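The growth of the receptive field with depth follows a standard recurrence; a minimal sketch, assuming plain stacked convolutional layers described only by kernel size and stride (dilation and padding are ignored here):

```python
def receptive_fields(layers):
    """Receptive field of each layer's output elements on the input image.

    layers: list of (kernel_size, stride) pairs, in input-to-output order.
    Uses the standard recurrence: rf += (k - 1) * jump; jump *= stride,
    where 'jump' is the distance on the input between adjacent outputs.
    """
    rf, jump = 1, 1
    fields = []
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
        fields.append(rf)
    return fields

# Four 3x3 conv layers, the second with stride 2: deeper layers see more.
print(receptive_fields([(3, 1), (3, 2), (3, 1), (3, 1)]))  # [3, 5, 9, 13]
```

This is why the first layer's receptive field equals the kernel size, while deeper layers depend on all preceding kernels and strides.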
- For example, obtaining the high-level feature information and low-level feature information of each slice in the slice pair may include the following:
- the slice pair includes the first slice and the second slice
- For example, the residual network includes a first residual network branch and a second residual network branch that are parallel and identical in structure.
- The first residual network branch in the residual network can be used to perform feature extraction on the first slice to obtain the high-level feature information and the low-level feature information of different scales corresponding to the first slice; and the
- second residual network branch of the residual network performs feature extraction on the second slice to obtain the high-level feature information and the low-level feature information of different scales corresponding to the second slice.
- the high-level feature information refers to the feature map finally output by the residual network.
- The so-called "high-level features" generally contain information related to categories and high-level abstractions.
- The low-level feature information refers to the feature maps obtained by the residual network during the feature extraction process on the medical image to be segmented.
- The so-called "low-level features" generally contain image details such as edges and textures.
- Specifically, the high-level feature information refers to the feature map output by the last residual module,
- and the low-level feature information refers to the feature maps output by the residual modules other than the first residual module and the last residual module.
- For example, each residual network branch includes residual module 1 (Block1), residual module 2 (Block2), residual module 3 (Block3), residual module 4 (Block4), and residual module 5 (Block5); the feature map output by residual module 5 is the high-level feature information, and the feature maps output by residual module 2, residual module 3, and residual module 4 are the low-level feature information.
- the network structure of the first residual network branch and the second residual network branch can be specifically determined according to actual application requirements.
- For example, the residual network may be ResNet-18.
- The parameters of the first residual network branch and the parameters of the second residual network branch can be shared, and the specific parameter settings can be determined according to actual application requirements.
- spatial pyramid pooling (SPP) processing may also be performed on the obtained high-level feature information.
- For example, a spatial pyramid pooling module, such as an atrous spatial pyramid pooling (ASPP) module, may be added after the first residual network branch and after the second residual network branch, respectively.
- Because ASPP uses atrous convolution, it can enlarge the receptive field of the features without sacrificing feature spatial resolution, and can therefore naturally extract high-level feature information at more scales.
- The parameters of the ASPP connected to the first residual network branch and the parameters of the ASPP connected to the second residual network branch may not be shared; the specific parameters can be determined according to actual application requirements and will not be repeated here.
- The residual network part can be regarded as the encoding module part of the trained segmentation model.
- For each slice in the slice pair, the target object in the slice can be segmented by the segmentation network in the trained segmentation model according to the low-level feature information and high-level feature information of the slice to obtain the initial segmentation result of the slice.
- the details can be as follows:
- The low-level feature information and high-level feature information of the slice are respectively convolved (Conv); the convolved high-level feature information is upsampled (Upsample) to the same size as the convolved low-level feature information and then concatenated (Concat) with the convolved low-level feature information to obtain the concatenated feature information; and the pixels belonging to the target object in the slice are selected according to the concatenated feature information to obtain the initial segmentation result of the slice.
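The upsample-then-concatenate step can be illustrated on plain nested lists. Nearest-neighbour upsampling is used here as an assumed stand-in for whatever interpolation the model applies, and the toy feature maps are illustrative:

```python
def upsample_nearest(fmap, factor):
    """Nearest-neighbour upsampling of one 2D feature map by an integer factor."""
    out = []
    for row in fmap:
        wide = [v for v in row for _ in range(factor)]   # repeat along x
        out.extend([list(wide) for _ in range(factor)])  # repeat along y
    return out

def concat_channels(a, b):
    """Channel-wise concatenation (Concat) of two lists of same-sized 2D maps."""
    return a + b

# One 4x4 "low-level" channel and one 2x2 "high-level" channel (toy values).
low = [[[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]]
high = [[[5, 6], [7, 8]]]

high_up = [upsample_nearest(high[0], 2)]  # bring high-level map to 4x4
fused = concat_channels(low, high_up)     # 2 channels of 4x4
```

The key point is that the high-level map must be brought to the low-level map's spatial size before the channel-wise concatenation, exactly as the passage describes.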
- The segmentation network can be regarded as the decoding module part of the trained segmentation model.
- For example, the segmentation network includes a first segmentation network branch (decoding module A) and a second segmentation network branch (decoding module B) that are parallel and have the same structure.
- the details can be as follows:
- the high-level feature information after the product process is up-sampled to the same size as the low-level feature information after the convolution process, it is connected with the low-level feature information after the convolution process to obtain the connected feature information of the first slice.
- the convolution kernel applied to the connected feature information can be "3×3".
- the fusion network in the segmentation model after training can be used to fuse the low-level feature information and high-level feature information of each slice in the slice pair.
- the step of "fusing, through the fusion network in the trained segmentation model, the low-level feature information and high-level feature information of each slice in the slice pair" can include:
- the low-level feature information of each slice in the slice pair is added element by element to obtain the fused low-level feature information.
- for example, taking a slice pair that includes the first slice and the second slice, the low-level feature information of the first slice and the low-level feature information of the second slice can be added element by element to obtain the fused low-level feature information.
- similarly, the high-level feature information of the first slice and the high-level feature information of the second slice can be added element by element to obtain the fused high-level feature information.
- the fused low-level feature information and the fused high-level feature information are fused to obtain the fused feature information.
- any one of the following methods can be used for fusion:
- the fused low-level feature information and the fused high-level feature information are added element by element to obtain the fused feature information.
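The element-by-element addition used in this first fusion method can be sketched as follows (illustrative shapes only; the trained model operates on multi-channel tensors):

```python
# Sketch of "element-by-element addition" fusion of two feature maps.

def fuse_add(a, b):
    """Element-wise sum of two same-sized 2-D feature maps."""
    return [[a[i][j] + b[i][j] for j in range(len(a[0]))]
            for i in range(len(a))]

low1 = [[1, 2], [3, 4]]            # low-level features of the first slice
low2 = [[5, 6], [7, 8]]            # low-level features of the second slice
fused_low = fuse_add(low1, low2)   # -> [[6, 8], [10, 12]]
```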
- the second method can also be used to fuse the fused low-level feature information and the fused high-level feature information, as follows:
- a weight is assigned to the fused low-level feature information according to the fused low-level feature information and the fused high-level feature information to obtain the weighted feature information; the weighted feature information and the fused low-level feature information are multiplied element by element to obtain the processed feature information, and the processed feature information and the fused high-level feature information are added element by element to obtain the fused feature information, see Figure 5.
- the channel attention module refers to the network module that adopts the attention mechanism of the channel domain.
- each image is initially represented by three channels (R, G, B); after passing through different convolution kernels, each channel generates new signals. For example, convolving each channel of an image feature with 64 kernels generates a matrix of 64 new channels (H, W, 64), where H and W represent the height and width of the image feature, and so on.
- the features of each channel actually represent the components of the image on different convolution kernels, similar to a time-frequency transformation; convolution with a kernel is analogous to a Fourier transform of the signal, so the information of one feature channel can be decomposed into signal components on the 64 convolution kernels, for example, see Figure 6.
- each signal can be decomposed into signal components on the 64 convolution kernels (equivalent to the 64 generated channels); however, these 64 new channels do not contribute equally to the key information. A weight can therefore be assigned to each channel to represent its correlation with the key information (the information that plays a key role in the segmentation task): the greater the weight, the higher the correlation, and the more attention the channel deserves. For this reason, this mechanism is called the "channel-domain attention mechanism".
- the structure of the channel attention module can be specifically determined according to the needs of actual applications.
- for example, the channel attention module in the fusion network of the trained segmentation model can assign a weight to the fused low-level feature information according to the fused low-level feature information and the fused high-level feature information to obtain the weighted feature information.
- the weighted feature information and the fused low-level feature information are multiplied element by element (Mul) to obtain the processed feature information, and the processed feature information and the fused high-level feature information are added element by element to obtain the fused feature information.
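A minimal sketch of this weighted fusion path follows. The squeeze-and-sigmoid weighting below is a hypothetical stand-in for the learned channel attention module, not the patent's exact computation; it only shows the Mul-then-Add dataflow described above:

```python
import math

def channel_weight(low_ch, high_ch):
    """Derive a scalar attention weight for one channel (squeeze + sigmoid).
    This heuristic replaces the learned weighting and is an assumption."""
    s = sum(low_ch) + sum(high_ch)               # global "squeeze" statistic
    return 1.0 / (1.0 + math.exp(-s / len(low_ch)))

def attention_fuse(low, high):
    """low, high: dicts mapping channel name -> flat list of values."""
    fused = {}
    for ch in low:
        w = channel_weight(low[ch], high[ch])
        weighted = [w * v for v in low[ch]]                      # Mul
        fused[ch] = [a + b for a, b in zip(weighted, high[ch])]  # Add
    return fused

out = attention_fuse({"c0": [1.0, -1.0]}, {"c0": [0.5, 0.5]})
```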
- steps 103 and 104 can be executed in no particular order, and will not be repeated here.
- the fusion network in the segmentation model after training can be used to determine the associated information between the slices in the slice pair according to the fusion feature information.
- the features belonging to the target object can be screened out from the fused feature information, and the association information between the slices in the slice pair can be determined according to the screened-out features (that is, the features belonging to the target object).
- the target object refers to the object that needs to be identified in the slice, such as "liver” in liver image segmentation, “heart” in heart image segmentation, and so on.
- taking the liver as the target object, for example, the area covered by the screened-out liver features can be determined to be the foreground area of the slice, and the remaining areas of the slice are its background area.
- similarly, taking the heart as the target object, the area covered by the screened-out heart features is the foreground area of the slice, and the remaining areas are its background area.
- for example, in the fused feature information, the pixels that belong to the foreground area of only one slice in the slice pair can be combined to obtain the pixel set of the difference area, referred to as the difference pixels; and the pixels that belong to the foreground areas of both slices can be combined to obtain the pixel set of the intersection area, referred to as the intersection pixels.
- the fused feature information can be regarded as the feature information of the superimposed slice obtained by "superimposing all the slices in the slice pair". In this superimposed slice, the pixels where the foreground areas of the two slices do not overlap yield the difference pixels, and the pixels where the foreground areas overlap yield the intersection pixels.
- pixels that belong to the background areas of both slices in the slice pair can be used as the background area of the slice pair; in other words, the intersection of the background areas of all slices serves as the background area of the slice pair. Pixel type identification is then performed on the background area, difference pixels, and intersection pixels of the slice pair to obtain the association information between the slices.
- different pixel values can be used to identify the pixel types of these areas. For example, the pixel value of the background area of the slice pair can be set to "0", the value of the difference pixels to "1", and the value of the intersection pixels to "2"; alternatively, the background area can be set to "0", the difference pixels to "2", and the intersection pixels to "1", and so on.
- different colors can also be used to identify the pixel types of these areas. For example, the background area can be set to "black", the difference pixels to "red", and the intersection pixels to "green"; or the background area to "black", the difference pixels to "green", and the intersection pixels to "red", and so on.
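The pixel-type identification described above can be sketched as follows, using the 0/1/2 coding from one of the schemes mentioned (the binary masks are illustrative):

```python
# Sketch of building the association map from two foreground masks:
# pixels in neither foreground are background (0), pixels in exactly
# one slice's foreground are difference pixels (1), and pixels in both
# foregrounds are intersection pixels (2).

def association_map(fg1, fg2):
    """fg1, fg2: same-sized binary masks of the two slices' foregrounds."""
    h, w = len(fg1), len(fg1[0])
    out = [[0] * w for _ in range(h)]   # default: background area
    for i in range(h):
        for j in range(w):
            if fg1[i][j] and fg2[i][j]:
                out[i][j] = 2           # intersection pixel
            elif fg1[i][j] or fg2[i][j]:
                out[i][j] = 1           # difference pixel
    return out

m = association_map([[1, 1, 0]], [[0, 1, 0]])   # -> [[1, 2, 0]]
```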
- the step “generate the slice based on the correlation information between the slices in the slice pair and the initial segmentation results of each slice in the slice pair.
- the “right segmentation result” can include:
- since the association information between the slices here refers to the association information between the first slice and the second slice, it reflects the difference pixels and intersection pixels between the first slice and the second slice. Therefore, according to this association information and the initial segmentation result of the first slice, the segmentation result of the second slice can be predicted.
- for example, if the difference pixels of the first slice and the second slice form area A, the intersection pixels form area B, and the initial segmentation result of the first slice is area C, then the predicted segmentation result of the second slice is "((A∪B)∖C)∪B", where "∪" refers to union and "∖" refers to set difference.
- predicting the segmentation result of the first slice is similar: if the difference pixels of the first slice and the second slice form area A, the intersection pixels form area B, and the initial segmentation result of the second slice is area D, then the predicted segmentation result of the first slice is "((A∪B)∖D)∪B".
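The set relation used in these predictions can be sketched with pixel-coordinate sets (`predict_other` is a hypothetical helper name; the regions A, B, C follow the definitions above):

```python
# Sketch of predicting one slice's segmentation from the other slice's
# initial result via the relation ((A ∪ B) \ C) ∪ B, where A is the set
# of difference pixels, B the intersection pixels, and C the initial
# segmentation of the first slice. Pixels are (row, col) coordinates.

def predict_other(diff_a, inter_b, initial_c):
    """Return ((A ∪ B) - C) ∪ B as a set of pixel coordinates."""
    return ((diff_a | inter_b) - initial_c) | inter_b

A = {(0, 0), (0, 2)}    # difference pixels between the two slices
B = {(0, 1)}            # intersection pixels
C = {(0, 0), (0, 1)}    # initial segmentation of the first slice
pred_second = predict_other(A, B, C)   # -> {(0, 1), (0, 2)}
```

Intuitively, the union A∪B covers the foreground of both slices; removing C leaves the part belonging only to the other slice, and the intersection B is then restored since it belongs to both.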
- the predicted segmentation result of the first slice and the initial segmentation result of the first slice are averaged to obtain an adjusted segmentation result of the first slice.
- that is, the pixel values at the same positions in the predicted segmentation result and the initial segmentation result of the first slice are averaged, and each average is used as the pixel value at that position in the adjusted segmentation result of the first slice.
- the predicted segmentation result of the second slice and the initial segmentation result of the second slice are averaged to obtain an adjusted segmentation result of the second slice.
- that is, the pixel values at the same positions in the predicted segmentation result and the initial segmentation result of the second slice are averaged, and each average is used as the pixel value at that position in the adjusted segmentation result of the second slice.
- the adjusted segmentation result of the first slice and the adjusted segmentation result of the second slice are averaged, and the averaged result is binarized to obtain the segmentation result of the slice pair.
- that is, the pixel values at the same positions in the adjusted segmentation result of the first slice and the adjusted segmentation result of the second slice are averaged, and each average is used as the pixel value at that position in the segmentation result of the slice pair.
- binarization refers to setting the gray value of the pixels on the image to 0 or 255, which means that the entire image presents an obvious visual effect of only black and white.
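The averaging and binarization steps can be sketched as follows (the 0.5 threshold is an assumed choice for probability maps; the text above only fixes the output values 0 and 255):

```python
# Sketch of the final averaging + binarization step: the two adjusted
# results are averaged pixel-wise and thresholded into a black-and-white
# mask (pixel values 0 or 255, as described above).

def average_maps(a, b):
    """Pixel-wise average of two same-sized maps."""
    return [[(a[i][j] + b[i][j]) / 2 for j in range(len(a[0]))]
            for i in range(len(a))]

def binarize(prob_map, threshold=0.5):
    """Set each pixel to 255 if it reaches the threshold, else 0."""
    return [[255 if v >= threshold else 0 for v in row] for row in prob_map]

adj1 = [[0.9, 0.2]]    # adjusted segmentation result of the first slice
adj2 = [[0.7, 0.2]]    # adjusted segmentation result of the second slice
seg = binarize(average_maps(adj1, adj2))   # -> [[255, 0]]
```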
- the post-training segmentation model in the embodiment of the present application may include a residual network, a segmentation network, and a fusion network.
- the residual network may include a first residual network branch and a second residual network branch in parallel
- the segmentation network may include a first segmentation network branch and a second segmentation network branch in parallel.
- the residual network part can be regarded as the encoder part of the trained image segmentation model, called the encoding module, which is used for feature information extraction.
- the segmentation network can be regarded as the decoder part of the trained segmentation model, called the decoding module, which is used for classification and segmentation according to the extracted feature information.
- the trained segmentation model may be obtained by training on multiple slice pair samples labeled with true values. Specifically, it may be pre-trained and provided by operation and maintenance personnel, or it may be obtained through training by the image segmentation device itself. That is, before the step of "using the residual network in the trained segmentation model to perform feature extraction on each slice in the slice pair to obtain high-level feature information and low-level feature information of each slice", the medical image segmentation method may further include:
- for example, multiple medical images can be collected as the original data set, for example, obtained from a database or the Internet; the medical images in the original data set are then preprocessed to obtain images that meet the input standard of the preset segmentation model, yielding medical image samples. Each medical image sample is cut into slices (referred to as slice samples in the embodiments of this application), each slice sample is labeled with the target object (referred to as true-value labeling), and the slice samples are formed into sets in pairs to obtain multiple slice pair samples labeled with true values.
- preprocessing may include operations such as deduplication, cropping, rotation, and/or flipping. For example, if the input size of the preset segmentation network is "128*128*32 (width*height*depth)", the images in the original data set can be cropped to a size of "128*128*32"; of course, other preprocessing operations can also be performed on these images.
- at this time, the first residual network branch in the residual network can be used to perform feature extraction on the first slice sample to obtain high-level feature information of different scales and low-level feature information of different scales corresponding to the first slice sample; and the second residual network branch in the residual network can be used to perform feature extraction on the second slice sample to obtain high-level feature information of different scales and low-level feature information of different scales corresponding to the second slice sample.
- the target object in each slice sample is segmented through the segmentation network in the preset segmentation model to obtain the predicted segmentation value (that is, the predicted probability map) of each slice sample.
- the following operations can be performed at this time:
- A. Perform convolution processing on the low-level feature information and the high-level feature information of the first slice sample through the first segmentation network branch; after the convolved high-level feature information is up-sampled to the same size as the convolved low-level feature information, it is connected with the convolved low-level feature information to obtain the connected feature information of the first slice sample. For example, after the connected feature information is convolved with a "3×3" kernel and up-sampled to the size of the first slice sample, the predicted segmentation value of the first slice sample can be obtained.
- B. Perform convolution processing on the low-level feature information and the high-level feature information of the second slice sample through the second segmentation network branch, for example, convolution with a "1×1" kernel; after the convolved high-level feature information is up-sampled to the same size as the convolved low-level feature information, it is connected with the convolved low-level feature information to obtain the connected feature information of the second slice sample. Then, the pixels belonging to the target object in the second slice sample are filtered according to the connected feature information to obtain the predicted segmentation value of the second slice sample. For example, after the connected feature information is convolved with a "3×3" kernel and up-sampled to the size of the second slice sample, the predicted segmentation value of the second slice sample can be obtained.
- the low-level feature information and high-level feature information of each slice sample in the slice pair sample are merged, and the association information between the slice samples in the slice pair sample is predicted based on the fused feature information.
- for example, the low-level feature information of each slice sample in the slice pair sample can be added element by element to obtain the fused low-level feature information, and the high-level feature information of each slice sample can be added element by element to obtain the fused high-level feature information; then, through the fusion network in the preset segmentation model, the fused low-level feature information and the fused high-level feature information are fused to obtain the fused feature information, from which the features belonging to the target object are screened out, and the association information between the slice samples in the slice pair sample is determined according to the screened-out features.
- the method of fusing the low-level feature information after the fusion and the high-level feature information after the fusion can be found in the previous embodiment.
- the method of calculating the association information between the slice samples in a slice pair sample is the same as that of calculating the association information between the slices in a slice pair. For details, please refer to the previous embodiment, which is not repeated here.
- a loss function such as a Dice loss function may be specifically used to converge the preset segmentation model according to the true value, predicted segmentation value, and predicted associated information, to obtain the segmentation model after training.
- the loss function can be specifically set according to actual application requirements. For example, taking a slice pair sample including the first slice sample x i and the second slice sample x j as an example, if the true value labeled for the first slice sample x i is y i , and the true value labeled for the second slice sample x j is y j , then the Dice loss function of the first segmentation network branch can be as follows:
- the Dice loss function of the second division network branch can be as follows:
- p i and p j are the predicted segmentation values of the first segmentation network branch and the second segmentation network branch, respectively
- s and t are the position indexes of the rows and columns in the slice, respectively
- y i (s,t) represents the true value labeled for the pixel with position index (s, t) in the first slice sample, and p i (s,t) represents the predicted segmentation value of that pixel.
- y j (s,t) represents the true value labeled for the pixel with position index (s, t) in the second slice sample, and p j (s,t) represents the predicted segmentation value of that pixel.
- the Dice loss function of the fusion network can be calculated as:
- y ij is the true value of the association relationship between the first slice sample x i and the second slice sample x j
- the true value of the association relationship can be calculated from the true value labeled for the first slice sample x i and the true value labeled for the second slice sample x j . For example, the background area of the image obtained by superimposing the first slice sample x i and the second slice sample x j can be determined, along with the difference and intersection between the true value labeled for the first slice sample x i and the true value labeled for the second slice sample x j ; the background area, difference, and intersection obtained here are the true values of the "background area, difference pixels, and intersection pixels" of the superimposed first slice sample x i and second slice sample x j , that is, the true value of the association relationship mentioned in the embodiments of this application.
- p ij is the association information between the first slice sample x i and the second slice sample x j output by the fusion network
- s and t are the position indexes of the row and column in the slice
- l is the category index over the above three relationship types (that is, the background area, intersection pixels, and difference pixels).
- the overall loss function of the image segmentation model can be calculated as:
- λ1, λ2, and λ3 are manually set hyperparameters used to balance the contribution of each part of the loss to the overall loss.
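A sketch of a Dice loss and the weighted overall loss follows. The smoothing term `eps` and the exact Dice variant are assumptions (the patent's formulas are given as figures and may differ in detail); the weighted sum mirrors the λ1, λ2, λ3 combination described above:

```python
# Hedged sketch of a Dice loss over flattened probability maps and the
# weighted overall loss combining the two branch losses with the
# fusion-network loss.

def dice_loss(y_true, y_pred, eps=1e-6):
    """1 - Dice coefficient; y_true is binary, y_pred in [0, 1]."""
    inter = sum(t * p for t, p in zip(y_true, y_pred))
    total = sum(y_true) + sum(y_pred)
    return 1.0 - (2.0 * inter + eps) / (total + eps)

def overall_loss(l_i, l_j, l_fuse, lam1=1.0, lam2=1.0, lam3=1.0):
    """Weighted sum of branch losses and the fusion-network loss."""
    return lam1 * l_i + lam2 * l_j + lam3 * l_fuse

perfect = dice_loss([1, 0, 1], [1, 0, 1])   # a perfect prediction gives ~0
```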
- different receptive fields can be used to separately perform feature extraction on the slices in the slice pair to obtain the high-level feature information and low-level feature information of each slice.
- the feature information is then fused, and the association information between the slices in the slice pair is determined according to the fused feature information.
- the segmentation result of the slice pair is then generated based on the association information between the slices in the slice pair and the initial segmentation result of each slice in the slice pair, with the segmentation results further adjusted accordingly, ensuring that the shape information of the target object (such as the liver) is captured more accurately and yielding higher segmentation accuracy.
- the following description takes, as an example, the case where the image segmentation device is integrated in an electronic device and the target object is the liver.
- the image segmentation model can include a residual network, a segmentation network, and a fusion network.
- the residual network can include two parallel residual network branches with the same structure: the first residual network branch and the second residual network branch.
- an ASPP (atrous spatial pyramid pooling) module
- the residual network serves as the encoding module of the image segmentation model and is used to extract feature information from the input image, such as the slices in a slice pair or the slice samples in a slice pair sample.
- the segmentation network can include two parallel segmentation network branches with the same structure: a first segmentation network branch and a second segmentation network branch.
- the segmentation network serves as the decoding module of the image segmentation model and is used to segment the target object, such as the liver, based on the feature information extracted by the encoding module.
- the fusion network is used to predict the relationship between each slice in the slice pair (or each slice sample in the slice pair sample) based on the feature information extracted by the encoding module. Based on the structure of the image segmentation model, its training method will be described in detail below.
- for example, the electronic device can collect multiple 3D medical images containing the liver structure, such as from a database or a network, and then preprocess these 3D medical images, for example by deduplication, cropping, rotation, and/or flipping, to obtain images that meet the input criteria of the preset segmentation model as medical image samples. Then, each medical image sample is sampled along the z-axis direction (of the 3D coordinate axes {x, y, z}) at a certain interval to obtain multiple slice samples. After that, the liver area and other information are labeled in each slice sample, and the slice samples are formed into sets in pairs to obtain multiple slice pair samples labeled with true values.
- for example, slice sample 1 and slice sample 2 form slice pair sample 1, then slice sample 1 and slice sample 3 form slice pair sample 2, and so on.
- in this way, slice samples are reused to obtain more training data (i.e., data augmentation), so that training of the image segmentation model can be completed even with a small amount of manually annotated data.
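The pairing scheme can be sketched as follows (`max_gap` and the exact pairing rule are hypothetical; the embodiment only requires that slice samples be combined in pairs, so pairing each slice with several neighbors multiplies the training data):

```python
# Sketch of forming slice pair samples from slice samples along z.
# Pairing each slice with its next `max_gap` neighbors turns N slices
# into up to N * max_gap training pairs (data augmentation).

def make_slice_pairs(slices, max_gap=2):
    """Pair each slice with the next `max_gap` slices along the z-axis."""
    pairs = []
    for i in range(len(slices)):
        for j in range(i + 1, min(i + 1 + max_gap, len(slices))):
            pairs.append((slices[i], slices[j]))
    return pairs

pairs = make_slice_pairs(["s1", "s2", "s3", "s4"], max_gap=2)
# e.g. ("s1", "s2"), ("s1", "s3"), ("s2", "s3"), ...
```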
- the electronic device can input the slice pair samples into a preset image segmentation model, and perform feature extraction on the slice samples through the residual network.
- for example, feature extraction is performed on the first slice sample in the slice pair sample through the first residual network branch to obtain high-level feature information of different scales and low-level feature information of different scales corresponding to the first slice sample; and feature extraction is performed on the second slice sample in the slice pair sample through the second residual network branch to obtain high-level feature information of different scales and low-level feature information of different scales corresponding to the second slice sample.
- ASPP can then be used to further process the high-level feature information corresponding to the first slice sample and the high-level feature information corresponding to the second slice sample to obtain more high-level feature information of different scales, see FIG. 8.
- then, the electronic device can use this high-level and low-level feature information with the first segmentation network branch and the second segmentation network branch to segment the first slice sample and the second slice sample, obtaining the predicted segmentation value of the first slice sample and the predicted segmentation value of the second slice sample.
- the electronic device can merge the low-level feature information and high-level feature information of the first slice sample and the second slice sample through the fusion network, and predict the association information between the first slice sample and the second slice sample according to the fused feature information. For example, this can be done as follows:
- the low-level feature information of the first slice sample and the low-level feature information of the second slice sample can be added element by element to obtain the fused low-level feature information, and the high-level feature information of the first slice sample and the high-level feature information of the second slice sample can be added element by element to obtain the fused high-level feature information.
- the channel attention module is then used to assign a weight to the fused low-level feature information according to the fused low-level feature information and the fused high-level feature information to obtain the weighted feature information; the weighted feature information and the fused low-level feature information are multiplied element by element (Mul) to obtain the processed feature information, and the processed feature information and the fused high-level feature information are added element by element to obtain the fused feature information.
- then, the features belonging to the liver can be screened out from the fused feature information, and the association information between the first slice sample and the second slice sample can be predicted based on the screened-out liver features.
- finally, the true values labeled in the slice pair sample, the predicted segmentation value of each slice sample in the slice pair sample, and the predicted association information can be used to converge the preset image segmentation model to obtain the trained image segmentation model.
- the true values labeled in the slice pair sample include the liver area labeled in the first slice sample and the liver area labeled in the second slice sample. From these labeled liver areas, the true association relationship between the first slice sample and the second slice sample can be further determined, including the background area of the slice pair sample composed of the first slice sample and the second slice sample, the true difference pixels between the first slice sample and the second slice sample, and the true intersection pixels between the first slice sample and the second slice sample.
- the true background area of the slice pair sample composed of the first slice sample and the second slice sample can be obtained by superimposing the first slice sample and the second slice sample and taking the intersection of the background area of the first slice sample and the background area of the second slice sample.
- the true difference pixels and the true intersection pixels between the first slice sample and the second slice sample can be obtained by calculating the difference and the intersection, respectively, between the liver area labeled in the first slice sample and the liver area labeled in the second slice sample.
- the liver area labeled in the first slice sample and the liver area labeled in the second slice sample may be superimposed, and different colors or pixel values may be used to identify the different types of areas in the superimposed image. For example, referring to Figures 9 and 10, the background area of the slice pair sample can be marked black, the difference pixels red (white in Figures 9 and 10), and the intersection pixels green (gray in Figures 9 and 10); or the pixel value of the background area of the slice pair sample can be set to 0, the difference pixels to 1, and the intersection pixels to 2, and so on.
- the images in the center of FIGS. 9 and 10 are superimposed images of the liver area marked by the first slice sample and the liver area marked by the second slice sample.
- the first slice sample and the second slice sample in Figure 9 are sampled from different CT images, while the first slice sample and the second slice sample in Figure 10 are sampled from the same CT image.
- when converging, the Dice loss function can be used; the details can be as follows:
- λ1, λ2, and λ3 are manually set hyperparameters used to balance the contribution of each part of the loss to the overall loss.
- for y ij and p ij , please refer to the previous embodiment, which will not be repeated here.
- each time the Dice loss function is used to converge the preset image segmentation model, one training iteration is completed; by analogy, after multiple training iterations, the trained image segmentation model can be obtained.
- the part of the image segmentation model "used to determine the association relationship between slice samples" allows the training process to use information beyond each slice sample's own target-object label (that is, the shape of the target object in the slice), namely the association relationship between slices, to train the image segmentation model to learn prior knowledge of the shape (prior knowledge refers to knowledge that can be used by the machine learning algorithm). Therefore, the part "used to determine the association relationship between slice samples" can also be referred to as the proxy supervision (Proxy Supervision) part, which will not be repeated here.
- the image segmentation model after training includes a residual network, a segmentation network, and a fusion network.
- the residual network can include a first residual network branch and a second residual network branch, and the segmentation network includes a first segmentation network branch and a second segmentation network branch.
- a medical image segmentation method; the specific process can be as follows:
- the electronic device acquires a medical image to be segmented.
- for example, the electronic device may receive medical images sent by various medical image acquisition devices, such as MRI or CT devices that image the human liver, and use these medical images as the medical images to be segmented.
- the received medical image may be preprocessed, such as deduplication, cropping, rotation, and/or flipping.
- the electronic device samples two slices from the medical image to be segmented to obtain a slice pair that currently needs to be segmented.
- for example, the electronic device can sample two consecutive slices along the z-axis at a certain interval to form a slice pair, or it can randomly sample two slices along the z-axis at a certain interval to form a slice pair, and so on.
- a patch-wise unit with overlapping parts may be used for sampling.
- patch-wise is a basic unit of an image.
- image-wise refers to the image level (that is, using a whole image as a unit), and pixel-wise refers to the pixel level.
- patch-wise refers to a level between the pixel level and the image level, where each patch is composed of many pixels.
- the two slices sampled into the slice pair may not overlap, may partially overlap, or may completely overlap (that is, be the same slice). It should be understood that, in the trained image segmentation model, the parameters of the same network structure in different branches may differ (for example, the parameters of the ASPP modules in different branches are not shared), so the initial segmentation results output by different branches for the same input may also differ; therefore it is meaningful even if the two input slices are the same.
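The sampling of slice pairs along the z-axis can be illustrated with a minimal sketch; the function name, the `interval` parameter, and the non-overlapping stride are assumptions for illustration, not the patent's exact scheme:

```python
import numpy as np

def sample_slice_pairs(volume, interval=1):
    """Sample slice pairs along the z-axis of a 3D volume at a fixed
    interval. `volume` is assumed to be a (depth, height, width) array."""
    depth = volume.shape[0]
    pairs = []
    for z in range(0, depth - interval, interval + 1):
        pairs.append((volume[z], volume[z + interval]))
    return pairs

volume = np.zeros((6, 4, 4))
pairs = sample_slice_pairs(volume, interval=1)  # pairs (0,1), (2,3), (4,5)
```

A random-sampling variant would simply draw the starting index `z` at random instead of stepping through the volume.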
- the electronic device performs feature extraction on each slice in the slice pair through the residual network in the image segmentation model after training, to obtain high-level feature information and low-level feature information of each slice.
- step 203 may be specifically as follows:
- the electronic device uses the first residual network branch in the residual network, such as a ResNet-18, to perform feature extraction on the first slice to obtain high-level feature information and low-level feature information of different scales corresponding to the first slice, and then uses an ASPP module to process the high-level feature information corresponding to the first slice to obtain high-level feature information of multiple scales corresponding to the first slice.
- the electronic device uses the second residual network branch in the residual network, such as another ResNet-18, to perform feature extraction on the second slice to obtain high-level feature information and low-level feature information of different scales corresponding to the second slice, and then uses another ASPP module to process the high-level feature information corresponding to the second slice to obtain high-level feature information of multiple scales corresponding to the second slice.
- the parameters of the first residual network branch and the second residual network branch can be shared, while the parameters of the ASPP modules connected to the two branches may not be shared.
- the specific parameters can be set according to actual application requirements, which will not be repeated here.
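The multi-scale processing performed by ASPP can be sketched in pure NumPy for a single channel; a real ASPP uses learned multi-channel kernels plus 1×1-convolution and image-level pooling branches, so everything here (kernel, rates) is an illustrative assumption:

```python
import numpy as np

def dilated_conv3x3(x, kernel, rate):
    """Naive single-channel 3x3 atrous (dilated) convolution with
    zero padding; `rate` is the dilation rate (receptive-field spacing)."""
    h, w = x.shape
    xp = np.pad(x, rate)
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = xp[i:i + 2 * rate + 1:rate, j:j + 2 * rate + 1:rate]
            out[i, j] = np.sum(patch * kernel)
    return out

def aspp(x, rates=(1, 2, 4)):
    """Stack features extracted with several dilation rates, i.e. several
    receptive fields, over the same high-level feature map."""
    kernel = np.ones((3, 3)) / 9.0  # illustrative averaging kernel
    return np.stack([dilated_conv3x3(x, kernel, r) for r in rates])

features = aspp(np.ones((8, 8)))  # shape (3, 8, 8): one map per rate
```

Larger dilation rates sample the same 3×3 pattern over a wider area, which is how ASPP obtains feature information of multiple scales without shrinking the feature map.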
- the electronic device segments the target object in each slice through the segmentation network in the trained image segmentation model according to the low-level feature information and high-level feature information of the slice, to obtain the initial segmentation result of the slice.
- step 204 may be specifically as follows:
- the electronic device uses the first segmentation network branch to perform convolution processing with a "1×1" convolution kernel on the low-level feature information and the high-level feature information of the first slice, up-samples the convolved high-level feature information to the same size as the convolved low-level feature information, and then concatenates it with the convolved low-level feature information to obtain the concatenated feature information of the first slice.
- then, convolution processing with a "3×3" convolution kernel is performed on the concatenated feature information, and the convolved concatenated feature information is up-sampled to the size of the first slice, so that the initial segmentation result of the first slice can be obtained.
- the other branch performs similar operations: the electronic device uses the second segmentation network branch to perform convolution processing with a "1×1" convolution kernel on the low-level feature information and the high-level feature information of the second slice, up-samples the convolved high-level feature information to the same size as the convolved low-level feature information, and then concatenates it with the convolved low-level feature information to obtain the concatenated feature information of the second slice; then, convolution processing with a "3×3" convolution kernel is performed on the concatenated feature information, and the convolved concatenated feature information is up-sampled to the size of the second slice, so that the initial segmentation result of the second slice can be obtained.
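The decoding steps of each branch (1×1 convolutions, up-sampling, concatenation) can be sketched as follows; the channel counts, the 4× spatial ratio, and the nearest-neighbour up-sampling are assumptions for illustration:

```python
import numpy as np

def conv1x1(x, w):
    """A 1x1 convolution is a per-pixel linear map over channels.
    x: (C_in, H, W), w: (C_out, C_in)."""
    return np.tensordot(w, x, axes=([1], [0]))

def upsample(x, factor):
    """Nearest-neighbour up-sampling along the two spatial axes."""
    return x.repeat(factor, axis=-2).repeat(factor, axis=-1)

rng = np.random.default_rng(0)
# hypothetical shapes: high-level features are 4x smaller spatially
high = rng.random((8, 4, 4))    # (C_high, H/4, W/4)
low = rng.random((4, 16, 16))   # (C_low, H, W)

h = upsample(conv1x1(high, rng.random((4, 8))), 4)  # to low-level size
concat = np.concatenate([h, conv1x1(low, rng.random((4, 4)))], axis=0)
# a 3x3 convolution and a final up-sample to the slice size would follow
```

The concatenated tensor is what the "3×3" convolution described above then refines before the final up-sampling to the slice size.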
- the electronic device fuses the low-level feature information and high-level feature information of each slice in the slice pair through the fusion network in the image segmentation model after training.
- the details can be as follows:
- on the one hand, the low-level feature information of the first slice and the low-level feature information of the second slice are added element by element to obtain fused low-level feature information; on the other hand, the high-level feature information of the first slice and the high-level feature information of the second slice are added element by element to obtain fused high-level feature information. Then, the fused low-level feature information and the fused high-level feature information are processed through the channel attention module in the fusion network of the trained segmentation model to obtain processed feature information, and the processed feature information and the fused high-level feature information are added element by element to obtain the fused feature information.
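The fusion steps above can be sketched as follows; the particular channel-attention gate (global average pooling followed by a sigmoid) is an assumed form, not the patent's exact module:

```python
import numpy as np

def fuse(low_a, low_b, high_a, high_b):
    """Fusion sketch: element-wise sums, a channel-attention gate,
    element-wise multiplication with the fused low-level features, and a
    final element-wise addition with the fused high-level features."""
    fused_low = low_a + low_b      # element-wise sum of low-level features
    fused_high = high_a + high_b   # element-wise sum of high-level features
    gate = 1.0 / (1.0 + np.exp(-fused_high.mean(axis=(1, 2))))  # (C,) weights
    processed = gate[:, None, None] * fused_low   # weighted feature info
    return processed + fused_high                 # fused feature information

fused = fuse(np.ones((4, 8, 8)), np.ones((4, 8, 8)),
             np.zeros((4, 8, 8)), np.zeros((4, 8, 8)))
```

The gate produces one weight per channel, which is how the attention mechanism lets the network emphasize some feature channels over others during fusion.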
- steps 204 and 205 can be executed in no particular order.
- the electronic device uses the fusion network in the trained image segmentation model to determine the association information between the slices in the slice pair according to the fusion feature information.
- the electronic device can specifically filter out the features belonging to the liver region from the fused feature information, determine the foreground region of the first slice (that is, the region where the liver is located) and the foreground region of the second slice (that is, the region where the liver is located) respectively, and treat the regions outside the union of the foreground regions of the two slices as the background region of the slice pair. Then, among the fused feature information, the pixels belonging to the foreground region of only one of the two slices are taken as the difference-set pixels of the slice pair, and the pixels belonging to the foreground regions of both slices are taken as the intersection pixels.
- the background region, the difference-set pixels, and the intersection pixels are then identified by pixel type, for example by using different pixel values or different colors to mark these regions, so as to obtain the association information of the first slice and the second slice.
- the operation of determining the associated information between the slices in the slice pair according to the fused feature information can be implemented through a variety of network structures.
- for example, a convolutional layer with a 3×3 convolution kernel can perform convolution processing on the fused feature information, and the convolved fused feature information can then be up-sampled to the same size as the input slices (the first slice and the second slice) to obtain the association information between the first slice and the second slice, such as the background region of the slice pair, the intersection pixels of the first slice and the second slice, and the difference-set pixels of the first slice and the second slice.
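Given two binary foreground masks, the three pixel types that make up the association information can be computed directly; the integer labels used below are illustrative, not the patent's encoding:

```python
import numpy as np

def association_map(fg_a, fg_b):
    """Encode a slice pair's association as pixel types: 0 = background
    (outside both foregrounds), 1 = difference set (foreground of exactly
    one slice), 2 = intersection (foreground of both slices)."""
    out = np.zeros(fg_a.shape, dtype=int)
    out[np.logical_xor(fg_a, fg_b)] = 1  # difference-set pixels
    out[np.logical_and(fg_a, fg_b)] = 2  # intersection pixels
    return out

a = np.array([[1, 1, 0], [0, 0, 0]], dtype=bool)  # foreground of slice 1
b = np.array([[0, 1, 1], [0, 0, 0]], dtype=bool)  # foreground of slice 2
m = association_map(a, b)  # [[1, 2, 1], [0, 0, 0]]
```

In the model itself this map is predicted by the fusion network rather than computed from ground-truth masks; the sketch only shows what the three pixel types mean.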
- the electronic device generates a segmentation result of the slice pair based on the association information between the slices in the slice pair and the initial segmentation result of each slice in the slice pair.
- for example, the segmentation result of the second slice can be predicted according to the association information between the slices and the initial segmentation result of the first slice to obtain the predicted segmentation result of the second slice, and the segmentation result of the first slice can be predicted according to the association information between the slices and the initial segmentation result of the second slice to obtain the predicted segmentation result of the first slice.
- then, the predicted segmentation result of the first slice and the initial segmentation result of the first slice are averaged to obtain the adjusted segmentation result of the first slice, and the predicted segmentation result of the second slice and the initial segmentation result of the second slice are averaged to obtain the adjusted segmentation result of the second slice; further, the adjusted segmentation result of the first slice and the adjusted segmentation result of the second slice are merged, for example by averaging, and the averaged result is binarized to obtain the segmentation result of the slice pair.
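The averaging and binarization steps above can be sketched as follows; the 0.5 threshold is an assumption, since the patent only says the averaged result is binarized:

```python
import numpy as np

def pair_segmentation(init_a, pred_a, init_b, pred_b, thresh=0.5):
    """Average each slice's initial and predicted segmentation maps to get
    adjusted results, then average the two adjusted maps and binarize."""
    adj_a = (init_a + pred_a) / 2.0  # adjusted result of the first slice
    adj_b = (init_b + pred_b) / 2.0  # adjusted result of the second slice
    return ((adj_a + adj_b) / 2.0 >= thresh).astype(int)

out = pair_segmentation(np.array([0.9, 0.2]), np.array([0.7, 0.4]),
                        np.array([0.8, 0.1]), np.array([0.6, 0.3]))
# out == [1, 0]
```

Each input here stands for a per-pixel probability map; in practice the arrays would have the spatial shape of the slices.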
- the electronic device can return to step 202 to sample another two slices from the medical image to be segmented as the slice pair that currently needs to be segmented, and process them in the manner of steps 203 to 207 to obtain the corresponding segmentation result, and so on. After the segmentation results of all slice pairs in the medical image to be segmented are obtained, the segmentation results of these slice pairs are combined in slice order to obtain the segmentation result of the medical image to be segmented (that is, the 3D segmentation result).
- it can be seen from the above that an image segmentation model can be trained in advance using slice pair samples and the association between the slice samples in each slice pair sample (information such as prior knowledge); then, after the medical image to be segmented is obtained, the image segmentation model can use different receptive fields to perform feature extraction on each slice pair of the medical image to be segmented to obtain the high-level feature information and low-level feature information of each slice in the slice pair, and segment the liver region in each slice according to the slice's low-level feature information and high-level feature information to obtain the initial segmentation result of the slice.
- since the trained image segmentation model considers the association between the slices of a 3D medical image, it can segment two slices (a slice pair) simultaneously and use the inter-slice association to further adjust the segmentation results, ensuring that the shape information of the target object (such as the liver) is captured more accurately and making the segmentation more accurate.
- an embodiment of the present application also provides a medical image segmentation device.
- the medical image segmentation device may be integrated in an electronic device, such as a server or a terminal.
- the terminal may include a tablet computer, a notebook computer, a personal computer, medical image acquisition equipment, or electronic medical equipment, etc.
- the medical image segmentation device may include an acquisition unit 301, an extraction unit 302, a segmentation unit 303, a fusion unit 304, a determination unit 305, a generation unit 306, etc., as follows:
- the acquiring unit 301 is configured to acquire a slice pair, which includes two slices sampled from the medical image to be segmented.
- the acquiring unit 301 may be specifically configured to acquire a medical image to be segmented, and sample two slices from the medical image to be segmented to form a slice pair.
- the medical image to be segmented may be collected by various medical image acquisition equipment such as MRI or CT on biological tissues such as the heart or liver, and then provided to the acquisition unit 301.
- the extraction unit 302 is configured to perform feature extraction on each slice in the slice pair using different receptive fields to obtain high-level feature information and low-level feature information of each slice.
- feature extraction with different receptive fields can be performed on the slices in various ways.
- it can be implemented through a residual network, namely:
- the extraction unit 302 can be specifically used to extract features of each slice in the slice pair through the residual network in the segmentation model after training, to obtain high-level feature information and low-level feature information of each slice.
- the extraction unit 302 may use the first residual network branch in the residual network to perform feature extraction on the first slice to obtain high-level feature information of different scales and low-level feature information of different scales corresponding to the first slice, and use the second residual network branch to perform feature extraction on the second slice to obtain high-level feature information of different scales and low-level feature information of different scales corresponding to the second slice.
- the network structure of the first residual network branch and the second residual network branch can be specifically determined according to actual application requirements, for example, ResNet-18 can be used.
- the parameters of the first residual network branch and the second residual network branch can be shared, and the specific parameter settings can be determined according to the needs of the actual application.
- spatial pyramid pooling, such as ASPP processing, may also be performed on the obtained high-level feature information.
- the segmentation unit 303 is configured to segment the target object in the slice according to the low-level feature information and the high-level feature information of the slice for each slice in the slice pair to obtain the initial segmentation result of the slice.
- the segmentation unit 303 can be specifically configured to, for each slice in the slice pair, segment the target object in the slice through the segmentation network in the trained segmentation model according to the low-level feature information and high-level feature information of the slice, to obtain the initial segmentation result of the slice; for example, it is specifically used as follows:
- the low-level feature information and high-level feature information of the slice are respectively convolved through the segmentation network in the trained segmentation model; the convolved high-level feature information is up-sampled to the same size as the convolved low-level feature information and then concatenated with the convolved low-level feature information to obtain the concatenated feature information; according to the concatenated feature information, the pixels belonging to the target object in the slice are filtered out to obtain the initial segmentation result of the slice.
- please refer to the previous method embodiment for details, which will not be repeated here.
- the fusion unit 304 is configured to fuse the low-level feature information and the high-level feature information of each slice in the slice pair.
- the fusion unit 304 may be specifically used to fuse the low-level feature information and high-level feature information of each slice in the slice pair through the fusion network in the trained segmentation model.
- the fusion unit 304 can be specifically used for:
- the low-level feature information of each slice in the slice pair is added element by element to obtain the fused low-level feature information; the high-level feature information of each slice in the slice pair is added element by element to obtain the fused high-level feature information; the fused low-level feature information and the fused high-level feature information are then fused through the fusion network in the trained segmentation model to obtain the fused feature information.
- the fusion unit 304 may be specifically used to add the fused low-level feature information and the fused high-level feature information element by element through the fusion network in the trained segmentation model to obtain the fused feature information.
- the attention mechanism can also be used to allow the network to automatically assign different weights to different feature information so that the network can selectively fuse the feature information, that is:
- the fusion unit 304 can be specifically used to assign weights to the fused low-level feature information through the channel attention module in the fusion network of the trained segmentation model according to the fused low-level feature information and the fused high-level feature information, to obtain weighted feature information; multiply the weighted feature information and the fused low-level feature information element by element to obtain the processed feature information; and add the processed feature information and the fused high-level feature information element by element to obtain the fused feature information.
- the specific structure of the channel attention module can be determined according to actual application requirements, and will not be repeated here.
- the determining unit 305 is configured to determine the association information between the slices in the slice pair according to the feature information after the fusion.
- the target object refers to the object that needs to be identified in the slice, such as the "liver" in liver image segmentation, the "heart" in heart image segmentation, and so on.
- the determining unit 305 may include a screening subunit and a determining subunit, as follows:
- the screening subunit can be used to screen out the features belonging to the target object from the fusion feature information.
- the determining subunit can be used to determine the associated information between the slices according to the filtered features. For example, it can be as follows:
- the determining subunit can be specifically used to determine the background region and the foreground region of each slice in the slice pair according to the filtered features, calculate the difference-set pixels and intersection pixels of the foreground regions between the slices, and generate the association information between the slices in the slice pair according to the background region, the difference-set pixels, and the intersection pixels.
- the determining subunit can be specifically used to take, among the fused feature information, the pixels belonging to the foreground region of only one slice in the slice pair as the difference-set pixels, and the pixels belonging to the foreground regions of both slices in the slice pair as the intersection pixels.
- the determining subunit is specifically used to perform pixel type identification on the background region, the difference-set pixels, and the intersection pixels to obtain the association information between the slices.
- the generating unit 306 is configured to generate a segmentation result of the slice pair based on the association information and the initial segmentation result of each slice in the slice pair.
- the generating unit 306 may be specifically used to average the predicted segmentation result of the first slice and the initial segmentation result of the first slice to obtain the adjusted segmentation result of the first slice, and to average the predicted segmentation result of the second slice and the initial segmentation result of the second slice to obtain the adjusted segmentation result of the second slice.
- the generating unit 306 may be specifically configured to average the adjusted segmentation result of the first slice and the adjusted segmentation result of the second slice, and binarize the averaged result to obtain the segmentation result of the slice pair.
- the trained image segmentation model can be trained on multiple slice pair samples annotated with true values. Specifically, it can be pre-trained by operation and maintenance personnel, or it can be trained by the image segmentation device itself. That is, as shown in FIG. 14, the image segmentation device may further include a collection unit 307 and a training unit 308;
- the collection unit 307 may be used to collect multiple slice pair samples annotated with true values.
- the slice pair sample includes two slice samples sampled from medical image samples.
- for details, please refer to the previous embodiment, which will not be repeated here.
- the training unit 308 can be used to perform feature extraction on each slice sample in the slice pair sample through the residual network in the preset segmentation model to obtain high-level feature information and low-level feature information of each slice sample; for each slice sample, segment the target object in the slice sample through the segmentation network in the preset segmentation model according to the low-level feature information and high-level feature information of the slice sample, to obtain the predicted segmentation value of the slice sample;
- fuse the low-level feature information and high-level feature information of each slice sample in the slice pair sample through the fusion network in the preset segmentation model, and predict the association information between the slice samples in the slice pair sample according to the fused feature information; and converge the preset segmentation model according to the true values, the predicted segmentation value of each slice sample in the slice pair sample, and the predicted association information, to obtain the trained segmentation model.
- the training unit 308 may be specifically used to use the Dice loss function to converge the segmentation model according to the true value, the predicted segmentation value of each slice sample in the slice pair sample, and the predicted associated information to obtain the segmentation model after training.
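The Dice loss mentioned above can be sketched as follows; the smoothing constant `eps` is a common convention assumed here, not specified by the patent:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Dice loss between a soft prediction and a binary ground truth:
    1 - 2|P∩T| / (|P| + |T|), smoothed by eps to avoid division by zero."""
    intersection = np.sum(pred * target)
    dice = (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)
    return 1.0 - dice

loss = dice_loss(np.array([1.0, 1.0, 0.0, 0.0]),
                 np.array([1.0, 0.0, 0.0, 0.0]))
# loss ≈ 1 - 2/3 ≈ 0.333
```

In training, such a loss term would be evaluated on the predicted segmentation values and the predicted association information against their true values, and the model parameters updated until convergence.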
- each of the above units can be implemented as an independent entity, or can be combined arbitrarily, and implemented as the same or several entities.
- for the specific implementation of each of the above units, please refer to the previous method embodiments, which will not be repeated here.
- it can be seen from the above that the extraction unit 302 can use different receptive fields to perform feature extraction on each slice in the slice pair to obtain the high-level feature information and low-level feature information of each slice. Then, on the one hand, for each slice in the slice pair, the segmentation unit 303 segments the target object in the slice according to the low-level feature information and high-level feature information of the slice to obtain the initial segmentation result of the slice.
- on the other hand, the fusion unit 304 fuses the low-level feature information and high-level feature information of each slice in the slice pair, the determining unit 305 determines the association information between the slices in the slice pair according to the fused feature information, and the generation unit 306 generates the segmentation result of the slice pair based on the association information between the slices and the initial segmentation result of each slice in the slice pair. Since the association between the slices of a 3D medical image is considered, the device provided in this embodiment of the application segments two slices (a slice pair) simultaneously and uses the inter-slice association to further adjust the segmentation results, ensuring that the shape information of the target object (such as the liver) is captured more accurately, so the segmentation accuracy is higher.
- FIG. 15 shows a schematic structural diagram of the electronic device involved in the embodiment of the present application, specifically:
- the electronic device may include a processor 401 with one or more processing cores, a memory 402 with one or more computer-readable storage media, a power supply 403, an input unit 404, and other components.
- the structure shown in FIG. 15 does not constitute a limitation on the electronic device, which may include more or fewer components than shown in the figure, combine certain components, or use a different component arrangement, wherein:
- the processor 401 is the control center of the electronic device. It uses various interfaces and lines to connect the various parts of the entire electronic device, and it performs the various functions of the electronic device and processes data by running or executing the software programs and/or modules stored in the memory 402 and calling the data stored in the memory 402, thereby monitoring the electronic device as a whole.
- optionally, the processor 401 may include one or more processing cores; preferably, the processor 401 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs, and so on, and the modem processor mainly handles wireless communication. It can be understood that the foregoing modem processor may alternatively not be integrated into the processor 401.
- the memory 402 can be used to store software programs and modules.
- the processor 401 executes various functional applications and data processing by running the software programs and modules stored in the memory 402.
- the memory 402 may mainly include a program storage area and a data storage area.
- the program storage area may store an operating system, an application program required by at least one function (such as a sound playback function or an image playback function), and so on; the data storage area may store data created through the use of the electronic device, and so on.
- the memory 402 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage devices.
- the memory 402 may further include a memory controller to provide the processor 401 with access to the memory 402.
- the electronic device also includes a power supply 403 for supplying power to various components.
- the power supply 403 may be logically connected to the processor 401 through a power management system, so that functions such as charging, discharging, and power management can be managed through the power management system.
- the power supply 403 may also include one or more DC or AC power supplies, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and any other components.
- the electronic device may further include an input unit 404, which can be used to receive inputted digital or character information and generate keyboard, mouse, joystick, optical or trackball signal input related to user settings and function control.
- the electronic device may also include a display unit, etc., which will not be repeated here.
- specifically, in this embodiment, the processor 401 in the electronic device loads the executable file corresponding to the process of one or more application programs into the memory 402 according to the following instructions, and the processor 401 runs the application programs stored in the memory 402, thereby realizing various functions as follows:
- acquire a slice pair, the slice pair including two slices sampled from the medical image to be segmented; perform feature extraction on each slice in the slice pair to obtain high-level feature information and low-level feature information of each slice; segment the target object in each slice according to the slice's low-level feature information and high-level feature information to obtain the initial segmentation result of the slice; fuse the low-level feature information and high-level feature information of each slice in the slice pair, and determine the association information between the slices in the slice pair according to the fused feature information; and, based on the association information and the initial segmentation result of each slice in the slice pair, generate the segmentation result of the slice pair.
- for example, the residual network in the trained segmentation model can be used to perform feature extraction on each slice to obtain the high-level feature information and low-level feature information of each slice; then, for each slice in the slice pair, the target object in the slice is segmented through the segmentation network in the trained segmentation model according to the low-level feature information and high-level feature information of the slice, to obtain the initial segmentation result of the slice; the low-level feature information and high-level feature information of each slice in the slice pair are fused through the fusion network in the trained segmentation model, and the association information between the slices in the slice pair is determined according to the fused feature information; then, the segmentation result of the slice pair is generated based on the association information and the initial segmentation result of each slice in the slice pair.
- the trained segmentation model can be trained on multiple slice pair samples annotated with true values; specifically, it can be pre-set by operation and maintenance personnel, or obtained by training by the image segmentation device itself. That is, the processor 401 may also run an application program stored in the memory 402 to realize the following functions:
- for each slice sample, the target object in the slice sample is segmented through the segmentation network in the preset segmentation model according to the low-level feature information and high-level feature information of the slice sample, to obtain the predicted segmentation value of the slice sample;
- the low-level feature information and high-level feature information of each slice sample in the slice pair sample are fused, and the association information between the slice samples in the slice pair sample is predicted according to the fused feature information;
- the preset segmentation model is converged according to the true values, the predicted segmentation value of each slice sample in the slice pair sample, and the predicted association information, to obtain the trained segmentation model.
- it can be seen from the above that the electronic device of this embodiment can use different receptive fields to perform feature extraction on each slice in the slice pair to obtain the high-level feature information and low-level feature information of each slice. Then, on the one hand, for each slice in the slice pair, the target object in the slice is segmented according to the low-level feature information and high-level feature information of the slice to obtain the initial segmentation result of the slice; on the other hand, the low-level feature information and high-level feature information of each slice in the slice pair are fused, the association information between the slices in the slice pair is determined according to the fused feature information, and the initial segmentation results of each slice in the slice pair are then adjusted using the obtained association information to obtain the final desired segmentation result. Since the association between the slices of the 3D medical image is considered, the method provided in the embodiment of the present application segments two slices (a slice pair) simultaneously and uses the inter-slice association to further adjust the segmentation results, ensuring that the shape information of the target object (such as the liver) is captured more accurately.
- an embodiment of the present application provides a storage medium in which multiple instructions are stored; the instructions can be loaded by a processor to perform the steps in any of the medical image segmentation methods provided in the embodiments of the present application.
- for example, the instructions can perform the following steps:
- acquiring a slice pair, the slice pair including two slices sampled from the medical image to be segmented;
- generating the segmentation result of the slice pair.
- the residual network in the trained segmentation model can be used to extract features from each slice in the slice pair, obtaining high-level feature information and low-level feature information of each slice. Then, for each slice in the slice pair, the target object in the slice is segmented through the segmentation network in the trained segmentation model according to the slice's low-level and high-level feature information, yielding the initial segmentation result of the slice. Meanwhile, the fusion network in the trained segmentation model fuses the low-level and high-level feature information of the slices in the slice pair and determines the association information between the slices based on the fused feature information. Finally, the segmentation result of the slice pair is generated based on the association information and the initial segmentation result of each slice in the slice pair.
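Putting these pieces together, the inference path can be summarized with four stand-in callables for the trained residual, segmentation and fusion networks. The interfaces and the final refinement rule (a simple average with the association map, followed by binarization) are assumptions used only to make the data flow concrete:

```python
import numpy as np

def segment_slice_pair(slice_a, slice_b, extract, segment, fuse, associate,
                       threshold=0.5):
    """Sketch of the slice-pair inference pipeline described above."""
    low_a, high_a = extract(slice_a)   # residual network: two receptive fields
    low_b, high_b = extract(slice_b)
    init_a = segment(low_a, high_a)    # segmentation network: initial results
    init_b = segment(low_b, high_b)
    fused = fuse(low_a + low_b, high_a + high_b)  # fusion network (element-wise add)
    assoc = associate(fused)           # inter-slice association information
    # Refine the initial results with the association and binarize (assumed rule).
    pair = (init_a + init_b + assoc) / 3.0
    return (pair >= threshold).astype(np.uint8)
```

Any callables with matching shapes can be plugged in, which makes the data flow easy to exercise with toy arrays before wiring in real networks.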
- the trained segmentation model can be obtained by training on multiple slice-pair samples annotated with true values. Specifically, it may be preset by operation and maintenance personnel, or trained by the image segmentation apparatus itself. That is, the instructions can also perform the following steps:
- segmenting the target object in each slice sample through the segmentation network in the preset segmentation model to obtain a predicted segmentation value of the slice sample;
- fusing the low-level feature information and high-level feature information of each slice sample in the slice-pair sample, and predicting the association information between the slice samples in the slice-pair sample according to the fused feature information;
- converging the preset segmentation model according to the true values, the predicted segmentation value of each slice sample in the slice-pair sample, and the predicted association information to obtain the trained segmentation model.
- the storage medium may include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Multimedia (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Data Mining & Analysis (AREA)
- Medical Informatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Computational Linguistics (AREA)
- Mathematical Physics (AREA)
- Molecular Biology (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Quality & Reliability (AREA)
- Image Analysis (AREA)
- Apparatus For Radiation Diagnosis (AREA)
Abstract
Description
Claims (29)
- A medical image segmentation method, performed by an electronic device, the method comprising: acquiring a slice pair, the slice pair comprising two slices sampled from a medical image to be segmented; performing feature extraction on each slice in the slice pair using different receptive fields to obtain high-level feature information and low-level feature information of each slice in the slice pair; for each slice in the slice pair, segmenting a target object in the slice according to the low-level feature information and the high-level feature information of the slice to obtain an initial segmentation result of the slice; fusing the low-level feature information and the high-level feature information of the slices in the slice pair, and determining association information between the slices in the slice pair according to the fused feature information; and generating a segmentation result of the slice pair based on the association information and the initial segmentation results of the slices in the slice pair.
- The method according to claim 1, wherein determining the association information between the slices in the slice pair according to the fused feature information comprises: screening out features belonging to the target object from the fused feature information; and determining the association information between the slices in the slice pair according to the screened-out features.
- The method according to claim 2, wherein determining the association information between the slices in the slice pair according to the screened-out features comprises: determining a background region and a foreground region of each slice in the slice pair according to the screened-out features; calculating difference-set pixels and intersection-set pixels of the foreground regions between the slices in the slice pair; and generating the association information between the slices in the slice pair according to the background regions, the difference-set pixels and the intersection-set pixels.
- The method according to claim 3, wherein calculating the difference-set pixels and the intersection-set pixels of the foreground regions between the slices of the slice pair comprises: taking, from the fused feature information, pixels that belong to the foreground region of only one slice of the slice pair as the difference-set pixels; and taking, from the fused feature information, pixels that belong to the foreground regions of both slices of the slice pair as the intersection-set pixels.
- The method according to claim 3, wherein generating the association information between the slices in the slice pair according to the background regions, the difference-set pixels and the intersection-set pixels comprises: performing pixel-type labeling on the background regions, the difference-set pixels and the intersection-set pixels to obtain the association information between the slices in the slice pair.
- The method according to any one of claims 1 to 5, wherein the slice pair comprises a first slice and a second slice, and generating the segmentation result of the slice pair based on the association information and the initial segmentation results of the slices in the slice pair comprises: predicting a segmentation result of the second slice according to the association information and the initial segmentation result of the first slice to obtain a predicted segmentation result of the second slice; predicting a segmentation result of the first slice according to the association information and the initial segmentation result of the second slice to obtain a predicted segmentation result of the first slice; adjusting the initial segmentation result of the first slice based on the predicted segmentation result of the first slice to obtain an adjusted segmentation result of the first slice; adjusting the initial segmentation result of the second slice based on the predicted segmentation result of the second slice to obtain an adjusted segmentation result of the second slice; and fusing the adjusted segmentation result of the first slice and the adjusted segmentation result of the second slice to obtain the segmentation result of the slice pair.
- The method according to claim 6, wherein adjusting the initial segmentation result of the first slice based on the predicted segmentation result of the first slice to obtain the adjusted segmentation result of the first slice comprises: averaging the predicted segmentation result of the first slice and the initial segmentation result of the first slice to obtain the adjusted segmentation result of the first slice; and adjusting the initial segmentation result of the second slice based on the predicted segmentation result of the second slice to obtain the adjusted segmentation result of the second slice comprises: averaging the predicted segmentation result of the second slice and the initial segmentation result of the second slice to obtain the adjusted segmentation result of the second slice.
- The method according to claim 6, wherein fusing the adjusted segmentation result of the first slice and the adjusted segmentation result of the second slice to obtain the segmentation result of the slice pair comprises: averaging the adjusted segmentation result of the first slice and the adjusted segmentation result of the second slice, and binarizing the averaged result to obtain the segmentation result of the slice pair.
- The method according to any one of claims 1 to 5, wherein performing feature extraction on each slice in the slice pair using different receptive fields to obtain the high-level feature information and low-level feature information of each slice in the slice pair comprises: performing feature extraction on each slice in the slice pair through a residual network in a trained segmentation model to obtain the high-level feature information and low-level feature information of each slice; for each slice in the slice pair, segmenting the target object in the slice according to the low-level feature information and the high-level feature information of the slice to obtain the initial segmentation result of the slice comprises: for each slice in the slice pair, segmenting the target object in the slice through a segmentation network in the trained segmentation model according to the low-level feature information and the high-level feature information of the slice to obtain the initial segmentation result of the slice; and fusing the low-level feature information and the high-level feature information of the slices in the slice pair and determining the association information between the slices in the slice pair according to the fused feature information comprises: fusing, through a fusion network in the trained segmentation model, the low-level feature information and the high-level feature information of the slices in the slice pair, and determining the association information between the slices in the slice pair according to the fused feature information.
- The method according to claim 9, wherein fusing, through the fusion network in the trained segmentation model, the low-level feature information and the high-level feature information of the slices in the slice pair comprises: performing element-wise addition on the low-level feature information of the slices in the slice pair to obtain fused low-level feature information; performing element-wise addition on the high-level feature information of the slices in the slice pair to obtain fused high-level feature information; and fusing, through the fusion network in the trained segmentation model, the fused low-level feature information and the fused high-level feature information to obtain the fused feature information.
- The method according to claim 10, wherein fusing, through the fusion network in the trained segmentation model, the fused low-level feature information and the fused high-level feature information to obtain the fused feature information comprises: performing, through the fusion network in the trained segmentation model, element-wise addition on the fused low-level feature information and the fused high-level feature information to obtain the fused feature information; or assigning, through a channel attention module in the fusion network in the trained segmentation model, weights to the fused low-level feature information according to the fused low-level feature information and the fused high-level feature information to obtain weighted feature information, performing element-wise multiplication on the weighted feature information and the fused low-level feature information to obtain processed feature information, and performing element-wise addition on the processed feature information and the fused high-level feature information to obtain the fused feature information.
- The method according to claim 9, wherein for each slice in the slice pair, segmenting the target object in the slice through the segmentation network in the trained segmentation model according to the low-level feature information and the high-level feature information of the slice to obtain the initial segmentation result of the slice comprises: for each slice in the slice pair, performing convolution processing on the low-level feature information and the high-level feature information of the slice respectively through the segmentation network in the trained segmentation model; upsampling the convolved high-level feature information to the same size as the convolved low-level feature information and then concatenating it with the convolved low-level feature information to obtain concatenated feature information; and screening out pixels belonging to the target object in the slice according to the concatenated feature information to obtain the initial segmentation result of the slice.
- The method according to claim 9, wherein before performing feature extraction on each slice in the slice pair through the residual network in the trained segmentation model to obtain the high-level feature information and low-level feature information of each slice, the method further comprises: collecting a plurality of slice-pair samples annotated with true values, each slice-pair sample comprising two slice samples sampled from a medical image sample; performing feature extraction on each slice sample in the slice-pair sample through a residual network in a preset segmentation model to obtain high-level feature information and low-level feature information of each slice sample; for each slice sample in the slice-pair sample, segmenting a target object in the slice sample through a segmentation network in the preset segmentation model according to the low-level feature information and the high-level feature information of the slice sample to obtain a predicted segmentation value of the slice sample; fusing, through a fusion network in the preset segmentation model, the low-level feature information and the high-level feature information of the slice samples in the slice-pair sample, and predicting association information between the slice samples in the slice-pair sample according to the fused feature information; and converging the preset segmentation model according to the true values, the predicted segmentation values of the slice samples in the slice-pair sample and the predicted association information to obtain the trained segmentation model.
- A medical image segmentation apparatus, comprising: an acquisition unit, configured to acquire a slice pair, the slice pair comprising two slices sampled from a medical image to be segmented; an extraction unit, configured to perform feature extraction on each slice in the slice pair using different receptive fields to obtain high-level feature information and low-level feature information of each slice in the slice pair; a segmentation unit, configured to, for each slice in the slice pair, segment a target object in the slice according to the low-level feature information and high-level feature information of the slice to obtain an initial segmentation result of the slice; a fusion unit, configured to fuse the low-level feature information and the high-level feature information of the slices in the slice pair; a determination unit, configured to determine association information between the slices in the slice pair according to the fused feature information; and a generation unit, configured to generate a segmentation result of the slice pair based on the association information and the initial segmentation results of the slices in the slice pair.
- The apparatus according to claim 14, wherein the determination unit comprises a screening subunit and a determination subunit; the screening subunit is configured to screen out features belonging to the target object from the fused feature information; and the determination subunit is configured to determine the association information between the slices in the slice pair according to the screened-out features.
- The apparatus according to claim 15, wherein the determination subunit is specifically configured to: determine a background region and a foreground region of each slice in the slice pair according to the screened-out features; calculate difference-set pixels and intersection-set pixels of the foreground regions between the slices in the slice pair; and generate the association information between the slices in the slice pair according to the background regions, the difference-set pixels and the intersection-set pixels.
- The apparatus according to claim 16, wherein the determination subunit is specifically configured to: take, from the fused feature information, pixels that belong to the foreground region of only one slice of the slice pair as the difference-set pixels; and take, from the fused feature information, pixels that belong to the foreground regions of both slices of the slice pair as the intersection-set pixels.
- The apparatus according to claim 16, wherein the determination subunit is specifically configured to: perform pixel-type labeling on the background regions, the difference-set pixels and the intersection-set pixels to obtain the association information between the slices in the slice pair.
- The apparatus according to any one of claims 14 to 18, wherein the slice pair comprises a first slice and a second slice, and the generation unit is specifically configured to: predict a segmentation result of the second slice according to the association information and the initial segmentation result of the first slice to obtain a predicted segmentation result of the second slice; predict a segmentation result of the first slice according to the association information and the initial segmentation result of the second slice to obtain a predicted segmentation result of the first slice; adjust the initial segmentation result of the first slice based on the predicted segmentation result of the first slice to obtain an adjusted segmentation result of the first slice; adjust the initial segmentation result of the second slice based on the predicted segmentation result of the second slice to obtain an adjusted segmentation result of the second slice; and fuse the adjusted segmentation result of the first slice and the adjusted segmentation result of the second slice to obtain the segmentation result of the slice pair.
- The apparatus according to claim 19, wherein the generation unit is specifically configured to: average the predicted segmentation result of the first slice and the initial segmentation result of the first slice to obtain the adjusted segmentation result of the first slice; and average the predicted segmentation result of the second slice and the initial segmentation result of the second slice to obtain the adjusted segmentation result of the second slice.
- The apparatus according to claim 19, wherein the generation unit is specifically configured to: average the adjusted segmentation result of the first slice and the adjusted segmentation result of the second slice, and binarize the averaged result to obtain the segmentation result of the slice pair.
- The apparatus according to any one of claims 14 to 18, wherein the extraction unit is specifically configured to: perform feature extraction on each slice in the slice pair through a residual network in a trained segmentation model to obtain the high-level feature information and low-level feature information of each slice; the segmentation unit is specifically configured to: for each slice in the slice pair, segment the target object in the slice through a segmentation network in the trained segmentation model according to the low-level feature information and the high-level feature information of the slice to obtain the initial segmentation result of the slice; and the fusion unit is specifically configured to: fuse, through a fusion network in the trained segmentation model, the low-level feature information and the high-level feature information of the slices in the slice pair, and determine the association information between the slices in the slice pair according to the fused feature information.
- The apparatus according to claim 22, wherein the fusion unit is specifically configured to: perform element-wise addition on the low-level feature information of the slices in the slice pair to obtain fused low-level feature information; perform element-wise addition on the high-level feature information of the slices in the slice pair to obtain fused high-level feature information; and fuse, through the fusion network in the trained segmentation model, the fused low-level feature information and the fused high-level feature information to obtain the fused feature information.
- The apparatus according to claim 23, wherein the fusion unit is specifically configured to: perform, through the fusion network in the trained segmentation model, element-wise addition on the fused low-level feature information and the fused high-level feature information to obtain the fused feature information; or assign, through a channel attention module in the fusion network in the trained segmentation model, weights to the fused low-level feature information according to the fused low-level feature information and the fused high-level feature information to obtain weighted feature information, perform element-wise multiplication on the weighted feature information and the fused low-level feature information to obtain processed feature information, and perform element-wise addition on the processed feature information and the fused high-level feature information to obtain the fused feature information.
- The apparatus according to claim 22, wherein the segmentation unit is specifically configured to: for each slice in the slice pair, perform convolution processing on the low-level feature information and the high-level feature information of the slice respectively through the segmentation network in the trained segmentation model; upsample the convolved high-level feature information to the same size as the convolved low-level feature information and then concatenate it with the convolved low-level feature information to obtain concatenated feature information; and screen out pixels belonging to the target object in the slice according to the concatenated feature information to obtain the initial segmentation result of the slice.
- The apparatus according to claim 22, further comprising a collection unit and a training unit; the collection unit is configured to collect a plurality of slice-pair samples annotated with true values, each slice-pair sample comprising two slice samples sampled from a medical image sample; and the training unit is configured to: perform feature extraction on each slice sample in the slice-pair sample through a residual network in a preset segmentation model to obtain high-level feature information and low-level feature information of each slice sample; for each slice sample in the slice-pair sample, segment a target object in the slice sample through a segmentation network in the preset segmentation model according to the low-level feature information and the high-level feature information of the slice sample to obtain a predicted segmentation value of the slice sample; fuse, through a fusion network in the preset segmentation model, the low-level feature information and the high-level feature information of the slice samples in the slice-pair sample, and predict association information between the slice samples in the slice-pair sample according to the fused feature information; and converge the preset segmentation model according to the true values, the predicted segmentation values of the slice samples in the slice-pair sample and the predicted association information to obtain the trained segmentation model.
- An electronic device, comprising a memory and a processor, wherein the memory stores an application program, and the processor is configured to run the application program in the memory to perform the operations in the medical image segmentation method according to any one of claims 1 to 13.
- A storage medium storing a plurality of instructions, the instructions being adapted to be loaded by a processor to perform the steps in the medical image segmentation method according to any one of claims 1 to 13.
- A computer program product comprising instructions which, when run on a computer, cause the computer to perform the steps of the medical image segmentation method according to any one of claims 1 to 13.
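Claims 10 and 11 describe the fusion network's two variants: plain element-wise addition, or channel-attention weighting of the fused low-level features before combining them with the fused high-level features. The sketch below illustrates the attention variant; the global average pooling and sigmoid gate are assumptions, since the claims specify only that the weights are derived from both fused feature maps:

```python
import numpy as np

def attention_fusion(low_a, low_b, high_a, high_b):
    """Channel-attention fusion sketch for feature maps of shape (C, H, W)."""
    low = low_a + low_b            # fused low-level features (element-wise add)
    high = high_a + high_b         # fused high-level features (element-wise add)
    # One weight per channel, derived from both fused maps (assumed: global
    # average pooling followed by a sigmoid gate).
    pooled = (low + high).mean(axis=(1, 2), keepdims=True)   # shape (C, 1, 1)
    weights = 1.0 / (1.0 + np.exp(-pooled))                  # sigmoid
    weighted = weights * low       # element-wise multiply with low-level features
    return weighted + high         # element-wise add with high-level features
```

The simpler variant of claim 10 corresponds to returning `low + high` directly; the attention path lets the network suppress low-level channels that contribute little to the target object.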
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020217020738A KR102607800B1 (ko) | 2019-04-22 | 2020-03-27 | 의료 영상 세그먼트화 방법 및 디바이스, 전자 디바이스 및 저장 매체 |
JP2021541593A JP7180004B2 (ja) | 2019-04-22 | 2020-03-27 | 医用画像分割方法、医用画像分割装置、電子機器及びコンピュータプログラム |
EP20793969.5A EP3961484B1 (en) | 2019-04-22 | 2020-03-27 | Medical image segmentation method and device, electronic device and storage medium |
US17/388,249 US11887311B2 (en) | 2019-04-22 | 2021-07-29 | Method and apparatus for segmenting a medical image, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910322783.8A CN110110617B (zh) | 2019-04-22 | 2019-04-22 | 医学影像分割方法、装置、电子设备和存储介质 |
CN201910322783.8 | 2019-04-22 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/388,249 Continuation US11887311B2 (en) | 2019-04-22 | 2021-07-29 | Method and apparatus for segmenting a medical image, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020215985A1 true WO2020215985A1 (zh) | 2020-10-29 |
Family
ID=67486110
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/081660 WO2020215985A1 (zh) | 2019-04-22 | 2020-03-27 | 医学影像分割方法、装置、电子设备和存储介质 |
Country Status (6)
Country | Link |
---|---|
US (1) | US11887311B2 (zh) |
EP (1) | EP3961484B1 (zh) |
JP (1) | JP7180004B2 (zh) |
KR (1) | KR102607800B1 (zh) |
CN (1) | CN110110617B (zh) |
WO (1) | WO2020215985A1 (zh) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112699950A (zh) * | 2021-01-06 | 2021-04-23 | 腾讯科技(深圳)有限公司 | 医学图像分类方法、图像分类网络处理方法、装置和设备 |
CN113222012A (zh) * | 2021-05-11 | 2021-08-06 | 北京知见生命科技有限公司 | 一种肺部数字病理图像自动定量分析方法及*** |
CN113223017A (zh) * | 2021-05-18 | 2021-08-06 | 北京达佳互联信息技术有限公司 | 目标分割模型的训练方法、目标分割方法及设备 |
CN113378855A (zh) * | 2021-06-22 | 2021-09-10 | 北京百度网讯科技有限公司 | 用于处理多任务的方法、相关装置及计算机程序产品 |
CN113793345A (zh) * | 2021-09-07 | 2021-12-14 | 复旦大学附属华山医院 | 一种基于改进注意力模块的医疗影像分割方法及装置 |
CN113822314A (zh) * | 2021-06-10 | 2021-12-21 | 腾讯云计算(北京)有限责任公司 | 图像数据处理方法、装置、设备以及介质 |
CN114119514A (zh) * | 2021-11-12 | 2022-03-01 | 北京环境特性研究所 | 一种红外弱小目标的检测方法、装置、电子设备和存储介质 |
WO2023273956A1 (zh) * | 2021-06-29 | 2023-01-05 | 华为技术有限公司 | 一种基于多任务网络模型的通信方法、装置及*** |
WO2023276750A1 (ja) * | 2021-06-29 | 2023-01-05 | 富士フイルム株式会社 | 学習方法、画像処理方法、学習装置、画像処理装置、学習プログラム、及び画像処理プログラム |
CN116628457A (zh) * | 2023-07-26 | 2023-08-22 | 武汉华康世纪医疗股份有限公司 | 一种磁共振设备运行中的有害气体检测方法及装置 |
CN117095447A (zh) * | 2023-10-18 | 2023-11-21 | 杭州宇泛智能科技有限公司 | 一种跨域人脸识别方法、装置、计算机设备及存储介质 |
Families Citing this family (35)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110110617B (zh) * | 2019-04-22 | 2021-04-20 | 腾讯科技(深圳)有限公司 | 医学影像分割方法、装置、电子设备和存储介质 |
CN110490881A (zh) * | 2019-08-19 | 2019-11-22 | 腾讯科技(深圳)有限公司 | 医学影像分割方法、装置、计算机设备及可读存储介质 |
CN110598714B (zh) * | 2019-08-19 | 2022-05-17 | 中国科学院深圳先进技术研究院 | 一种软骨图像分割方法、装置、可读存储介质及终端设备 |
WO2021031066A1 (zh) * | 2019-08-19 | 2021-02-25 | 中国科学院深圳先进技术研究院 | 一种软骨图像分割方法、装置、可读存储介质及终端设备 |
CN110516678B (zh) | 2019-08-27 | 2022-05-06 | 北京百度网讯科技有限公司 | 图像处理方法和装置 |
CN110705381A (zh) * | 2019-09-09 | 2020-01-17 | 北京工业大学 | 遥感影像道路提取方法及装置 |
CN110766643A (zh) * | 2019-10-28 | 2020-02-07 | 电子科技大学 | 一种面向眼底图像的微动脉瘤检测方法 |
CN110852325B (zh) * | 2019-10-31 | 2023-03-31 | 上海商汤智能科技有限公司 | 图像的分割方法及装置、电子设备和存储介质 |
CN111028246A (zh) * | 2019-12-09 | 2020-04-17 | 北京推想科技有限公司 | 一种医学图像分割方法、装置、存储介质及电子设备 |
CN111091091A (zh) * | 2019-12-16 | 2020-05-01 | 北京迈格威科技有限公司 | 目标对象重识别特征的提取方法、装置、设备及存储介质 |
EP3843038B1 (en) * | 2019-12-23 | 2023-09-20 | HTC Corporation | Image processing method and system |
CN111260055B (zh) * | 2020-01-13 | 2023-09-01 | 腾讯科技(深圳)有限公司 | 基于三维图像识别的模型训练方法、存储介质和设备 |
CN113362331A (zh) * | 2020-03-04 | 2021-09-07 | 阿里巴巴集团控股有限公司 | 图像分割方法、装置、电子设备及计算机存储介质 |
CN111461130B (zh) * | 2020-04-10 | 2021-02-09 | 视研智能科技(广州)有限公司 | 一种高精度图像语义分割算法模型及分割方法 |
CN111583282B (zh) * | 2020-05-18 | 2024-04-23 | 联想(北京)有限公司 | 图像分割方法、装置、设备及存储介质 |
CN113724181A (zh) * | 2020-05-21 | 2021-11-30 | 国网智能科技股份有限公司 | 一种输电线路螺栓语义分割方法及*** |
CN111967538B (zh) * | 2020-09-25 | 2024-03-15 | 北京康夫子健康技术有限公司 | 应用于小目标检测的特征融合方法、装置、设备以及存储介质 |
CN111968137A (zh) * | 2020-10-22 | 2020-11-20 | 平安科技(深圳)有限公司 | 头部ct图像分割方法、装置、电子设备及存储介质 |
US11776128B2 (en) * | 2020-12-11 | 2023-10-03 | Siemens Healthcare Gmbh | Automatic detection of lesions in medical images using 2D and 3D deep learning networks |
US11715276B2 (en) * | 2020-12-22 | 2023-08-01 | Sixgill, LLC | System and method of generating bounding polygons |
CN112820412B (zh) * | 2021-02-03 | 2024-03-08 | 东软集团股份有限公司 | 用户信息的处理方法、装置、存储介质和电子设备 |
CN113470048B (zh) * | 2021-07-06 | 2023-04-25 | 北京深睿博联科技有限责任公司 | 场景分割方法、装置、设备及计算机可读存储介质 |
CN113627292B (zh) * | 2021-07-28 | 2024-04-30 | 广东海启星海洋科技有限公司 | 基于融合网络的遥感图像识别方法及装置 |
CN114136274A (zh) * | 2021-10-29 | 2022-03-04 | 杭州中科睿鉴科技有限公司 | 基于计算机视觉的站台限界测量方法 |
CN114067179A (zh) * | 2021-11-18 | 2022-02-18 | 上海联影智能医疗科技有限公司 | 图像标注方法、标注模型的训练方法和装置 |
CN113936220B (zh) * | 2021-12-14 | 2022-03-04 | 深圳致星科技有限公司 | 图像处理方法、存储介质、电子设备及图像处理装置 |
CN113989305B (zh) * | 2021-12-27 | 2022-04-22 | 城云科技(中国)有限公司 | 目标语义分割方法及应用其的街道目标异常检测方法 |
CN113989498B (zh) * | 2021-12-27 | 2022-07-12 | 北京文安智能技术股份有限公司 | 一种用于多类别垃圾场景识别的目标检测模型的训练方法 |
CN115830001B (zh) * | 2022-12-22 | 2023-09-08 | 抖音视界有限公司 | 肠道图像处理方法、装置、存储介质及电子设备 |
KR20240102817A (ko) * | 2022-12-26 | 2024-07-03 | 광운대학교 산학협력단 | 원격 의료시스템을 위한 동영상 압축 전송 방법 및 장치 |
CN116664953A (zh) * | 2023-06-28 | 2023-08-29 | 北京大学第三医院(北京大学第三临床医学院) | 2.5d肺炎医学ct影像分类装置及设备 |
CN117456191B (zh) * | 2023-12-15 | 2024-03-08 | 武汉纺织大学 | 一种基于三分支网络结构的复杂环境下语义分割方法 |
CN117635962B (zh) * | 2024-01-25 | 2024-04-12 | 云南大学 | 基于多频率融合的通道注意力图像处理方法 |
CN117853858A (zh) * | 2024-03-07 | 2024-04-09 | 烟台大学 | 基于全局和局部信息的磁共振图像合成方法、***和设备 |
CN118071774A (zh) * | 2024-04-17 | 2024-05-24 | 中南大学 | 一种基于多重注意力的医学图像分割方法 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108447052A (zh) * | 2018-03-15 | 2018-08-24 | 深圳市唯特视科技有限公司 | 一种基于神经网络的对称性脑肿瘤分割方法 |
CN109427052A (zh) * | 2017-08-29 | 2019-03-05 | ***通信有限公司研究院 | 基于深度学习处理眼底图像的相关方法及设备 |
CN109598732A (zh) * | 2018-12-11 | 2019-04-09 | 厦门大学 | 一种基于三维空间加权的医学图像分割方法 |
CN110110617A (zh) * | 2019-04-22 | 2019-08-09 | 腾讯科技(深圳)有限公司 | 医学影像分割方法、装置、电子设备和存储介质 |
Family Cites Families (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7379576B2 (en) * | 2003-11-03 | 2008-05-27 | Siemens Medical Solutions Usa, Inc. | Method and system for patient identification in 3D digital medical images |
JP2005245830A (ja) * | 2004-03-05 | 2005-09-15 | Jgs:Kk | 腫瘍検出方法、腫瘍検出装置及びプログラム |
US7088850B2 (en) * | 2004-04-15 | 2006-08-08 | Edda Technology, Inc. | Spatial-temporal lesion detection, segmentation, and diagnostic information extraction system and method |
US8913830B2 (en) * | 2005-01-18 | 2014-12-16 | Siemens Aktiengesellschaft | Multilevel image segmentation |
DE102007028895B4 (de) * | 2007-06-22 | 2010-07-15 | Siemens Ag | Verfahren zur Segmentierung von Strukturen in 3D-Bilddatensätzen |
FR2919747B1 (fr) * | 2007-08-02 | 2009-11-06 | Gen Electric | Procede et systeme d'affichage d'images de tomosynthese |
JP5138431B2 (ja) * | 2008-03-17 | 2013-02-06 | 富士フイルム株式会社 | 画像解析装置および方法並びにプログラム |
CN102573638A (zh) * | 2009-10-13 | 2012-07-11 | 新加坡科技研究局 | 一种用于分割图像中的肝脏对象的方法和*** |
US9196049B2 (en) * | 2011-03-09 | 2015-11-24 | Siemens Aktiengesellschaft | Method and system for regression-based 4D mitral valve segmentation from 2D+t magnetic resonance imaging slices |
JP6006307B2 (ja) * | 2011-07-07 | 2016-10-12 | ザ ボード オブ トラスティーズ オブ ザ レランド スタンフォード ジュニア ユニバーシティー | ボリューム測定位相コントラストmriによる総合的心血管解析 |
EP2751779B1 (en) * | 2011-10-11 | 2018-09-05 | Koninklijke Philips N.V. | A workflow for ambiguity guided interactive segmentation of lung lobes |
KR102204437B1 (ko) * | 2013-10-24 | 2021-01-18 | 삼성전자주식회사 | 컴퓨터 보조 진단 방법 및 장치 |
WO2017019833A1 (en) * | 2015-07-29 | 2017-02-02 | Medivation Technologies, Inc. | Compositions containing repair cells and cationic dyes |
DE102015217948B4 (de) * | 2015-09-18 | 2017-10-05 | Ernst-Moritz-Arndt-Universität Greifswald | Verfahren zur Segmentierung eines Organs und/oder Organbereiches in Volumendatensätzen der Magnetresonanztomographie |
JP6993334B2 (ja) * | 2015-11-29 | 2022-01-13 | アーテリーズ インコーポレイテッド | 自動化された心臓ボリュームセグメンテーション |
WO2017210690A1 (en) * | 2016-06-03 | 2017-12-07 | Lu Le | Spatial aggregation of holistically-nested convolutional neural networks for automated organ localization and segmentation in 3d medical scans |
US10667778B2 (en) * | 2016-09-14 | 2020-06-02 | University Of Louisville Research Foundation, Inc. | Accurate detection and assessment of radiation induced lung injury based on a computational model and computed tomography imaging |
US10580131B2 (en) * | 2017-02-23 | 2020-03-03 | Zebra Medical Vision Ltd. | Convolutional neural network for segmentation of medical anatomical images |
CN108229455B (zh) * | 2017-02-23 | 2020-10-16 | 北京市商汤科技开发有限公司 | 物体检测方法、神经网络的训练方法、装置和电子设备 |
WO2018222755A1 (en) * | 2017-05-30 | 2018-12-06 | Arterys Inc. | Automated lesion detection, segmentation, and longitudinal identification |
GB201709672D0 (en) * | 2017-06-16 | 2017-08-02 | Ucl Business Plc | A system and computer-implemented method for segmenting an image |
US9968257B1 (en) * | 2017-07-06 | 2018-05-15 | Halsa Labs, LLC | Volumetric quantification of cardiovascular structures from medical imaging |
JP6888484B2 (ja) * | 2017-08-29 | 2021-06-16 | 富士通株式会社 | 検索プログラム、検索方法、及び、検索プログラムが動作する情報処理装置 |
US10783640B2 (en) * | 2017-10-30 | 2020-09-22 | Beijing Keya Medical Technology Co., Ltd. | Systems and methods for image segmentation using a scalable and compact convolutional neural network |
CN109377496B (zh) * | 2017-10-30 | 2020-10-02 | 北京昆仑医云科技有限公司 | 用于分割医学图像的***和方法及介质 |
EP3714467A4 (en) * | 2017-11-22 | 2021-09-15 | Arterys Inc. | CONTENT-BASED IMAGE RECOVERY FOR LESION ANALYSIS |
CN108268870B (zh) * | 2018-01-29 | 2020-10-09 | 重庆师范大学 | 基于对抗学习的多尺度特征融合超声图像语义分割方法 |
US10902288B2 (en) * | 2018-05-11 | 2021-01-26 | Microsoft Technology Licensing, Llc | Training set sufficiency for image analysis |
US10964012B2 (en) * | 2018-06-14 | 2021-03-30 | Sony Corporation | Automatic liver segmentation in CT |
CN109191472A (zh) * | 2018-08-28 | 2019-01-11 | 杭州电子科技大学 | 基于改进U-Net网络的胸腺细胞图像分割方法 |
CN113506310B (zh) * | 2021-07-16 | 2022-03-01 | 首都医科大学附属北京天坛医院 | 医学图像的处理方法、装置、电子设备和存储介质 |
- 2019
- 2019-04-22 CN CN201910322783.8A patent/CN110110617B/zh active Active
- 2020
- 2020-03-27 KR KR1020217020738A patent/KR102607800B1/ko active IP Right Grant
- 2020-03-27 WO PCT/CN2020/081660 patent/WO2020215985A1/zh unknown
- 2020-03-27 JP JP2021541593A patent/JP7180004B2/ja active Active
- 2020-03-27 EP EP20793969.5A patent/EP3961484B1/en active Active
- 2021
- 2021-07-29 US US17/388,249 patent/US11887311B2/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109427052A (zh) * | 2017-08-29 | 2019-03-05 | ***通信有限公司研究院 | 基于深度学习处理眼底图像的相关方法及设备 |
CN108447052A (zh) * | 2018-03-15 | 2018-08-24 | 深圳市唯特视科技有限公司 | 一种基于神经网络的对称性脑肿瘤分割方法 |
CN109598732A (zh) * | 2018-12-11 | 2019-04-09 | 厦门大学 | 一种基于三维空间加权的医学图像分割方法 |
CN110110617A (zh) * | 2019-04-22 | 2019-08-09 | 腾讯科技(深圳)有限公司 | 医学影像分割方法、装置、电子设备和存储介质 |
Non-Patent Citations (1)
Title |
---|
See also references of EP3961484A4 |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112699950A (zh) * | 2021-01-06 | 2021-04-23 | 腾讯科技(深圳)有限公司 | 医学图像分类方法、图像分类网络处理方法、装置和设备 |
CN113222012A (zh) * | 2021-05-11 | 2021-08-06 | 北京知见生命科技有限公司 | 一种肺部数字病理图像自动定量分析方法及*** |
CN113223017A (zh) * | 2021-05-18 | 2021-08-06 | 北京达佳互联信息技术有限公司 | 目标分割模型的训练方法、目标分割方法及设备 |
CN113822314A (zh) * | 2021-06-10 | 2021-12-21 | 腾讯云计算(北京)有限责任公司 | 图像数据处理方法、装置、设备以及介质 |
CN113822314B (zh) * | 2021-06-10 | 2024-05-28 | 腾讯云计算(北京)有限责任公司 | 图像数据处理方法、装置、设备以及介质 |
CN113378855A (zh) * | 2021-06-22 | 2021-09-10 | 北京百度网讯科技有限公司 | 用于处理多任务的方法、相关装置及计算机程序产品 |
WO2023273956A1 (zh) * | 2021-06-29 | 2023-01-05 | 华为技术有限公司 | 一种基于多任务网络模型的通信方法、装置及*** |
WO2023276750A1 (ja) * | 2021-06-29 | 2023-01-05 | 富士フイルム株式会社 | 学習方法、画像処理方法、学習装置、画像処理装置、学習プログラム、及び画像処理プログラム |
CN113793345B (zh) * | 2021-09-07 | 2023-10-31 | 复旦大学附属华山医院 | 一种基于改进注意力模块的医疗影像分割方法及装置 |
CN113793345A (zh) * | 2021-09-07 | 2021-12-14 | 复旦大学附属华山医院 | 一种基于改进注意力模块的医疗影像分割方法及装置 |
CN114119514A (zh) * | 2021-11-12 | 2022-03-01 | 北京环境特性研究所 | 一种红外弱小目标的检测方法、装置、电子设备和存储介质 |
CN116628457B (zh) * | 2023-07-26 | 2023-09-29 | 武汉华康世纪医疗股份有限公司 | 一种磁共振设备运行中的有害气体检测方法及装置 |
CN116628457A (zh) * | 2023-07-26 | 2023-08-22 | 武汉华康世纪医疗股份有限公司 | 一种磁共振设备运行中的有害气体检测方法及装置 |
CN117095447A (zh) * | 2023-10-18 | 2023-11-21 | 杭州宇泛智能科技有限公司 | 一种跨域人脸识别方法、装置、计算机设备及存储介质 |
CN117095447B (zh) * | 2023-10-18 | 2024-01-12 | 杭州宇泛智能科技有限公司 | 一种跨域人脸识别方法、装置、计算机设备及存储介质 |
Also Published As
Publication number | Publication date |
---|---|
EP3961484A4 (en) | 2022-08-03 |
KR102607800B1 (ko) | 2023-11-29 |
EP3961484B1 (en) | 2024-07-17 |
KR20210097772A (ko) | 2021-08-09 |
CN110110617B (zh) | 2021-04-20 |
US20210365717A1 (en) | 2021-11-25 |
US11887311B2 (en) | 2024-01-30 |
JP7180004B2 (ja) | 2022-11-29 |
JP2022529557A (ja) | 2022-06-23 |
CN110110617A (zh) | 2019-08-09 |
EP3961484A1 (en) | 2022-03-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020215985A1 (zh) | 医学影像分割方法、装置、电子设备和存储介质 | |
CN110111313B (zh) | 基于深度学习的医学图像检测方法及相关设备 | |
CN109508681B (zh) | 生成人体关键点检测模型的方法和装置 | |
CN112017189B (zh) | 图像分割方法、装置、计算机设备和存储介质 | |
Gao et al. | Classification of CT brain images based on deep learning networks | |
CN111862044B (zh) | 超声图像处理方法、装置、计算机设备和存储介质 | |
US20220254134A1 (en) | Region recognition method, apparatus and device, and readable storage medium | |
CN104484886B (zh) | 一种mr图像的分割方法及装置 | |
CN111667459B (zh) | 一种基于3d可变卷积和时序特征融合的医学征象检测方法、***、终端及存储介质 | |
Ryou et al. | Automated 3D ultrasound biometry planes extraction for first trimester fetal assessment | |
CN108052909B (zh) | 一种基于心血管oct影像的薄纤维帽斑块自动检测方法和装置 | |
CN113424222A (zh) | 用于使用条件生成对抗网络提供中风病灶分割的***和方法 | |
CN114219855A (zh) | 点云法向量的估计方法、装置、计算机设备和存储介质 | |
CN112215217B (zh) | 模拟医师阅片的数字图像识别方法及装置 | |
CN117237351B (zh) | 一种超声图像分析方法以及相关装置 | |
CN115170401A (zh) | 图像补全方法、装置、设备及存储介质 | |
CN113724185A (zh) | 用于图像分类的模型处理方法、装置及存储介质 | |
CN113610746A (zh) | 一种图像处理方法、装置、计算机设备及存储介质 | |
CN117788810A (zh) | 一种无监督语义分割的学习*** | |
CN113096080A (zh) | 图像分析方法及*** | |
WO2023160157A1 (zh) | 三维医学图像的识别方法、装置、设备、存储介质及产品 | |
CN110147715A (zh) | 一种视网膜OCT图像Bruch膜开角自动检测方法 | |
Alsmirat et al. | Building an image set for modeling image re-targeting using deep learning | |
Jiang et al. | Computational approach to body mass index estimation from dressed people in 3D space | |
KR101916596B1 (ko) | 이미지의 혐오감을 예측하는 방법 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20793969 Country of ref document: EP Kind code of ref document: A1 |
ENP | Entry into the national phase |
Ref document number: 20217020738 Country of ref document: KR Kind code of ref document: A |
ENP | Entry into the national phase |
Ref document number: 2021541593 Country of ref document: JP Kind code of ref document: A |
NENP | Non-entry into the national phase |
Ref country code: DE |
ENP | Entry into the national phase |
Ref document number: 2020793969 Country of ref document: EP Effective date: 20211122 |