CN110782446B - Method and device for determining volume of lung nodule - Google Patents

Method and device for determining volume of lung nodule

Info

Publication number
CN110782446B
CN110782446B (application CN201911024057.4A)
Authority
CN
China
Prior art keywords
image
lung nodule
lung
region
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911024057.4A
Other languages
Chinese (zh)
Other versions
CN110782446A (en)
Inventor
石磊
魏子昆
王�琦
华铱炜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Yitu Healthcare Technology Co ltd
Original Assignee
Hangzhou Yitu Healthcare Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Yitu Healthcare Technology Co ltd filed Critical Hangzhou Yitu Healthcare Technology Co ltd
Priority to CN201911024057.4A priority Critical patent/CN110782446B/en
Publication of CN110782446A publication Critical patent/CN110782446A/en
Application granted granted Critical
Publication of CN110782446B publication Critical patent/CN110782446B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30061 Lung
    • G06T2207/30064 Lung nodule

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Pathology (AREA)
  • Geometry (AREA)
  • Epidemiology (AREA)
  • Databases & Information Systems (AREA)
  • Primary Health Care (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses a method and a device for determining the volume of a lung nodule. The method comprises: obtaining a chest 3D image and the three-dimensional coordinates of a lung nodule on the chest 3D image; determining a first ROI from the chest 3D image according to the three-dimensional coordinates of the lung nodule; segmenting the first ROI along different dimensions to determine multiple groups of first 2D image layers; obtaining, through a convolutional neural network region segmentation model, the confidence that each pixel point in each frame of 2D image in each group of first 2D image layers corresponding to the different dimensions belongs to the lung nodule region; determining the lung nodule region according to these confidences; and determining the volume of the lung nodule according to the lung nodule region. Compared with the traditional mode of diagnosis by doctors, the method reduces the inaccuracy of lung nodule volume measurement caused by differences in doctors' skill levels, and thus reduces the error rate of lung nodule diagnosis.

Description

Method and device for determining volume of lung nodule
Technical Field
Embodiments of the invention relate to the technical field of machine learning, and in particular to a method and a device for determining lung nodule volume, a computing device, and a computer-readable non-transitory storage medium.
Background
Currently, doctors diagnose various diseases by relying on medical knowledge and clinical experience. In other words, in the prior art most diseases are diagnosed by doctors. However, because the medical level varies between regions and the personal experience of doctors also differs, the conventional way of having doctors diagnose diseases is easily affected by the regional medical level and the doctor's personal experience, which leads to large diagnostic errors.
Taking the determination of lung nodule volume as an example, a doctor typically needs to observe the size of the lung nodule manually, determine the doubling time based on the degree and timing of the change in lung nodule volume, and then analyze whether the lung nodule is benign or malignant based on the doubling time and other factors. Measuring the volume of a lung nodule manually inevitably introduces inaccuracies, which in turn lead to misjudgments of the doubling time and of whether the lung nodule is benign or malignant.
Based on this, there is a need for a method of determining lung nodule volume that improves the accuracy of lung nodule volume measurement.
Disclosure of Invention
The embodiment of the invention provides a method and a device for determining the volume of a lung nodule, which are used for improving the accuracy of measuring the volume of the lung nodule.
In a first aspect, an embodiment of the invention provides a method for determining the volume of a lung nodule, which comprises the following steps:
acquiring a chest 3D image and three-dimensional coordinates of lung nodules on the chest 3D image;
determining a first ROI from the chest 3D image according to the three-dimensional coordinates of the lung nodule;
segmenting the first ROI along different dimensions, and determining multiple groups of first 2D image layers corresponding to the different dimensions; each group of 2D image layers comprises a plurality of frames of 2D images;
obtaining, through a convolutional neural network region segmentation model, the confidence that each pixel point in each frame of 2D image in each group of first 2D image layers corresponding to the different dimensions belongs to the lung nodule region;
determining the lung nodule region according to the confidence coefficient that pixel points in each frame of 2D image in each group of first 2D image layers corresponding to different dimensions belong to the lung nodule region;
determining a volume of the lung nodule from the lung nodule region.
In the above technical solution, by segmenting the first ROI determined from the chest 3D image along different dimensions, multiple groups of first 2D image layers corresponding to the different dimensions can be obtained. The groups of first 2D image layers corresponding to the different dimensions are input in turn into the convolutional neural network region segmentation model to obtain the confidence that each pixel point in each frame of 2D image belongs to the lung nodule region, from which the lung nodule region can be obtained and the volume of the lung nodule can be determined automatically. Compared with the traditional mode of diagnosis by doctors, this reduces the inaccuracy of lung nodule volume measurement caused by differences in doctors' skill levels, and thus reduces the error rate of lung nodule diagnosis.
Optionally, the obtaining, through the convolutional neural network region segmentation model, of the confidence that each pixel point in each frame of 2D image in each group of first 2D image layers corresponding to the different dimensions belongs to the lung nodule region comprises:
sequentially inputting each group of first 2D image layers corresponding to the different dimensions, together with each corresponding group of second 2D image layers, as multiple channels into the convolutional neural network region segmentation model to obtain the confidence that each pixel point in each frame of 2D image in each group of first 2D image layers corresponding to the different dimensions belongs to the lung nodule region, wherein the second 2D image layer corresponding to any group of first 2D image layers of any dimension refers to an image layer composed of the frames of lung 2D images whose positions and regions correspond to those of the frames of 2D images in that first 2D image layer.
In this technical solution, the groups of first 2D image layers corresponding to the different dimensions and their corresponding groups of second 2D image layers are input into the convolutional neural network region segmentation model together, in sequence. This improves the accuracy of the obtained confidence that each pixel point in each frame of 2D image belongs to the lung nodule region, prevents non-nodule regions from being misjudged as lung nodule regions, and can reduce the misdiagnosis rate of subsequent lung nodule diagnosis.
Optionally, each frame of 2D lung image is obtained by:
segmenting lung regions in the chest 3D image to obtain a lung 3D image;
determining a second ROI from the lung 3D image according to the three-dimensional coordinates of the lung nodule;
and segmenting the second ROI along different dimensions to determine a plurality of groups of second 2D image layers corresponding to the different dimensions, wherein the second 2D image layer of any dimension comprises a plurality of frames of lung 2D images.
Optionally, the determining the lung nodule region according to the confidence that the pixel point in each frame of 2D image in each group of first 2D image layers corresponding to different dimensions belongs to the lung nodule region includes:
for any pixel point of any dimensionality, determining the confidence degree that the pixel point belongs to the lung nodule region in the dimensionality based on the confidence degree that the pixel point belongs to the lung nodule region in each frame of 2D image of each first 2D image layer;
obtaining a confidence distribution graph corresponding to each frame of 2D image of the first ROI under a preset dimension based on the confidence that any pixel belongs to the lung nodule region under different dimensions, wherein the confidence distribution graph corresponding to each frame of 2D image is related to the confidence that each pixel on the frame of 2D image belongs to the lung nodule region under different dimensions;
and obtaining a lung nodule region based on the confidence distribution graph corresponding to each frame of 2D image under the preset dimension and a first threshold value.
In this technical solution, the lung nodule region is obtained from the confidence that the pixel points in each frame of 2D image in the groups of first 2D image layers corresponding to the different dimensions belong to the lung nodule region, which improves the accuracy of the obtained lung nodule region.
Optionally, the determining the volume of the lung nodule according to the lung nodule region includes:
acquiring the number of pixel points in the lung nodule region;
and determining the volume of the lung nodule according to the number of the pixel points in the lung nodule region and a preset scale.
In a second aspect, an embodiment of the present invention provides an apparatus for determining lung nodule volume, including:
an acquisition unit, configured to acquire a chest 3D image and the three-dimensional coordinates of a lung nodule on the chest 3D image;
the processing unit is used for determining a first ROI from the chest 3D image according to the three-dimensional coordinates of the lung nodule; segmenting the first ROI along different dimensions, and determining multiple groups of first 2D image layers corresponding to the different dimensions; each group of 2D image layers comprises a plurality of frames of 2D images; obtaining confidence coefficients of pixel points in each frame of 2D image in each group of first 2D image layers corresponding to different dimensions, which belong to lung nodule regions, through a convolutional neural network region segmentation model; determining the lung nodule region according to the confidence coefficient that pixel points in each frame of 2D image in each group of first 2D image layers corresponding to different dimensions belong to the lung nodule region; determining a volume of the lung nodule from the lung nodule region.
Optionally, the processing unit is specifically configured to:
sequentially inputting each group of first 2D image layers corresponding to the different dimensions, together with each corresponding group of second 2D image layers, as multiple channels into the convolutional neural network region segmentation model to obtain the confidence that each pixel point in each frame of 2D image in each group of first 2D image layers corresponding to the different dimensions belongs to the lung nodule region, wherein the second 2D image layer corresponding to any group of first 2D image layers of any dimension refers to an image layer composed of the frames of lung 2D images whose positions and regions correspond to those of the frames of 2D images in that first 2D image layer.
Optionally, the processing unit is specifically configured to:
each frame of lung 2D image is obtained by:
segmenting lung regions in the chest 3D image to obtain a lung 3D image;
determining a second ROI from the lung 3D image according to the three-dimensional coordinates of the lung nodule;
and segmenting the second ROI along different dimensions to determine a plurality of groups of second 2D image layers corresponding to the different dimensions, wherein the second 2D image layer of any dimension comprises a plurality of frames of lung 2D images.
Optionally, the processing unit is specifically configured to:
for any pixel point of any dimensionality, determining the confidence degree that the pixel point belongs to the lung nodule region in the dimensionality based on the confidence degree that the pixel point belongs to the lung nodule region in each frame of 2D image of each first 2D image layer;
obtaining a confidence distribution graph corresponding to each frame of 2D image of the first ROI under a preset dimension based on the confidence that any pixel belongs to the lung nodule region under different dimensions, wherein the confidence distribution graph corresponding to each frame of 2D image is related to the confidence that each pixel on the frame of 2D image belongs to the lung nodule region under different dimensions;
and obtaining a lung nodule region based on the confidence distribution graph corresponding to each frame of 2D image under the preset dimension and a first threshold value.
Optionally, the processing unit is specifically configured to:
acquiring the number of pixel points in the lung nodule region;
and determining the volume of the lung nodule according to the number of the pixel points in the lung nodule region and a preset scale.
In a third aspect, an embodiment of the present invention further provides a computing device, including:
a memory for storing program instructions;
and a processor for calling the program instructions stored in the memory and executing the method for determining the volume of the lung nodule according to the obtained program.
In a fourth aspect, embodiments of the present invention also provide a computer-readable non-transitory storage medium including computer-readable instructions which, when read and executed by a computer, cause the computer to perform the above method for determining lung nodule volume.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic diagram of a system architecture according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a method for determining lung nodule volume according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a chest 3D image according to an embodiment of the present invention;
fig. 4a and 4b are schematic diagrams of a chest CT image according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an apparatus for determining lung nodule volume according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a system architecture provided in an embodiment of the present invention. Referring to fig. 1, the system architecture may be a server 100 including a processor 110, a communication interface 120, and a memory 130.
The communication interface 120 is used to communicate with a terminal device used by a doctor, receiving and transmitting the information exchanged with the terminal device.
The processor 110 is a control center of the server 100, connects various parts of the entire server 100 using various interfaces and lines, performs various functions of the server 100 and processes data by running or executing software programs and/or modules stored in the memory 130 and calling data stored in the memory 130. Alternatively, processor 110 may include one or more processing units.
The memory 130 may be used to store software programs and modules, and the processor 110 performs various functional applications and data processing by running the software programs and modules stored in the memory 130. The memory 130 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function, and the like, and the data storage area may store data created during business processing, and the like. Further, the memory 130 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
It should be noted that the structure shown in fig. 1 is only an example, and the embodiment of the present invention is not limited thereto.
Based on the above description, fig. 2 schematically illustrates a flowchart of a method for determining lung nodule volume according to an embodiment of the present invention. The flow may be performed by an apparatus for determining lung nodule volume, and the apparatus may be located in the server 100 shown in fig. 1 or may be the server 100 itself.
As shown in fig. 2, the process specifically includes:
step 201, obtaining a 3D image of a chest and three-dimensional coordinates of a lung nodule on the 3D image of the chest.
The chest 3D image may be an image acquired by a Computed Tomography (CT) apparatus, an image acquired by a Magnetic Resonance Imaging (MRI) apparatus, or the like; to describe the chest 3D image more clearly, fig. 3 exemplarily shows a chest CT image of one patient. The three-dimensional coordinates of the lung nodule on the chest 3D image may be the three-dimensional coordinates of a point inside the lung nodule (for example, the center point of the lung nodule), or the three-dimensional coordinates of a point on the surface of the lung nodule.
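To make the acquisition step concrete, the following is a minimal sketch (not the implementation used in the embodiment) of reading a chest CT series into a 3D volume with SimpleITK; the function name, directory argument, and array layout are illustrative assumptions.

```python
import SimpleITK as sitk

def load_chest_ct(dicom_dir):
    """Read a DICOM series into a 3D array plus its voxel spacing."""
    reader = sitk.ImageSeriesReader()
    reader.SetFileNames(reader.GetGDCMSeriesFileNames(dicom_dir))
    image = reader.Execute()
    volume = sitk.GetArrayFromImage(image)   # numpy array, shape (z, y, x)
    spacing = image.GetSpacing()             # (x, y, z) spacing in mm
    return volume, spacing
```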
Step 202, determining a first ROI from the 3D chest image according to the three-dimensional coordinates of the lung nodule.
Specifically, with the three-dimensional coordinates of the lung nodule as the center, a three-dimensional block of a preset size containing the lung nodule region is cut from the chest 3D image as the first ROI. The center voxel may be a voxel of the lung nodule. The preset size may be set empirically, for example as a preset multiple of the radius of the lung nodule; the radius of the lung nodule may be determined by the prior art, which is not specifically limited in the embodiment of the present invention.
For example, the preset size may be 2 times the lung nodule radius; the corresponding pixel cube is cut out and interpolated (scaled) to a fixed size. A spatial information channel is then added to each pixel in the pixel cube and the first ROI is output, where the spatial information channel is the distance between that pixel and the three-dimensional coordinates of the lung nodule. For example, taking the three-dimensional coordinates of the lung nodule as the center and extending L pixels along each of the three coordinate axes, a pixel cube of size 2L x 2L x 2L can be selected. Fig. 4a is a schematic diagram of an example chest CT image. A lung nodule A exists in the chest CT image; the center coordinates of lung nodule A are (x0, y0, z0) and its radius is r. Taking the center coordinates (x0, y0, z0) as the center point and 2 times the radius r as the side length, the resulting region (a cube) is the first ROI of lung nodule A, as shown in fig. 4b.
It should be noted that the ROI may have various shapes; the cubic region corresponding to lung nodule A in the above chest CT image is merely an example, and in other possible examples the ROI may also be a sphere or another shape.
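As an illustration of the ROI extraction described above, the following is a minimal sketch assuming the nodule center and radius are already expressed in voxel indices; the side length of 2 times the radius and the output size of 64 are example values, and scipy.ndimage.zoom stands in for the interpolation step.

```python
import numpy as np
from scipy.ndimage import zoom

def crop_first_roi(volume, center_zyx, radius_vox, out_size=64):
    """Cut a cube with side length 2*radius around the nodule center and
    interpolate it to a fixed size, mirroring the example above."""
    half = int(np.ceil(radius_vox))          # extend `half` voxels per axis
    slices = tuple(
        slice(max(c - half, 0), min(c + half, dim))
        for c, dim in zip(center_zyx, volume.shape)
    )
    cube = volume[slices].astype(np.float32)
    factors = [out_size / s for s in cube.shape]
    return zoom(cube, factors, order=1)       # linear interpolation to out_size
```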
Step 203, segmenting the first ROI along different dimensions and determining multiple groups of first 2D image layers corresponding to the different dimensions.
Each group of 2D image layers comprises multiple frames of 2D images. In the embodiment of the present invention, the 2D images obtained along the different dimensions can generally be divided into cross-sectional 2D images, sagittal 2D images, and coronal 2D images, which correspond respectively to 2D images sliced along the Z axis, the Y axis, and the X axis of the coordinate system. The image is sliced frame by frame along each dimension, and a preset number of frames is then taken in each dimension as one group of first 2D image layers of that dimension. The preset number may be set empirically; for example, each group of 2D image layers may include 3 frames of 2D images. Each group of 2D image layers may consist of consecutive frames or of non-consecutive frames; for example, consecutive frames may include { the 1st frame 2D image, the 2nd frame 2D image }, and non-consecutive frames may include { the 1st frame 2D image, the 3rd frame 2D image, the 5th frame 2D image }. Further, any two groups among the multiple groups of first 2D image layers may share a common frame, or may share no common frame. For example, two groups sharing a common frame may be { the 1st frame 2D image, the 2nd frame 2D image, the 3rd frame 2D image } and { the 3rd frame 2D image, the 4th frame 2D image, the 5th frame 2D image }, or { the 1st frame 2D image, the 2nd frame 2D image, the 3rd frame 2D image } and { the 2nd frame 2D image, the 3rd frame 2D image, the 4th frame 2D image }, or { the 1st frame 2D image, the 3rd frame 2D image, the 5th frame 2D image } and { the 3rd frame 2D image, the 5th frame 2D image, the 7th frame 2D image }, or { the 1st frame 2D image, the 2nd frame 2D image, the 3rd frame 2D image } and { the 2nd frame 2D image, the 4th frame 2D image, the 6th frame 2D image }, and so on. Two groups sharing no common frame may be { the 1st frame 2D image, the 2nd frame 2D image, the 3rd frame 2D image } and { the 4th frame 2D image, the 5th frame 2D image, the 6th frame 2D image }, or { the 1st frame 2D image, the 3rd frame 2D image, the 5th frame 2D image } and { the 2nd frame 2D image, the 4th frame 2D image, the 6th frame 2D image }, and so on.
It should be noted that the embodiment of the present invention does not limit the number of first 2D image layers corresponding to each dimension; the numbers may be the same or different. When they are the same, each dimension contains the same number of groups. For example, if slicing each dimension yields 90 frames of 2D images and each group includes 3 frames of 2D images, then 30 groups of first 2D image layers can be obtained for each dimension.
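The slicing and grouping of step 203 can be sketched as follows; the axis-to-plane mapping, the group size of 3, and the stride of 1 are illustrative assumptions that mirror the examples above.

```python
import numpy as np

def make_2d_layer_groups(roi, frames_per_group=3, stride=1):
    """Split the ROI cube into overlapping groups of 2D frames per axis."""
    groups = {}
    for axis in range(3):                     # assumed: 0 axial, 1 coronal, 2 sagittal
        frames = np.moveaxis(roi, axis, 0)    # frames stacked along the first axis
        axis_groups = [
            frames[i:i + frames_per_group]
            for i in range(0, frames.shape[0] - frames_per_group + 1, stride)
        ]
        groups[axis] = axis_groups
    return groups
```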
Step 204, obtaining, through a convolutional neural network region segmentation model, the confidence that each pixel point in each frame of 2D image in each group of first 2D image layers corresponding to the different dimensions belongs to the lung nodule region.
In the embodiment of the present invention, each group of first 2D image layers corresponding to the different dimensions can be input directly into the convolutional neural network region segmentation model to obtain the confidence that each pixel point in each frame of 2D image in each group of first 2D image layers corresponding to the different dimensions belongs to the lung nodule region.
In order to obtain these confidences more accurately, the input to the convolutional neural network region segmentation model may further include, for each group of first 2D image layers corresponding to the different dimensions, a corresponding group of second 2D image layers, where the second 2D image layer corresponding to any group of first 2D image layers of any dimension refers to an image layer composed of the frames of lung 2D images whose positions and regions correspond to those of the frames of 2D images in that first 2D image layer.
Each frame of lung 2D image can be obtained as follows. First, the lung region in the chest 3D image is segmented to obtain a lung 3D image; the method used to segment the lung region in the chest 3D image is not specifically limited in the embodiment of the present invention. A second ROI is then determined from the lung 3D image according to the three-dimensional coordinates of the lung nodule, and finally the second ROI is segmented along the different dimensions to determine multiple groups of second 2D image layers corresponding to the different dimensions. It should be noted that the manner of determining and segmenting the second ROI is the same as the manner of determining and segmenting the first ROI, which is not repeated here. The second 2D image layer of any dimension comprises multiple frames of lung 2D images.
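The embodiment does not prescribe how the lung region is segmented; purely for illustration, the sketch below uses a commonly assumed approach (HU thresholding, removal of border-connected background air, and morphological closing), which is not stated in the patent.

```python
import numpy as np
from scipy import ndimage

def segment_lungs(chest_hu, air_threshold=-320):
    """Rough lung mask: air-like voxels inside the body, cleaned up."""
    air = chest_hu < air_threshold
    labels, _ = ndimage.label(air)
    # drop air components that touch the volume border (background air)
    border = np.unique(np.concatenate([
        labels[0].ravel(), labels[-1].ravel(),
        labels[:, 0].ravel(), labels[:, -1].ravel(),
        labels[:, :, 0].ravel(), labels[:, :, -1].ravel()]))
    for b in border:
        if b != 0:
            air[labels == b] = False
    lung_mask = ndimage.binary_closing(air, iterations=3)
    return chest_hu * lung_mask               # lung 3D image, zero outside the lungs
```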
A frame of lung 2D image corresponding in position and region to a frame of 2D image in a first 2D image layer refers to a 2D image and a lung 2D image that have the same slicing position and belong to regions containing the three-dimensional coordinates of the same lung nodule when the first ROI and the second ROI are segmented. Taking cross-sectional images as an example, the slicing position can be represented by the Z-axis coordinate in the coordinate system.
When each group of first 2D image layers corresponding to the different dimensions and its corresponding group of second 2D image layers are input into the convolutional neural network region segmentation model at the same time, a group of first 2D image layers at the same slicing position in a given dimension and its corresponding second 2D image layer can be input into the model as multiple channels, so that the confidence that each pixel point in each frame of 2D image in each group of first 2D image layers corresponding to the different dimensions belongs to the lung nodule region is obtained.
Inputting the groups of first 2D image layers corresponding to the different dimensions and their corresponding groups of second 2D image layers into the convolutional neural network region segmentation model at the same time can improve the accuracy of the confidence that each pixel point in each frame of 2D image belongs to the lung nodule region: some non-nodule regions can be removed, misjudgments are reduced, and the misdiagnosis rate of subsequent lung nodule diagnosis can be reduced. For example, by masking with the lung 2D image, some candidate nodule regions that lie outside the lung contour can be removed.
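A minimal sketch of assembling the multi-channel input from one group of first 2D image layers and its matching group of second 2D image layers; the channel ordering and the batch layout are assumptions made for illustration only.

```python
import numpy as np

def build_model_input(first_group, second_group):
    """Stack a group of first 2D image layers and the matching lung (second)
    2D image layers as channels for the segmentation model."""
    # first_group, second_group: arrays of shape (frames, H, W) covering the
    # same slicing positions in the same dimension.
    assert first_group.shape == second_group.shape
    channels = np.concatenate([first_group, second_group], axis=0)
    return channels[np.newaxis, ...]           # (1, 2*frames, H, W) batch
```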
It should be noted that the convolutional neural network region segmentation model in the embodiment of the present invention may be a 2D convolutional neural network region segmentation model or a 3D convolutional neural network region segmentation model. If it is a 2D model, a group of 2D images may be input frame by frame; if it is a 3D model, a group of 2D images may be input together, and a 3D convolution module then performs the dimension-reduction processing.
In an embodiment of the present invention, the convolutional neural network region segmentation model may include a feature extraction module, N upsampling modules, and N downsampling modules.
When the groups of first 2D image layers corresponding to the different dimensions and their corresponding groups of second 2D image layers are input into the convolutional neural network region segmentation model at the same time, a group of first 2D image layers at the same slicing position in a given dimension and its corresponding second 2D image layer can be input into the feature extraction module as multiple channels to obtain a first feature image corresponding to each frame of 2D image in that group. The first feature images are then input into the N down-sampling modules, so that a sampling result, i.e. a second feature image, is obtained for each down-sampling module, where the second feature images have different sizes. Finally, the second feature image produced by the last of the N down-sampling modules is input into the N up-sampling modules to obtain the sampling result of each up-sampling module in turn; the sampling result of each up-sampling module is concatenated with the sampling result of the down-sampling module of the corresponding size to obtain a third feature image. After the sampling result of each up-sampling module is concatenated with the second feature image of the same size, the result is input into the next up-sampling module.
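The module arrangement just described (feature extraction, N down-sampling modules, and N up-sampling modules whose results are concatenated with the same-sized down-sampling results) resembles an encoder-decoder with skip connections; the PyTorch sketch below illustrates that arrangement with assumed channel widths, N = 2, and a sigmoid confidence head, none of which are specified by the patent.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class RegionSegNet(nn.Module):
    def __init__(self, in_channels=6, base=32, n=2):
        super().__init__()
        self.feat = conv_block(in_channels, base)            # feature extraction module
        self.downs = nn.ModuleList(                           # N down-sampling modules
            [conv_block(base * 2**i, base * 2**(i + 1)) for i in range(n)])
        self.pool = nn.MaxPool2d(2)
        self.ups = nn.ModuleList(                             # N up-sampling modules
            [nn.ConvTranspose2d(base * 2**(i + 1), base * 2**i, 2, stride=2)
             for i in reversed(range(n))])
        self.fuse = nn.ModuleList(                            # concatenation + convolution
            [conv_block(base * 2**(i + 1), base * 2**i) for i in reversed(range(n))])
        self.head = nn.Conv2d(base, 1, 1)                     # per-pixel confidence

    def forward(self, x):
        skips = [self.feat(x)]
        for down in self.downs:
            skips.append(down(self.pool(skips[-1])))
        x = skips.pop()
        for up, fuse in zip(self.ups, self.fuse):
            x = fuse(torch.cat([up(x), skips.pop()], dim=1))  # concat with same-size result
        return torch.sigmoid(self.head(x))
```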
Step 205, determining the lung nodule region according to the confidence that the pixel points in each frame of 2D image in each group of first 2D image layers corresponding to different dimensions belong to the lung nodule region.
In the embodiment of the present invention, for any pixel point of any dimension, the confidence that the pixel point belongs to the lung nodule region in that dimension can be obtained based on the confidence that the pixel point belongs to the lung nodule region in each frame of 2D image of each first 2D image layer. Then, based on the confidence that any pixel point belongs to the lung nodule region in the different dimensions, the confidence distribution map corresponding to each frame of 2D image of the first ROI in a preset dimension is obtained, where the confidence distribution map corresponding to each frame of 2D image is related to the confidence that each pixel point on that frame of 2D image belongs to the lung nodule region in the different dimensions. Finally, the lung nodule region can be obtained based on the confidence distribution map corresponding to each frame of 2D image in the preset dimension and a first threshold, where the first threshold may be set empirically.
Based on the above, as a possible implementation, consider any dimension in which groups of first 2D image layers share common frames: for example, the first group of 2D image layers is { the 1st frame 2D image, the 2nd frame 2D image, the 3rd frame 2D image }, the second group is { the 2nd frame 2D image, the 3rd frame 2D image, the 4th frame 2D image }, and the third group is { the 3rd frame 2D image, the 4th frame 2D image, the 5th frame 2D image }. Pixel point A is located on the 3rd frame 2D image and therefore belongs to the first, the second, and the third group of 2D image layers at the same time. In this dimension, three confidences that pixel point A belongs to the lung nodule region are thus obtained for pixel point A. In this case, the three confidences can be superimposed and averaged to obtain the confidence that pixel point A in the 3rd frame 2D image belongs to the lung nodule region in this dimension.
If there is only one dimension, the confidence that any pixel point in the 3rd frame 2D image belongs to the lung nodule region in this dimension can be obtained in the same way, giving a confidence distribution map of the pixel points in each frame of 2D image belonging to the lung nodule region in this dimension. Combining the confidence distribution maps of all frames of 2D images in this dimension gives the confidence distribution map of every pixel point in the first ROI belonging to the lung nodule region, from which the lung nodule region can be obtained. For example, taking the Z-axis direction of the coordinate system as an example, combining the per-frame confidence distribution maps gives the confidence distribution map of every pixel point in the cube belonging to the lung nodule region, i.e. the confidence distribution map of every three-dimensional pixel point belonging to the lung nodule region. When no two groups among the multiple groups of first 2D image layers share a common frame, the per-frame confidence distribution maps in one dimension can be combined directly to obtain the lung nodule region.
Furthermore, in order to improve the accuracy of determining the lung nodule region, the embodiment of the present invention considers not only the confidence that the pixel points in each frame of 2D image belong to the lung nodule region in one dimension, but also the confidences in multiple dimensions. Specifically, taking pixel point A as an example, the confidence that pixel point A belongs to the lung nodule region in each dimension can be obtained as described above, and these confidences are then superimposed and averaged to obtain the confidence that pixel point A belongs to the lung nodule region. Taking any dimension as the preset dimension, the confidences of pixel point A in the other dimensions and its confidence in the preset dimension are superimposed and averaged, so that the confidence distribution of the pixel points in each frame of 2D image in the preset dimension is obtained, and thus the confidence distribution map corresponding to each frame of 2D image of the first ROI in the preset dimension can be obtained. For example, if the confidence that pixel point A belongs to the lung nodule region is 95% in the Z-axis direction of the coordinate system, 96% in the X-axis direction, and 97% in the Y-axis direction, then taking the Z-axis direction as the preset dimension and averaging the three confidences gives 96%, which is the final confidence that pixel point A belongs to the lung nodule region in the Z-axis direction. Computing the confidence of all pixel points in this way gives the confidence distribution map corresponding to each frame of 2D image in the Z-axis direction of the coordinate system.
The confidence that pixel point A belongs to the lung nodule region in the different dimensions may also be determined in another way. For example, in the Z-axis direction of the coordinate system, pixel point A is located on the 3rd frame 2D image, and the 3rd frame 2D image appears in three different first 2D image layers, in which case a first, a second, and a third confidence are obtained. Similarly, in the X-axis direction pixel point A also appears in three different first 2D image layers, giving a fourth, a fifth, and a sixth confidence, and in the Y-axis direction it appears in three different first 2D image layers, giving a seventh, an eighth, and a ninth confidence. Nine different confidences are thus obtained for the same pixel point A. Taking the Z-axis direction of the coordinate system as the preset dimension, the nine confidences can be superimposed and averaged to obtain the final confidence that pixel point A belongs to the lung nodule region in the preset dimension, and thus the confidence distribution map corresponding to each frame of 2D image in the Z-axis direction of the coordinate system can be obtained.
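A sketch of the confidence fusion described above: predictions from overlapping groups and from the three slicing directions are accumulated back into one 3D confidence volume and averaged. The data layout (a dict of per-axis lists of (start index, prediction stack) pairs) is an assumption made for illustration.

```python
import numpy as np

def fuse_confidences(per_axis_predictions, roi_shape):
    """per_axis_predictions[axis] is a list of (start_index, conf_stack) pairs,
    where conf_stack has shape (frames, H, W) in that axis's slicing order."""
    conf_sum = np.zeros(roi_shape, dtype=np.float32)
    conf_cnt = np.zeros(roi_shape, dtype=np.float32)
    for axis, groups in per_axis_predictions.items():
        for start, stack in groups:
            block = np.moveaxis(stack, 0, axis)     # back to (z, y, x) layout
            index = [slice(None)] * 3
            index[axis] = slice(start, start + stack.shape[0])
            conf_sum[tuple(index)] += block
            conf_cnt[tuple(index)] += 1.0
    return conf_sum / np.maximum(conf_cnt, 1.0)     # mean confidence per voxel
```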
Based on the above, a confidence distribution map of every pixel point in the first ROI belonging to the lung nodule region can be obtained, and the lung nodule region can then be determined by setting a first threshold. For example, the region formed by pixel points whose confidence is greater than the first threshold may be determined as the lung nodule region: adjacent pixel points whose confidence is greater than the first threshold are connected, and the region containing the largest number of connected pixel points is the lung nodule region. Binarization is then performed on the connected pixel points, for example setting the gray value of pixel points whose confidence is greater than the first threshold to 1 and the gray value of pixel points whose confidence is less than or equal to the first threshold to 0, so that the lung nodule region can be clearly distinguished and obtained.
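A sketch of turning the fused confidence volume into the binary lung nodule region by thresholding, keeping the largest connected component, and binarizing; the threshold value of 0.5 is an illustrative assumption for the first threshold.

```python
import numpy as np
from scipy import ndimage

def extract_nodule_region(confidence, first_threshold=0.5):
    mask = confidence > first_threshold
    labels, num = ndimage.label(mask)               # connect neighbouring voxels
    if num == 0:
        return np.zeros_like(mask, dtype=np.uint8)
    sizes = ndimage.sum(mask, labels, range(1, num + 1))
    largest = 1 + int(np.argmax(sizes))             # label of the biggest region
    return (labels == largest).astype(np.uint8)     # 1 inside the nodule, 0 elsewhere
```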
Step 206, determining the volume of the lung nodule according to the lung nodule region.
After the lung nodule region is obtained in step 205, the volume of the lung nodule may be calculated. Specifically, the number of pixel points in the lung nodule region may be obtained, and the volume of the lung nodule may then be determined according to the number of pixel points in the lung nodule region and a preset scale.
The preset scale may be a preset scale for each dimension of the 3D image and may be set empirically. The number of pixel points in the lung nodule region may be obtained by counting the pixel points in the lung nodule region. The volume of the lung nodule can be obtained by multiplying the number of pixel points in each dimension by the preset scale in that dimension.
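A sketch of the final volume computation under the assumption that the preset scale is the physical voxel spacing in each dimension, so the volume is the voxel count multiplied by the volume of one voxel.

```python
import numpy as np

def nodule_volume_mm3(nodule_mask, spacing_mm):
    """spacing_mm: (x, y, z) voxel spacing in millimetres (the preset scale)."""
    voxel_volume = float(np.prod(spacing_mm))       # mm^3 occupied by one voxel
    return int(nodule_mask.sum()) * voxel_volume
```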
Therefore, in the embodiment of the present invention, a chest 3D image and the three-dimensional coordinates of a lung nodule on the chest 3D image are obtained; a first ROI is determined from the chest 3D image according to the three-dimensional coordinates of the lung nodule; the first ROI is segmented along different dimensions to determine multiple groups of first 2D image layers corresponding to the different dimensions, each group of 2D image layers comprising multiple frames of 2D images; the confidence that each pixel point in each frame of 2D image in each group of first 2D image layers corresponding to the different dimensions belongs to the lung nodule region is obtained through a convolutional neural network region segmentation model; the lung nodule region is determined according to these confidences; and the volume of the lung nodule is determined according to the lung nodule region. By segmenting the first ROI determined from the chest 3D image along different dimensions, multiple groups of first 2D image layers corresponding to the different dimensions can be obtained; the groups of first 2D image layers corresponding to the different dimensions, together with their corresponding groups of second 2D image layers, are input in turn into the convolutional neural network region segmentation model to obtain the confidence that each pixel point in each frame of 2D image belongs to the lung nodule region, so that the lung nodule region can be obtained.
Based on the same technical concept, fig. 5 exemplarily shows the structure of an apparatus for determining lung nodule volume according to an embodiment of the present invention. The apparatus may perform the flow for determining lung nodule volume and may be located in the server 100 shown in fig. 1 or may be the server 100 itself.
As shown in fig. 5, the apparatus specifically includes:
an obtaining unit 501, configured to obtain a chest 3D image and three-dimensional coordinates of a lung nodule on the chest 3D image;
a processing unit 502 for determining a first ROI from the 3D image of the chest according to the three-dimensional coordinates of the lung nodule; segmenting the first ROI along different dimensions, and determining multiple groups of first 2D image layers corresponding to the different dimensions; each group of 2D image layers comprises a plurality of frames of 2D images; obtaining confidence coefficients of pixel points in each frame of 2D image in each group of first 2D image layers corresponding to different dimensions, which belong to lung nodule regions, through a convolutional neural network region segmentation model; determining the lung nodule region according to the confidence coefficient that pixel points in each frame of 2D image in each group of first 2D image layers corresponding to different dimensions belong to the lung nodule region; determining a volume of the lung nodule from the lung nodule region.
Optionally, the processing unit 502 is specifically configured to:
sequentially inputting each group of first 2D image layers corresponding to the different dimensions, together with each corresponding group of second 2D image layers, as multiple channels into the convolutional neural network region segmentation model to obtain the confidence that each pixel point in each frame of 2D image in each group of first 2D image layers corresponding to the different dimensions belongs to the lung nodule region, wherein the second 2D image layer corresponding to any group of first 2D image layers of any dimension refers to an image layer composed of the frames of lung 2D images whose positions and regions correspond to those of the frames of 2D images in that first 2D image layer.
Optionally, the processing unit 502 is specifically configured to:
each frame of lung 2D image is obtained by:
segmenting lung regions in the chest 3D image to obtain a lung 3D image;
determining a second ROI from the lung 3D image according to the three-dimensional coordinates of the lung nodule;
and segmenting the second ROI along different dimensions to determine a plurality of groups of second 2D image layers corresponding to the different dimensions, wherein the second 2D image layer of any dimension comprises a plurality of frames of lung 2D images.
Optionally, the processing unit 502 is specifically configured to:
for any pixel point of any dimensionality, determining the confidence degree that the pixel point belongs to the lung nodule region in the dimensionality based on the confidence degree that the pixel point belongs to the lung nodule region in each frame of 2D image of each first 2D image layer;
obtaining a confidence distribution graph corresponding to each frame of 2D image of the first ROI under a preset dimension based on the confidence that any pixel belongs to the lung nodule region under different dimensions, wherein the confidence distribution graph corresponding to each frame of 2D image is related to the confidence that each pixel on the frame of 2D image belongs to the lung nodule region under different dimensions;
and obtaining a lung nodule region based on the confidence distribution graph corresponding to each frame of 2D image under the preset dimension and a first threshold value.
Optionally, the processing unit 502 is specifically configured to:
acquiring the number of pixel points in the lung nodule region;
and determining the volume of the lung nodule according to the number of the pixel points in the lung nodule region and a preset scale.
Based on the same technical concept, an embodiment of the present invention further provides a computing device, including:
a memory for storing program instructions;
and a processor for calling the program instructions stored in the memory and executing the method for determining the volume of the lung nodule according to the obtained program.
Based on the same technical concept, embodiments of the present invention also provide a computer-readable non-transitory storage medium including computer-readable instructions, which when read and executed by a computer, cause the computer to perform the above method for determining lung nodule volume.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (6)

1. A method of determining lung nodule volume, comprising:
acquiring a chest 3D image and three-dimensional coordinates of lung nodules on the chest 3D image;
determining a first ROI from the chest 3D image according to the three-dimensional coordinates of the lung nodule;
segmenting the first ROI along different dimensions, and determining multiple groups of first 2D image layers corresponding to the different dimensions; each group of 2D image layers comprises a plurality of frames of 2D images;
sequentially inputting each group of first 2D image layers corresponding to the different dimensions, together with each corresponding group of second 2D image layers, as multiple channels into a convolutional neural network region segmentation model to obtain the confidence that each pixel point in each frame of 2D image in each group of first 2D image layers corresponding to the different dimensions belongs to a lung nodule region, wherein the second 2D image layer corresponding to any group of first 2D image layers of any dimension refers to an image layer composed of the frames of lung 2D images whose positions and regions correspond to those of the frames of 2D images in that first 2D image layer; each frame of lung 2D image is obtained as follows: segmenting the lung region in the chest 3D image to obtain a lung 3D image; determining a second ROI from the lung 3D image according to the three-dimensional coordinates of the lung nodule; and segmenting the second ROI along the different dimensions to determine multiple groups of second 2D image layers corresponding to the different dimensions, wherein the second 2D image layer of any dimension comprises multiple frames of lung 2D images;
determining the lung nodule region according to the confidence coefficient that pixel points in each frame of 2D image in each group of first 2D image layers corresponding to different dimensions belong to the lung nodule region;
determining a volume of the lung nodule from the lung nodule region.
2. The method of claim 1, wherein the determining the lung nodule region according to the confidence that the pixel point in each frame 2D image in each group of first 2D image layers corresponding to different dimensions belongs to the lung nodule region comprises:
for any pixel point of any dimensionality, determining the confidence degree that the pixel point belongs to the lung nodule region in the dimensionality based on the confidence degree that the pixel point belongs to the lung nodule region in each frame of 2D image of each first 2D image layer;
obtaining a confidence distribution graph corresponding to each frame of 2D image of the first ROI under a preset dimension based on the confidence that any pixel belongs to the lung nodule region under different dimensions, wherein the confidence distribution graph corresponding to each frame of 2D image is related to the confidence that each pixel on the frame of 2D image belongs to the lung nodule region under different dimensions;
and obtaining a lung nodule region based on the confidence distribution graph corresponding to each frame of 2D image under the preset dimension and a first threshold value.
3. The method of claim 1, wherein said determining a volume of said lung nodule from said lung nodule region comprises:
acquiring the number of pixel points in the lung nodule region;
and determining the volume of the lung nodule according to the number of the pixel points in the lung nodule region and a preset scale.
4. An apparatus for determining lung nodule volume, comprising:
an acquisition unit, configured to acquire a chest 3D image and the three-dimensional coordinates of a lung nodule on the chest 3D image; and
a processing unit, configured to: determine a first ROI from the chest 3D image according to the three-dimensional coordinates of the lung nodule; segment the first ROI along different dimensions to determine multiple groups of first 2D image layers corresponding to the different dimensions, wherein each group of first 2D image layers comprises a plurality of frames of 2D images; sequentially input, as multiple channels, each group of first 2D image layers corresponding to the different dimensions together with the group of second 2D image layers corresponding to that group of first 2D image layers into a convolutional neural network region segmentation model, so as to obtain the confidence that each pixel point in each frame of 2D image in each group of first 2D image layers corresponding to the different dimensions belongs to a lung nodule region, wherein the group of second 2D image layers corresponding to the group of first 2D image layers of any dimension refers to: an image layer composed of the frames of lung 2D images whose positions and regions correspond to those of the 2D images in that group of first 2D image layers, and each frame of lung 2D image is obtained as follows: segmenting the lung region in the chest 3D image to obtain a lung 3D image, determining a second ROI from the lung 3D image according to the three-dimensional coordinates of the lung nodule, and segmenting the second ROI along the different dimensions to determine multiple groups of second 2D image layers corresponding to the different dimensions, wherein the second 2D image layer of any dimension comprises a plurality of frames of lung 2D images; determine the lung nodule region according to the confidence that the pixel points in each frame of 2D image in each group of first 2D image layers corresponding to the different dimensions belong to the lung nodule region; and determine the volume of the lung nodule from the lung nodule region.
5. A computing device, comprising:
a memory for storing program instructions;
a processor, configured to call the program instructions stored in the memory and to execute the method of any one of claims 1 to 3 in accordance with the obtained program.
6. A computer-readable non-transitory storage medium including computer-readable instructions which, when read and executed by a computer, cause the computer to perform the method of any one of claims 1 to 3.
CN201911024057.4A 2019-10-25 2019-10-25 Method and device for determining volume of lung nodule Active CN110782446B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911024057.4A CN110782446B (en) 2019-10-25 2019-10-25 Method and device for determining volume of lung nodule

Publications (2)

Publication Number Publication Date
CN110782446A CN110782446A (en) 2020-02-11
CN110782446B true CN110782446B (en) 2022-04-15

Family

ID=69386611

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911024057.4A Active CN110782446B (en) 2019-10-25 2019-10-25 Method and device for determining volume of lung nodule

Country Status (1)

Country Link
CN (1) CN110782446B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111340756B (en) * 2020-02-13 2023-11-28 北京深睿博联科技有限责任公司 Medical image lesion detection merging method, system, terminal and storage medium
CN111402260A (en) * 2020-02-17 2020-07-10 北京深睿博联科技有限责任公司 Medical image segmentation method, system, terminal and storage medium based on deep learning
CN111047591A (en) * 2020-03-13 2020-04-21 北京深睿博联科技有限责任公司 Focal volume measuring method, system, terminal and storage medium based on deep learning
CN111967462B (en) * 2020-04-26 2024-02-02 杭州依图医疗技术有限公司 Method and device for acquiring region of interest
CN112712508B (en) * 2020-12-31 2024-05-14 杭州依图医疗技术有限公司 Pneumothorax determination method and pneumothorax determination device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108389202A (en) * 2018-03-16 2018-08-10 青岛海信医疗设备股份有限公司 Calculation method of physical volume, device, storage medium and the equipment of three-dimensional organ
CN108986085A (en) * 2018-06-28 2018-12-11 深圳视见医疗科技有限公司 CT image pulmonary nodule detection method, device, equipment and readable storage medium storing program for executing
CN109035261A (en) * 2018-08-09 2018-12-18 北京市商汤科技开发有限公司 Medical imaging processing method and processing device, electronic equipment and storage medium
CN109446951A (en) * 2018-10-16 2019-03-08 腾讯科技(深圳)有限公司 Semantic segmentation method, apparatus, equipment and the storage medium of 3-D image
CN109447963A (en) * 2018-10-22 2019-03-08 杭州依图医疗技术有限公司 A kind of method and device of brain phantom identification

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108446730B (en) * 2018-03-16 2021-05-28 推想医疗科技股份有限公司 CT pulmonary nodule detection device based on deep learning

Also Published As

Publication number Publication date
CN110782446A (en) 2020-02-11

Similar Documents

Publication Publication Date Title
CN110782446B (en) Method and device for determining volume of lung nodule
CN110934606B (en) Cerebral apoplexy early-stage flat-scan CT image evaluation system and method and readable storage medium
CN109446951B (en) Semantic segmentation method, device and equipment for three-dimensional image and storage medium
CN108629784A (en) A kind of CT image intracranial vessel dividing methods and system based on deep learning
CN111340756B (en) Medical image lesion detection merging method, system, terminal and storage medium
US10970837B2 (en) Automated uncertainty estimation of lesion segmentation
CN110400299A (en) A kind of method and device of lung's pleural effusion detection
CN113436166A (en) Intracranial aneurysm detection method and system based on magnetic resonance angiography data
CN113077479A (en) Automatic segmentation method, system, terminal and medium for acute ischemic stroke focus
US10878564B2 (en) Systems and methods for processing 3D anatomical volumes based on localization of 2D slices thereof
CN110619635A (en) Hepatocellular carcinoma magnetic resonance image segmentation system and method based on deep learning
US11600379B2 (en) Systems and methods for generating classifying and quantitative analysis reports of aneurysms from medical image data
US8306354B2 (en) Image processing apparatus, method, and program
CN114648541A (en) Automatic segmentation method for non-small cell lung cancer gross tumor target area
WO2009029670A1 (en) Object segmentation using dynamic programming
CN113487536A (en) Image segmentation method, computer device and storage medium
CN110992310A (en) Method and device for determining partition where mediastinal lymph node is located
CN116758087B (en) Lumbar vertebra CT bone window side recess gap detection method and device
CN116862930B (en) Cerebral vessel segmentation method, device, equipment and storage medium suitable for multiple modes
CN113658106A (en) Liver focus automatic diagnosis system based on abdomen enhanced CT
CN111967462A (en) Method and device for acquiring region of interest
CN109767468B (en) Visceral volume detection method and device
CN111105476A (en) Three-dimensional reconstruction method for CT image based on Marching Cubes
CN112712507B (en) Method and device for determining calcified region of coronary artery
CN110533637B (en) Method and device for detecting object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant