CN111724360A - Lung lobe segmentation method and device and storage medium


Info

Publication number
CN111724360A
Authority
CN
China
Prior art keywords
lung
image
segmented
images
fused
Prior art date
Legal status
Granted
Application number
CN202010534722.0A
Other languages
Chinese (zh)
Other versions
CN111724360B (en)
Inventor
***
杨英健
刘洋
郭英委
曾吴涛
康雁
Current Assignee
Shenzhen Technology University
Original Assignee
Shenzhen Technology University
Priority date
Filing date
Publication date
Application filed by Shenzhen Technology University
Priority to CN202010534722.0A
Publication of CN111724360A
Application granted
Publication of CN111724360B
Legal status: Active
Anticipated expiration

Classifications

    • G06T7/0012 Biomedical image inspection (G06T — image data processing or generation; G06T7/00 — image analysis; G06T7/0002 — inspection of images, e.g. flaw detection)
    • G06T7/11 Region-based segmentation (G06T7/10 — segmentation; edge detection)
    • G06T2207/10081 Computed x-ray tomography [CT] (G06T2207/10 — image acquisition modality; G06T2207/10072 — tomographic images)
    • G06T2207/20081 Training; learning (G06T2207/20 — special algorithmic details)
    • G06T2207/20084 Artificial neural networks [ANN] (G06T2207/20 — special algorithmic details)
    • G06T2207/20221 Image fusion; image merging (G06T2207/20212 — image combination)
    • G06T2207/30061 Lung (G06T2207/30 — subject of image; G06T2207/30004 — biomedical image processing)

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a lung lobe segmentation method, a device and a storage medium, relating to the field of medical image processing. The lung lobe segmentation method comprises the following steps: acquiring lung images at multiple moments in a breathing process; determining a lung image to be segmented in the multi-time lung images, wherein the lung images other than the lung image to be segmented serve as first lung images; fusing the lung image to be segmented with at least one first lung image to obtain a fused lung image, wherein the at least one first lung image comprises at least one lung image at a moment before the image to be segmented and/or at least one lung image at a moment after the image to be segmented; and segmenting the fused lung image with a preset lung lobe segmentation model to obtain a lung lobe image of the lung image to be segmented. The method aims to solve the problem of poor segmentation caused by insufficient feature information of the lung (lobe).

Description

Lung lobe segmentation method and device and storage medium
Technical Field
The invention relates to the field of medical image processing, in particular to a lung lobe segmentation method, a device and a storage medium.
Background
The blunt, rounded upper end of the lung is called the apex of the lung; it protrudes upward through the superior thoracic aperture into the root of the neck. The base of the lung rests on the diaphragm. The surface facing the ribs and intercostal spaces is called the costal surface, and the surface facing the mediastinum is called the medial surface; the opening at the center of the medial surface, through which the bronchus, blood vessels, lymphatic vessels and nerves enter and leave, is called the hilum of the lung, and the structures passing through the hilum, wrapped in connective tissue, are called the root of the lung. The left lung is divided into upper and lower lobes by the oblique fissure, while the right lung is divided into upper, middle and lower lobes by a horizontal fissure in addition to the oblique fissure.
At present, whether lung lobes are segmented by traditional machine learning or by deep learning, classification is performed according to the features of the lung (lobes), so these features are particularly important: if the feature information of a lung (lobe) is sufficient, a classifier can learn and classify better and thus complete lung lobe segmentation better.
Disclosure of Invention
In view of the above, the present invention provides a lung lobe segmentation method, device and storage medium to solve the current problem of poor segmentation caused by insufficient lung (lobe) feature information.
In a first aspect, the present invention provides a lung lobe segmentation method, including:
acquiring lung images at multiple moments in a breathing process;
determining a lung image to be segmented in the multi-time lung images, wherein the lung images other than the lung image to be segmented serve as first lung images;
fusing the lung image to be segmented with at least one first lung image to obtain a fused lung image, wherein the at least one first lung image comprises at least one lung image at a moment before the image to be segmented and/or at least one lung image at a moment after the image to be segmented;
and segmenting the fused lung image by using a preset lung lobe segmentation model to obtain a lung lobe image of the lung image to be segmented.
Preferably, the method for fusing the lung image to be segmented with the at least one first lung image to obtain a fused lung image comprises:
performing registration operation from the at least one first lung image to the lung image to be segmented to obtain a lung image to be fused;
fusing the lung image to be fused and the lung image to be segmented to obtain a fused lung image;
and/or,
the method for determining the lung image to be segmented at a certain moment in the multi-moment lung image comprises the following steps:
calculating the lung volumes in the multi-time lung images, and determining the lung image with the largest lung volume as the lung image to be segmented.
Preferably, the method for fusing the lung image to be fused and the lung image to be segmented to obtain a fused lung image includes:
determining a weight value of the lung image to be fused;
obtaining a weighted lung image according to the weighted value and the lung image to be fused;
fusing the weighted lung image and the lung image to be segmented to obtain a fused lung image;
and/or,
the method for determining the lung image to be segmented at a certain moment in the multi-moment lung image further comprises the following steps:
before calculating the lung volume in the multi-time lung image, respectively extracting the left lung and the right lung of the multi-time lung image, respectively calculating a first volume of the left lung and a second volume of the right lung in the multi-time lung image, and respectively calculating the lung volume in the multi-time lung image according to the first volume and the second volume.
Preferably, the method for determining the weight value of the lung image to be fused comprises: determining registration points of the lung image to be fused, wherein feature points other than the registration points are non-registration points, and the weight values of the registration points are set greater than those of the non-registration points;
and/or,
the method for obtaining the fused lung image by fusing the weighted lung image and the lung image to be segmented comprises the following steps: performing summation processing on the weighted lung image and the lung image to be segmented to obtain a fused lung image;
and/or,
the method for obtaining the fused lung image by fusing the weighted lung image and the lung image to be segmented comprises the following steps: and fusing the lung image to be fused and the lung image to be segmented by utilizing a first preset neural network to obtain a fused lung image.
Preferably, the method for performing the registration operation of the at least one first lung image to the lung image to be segmented to obtain the lung image to be fused includes:
extracting images at the same position from the at least one first lung image and the image to be segmented, to obtain a lung motion sequence image formed by the images extracted at that position;
respectively calculating lung displacement of adjacent images in the lung motion sequence image, and executing registration operation from the at least one first lung image to the lung image to be segmented according to the lung displacement;
and/or,
the lung lobe segmentation method further comprises: providing at least two preset lung lobe segmentation models, fusing the features of the lung lobe segmentation images obtained by the preset lung lobe segmentation models to obtain fused features, and classifying the fused features to obtain the final lung lobe image.
Preferably, the method for extracting images at the same position from the at least one first lung image and the image to be segmented, to obtain a lung motion sequence image formed by the images extracted at that position, comprises:
determining the number of layers of the lung images at the multiple moments;
determining the lung images of the at least one first lung image and the image to be segmented at the same position according to the number of layers;
obtaining the lung motion sequence image according to the lung images at the same position at multiple moments;
and/or,
the method for respectively calculating the lung displacement of the adjacent images in the lung motion sequence image comprises the following steps:
respectively determining first forward optical flows of adjacent images in the lung motion sequence images;
determining lung displacement of the adjacent images according to the first forward optical flows respectively;
and/or,
the method for fusing the features of the lung lobe segmentation images obtained by the preset lung lobe segmentation models to obtain the fused features comprises:
splicing the lung lobe segmentation images obtained by the preset lung lobe segmentation models to obtain spliced features, and inputting the spliced features into a second preset neural network for a convolution operation to obtain the fused features.
Preferably, the method for respectively calculating lung displacement of adjacent images in the lung motion sequence images further comprises:
determining first backward optical flows corresponding to the first forward optical flows, respectively;
determining the lung displacement of the adjacent images according to the first forward optical flows and the first backward optical flows, respectively;
and/or,
the method for respectively calculating the lung displacement of the adjacent images in the lung motion sequence image further comprises the following steps: performing optical flow optimization processing on the first forward optical flows and the first backward optical flows respectively to obtain second forward optical flows corresponding to the first forward optical flows and second backward optical flows corresponding to the first backward optical flows; determining lung displacement of the neighboring image from the second forward optical flow and the second backward optical flow, respectively.
Preferably, the method of determining the lung displacement of the adjacent images according to the second forward optical flow and the second backward optical flow, respectively, comprises:
calculating the second forward optical flow and the second backward optical flow respectively to obtain corrected optical flows;
and respectively determining the lung displacement of the adjacent images according to the corrected optical flow.
Preferably, the method of performing optical-flow optimization processing on the first forward optical flow and the first backward optical flow respectively to obtain a second forward optical flow corresponding to each of the first forward optical flows and a second backward optical flow corresponding to each of the first backward optical flows includes:
connecting the first forward optical flows to obtain a first connection optical flow, and connecting the first backward optical flows to obtain a second connection optical flow;
respectively executing N times of optical flow optimization processing on the first connection optical flow and the second connection optical flow to obtain a first optimized optical flow corresponding to the first connection optical flow and a second optimized optical flow corresponding to the second connection optical flow;
obtaining a second forward optical flow corresponding to each first forward optical flow according to the first optimized optical flow, and obtaining a second backward optical flow corresponding to each first backward optical flow according to the second optimized optical flow;
wherein N is a positive integer greater than or equal to 1.
Preferably, the performing N-times optical flow optimization processing on the first and second connected optical flows, respectively, includes:
performing first optical flow optimization processing on the first connection optical flow and the second connection optical flow to obtain a first optimized sub-optical flow corresponding to the first connection optical flow and a first optimized sub-optical flow corresponding to the second connection optical flow; and
performing the (i + 1)-th optical flow optimization processing on the i-th optimized sub-optical flows of the first connection optical flow and the second connection optical flow, respectively, to obtain an (i + 1)-th optimized sub-optical flow corresponding to the first connection optical flow and an (i + 1)-th optimized sub-optical flow corresponding to the second connection optical flow;
wherein i is a positive integer greater than 1 and less than N; through the N-th optimization processing, the obtained N-th optimized sub-optical flow of the first connection optical flow is determined as the first optimized optical flow, and the obtained N-th optimized sub-optical flow of the second connection optical flow is determined as the second optimized optical flow; each optical flow optimization processing comprises residual processing and upsampling processing.
Preferably, the first forward optical flows of adjacent images in the lung motion sequence image are determined according to the forward time order of the multi-time lung images, and the first backward optical flows of adjacent images in the lung motion sequence image are determined according to the backward time order of the multi-time lung images.
In a second aspect, the present invention provides a lung lobe segmentation apparatus, comprising:
the acquisition unit is used for acquiring lung images at multiple moments in the respiratory process;
a determining unit, configured to determine a lung image to be segmented in the multi-time lung images, where the lung images other than the lung image to be segmented serve as first lung images;
a fusion unit, configured to fuse the lung image to be segmented with at least one first lung image to obtain a fused lung image, wherein the at least one first lung image comprises at least one lung image at a moment before the image to be segmented and/or at least one lung image at a moment after the image to be segmented;
and the segmentation unit is used for segmenting the fused lung image by using a preset lung lobe segmentation model to obtain a lung lobe image of the lung image to be segmented.
In a third aspect, the present invention provides a storage medium storing a computer program which, when executed by a processor, implements the method described above, comprising:
acquiring lung images at multiple moments in a breathing process;
determining a lung image to be segmented in the multi-time lung images, wherein the lung images other than the lung image to be segmented serve as first lung images;
fusing the lung image to be segmented with at least one first lung image to obtain a fused lung image, wherein the at least one first lung image comprises at least one lung image at a moment before the image to be segmented and/or at least one lung image at a moment after the image to be segmented;
and segmenting the fused lung image by using a preset lung lobe segmentation model to obtain a lung lobe image of the lung image to be segmented.
The invention has at least the following beneficial effects:
the invention provides a lung lobe segmentation method, a device and a storage medium, which aim to solve the problem of poor segmentation effect caused by insufficient characteristic information of the current lung (lobe).
Drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following description of the embodiments of the present invention with reference to the accompanying drawings, in which:
fig. 1 is a flowchart illustrating a lung lobe segmentation method according to an embodiment of the present invention.
Detailed Description
The present invention will be described below based on embodiments, but it should be noted that the present invention is not limited to these embodiments. In the following detailed description of the present invention, certain specific details are set forth; however, those skilled in the art can fully understand the invention even where such details are not described.
Furthermore, those skilled in the art will appreciate that the drawings are provided solely for the purposes of illustrating the invention, features and advantages thereof, and are not necessarily drawn to scale.
Also, unless the context clearly requires otherwise, throughout the description and the claims, the words "comprise", "comprising", and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is, the meaning of "includes but is not limited to".
The main body of the lung lobe segmentation method provided by the embodiment of the present disclosure may be any image processing apparatus, for example, the lung lobe segmentation method may be executed by a terminal device or a server, where the terminal device may be a User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, and the like. The server may be a local server or a cloud server. In some possible implementations, the method of lung lobe segmentation may be implemented by a processor calling computer readable instructions stored in a memory.
Fig. 1 is a flowchart illustrating a lung lobe segmentation method according to an embodiment of the present invention. As shown in fig. 1, the lung lobe segmentation method includes: step 101: acquiring lung images at multiple moments in a breathing process; step 102: determining a lung image to be segmented in the multi-time lung images, wherein the lung images other than the lung image to be segmented serve as first lung images; step 103: fusing the lung image to be segmented with at least one first lung image to obtain a fused lung image, wherein the at least one first lung image comprises at least one lung image at a moment before the image to be segmented and/or at least one lung image at a moment after the image to be segmented; step 104: segmenting the fused lung image with a preset lung lobe segmentation model to obtain a lung lobe image of the lung image to be segmented. The method aims to solve the problem of poor segmentation caused by insufficient feature information of the lung (lobe).
Step 101: acquiring lung images at multiple moments in the breathing process.
In the embodiment of the present disclosure, the acquired multi-time lung images may be lung images at multiple moments during inspiration, lung images at multiple moments during expiration, or lung images at multiple moments during both inspiration and expiration; the multi-time lung images are lung images obtained from the same patient at multiple moments during expiration and/or inspiration. A moment in the embodiments of the present disclosure may denote a time period, i.e. the time information over which one group of lung images is acquired. The specific acquisition process may follow the guidance of an imaging physician; for example, during breathing, at least one group of lung images may be acquired at deep inspiration, at least one group at deep expiration, and at least one group in a resting state, the resting state being the group of lung images acquired after normal exhalation. As another example, over the course from inspiration to expiration, the patient holds the breath at different moments of the inspiration or expiration phase so that multi-time lung images can be acquired.
The segmentation result may include position information corresponding to each region (lung lobe) in the identified lung image. For example, the lung image may include five lung lobe regions, which are the upper right lobe, the middle right lobe, the lower right lobe, the upper left lobe and the lower left lobe, respectively, and the obtained segmentation result may include the position information of the five lung lobes in the lung image. The segmentation result may be represented in a mask feature manner, that is, the segmentation result obtained in the embodiment of the present disclosure may be represented in a mask form, for example, the embodiment of the present disclosure may allocate unique corresponding mask values, such as 1, 2, 3, 4, and 5, to the above five lung lobe regions, respectively, and a region formed by each mask value is a position region where a corresponding lung lobe is located. The mask values described above are merely exemplary, and other mask values may be configured in other embodiments.
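For illustration only, the mask representation above can be read back into per-lobe position information as in the following sketch (the label-to-lobe mapping simply follows the example mask values; lobe_positions is a hypothetical helper, not part of the disclosure):

```python
import numpy as np

# Assumed label mapping, following the example mask values given above.
LOBE_LABELS = {
    1: "right upper lobe",
    2: "right middle lobe",
    3: "right lower lobe",
    4: "left upper lobe",
    5: "left lower lobe",
}

def lobe_positions(mask: np.ndarray) -> dict:
    """Voxel coordinates of each lobe in an integer mask volume (0 = background)."""
    return {name: np.argwhere(mask == label) for label, name in LOBE_LABELS.items()}
```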
In some possible implementations, the disclosed embodiments may obtain the multi-time lung images by CT (computed tomography) scanning. The specific method comprises: determining the number of scanning layers, the layer thickness and the inter-layer spacing of the multi-time lung images; and acquiring the multi-time lung images according to the number of scanning layers, the layer thickness and the inter-layer spacing. A lung image obtained in the embodiment of the present disclosure is composed of multiple layers of images and can be regarded as a three-dimensional image structure.
In some possible embodiments, the multi-time lung images may be requested from another electronic device or a server; that is, multiple groups of lung images may be obtained, where each group corresponds to one moment and the multiple groups together constitute the multi-time lung images. In addition, in the embodiment of the present disclosure, in order to reduce interference from features outside the lung, lung parenchyma segmentation may be performed on each lung image when it is obtained, the position of the lung region in the lung image determined, and the image of that region used as the lung image for subsequent processing. The lung parenchyma segmentation may be obtained in an existing manner, for example through a deep learning neural network or a lung parenchyma segmentation algorithm, which is not specifically limited by the present disclosure.
Step 102: determining a lung image to be segmented in the multi-time lung images, wherein the lung images other than the lung image to be segmented serve as first lung images.
In some possible embodiments, the lung image at any one time in the lung images at multiple times may be determined as the lung image to be segmented, or input time information may be received, and the lung image corresponding to the time information may be determined as the lung image to be segmented.
Alternatively, in this embodiment of the present disclosure, the method for determining the lung image to be segmented in the multi-time lung images, with the lung images other than the lung image to be segmented serving as first lung images, may also comprise: respectively calculating the lung volume of each of the multi-time lung images, and determining the lung image with the largest lung volume as the lung image to be segmented.
That is to say, in the embodiment of the present disclosure, the lung image with the largest lung volume may be determined as the lung image to be segmented, so that the lung lobe feature may be more fully embodied, and the lung lobe segmentation accuracy is improved.
In this embodiment of the present disclosure, the method for determining a to-be-segmented lung image in the multi-time lung images, where the lung image other than the to-be-segmented lung image is used as the first lung image, further includes: before calculating the lung volume in the multi-time lung image, respectively extracting a left lung and a right lung of the multi-time lung image, respectively calculating a first volume of the left lung and a second volume of the right lung of the multi-time lung image, and respectively calculating the lung volume in the multi-time lung image according to the first volume and the second volume. Specifically, the lung volume in the multi-temporal lung image is a sum of a first volume of the left lung and a second volume of the right lung, respectively. The left and right lungs may be extracted by using a lung parenchyma extraction algorithm or a neural network for lung parenchyma segmentation, so as to obtain left and right lung regions. The calculation of the left and right lung volumes may be obtained by using the sum of the areas of the left and right lungs extracted from each slice of the lung image, respectively. Other calculations may be used by those skilled in the art and are not specifically limited by this disclosure.
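As a minimal sketch of the volume rule described above (binary left/right parenchyma masks are assumed, with voxel spacing taken from the CT header; the function names are illustrative only):

```python
import numpy as np

def lung_volume_mm3(parenchyma_mask: np.ndarray, spacing_mm: tuple) -> float:
    """Volume of a binary lung mask (slices, rows, cols): summing the extracted
    lung area of every slice, here realized as voxel count times voxel size."""
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return float(parenchyma_mask.sum()) * voxel_mm3

def index_of_image_to_segment(left_masks, right_masks, spacing_mm) -> int:
    """Index of the moment whose lung volume (first volume + second volume) is largest."""
    totals = [lung_volume_mm3(l, spacing_mm) + lung_volume_mm3(r, spacing_mm)
              for l, r in zip(left_masks, right_masks)]
    return int(np.argmax(totals))
```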
Step 103: fusing the lung image to be segmented with at least one first lung image to obtain a fused lung image, wherein the at least one first lung image comprises at least one lung image at a moment before the image to be segmented and/or at least one lung image at a moment after the image to be segmented.
In some possible embodiments, the fused lung image may be obtained by performing supplementary correction and fusion on the features of the to-be-segmented image by using at least one group of lung images at a time before the to-be-segmented lung image and/or at least one group of lung images at a time after the to-be-segmented lung image. Alternatively, in the embodiment of the present disclosure, the lung images other than the lung image to be segmented may be all used as the first lung image, so that all feature information of the lung image extracted in the breathing process may be retained.
In an embodiment of the present disclosure, the method for fusing the lung image to be segmented with at least one first lung image to obtain a fused lung image comprises:
performing registration operation from the first lung image to the lung image to be segmented to obtain a lung image to be fused; and fusing the lung image to be fused and the lung image to be segmented to obtain a fused lung image.
In some embodiments of the present invention, the registration operation finds the corresponding points between the lung image to be segmented and the lung images at moments before and/or after it, completing the matching between the lung image to be segmented and the first lung images; the image features of the first lung images at the respective moments can then be fused through the registration process. The registration between each first lung image and the lung image to be segmented can be realized with a registration algorithm that registers the first lung image to the lung image to be segmented. The registration algorithm may be an elastic registration algorithm, or a deep learning method such as a VGG network (VGG-Net) (see, e.g., work on deformable image registration with convolutional neural networks) or a U-Net (see, e.g., "Pulmonary CT Registration Through Supervised Learning With Convolutional Neural Networks"). The invention is not limited to a specific registration algorithm.
In other embodiments of the present disclosure, the method for performing the registration operation of the first lung image to the lung image to be segmented to obtain a lung image to be fused comprises: extracting images at the same position from the at least one first lung image and the image to be segmented to obtain a lung motion sequence image formed by the images extracted at that position; and respectively calculating the lung displacements of adjacent images in the lung motion sequence image, and performing the registration operation of the at least one first lung image to the lung image to be segmented according to the lung displacements.
In the embodiment of the present disclosure, the same positions may be represented by the same number of layers, and as described in the above embodiment, each group of lung images may include multiple layers of images, and the images with the same number of layers are selected from each group of lung images in the first lung image and the lung image to be segmented, so as to form a group of lung motion sequence images. That is to say, the embodiments of the present disclosure may obtain the same number of sets of lung motion sequence images as the number of layers, that is, the lung motion sequence images at each position.
In the embodiment of the disclosure, the method for extracting images at the same position from the at least one first lung image and the image to be segmented to obtain the lung motion sequence image comprises: determining the number of layers of the multi-time lung images; determining the lung images of the at least one first lung image and the image to be segmented at the same position according to the number of layers; and obtaining the lung motion sequence image from the lung images at that position.
In a specific embodiment of the invention, when the multi-time lung images are acquired during breathing, the number of scanning layers, the layer thickness and the inter-layer spacing are already determined, so the lung images of the multi-time lung images at the same position can be identified by layer number, and the lung images at the same position can be selected from the multi-time lung images to obtain a lung motion sequence image. For example, the position corresponding to the N-th layer of the lung image at the first moment is the same as the positions corresponding to the N-th layers of the lung images at the second through M-th moments: all are the same lung plane. Combining this same lung plane at all moments forms one lung motion sequence image, where M is an integer greater than 1 representing the number of moments (groups), and N denotes any layer index.
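A sketch of this grouping, assuming each moment yields a volume of identical shape (num_layers, H, W) as described above (the function name is illustrative):

```python
import numpy as np

def lung_motion_sequences(volumes: list) -> np.ndarray:
    """Collect the N-th layer of every moment into one motion sequence.

    volumes: M arrays of shape (num_layers, H, W), ordered by moment.
    Returns (num_layers, M, H, W); element [n] is the lung motion sequence
    image F1N, F2N, ..., FMN for layer position n.
    """
    stacked = np.stack(volumes, axis=0)          # (M, num_layers, H, W)
    return np.transpose(stacked, (1, 0, 2, 3))   # (num_layers, M, H, W)
```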
When multiple lung motion sequence images are obtained, the image corresponding to the image to be segmented in each sequence can be determined, the rest being images corresponding to the first lung images. The images in a lung motion sequence image are arranged in time order. Since the image to be segmented has been determined in the foregoing embodiments, its corresponding moment is also known, and the image corresponding to the image to be segmented and the images corresponding to the first lung images can be determined by moment within the lung motion sequence image. Each lung motion sequence image contains one layer of each lung image; for convenience of the following embodiments these layers are still referred to as the image to be segmented or the first lung image, but it should be noted that the images in a lung motion sequence image are only the corresponding layers of the lung image to be segmented and of the first lung images.
In the embodiment of the present disclosure, once a lung motion sequence image is obtained, the motion between the first lung images and the lung image to be segmented can be estimated. That is, the lung displacements of adjacent images in the lung motion sequence image can be calculated respectively, and the registration operation of the at least one first lung image to the lung image to be segmented can be performed according to the lung displacements. By determining the lung displacements between adjacent images, the lung displacement between each first lung image and the lung image to be segmented can be determined, and the registration of the first lung image to the lung image to be segmented can then be performed. Here the lung displacement may represent the displacement of the lung feature points between the first lung image and the lung image to be segmented.
In an embodiment of the present disclosure, the method for separately calculating lung displacements of adjacent images in the lung motion sequence image includes: respectively determining first forward optical flows of adjacent images in the lung motion sequence images; determining lung displacement of the neighboring images according to the first forward optical flows, respectively.
In a specific embodiment of the present invention, optical flow can be used to represent the change between moving images; it refers to the velocity of pattern motion in time-varying images. When the lung moves, the brightness pattern of its corresponding points on the image moves as well, so optical flow can be used to represent the change between images: it contains the information of the lung motion and can therefore be used to determine that motion. In the embodiment of the present disclosure, optical flow estimation is performed on each pair of adjacent images in a lung motion sequence image to obtain the optical flow information between them. Assume the moments corresponding to the multi-time lung images are t1, t2, ..., tM, where M denotes the number of groups. The N-th lung motion sequence image then comprises the N-th layer images F1N, F2N, ..., FMN of the M groups of lung images, i.e. the N-th layer image within each of the 1st to M-th groups.
When performing optical flow estimation, the first forward optical flows of adjacent images within each lung motion sequence image are obtained in the forward order of groups 1 to M, for example the first forward optical flow from F1N to F2N, the first forward optical flow from F2N to F3N, and so on, up to the first forward optical flow from F(M-1)N to FMN. The first forward optical flow represents the motion velocity information of each feature point in adjacent lung images arranged in forward time order. Specifically, the lung motion sequence images may be input into an optical flow estimation model to obtain the first forward optical flow between adjacent images; the optical flow estimation model may be FlowNet2.0 or another optical flow estimation model, which is not specifically limited by the present disclosure. Alternatively, optical flow estimation algorithms such as sparse or dense optical flow estimation algorithms may be used on the adjacent images, which is likewise not specifically limited by this disclosure.
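The text leaves the estimator open (FlowNet2.0, or sparse/dense optical flow algorithms). As one concrete stand-in, a sketch using OpenCV's dense Farneback estimator for the first forward optical flows of one sequence; the first backward optical flows are obtained the same way with the image order reversed:

```python
import cv2
import numpy as np

def first_forward_flows(seq: np.ndarray) -> list:
    """First forward optical flows between adjacent images of one lung motion
    sequence (seq: (M, H, W), ordered by moment). Returns M-1 flow fields (H, W, 2)."""
    flows = []
    for prev_img, next_img in zip(seq[:-1], seq[1:]):
        p = cv2.normalize(prev_img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        n = cv2.normalize(next_img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        # Farneback dense flow; the pyramid/window parameters are assumptions.
        flows.append(cv2.calcOpticalFlowFarneback(p, n, None, 0.5, 3, 15, 3, 5, 1.2, 0))
    return flows
```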
In a specific embodiment of the present invention, the method for determining the lung displacement of the adjacent images according to the first forward optical flow comprises: obtaining the lung displacement of the adjacent images using the velocity information of the first forward optical flow and the time information of the adjacent images in the lung motion sequence image. The DICOM file of a CT-acquired lung image records the scan time and the number of layers, and dividing the scan time by the number of layers approximately gives the time information of the adjacent images in the lung motion sequence image.
In the embodiment of the disclosure, each layer of images in the acquired lung images may have corresponding acquisition time information, and the product of the time difference of the acquisition times of two adjacent images in the lung motion sequence images and the first forward optical flow may be used to obtain the lung displacement of the two adjacent images within the time difference.
In addition, since the time difference between adjacent images in the lung motion sequence image is small, in the embodiment of the present disclosure the velocity information corresponding to the optical flow may also be taken as approximately equal to the lung displacement.
The image to be segmented and the first lung image are predetermined, so that a first forward optical flow of the first lung image and the image to be segmented in the lung motion sequence image and time information between the first lung image and the image to be segmented can be correspondingly and sequentially determined, and correspondingly, lung displacement between the first lung image and the lung image to be segmented can be obtained through the product of the first forward optical flow and the time information.
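A sketch of the displacement rule just described (the time approximation follows the scan-time/number-of-layers rule read from the DICOM header; treating the flow itself as the displacement for a small time difference is the stated alternative):

```python
import numpy as np

def adjacent_time_s(scan_time_s: float, num_layers: int) -> float:
    """Approximate time information between adjacent images, per the DICOM rule above."""
    return scan_time_s / num_layers

def lung_displacement(flow: np.ndarray, dt_s: float) -> np.ndarray:
    """Displacement = optical-flow velocity x time difference; for a small dt_s
    the flow may be used directly as the displacement, as noted above."""
    return flow * dt_s
```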
In an embodiment of the present disclosure, the method for respectively calculating the lung displacements of adjacent images in the lung motion sequence image further comprises: determining first backward optical flows corresponding to the first forward optical flows, respectively; and determining the lung displacement of the adjacent images according to the first forward optical flow and/or the first backward optical flow, respectively.
In an embodiment of the disclosure, the first forward optical flow of neighboring images in the lung motion sequence images is determined according to a forward temporal order of the multi-temporal lung images, and the first backward optical flow of neighboring images in the lung motion sequence images may be determined according to a backward temporal order of the multi-temporal lung images.
Correspondingly, when optical flow estimation is performed, the first backward optical flows of adjacent images within each lung motion sequence image are obtained in the reverse order of groups M to 1, for example the first backward optical flow from FMN to F(M-1)N, the first backward optical flow from F(M-1)N to F(M-2)N, and so on, down to the first backward optical flow from F2N to F1N. The first backward optical flow represents the motion velocity information of each feature point in adjacent lung images arranged in backward time order. Similarly, the lung motion sequence image may be input into the optical flow estimation model to obtain the first backward optical flow between adjacent images, or an optical flow estimation algorithm such as a sparse or dense optical flow estimation algorithm may be used on the adjacent images, which is not limited by this disclosure.
In a specific embodiment of the present disclosure, the method for determining the lung displacement of the adjacent images according to the first backward optical flow comprises: obtaining the lung displacement of the adjacent images using the velocity information of the first backward optical flow and the time information of the adjacent images in the lung motion sequence image. The DICOM file of a CT-acquired lung image records the scan time and the number of layers, and dividing the scan time by the number of layers approximately gives the time information of the adjacent images in the lung motion sequence image.
In the embodiment of the present disclosure, each layer of the acquired lung images may carry corresponding acquisition time information, and the product of the time difference between the acquisition times of two adjacent images in the lung motion sequence image and the first backward optical flow may be used to obtain the lung displacement of the two adjacent images within that time difference. In addition, since the time difference between adjacent images in the lung motion sequence image is small, in the embodiment of the present disclosure the velocity information corresponding to the optical flow may also be taken as approximately equal to the lung displacement.
The image to be segmented and the first lung image are predetermined, so the first backward optical flow between the first lung image and the image to be segmented in the lung motion sequence image and the time information between them can be determined correspondingly and in order; the lung displacement between the first lung image and the lung image to be segmented can accordingly be obtained as the product of the first backward optical flow and the time information.
In an embodiment of the present disclosure, the method for separately calculating lung displacements of adjacent images in the lung motion sequence image further includes: performing optical flow optimization processing on the first forward optical flows and the first backward optical flows respectively to obtain second forward optical flows corresponding to the first forward optical flows and second backward optical flows corresponding to the first backward optical flows; determining lung displacement of the neighboring image from the second forward optical flow and/or the second backward optical flow, respectively.
In a specific embodiment of the present invention, the method for determining the lung displacement of the adjacent images according to the second forward optical flow and the second backward optical flow, respectively, comprises: calculating the second forward optical flow and the second backward optical flow respectively to obtain a corrected optical flow; and determining the lung displacement of the adjacent images according to the corrected optical flow, respectively.
In a specific embodiment of the present invention, a method for obtaining a corrected optical flow by separately calculating the second forward optical flow and the second backward optical flow includes: and performing an addition operation on the second forward optical flow and the second backward optical flow to obtain a bidirectional optical flow sum, and then averaging the bidirectional optical flow sum to obtain a corrected optical flow. That is, the average value of the second forward optical flow and the second backward optical flow is obtained, and the corrected optical flow is (second forward optical flow + second backward optical flow)/2.
In a specific embodiment of the present invention, the method for performing optical flow optimization processing on the first forward optical flows and the first backward optical flows respectively, to obtain a second forward optical flow corresponding to each first forward optical flow and a second backward optical flow corresponding to each first backward optical flow, comprises: connecting the first forward optical flows to obtain a first connection optical flow, and connecting the first backward optical flows to obtain a second connection optical flow; performing N optical flow optimization processings on the first connection optical flow and the second connection optical flow respectively, to obtain a first optimized optical flow corresponding to the first connection optical flow and a second optimized optical flow corresponding to the second connection optical flow; and obtaining the second forward optical flow corresponding to each first forward optical flow according to the first optimized optical flow, and the second backward optical flow corresponding to each first backward optical flow according to the second optimized optical flow; wherein N is a positive integer greater than or equal to 1.
Here, connecting the first forward optical flows to obtain the first connection optical flow, and connecting the first backward optical flows to obtain the second connection optical flow, comprises: sequentially connecting the first forward optical flows between every two adjacent images in a lung motion sequence image to obtain the first connection optical flow corresponding to that group of lung motion sequence images, and sequentially connecting the first backward optical flows between every two adjacent images to obtain the second connection optical flow corresponding to that group. The connection here is a splice in the depth direction.
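A sketch of the depth-direction splice and its inverse (the inverse is used later, when the optimized optical flow is split back into M-1 second forward or backward optical flows); the helper names are illustrative:

```python
import numpy as np

def connect_flows(flows: list) -> np.ndarray:
    """Splice M-1 flow fields of shape (H, W, 2) in the depth direction into
    one connection optical flow of shape (H, W, 2*(M-1))."""
    return np.concatenate(flows, axis=-1)

def split_flows(connected: np.ndarray) -> list:
    """Inverse splice: recover the M-1 per-pair flow fields from a connection
    (or optimized) optical flow."""
    return [connected[..., i:i + 2] for i in range(0, connected.shape[-1], 2)]
```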
After obtaining the first and second connected optical flows, optical-flow optimization processing may be performed on the first and second connected optical flows, respectively, and embodiments of the present disclosure may perform at least one optical-flow optimization processing procedure. For example, each time the optical flow optimization processing in the embodiment of the present disclosure is performed, the optical flow optimization module may be composed of a neural network, or the optimization operation may be performed by using a corresponding optimization algorithm. Correspondingly, when the optical flow optimization processing is performed for N times, the optical flow optimization network module may include N optical flow optimization network modules connected in sequence, where an input of a subsequent optical flow optimization network module is an output of a previous optical flow optimization network module, and an output of a last optical flow optimization network module is an optimization result of the first connection optical flow and the second connection optical flow.
Specifically, when only one optical flow optimization network module is included, the optical flow optimization network module may be used to perform optimization processing on the first connection optical flow to obtain a first optimized optical flow corresponding to the first connection optical flow, and perform optimization processing on the second connection optical flow through the optical flow optimization network module to obtain a second optimized optical flow corresponding to the second connection optical flow. Wherein the optical flow optimization process may include a residual process and an upsampling process. That is, the optical flow optimization network module may further include a residual unit and an upsampling unit, where the residual unit performs residual processing on the input first connection optical flow or the second connection optical flow, where the residual unit may include a plurality of convolutional layers, each convolutional layer employs a convolution kernel, which is not specifically limited by the embodiment of the present disclosure, and a scale of the first connection optical flow after residual processing by the residual unit becomes smaller, for example, is reduced to one fourth of a scale of the input connection optical flow, which is not specifically limited by the present disclosure, and may be set according to a requirement. After performing the residual processing, an upsampling process may be performed on the residual processed first connected optical flow or the second connected optical flow, by which the scale of the output first optimized sub-optical flow may be adjusted to the scale of the first connected optical flow and the scale of the output second optimized sub-optical flow may be adjusted to the scale of the second connected optical flow. And the characteristics of each optical flow can be fused through the optical flow optimization process, and the optical flow precision can be improved.
In other embodiments, the optical flow optimization module may include a plurality of optical flow optimization network modules, for example N of them. The first optical flow optimization network module may receive the first connection optical flow and the second connection optical flow and perform the first optical flow optimization processing on them; the first optical flow optimization processing includes residual processing and upsampling processing, and the specific process is the same as in the above embodiment and is not repeated here. A first optimized sub-optical flow of the first connection optical flow and a first optimized sub-optical flow of the second connection optical flow are obtained by this first optical flow optimization processing.
Similarly, each optical flow optimization network module may perform an optical flow optimization process once, that is, an i +1 th optical flow optimization network module may perform an i +1 th optical flow optimization process on an i-th optimized sub-optical flow of the first connection optical flow and the second connection optical flow to obtain an i +1 th optimized sub-optical flow corresponding to the first connection optical flow and an i +1 th optimized sub-optical flow corresponding to the second connection optical flow, where i is a positive integer greater than 1 and less than N. Finally, an nth sub-optimization process, which may be performed by an nth optical flow optimization network module, obtains an nth optimized sub-optical flow of a first connected optical flow and an nth optimized sub-optical flow of a second connected optical flow, and may determine the obtained nth optimized sub-optical flow of the first connected optical flow as the first optimized optical flow and the obtained nth optimized sub-optical flow of the second connected optical flow as the second optimized optical flow. In the embodiment of the disclosure, the optical flow optimization processing procedure executed by each optical flow optimization network module may be residual error processing and upsampling processing. That is, each optical flow optimization network module may be the same optical flow optimization module.
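For illustration only, a minimal PyTorch sketch of one such optical flow optimization network module. The kernel sizes, the downscaling to roughly one quarter, and the additive residual connection are assumptions; the text only fixes residual processing followed by upsampling back to the input scale:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FlowOptimizationModule(nn.Module):
    """One optimization step: residual processing that reduces the spatial scale,
    then upsampling back to the input scale of the connection optical flow."""

    def __init__(self, channels: int):  # channels = 2*(M-1) for a connection optical flow
        super().__init__()
        self.residual = nn.Sequential(
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, flow: torch.Tensor) -> torch.Tensor:
        r = self.residual(flow)                     # scale reduced by the strided convolutions
        r = F.interpolate(r, size=flow.shape[-2:],  # upsample back to the input scale
                          mode="bilinear", align_corners=False)
        return flow + r                             # assumed residual refinement

def optimize_n_times(flow: torch.Tensor, n: int, channels: int) -> torch.Tensor:
    """Chain N identical modules; the N-th output is the optimized optical flow."""
    chain = nn.Sequential(*[FlowOptimizationModule(channels) for _ in range(n)])
    return chain(flow)
```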
In the case of obtaining the first optimized optical flow and the second optimized optical flow for each lung motion sequence image, a second forward optical flow corresponding to each first forward optical flow may be obtained by using the first optimized optical flow, and a second backward optical flow corresponding to each first backward optical flow may be obtained according to the second optimized optical flow.
After N times of optical flow optimization processing, the scale of the obtained first optimized optical flow is the same as the scale of the first connection optical flow, and the first optimized optical flow can be split into M-1 second forward optical flows according to the depth direction, and the M-1 second forward optical flows respectively correspond to the optimization results of the first forward optical flows. Similarly, after the optical flow optimization processing is performed for N times, the scale of the obtained second optimized optical flow is the same as the scale of the second connection optical flow, and the second optimized optical flow can be split into M-1 second inverse optical flows according to the depth direction, wherein the M-1 second inverse optical flows respectively correspond to the optimization results of the first inverse optical flows.
Through the above embodiment, the second forward optical flow (the optimized first forward optical flow) between each pair of adjacent images of the lung motion sequence image and the second backward optical flow (the optimized first backward optical flow) between each pair of adjacent images can be obtained.
In the case of obtaining the second forward optical flow and/or the second backward optical flow, the motion displacement of the lung corresponding to the adjacent images may be determined using the second forward optical flow and/or the second backward optical flow, from which the lung displacement between the lung image to be segmented and the first lung image is obtained.
Based on the above, the embodiment of the present disclosure may obtain the motion displacement (lung lobe displacement) of each layer of image in the lung image in each time range, and in a case of performing the keypoint detection on each layer of image in the lung image, may obtain the motion trajectory of the matched keypoint in each time range, so as to obtain the motion state and the motion trajectory of the whole lung in each time range.
By the above embodiment, the lung displacement between the first lung image and the lung image to be segmented can be obtained, and then the registration operation of the first lung image to the lung image to be segmented can be performed according to the lung displacement. The lung displacement of the embodiment of the present disclosure may include a displacement value between any pixel point in the first lung image and the lung image to be segmented, and the registration result corresponding to the first lung image, that is, the lung image to be fused, may be obtained by adding the first lung image and the lung displacement. Then, the lung image to be fused corresponding to the registration operation of each first lung image and the lung image to be segmented can be obtained by the embodiment of the disclosure.
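One plausible realization of the "first lung image + lung displacement" registration above is to resample the first lung image at coordinates shifted by the displacement field; the (row, col) ordering of the displacement channels and the linear interpolation are assumptions:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def register_slice(first_image: np.ndarray, disp: np.ndarray) -> np.ndarray:
    """Warp one layer of a first lung image toward the lung image to be
    segmented, given its per-pixel lung displacement field disp of shape (H, W, 2)."""
    h, w = first_image.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([rows + disp[..., 0], cols + disp[..., 1]])
    return map_coordinates(first_image, coords, order=1, mode="nearest")
```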
Once the lung images to be fused are obtained, the fused lung image can be obtained directly by adding the lung images to be fused and the lung image to be segmented, or different weights can be set for the lung images to be fused and the fused lung image obtained using the set weights.
In an embodiment of the present disclosure, the method for fusing the lung image to be fused and the lung image to be segmented to obtain a fused lung image includes: determining a weight value of the lung image to be fused; obtaining a weighted lung image according to the weight value and the lung image to be fused; and fusing the weighted lung image and the lung image to be segmented to obtain the fused lung image.
In some embodiments of the present disclosure, each lung image to be fused may be pre-configured with a corresponding weight value. The weight values of the lung images to be fused may be identical or different; for example, the weight value of each lung image to be fused may be 1/k, where k is the number of lung images to be fused. Alternatively, the configured weight value may be determined according to the image quality of the lung image to be fused: for example, an image quality score of each lung image to be fused may be determined by the Single Stimulus Continuous Quality Evaluation (SSCQE) method and normalized to the range [0,1] to obtain the weight value of each lung image to be fused. Or the input lung images to be fused may be evaluated by the image quality evaluation model NIMA (Neural Image Assessment) to obtain corresponding weight values.
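As a small sketch of the simplest scheme above, uniform 1/k weights followed by summation fusion, with NumPy arrays standing in for the registered images:

```python
import numpy as np

lung_images_to_fuse = [np.random.rand(64, 64) for _ in range(3)]   # k = 3 registered images
k = len(lung_images_to_fuse)
weights = [1.0 / k] * k                                            # uniform 1/k weight values
weighted_lung_image = sum(w * img for w, img in zip(weights, lung_images_to_fuse))
lung_image_to_segment = np.random.rand(64, 64)
fused_lung_image = weighted_lung_image + lung_image_to_segment     # summation fusion
```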
Alternatively, in other embodiments of the present disclosure, the method for determining a weight value of the lung image to be fused includes: determining registration points of the lung image to be fused, where points other than the registration points are non-registration points and the weight value of a registration point is greater than that of a non-registration point. That is to say, in the embodiment of the present disclosure, the weight values of the pixel points in the lung image to be fused may differ, and the weight value of a registration point may be set greater than that of a non-registration point so as to highlight the feature information of the registration points, a registration point being a feature point that highlights lung features. In a specific embodiment of the present invention, the registration points of the lung image to be fused may be detected as key points through SIFT (Scale-Invariant Feature Transform). The resulting key points may be given a weight value a greater than 0.5, and the non-registration points a weight of 1-a, or any other positive value less than a.
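A hedged sketch of the SIFT-based weighting follows, assuming OpenCV's SIFT implementation and an illustrative weight a = 0.7; registration points receive weight a and all other pixels 1 - a.

```python
import cv2
import numpy as np

a = 0.7                                                   # assumed weight value, a > 0.5
image = (np.random.rand(64, 64) * 255).astype(np.uint8)   # lung image to be fused
keypoints = cv2.SIFT_create().detect(image, None)         # SIFT key points = registration points

weight_map = np.full(image.shape, 1.0 - a)                # non-registration points
for kp in keypoints:
    x = min(int(round(kp.pt[0])), image.shape[1] - 1)
    y = min(int(round(kp.pt[1])), image.shape[0] - 1)
    weight_map[y, x] = a                                  # registration points
```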
Alternatively, the setting of the weight values may be implemented by an attention-mechanism neural network. Such a network may comprise at least one convolution layer and an attention module connected to the convolution layer: the image to be fused is convolved to obtain convolution features, the convolution features are input into the attention module to obtain an attention feature map for each image to be fused, the attention feature map contains an attention value for each pixel point in the image to be fused, the attention value can serve as the weight value of the corresponding pixel point, and pixel points with an attention value greater than 0.5 are registration points (a sketch follows below). A person skilled in the art can select an appropriate manner to obtain the weight values of the lung images to be fused as required, which the present disclosure does not specifically limit.
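A minimal sketch of such an attention-based weighting, one convolution layer plus an attention module whose sigmoid output in [0, 1] serves as the per-pixel weight value, is shown below; the channel counts and the sigmoid head are assumptions.

```python
import torch
import torch.nn as nn

class AttentionWeighting(nn.Module):
    def __init__(self, in_channels: int = 1, features: int = 16):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, features, kernel_size=3, padding=1)
        self.attention = nn.Sequential(          # attention module
            nn.Conv2d(features, 1, kernel_size=1),
            nn.Sigmoid(),                        # attention values in [0, 1]
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        conv_features = self.conv(image)         # convolution features
        return self.attention(conv_features)     # per-pixel attention/weight map

weight_map = AttentionWeighting()(torch.randn(1, 1, 64, 64))
registration_points = weight_map > 0.5           # attention value > 0.5 => registration point
```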
In the case of obtaining the weight of the lung image to be fused, the product of the weight and the lung image to be fused may be used to obtain a weighted lung image.
In a specific embodiment of the present invention, the method for obtaining the fused lung image by fusing the weighted lung image and the lung image to be segmented includes: adding the weighted lung image and the lung image to be segmented to obtain the fused lung image.
In a specific embodiment of the present invention, fusing the weighted lung image and the lung image to be segmented to obtain the fused lung image may also be implemented with a first preset neural network, as follows: connecting the weighted lung image and the lung image to be segmented to obtain a connected lung image; and performing at least one layer of convolution processing on the connected lung image to obtain fusion features of the connected lung image, where the image corresponding to the fusion features is the fused lung image. The first preset neural network is a network trained in advance to extract and fuse lung feature information; it may be, for example, a residual network, a Unet, or a feature pyramid network, which is not specifically limited in this disclosure.
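A hedged sketch of such a first preset neural network, concatenation of the two images followed by convolution layers with assumed channel counts, could be:

```python
import torch
import torch.nn as nn

class FusionNetwork(nn.Module):
    """Connect two lung images along the channel axis and convolve them."""
    def __init__(self):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1),   # connected lung image in
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),   # fusion features / fused image out
        )

    def forward(self, weighted: torch.Tensor, to_segment: torch.Tensor) -> torch.Tensor:
        connected = torch.cat([weighted, to_segment], dim=1)  # connection step
        return self.fuse(connected)                           # at least one convolution layer

fused = FusionNetwork()(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))
```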
Step 104: and segmenting the fused lung image by using a preset lung lobe segmentation model to obtain a lung lobe image of the lung image to be segmented.
In the embodiment of the present disclosure, the preset lung lobe segmentation model may be a traditional machine-learning lung lobe segmentation model, or a deep-learning model such as the progressive dense V-network (PDV-Net) lung lobe segmentation model proposed by Voxel Technology in 2018. In the invention, the fused lung image contains the lung image(s) at the moment before and/or after the certain moment of the lung image to be segmented together with all the information of the lung image to be segmented, so the information content of the lung image to be segmented is enriched and lung lobe segmentation is performed better.
Alternatively, in the embodiment of the present disclosure, the preset lung lobe segmentation model may also be implemented by a neural network, and may include at least one of Resnet, Unet, and Vnet, which is not specifically limited by the present disclosure. The preset lung lobe segmentation model in the embodiment of the present disclosure can be used to implement segmentation detection of at least one lung lobe, and the obtained segmentation result includes position information of the detected lung lobes; for example, the position region of a detected lung lobe in the lung image may be represented by a preset mask, as in the sketch below.
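As a trivial illustration of the mask representation (the label values are assumptions), each integer label can mark the position region of one detected lobe:

```python
import numpy as np

lobe_mask = np.zeros((64, 64), dtype=np.uint8)    # 0 = background, preset mask
lobe_mask[10:30, 10:30] = 1                        # e.g. region of a first detected lobe
lobe_mask[35:55, 10:30] = 2                        # e.g. region of a second detected lobe
positions = {label: np.argwhere(lobe_mask == label) for label in (1, 2)}
```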
In an embodiment of the present disclosure, the lung lobe segmentation method further includes: the number of the preset lung lobe segmentation models is at least 2; the features of the lung lobe segmentation images obtained by the preset lung lobe segmentation models are fused to obtain fusion features, and the fusion features are classified to obtain the final lung lobe image. In an embodiment of the present disclosure, the method for fusing the features of the lung lobe segmentation images obtained by the preset lung lobe segmentation models to obtain the fusion features includes:
splicing the lung lobe segmentation images obtained by each preset lung lobe segmentation model to obtain a splicing feature, and inputting the splicing feature into a second preset neural network for a convolution operation to obtain the fusion features.
The two preset segmentation models may be different segmentation models. For example, the first preset segmentation model may be Resnet and the second may be Unet, although the disclosure is not limited to this; any two different neural networks capable of lung lobe segmentation may serve as the preset lung lobe segmentation models. The fused lung image is input into the first preset segmentation model and the second preset segmentation model to obtain a first segmentation result and a second segmentation result, respectively, each of which may include position information of the detected lung lobe regions. Since the segmentation results obtained by different preset segmentation models may differ, the embodiment of the present disclosure can further improve segmentation accuracy by combining the two results. The final lung lobe segmentation result can be obtained by averaging the position information of the first segmentation result and the second segmentation result, as sketched below.
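A minimal sketch of this averaging, assuming each model outputs per-voxel lobe probabilities of identical shape (the channel count of 6 is an assumption):

```python
import torch

first_result = torch.softmax(torch.randn(1, 6, 64, 64), dim=1)    # e.g. 5 lobes + background
second_result = torch.softmax(torch.randn(1, 6, 64, 64), dim=1)
final_probs = (first_result + second_result) / 2                  # average the position information
final_lobe_labels = final_probs.argmax(dim=1)                     # final lung lobe segmentation
```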
Alternatively, in some embodiments, a first feature map output by the convolution layer before the first preset segmentation model outputs the first segmentation result and a second feature map output by the convolution layer before the second preset segmentation model outputs the second segmentation result may be fused to obtain a fusion feature. The first preset segmentation model and the second preset segmentation model may each include a corresponding feature extraction module and a classification module, where the classification module produces the final first or second segmentation result, the feature extraction module may include a plurality of convolution layers, and the feature map output by the last convolution layer is input to the classification module to obtain the first or second segmentation result. The embodiment of the disclosure can thus obtain the first feature map output by the last convolution layer of the feature extraction module in the first preset segmentation model and the second feature map output by the last convolution layer of the feature extraction module in the second preset segmentation model. The first feature map and the second feature map are fused to obtain the fusion feature, and the fusion feature is classified to obtain the final lung lobe image. Specifically, the first feature map and the second feature map may be stitched to obtain a stitching feature, and the stitching feature may be input to at least one convolution layer to obtain the fusion feature. The fusion feature is then classified through a classification network to obtain the classification (segmentation) result of the lung lobes to be detected, that is, the lung lobe segmentation result corresponding to the lung image to be detected.
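A hedged sketch of this feature-level fusion, stitching the two feature maps along the channel axis, convolving them into the fusion feature, and classifying, with assumed channel counts:

```python
import torch
import torch.nn as nn

first_feature_map = torch.randn(1, 32, 64, 64)    # last conv layer of model 1
second_feature_map = torch.randn(1, 32, 64, 64)   # last conv layer of model 2

stitched = torch.cat([first_feature_map, second_feature_map], dim=1)  # stitching feature
fusion_conv = nn.Conv2d(64, 32, kernel_size=3, padding=1)             # at least one conv layer
classifier = nn.Conv2d(32, 6, kernel_size=1)      # classification network (6 classes assumed)

fusion_feature = fusion_conv(stitched)
lobe_logits = classifier(fusion_feature)          # final lung lobe segmentation result
```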
In general, clinical analysis uses only static lung data from the respiratory process; when the motion information of the lung is not considered, the analysis accuracy of lung feature data is inevitably affected. The accuracy of lung motion analysis would be improved if correlations between lung characteristics over different time periods could be combined. The embodiment of the disclosure can thus address the poor segmentation effect caused by insufficient lung (lobe) feature information.
The present invention also provides a lung lobe segmentation apparatus, comprising: an acquisition unit for acquiring lung images at multiple moments in the respiratory process; a determining unit for determining a lung image to be segmented at a certain moment among the multi-moment lung images; a fusion unit for fusing the lung image to be segmented with the lung image at the moment before and/or after the certain moment to obtain a fused lung image; and a segmentation unit for segmenting the fused lung image with a preset lung lobe segmentation model to obtain a lung lobe image of the lung image to be segmented. For specific implementations, refer to the detailed description of the lung lobe segmentation method.
In addition, the present invention provides a storage medium storing computer program instructions which, when executed by a processor, implement the method described above, comprising: acquiring lung images at multiple moments in a breathing process; determining a lung image to be segmented in the multi-moment lung images, wherein the lung images other than the lung image to be segmented serve as first lung images; fusing the lung image to be segmented with at least one first lung image to obtain a fused lung image, wherein the first lung images include at least one lung image at a moment before the image to be segmented and/or at least one lung image at a moment after the image to be segmented; and segmenting the fused lung image with a preset lung lobe segmentation model to obtain a lung lobe image of the lung image to be segmented.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), may be personalized by utilizing state information of the computer-readable program instructions, and the electronic circuitry may execute the computer-readable program instructions to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
The above-mentioned embodiments merely express implementations of the invention, and while their description is specific and detailed, it should not be construed as limiting the scope of the invention. It should be noted that, for those skilled in the art, various changes, substitutions of equivalents, improvements and the like can be made without departing from the spirit of the invention, and these all fall within the scope of the invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A lung lobe segmentation method is characterized by comprising the following steps:
acquiring lung images at multiple moments in a breathing process;
determining a lung image to be segmented in the multi-time lung image, wherein the lung image except the lung image to be segmented is used as a first lung image;
fusing the to-be-segmented lung images by using at least one first lung image to obtain a fused lung image, wherein the at least two first lung images comprise at least one lung image at the moment before the to-be-segmented image and/or at least one lung image at the moment after the to-be-segmented image;
and segmenting the fused lung image by using a preset lung lobe segmentation model to obtain a lung lobe image of the lung image to be segmented.
2. The method of claim 1, wherein:
the method for fusing the to-be-segmented lung images by using the at least two first lung images to obtain fused lung images comprises the following steps:
performing registration operation from the at least one first lung image to the lung image to be segmented to obtain a lung image to be fused;
fusing the lung image to be fused and the lung image to be segmented to obtain a fused lung image;
and/or,
the method for determining the lung image to be segmented in the multi-time lung image comprises the following steps:
and respectively calculating the lung volume of each lung image in the multi-time lung images, and determining the lung image with the maximum lung volume as the lung image to be segmented.
3. The method of claim 2, wherein:
the method for fusing the lung image to be fused and the lung image to be segmented to obtain the fused lung image comprises the following steps:
determining a weight value of the lung image to be fused;
obtaining a weighted lung image according to the weighted value and the lung image to be fused;
fusing the weighted lung image and the lung image to be segmented to obtain a fused lung image;
and/or,
the method for determining the lung image to be segmented in the multi-time lung image further comprises the following steps:
before calculating the lung volume in the multi-time lung image, respectively extracting the left lung and the right lung of the multi-time lung image, respectively calculating a first volume of the left lung and a second volume of the right lung in the multi-time lung image, and respectively calculating the lung volume in the multi-time lung image according to the first volume and the second volume.
4. The method of claim 3, wherein:
the method for determining the weight value of the lung image to be fused comprises the following steps: determining registration points of the lung images to be fused, determining that the weight values of the registration points are greater than those of non-registration points, and determining that characteristic points except the registration points are the non-registration points;
and/or,
the method for obtaining the fused lung image by fusing the weighted lung image and the lung image to be segmented comprises the following steps: performing summation processing on the weighted lung image and the lung image to be segmented to obtain a fused lung image;
and/or,
the method for obtaining the fused lung image by fusing the weighted lung image and the lung image to be segmented comprises the following steps: and fusing the lung image to be fused and the lung image to be segmented by utilizing a first preset neural network to obtain a fused lung image.
5. The method according to any one of claims 2-4, wherein:
the method for performing the registration operation of the at least one first lung image to the lung image to be segmented to obtain the lung image to be fused includes:
extracting images at the same position from the at least two first lung images and the image to be segmented to obtain a lung motion sequence image formed by the images extracted at the same position;
respectively calculating lung displacement of adjacent images in the lung motion sequence image, and executing registration operation from the at least one first lung image to the lung image to be segmented according to the lung displacement;
and/or,
the lung lobe segmentation method further comprises the following steps: the number of the preset lung lobe segmentation models is at least 2, the features of the lung lobe segmentation images obtained by the preset lung lobe segmentation models are fused to obtain fusion features, and the fusion features are classified to obtain final lung lobe images.
6. The method of claim 5, wherein:
the method for extracting images from the same position in the at least two first lung images and the image to be segmented to obtain a lung motion sequence image formed by the images extracted from the same position comprises the following steps:
determining the number of layers of the lung images at the multiple moments;
determining the at least two first lung images and the lung images of the image to be segmented at the same position according to the layer number;
obtaining the lung motion sequence image according to the lung image at the same position;
and/or,
the method for respectively calculating the lung displacement of the adjacent images in the lung motion sequence image comprises the following steps:
respectively determining first forward optical flows of adjacent images in the lung motion sequence images;
determining lung displacement of the adjacent images according to the first forward optical flows respectively;
and/or,
the method for obtaining the fusion characteristics by fusing the characteristics of the lung lobe segmentation images obtained by the preset lung lobe segmentation model comprises the following steps:
and respectively splicing the lung lobe segmentation images obtained by the preset lung lobe segmentation model to obtain splicing characteristics, and inputting the splicing characteristics into a second preset neural network to carry out convolution operation to obtain the fusion characteristics.
7. The method of claim 6, wherein:
the method for respectively calculating the lung displacement of the adjacent images in the lung motion sequence image further comprises the following steps:
respectively determining first reverse optical flows corresponding to the first forward optical flows;
determining lung displacement of the adjacent images according to the first forward optical flow and the first reverse optical flow, respectively;
and/or,
the method for respectively calculating the lung displacement of the adjacent images in the lung motion sequence image further comprises the following steps: performing optical flow optimization processing on the first forward optical flows and the first backward optical flows respectively to obtain second forward optical flows corresponding to the first forward optical flows and second backward optical flows corresponding to the first backward optical flows; determining lung displacement of the neighboring image from the second forward optical flow and the second backward optical flow, respectively.
8. The method of claim 7, wherein:
the method for determining lung displacement of the adjacent images according to the second forward optical flow and the second backward optical flow, respectively, comprises:
calculating the second forward optical flow and the second backward optical flow respectively to obtain corrected optical flows;
and respectively determining the lung displacement of the adjacent images according to the corrected optical flow.
9. A lung lobe segmentation device, comprising:
the acquisition unit is used for acquiring lung images at multiple moments in the respiratory process;
a determining unit, configured to determine a lung image to be segmented in the multi-time lung images, where the lung images other than the lung image to be segmented serve as first lung images;
a fusion unit, configured to fuse the to-be-segmented lung image with at least one first lung image to obtain a fused lung image, where the at least two first lung images include at least one lung image at a time before the to-be-segmented image and/or at least one lung image at a time after the to-be-segmented image;
and the segmentation unit is used for segmenting the fused lung image by using a preset lung lobe segmentation model to obtain a lung lobe image of the lung image to be segmented.
10. A storage medium, wherein computer program instructions, when executed by a processor, implement the method of any of claims 1 to 8, comprising:
acquiring lung images at multiple moments in a breathing process;
determining a lung image to be segmented in the multi-time lung image, wherein the lung image except the lung image to be segmented is used as a first lung image;
fusing the to-be-segmented lung images by using at least one first lung image to obtain a fused lung image, wherein the at least two first lung images comprise at least one lung image at the moment before the to-be-segmented image and/or at least one lung image at the moment after the to-be-segmented image;
and segmenting the fused lung image by using a preset lung lobe segmentation model to obtain a lung lobe image of the lung image to be segmented.
CN202010534722.0A 2020-06-12 2020-06-12 Lung lobe segmentation method, device and storage medium Active CN111724360B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010534722.0A CN111724360B (en) 2020-06-12 2020-06-12 Lung lobe segmentation method, device and storage medium

Publications (2)

Publication Number Publication Date
CN111724360A true CN111724360A (en) 2020-09-29
CN111724360B CN111724360B (en) 2023-06-02

Family

ID=72568049

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010534722.0A Active CN111724360B (en) 2020-06-12 2020-06-12 Lung lobe segmentation method, device and storage medium

Country Status (1)

Country Link
CN (1) CN111724360B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1418353A (en) * 2000-01-18 2003-05-14 芝加哥大学 Automated method and system for segmentation of lung regions in computed tomography scans
EP1447772A1 (en) * 2003-02-11 2004-08-18 MeVis GmbH A method of lung lobe segmentation and computer system
US20070276214A1 (en) * 2003-11-26 2007-11-29 Dachille Frank C Systems and Methods for Automated Segmentation, Visualization and Analysis of Medical Images
CN108985345A (en) * 2018-06-25 2018-12-11 重庆知遨科技有限公司 A kind of detection device based on the classification of lung's Medical image fusion
CN109598727A (en) * 2018-11-28 2019-04-09 北京工业大学 A kind of CT image pulmonary parenchyma three-dimensional semantic segmentation method based on deep neural network
CN109658425A (en) * 2018-12-12 2019-04-19 上海联影医疗科技有限公司 A kind of lobe of the lung dividing method, device, computer equipment and storage medium
CN109727251A (en) * 2018-12-29 2019-05-07 上海联影智能医疗科技有限公司 The system that lung conditions are divided a kind of quantitatively, method and apparatus
CN110033005A (en) * 2019-04-08 2019-07-19 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
H. HANEISHI et al.: "Lung image segmentation and registration for quantitative image analysis", 2001 IEEE Nuclear Science Symposium Conference Record *
QIANG LI et al.: "PRF-RW: a progressive random forest-based random walk approach for interactive semi-automated pulmonary lobes segmentation", International Journal of Machine Learning and Cybernetics *
SHI XIN: "Research on lung parenchyma segmentation and lung nodule detection methods based on CT images", China Master's Theses Full-text Database, Medicine and Health Sciences *
XIE DEFANG: "Computer-aided pulmonary function assessment *** based on medical image segmentation technology", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113808082A (en) * 2021-08-19 2021-12-17 东北大学 Lung image processing method and device, electronic device and storage medium
CN113808082B (en) * 2021-08-19 2023-10-03 东北大学 Lung image processing method and device, electronic equipment and storage medium
CN114038561A (en) * 2021-11-08 2022-02-11 数聚工研(北京)科技有限公司 Bronchiectasis scoring method and system
WO2023123873A1 (en) * 2021-12-28 2023-07-06 天翼数字生活科技有限公司 Dense optical flow calculation method employing attention mechanism

Also Published As

Publication number Publication date
CN111724360B (en) 2023-06-02

Similar Documents

Publication Publication Date Title
CA3097712C (en) Systems and methods for full body measurements extraction
CN111724360A (en) Lung lobe segmentation method and device and storage medium
CN112767329B (en) Image processing method and device and electronic equipment
CN111429421B (en) Model generation method, medical image segmentation method, device, equipment and medium
US20180174311A1 (en) Method and system for simultaneous scene parsing and model fusion for endoscopic and laparoscopic navigation
WO2019169884A1 (en) Image saliency detection method and device based on depth information
CN110599421A (en) Model training method, video fuzzy frame conversion method, device and storage medium
CN111242931B (en) Method and device for judging small airway lesions of single lung lobes
CN108648178A (en) A kind of method and device of image nodule detection
WO2023071154A1 (en) Image segmentation method, training method and apparatus for related model, and device
CN111724364B (en) Method and device based on lung lobes and trachea trees, electronic equipment and storage medium
CN115115575A (en) Image detection method and device, computer equipment and storage medium
CN116843647A (en) Method and device for determining lung field area and evaluating lung development, electronic equipment and medium
CN116883426A (en) Lung region segmentation method, lung disease assessment method, lung region segmentation device, lung disease assessment device, electronic equipment and storage medium
US20230110263A1 (en) Computer-implemented systems and methods for analyzing examination quality for an endoscopic procedure
CN111724359B (en) Method, device and storage medium for determining motion trail of lung lobes
KR102667467B1 (en) Apparatus and Method for Providing a Virtual Reality-based Surgical Environment Using a Patient's Virtual Lung Model
WO2024111429A1 (en) Posture evaluation device, posture evaluation system, posture evaluation method, and program
US20240070905A1 (en) Systems and methods for determining 3d human pose
CN116977366A (en) Diaphragm movement detection and assessment method and device, electronic equipment and storage medium
CN116993678A (en) Method and device for detecting and analyzing lung adhesion, electronic equipment and storage medium
CN116883329A (en) Data analysis method and device for medical CT image and related products
CN116843646A (en) Method and device for determining lung ventilation and air retention, electronic equipment and medium
CN115471474A (en) Right ventricle segmentation and cardiopulmonary analysis method and device, electronic device and storage medium
CN118212221A (en) Lung positioning method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant