CN115049850A - Feature extraction method for fibrosis region of lung CT image - Google Patents
- Publication number
- CN115049850A (application CN202210856483.XA)
- Authority
- CN
- China
- Prior art keywords
- image
- region
- lung
- fibrosis
- candidate region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06V10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
- G06T7/0012 — Biomedical image inspection
- G06V10/26 — Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
- G06V10/764 — Image or video recognition using pattern recognition or machine learning, using classification, e.g. of video objects
- G06V10/82 — Image or video recognition using pattern recognition or machine learning, using neural networks
- G06T2207/10081 — Computed X-ray tomography [CT]
- G06T2207/30061 — Lung
Abstract
The invention discloses a feature extraction method for the fibrosis region of a lung CT image, belonging to the field of machine vision and image processing applications. It addresses the prior-art problem that classifiers trained on features not designed from an analysis of CT images and fibrosis regions produce extraction results with low sensitivity and accuracy. The method acquires multiple CT images containing fibrotic lungs and establishes a fibrosis corner distribution prior of the CT images; extracts a lung region mask from each CT image to obtain the lung region; processes the CT image and lung region with connected component analysis and morphological operations to obtain candidate regions; constructs a statistical feature f_1, a shape feature f_2 and a fractal feature f_3 from the candidate regions and the fibrosis corner distribution prior; and cascades the statistical feature f_1, shape feature f_2 and fractal feature f_3 to build the fibrosis feature model f. The method is used for feature extraction of the fibrosis region of lung CT images.
Description
Technical Field
The invention relates to a feature extraction method for the fibrosis region of a lung CT image, used for extracting features of fibrosis regions in lung CT images, and belongs to the field of machine vision and image processing applications.
Background
CT images can clearly display fibrotic regions. These regions are widely distributed within the image, vary over a large range of scales, present diverse imaging appearances, and have low visual separability. Early detection of fibrotic regions can prevent a rapid decline in pulmonary function and, at the same time, improve clinical efficiency.
At present, fibrotic regions are mainly screened manually, which is inefficient. Existing automatic detection methods mainly rely on threshold segmentation of pixel gray values; their sensitivity is low and they cannot extract the fibrotic region completely. In the prior art, a training data set is built by modeling features of the fibrotic region, and a classifier is then trained on this data set to classify CT images. However, the feature modeling uses features common to natural images, such as scale-invariant feature descriptors, local binary pattern features, and thresholds. This leads to the following technical problems:
1. Because the adopted features are not designed from an analysis of CT images and fibrotic regions, the trained classifier has low sensitivity and accuracy, where sensitivity measures the classifier's ability to detect fibrotic regions.
2. The adopted feature combination is limited and does not describe the fibrotic region accurately enough, so the area of the finally extracted fibrotic region is far smaller than that of the real fibrotic region.
Summary of the Invention
In view of the above problems, an object of the present invention is to provide a feature extraction method for the fibrosis region of a lung CT image, which solves the prior-art problem that classifiers trained on features not designed from an analysis of CT images and fibrosis regions produce extraction results with low sensitivity and accuracy.
In order to achieve the purpose, the invention adopts the following technical scheme:
a feature extraction method for a fibrosis region of lung CT image comprises the following steps:
step 1: acquiring a plurality of CT images containing fibrosis lung parts, and establishing fibrosis corner point distribution prior of the CT images;
step 2: extracting a lung region mask based on the CT image to obtain a lung region;
and step 3: processing the CT image and the lung region based on connected component analysis and morphological operation to obtain a candidate region;
and 4, step 4: fibrosis angular point distribution prior construction statistical characteristic f based on candidate region and CT image 1 Shape feature f 2 And fractal features f 3 ;
And 5: cascading statistical feature f 1 Shape feature f 2 And fractal features f 3 And (5) constructing a fibrosis characteristic model f.
Further, the step 1 comprises the following specific steps:
Step 1.1: acquire multiple CT images containing fibrotic lungs;
Step 1.2: manually compare CT images containing fibrotic lungs with CT images of healthy lungs, observe that the fibrotic region exhibits many edges and large gray-value changes, divide the fibrotic region into local windows of size 32 × 32, count the corner points in each local window from the pixels on those edges, and establish the fibrosis corner distribution prior from the corner counts of the local windows. Specifically, the prior states that within a 32 × 32 local window the fibrotic region contains more corner points than a healthy region: the fibrotic region has about 4.5 times as many corner points as a healthy region of the same size, and its gray-value variance is about 4 times that of a healthy region of the same size.
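The window-based corner counting behind this prior can be sketched as follows. This is a minimal numpy illustration, not part of the patent; `corner_counts` and the toy mask are hypothetical names, and the corner mask is assumed to already be binary (1 = corner pixel).

```python
import numpy as np

def corner_counts(corner_mask: np.ndarray, win: int = 32) -> list:
    """Count corner points in each non-overlapping win x win window
    of a binary corner mask (1 = corner pixel)."""
    h, w = corner_mask.shape
    return [int(corner_mask[r:r + win, c:c + win].sum())
            for r in range(0, h - win + 1, win)
            for c in range(0, w - win + 1, win)]

# Toy 32 x 64 mask: a corner-dense "fibrotic" left window
# vs. a sparse "healthy" right window.
mask = np.zeros((32, 64), dtype=np.uint8)
mask[4:28:4, 4:28:4] = 1      # 36 corner pixels in the left window
mask[8, 40] = 1               # a single corner pixel in the right window
counts = corner_counts(mask)  # one count per 32 x 32 window
```

Comparing such per-window counts between fibrotic and healthy examples is what yields the roughly 4.5× ratio the prior records.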
Further, the step 2 comprises the following specific steps:
Step 2.1: fix the window width and window level of the CT image to (1600-400) to obtain a CT image I in 8-bit format;
Step 2.2: segment the lung region in the CT image obtained in step 2.1 with the improved U-Net network, and extract the lung region mask after segmentation to obtain the lung region l_mask corresponding to CT image I.
Further, the improved U-Net network in step 2.2 comprises an encoding part, a decoding part and a 1 × 1 convolutional layer connected in sequence;
the encoding part and the decoding part each comprise 5 sequentially connected blocks, and each block consists of a convolutional layer, an activation layer and a batch normalization layer, where the convolution kernel size is 3 × 3 and the activation function is ReLU;
the blocks of the encoding part are connected by 2 × 2 max pooling downsampling;
the blocks of the decoding part are connected by 2 × 2 deconvolution upsampling, and during decoding each block of the decoding part is combined with the feature map produced by the corresponding block of the encoding part;
the loss function of the improved U-Net network is the sum of the IoU loss and the binary cross entropy.
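The combined loss can be sketched numerically. A minimal numpy illustration under the assumption that the two terms are simply summed as stated; the function names are illustrative, not the patent's:

```python
import numpy as np

def iou_loss(pred, target, eps=1e-7):
    """Soft IoU loss: 1 - intersection/union over probability maps."""
    inter = np.sum(pred * target)
    union = np.sum(pred) + np.sum(target) - inter
    return float(1.0 - inter / (union + eps))

def bce_loss(pred, target, eps=1e-7):
    """Mean binary cross-entropy over the mask."""
    p = np.clip(pred, eps, 1 - eps)
    return float(np.mean(-(target * np.log(p) + (1 - target) * np.log(1 - p))))

def segmentation_loss(pred, target):
    """Sum of IoU loss and binary cross-entropy, as in the improved U-Net."""
    return iou_loss(pred, target) + bce_loss(pred, target)

pred = np.array([[0.9, 0.1], [0.8, 0.2]])    # predicted lung probabilities
target = np.array([[1.0, 0.0], [1.0, 0.0]])  # ground-truth lung mask
loss = segmentation_loss(pred, target)
```

A perfect prediction drives both terms toward zero, while the IoU term keeps the loss sensitive to region overlap even when the lung occupies few pixels.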
Further, the step 3 specifically comprises the following steps:
Step 3.1: apply second-order differential sharpening to CT image I to obtain image I_l;
Step 3.2: divide the image I_l obtained in step 3.1 into blocks of size 3 × 3, compute the horizontal and vertical gradients I_x and I_y of image I_l within each 3 × 3 block, and compute the response value R of image I_l at each pixel with the formula:

R = det(M) − kt · [trace(M)]²

where M is the matrix formed from the horizontal gradient I_x and vertical gradient I_y computed in the 3 × 3 blocks of image I_l, kt is a threshold controlling the number of corner points (0.3 was chosen after extensive screening), det(M) denotes the determinant of matrix M, and trace(M) denotes the trace of matrix M. When R is greater than a set threshold and is a local maximum, the pixel is marked as a corner point; otherwise it is not a corner point;
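The response computation is the Harris corner measure. A minimal sketch with numpy, assuming a box-summed 3 × 3 structure tensor; note the patent's kt = 0.3 is much larger than the 0.04–0.06 customary for Harris, so the default here follows the patent but is exposed as a parameter:

```python
import numpy as np

def harris_response(img: np.ndarray, k: float = 0.3) -> np.ndarray:
    """Harris response R = det(M) - k * trace(M)^2 from image gradients,
    with the structure tensor M summed over a 3 x 3 neighbourhood."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)  # vertical and horizontal gradients

    def box3(a):
        # 3 x 3 box sum via edge padding and shifted slices.
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace ** 2

# A bright square on a dark background: pixels near the square's
# corners respond, flat regions give exactly zero.
img = np.zeros((9, 9))
img[3:6, 3:6] = 1.0
R = harris_response(img)
```

Pixels whose response exceeds a threshold and is a local maximum would then be marked as corner points, as the step describes.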
Step 3.3: based on image I_l, set the gray value of the corner points to 1 and the remaining pixels to 0 to build a marker image I_label, then count for each pixel the number of corner points within its 32 × 32 neighborhood in I_label and use that count as the pixel's intensity value, giving map I_n:

I_n(c, r) = Σ_{i=c−a}^{c+a} Σ_{j=r−a}^{r+a} I_label(i, j)

where a denotes the neighborhood half-size and is taken as 16, and (c, r) denote pixel coordinates;
Step 3.4: take the image I_n obtained in step 3.3 and apply a logical AND with the lung region l_mask obtained in step 2.2 to obtain the lung-region-constrained image I_con:

I_con = l_mask ∧ I_n

where ∧ denotes logical AND;
Step 3.5: according to the fibrosis corner distribution prior, apply adaptive threshold segmentation to the image I_con obtained in step 3.4, the adaptive threshold being 0.6 times the average gray value of image I_con, to obtain the binary image I_threshold;
Step 3.6: obtain the candidate region I_c from the binary image I_threshold.
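The adaptive threshold of step 3.5 is a global threshold tied to the image's mean gray value. A minimal sketch with numpy; the function name and toy image are illustrative:

```python
import numpy as np

def adaptive_binarize(img: np.ndarray, factor: float = 0.6) -> np.ndarray:
    """Binarize with an adaptive global threshold: factor * mean gray value."""
    thr = factor * img.mean()
    return (img > thr).astype(np.uint8) * 255

# Toy corner-intensity image: mean = 105, so the threshold is 63
# and only the two bright pixels survive.
img = np.array([[10, 200],
                [30, 180]], dtype=float)
binary = adaptive_binarize(img)
```

Because the threshold scales with the image's own mean, corner-dense images keep more pixels than corner-sparse ones without hand tuning per scan.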
Further, the specific steps of step 3.6 are:
Step 3.61: let T be an all-zero matrix of the same size as binary image I_threshold;
Step 3.62: search binary image I_threshold from top to bottom and left to right for the first pixel with gray value 255, and denote its coordinates (x, y);
Step 3.63: with (x, y) as the seed point, find the corresponding connected region G in binary image I_threshold, and set the gray value of the coordinates of region G to 1 in T;
Step 3.64: find the outer contour of connected region G in T, mark the gray value of the outer-contour pixels as 2, and denote the outer contour p; a pixel (x, y) belongs to the outer contour when T(x, y) = 0 and T(x + bi1, y + bi2) = 1 for some bias pair (bi1, bi2), where bi1 and bi2 are given biases with value range {−1, 1};
Step 3.65: perform region growing inside the outer contour p: select any pixel p_idx of p, where the subscript idx denotes the idx-th pixel of p; if within the 3 × 3 neighborhood centered on p_idx the gray values of all pixels equal 0 or 1, set the gray value of p_idx to 3 and go to step 3.66; otherwise go directly to step 3.66;
Step 3.66: if all seed points in binary image I_threshold have been traversed, go to step 3.67; otherwise return to step 3.63 and continue searching binary image I_threshold for the next qualifying seed point;
Step 3.67: set all gray values in T that are not 2 to 0 and set pixels with gray value 2 to 255, apply a logical OR of T with binary image I_threshold to obtain the filled image, and remove regions with fewer than 7 pixels in total from the filled image to obtain the candidate region I_c.
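The final small-region removal of step 3.67 is a standard connected-component filter. A self-contained sketch using a breadth-first search over 4-connected neighbors; the function name and the minimum size of 7 pixels follow the text, the rest is illustrative:

```python
import numpy as np
from collections import deque

def remove_small_regions(binary: np.ndarray, min_pixels: int = 7) -> np.ndarray:
    """Zero out 4-connected foreground components smaller than min_pixels."""
    out = binary.copy()
    seen = np.zeros(binary.shape, dtype=bool)
    h, w = binary.shape
    for sr in range(h):
        for sc in range(w):
            if binary[sr, sc] == 0 or seen[sr, sc]:
                continue
            # BFS to collect one connected component.
            comp, q = [], deque([(sr, sc)])
            seen[sr, sc] = True
            while q:
                r, c = q.popleft()
                comp.append((r, c))
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < h and 0 <= nc < w \
                            and binary[nr, nc] and not seen[nr, nc]:
                        seen[nr, nc] = True
                        q.append((nr, nc))
            if len(comp) < min_pixels:
                for r, c in comp:
                    out[r, c] = 0
    return out

binary = np.zeros((6, 6), dtype=np.uint8)
binary[0:3, 0:3] = 255    # 9-pixel component: kept
binary[5, 5] = 255        # 1-pixel component: removed
cleaned = remove_small_regions(binary)
```

Discarding components below 7 pixels suppresses isolated threshold noise while leaving plausible fibrosis candidates intact.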
Further, the specific steps of step 4 are:
Step 4.1: construct the statistical feature f_1 from the histogram feature f_a and frequency-domain statistical feature f_b of candidate region I_c;
Step 4.2: construct the shape feature f_2 from the rotation-invariant orthogonal moments of candidate region I_c and the mean and standard deviation of its morphological filtering;
Step 4.3: treat each row of gray values of candidate region I_c as a Wiener process, then compute the variance through the accumulation of the power spectral density to construct the fractal feature f_3.
Further, the specific steps of step 4.1 are as follows:
Step 4.11: obtain the histogram feature f_a of candidate region I_c, then compute the kurtosis, skewness, mean and variance of the histogram feature f_a;

kurtosis kurt:

kurt = (1/N) Σ_{idx=1}^{N} ((x_idx − x̄) / σ)⁴

skewness skew:

skew = (1/N) Σ_{idx=1}^{N} ((x_idx − x̄) / σ)³

where N is the length of the vector obtained by unrolling candidate region I_c row by row, i.e. the total number of pixels; x_idx is the gray value of the idx-th pixel of candidate region I_c; x̄ is the gray-level mean of candidate region I_c; and σ is its gray-level standard deviation;
Step 4.12: apply Gabor filtering to candidate region I_c in 4 directions {0°, 45°, 90°, 135°} at 2 scales, divide each filtered image I_c * g_{d,s} into 2 × 2 image blocks, compute the mean within each image block, and cascade the block means to construct the frequency-domain statistical feature f_b, where g_{d,s} denotes a Gabor function of direction d and scale s, d takes values in {0°, 45°, 90°, 135°}, s takes values in {1, 3}, * denotes convolution, and (c, r) denote the pixel coordinates of candidate region I_c;

Step 4.13: cascade the histogram feature f_a and the frequency-domain statistical feature f_b to obtain the statistical feature f_1:

f_1 = [f_a, f_b]
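The histogram statistics of step 4.11 can be sketched directly. A minimal numpy illustration with a hypothetical function name, using the population (1/N) definitions given above:

```python
import numpy as np

def histogram_stats(region: np.ndarray) -> np.ndarray:
    """Mean, variance, skewness and kurtosis of a region's gray values."""
    x = region.astype(float).ravel()   # unroll the region row by row
    mu, sigma = x.mean(), x.std()      # gray-level mean and std
    z = (x - mu) / sigma               # standardized gray values
    skew = float(np.mean(z ** 3))      # third standardized moment
    kurt = float(np.mean(z ** 4))      # fourth standardized moment
    return np.array([mu, x.var(), skew, kurt])

# One bright outlier among dark pixels gives a strongly right-skewed
# distribution, the kind of asymmetry a fibrotic patch can introduce.
region = np.array([0, 0, 0, 0, 255], dtype=float)
stats = histogram_stats(region)
```

For this toy region the mean is 51, the skewness 1.5 and the kurtosis 3.25, which can be verified by hand from the formulas above.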
Further, the specific steps of step 4.2 are as follows:
Step 4.21: with the centroid of candidate region I_c as the origin, affine-transform the coordinates into a given unit circle; compute the distance ρ_cr of pixel coordinates (c, r) from the origin, the phase angle θ_cr of pixel coordinates (c, r) with respect to the x-axis, and the number λ_d of pixels whose coordinates (c, r) fall within the unit circle; and build the radial distance R_nm(ρ_cr):

R_nm(ρ_cr) = Σ_{s=0}^{(n−|m|)/2} (−1)^s · (n − s)! / [ s! · ((n + |m|)/2 − s)! · ((n − |m|)/2 − s)! ] · ρ_cr^{n−2s}

where n is the order, m is the repetition of the azimuth angle, s is the summation index over distances between a pixel and the centroid of candidate region I_c, and ! denotes the factorial operation;

multiply each pixel of candidate region I_c by the corresponding radial distance R_nm(ρ_cr) and the corresponding x-axis phase-angle term θ_cr, sum the results over all pixels of candidate region I_c, multiply the sum by (n + 1), and divide by the number λ_d of pixels of candidate region I_c falling within the unit circle, to construct the rotation-invariant orthogonal moment:

Z_{n,m} = ((n + 1) / λ_d) · Σ_{(c,r)} I_c(c, r) · R_nm(ρ_cr) · e^{−jmθ_cr}
Step 4.22: apply opening and closing operations to candidate region I_c with a basic structuring operator, take the difference between the results and candidate region I_c, perform the morphological filtering transformation on the regions obtained after the difference according to the fibrosis corner distribution prior, and compute the mean and standard deviation of the transformed regions, i.e. the mean and standard deviation of the morphological filtering:

f_c = [mean(I_c − I_c ∘ st), σ(I_c − I_c ∘ st), mean(I_c • st − I_c), σ(I_c • st − I_c)]

where mean(·) denotes the mean, σ(·) denotes the standard deviation, st is the basic structuring operator (a linear structure of length 11 is used), ∘ denotes the opening operation, and • denotes the closing operation;
Step 4.23: cascade the rotation-invariant orthogonal moments with the mean and standard deviation of the morphological filtering transformation to obtain the shape feature f_2 of the candidate region:

f_2 = [Z_{n,m}, f_c]
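The radial distance R_nm of step 4.21 is the Zernike radial polynomial, which has well-known closed forms at low order that make a sketch easy to check. A minimal stdlib-only illustration; the function name is hypothetical:

```python
from math import factorial

def radial_poly(n: int, m: int, rho: float) -> float:
    """Zernike radial polynomial R_nm(rho) for order n and repetition m."""
    m = abs(m)
    if (n - m) % 2:          # R_nm vanishes when n - m is odd
        return 0.0
    total = 0.0
    for s in range((n - m) // 2 + 1):
        total += ((-1) ** s * factorial(n - s)
                  / (factorial(s)
                     * factorial((n + m) // 2 - s)
                     * factorial((n - m) // 2 - s))) * rho ** (n - 2 * s)
    return total

# Known low-order cases: R_00 = 1, R_11(rho) = rho, R_20(rho) = 2*rho^2 - 1.
```

Summing I_c(c, r) · R_nm(ρ_cr) · e^{−jmθ_cr} over the unit circle and scaling by (n + 1)/λ_d then yields the moment Z_{n,m} used in f_2.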
Further, the specific step of step 4.3 is: treat each row of gray values of candidate region I_c as a Wiener process B(t) and compute the fractal feature f_3 from the variance of its increments, which for such a process satisfies Var(B(t + Δt) − B(t)) ∝ |Δt|^{2H};

where B(t) denotes the Wiener process, Var(·) denotes the variance, C is a small constant taken as 0.0001, t denotes the position of each row of pixels of candidate region I_c, Δt denotes the pixel spacing, B(t) denotes the gray value of the pixel of candidate region I_c at position t, and B(t + Δt) denotes the gray value of the pixel of candidate region I_c at position t + Δt.
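Step 4.3's Wiener-process view of each row can be illustrated by estimating a Hurst-type exponent from the variance of gray-value increments at two spacings. This is a sketch under the assumption that the scaling law Var(B(t + Δt) − B(t)) ∝ Δt^{2H} is evaluated at Δt and 2Δt, with the patent's small constant C guarding the variances; the function and variable names are illustrative:

```python
import numpy as np

def hurst_from_increments(profile: np.ndarray, dt: int = 1,
                          C: float = 1e-4) -> float:
    """Estimate H from Var(B(t+dt) - B(t)) ~ dt^(2H) using spacings dt, 2*dt."""
    v1 = np.var(profile[dt:] - profile[:-dt]) + C         # variance at lag dt
    v2 = np.var(profile[2 * dt:] - profile[:-2 * dt]) + C  # variance at lag 2*dt
    # v2 / v1 = 2^(2H)  =>  H = log2(v2 / v1) / 2
    return float(0.5 * np.log(v2 / v1) / np.log(2.0))

rng = np.random.default_rng(0)
# Ordinary Brownian motion (cumulative sum of white noise) has H ≈ 0.5.
bm = np.cumsum(rng.standard_normal(200_000))
H = hurst_from_increments(bm)
```

Smoother rows give larger H and rougher, more fractal rows give smaller H, which is why such a statistic can separate fibrotic texture from healthy parenchyma.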
Compared with the prior art, the invention has the following beneficial effects:
The method extracts fibrosis candidate regions using the fibrosis corner distribution: it establishes a fibrosis corner distribution prior of the CT image and selects regions with dense corner distributions as candidate regions, eliminating the influence of a large number of irrelevant structures (including the chest wall, bones, the instrument bed, etc.) on the detection result while completely retaining the candidate regions; this stage is fast and efficient;
the statistical, shape and fractal features of the fibrotic region are used to model its characteristics; these features effectively distinguish fibrotic regions from healthy regions along multiple dimensions, improving the classification accuracy of the subsequently trained classifier by over 8%;
the invention designs a two-stage framework for detecting fibrotic regions: the first stage extracts candidate regions and locates all suspected regions; the second stage further screens the candidate regions by first establishing a feature model from the differences between fibrotic and healthy regions in gray level, shape and fractal characteristics, then training a classifier model on this feature model to remove healthy regions from the candidates. Compared with existing methods, the features obtained here give the detection framework a coarse-to-fine structure and improve the accuracy with which the classifier model extracts the fibrotic region.
Drawings
FIG. 1 is a schematic flow chart of the present invention;
FIG. 2 is a diagram of the fibrosis corner distribution prior in the present invention, where (a) is a healthy region, (b) is a fibrotic region, (c) is the corner distribution of the healthy region, and (d) is the corner distribution of the fibrotic region; a green cross marks a corner point;
FIG. 3 is a diagram of an 8-bit CT image obtained after format conversion according to the present invention;
FIG. 4 is a diagram of a modified U-Net network according to the present invention;
FIG. 5 is a graph of the segmentation results of the lung region mask of the present invention;
FIG. 6 shows the candidate region maps of idiopathic pulmonary fibrosis extracted by the present invention, where (a) is the image I_l obtained after second-order differential sharpening of the CT image in FIG. 3, (b) is the image I_n, (c) is the result I_con of constraining I_n to the lung, (d) is the binary image I_threshold, and (e) is the candidate region I_c;
Fig. 7 is a comparison of histogram distributions of healthy regions and healthy regions in the present invention, in which (a) in fig. 7 is a fibrosis region, (b) in fig. 7 is a healthy region, and (c) in fig. 7 is a histogram distribution of (a) in fig. 7 and (b) in fig. 7, where the horizontal axis represents CT values, the unit is HU, and the vertical axis N represents numbers;
fig. 8 is a frequency domain statistical characteristic of the present invention, wherein (a) in fig. 8 is the frequency domain statistical characteristic of (a) in fig. 6, and (b) in fig. 8 is the frequency domain statistical characteristic of (b) in fig. 6;
FIG. 9 shows the white and black top-hat transforms (i.e., the morphological filtering transformation) of a candidate region in the present invention, where (a) is the white top-hat transform of (a) in FIG. 6 and (b) is the black bottom-hat transform of (a) in FIG. 6;
fig. 10 is a diagram showing the extraction result of the fibrosis region after training the classifier based on the feature model obtained in the present invention.
Detailed Description
The invention will be further described with reference to the accompanying drawings and specific embodiments.
Examples
As shown in fig. 1, a method for extracting features of a fibrosis region in a lung CT image includes the following steps:
step 1: acquiring a plurality of CT images containing fibrosis lung parts, and establishing fibrosis corner point distribution prior of the CT images;
the method comprises the following specific steps:
Step 1.1: acquire multiple CT images containing fibrotic lungs; the CT images come from ILD data and hospital data;
Step 1.2: manually compare CT images containing fibrotic lungs with CT images of healthy lungs, observe that the fibrotic region exhibits many edges and large gray-value changes, divide the fibrotic region into local windows of size 32 × 32, count the corner points in each local window from the pixels on those edges, and establish the fibrosis corner distribution prior from the corner counts of the local windows. Specifically, within a 32 × 32 local window the fibrotic region contains more corner points than a healthy region: about 4.5 times as many as a healthy region of the same size, and its gray-value variance is about 4 times that of a healthy region of the same size. The corner distribution prior is shown in FIG. 2, where it can be observed that the corner points of the fibrotic region are significantly greater in number and distribution density than those of the healthy region.
Step 2: extracting a lung region mask based on the CT image to obtain a lung region;
the method comprises the following specific steps:
Step 2.1: set the window width and window level of the CT image to (1600 to 400) to obtain an 8-bit CT image I; as shown in FIG. 3, the converted CT image clearly displays the lungs;
Step 2.2: segment the lung region in the CT image obtained in step 2.1 with the improved U-Net network shown in FIG. 4, extract the lung region mask after segmentation, and obtain the lung region l_mask corresponding to CT image I. The segmentation result is shown in FIG. 5; the improved U-Net network segments well, finely preserving the lung parenchyma in complex areas such as the hilum and lung apex while removing the large airways;
the improved U-Net network comprises an encoding part, a decoding part and a 1 × 1 convolutional layer connected in sequence;
the encoding part and the decoding part each comprise 5 sequentially connected blocks, and each block consists of a convolutional layer, an activation layer and a batch normalization layer, where the convolution kernel size is 3 × 3 and the activation function is ReLU;
the blocks of the encoding part are connected by 2 × 2 max pooling downsampling;
the blocks of the decoding part are connected by 2 × 2 deconvolution upsampling, and during decoding each block of the decoding part is combined with the feature map produced by the corresponding block of the encoding part;
the loss function of the improved U-Net network is the sum of the IoU loss and the binary cross entropy.
Step 3: process the CT image and the lung region based on connected component analysis and morphological operations to obtain candidate regions;
the method comprises the following specific steps:
step 3.1: carrying out second order differential sharpening on the CT image I to obtain an image I l See fig. 6(a), the purpose of this step is to make the lesion region more obvious, and then produce more corner points, reduce the probability of missing detection;
step 3.2: dividing the image I_l obtained in step 3.1 into blocks of size 3 x 3, computing in each 3 x 3 block the horizontal and vertical gradients I_x and I_y of the image I_l, and computing the response value R of the image I_l at each pixel point by the following formula:
R = det(M) - kt[trace(M)]^2
wherein M is the matrix formed from the horizontal gradient I_x and the vertical gradient I_y computed in the 3 x 3 blocks of the image I_l, kt is a threshold controlling the number of corner points (0.3, selected after extensive screening), det(M) denotes the determinant of the matrix M, and trace(M) denotes the trace of the matrix M; when R is greater than a set threshold and is a local maximum, the pixel point is marked as a corner point, otherwise it is not a corner point;
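The Harris-style response of step 3.2 can be sketched as below. Sobel gradients and 3 x 3 averaging of the structure-tensor entries are assumptions about details the patent leaves open; note that kt = 0.3 follows the patent, whereas the classic Harris constant is usually 0.04-0.06.

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def harris_response(img: np.ndarray, kt: float = 0.3) -> np.ndarray:
    """R = det(M) - kt * trace(M)^2, with M the structure tensor built
    from horizontal/vertical gradients I_x, I_y pooled over 3x3 blocks."""
    f = img.astype(np.float64)
    ix = sobel(f, axis=1, mode="nearest")  # horizontal gradient I_x
    iy = sobel(f, axis=0, mode="nearest")  # vertical gradient I_y
    # 3x3 local averages of the structure-tensor entries
    sxx = uniform_filter(ix * ix, size=3)
    syy = uniform_filter(iy * iy, size=3)
    sxy = uniform_filter(ix * iy, size=3)
    det_m = sxx * syy - sxy * sxy
    trace_m = sxx + syy
    return det_m - kt * trace_m ** 2

resp = harris_response(np.full((8, 8), 7, dtype=np.uint8))  # flat image: R = 0 everywhere
```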
step 3.3: setting the gray value of the corner points to 1 and that of all other pixels to 0 to establish a label image I_label, and computing, for each pixel point, the number of corner points within its 32 x 32 neighborhood of the label image I_label as the intensity value of that pixel, obtaining the corner intensity map I_n, see FIG. 6(b); the larger the gray value of a pixel in the corner intensity map I_n, the more corner points surround that pixel:
I_n(c, r) = Σ_{i=-a}^{a-1} Σ_{j=-a}^{a-1} I_label(c + i, r + j)
wherein a denotes the neighborhood half-size and is taken as 16, and c and r denote the pixel point coordinates;
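The corner-intensity map of step 3.3 amounts to a box-filter count of corners in each 32 x 32 window; `corner_intensity` is an illustrative name.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def corner_intensity(label_img: np.ndarray, a: int = 16) -> np.ndarray:
    """Count corners (gray value 1 in I_label) inside each pixel's
    (2a)x(2a) neighbourhood; a = 16 gives the patent's 32x32 window."""
    size = 2 * a
    # uniform_filter returns the local mean; multiply back to get counts
    counts = uniform_filter(label_img.astype(np.float64), size=size) * size * size
    return np.rint(counts).astype(np.int64)

# every pixel a "corner": each 4x4 window (a=2) contains 16 corners
dens = corner_intensity(np.ones((8, 8)), a=2)
```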
step 3.4: taking the image I_n obtained in step 3.3 and the lung region l_mask obtained in step 2.2, computing their logical AND to obtain the lung-region-constrained image I_con, see FIG. 6(c); pixels outside the lung region are set to 0. Since fibrosis regions exist only inside the lung, this computation keeps only the pixels within the lung region:
I_con = l_mask ∧ I_n
wherein ∧ denotes the logical AND;
step 3.5: according to the fibrosis corner point distribution prior, performing adaptive threshold segmentation on the image I_con obtained in step 3.4, the adaptive threshold being 0.6 times the average gray value of the image I_con, to obtain a binary image I_threshold, see FIG. 6(d);
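Steps 3.4 and 3.5 (lung masking plus the 0.6-mean adaptive threshold) can be combined into one sketch. Whether the mean is taken over the whole image or only over lung pixels is not stated; the whole image is assumed here.

```python
import numpy as np

def lung_constrained_threshold(i_n: np.ndarray, l_mask: np.ndarray, k: float = 0.6) -> np.ndarray:
    """Restrict the corner-intensity map to the lung (logical AND with
    l_mask), then binarise at k times the mean gray value of I_con."""
    i_con = np.where(l_mask > 0, i_n, 0)          # I_con = l_mask AND I_n
    thr = k * i_con.mean()                        # adaptive threshold, 0.6 * mean
    return (i_con > thr).astype(np.uint8) * 255   # I_threshold

i_n = np.array([[10, 0], [0, 0]])
mask = np.array([[1, 1], [0, 0]])
bw = lung_constrained_threshold(i_n, mask)  # mean = 2.5, threshold = 1.5
```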
step 3.6: removing from the binary image I_threshold the connected regions whose total number of pixels is less than 7 to obtain the candidate region I_c, see FIG. 6(e). It can be observed that the fibrotic lesion areas are successfully extracted, although some healthy areas are extracted as well. The specific steps are as follows:
step 3.61: let T be an all-zero matrix of the same size as the binary image I_threshold;
step 3.62: scanning the binary image I_threshold from top to bottom and left to right, finding the first pixel point whose gray value is 255 and setting its coordinates as (x, y);
step 3.63: taking (x, y) as a seed point, finding the corresponding connected region G in the binary image I_threshold, and setting the gray values of the coordinates of that connected region to 1 in T;
step 3.64: finding the outer contour of the connected region G in T, marking the gray value of the pixel points of the outer contour as 2, and denoting the outer contour as p; the outer-contour pixel points are defined as:
p = {(x, y) | T(x, y) = 0 and T(x + bi1, y + bi2) = 1}
wherein (x, y) denotes the coordinates of the outer-contour pixel points of the connected region G in the binary image I_threshold, and bi1 and bi2 denote given offsets with value range {-1, 1};
step 3.65: performing region growing inside the outer contour p: arbitrarily select a pixel point p_idx from p, where the subscript idx denotes the idx-th pixel point of p; if, within the 3 x 3 neighborhood centered on p_idx, the gray values of all pixel points are equal to 0 or 1, set the gray value of p_idx to 3 and go to step 3.66; otherwise go directly to step 3.66;
step 3.66: if all seed points in the binary image I_threshold have been traversed, go to step 3.67; otherwise return to step 3.63 and continue searching the binary image I_threshold for the next seed point satisfying the condition;
step 3.67: setting the gray values in T other than 2 to 0 and the pixel points with gray value 2 to 255, computing the logical OR of T and the binary image I_threshold to obtain the filled image, and removing from the filled image the regions whose total number of pixels is less than 7 to obtain the candidate region I_c.
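Steps 3.61-3.67 fill each connected region and discard small ones. The sketch below uses scipy's `binary_fill_holes` and `label` as stand-ins for the seed-point contour filling described above; it reproduces the outcome, not the exact traversal.

```python
import numpy as np
from scipy.ndimage import binary_fill_holes, label

def candidate_regions(binary: np.ndarray, min_pixels: int = 7) -> np.ndarray:
    """Fill holes in each connected region and drop regions with fewer
    than min_pixels pixels (7 in the patent), yielding I_c."""
    filled = binary_fill_holes(binary > 0)
    labels, n = label(filled)
    out = np.zeros(labels.shape, dtype=np.uint8)
    for idx in range(1, n + 1):
        comp = labels == idx
        if comp.sum() >= min_pixels:
            out[comp] = 255
    return out

b = np.zeros((10, 10), dtype=np.uint8)
b[1:4, 1:4] = 1
b[2, 2] = 0          # a ring with a hole (8 pixels, filled to 9)
b[7, 7] = 1          # an isolated pixel (removed, < 7 pixels)
cand = candidate_regions(b)
```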
And 4, step 4: constructing a statistical feature f_1, a shape feature f_2 and a fractal feature f_3 based on the candidate region and the fibrosis corner point distribution prior of the CT image;
The method comprises the following specific steps:
step 4.1: constructing the statistical feature f_1 from the histogram feature f_a and the frequency-domain statistical feature f_b of the candidate region I_c;
The method comprises the following specific steps:
step 4.11: obtaining the histogram feature f_a of the candidate region I_c, then computing the kurtosis, skewness, mean and variance of the histogram feature f_a. These features are selected because, as FIG. 7 shows (FIG. 7(a) and FIG. 7(b) are regions of the same size, and FIG. 7(c) shows the corresponding histogram distributions), the histogram distribution of a fibrosis region is flatter and its average CT value larger, while the histogram of a healthy region is more symmetric and steeper; kurtosis, skewness, mean and variance quantify this difference;
kurtosis kurt:
kurt = [ (1/N) Σ_{index=1}^{N} (x_index − x̄)^4 ] / [ (1/N) Σ_{index=1}^{N} (x_index − x̄)^2 ]^2
skewness skew:
skew = [ (1/N) Σ_{index=1}^{N} (x_index − x̄)^3 ] / [ (1/N) Σ_{index=1}^{N} (x_index − x̄)^2 ]^{3/2}
wherein N is the length of the vector obtained by row-unrolling the candidate region I_c, i.e. the total number of pixels; x_index is the gray value of the index-th pixel of the candidate region I_c, and x̄ is the gray mean of the candidate region I_c;
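The four statistics of step 4.11 can be computed directly from the row-unrolled region; `histogram_stats` is an illustrative name, and the moment-ratio definitions above (standard kurtosis and skewness) are the assumed forms.

```python
import numpy as np

def histogram_stats(region: np.ndarray):
    """Kurtosis, skewness, mean and variance of the candidate region's
    gray values, with the region row-unrolled into a length-N vector."""
    x = region.astype(np.float64).ravel()
    mean = x.mean()
    var = x.var()                               # (1/N) sum (x - mean)^2
    centered = x - mean
    kurt = (centered ** 4).mean() / var ** 2    # 4th moment / variance^2
    skew = (centered ** 3).mean() / var ** 1.5  # 3rd moment / variance^1.5
    return kurt, skew, mean, var

kurt, skew, mean, var = histogram_stats(np.array([[1.0, 2.0, 3.0]]))
```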
step 4.12: applying Gabor filtering in 4 directions {0°, 45°, 90°, 135°} and 2 scales to the candidate region I_c, dividing each filtered image into 2 x 2 image blocks, computing the mean within each image block, and constructing the frequency-domain statistical feature f_b by concatenating the block means; see FIG. 8, where FIG. 8(a) shows the frequency-domain statistical feature of a fibrosis region and FIG. 8(b) that of a healthy region:
f_b = [ mean( I_c(c, r) * g_{d,s} ) ]
wherein g_{d,s} denotes Gabor functions of different directions and scales, d denotes the direction of the Gabor function and takes values in {0°, 45°, 90°, 135°}, s denotes the scale of the Gabor function and takes values in {1, 3}, * denotes the convolution operation, and (c, r) denotes the pixel coordinates of the candidate region I_c;
step 4.13: concatenating the histogram feature f_a and the frequency-domain statistical feature f_b to obtain the statistical feature f_1:
f_1 = [f_a, f_b]
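A sketch of the Gabor-based feature f_b. The kernel parameterisation (sigma equal to the scale, wavelength twice the scale, 9 x 9 support, real part only) and reading "dividing into 2 x 2 image blocks" as a 2 x 2 grid of tiles are assumptions; the patent gives neither detail.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(theta: float, scale: float, size: int = 9) -> np.ndarray:
    """Real-valued Gabor kernel g_{d,s} at orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    sigma, wavelength = scale, 2.0 * scale  # assumed parameterisation
    return np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / wavelength)

def frequency_features(region: np.ndarray, block: int = 2) -> np.ndarray:
    """f_b: 4 directions x 2 scales, each response tiled into a
    block x block grid whose tile means are concatenated."""
    feats = []
    for theta in np.deg2rad([0, 45, 90, 135]):
        for s in (1.0, 3.0):
            resp = convolve(region.astype(np.float64), gabor_kernel(theta, s), mode="nearest")
            h, w = resp.shape
            for bi in range(block):
                for bj in range(block):
                    tile = resp[bi * h // block:(bi + 1) * h // block,
                                bj * w // block:(bj + 1) * w // block]
                    feats.append(tile.mean())
    return np.array(feats)

feats = frequency_features(np.arange(64, dtype=np.float64).reshape(8, 8))
# 4 directions x 2 scales x 4 tiles = 32 values
```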
Step 4.2: constructing the shape feature f_2 from the rotation-invariant orthogonal distance of the candidate region I_c and the mean and standard deviation of its morphological filtering;
The method comprises the following specific steps:
step 4.21: taking the centroid of the candidate region I_c as the origin, affine-transforming its coordinates into a given unit circle, computing the distance ρ_cr between the pixel point coordinates (c, r) and the origin, the phase angle θ_cr between the pixel point coordinates (c, r) and the x-axis, and the number λ_d of pixels whose coordinates (c, r) fall within the unit circle, and constructing the radial distance R_nm(ρ_cr):
R_nm(ρ_cr) = Σ_{s=0}^{(n−|m|)/2} [ (−1)^s (n − s)! / ( s! ((n + |m|)/2 − s)! ((n − |m|)/2 − s)! ) ] ρ_cr^{n−2s}
wherein n is the order, m is the multiplicity of the azimuth angle, s is the summation index, and ! denotes the factorial operation;
each pixel of the candidate region I_c is multiplied by the corresponding radial distance R_nm(ρ_cr) and the corresponding x-axis phase angle term, the results are summed over the candidate region I_c, the sum is multiplied by the order n plus 1, and the result is divided by the number λ_d of pixels falling within the unit circle, constructing the rotation-invariant orthogonal distance:
Z_{n,m} = ((n + 1)/λ_d) Σ_{(c,r)} I_c(c, r) R_nm(ρ_cr) e^{−jmθ_cr}
step 4.22: performing opening and closing operations on the candidate region I_c with a basic structuring element (the result is shown in FIG. 9), subtracting the results from the candidate region I_c, applying the morphological filtering transformation to the difference regions according to the fibrosis corner point distribution prior, and computing the mean and standard deviation of the morphological filtering from the mean and standard deviation of the transformed regions:
f_c = [mean(I_c − I_c ∘ st), σ(I_c − I_c ∘ st), mean(I_c • st − I_c), σ(I_c • st − I_c)]
wherein mean(·) denotes the mean, σ(·) denotes the standard deviation, st is the basic structuring element (a linear structure of length 11 is used), ∘ denotes the opening operation and • denotes the closing operation;
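The open/close residue features f_c of step 4.22 can be sketched with scipy's grey-morphology operators. The line element's horizontal orientation is an assumption, since the patent states only its length (11).

```python
import numpy as np
from scipy.ndimage import grey_closing, grey_opening

def shape_filter_features(region: np.ndarray, length: int = 11) -> np.ndarray:
    """f_c: means and standard deviations of the residues
    I_c - (I_c o st) and (I_c . st) - I_c, with st a linear structuring
    element of length 11 (assumed horizontal)."""
    st = np.ones((1, length), dtype=bool)            # linear structure, length 11
    f = region.astype(np.float64)
    open_res = f - grey_opening(f, footprint=st)     # white top-hat residue
    close_res = grey_closing(f, footprint=st) - f    # black top-hat residue
    return np.array([open_res.mean(), open_res.std(),
                     close_res.mean(), close_res.std()])

vals = shape_filter_features(np.full((3, 20), 5.0))  # constant region -> all residues zero
```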
step 4.23: concatenating the rotation-invariant orthogonal distance with the mean and standard deviation of the morphological filtering transformation to obtain the shape feature f_2 of the candidate region:
f_2 = [Z_{n,m}, f_c]
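The rotation-invariant orthogonal distance of step 4.21 matches the classical Zernike moment. The sketch below approximates the affine map into the unit circle by normalising each pixel's centroid distance by the maximum distance; that normalisation, and the function names, are assumptions.

```python
import numpy as np
from math import factorial

def radial_poly(rho: np.ndarray, n: int, m: int) -> np.ndarray:
    """Zernike radial polynomial R_nm(rho); ! is the factorial."""
    m = abs(m)
    out = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        coef = ((-1) ** s * factorial(n - s)
                / (factorial(s) * factorial((n + m) // 2 - s) * factorial((n - m) // 2 - s)))
        out += coef * rho ** (n - 2 * s)
    return out

def zernike_magnitude(region: np.ndarray, n: int, m: int) -> float:
    """|Z_{n,m}|: pixels mapped into the unit circle about the centroid,
    weighted by R_nm(rho) * e^{-j m theta}, summed, and scaled by
    (n + 1) / lambda_d (the pixel count inside the circle)."""
    f = region.astype(np.float64)
    h, w = f.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    total = f.sum()
    cy, cx = (ys * f).sum() / total, (xs * f).sum() / total  # centroid as origin
    dist = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2)
    rho = dist / dist.max()                                  # map into unit circle
    theta = np.arctan2(ys - cy, xs - cx)                     # phase angle with x-axis
    inside = rho <= 1.0
    lam = int(inside.sum())                                  # lambda_d
    z = ((n + 1) / lam) * np.sum(f[inside] * radial_poly(rho[inside], n, m)
                                 * np.exp(-1j * m * theta[inside]))
    return float(abs(z))

r = np.arange(16).reshape(4, 4) + 1.0
z1 = zernike_magnitude(r, 2, 2)
z2 = zernike_magnitude(np.rot90(r), 2, 2)  # magnitude is rotation invariant
```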
Step 4.3: candidate region I c The gray value of each line is used as a wiener process, and then the variance is calculated through the accumulation of the power spectral density to construct a fractal feature f 3 ;
Fractal features f 3 The calculation formula of (2) is as follows:
wherein var (eta) represents the variance, C represents a small constant, 0.0001 is taken to avoid the denominator being 0, and t represents the candidate area I c Image of each lineThe position of the pixel, Δ t represents the pixel interval, B (t) represents the candidate region I c The gray scale value of the pixel at position t, B (t + Δ t) represents the candidate region I c The gray value of the pixel at position t + Δ t.
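A sketch of the fractal feature. Since the patent's formula survives only as its symbol list, the log(var)/(2 log Δt + C) normalisation (a Hurst-exponent-style reading of fractional Brownian motion increments) is an assumption, flagged again in the comments.

```python
import numpy as np

def fractal_feature(region: np.ndarray, dt: int = 2, c: float = 1e-4) -> float:
    """f_3 sketch: treat each row's gray values as a Wiener-type process
    B(t) and score the variance of the increments B(t + dt) - B(t).
    The normalisation below is an assumed reconstruction, not the
    patent's (image-only) formula."""
    f = region.astype(np.float64)
    inc = f[:, dt:] - f[:, :-dt]              # B(t + dt) - B(t), per row
    v = inc.var()
    return float(np.log(v + c) / (2.0 * np.log(dt) + c))

flat_val = fractal_feature(np.full((3, 10), 5.0))                        # no fluctuation
rough_val = fractal_feature(np.array([[0.0, 50.0, 0.0, 200.0, 10.0, 90.0]]))  # strong fluctuation
```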
The above are merely representative examples of the many specific applications of the present invention and do not limit its scope in any way. All technical solutions formed by equivalent transformation or substitution fall within the protection scope of the present invention.
Claims (10)
1. A method for extracting features of a fibrosis region of a lung CT image, characterized by comprising the following steps:
step 1: acquiring a plurality of CT images of lungs containing fibrosis, and establishing a fibrosis corner point distribution prior of the CT images;
step 2: extracting a lung region mask based on the CT image to obtain a lung region;
step 3: processing the CT image and the lung region based on connected component analysis and morphological operations to obtain a candidate region;
step 4: constructing a statistical feature f_1, a shape feature f_2 and a fractal feature f_3 based on the candidate region and the fibrosis corner point distribution prior of the CT image;
step 5: concatenating the statistical feature f_1, the shape feature f_2 and the fractal feature f_3 to construct a fibrosis feature model f.
2. The method for extracting features of fibrotic regions for pulmonary CT images as claimed in claim 1, wherein the step 1 comprises the following steps:
step 1.1: acquiring a plurality of CT images of lung parts containing fibrosis;
step 1.2: manually comparing CT images containing fibrotic lungs with CT images of healthy lungs, observing that fibrosis regions contain many edges and large gray-value changes, dividing the fibrosis regions into local windows of size 32 x 32, obtaining the number of corner points in each local window from the pixels on those edges, and establishing the fibrosis corner point distribution prior from the number of corner points of each local window; specifically, the pixels of a fibrosis region produce more corner points in a 32 x 32 local window than those of a healthy region: the number of corner points of a fibrosis region is 4.5 times that of a healthy region of the same size, and the gray-value variance of a fibrosis region is 4 times that of a healthy region of the same size.
3. The method as claimed in claim 2, wherein the step 2 comprises the following steps:
step 2.1: fixing the window width and window level of the CT image as (1600, -400) to obtain a CT image I in 8-bit format;
step 2.2: segmenting the lung region in the CT image obtained in step 2.1 with the improved U-Net network, extracting the lung region mask after segmentation, and obtaining the lung region l_mask corresponding to the CT image I.
4. The method of claim 3, wherein the modified U-Net network of step 2.2 comprises a coding part, a decoding part and a convolution layer of 1 x 1 connected in sequence;
the encoding part and the decoding part each comprise 5 blocks connected in sequence, and each block comprises, in order, a convolution layer, an activation layer and a batch normalization layer, wherein the convolution kernel size is 3 x 3 and the activation function is ReLU;
the blocks of the encoding part are connected by 2 x 2 max pooling downsampling;
the blocks of the decoding part are connected by 2 x 2 deconvolution upsampling, and during decoding each block of the decoding part is combined with the feature map produced by the corresponding block of the encoding part;
the loss function of the improved U-Net network is the sum of the IoU loss and the binary cross-entropy.
5. The method for extracting features of a fibrosis region of a lung CT image as claimed in claim 4, characterized in that step 3 comprises the following specific steps:
step 3.1: applying second-order differential sharpening to the CT image I to obtain an image I_l;
step 3.2: dividing the image I_l obtained in step 3.1 into blocks of size 3 x 3, computing in each 3 x 3 block the horizontal and vertical gradients I_x and I_y of the image I_l, and computing the response value R of the image I_l at each pixel point by the following formula:
R = det(M) - kt[trace(M)]^2
wherein M is the matrix formed from the horizontal gradient I_x and the vertical gradient I_y computed in the 3 x 3 blocks of the image I_l, kt is a threshold controlling the number of corner points (0.3, selected after extensive screening), det(M) denotes the determinant of the matrix M, and trace(M) denotes the trace of the matrix M; when R is greater than a set threshold and is a local maximum, the pixel point is marked as a corner point, otherwise it is not a corner point;
step 3.3: based on the image I_l, setting the gray value of the corner points to 1 and that of the remaining pixels to 0 to establish a label image I_label, and computing, for each pixel point, the number of corner points within its 32 x 32 neighborhood of the label image I_label as the intensity value of that pixel, obtaining the map I_n:
I_n(c, r) = Σ_{i=-a}^{a-1} Σ_{j=-a}^{a-1} I_label(c + i, r + j)
wherein a denotes the neighborhood half-size and is taken as 16, and c and r denote the pixel point coordinates;
step 3.4: taking the image I_n obtained in step 3.3 and the lung region l_mask obtained in step 2.2, computing their logical AND to obtain the lung-region-constrained image I_con:
I_con = l_mask ∧ I_n
wherein ∧ denotes the logical AND;
step 3.5: according to the fibrosis corner point distribution prior, performing adaptive threshold segmentation on the image I_con obtained in step 3.4, the adaptive threshold being 0.6 times the average gray value of the image I_con, to obtain a binary image I_threshold;
step 3.6: obtaining the candidate region I_c from the binary image I_threshold.
6. The method for extracting features of a fibrosis region of a lung CT image as claimed in claim 5, characterized in that step 3.6 comprises the following specific steps:
step 3.61: let T be an all-zero matrix of the same size as the binary image I_threshold;
step 3.62: scanning the binary image I_threshold from top to bottom and left to right, finding the first pixel point whose gray value is 255 and setting its coordinates as (x, y);
step 3.63: taking (x, y) as a seed point, finding the corresponding connected region G in the binary image I_threshold, and setting the gray values of the coordinates of that connected region to 1 in T;
step 3.64: finding the outer contour of the connected region G in T, marking the gray value of the pixel points of the outer contour as 2, and denoting the outer contour as p; the outer-contour pixel points are defined as:
p = {(x, y) | T(x, y) = 0 and T(x + bi1, y + bi2) = 1}
wherein (x, y) denotes the coordinates of the outer-contour pixel points of the connected region G in the binary image I_threshold, and bi1 and bi2 denote given offsets with value range {-1, 1};
step 3.65: performing region growing inside the outer contour p: arbitrarily select a pixel point p_idx from p, where the subscript idx denotes the idx-th pixel point of p; if, within the 3 x 3 neighborhood centered on p_idx, the gray values of all pixel points are equal to 0 or 1, set the gray value of p_idx to 3 and go to step 3.66; otherwise go directly to step 3.66;
step 3.66: if all seed points in the binary image I_threshold have been traversed, go to step 3.67; otherwise return to step 3.63 and continue searching the binary image I_threshold for the next seed point satisfying the condition;
step 3.67: setting the gray values in T other than 2 to 0 and the pixel points with gray value 2 to 255, computing the logical OR of T and the binary image I_threshold to obtain the filled image, and removing from the filled image the regions whose total number of pixels is less than 7 to obtain the candidate region I_c.
7. The method for extracting features of a fibrosis region of a lung CT image as claimed in claim 6, characterized in that step 4 comprises the following specific steps:
step 4.1: constructing the statistical feature f_1 from the histogram feature f_a and the frequency-domain statistical feature f_b of the candidate region I_c;
step 4.2: constructing the shape feature f_2 from the rotation-invariant orthogonal distance of the candidate region I_c and the mean and standard deviation of its morphological filtering;
step 4.3: treating the gray values of each row of the candidate region I_c as a Wiener process, then computing the variance through the accumulation of the power spectral density to construct the fractal feature f_3.
8. The method for extracting features of a fibrosis region of a lung CT image as claimed in claim 7, characterized in that step 4.1 comprises the following specific steps:
step 4.11: obtaining the histogram feature f_a of the candidate region I_c, then computing the kurtosis, skewness, mean and variance of the histogram feature f_a;
kurtosis kurt:
kurt = [ (1/N) Σ_{index=1}^{N} (x_index − x̄)^4 ] / [ (1/N) Σ_{index=1}^{N} (x_index − x̄)^2 ]^2
skewness skew:
skew = [ (1/N) Σ_{index=1}^{N} (x_index − x̄)^3 ] / [ (1/N) Σ_{index=1}^{N} (x_index − x̄)^2 ]^{3/2}
wherein N is the length of the vector obtained by row-unrolling the candidate region I_c, i.e. the total number of pixels; x_index is the gray value of the index-th pixel of the candidate region I_c, and x̄ is the gray mean of the candidate region I_c;
step 4.12: applying Gabor filtering in 4 directions {0°, 45°, 90°, 135°} and 2 scales to the candidate region I_c, dividing each filtered image into 2 x 2 image blocks, computing the mean within each image block, and concatenating the block means to construct the frequency-domain statistical feature f_b:
f_b = [ mean( I_c(c, r) * g_{d,s} ) ]
wherein g_{d,s} denotes Gabor functions of different directions and scales, d denotes the direction of the Gabor function and takes values in {0°, 45°, 90°, 135°}, s denotes the scale of the Gabor function and takes values in {1, 3}, * denotes the convolution operation, and (c, r) denotes the pixel coordinates of the candidate region I_c;
step 4.13: concatenating the histogram feature f_a and the frequency-domain statistical feature f_b to obtain the statistical feature f_1:
f_1 = [f_a, f_b]
9. The method for extracting features of a fibrosis region of a lung CT image as claimed in claim 8, characterized in that step 4.2 comprises the following specific steps:
step 4.21: taking the centroid of the candidate region I_c as the origin, affine-transforming its coordinates into a given unit circle, computing the distance ρ_cr between the pixel point coordinates (c, r) and the origin, the phase angle θ_cr between the pixel point coordinates (c, r) and the x-axis, and the number λ_d of pixels whose coordinates (c, r) fall within the unit circle, and constructing the radial distance R_nm(ρ_cr):
R_nm(ρ_cr) = Σ_{s=0}^{(n−|m|)/2} [ (−1)^s (n − s)! / ( s! ((n + |m|)/2 − s)! ((n − |m|)/2 − s)! ) ] ρ_cr^{n−2s}
wherein n is the order, m is the multiplicity of the azimuth angle, s is the summation index, and ! denotes the factorial operation;
multiplying each pixel of the candidate region I_c by the corresponding radial distance R_nm(ρ_cr) and the corresponding x-axis phase angle term, summing the results over the candidate region I_c, multiplying the sum by the order n plus 1, and dividing the result by the number λ_d of pixels falling within the unit circle, to construct the rotation-invariant orthogonal distance:
Z_{n,m} = ((n + 1)/λ_d) Σ_{(c,r)} I_c(c, r) R_nm(ρ_cr) e^{−jmθ_cr}
step 4.22: performing opening and closing operations on the candidate region I_c with a basic structuring element, subtracting the results from the candidate region I_c, applying the morphological filtering transformation to the difference regions according to the fibrosis corner point distribution prior, obtaining the mean and standard deviation of the transformed regions, and computing the mean and standard deviation of the morphological filtering:
f_c = [mean(I_c − I_c ∘ st), σ(I_c − I_c ∘ st), mean(I_c • st − I_c), σ(I_c • st − I_c)]
wherein mean(·) denotes the mean, σ(·) denotes the standard deviation, st is the basic structuring element (a linear structure of length 11), ∘ denotes the opening operation and • denotes the closing operation;
step 4.23: concatenating the rotation-invariant orthogonal distance with the mean and standard deviation of the morphological filtering transformation to obtain the shape feature f_2 of the candidate region:
f_2 = [Z_{n,m}, f_c]
10. The method for extracting features of a fibrosis region of a lung CT image as claimed in claim 9, characterized in that in step 4.3 the fractal feature f_3 is computed as:
f_3 = log( var( B(t + Δt) − B(t) ) ) / ( 2 log(Δt) + C )
wherein var(·) denotes the variance, C is a small constant taken as 0.0001 to avoid a zero denominator, t denotes the position of each row's pixels in the candidate region I_c, Δt denotes the pixel interval, and B(t) and B(t + Δt) denote the gray values of the pixels of the candidate region I_c at positions t and t + Δt, treated as a Wiener process.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210856483.XA CN115049850B (en) | 2022-07-20 | 2022-07-20 | Feature extraction method for fibrosis region of lung CT image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115049850A true CN115049850A (en) | 2022-09-13 |
CN115049850B CN115049850B (en) | 2024-06-14 |
Family
ID=83167397
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210856483.XA Active CN115049850B (en) | 2022-07-20 | 2022-07-20 | Feature extraction method for fibrosis region of lung CT image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115049850B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116645372A (en) * | 2023-07-27 | 2023-08-25 | 汉克威(山东)智能制造有限公司 | Intelligent detection method and system for appearance image of brake chamber |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107563998A (en) * | 2017-08-30 | 2018-01-09 | 上海联影医疗科技有限公司 | Medical image cardiac image processing method |
CN108765409A (en) * | 2018-06-01 | 2018-11-06 | 电子科技大学 | A kind of screening technique of the candidate nodule based on CT images |
CN109146854A (en) * | 2018-08-01 | 2019-01-04 | 东北大学 | A kind of analysis method of Lung neoplasm and pulmonary vascular association relationship |
CN110288616A (en) * | 2019-07-01 | 2019-09-27 | 电子科技大学 | A method of based on dividing shape and RPCA to divide hard exudate in eye fundus image |
US20200085382A1 (en) * | 2017-05-30 | 2020-03-19 | Arterys Inc. | Automated lesion detection, segmentation, and longitudinal identification |
CN113012124A (en) * | 2021-03-15 | 2021-06-22 | 大连海事大学 | Shoe mark hole and insert feature detection and description method |
CN113140035A (en) * | 2021-04-27 | 2021-07-20 | 青岛百洋智能科技股份有限公司 | Full-automatic human cerebral vessel reconstruction method and device based on multi-modal image fusion technology |
CN113706492A (en) * | 2021-08-20 | 2021-11-26 | 复旦大学 | Lung parenchyma automatic segmentation method based on chest CT image |
WO2022063198A1 (en) * | 2020-09-24 | 2022-03-31 | 上海健康医学院 | Lung image processing method, apparatus and device |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |