CN113706687A - Nose environment modeling method and device for path planning - Google Patents


Info

Publication number
CN113706687A
CN113706687A (application CN202110809521.1A)
Authority
CN
China
Prior art keywords
edge
point
axis
image
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110809521.1A
Other languages
Chinese (zh)
Inventor
杨健
吴文灿
艾丹妮
范敬凡
宋红
涂云海
施节亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Eye Hospital of Wenzhou Medical University
Original Assignee
Beijing Institute of Technology BIT
Eye Hospital of Wenzhou Medical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT, Eye Hospital of Wenzhou Medical University filed Critical Beijing Institute of Technology BIT
Priority to CN202110809521.1A priority Critical patent/CN113706687A/en
Publication of CN113706687A publication Critical patent/CN113706687A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
                • G06T 3/00 Geometric image transformations in the plane of the image
                    • G06T 3/60 Rotation of whole images or parts thereof
                        • G06T 3/608 Rotation of whole images or parts thereof by skew deformation, e.g. two-pass or three-pass rotation
                • G06T 5/00 Image enhancement or restoration
                    • G06T 5/70 Denoising; Smoothing
                • G06T 7/00 Image analysis
                    • G06T 7/10 Segmentation; Edge detection
                        • G06T 7/13 Edge detection
                • G06T 2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T 2207/10 Image acquisition modality
                        • G06T 2207/10072 Tomographic images
                            • G06T 2207/10081 Computed x-ray tomography [CT]
        • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
            • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
                • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
                    • G16H 50/50 ICT specially adapted for medical diagnosis, medical simulation or medical data mining for simulation or modelling of medical disorders

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Geometry (AREA)
  • Pathology (AREA)
  • Computer Graphics (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Image Processing (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

A nose environment modeling method and device for path planning enable accurate construction of a nose environment map. The method comprises the following steps: (1) filtering and denoising the CT image as preprocessing; (2) using an axial region growing method combined with edge detection, extracting edge point pairs of the segmentation target along a coordinate-axis direction, calculating the distance between each edge point pair, taking the pixel points between the maximum-distance edge point pair as seed points, and growing axially to the target edge, thereby rapidly segmenting important tissues and organs; (3) extracting the surface contours of the important tissues and organs as obstacles and map boundaries, and performing three-dimensional surface reconstruction with an improved Marching Cubes algorithm.

Description

Nose environment modeling method and device for path planning
Technical Field
The invention relates to the technical field of medical image processing, and in particular to a nose environment modeling method and a nose environment modeling device for path planning.
Background
During surgery, important tissues around the nose must be avoided; however, the nasal environment is complex and variable, making a nose environment map difficult to construct.
Different tissues of the human or animal body absorb X-rays differently; the three-dimensional gray-level distribution obtained by computed tomography is the CT image. In CT image space, the X axis is conventionally the left-right direction, and the section perpendicular to it, dividing the image into left and right parts, is the sagittal plane; the Y axis is the front-back direction, and the section perpendicular to it, dividing the image into front and back parts, is the coronal plane; the Z axis is the up-down direction, and the section perpendicular to it, dividing the image into upper and lower parts, is the horizontal (transverse) plane.
Disclosure of Invention
To overcome the defects of the prior art, the invention provides a nose environment modeling method for path planning that enables accurate construction of a nose environment map.
The technical scheme of the invention is as follows: the nose environment modeling method for path planning comprises the following steps:
(1) filtering and denoising the CT image as preprocessing;
(2) using an axial region growing method combined with edge detection, extracting edge point pairs of the segmentation target along a coordinate-axis direction, calculating the distance between each edge point pair, taking the pixel points between the maximum-distance edge point pair as seed points, and growing axially to the target edge, thereby rapidly segmenting important tissues and organs;
(3) extracting the surface contours of the important tissues and organs as obstacles and map boundaries, and performing three-dimensional surface reconstruction with an improved Marching Cubes algorithm.
By filtering and denoising the CT image, then using the axial region growing method combined with edge detection to extract edge point pairs of the segmentation target along a coordinate-axis direction, calculating the distance between each edge point pair, taking the pixel points between the maximum-distance edge point pair as seed points, and growing axially to the target edge, the invention rapidly segments important tissues and organs such as brain tissue, intraorbital tissues, the inferior turbinates and the nasal septum. The surface contours of these important tissues are then extracted as obstacles and map boundaries, and three-dimensional surface reconstruction with an improved Marching Cubes algorithm achieves accurate modeling, and thus accurate construction, of the nasal environment map.
Also provided is a nose environment modeling apparatus for path planning, comprising:
a preprocessing module configured to filter and denoise the CT image;
an axial region growing module configured to extract edge point pairs of the segmentation target along a coordinate-axis direction using an axial region growing method combined with edge detection, calculate the distance between each edge point pair, take the pixel points between the maximum-distance edge point pair as seed points, and grow axially to the target edge, thereby rapidly segmenting important tissues and organs; and
a reconstruction module configured to extract the surface contours of the important tissues and organs as obstacles and map boundaries and to perform three-dimensional surface reconstruction with an improved Marching Cubes algorithm.
Drawings
Fig. 1 is a flow chart of a path planning nose environment modeling method according to the present invention.
Fig. 2 shows an edge detector for detecting pairs of edge points of soft tissue surrounded by bone.
Detailed Description
As shown in fig. 1, the nose environment modeling method for path planning comprises the following steps:
(1) filtering and denoising the CT image as preprocessing;
(2) using an axial region growing method combined with edge detection, extracting edge point pairs of the segmentation target along a coordinate-axis direction, calculating the distance between each edge point pair, taking the pixel points between the maximum-distance edge point pair as seed points, and growing axially to the target edge, thereby rapidly segmenting important tissues and organs;
(3) extracting the surface contours of the important tissues and organs as obstacles and map boundaries, and performing three-dimensional surface reconstruction with an improved Marching Cubes algorithm.
By filtering and denoising the CT image, then using the axial region growing method combined with edge detection to extract edge point pairs of the segmentation target along a coordinate-axis direction, calculating the distance between each edge point pair, taking the pixel points between the maximum-distance edge point pair as seed points, and growing axially to the target edge, the invention rapidly segments important tissues and organs such as brain tissue, intraorbital tissues, the inferior turbinates and the nasal septum. The surface contours of these important tissues are then extracted as obstacles and map boundaries, and three-dimensional surface reconstruction with an improved Marching Cubes algorithm achieves accurate modeling, and thus accurate construction, of the nasal environment map.
Preferably, in the step (1), the tilted CT image is aligned by a rotation transformation. With the origin as the rotation center, any voxel in the CT image is rotated about the X axis, the Y axis and the Z axis by angles α, β and γ respectively; the rotation transformation matrix of its position coordinates is:
R(α, β, γ) = Rx(α) · Ry(β) · Rz(γ)    (2.1)

$$
R_x(\alpha)=\begin{bmatrix}1&0&0\\0&\cos\alpha&\sin\alpha\\0&-\sin\alpha&\cos\alpha\end{bmatrix},\quad
R_y(\beta)=\begin{bmatrix}\cos\beta&0&-\sin\beta\\0&1&0\\\sin\beta&0&\cos\beta\end{bmatrix},\quad
R_z(\gamma)=\begin{bmatrix}\cos\gamma&\sin\gamma&0\\-\sin\gamma&\cos\gamma&0\\0&0&1\end{bmatrix}
$$
According to the right-hand rule, when the thumb points in the positive direction of a coordinate axis, the direction of the four fingers is the positive direction of rotation, and the rotation angle is positive. The tilt correction takes the image center as the rotation center: the image center is translated to the origin, and after rotation about the coordinate axes the image is translated back to its original position. Let the center position coordinates of the CT image be Pc = [xc yc zc] and the position coordinates of a voxel be P = [x y z]; the position coordinates after the rotation transformation are:
P′ = (P − Pc) · R(α, β, γ) + Pc    (2.2)
The CT image features change before and after correction as follows. Rotation about the Z axis corrects left-right tilt: after correction, the projection of the cranium on the horizontal or sagittal plane is longest, and the Y-axis coordinate of the foremost point of the facial skin edge is minimal. Rotation about the X axis corrects up-down tilt: after correction, the foremost point of the skin edge changes from the chin to the nose tip or upper lip, and the horizontal height of the foremost point of the facial skin edge increases markedly. Rotation about the Y axis corrects lateral skew: after correction, the cranium is bilaterally symmetric, and the horizontal heights of the leftmost and rightmost points of the bone edge are approximately equal. The values of α, β and γ are determined accordingly: traverse candidate γ values from −90° to 90°, applying the rotation transformation and detecting the foremost point of the facial skin edge; the angle at which its Y-axis coordinate is minimal is γ. Traverse candidate α values from −90° to 0°, applying the rotation transformation and detecting the foremost point of the facial skin edge; the angle at which its Z-axis coordinate increases markedly is α. Traverse candidate β values from −90° to 90°, applying the rotation transformation and detecting the leftmost and rightmost points of the bone edge; the angle at which their Z-axis coordinates are approximately equal is β. The coordinates of all voxel points of the CT image are then updated according to formulas (2.1) and (2.2) to achieve automatic tilt correction.
Preferably, in the step (1), a median filtering method is adopted to perform denoising preprocessing on the CT image.
Preferably, in the step (2), a three-dimensional CT image f(x, y, z) of size M × N × H is segmented by the axial region growing method as follows:
(2.1) automatically locate, or interactively select, the region of interest containing the segmentation target;
(2.2) select the coordinate axis A1 along which the axial edge point pair distance is larger, and slice the three-dimensional image into a series of two-dimensional slices along another axis A2;
(2.3) scan each row or column of each two-dimensional slice, detecting the edge point pair with the maximum distance along the A1 direction; if a row has no edge point pair, the distance is 0; if there are multiple pairs, the pair with the maximum distance is taken;
(2.4) the maximum A1-direction edge point pair distances of all slices form a one-dimensional sequence; find its longest contiguous subsequence whose distances exceed a threshold, the two end points of the subsequence being the edge point pair with the maximum distance along the A2 direction, thereby achieving segmentation on the A2 axis;
(2.5) in the slices of the subsequence, take the points on the connecting line of the maximum-distance edge point pair as seed points, and grow from them along the positive and negative A3 directions to the target edge, achieving segmentation on the A3 axis; where no edge is detected, level with the edge of the adjacent seed point;
(2.6) find the point pair with the maximum distance along the A3 axis, take the points between them as seed points, and grow along the positive and negative A1 directions to the target edge, achieving segmentation on the A1 axis; where no edge point is detected, level with the edge of the adjacent seed point;
(2.7) after segmenting along the coordinate axes A1 and A3 respectively, perform connected-region analysis of the segmentation result on each slice, remove outliers, and complete the segmentation.
Preferably, in the step (3), for the voxels of the surface contour of the important tissue, the median vmed of the gray values of all pixels in the neighborhood is calculated with reference to formula (2.5) and used as the desired isosurface value:

g(i, j, k) = Median({f(i+u, j+v, k+w) : −N ≤ u, v, w ≤ N})    (2.5)

where f(i+u, j+v, k+w) are the pixel values in the neighborhood of the original pixel.
Polynomial curve fitting over all pixels in the voxel neighborhood replaces linear interpolation; the polynomial function is:

f(x, W) = Σi wi · x^i,  i = 0, 1, …, m    (2.6)
where W = [w0, w1, …, wm] is the set of polynomial coefficients, x is the voxel X-axis coordinate, and m is the polynomial order, taken as 3 to reduce computation. The fitting error function is:

E(W) = Σn (f(xn, W) − vn)² + λ‖W‖²    (2.7)

where vn are the pixel gray values in the voxel neighborhood; over all pixels adjacent to a voxel of the three-dimensional CT image, N = 27; λ is the adjustment factor of the regularization penalty term, taken as 0.01.
Preferably, in the step (3), since f(x, W) is a linear function of W, E(W) is a convex function whose minimum is attained where its first derivative is 0; minimizing the error function E(W) yields the polynomial coefficients W. Setting f(x, W) = vmed and solving the equation within the voxel neighborhood gives the solution xmed, the X-axis coordinate of the reconstructed surface point; the Y-axis and Z-axis coordinates are solved in the same way as the X-axis coordinate. Finally, all adjacent surface points are connected into triangular patches, and all patches are stitched together to obtain the reconstructed surface.
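The regularised polynomial fitting described above has a closed-form ridge-regression minimiser; the sketch below (function names and the use of NumPy are illustrative assumptions, not the patent's implementation) shows one way to obtain the coefficients W.

```python
import numpy as np

def fit_poly_coeffs(xs, vs, m=3, lam=0.01):
    """Fit an order-m polynomial to neighbourhood gray values by
    minimising the squared fitting error plus a regularization
    penalty lam * ||W||^2; the minimiser solves the normal
    equations (X^T X + lam I) W = X^T v (ridge regression)."""
    X = np.vander(np.asarray(xs, float), m + 1, increasing=True)
    A = X.T @ X + lam * np.eye(m + 1)
    return np.linalg.solve(A, X.T @ np.asarray(vs, float))

def poly_eval(w, x):
    """Evaluate f(x, W) = sum_i w_i * x^i."""
    return sum(c * x ** i for i, c in enumerate(w))
```

With m = 3 and lam = 0.01, matching the values given above, the reconstructed surface-point coordinate is then obtained by solving f(x, W) = vmed within the voxel neighborhood.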
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium (e.g., ROM/RAM, magnetic disk, optical disc, or memory card) and, when executed, performs the steps of the above method embodiments. The invention therefore also includes a nose environment modeling apparatus for path planning corresponding to the method. As shown in fig. 1, the apparatus includes:
a preprocessing module configured to filter and denoise the CT image;
an axial region growing module configured to extract edge point pairs of the segmentation target along a coordinate-axis direction using an axial region growing method combined with edge detection, calculate the distance between each edge point pair, take the pixel points between the maximum-distance edge point pair as seed points, and grow axially to the target edge, thereby rapidly segmenting important tissues and organs; and
a reconstruction module configured to extract the surface contours of the important tissues and organs as obstacles and map boundaries and to perform three-dimensional surface reconstruction with an improved Marching Cubes algorithm.
The present invention is described in more detail below.
1. Image pre-processing
1.1 Tilt correction
Different tissues of the human or animal body absorb X-rays differently; the three-dimensional gray-level distribution obtained by computed tomography is the CT image. In CT image space, the X axis is conventionally the left-right direction, and the section perpendicular to it, dividing the image into left and right parts, is the sagittal plane; the Y axis is the front-back direction, and the section perpendicular to it, dividing the image into front and back parts, is the coronal plane; the Z axis is the up-down direction, and the section perpendicular to it, dividing the image into upper and lower parts, is the horizontal (transverse) plane. During CT scanning, if the patient's body position is incorrect, the CT image is tilted and left-right asymmetric, which hinders observation by the doctor and affects the localization and segmentation of important tissues and lesion targets. In animal experiments, the animal must first be anesthetized before CT scanning, so tilt readily occurs. A rotation transformation can align the tilted CT image.
With the origin as the rotation center, any voxel in the CT image is rotated about the X axis, the Y axis and the Z axis by angles α, β and γ respectively; the rotation transformation matrix of its position coordinates is:
R(α, β, γ) = Rx(α) · Ry(β) · Rz(γ)    (2.1)

$$
R_x(\alpha)=\begin{bmatrix}1&0&0\\0&\cos\alpha&\sin\alpha\\0&-\sin\alpha&\cos\alpha\end{bmatrix},\quad
R_y(\beta)=\begin{bmatrix}\cos\beta&0&-\sin\beta\\0&1&0\\\sin\beta&0&\cos\beta\end{bmatrix},\quad
R_z(\gamma)=\begin{bmatrix}\cos\gamma&\sin\gamma&0\\-\sin\gamma&\cos\gamma&0\\0&0&1\end{bmatrix}
$$
According to the right-hand rule, when the thumb points in the positive direction of a coordinate axis, the direction of the four fingers is the positive direction of rotation, and the rotation angle is positive. The tilt correction takes the image center as the rotation center: the image center is translated to the origin, and after rotation about the coordinate axes the image is translated back to its original position. Let the center position coordinates of the CT image be Pc = [xc yc zc] and the position coordinates of a voxel be P = [x y z]; the position coordinates after the rotation transformation are:
P′ = (P − Pc) · R(α, β, γ) + Pc    (2.2)
the prior knowledge is combined to know that the CT image characteristics change before and after correction: the left-right inclination can be corrected by rotating around the Z axis, the projection of the corrected cranium on the horizontal plane or the sagittal plane is longest, and the Y-axis coordinate of the foremost point of the edge of the facial skin is minimum; the horizontal height (Z-axis coordinate) of the foremost point of the facial skin edge is obviously increased when the foremost point of the facial skin edge is changed from the chin (or lower lip) to the nose tip (human body) or the upper lip (monkey) after correction; the Y-axis rotation can correct both-side skew, the corrected craniocerebral is bilaterally symmetrical, and the horizontal heights of the leftmost point and the rightmost point of the bone edge are approximately equal. From this, the values of α, β, γ are determined: traversing the possible gamma values to 90-90 degrees for rotary transformation and detecting the foremost point of the facial skin edge, wherein the corresponding angle is the value of gamma when the Y-axis coordinate is minimum; traversing the possible alpha value of-90-0 degrees to perform rotation transformation and detecting the foremost point of the facial skin edge, wherein when the Z-axis coordinate of the foremost point is obviously increased, the corresponding angle is the alpha value; and traversing the possible beta value from-90 degrees to carry out rotation transformation and detecting the leftmost point and the rightmost point of the bone edge, wherein when the Z-axis coordinates are approximately equal, the corresponding angle is the value of beta. And then updating coordinates of all voxel points of the CT image according to the formulas 2.1 and 2.2 to realize automatic inclination correction. 
In practical applications, the rotation angles can also be adjusted interactively while observing the corrected CT image in real time, until the image is left-right symmetric and important tissues such as the optic canal and the lesion area can be observed clearly.
1.2 Filter De-noising
Under the influence of factors such as the external environment and the quality of the imaging equipment, noise arises during computed tomography imaging, reducing the signal-to-noise ratio of the CT image and even covering some of its detail features; this complicates subsequent feature extraction, edge detection and organ segmentation, so the CT image must be filtered and denoised. CT images have many noise sources, including random noise, electronic noise and optical noise, manifested mainly as Gaussian noise (white noise) and impulse noise (salt-and-pepper noise)[75]. Commonly used CT image denoising methods are:
(1) Neighborhood averaging method
The value of each original pixel in the image is replaced by the average of all pixels in its neighborhood. For the CT image f, the spatial filter can be set as a three-dimensional convolution kernel that slides over the image and updates the gray value of every pixel:

g(i, j, k) = (1 / (2N+1)³) · Σ f(i+u, j+v, k+w),  −N ≤ u, v, w ≤ N    (2.3)

where f(i, j, k) is the original pixel value, 2N+1 can be an odd number such as 3 or 5, and the size of the convolution kernel is (2N+1)³.
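A minimal NumPy sketch of this neighborhood averaging follows; zero padding at the image border is an assumption, as the text does not state a boundary rule.

```python
import numpy as np

def neighborhood_average(f, n=1):
    """Replace each voxel with the mean of its (2N+1)^3 neighborhood
    by summing shifted copies of the zero-padded volume."""
    k = 2 * n + 1
    pad = np.pad(f.astype(float), n)  # zero padding (assumption)
    out = np.zeros(f.shape, dtype=float)
    for u in range(k):
        for v in range(k):
            for w in range(k):
                out += pad[u:u + f.shape[0],
                           v:v + f.shape[1],
                           w:w + f.shape[2]]
    return out / k ** 3
```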
(2) Gaussian filtering method
The image f is averaged with weights: the original pixel value is replaced by the weighted average of the gray values of all pixels in the neighborhood. The pixel gray value update formula is:

g(i, j, k) = Σ K(u, v, w) · f(i+u, j+v, k+w),  −N ≤ u, v, w ≤ N    (2.4)

where K(u, v, w) is the value of each element of the weight matrix, a Gaussian convolution kernel whose elements can be obtained from a Gaussian function. For example, the two-dimensional Gaussian function is

G(u, v) = (1 / (2πσ²)) · exp(−(u² + v²) / (2σ²)),

and the element of the weight matrix with subscript (u, v) is

K(u, v) = G(u, v) / Σ G(u, v).

Taking σ = 1.5 and N = 1 yields, approximately, the 3 × 3 two-dimensional weight matrix:

    0.0947  0.1183  0.0947
    0.1183  0.1478  0.1183
    0.0947  0.1183  0.0947
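The Gaussian weight matrix can be generated with a short NumPy sketch; normalisation so that the weights sum to 1 is the usual convention and is assumed here.

```python
import numpy as np

def gaussian_weights_2d(sigma=1.5, n=1):
    """Sample the 2-D Gaussian on a (2N+1) x (2N+1) grid and
    normalise so the weights sum to 1."""
    u = np.arange(-n, n + 1)
    uu, vv = np.meshgrid(u, u, indexing="ij")
    g = np.exp(-(uu ** 2 + vv ** 2) / (2 * sigma ** 2))
    return g / g.sum()
```

For σ = 1.5 and N = 1 this gives roughly 0.148 at the centre, 0.118 at the edge positions and 0.095 at the corners.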
(3) Median filtering method
The value of the original pixel is replaced by the median of the gray values of all pixels in its neighborhood:

g(i, j, k) = Median({f(i+u, j+v, k+w) : −N ≤ u, v, w ≤ N})    (2.5)

where f(i+u, j+v, k+w) are the pixel values in the neighborhood of the original pixel.
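A direct, if slow, sketch of formula (2.5) in Python follows; reflected padding at the border is an assumption.

```python
import numpy as np

def median_filter_3d(f, n=1):
    """Replace each voxel with the median of its (2N+1)^3
    neighborhood (border voxels use reflected padding)."""
    pad = np.pad(f, n, mode="reflect")
    out = np.empty_like(f)
    k = 2 * n + 1
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            for m in range(f.shape[2]):
                out[i, j, m] = np.median(
                    pad[i:i + k, j:j + k, m:m + k])
    return out
```

A single impulse voxel is completely removed, which is why the median filter suits the salt-and-pepper noise mentioned above.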
(4) Adaptive Wiener filtering method
Assuming that both the image and the noise are random variables, an estimate f̂ of the image f not contaminated by noise is sought that minimizes the mean squared error between them; the error metric is

e² = E{(f − f̂)²}.

The minimum of this error function, expressed in the frequency domain, is

F̂(u, v) = [ (1/H(u, v)) · |H(u, v)|² / (|H(u, v)|² + Sη(u, v)/Sf(u, v)) ] · G(u, v),

where H(u, v) is the degradation function, Sη(u, v) is the noise power spectrum, and Sf(u, v) is the power spectrum of the undegraded image. For white noise, the noise power spectral density is constant over the entire frequency domain, and the expression can be approximated as

F̂(u, v) = [ (1/H(u, v)) · |H(u, v)|² / (|H(u, v)|² + K) ] · G(u, v),

where K is a constant added to all terms of |H(u, v)|².
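Since (1/H)·|H|²/(|H|² + K) equals H*/(|H|² + K), the constant-K approximation can be sketched with NumPy FFTs; the function name and the same-shape kernel layout are illustrative assumptions.

```python
import numpy as np

def wiener_restore(g, h, k=0.01):
    """Constant-K Wiener restoration in the frequency domain:
    F_hat = conj(H) / (|H|^2 + K) * G, where g is the degraded
    image and h the degradation kernel (same shape as g)."""
    G = np.fft.fft2(g)
    H = np.fft.fft2(h)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F_hat))
```

With K = 0 and an identity (delta) kernel, the input image is recovered exactly; a positive K trades exact inversion for noise suppression.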
(5) Wavelet de-noising method
A discrete wavelet transform of an image f(x, y) of size M × N yields a low-frequency part and high-frequency parts:

Wφ(j₀, m, n) = (1/√(MN)) · Σx Σy f(x, y) · φ_{j₀,m,n}(x, y)

Wψⁱ(j, m, n) = (1/√(MN)) · Σx Σy f(x, y) · ψⁱ_{j,m,n}(x, y),  i = {H, V, D}

where φ is the two-dimensional scaling function; the wavelet functions ψᴴ, ψⱽ, ψᴰ measure the change in the column, row and diagonal directions respectively; and j₀ is an arbitrary starting scale, usually taken as 0. Noise energy is generally distributed over all wavelet coefficients, while the image signal energy is concentrated on a few; nonlinear thresholding of the wavelet coefficients, e.g. soft thresholding, removes the noise components:

w′ = sign(w) · (|w| − T) if |w| ≥ T;  w′ = 0 if |w| < T.

Finally, the inverse discrete wavelet transform of the processed wavelet coefficients,

f̂(x, y) = (1/√(MN)) · Σm Σn Wφ(j₀, m, n) · φ_{j₀,m,n}(x, y) + (1/√(MN)) · Σ_{i=H,V,D} Σ_{j≥j₀} Σm Σn Wψⁱ(j, m, n) · ψⁱ_{j,m,n}(x, y),

reconstructs the denoised image.
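The soft-thresholding step alone can be sketched as follows (NumPy; the wavelet transform itself is omitted):

```python
import numpy as np

def soft_threshold(w, t):
    """Soft thresholding of wavelet coefficients: shrink every
    coefficient toward zero by t and zero out anything whose
    magnitude is below t."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)
```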
The CT images were denoised with each of the above methods. Each method has advantages and disadvantages. Combining the algorithm principles with analysis of the experimental results: the neighborhood averaging method blurs image details because of its averaging effect; Gaussian filtering is effective at suppressing noise that follows a normal distribution; median filtering eliminates noise to a certain degree while preserving image details; the adaptive Wiener filtering method adjusts its output according to the local variance of the image (the larger the local variance, the stronger the smoothing effect of the filter) and is therefore unsuited to non-stationary signals; wavelet denoising effectively suppresses image noise but produces a ringing effect, handles edge contour information poorly, and is relatively complex to implement. In summary, the median filtering method is used for the denoising preprocessing of the CT images.
2. Axial region growing in conjunction with edge detection
An axial region growing strategy combined with edge detection is proposed: the maximum distance between edge point pairs parallel to a coordinate axis in the image is extracted as the main feature of the segmentation target, the pixel points on the connecting segment of the maximum-distance axial edge point pair are taken as seed points, and growth proceeds from them along the directions normal to that axis to the edge of the segmentation target.
Axial edge point pairs of the segmentation target are defined as follows: if the line connecting two edge points is parallel to a coordinate axis and the pixel points on the connecting segment all lie inside the segmentation target, the two points are an axial edge point pair of the segmentation target. In CT images, the CT values of different tissues differ greatly, as shown in Table 2.1.
Table 2.1: CT values of different tissues (table content not recoverable from the source image).
Therefore, an edge detector similar to a gradient operator such as the Sobel operator can be constructed for detecting the target edge point pairs. For example, when soft tissue is surrounded by bone, the edge points lie at bone-to-soft-tissue and soft-tissue-to-bone edges. Let J(i) = J₋₂ · J₋₁ · J₀ · J₁ · J₂; the response J(i) is 1 if and only if f(i) lies at a soft-tissue/bone edge, and 0 otherwise. In particular, when the image boundary range is exceeded, f(i) is set to 300. When the voxels between two edge points i₁ and i₂ are soft tissue, i.e. −300 < f(i) < 300 for i₁ < i < i₂, the two points are an edge point pair of the soft tissue.
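A simplified sketch of this row-scan idea follows; the run-based detector below illustrates finding soft-tissue edge point pairs bounded by out-of-range voxels, and is not the exact operator of Fig. 2.

```python
import numpy as np

def soft_tissue_edge_pairs(row, lo=-300, hi=300):
    """Scan one image row for soft-tissue edge point pairs: maximal
    runs of voxels with lo < f(i) < hi. Values beyond the image
    boundary are treated as hi (300), as in the text. Returns
    (start, end) index pairs in row coordinates."""
    padded = np.concatenate(([hi], row, [hi]))  # boundary -> 300
    soft = (padded > lo) & (padded < hi)
    pairs, start = [], None
    for i, s in enumerate(soft):
        if s and start is None:
            start = i
        elif not s and start is not None:
            pairs.append((start - 1, i - 2))    # back to row indices
            start = None
    return pairs
```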
For a three-dimensional CT image f(x, y, z) of size M × N × H, segmentation with the axial region growing method proceeds as follows:
(1) automatically position or interactively select the region of interest containing the segmentation target;
(2) select the coordinate axis A1 along which the edge point pair distance is larger, and slice the three-dimensional image into a series of two-dimensional slices along another axis A2;
(3) scan each row (or column) of each two-dimensional slice, detecting the edge point pair with the maximum distance along the A1 direction; if a row has no edge point pair its distance is 0, and if several pairs exist the pair with the largest distance is taken;
(4) form the maximum A1-direction edge point pair distances of all slices into a one-dimensional sequence and find the longest continuous subsequence whose distances exceed a threshold (the two endpoints of the subsequence are the edge point pair with the maximum distance along the A2 direction), realizing segmentation on the A2 axis;
(5) for each slice in the subsequence, take the points on the line connecting the maximum-distance edge point pair as seed points, and grow from the seed points along the positive and negative A3 directions to the target edge, realizing segmentation on the A3 axis; if no edge is detected, level with the edge of the adjacent seed point;
(6) find the point pair with the maximum distance along the A3 axis, take the corresponding points as seed points, and grow along the positive and negative A1 directions to the target edge, realizing segmentation on the A1 axis; if no edge point is detected, level with the edge of the adjacent seed point;
(7) segment the image along the coordinate axes A1 and A3 respectively, analyze the connected regions of the segmentation result on each slice, and remove outliers to finish the segmentation.
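Steps (3) and (4) above can be sketched for one axis as follows. This is a simplified illustration under the same −300/300 soft-tissue assumption; the function names are illustrative and not from the patent:

```python
import numpy as np

def row_max_run_width(row, lo=-300, hi=300):
    """Step (3): width of the widest soft-tissue run (lo < HU < hi) in one
    row; 0 when the row contains no edge point pair."""
    soft = (np.asarray(row) > lo) & (np.asarray(row) < hi)
    best = cur = 0
    for s in soft:
        cur = cur + 1 if s else 0
        best = max(best, cur)
    return best

def longest_rows_above(widths, thresh):
    """Step (4): longest contiguous index range where widths > thresh;
    returns (start, end) inclusive, or None when no width exceeds it."""
    best, start = None, None
    for i, w in enumerate(list(widths) + [0]):   # trailing 0 closes the last run
        if w > thresh and start is None:
            start = i
        elif w <= thresh and start is not None:
            if best is None or i - start > best[1] - best[0] + 1:
                best = (start, i - 1)
            start = None
    return best
```

Applying `row_max_run_width` to every row of every slice and `longest_rows_above` to the per-slice maxima localizes the target along the slicing axis before any seed growing takes place.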
This segmentation method is simple to implement, fast, and effective. Where the contour edge of the segmentation target is not closed, the adjacent contour edge is used in its place, which may introduce segmentation errors; thus, the better the closure of the target contour edge, the better the segmentation result. With this method, important tissues such as the brain tissue, orbits, inferior turbinates, and nasal septum can be segmented rapidly, and their surface contours can then be extracted for nasal environment modeling in path planning of transnasal optic canal decompression surgery.
3. Nasal environment modeling based on surface reconstruction
There are many map modeling methods for path planning environments, mainly including the grid map method, the visibility graph method, and the Voronoi diagram method. The nasal environment is complex and variable; obstacles such as important tissues have irregular shapes and no clear vertices, so graph-based methods such as the visibility graph are unsuitable. Since the three-dimensional CT image and the important-tissue segmentation results are already discretely rasterized in space and intensity, a nasal environment model can be constructed with the grid map method.
The grid map method marks the environment space as impassable obstacle grids and passable grids. It is simple and effective: the more grids, the higher the map precision and the easier it is to find the optimal path, but a large number of grids requires more storage space and may reduce search efficiency. CT image data are huge; for example, a CT image of size 512 × 512 × 100 contains more than twenty million voxels. Therefore, the surface contours of important tissues such as brain tissue are extracted and used as obstacles and map boundaries instead, reducing the number of map grids and improving search efficiency. To further improve map precision, the nasal environment model can be constructed based on surface reconstruction.
The Marching Cubes algorithm is the most commonly used three-dimensional surface reconstruction method. First, all voxel vertices of the CT image are judged and labeled: the gray value of each vertex is obtained, vertices above a preset threshold are marked 1, and the rest are marked 0. Each vertex has two states, so the 8 vertices of a cube give 2^8 = 256 cases. Based on the rotational symmetry and complementary symmetry of the cube (which leave the iso-surface topology unchanged), the 256 iso-surface configurations can be merged into 15.
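The vertex-labeling step can be illustrated by computing the 8-bit configuration index of one cube. This is a generic Marching Cubes detail, not code from the patent; the function name is illustrative:

```python
def cube_index(corner_values, iso):
    """8-bit Marching Cubes configuration index of one voxel cube:
    bit k is set when corner k exceeds the iso-value, giving the
    2**8 = 256 cases mentioned above."""
    idx = 0
    for k, v in enumerate(corner_values):
        if v > iso:
            idx |= 1 << k
    return idx
```

The index is then used to look up which cube edges the iso-surface crosses in the standard 256-entry (15 unique) triangulation table.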
The intersection of the iso-surface with a voxel edge is then computed by linear interpolation:
P = P1 + (Vth − f(P1)) / (f(P2) − f(P1)) · (P2 − P1)
where Vth is the iso-surface value, P1 and P2 are the voxel vertices of the edge intersected by the iso-surface, f(P1) and f(P2) are their gray values, and P is the intersection point.
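The interpolation formula above translates directly into code; a minimal sketch of the standard Marching Cubes edge interpolation, with illustrative names:

```python
def iso_intersection(p1, p2, f1, f2, v_th):
    """Standard Marching Cubes edge interpolation:
    P = P1 + (Vth - f(P1)) / (f(P2) - f(P1)) * (P2 - P1)."""
    t = (v_th - f1) / (f2 - f1)
    return tuple(a + t * (b - a) for a, b in zip(p1, p2))
```

With gray values 0 and 100 at the two ends of a unit edge and an iso-value of 50, the crossing lands at the edge midpoint.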
The Marching Cubes algorithm does not consider neighboring voxels during linear interpolation, and its iso-surface value usually has to be set in advance. The surface is reconstructed adaptively by improving on these problems. For a voxel on the surface contour of an important tissue, the median vmed of the gray values of all pixels in its neighborhood is computed with reference to equation (2.5) and used as its iso-surface value. Polynomial curve fitting over all pixels in the voxel neighborhood replaces linear interpolation; the polynomial function is:
f(x, W) = Σ (j = 0 to m) wj · x^j
where W = [w0, w1, …, wm] is the set of polynomial coefficients, x is the voxel X-axis coordinate, and m is the polynomial order, which may be taken as 3 to reduce the amount of computation. The fitting error function is:
E(W) = Σ (n = 1 to N) [f(xn, W) − vn]² + λ‖W‖²
where vn are the pixel gray values in the voxel neighborhood; for a three-dimensional CT image all pixels adjacent to the voxel (including itself) can be taken, so N = 27; λ is the adjustment factor of the regularization penalty term and can take a small value close to 0, such as 0.01, to prevent the polynomial coefficients from becoming too large.
Since f(x, W) is a linear function of W, E(W) is a convex function whose minimum is attained where its first derivative is 0, so the polynomial coefficients W can be solved by minimizing the error function E(W). Setting f(x, W) = vmed and solving the equation within the voxel neighborhood yields the solution xmed, the X-axis coordinate of the reconstructed surface point; the Y-axis and Z-axis coordinates are solved in the same way. Finally, all adjacent surface points are connected into triangular patches, and all patches are stitched together to obtain the reconstructed surface. Path planning can be completed once the surface information of the obstacles around the nasal cavity is known, without explicitly solving for the passable region, so this method is used to reconstruct the three-dimensional surfaces of all obstacles and obtain the required nasal environment model.
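A hypothetical sketch of the fitting and root-finding just described: the regularized least-squares problem has the closed-form ridge solution W = (XᵀX + λI)⁻¹Xᵀv on a Vandermonde matrix X, and the surface coordinate is the root of f(x, W) = vmed inside the voxel neighborhood. The function names and the use of np.roots are this example's choices, not the patent's:

```python
import numpy as np

def fit_poly_ridge(xs, vs, m=3, lam=0.01):
    """Minimise E(W) = sum_n (f(x_n, W) - v_n)^2 + lam * ||W||^2 in closed
    form (ridge regression on a Vandermonde matrix), where
    f(x, W) = w_0 + w_1*x + ... + w_m*x^m."""
    X = np.vander(np.asarray(xs, dtype=float), m + 1, increasing=True)
    v = np.asarray(vs, dtype=float)
    return np.linalg.solve(X.T @ X + lam * np.eye(m + 1), X.T @ v)

def surface_coord(w, v_med, x_lo, x_hi):
    """Solve f(x, W) = v_med for x inside [x_lo, x_hi]: shift the constant
    term and keep the real polynomial root that falls in the interval."""
    shifted = np.array(w, dtype=float)
    shifted[0] -= v_med
    for r in np.roots(shifted[::-1]):   # np.roots wants highest power first
        if abs(r.imag) < 1e-6 and x_lo <= r.real <= x_hi:
            return r.real
    return None
```

Running the same fit along each axis in turn gives the X-, Y- and Z-coordinates of one reconstructed surface point, matching the per-axis solution described above.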
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention in any way; all simple modifications, equivalent variations and adaptations made to the above embodiment according to the technical spirit of the present invention still fall within the protection scope of the technical solution of the present invention.

Claims (7)

1. A nose environment modeling method for path planning, characterized in that it comprises the following steps:
(1) performing filtering and denoising preprocessing on a CT image;
(2) extracting edge point pairs of a segmentation target along the coordinate axis directions using an axial region growing method combined with edge detection, calculating the distance between the edge point pairs, taking the pixel points between the edge point pair with the maximum distance as seed points, and growing to the target edge along the axial direction to achieve rapid segmentation of important tissues and organs;
(3) extracting the surface contours of the important tissues and organs as obstacles and map boundaries, and performing three-dimensional surface reconstruction with an improved Marching Cubes algorithm.
2. The nose environment modeling method for path planning according to claim 1, characterized in that: in step (1), a tilted CT image is adjusted by rotation transformation; with the origin as the rotation center, any voxel in the CT image is rotated by angles α, β and γ around the X axis, Y axis and Z axis respectively, and the rotation transformation matrix of its position coordinates is:
R(α, β, γ) = Rx(α) · Ry(β) · Rz(γ)   (2.1)
where, under the row-vector convention used in equation (2.2),
Rx(α) = [[1, 0, 0], [0, cosα, sinα], [0, −sinα, cosα]],
Ry(β) = [[cosβ, 0, −sinβ], [0, 1, 0], [sinβ, 0, cosβ]],
Rz(γ) = [[cosγ, sinγ, 0], [−sinγ, cosγ, 0], [0, 0, 1]];
according to the right-hand rule, when the thumb points in the positive direction of a coordinate axis, the direction in which the fingers curl is the positive rotation direction and the rotation angle is positive; tilt correction takes the center of the image as the rotation center: the image center is translated to the origin, the image is rotated around the coordinate axes, and the image is then translated back to its original position; let the coordinates of the CT image center be Pc = [xc yc zc] and the coordinates of a voxel be P = [x y z]; the position coordinates after rotation transformation are:
P′ = (P − Pc) · R(α, β, γ) + Pc   (2.2)
the characteristics of the CT image change before and after correction: left-right tilt is corrected by rotation around the Z axis; after correction the projection of the skull on the horizontal or sagittal plane is longest and the Y-axis coordinate of the foremost point of the facial skin edge is minimal; up-down tilt is corrected by rotation around the X axis; after correction the foremost skin edge shifts from the chin to the nose tip or upper lip, and the horizontal height of the foremost point of the facial skin edge increases markedly; lateral skew is corrected by rotation around the Y axis; after correction the skull is bilaterally symmetric and the horizontal heights of the leftmost and rightmost points of the bone edge are approximately equal; the values of α, β and γ are determined accordingly: traverse possible values of γ from −90° to 90°, performing the rotation transformation and detecting the foremost point of the facial skin edge; the angle at which its Y-axis coordinate is minimal is the value of γ; traverse possible values of α from −90° to 0°, performing the rotation transformation and detecting the foremost point of the facial skin edge; the angle at which its Z-axis coordinate increases markedly is the value of α; traverse possible values of β from −90° to 90°, performing the rotation transformation and detecting the leftmost and rightmost points of the bone edge; the angle at which their Z-axis coordinates are approximately equal is the value of β; the coordinates of all voxel points of the CT image are then updated according to equations (2.1) and (2.2) to achieve automatic tilt correction.
3. The nose environment modeling method for path planning according to claim 2, characterized in that: in step (1), the CT image is denoised by median filtering.
4. The nose environment modeling method for path planning according to claim 3, characterized in that: in step (2), a three-dimensional CT image f(x, y, z) of size M × N × H is segmented by the axial region growing method as follows:
(2.1) automatically positioning or interactively selecting the region of interest containing the segmentation target;
(2.2) selecting the coordinate axis A1 along which the edge point pair distance is larger, and slicing the three-dimensional image into a series of two-dimensional slices along another axis A2;
(2.3) scanning each row or column of each two-dimensional slice, detecting the edge point pair with the maximum distance along the A1 direction; if a row has no edge point pair its distance is 0, and if several pairs exist the pair with the largest distance is taken;
(2.4) forming the maximum A1-direction edge point pair distances of all slices into a one-dimensional sequence and finding the longest continuous subsequence whose distances exceed a threshold, the two endpoints of the subsequence being the edge point pair with the maximum distance along the A2 direction, thereby realizing segmentation on the A2 axis;
(2.5) for each slice in the subsequence, taking the points on the line connecting the maximum-distance edge point pair as seed points, and growing from the seed points along the positive and negative A3 directions to the target edge, thereby realizing segmentation on the A3 axis; if no edge is detected, leveling with the edge of the adjacent seed point;
(2.6) finding the point pair with the maximum distance along the A3 axis, taking the corresponding points as seed points, and growing along the positive and negative A1 directions to the target edge, thereby realizing segmentation on the A1 axis; if no edge point is detected, leveling with the edge of the adjacent seed point;
(2.7) segmenting the image along the coordinate axes A1 and A3 respectively, analyzing the connected regions of the segmentation result on each slice, and removing outliers to finish the segmentation.
5. The nose environment modeling method for path planning according to claim 4, characterized in that: in step (3), for a voxel on the surface contour of an important tissue, the median vmed of the gray values of all pixels in its neighborhood is computed with reference to equation (2.5) and used as its iso-surface value:
g(i,j,k)=Median({f(i+u,j+v,k+w):-N≤u,v,w≤N}) (2.5)
where f(i + u, j + v, k + w) are the pixel values in the neighborhood of the original pixel;
polynomial curve fitting over all pixels in the voxel neighborhood replaces linear interpolation, the polynomial function being:
f(x, W) = Σ (j = 0 to m) wj · x^j
where W = [w0, w1, …, wm] is the set of polynomial coefficients, x is the voxel X-axis coordinate, and m is the polynomial order, taken as 3 to reduce computation; the fitting error function is:
E(W) = Σ (n = 1 to N) [f(xn, W) − vn]² + λ‖W‖²
where vn are the pixel gray values in the voxel neighborhood; for a three-dimensional CT image all pixels adjacent to the voxel (including itself) are taken, so N = 27; λ is the adjustment factor of the regularization penalty term, taken as 0.01.
6. The nose environment modeling method for path planning according to claim 5, characterized in that: in step (3), f(x, W) is a linear function of W, so E(W) is a convex function whose minimum is attained where the first derivative is 0, and the polynomial coefficients W are solved by minimizing the error function E(W); setting f(x, W) = vmed and solving the equation within the voxel neighborhood yields the solution xmed, the X-axis coordinate of the reconstructed surface point, and the Y-axis and Z-axis coordinates are solved in the same way; finally, all adjacent surface points are connected into triangular patches, and all patches are stitched together to obtain the reconstructed surface.
7. A nose environment modeling device for path planning, characterized in that it comprises:
a preprocessing module configured to perform filtering and denoising preprocessing on a CT image;
an axial region growing module configured to extract edge point pairs of a segmentation target along the coordinate axis directions using an axial region growing method combined with edge detection, calculate the distance between the edge point pairs, and grow to the target edge along the axial direction with the pixel points between the edge point pair with the maximum distance as seed points, achieving rapid segmentation of important tissues and organs; and
a reconstruction module configured to extract the surface contours of the important tissues and organs as obstacles and map boundaries, and to perform three-dimensional surface reconstruction with an improved Marching Cubes algorithm.
CN202110809521.1A 2021-07-14 2021-07-14 Nose environment modeling method and device for path planning Pending CN113706687A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110809521.1A CN113706687A (en) 2021-07-14 2021-07-14 Nose environment modeling method and device for path planning

Publications (1)

Publication Number Publication Date
CN113706687A true CN113706687A (en) 2021-11-26

Family

ID=78648870

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110809521.1A Pending CN113706687A (en) 2021-07-14 2021-07-14 Nose environment modeling method and device for path planning

Country Status (1)

Country Link
CN (1) CN113706687A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114299081A (en) * 2021-12-16 2022-04-08 北京朗视仪器股份有限公司 Maxillary sinus CBCT image segmentation method and device, storage medium and electronic equipment
CN114299081B (en) * 2021-12-16 2023-02-17 北京朗视仪器股份有限公司 Maxillary sinus CBCT image segmentation method, maxillary sinus CBCT image segmentation device, maxillary sinus CBCT storage medium and electronic equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination