CN110889850A - CBCT tooth image segmentation method based on central point detection - Google Patents
- Publication number
- CN110889850A (application number CN201911279019.3A)
- Authority
- CN
- China
- Prior art keywords
- teeth
- segmentation
- image
- tooth
- layer
- Prior art date
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS; G06—COMPUTING, CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis; G06T7/10—Segmentation; edge detection; G06T7/11—Region-based segmentation
- G06T7/0002—Inspection of images, e.g. flaw detection; G06T7/0012—Biomedical image inspection
- G06T7/136—Segmentation; edge detection involving thresholding
- G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/30—Subject of image; G06T2207/30004—Biomedical image processing; G06T2207/30036—Dental; teeth
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Quality & Reliability (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a CBCT tooth image segmentation method based on central point detection, comprising the steps of preprocessing; coarse segmentation; dual level set fine segmentation; interlayer iterative segmentation; and output of the three-dimensional tooth structure. The tooth image segmentation algorithm based on center point detection fully exploits the detected center point information to perform coarse segmentation of an initial layer and replaces manual initialization with the coarse segmentation result, realizing fully automatic, expert-guided image segmentation. The detected center point information serves as prior information for the segmentation algorithm; a dual level set segmentation algorithm and optimal-threshold constraint processing are introduced during segmentation, so the three-dimensional tooth structure is obtained more quickly, conveniently and accurately, and the segmentation is more accurate and more robust.
Description
Technical Field
The invention belongs to the technical field of image segmentation, and particularly relates to a CBCT tooth segmentation method based on central point detection.
Background
As society develops, more and more adults seek orthodontic treatment to improve their smile, correct dental bite problems, or correct other problems caused by injury, disease, or prolonged neglect of oral care. Medical imaging has become an indispensable means of acquiring comprehensive and accurate patient data, providing a strong basis for the diagnosis and treatment of oral diseases. Oral CT equipment is a revolutionary tool in dentistry. CBCT has been applied in oral medicine since the late 1990s; it truly reflects the three-dimensional structure of the jaw and face, overcomes the limitations of two-dimensional imaging, allows the relationship between teeth and bone to be evaluated in three dimensions, and helps the orthodontist formulate a more reasonable treatment plan. However, CBCT dental slice images have low brightness, poor contrast, many noise points, indistinct boundaries, and complex, individually diverse structures. In addition, CT is affected by noise, by artifacts caused by intensity non-uniformity, and by different tissues sharing the same intensity, and most images have non-uniform gray levels, so automatic tooth segmentation is very difficult. Effectively segmenting the dental image is therefore a key step in obtaining tooth information.
Currently, the main tooth segmentation approaches are: manual segmentation, which requires multiple participants, takes a long time, and whose measurement precision depends on the operator; adaptive threshold segmentation, which is easily disturbed by invalid information because of noise, blurred boundaries between adjacent teeth, and uneven tooth gray values; morphological segmentation, which often over-segments because it is overly sensitive to edges; and level set segmentation based on active contour models, such as DRLSE and MICO, which demands careful initialization, depends on accurate prior knowledge, and lacks local control of the evolving surface. Owing to the imaging defects of medical images and the particularity of tooth structures, the segmentation process has so far required a certain amount of interactive operation, so introducing shape priors to simplify the segmentation problem is essential. To achieve accurate, automatic segmentation of the ROI, the detected center point information is taken as prior information for the segmentation algorithm, and a tooth image segmentation algorithm based on center point detection is proposed, making segmentation more accurate and more robust. The algorithm fully exploits the detected center point information for coarse segmentation of an initial layer, replaces manual initialization with the coarse segmentation result, and blends the prior information of key regions directly into the image to be segmented, realizing fully automatic, expert-guided image segmentation.
Disclosure of Invention
Aiming at the defects in the prior art, the CBCT tooth segmentation method based on central point detection provided by the invention removes the manual initialization required by most existing tooth segmentation algorithms and at the same time improves tooth segmentation precision.
In order to achieve the above purpose, the invention adopts the technical scheme that:
the scheme provides a CBCT tooth image segmentation method based on central point detection, which comprises the following steps:
s1, preprocessing: acquiring an original tooth image, and calculating the size of the region wrapping the teeth by applying the MIP (maximum intensity projection) algorithm to the original image;
s2, rough segmentation: selecting a proper initial layer according to the size of the region wrapping the teeth, and calculating by using a watershed algorithm to obtain a rough segmentation result of the initial layer of the teeth;
s3, bi-level set fine segmentation: taking the rough segmentation result as initialization, and carrying out fine segmentation processing on the initial tooth layer by using a double-level set DRLSE model to obtain two-dimensional segmentation results of the initial tooth layers of the upper row and the lower row;
s4, interlayer iteration segmentation: performing optimal threshold processing on the two-dimensional segmentation results of the upper and lower rows of tooth initial layers by utilizing interlayer information, and performing upward or downward layer-by-layer iteration by utilizing a double-level set DRLSE model according to the processing results to obtain the two-dimensional segmentation result of each layer of the teeth of the CBCT image;
s5, outputting a three-dimensional tooth structure: and carrying out segmentation processing on the two-dimensional segmentation result of the CBCT image tooth by using the DRLSE model to obtain a three-dimensional segmentation result of the CBCT image tooth, thereby completing the segmentation of the CBCT image tooth.
Further, the step S1 includes the following steps:
s101, obtaining original tooth images, storing the original tooth images in a DICOM (digital imaging and communications in medicine) format, and reading tooth slice images layer by layer;
s102, carrying out piecewise linear transformation processing on the original tooth image, and normalizing its gray levels to [0,255];
s103, respectively projecting the image subjected to gray level normalization processing in the x direction, the y direction and the z direction by using an MIP (maximum intensity projection) algorithm to obtain the size of a region wrapping the teeth;
the expression for projecting the three directions x, y and z of the image respectively is as follows:
xmip(j,k)=max(xmip(j,k),a(i,j,k));
ymip(i,k)=max(ymip(i,k),a(i,j,k));
zmip(i,j)=max(zmip(i,j),a(i,j,k));
where xmip(j,k) denotes the projection of the image in the x direction, ymip(i,k) the projection in the y direction, and zmip(i,j) the projection in the z direction; i ranges from 1 to the image size in the x direction, j from 1 to the image size in the y direction, and k from 1 to the image size in the z direction; a(i,j,k) denotes the gray value of the image at (i,j,k).
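The three projections above can be sketched directly with numpy array maxima. This is a minimal illustration, not the patent's implementation; the function names and the threshold used to crop the tooth region are assumptions.

```python
import numpy as np

def mip_projections(volume):
    """Maximum intensity projections of a 3-D gray volume a(i, j, k)
    along the x, y and z axes (a sketch of step S103)."""
    xmip = volume.max(axis=0)  # xmip(j, k): collapse the x (i) axis
    ymip = volume.max(axis=1)  # ymip(i, k): collapse the y (j) axis
    zmip = volume.max(axis=2)  # zmip(i, j): collapse the z (k) axis
    return xmip, ymip, zmip

def bounding_box(projection, threshold):
    """Bounding box of above-threshold pixels in one projection, used to
    estimate the region wrapping the teeth (the threshold is an assumption)."""
    rows, cols = np.nonzero(projection > threshold)
    return rows.min(), rows.max(), cols.min(), cols.max()
```

Intersecting the three bounding boxes yields the cropped sub-volume (e.g. reducing a full 512-cube to a smaller region), which is what speeds up the later segmentation steps.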
Still further, the step S2 includes the steps of:
s201, respectively selecting initial layers of upper and lower rows of teeth according to the size of the area wrapping the teeth;
s202, respectively detecting central points of the upper row of teeth and the lower row of teeth according to the initial layers of the upper row of teeth and the lower row of teeth, taking the central points as a foreground, and taking a preset threshold value as a background;
s203, marking the foreground and the background inside and outside by using a watershed algorithm, and obtaining rough segmentation results of the upper row of teeth and the lower row of teeth respectively.
Still further, the step S3 includes the steps of:
s301, based on the coarse segmentation result obtained in step S2, performing the two initializations of the dual level set for the upper and lower rows of teeth, following the alternating order of the teeth, to obtain the initialization of the initial layers of the upper and lower rows of teeth;
s302, finely segmenting the initial layers of the upper and lower rows of teeth with the dual level set DRLSE model, thereby obtaining the two-dimensional segmentation results of the initial layers of the upper and lower rows of teeth.
Still further, the step S4 includes the steps of:
s401, performing optimal threshold processing on the two-dimensional segmentation results of the upper and lower tooth initial layers by utilizing interlayer information, and taking the processed result as the initialization of the adjacent layer above or below;
s402, sequentially performing iterative segmentation processing on the upper row of teeth and the lower row of teeth by using a double-level set DRLSE model, so as to obtain a two-dimensional segmentation result of each layer of the teeth of the CBCT image.
Still further, the optimal threshold processing in step S401 fits a Gaussian to the gray-level histogram inside the segmentation curve:
y = A·e^(-(x-μ)²/(2S²))
where xi ∈ [0,255] denotes the pixel gray level, yi the number of pixels of that gray level inside the image curve, A the amplitude, S the curve width, e the natural constant, and μ the mean of the fitted Gaussian.
Still further, the step S402 specifically includes:
using the dual level set DRLSE model, the upper and lower rows of teeth are each segmented layer by layer from the initial layer toward the crown and from the initial layer toward the root, thereby obtaining the two-dimensional segmentation result of the teeth in the CBCT image.
Still further, the two level set evolution curves in each iteration of the segmentation in step S302 and step S402 need to satisfy:
φ1=max(φ1,-φ2)
φ2=max(-φ1,φ2)
where φ1 denotes the first level set function, φ2 the second level set function, and max(·) denotes taking the maximum.
Still further, the expressions used in step S302 and step S402 for segmentation with the dual level set DRLSE model are, for k = 1, 2:
∂φk/∂t = μ div(dp(|∇φk|)∇φk) + λδ(φk)div(g∇φk/|∇φk|) + αgδ(φk)
with dp(s) = p'(s)/s and g = 1/(1 + |∇(Gσ*I)|²),
where ∂φk/∂t denotes the time evolution equation of each level set; μ, λ and α are parameters controlling the evolution; δ denotes the unit impulse function; div(·) denotes divergence; φ1 and φ2 denote the first and second level set functions, ∇φk their gradients and |∇φk| the gradient magnitude of the level set curve; p is the double-well distance-regularization potential of the level set gradient norm s and p'(s) its derivative; ∇ denotes the gradient operator, Gσ a Gaussian function with standard deviation σ, I the entire image, * the convolution operator, and g the edge detection factor.
Still further, step S5 is specifically:
s501, stacking two-dimensional segmentation results of single-layer CBCT tooth images of upper and lower rows of teeth, and taking the stacking results as initialization of a three-dimensional DRLSE model;
s502, carrying out iterative processing on the stacking result by using a three-dimensional DRLSE model to obtain a three-dimensional segmentation result of the CBCT dental image, thereby completing the segmentation of the CBCT dental image.
The invention has the beneficial effects that:
(1) the method takes the detected tooth center points as prior information for the coarse segmentation algorithm, obtains the rough boundary of the teeth in the image from this foreground-region prior, and uses that boundary to initialize the level set model, reducing both the time cost of manual intervention by doctors and the arbitrariness of manual initialization;
(2) by constraining the dual level set functions with interlayer information, the invention effectively reduces the functions' sensitivity to noise, prevents inaccurate segmentation caused by the attraction of adjacent teeth, and can effectively segment all teeth in the whole oral cavity using tooth-region information, yielding a better segmentation result.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a CBCT image of a tooth according to the present embodiment.
FIG. 3 is a diagram illustrating the rough segmentation result in this embodiment.
Fig. 4 is a graph showing the result of the bi-level set single layer segmentation in this embodiment.
Fig. 5 is a schematic view of the tooth division process in this embodiment.
Fig. 6 is a schematic diagram of gaussian fitting of the internal gray level of the curve in the present embodiment.
Detailed Description
The following description of the embodiments of the present invention is provided to help those skilled in the art understand the invention, but it should be understood that the invention is not limited to the scope of these embodiments. For those skilled in the art, various changes that remain within the spirit and scope of the invention as defined by the appended claims are apparent, and all inventions and creations that make use of the inventive concept are protected.
Examples
Most existing tooth segmentation algorithms require manual initialization, and tooth segmentation precision needs to be improved. The invention provides an approach that effectively raises the degree of automation: a dual level set and constraint processing are introduced throughout the pipeline that outputs the three-dimensional tooth structure, so the three-dimensional tooth result is obtained more quickly, conveniently and accurately. Taking the detected center point information as prior information for the segmentation algorithm, a tooth image segmentation algorithm based on center point detection is proposed, making segmentation more accurate and more robust. The algorithm fully exploits the detected center point information for coarse segmentation of an initial layer, replaces manual initialization with the coarse segmentation result, and blends the prior information of key regions directly into the image to be segmented, realizing fully automatic, expert-guided image segmentation.
As shown in fig. 1, the invention discloses a CBCT dental image segmentation method based on central point detection, which is implemented as follows:
s1, preprocessing: acquiring a tooth original image, and calculating the size of a region wrapping the tooth by using an MIP algorithm according to the tooth original image, wherein the implementation method comprises the following steps:
s101, obtaining original tooth images, storing the original tooth images in a DICOM (digital imaging and communications in medicine) format, and reading tooth slice images layer by layer;
s102, carrying out piecewise linear transformation processing on the original tooth image, and normalizing its gray levels to [0,255];
s103, projecting the image subjected to the gray normalization processing in the x direction, the y direction and the z direction respectively by using an MIP projection algorithm to obtain the size of the region wrapping the teeth.
In this embodiment, the original image is first acquired, stored in standard DICOM format, and the tooth slice images are read layer by layer. The gray-level histogram of the original image is computed, the image gray levels are normalized to [0,255] by piecewise linear transformation, and the MIP algorithm is then applied to obtain the size of the region wrapping the teeth; this reduces the size of the image to be processed and speeds up segmentation.
In this embodiment, as shown in fig. 2, the gray histogram of the original image obtained in step S1 spans a large range of approximately 3000 gray levels; for convenience of processing, the gray scale is converted to [0,255]. Because a full single-layer image is large and slows processing, the MIP projection algorithm is applied in the x, y and z directions, reducing the original image (e.g. 512 × 512 × 512) to the size of the region wrapping the teeth (e.g. 250 × 300 × 280). Maximum intensity projection (MIP), as the name implies, projects the maximum value: the maximum gray level along each ray through the volume is taken as the output. For the three directions x, y and z:
xmip(j,k)=max(xmip(j,k),a(i,j,k));
ymip(i,k)=max(ymip(i,k),a(i,j,k));
zmip(i,j)=max(zmip(i,j),a(i,j,k));
where xmip(j,k) denotes the projection of the image in the x direction, ymip(i,k) the projection in the y direction, and zmip(i,j) the projection in the z direction; i ranges from 1 to the image size in the x direction, j from 1 to the image size in the y direction, and k from 1 to the image size in the z direction; a(i,j,k) denotes the gray value of the image at (i,j,k).
S2, rough segmentation: selecting a proper initial layer according to the size of the region wrapping the teeth, and calculating by using a watershed algorithm to obtain a rough segmentation result of the initial layer of the teeth, wherein the realization method comprises the following steps:
s201, respectively selecting initial layers of upper and lower rows of teeth according to the size of the area wrapping the teeth;
s202, respectively detecting central points of the upper row of teeth and the lower row of teeth according to the initial layers of the upper row of teeth and the lower row of teeth, taking the central points as a foreground, and taking a preset threshold value as a background;
s203, marking the foreground and the background inside and outside by using a watershed algorithm, and obtaining rough segmentation results of the upper row of teeth and the lower row of teeth respectively.
In this embodiment, a suitable initial layer is selected first; the center points detected on the initial layer are used as the foreground and a threshold as the background, and the marker-controlled watershed algorithm yields the coarse segmentation of the initial layer, ensuring a good initialization in which the teeth are not stuck together.
In this embodiment, there is considerable adhesion between teeth in step S2. To reduce the possibility of tooth adhesion, the layer roughly midway between the crown layer and the layer where periodontal tissue appears is selected as the initial layer whenever possible, since the initial layer should ensure as far as possible that the (up to 16) teeth in a single human layer are not stuck to one another. Therefore, before fine segmentation, the center points detected on the initial layer are used as the foreground and a threshold as the background, and the marker-controlled watershed algorithm produces the coarse segmentation of the initial layer, giving a good initialization without tooth adhesion. The watershed transform computes the catchment basins of the input image; the boundary points between basins are the watershed lines, which correspond to the maxima of the input image. Therefore, to obtain the edge information of an image, a gradient image is usually taken as the input, namely:
g(x,y)=grad(f(x,y))
where f(x,y) denotes the original image and grad(·) the gradient operation. Because noise and other irregularities in the gradient cause over-segmentation, markers are introduced: the foreground and background are labeled inside and outside, respectively. As shown in fig. 3, a good rough tooth segmentation is obtained.
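The marker-controlled watershed on a gradient image can be sketched with SciPy alone. This is a minimal illustration under assumptions: the function name, the list of detected center points, and the background threshold are hypothetical inputs, and `scipy.ndimage.watershed_ift` stands in for whichever watershed implementation the patent uses.

```python
import numpy as np
from scipy import ndimage

def rough_segmentation(slice_img, centers, bg_threshold):
    """Marker-controlled watershed on one CBCT slice (sketch of S201-S203).
    `centers` is a list of (row, col) detected tooth center points,
    `bg_threshold` the preset background gray threshold."""
    # g(x, y) = grad(f(x, y)): Sobel gradient magnitude as the input image
    gx = ndimage.sobel(slice_img, axis=0)
    gy = ndimage.sobel(slice_img, axis=1)
    gradient = np.hypot(gx, gy)
    gradient = (255 * gradient / max(gradient.max(), 1e-9)).astype(np.uint8)
    # markers: label 1 = background (below threshold), labels 2.. = tooth centers
    markers = np.zeros(slice_img.shape, dtype=np.int16)
    markers[slice_img < bg_threshold] = 1
    for label, (r, c) in enumerate(centers, start=2):
        markers[r, c] = label
    return ndimage.watershed_ift(gradient, markers)
```

Each detected center floods its own basin, so every tooth receives a distinct label and adjacent teeth are kept apart in the coarse result.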
S3, bi-level set fine segmentation: as shown in fig. 4, the rough segmentation result is used as initialization, and the dual level set DRLSE model is used to perform fine segmentation on the initial tooth layer to obtain two-dimensional segmentation results of the initial tooth layers in the upper and lower rows, which is implemented as follows:
s301, based on the coarse segmentation result obtained in step S2, performing the two initializations of the dual level set for the upper and lower rows of teeth, following the alternating order of the teeth, to obtain the initialization of the initial layers of the upper and lower rows of teeth;
s302, finely segmenting the initial layers of the upper and lower rows of teeth with the dual level set DRLSE model, thereby obtaining the two-dimensional segmentation results of the initial layers of the upper and lower rows of teeth.
In this embodiment, the DRLSE model yields the tooth segmentation of the initial layer, and interlayer information is exploited by using the result of the previous layer as the initialization of the next layer, iterating the segmentation layer by layer. Because a single level set cannot properly separate the roots of the same molar, the optimal threshold method is applied to the previous layer's segmentation result during interlayer iteration for optimization. Moreover, segmenting each tooth with a single level set may let the contour of one tooth be attracted by an adjacent, similar tooth; to guarantee single-tooth segmentation precision, a dual level set method is adopted, and the two level sets are intersection-constrained in every iteration, preventing the two curves from crossing or merging into one boundary during evolution and improving the segmentation result. The level set functions are solved iteratively to obtain the segmentation of each layer of the upper and lower rows of teeth.
In this embodiment, the main idea of step S3 is to select suitable initial layers for the upper and lower rows of teeth. The lower row is segmented from the initial layer toward the roots and then from the initial layer toward the crowns; the upper row is segmented from the initial layer toward the crowns and then from the initial layer toward the roots (the segmentation process is illustrated in fig. 5). Finally, the segmentation results of all layers are reconstructed in three dimensions.
The fine segmentation uses the dual level set DRLSE model, which for k = 1, 2 is:
∂φk/∂t = μ div(dp(|∇φk|)∇φk) + λδ(φk)div(g∇φk/|∇φk|) + αgδ(φk)
with dp(s) = p'(s)/s and g = 1/(1 + |∇(Gσ*I)|²),
where ∂φk/∂t denotes the time evolution equation of each level set; μ, λ and α are parameters controlling the evolution; δ denotes the unit impulse function; div(·) denotes divergence; φ1 and φ2 denote the first and second level set functions, ∇φk their gradients and |∇φk| the gradient magnitude of the level set curve; p is the double-well distance-regularization potential of the level set gradient norm s and p'(s) its derivative; ∇ denotes the gradient operator, Gσ a Gaussian function with standard deviation σ, I the entire image, * the convolution operator, and g the edge detection factor.
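The edge detection factor g mentioned here is conventionally computed as g = 1/(1 + |∇(Gσ*I)|²); the following sketch assumes that conventional form and is not the patent's exact implementation.

```python
import numpy as np
from scipy import ndimage

def edge_indicator(image, sigma=1.5):
    """Edge detection factor g = 1 / (1 + |grad(G_sigma * I)|^2):
    close to 1 in flat regions, close to 0 on strong tooth boundaries,
    so the level set evolution slows down at edges."""
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma)
    gx = ndimage.sobel(smoothed, axis=0)
    gy = ndimage.sobel(smoothed, axis=1)
    return 1.0 / (1.0 + gx ** 2 + gy ** 2)
```

The Gaussian pre-smoothing (standard deviation σ, an assumed value here) is what makes g robust to the noise typical of CBCT slices.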
In this embodiment, the two level set evolution curves must satisfy:
φ1=max(φ1,-φ2)
φ2=max(-φ1,φ2)
where φ1 denotes the first level set function, φ2 the second, and max(·) denotes taking the maximum. This guarantees mutual exclusion of the two level set functions during their iterative evolution and thus separates adjacent teeth; segmenting the 16 teeth of each layer of the whole oral cavity in a single/double alternating pattern yields the two-dimensional segmentation result of the single-layer CBCT tooth image.
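The mutual-exclusion constraint above is a pointwise maximum, applied after every evolution step. A minimal sketch (assuming the usual sign convention that φ < 0 marks the interior of a contour; the function name is hypothetical):

```python
import numpy as np

def enforce_exclusion(phi1, phi2):
    """Apply phi1 = max(phi1, -phi2) and phi2 = max(-phi1, phi2) after each
    iteration, so the interiors (phi < 0) of the two level sets can never
    overlap: a point with phi1 < 0 forces phi2 >= 0 there, and vice versa."""
    new_phi1 = np.maximum(phi1, -phi2)
    new_phi2 = np.maximum(-phi1, phi2)
    return new_phi1, new_phi2
```

After the update, no pixel can be interior to both level sets, which is what stops two evolving tooth contours from crossing or merging into one boundary.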
Interlayer initialization threshold processing: as shown in fig. 6, the gray levels inside the curve are fitted with a Gaussian, i.e. the pixel data (xi, yi) (i = 1, 2, 3, …) inside the curve are fitted to the function
y = A·e^(-(x-μ)²/(2S²))
where xi ∈ [0,255] denotes the pixel gray level, yi the number of pixels of that gray level inside the image curve, A the amplitude, S the curve width, e the natural constant, and μ the mean of the fitted Gaussian.
In this embodiment, Gaussian fitting is performed on the gray-level distribution inside the previous layer's fine segmentation curve, and T = μ - 3σ is selected according to the 3σ criterion as the interior threshold for the current layer's result. This improves segmentation accuracy, reduces interference from surrounding regions, and prevents the previous layer's result from being attracted by adjacent teeth during level set iteration.
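The fit-then-threshold step can be sketched with `scipy.optimize.curve_fit`. This is a minimal illustration assuming the Gaussian form stated above; the function names and the initial-guess heuristic are assumptions, not the patent's code.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, A, mu, S):
    """Assumed fit form: y = A * exp(-(x - mu)^2 / (2 * S^2))."""
    return A * np.exp(-(x - mu) ** 2 / (2.0 * S ** 2))

def interior_threshold(gray_levels, counts):
    """Fit the in-curve gray histogram (x_i, y_i) with a Gaussian and
    return T = mu - 3*sigma as the next layer's interior threshold."""
    # crude but serviceable initial guess: peak height, peak location, width
    p0 = (counts.max(), gray_levels[np.argmax(counts)], 10.0)
    (A, mu, S), _ = curve_fit(gaussian, gray_levels, counts, p0=p0)
    return mu - 3.0 * abs(S)
```

By the 3σ criterion, pixels brighter than T almost certainly belong to the tooth interior, so thresholding the previous layer's result at T gives a clean initialization for the next layer.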
S4, interlayer iteration segmentation: performing optimal threshold processing on the two-dimensional segmentation results of the initial layers of the upper row and the lower row of teeth by utilizing interlayer information, and performing upward or downward layer-by-layer iteration by utilizing a double-level set DRLSE model according to the processing result to obtain the two-dimensional segmentation result of each layer of the teeth of the CBCT image, wherein the implementation method comprises the following steps:
s401, performing optimal threshold processing on the two-dimensional segmentation results of the upper and lower tooth initial layers by utilizing interlayer information, and taking the processed result as the initialization of the adjacent layer above or below;
s402, sequentially performing iterative segmentation processing on the upper row of teeth and the lower row of teeth by using a double-level set DRLSE model so as to obtain a two-dimensional segmentation result of each layer of the teeth of the CBCT image;
the expression used for the optimal threshold processing in step S401 is
y = A·e^(-(x-μ)²/(2S²))
where xi ∈ [0,255] denotes the pixel gray level, yi the number of pixels of that gray level inside the image curve, A the amplitude, S the curve width, e the natural constant, and μ the mean of the fitted Gaussian; the threshold is then selected according to the 3σ criterion, where σ is the standard deviation;
the step S402 specifically includes:
using the dual level set DRLSE model, the upper and lower rows of teeth are each segmented layer by layer from the initial layer toward the crown and from the initial layer toward the root, thereby obtaining the two-dimensional segmentation result of the teeth in the CBCT image.
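The two-direction, layer-by-layer iteration order can be sketched as a driver loop. This is only a control-flow illustration: `segment_layer(index, init)` is a hypothetical callback standing in for the dual level set DRLSE fine segmentation plus threshold processing, and the layer indices are assumptions.

```python
def interlayer_iteration(initial_layer, crown_layer, root_layer, segment_layer):
    """Sketch of S402: from the initial slice, iterate layer by layer toward
    the crown and then toward the root; each new layer is initialized with
    the (thresholded) result of the previous one."""
    results = {}
    prev = results[initial_layer] = segment_layer(initial_layer, None)
    step = 1 if crown_layer >= initial_layer else -1
    for k in range(initial_layer + step, crown_layer + step, step):
        prev = results[k] = segment_layer(k, prev)   # toward the crown
    prev = results[initial_layer]
    for k in range(initial_layer - step, root_layer - step, -step):
        prev = results[k] = segment_layer(k, prev)   # toward the root
    return results
```

Running this once for the upper row and once for the lower row (with their respective initial layers) covers every slice of the CBCT volume exactly once.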
S5, outputting the three-dimensional tooth structure: performing segmentation processing on the two-dimensional tooth segmentation results of the CBCT image by using the DRLSE model to obtain the three-dimensional tooth segmentation result of the CBCT image, thereby completing the segmentation of the teeth in the CBCT image; the implementation method comprises the following steps:
S501, stacking the two-dimensional segmentation results of the single-layer CBCT tooth images of the upper and lower rows of teeth, and taking the stacked result as the initialization of the three-dimensional DRLSE model;
S502, iteratively processing the stacked result by using the three-dimensional DRLSE model to obtain the three-dimensional segmentation result of the CBCT dental image, thereby completing the segmentation of the CBCT dental image.
In this embodiment, the obtained single-layer segmentation results are used as the initialization of the three-dimensional DRLSE model, and the three-dimensional level set function is solved iteratively to obtain an accurate three-dimensional segmentation result.
In this embodiment, step S5 stacks the two-dimensional results obtained in step S4 as the initialization of the three-dimensional DRLSE model and uses that model for the final accurate three-dimensional segmentation. This optimizes the segmentation result and fills the small holes and gaps left by stacking the two-dimensional results, making the final three-dimensional tooth segmentation more accurate.
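A minimal sketch of the S5 initialization described above: the per-slice 2D masks are stacked into a volume and converted to a signed binary level set initialization (negative inside, positive outside), a common way to initialize DRLSE; the constant c = 2 is an assumption:

```python
import numpy as np

def init_3d_level_set(slice_masks, c=2.0):
    """Stack per-slice 2D masks and convert to a signed binary initialization
    for a 3D level set (negative inside the teeth, positive outside).
    The constant c = 2 is an illustrative assumption."""
    volume = np.stack(slice_masks, axis=0)          # shape: (layers, H, W)
    return np.where(volume, -c, c).astype(float)

masks = [np.zeros((8, 8), dtype=bool) for _ in range(4)]
for m in masks:
    m[2:6, 2:6] = True                              # toy tooth cross-section
phi0 = init_3d_level_set(masks)                     # ready for 3D DRLSE iteration
```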
Claims (10)
1. A CBCT tooth image segmentation method based on central point detection is characterized by comprising the following steps:
s1, preprocessing: acquiring an original image of the teeth, and calculating the size of the region wrapping the teeth by using the MIP (maximum intensity projection) algorithm on the original tooth image;
s2, rough segmentation: selecting a proper initial layer according to the size of the region wrapping the teeth, and calculating by using a watershed algorithm to obtain a rough segmentation result of the initial layer of the teeth;
s3, bi-level set fine segmentation: taking the rough segmentation result as initialization, and carrying out fine segmentation processing on the initial tooth layer by using a double-level set DRLSE model to obtain two-dimensional segmentation results of the initial tooth layers of the upper row and the lower row;
s4, interlayer iteration segmentation: performing optimal threshold processing on the two-dimensional segmentation results of the upper and lower rows of tooth initial layers by utilizing interlayer information, and performing upward or downward layer-by-layer iteration by utilizing a double-level set DRLSE model according to the processing results to obtain the two-dimensional segmentation result of each layer of the teeth of the CBCT image;
s5, outputting a three-dimensional tooth structure: and carrying out segmentation processing on the two-dimensional segmentation result of the CBCT image tooth by using the DRLSE model to obtain a three-dimensional segmentation result of the CBCT image tooth, thereby completing the segmentation of the CBCT image tooth.
2. The CBCT dental image segmentation method based on center point detection as claimed in claim 1, wherein the step S1 includes the steps of:
S101, obtaining the original tooth images, storing them in DICOM (Digital Imaging and Communications in Medicine) format, and reading the tooth slice images layer by layer;
S102, performing piecewise linear transformation on the original tooth image and normalizing its gray levels to [0, 255];
S103, projecting the gray-normalized image in the x, y and z directions respectively by using the MIP (maximum intensity projection) algorithm to obtain the size of the region wrapping the teeth;
the expression for projecting the three directions x, y and z of the image respectively is as follows:
xmip(j,k)=max(xmip(j,k),a(i,j,k));
ymip(i,k)=max(ymip(i,k),a(i,j,k));
zmip(i,j)=max(zmip(i,j),a(i,j,k));
where xmip(j, k) denotes the projection of the image in the x direction, ymip(i, k) denotes the projection of the image in the y direction, zmip(i, j) denotes the projection of the image in the z direction, i ranges from 1 to the image size in the x direction, j ranges from 1 to the image size in the y direction, k ranges from 1 to the image size in the z direction, and a(i, j, k) denotes the gray level of the image at position (i, j, k).
3. The CBCT dental image segmentation method based on center point detection as claimed in claim 1, wherein the step S2 includes the steps of:
S201, selecting the initial layers of the upper and lower rows of teeth respectively according to the size of the region wrapping the teeth;
S202, detecting the center points of the upper and lower rows of teeth in their respective initial layers, taking the center points as the foreground and the region selected by a preset threshold as the background;
S203, applying the watershed algorithm with the foreground and background as internal and external markers, obtaining the rough segmentation results of the upper and lower rows of teeth respectively.
4. The CBCT dental image segmentation method based on center point detection as claimed in claim 1, wherein the step S3 includes the steps of:
S301, according to the rough segmentation results obtained in step S2, performing the two double-level set initializations on the upper and lower rows of teeth according to the alternating order of the teeth, obtaining the initializations of the initial layers of the upper and lower rows of teeth;
S302, finely segmenting the initial layers of the upper and lower rows of teeth respectively by using the double-level set DRLSE model, thereby obtaining the two-dimensional segmentation results of the initial layers of the upper and lower rows of teeth.
5. The CBCT dental image segmentation method based on center point detection as claimed in claim 4, wherein the step S4 includes the steps of:
S401, performing optimal threshold processing on the two-dimensional segmentation results of the initial layers of the upper and lower rows of teeth by utilizing inter-layer information, and taking the processed results as the initialization of the adjacent upper or lower layer;
S402, sequentially performing iterative segmentation of the upper and lower rows of teeth by using the double-level set DRLSE model, thereby obtaining the two-dimensional segmentation result of each tooth layer of the CBCT image.
6. The CBCT dental image segmentation method based on center point detection as claimed in claim 5, wherein the optimal threshold processing in step S401 is expressed as follows:
y_i = A·e^(-(x_i - μ)² / (2S²))
where x_i denotes a pixel gray level in [0, 255], y_i denotes the number of pixels at that gray level in the image histogram, A denotes the amplitude, S denotes the curve width, e denotes the natural constant, and μ denotes a parameter controlling the evolution.
7. The CBCT dental image segmentation method based on central point detection as claimed in claim 5, wherein the step S402 is specifically as follows:
using the double-level set DRLSE model, segmenting the upper and lower rows of teeth layer by layer from the initial layer toward the crown and, separately, layer by layer from the initial layer toward the root, thereby obtaining the two-dimensional segmentation results of the teeth in the CBCT image.
8. The CBCT dental image segmentation method based on center point detection as claimed in claim 6, wherein the two evolution curves of the double-level set satisfy the following condition during each iterative segmentation in steps S302 and S402:
φ1=max(φ1,-φ2)
φ2=max(-φ1,φ2)
wherein φ1 denotes the first level set curve, φ2 denotes the second level set curve, and max(·) denotes taking the maximum value.
9. The CBCT dental image segmentation method based on center point detection as claimed in claim 8, wherein the segmentation of step S302 and step S402 using the double-level set DRLSE model is expressed as follows:
∂φ1/∂t = μ div(d_p(|∇φ1|)∇φ1) + λδ(φ1) div(g ∇φ1/|∇φ1|) + αgδ(φ1)
∂φ2/∂t = μ div(d_p(|∇φ2|)∇φ2) + λδ(φ2) div(g ∇φ2/|∇φ2|) + αgδ(φ2)
g = 1/(1 + |∇(G_σ * I)|²)
where ∂φ1/∂t and ∂φ2/∂t denote the time evolution equations; μ, λ and α denote parameters controlling the evolution; δ denotes the unit impulse function; div(·) denotes the divergence; φ1 denotes the first level set curve and φ2 the second level set curve; ∇φ1 and ∇φ2 denote the corresponding level set gradients; d_p(s) = p′(s)/s, where p′(s) denotes the derivative of the potential function p, s takes values in [0, 1], and p is the double-well potential p₂(·) of the level set gradient norm defined by the distance regularization term; |∇φ| denotes the gradient magnitude of the level set curve; ∇ denotes the gradient operator; G_σ denotes a Gaussian function with standard deviation σ; I denotes the whole image; * denotes the convolution operator; and g denotes the edge detection factor.
10. The CBCT dental image segmentation method based on central point detection as claimed in claim 1, wherein the step S5 is specifically as follows:
S501, stacking the two-dimensional segmentation results of the single-layer CBCT tooth images of the upper and lower rows of teeth, and taking the stacked result as the initialization of the three-dimensional DRLSE model;
S502, iteratively processing the stacked result by using the three-dimensional DRLSE model to obtain the three-dimensional segmentation result of the CBCT dental image, thereby completing the segmentation of the CBCT dental image.
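The projection formulas of claim 2 amount to maximum-intensity reductions of the volume along each axis; a minimal sketch, where the synthetic volume and values are illustrative:

```python
import numpy as np

def mip_projections(volume):
    """volume: 3D array a(i, j, k); maximum-intensity projection along each axis."""
    xmip = volume.max(axis=0)   # xmip(j, k) = max over i of a(i, j, k)
    ymip = volume.max(axis=1)   # ymip(i, k) = max over j of a(i, j, k)
    zmip = volume.max(axis=2)   # zmip(i, j) = max over k of a(i, j, k)
    return xmip, ymip, zmip

vol = np.zeros((10, 12, 14))
vol[2:5, 3:8, 4:9] = 200.0      # synthetic bright "tooth" block
xm, ym, zm = mip_projections(vol)
# Thresholding each projection then bounds the region wrapping the teeth.
```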
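The marker-based watershed rough segmentation of claim 3 (center point as foreground marker, preset threshold as background marker) can be sketched with SciPy's IFT watershed; the synthetic blob, the threshold value and the `rough_segment` helper are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import watershed_ift

def rough_segment(gray, centers, bg_thresh=50):
    """Watershed rough segmentation seeded by tooth center points (foreground)
    and a preset gray threshold (background)."""
    markers = np.zeros(gray.shape, dtype=np.int16)
    markers[gray < bg_thresh] = 1              # background marker from threshold
    for r, c in centers:
        markers[r, c] = 2                      # foreground marker at tooth center
    inverted = (255 - gray).astype(np.uint8)   # teeth become dark basins
    labels = watershed_ift(inverted, markers)  # flood both markers
    return labels == 2                         # foreground (tooth) region

# Synthetic bright blob standing in for one tooth on the initial layer:
yy, xx = np.mgrid[0:32, 0:32]
gray = (200 * np.exp(-((yy - 16) ** 2 + (xx - 16) ** 2) / 60.0)).astype(np.uint8)
seg = rough_segment(gray, centers=[(16, 16)])
```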
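The coupling condition of claim 8 keeps the two evolving level set contours (e.g. adjacent teeth) from overlapping: after each iteration, each function is clipped against the negation of the other. A toy numeric check, where the array values are illustrative:

```python
import numpy as np

def enforce_exclusion(phi1, phi2):
    # Simultaneous update per claim 8:
    #   phi1 <- max(phi1, -phi2),  phi2 <- max(-phi1, phi2)
    phi1_new = np.maximum(phi1, -phi2)
    phi2_new = np.maximum(-phi1, phi2)
    return phi1_new, phi2_new

# Negative = inside a contour. Index 1 starts inside both contours (overlap):
phi1 = np.array([-1.0, -1.0, 1.0])
phi2 = np.array([1.0, -1.0, -1.0])
p1, p2 = enforce_exclusion(phi1, phi2)
# After the update, no point remains inside both contours at once.
```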
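The edge detection factor of claim 9, g = 1/(1 + |∇(G_σ * I)|²), can be sketched with SciPy, whose Gaussian gradient magnitude computes |∇(G_σ * I)| directly; σ = 1.5 and the synthetic step image are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

def edge_indicator(image, sigma=1.5):
    # g = 1 / (1 + |grad(G_sigma * I)|^2): near 0 on edges, near 1 in flat regions,
    # so the DRLSE curve evolution slows down at tooth boundaries.
    grad_mag = gaussian_gradient_magnitude(image.astype(float), sigma)
    return 1.0 / (1.0 + grad_mag ** 2)

img = np.zeros((32, 32))
img[:, 16:] = 255.0             # vertical step edge at column 16
g = edge_indicator(img)
```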
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911279019.3A CN110889850B (en) | 2019-12-13 | 2019-12-13 | CBCT tooth image segmentation method based on central point detection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110889850A true CN110889850A (en) | 2020-03-17 |
CN110889850B CN110889850B (en) | 2022-07-22 |
Family
ID=69751749
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911279019.3A Active CN110889850B (en) | 2019-12-13 | 2019-12-13 | CBCT tooth image segmentation method based on central point detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110889850B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111968120A (en) * | 2020-07-15 | 2020-11-20 | 电子科技大学 | Tooth CT image segmentation method for 3D multi-feature fusion |
CN113393470A (en) * | 2021-05-12 | 2021-09-14 | 电子科技大学 | Full-automatic tooth segmentation method |
CN113436734A (en) * | 2020-03-23 | 2021-09-24 | 北京好啦科技有限公司 | Tooth health assessment method and device based on face structure positioning and storage medium |
CN114241173A (en) * | 2021-12-09 | 2022-03-25 | 电子科技大学 | Tooth CBCT image three-dimensional segmentation method and system |
CN114757960A (en) * | 2022-06-15 | 2022-07-15 | 汉斯夫(杭州)医学科技有限公司 | Tooth segmentation and reconstruction method based on CBCT image and storage medium |
CN117876578A (en) * | 2023-12-15 | 2024-04-12 | 北京大学口腔医学院 | Orthodontic tooth arrangement method based on crown root fusion |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105488849A (en) * | 2015-11-24 | 2016-04-13 | 嘉兴学院 | Hybrid level set based three-dimensional tooth modeling method |
CN107564023A (en) * | 2017-08-02 | 2018-01-09 | 杭州美齐科技有限公司 | A kind of CBCT teeth segmentation and modeling algorithm |
CN108932716A (en) * | 2017-05-26 | 2018-12-04 | 无锡时代天使医疗器械科技有限公司 | Image partition method for dental imaging |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105488849A (en) * | 2015-11-24 | 2016-04-13 | 嘉兴学院 | Hybrid level set based three-dimensional tooth modeling method |
CN108932716A (en) * | 2017-05-26 | 2018-12-04 | 无锡时代天使医疗器械科技有限公司 | Image partition method for dental imaging |
CN107564023A (en) * | 2017-08-02 | 2018-01-09 | 杭州美齐科技有限公司 | A kind of CBCT teeth segmentation and modeling algorithm |
Non-Patent Citations (4)
Title |
---|
HUI GAO et al.: "Individual tooth segmentation from CT images using level set method with shape and intensity prior", 《PATTERN RECOGNITION》 *
LI CHUNMING et al.: "Distance Regularized Level Set Evolution and Its Application to Image Segmentation", 《IEEE TRANSACTIONS ON IMAGE PROCESSING》 *
XIE SHIPENG et al.: "A level set method for cupping artifact correction in cone-beam CT", 《MEDICAL PHYSICS》 *
ZHANG DONGXIA: "Research on contour segmentation algorithms for individual teeth in oral CT images", 《China Master's Theses Full-text Database, Medicine and Health Sciences》 *
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113436734A (en) * | 2020-03-23 | 2021-09-24 | 北京好啦科技有限公司 | Tooth health assessment method and device based on face structure positioning and storage medium |
CN113436734B (en) * | 2020-03-23 | 2024-03-05 | 北京好啦科技有限公司 | Tooth health assessment method, equipment and storage medium based on face structure positioning |
CN111968120A (en) * | 2020-07-15 | 2020-11-20 | 电子科技大学 | Tooth CT image segmentation method for 3D multi-feature fusion |
CN111968120B (en) * | 2020-07-15 | 2022-03-15 | 电子科技大学 | Tooth CT image segmentation method for 3D multi-feature fusion |
CN113393470A (en) * | 2021-05-12 | 2021-09-14 | 电子科技大学 | Full-automatic tooth segmentation method |
CN114241173A (en) * | 2021-12-09 | 2022-03-25 | 电子科技大学 | Tooth CBCT image three-dimensional segmentation method and system |
CN114757960A (en) * | 2022-06-15 | 2022-07-15 | 汉斯夫(杭州)医学科技有限公司 | Tooth segmentation and reconstruction method based on CBCT image and storage medium |
CN117876578A (en) * | 2023-12-15 | 2024-04-12 | 北京大学口腔医学院 | Orthodontic tooth arrangement method based on crown root fusion |
Also Published As
Publication number | Publication date |
---|---|
CN110889850B (en) | 2022-07-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110889850B (en) | CBCT tooth image segmentation method based on central point detection | |
US11651494B2 (en) | Apparatuses and methods for three-dimensional dental segmentation using dental image data | |
Lahoud et al. | Artificial intelligence for fast and accurate 3-dimensional tooth segmentation on cone-beam computed tomography | |
US11995839B2 (en) | Automated detection, generation and/or correction of dental features in digital models | |
US20200350059A1 (en) | Method and system of teeth alignment based on simulating of crown and root movement | |
KR20200108822A (en) | Automatic classification and classification system of 3D tooth data using deep learning method | |
CN113223010B (en) | Method and system for multi-tissue full-automatic segmentation of oral cavity image | |
Kumar et al. | Descriptive analysis of dental X-ray images using various practical methods: A review | |
Fontenele et al. | Influence of dental fillings and tooth type on the performance of a novel artificial intelligence-driven tool for automatic tooth segmentation on CBCT images–A validation study | |
Lakshmi et al. | Classification of Dental Cavities from X-ray images using Deep CNN algorithm | |
US9672641B2 (en) | Method, apparatus, and computer readable medium for removing unwanted objects from a tomogram | |
Cristian et al. | A cone beam computed tomography annotation tool for automatic detection of the inferior alveolar nerve canal | |
CN110619646A (en) | Single-tooth extraction method based on panoramic image | |
CN113393470A (en) | Full-automatic tooth segmentation method | |
Jain et al. | Dental image analysis for disease diagnosis | |
Chen et al. | Detection of various dental conditions on dental panoramic radiography using Faster R-CNN | |
Kakehbaraei et al. | 3D tooth segmentation in cone-beam computed tomography images using distance transform | |
CN114241173B (en) | Tooth CBCT image three-dimensional segmentation method and system | |
US20220361992A1 (en) | System and Method for Predicting a Crown and Implant Feature for Dental Implant Planning | |
US20220358740A1 (en) | System and Method for Alignment of Volumetric and Surface Scan Images | |
Zhang et al. | Advancements in oral and maxillofacial surgery medical images segmentation techniques: An overview | |
Ezhov et al. | Development and validation of a cbct-based artificial intelligence system for accurate diagnoses of dental diseases | |
Orlowska et al. | Virtual tooth extraction from cone beam computed tomography scans | |
Zak et al. | The method of teeth region detection in panoramic dental radiographs | |
Song et al. | Convolutional Neural Networks For Apical Lesion Segmentation From Panoramic Radiographs |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||