CN113223010A - Method and system for fully automatically segmenting multiple tissues of oral cavity image - Google Patents

Method and system for fully automatically segmenting multiple tissues of oral cavity image

Info

Publication number
CN113223010A
CN113223010A
Authority
CN
China
Prior art keywords
data
cbct
image data
network model
labeling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110435626.5A
Other languages
Chinese (zh)
Other versions
CN113223010B (en)
Inventor
杨慧芳 (Yang Huifang)
张馨月 (Zhang Xinyue)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University School of Stomatology
Original Assignee
Peking University School of Stomatology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University School of Stomatology filed Critical Peking University School of Stomatology
Priority to CN202110435626.5A priority Critical patent/CN113223010B/en
Publication of CN113223010A publication Critical patent/CN113223010A/en
Application granted granted Critical
Publication of CN113223010B publication Critical patent/CN113223010B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/155 Segmentation; Edge detection involving morphological operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20152 Watershed segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30036 Dental; Teeth

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a method and a system for fully automatic segmentation of multiple tissues in oral CBCT images. Starting from acquired raw oral CBCT data, the data are labeled with a deep-learning method, and new images are then segmented by deep learning as well. The image-processing workflow is simple and requires no excessive manual labeling or intervention, overcoming the difficulty of conventional medical-data labeling and reducing inter-observer differences in the labeling process. During network training, a generative adversarial network is applied to the training data to enlarge the training set, which can effectively improve the generality of the network. The method significantly advances medical image segmentation and provides effective technical support for the automatic diagnosis and prognosis of oral diseases.

Description

Method and system for fully automatically segmenting multiple tissues of oral cavity image
Technical Field
The present application relates to techniques for the automatic segmentation of multiple tissues in oral cavity images.
Background
Research on artificial intelligence in the medical field is an important direction of current interdisciplinary development. Such research applies advanced techniques and methods from computer science and mathematics to clinical problems in medicine, and has great application prospects and value for wider adoption.
Automatic segmentation and identification of medical images is the focus of automated medical diagnosis. Based on CT, MRI, and similar images, automatic segmentation can not only reduce a clinician's reading time but also improve diagnostic accuracy.
CBCT data play an important role in stomatology, and how to analyze CBCT data accurately and efficiently is receiving growing attention. This patent applies deep-learning techniques to automatically segment multiple soft and hard tissues of the maxillofacial region, laying a technical foundation for automated oral diagnosis, including automatic tooth arrangement for malocclusion, automated diagnosis of periodontal disease, oral and maxillofacial surgical planning, and more.
CBCT images can reveal changes in maxillofacial hard tissues and surrounding soft-tissue structures over time, and are therefore widely used in orthodontics, periodontics, implantology, oral surgery, and other fields of dentistry. Automatic CBCT tissue segmentation aims to automatically extract, identify, and analyze anatomical structures such as the skull, teeth, alveolar bone (cortical and cancellous bone), periodontal spaces, pulp cavities, and mandibular canals, and is the key technology in longitudinal oral CBCT image analysis.
Existing segmentation methods for teeth in CBCT fall mainly into two categories: threshold-based methods and methods based on active contour models, see references 1-2. Threshold-based methods rest on the premise that there is a large difference in gray values between teeth and periodontium, and identify tooth tissue by controlling the threshold range. The main problem of such segmentation is therefore finding the optimal threshold; because the optimal threshold differs between samples, it must be re-determined for each new sample. Heo proposed an optimal-threshold scheme for segmenting teeth that uses a preset threshold, see reference 1. However, the tissue near the root is complex, and a single threshold range can hardly distinguish alveolar bone from the root region, see reference 2. Akhoondali et al. proposed a fast automatic segmentation method using region growing; because the gray values of the root and the attached tissue near the root are very similar, a suitable threshold for segmenting the tooth cannot be found, see reference 3. Researchers at Shanghai Jiao Tong University, Zhejiang University, Peking University, and elsewhere have segmented teeth with active contour methods; the active contour model is an interactive segmentation method based on contour-line reconstruction, in which each tooth must be segmented individually before the three-dimensional tooth model is finally reconstructed, see references 4-6.
With the development of CNN technology, deep-learning methods are gradually being applied in the medical field. Miki et al. describe a 2D CNN algorithm trained to classify teeth in CBCT images into 7 types; the detected teeth are enclosed in bounding boxes, and the classification accuracy reaches 77.4%, see reference 7. Lee et al. in Korea perform tooth segmentation on patient CBCT data based on a posterior probability function (PPM) combined with a CNN algorithm, with segmentation results superior to existing algorithms, see reference 8.
References:
1. Heo, H. and Chae, O.-S. Segmentation of tooth in CT images for the 3D reconstruction of teeth. Electronic Imaging 2004, Vol. 5298, SPIE, 2004.
2. Wang, Y., et al. Accurate tooth segmentation with improved hybrid active contour model. Phys Med Biol, 2018, 64(1): 015012.
3. Akhoondali, H., et al. Rapid automatic segmentation and visualization of teeth in CT-scan data. Journal of Applied Sciences, 2009, 9: 2031-2044.
4. Wang, G. Research on a CBCT dental image segmentation algorithm based on level sets [D]. University of Shanghai for Science and Technology, 2017.
5. Sun, J. Research on medical volume data processing algorithms and their application in the digital oral cavity [D]. Zhejiang University, 2019.
6. Wu, T. 3D tooth reconstruction with a level-set active contour model [J]. Journal of Image and Graphics, 2016, 21(08): 1078-1087.
7. Miki, Y., Muramatsu, C., Hayashi, T., Zhou, X., Hara, T., Katsumata, A., Fujita, H. Classification of teeth in cone-beam CT using deep convolutional neural network. Comput Biol Med, 2017, 80: 24-29.
8. Lee, S., et al. Automated CNN-based tooth segmentation in cone-beam CT for dental implant planning. IEEE Access, 2020, 8: 50507-50518.
Disclosure of Invention
However, because CBCT images acquired at different time points have different gray levels and noise distributions, existing traditional tooth segmentation algorithms cannot guarantee the accuracy and longitudinal consistency of the segmentation. This patent focuses on that problem and applies deep learning to the study of automatic CBCT image segmentation algorithms.
According to one aspect of the present invention, there is provided a computer-implemented method and system for fully automatic segmentation of multiple tissues in an oral CBCT image, comprising:
acquiring a plurality of CBCT image data of the oral cavity;
the CBCT data is subjected to space coordinate system adjustment, a coordinate system and a coordinate origin are determined, and the similarity degree of the spatial positions of the same organization is ensured to meet a preset standard based on the definition of the coordinate origin and the space coordinate system;
normalizing the gray value of each CBCT image data;
segmenting and labeling a plurality of tissue regions on each CBCT image data;
training a specific neural network model by using the labeled CBCT image data;
and testing a new CBCT image data sample using the trained neural network model to obtain a fully automatic tissue segmentation result;
and performing morphological analysis and texture analysis on the segmented tissues.
Optionally, the segmenting and labeling of the plurality of tissue regions on each CBCT image data includes:
labeling a plurality of tissue regions on each of m CBCT image data samples;
training an initial U-net network model A with the m labeled CBCT image data samples, and then gradually optimizing the model by increasing the number of training samples;
applying network model A to the (m+1)th unlabeled CBCT image data sample to obtain the (m+1)th labeled CBCT image data sample, with semi-automatic or manual correction performed after machine labeling;
optimizing network model A using the m+1 labeled CBCT image data samples and their corresponding labeling data;
and repeating the above steps until all CBCT image data samples are labeled.
Optionally, the labeling of the plurality of tissue regions on a single CBCT image data comprises:
selecting representative pictures 1 to n (n being less than the total number of slices of a single sample) from the same CBCT data and labeling them, the labeled content being the information of a picture displayed on a single CBCT slice (sagittal, coronal, or axial) or the multi-slice picture information of a certain region of the CBCT.
The labeling may be entirely manual, or may cover all or part of the data of q regions (the size and range of the data are not limited). Model B is trained with a neural-network method to obtain the relevant parameters of neural network model B. The (q+1)th region is then labeled automatically with neural network model B, the labeling result is corrected manually, and neural network model B is optimized.
Based on model B, the (q+2)th region is labeled using the q+1 labeled regions.
A new region sample, or the whole CBCT data, is tested with the established neural network to segment the required tissue; if the segmentation effect is poor, labeling and optimization of network model B are repeated until automatic segmentation of the sample is achieved.
Optionally, a generative adversarial network (GAN) may be used to augment the data, with continuous optimization, so that the model can be applied not only to the same data but also to data with different scanning parameters, different CBCT scanners, and different scan regions.
Optionally, defining the spatial coordinate system of the CBCT data (here, local coordinate system 0) includes:
taking the occlusal plane as the XOY plane and selecting the optimal left-right symmetry plane of the skull as the XOZ plane.
Optionally, the normalizing the gray-scale values of the CBCT image data includes:
the gray values of the overall image are counted and normalized to the (0, 1) range.
Optionally, the data may be filtered before being normalized.
Before the data are normalized, if the acquired data are full-cranial data (large field-of-view data), a standard template map of the oral-cranial gray-value distribution can be established; the standard template can be an average template voxel-distribution map built from hundreds of samples.
During normalization, if small field-of-view data were acquired, the data are registered and normalized with the average template voxel distribution as the reference standard.
Optionally, the annotated tissue is represented in the image processing software in categories, including one or more of:
dentine, enamel, background, alveolar cortical bone, alveolar cancellous bone, mandibular nerve canal, periodontal ligament, dental sclerite and pulp chamber, wisdom tooth, trabecular bone, adipose tissue, air, maxillary sinus.
Optionally, after a certain category is segmented and labeled, further subdividing the segmented category by using a watershed algorithm.
For example, if a person has 32 teeth, they can be divided into teeth 11, 12, 13, ..., 48 and labeled individually.
Each tooth has a corresponding alveolar bone region, which may be defined relative to the tooth at a given tooth position.
The maxillary sinus is divided into a left maxillary sinus and a right maxillary sinus.
Optionally, the method further comprises dividing a segmented region of multiple connected teeth into individual teeth.
Where the crown regions of the teeth are in close contact, the teeth are separated individually, and each tooth is stored as a separate region of interest (ROI), defined herein as a subclass.
The method of claim 7 is used to find the pulp of each tooth and to generate a corresponding label for each pulp; for example, with 32 teeth there are 32 labels.
Single teeth are segmented with a watershed algorithm based on distance transform or gradient transform: the pulp voxels serve as seed points, and the watershed algorithm labels the contacting teeth individually.
Boundary smoothing is applied to the segmented teeth; 2-6 Laplacian smoothing passes may be used to smooth the outer contour of the tooth model.
Optionally, the method further comprises segmenting a region formed by multiple connected segmented teeth into individual teeth and performing a three-dimensional mesh reconstruction, including:
searching for the seed points of the region-growing algorithm, taking the pulp-cavity voxel information of each tooth as the seed information and the boundary information of the tooth as the termination information;
growing by the region-growing method with a pulp-cavity voxel as the center;
and performing boundary smoothing on the segmented teeth, smoothing the outer contour of the tooth model, and reconstructing a three-dimensional surface model of the teeth.
Optionally, the growing by the region-growing method with the pulp as the center includes:
using the pulp as the seed point and growing by the region-growing method, stopping growth upon reaching the boundary of a tooth model that is one size smaller than the actual tooth, and storing the tooth as an independent region of interest (ROI); after growth stops, the boundary of the real tooth is reconstructed using erosion and dilation operations on the image;
when a tooth grown by the region-growing method comes into contact with another tooth, shortest-distance analysis from the pulp to the boundary is used to determine to which of the adjacent teeth a given point belongs.
Optionally, identifying the tooth position of each segmented individual tooth, the tooth position being identified using a probability distribution, comprising: establishing a standard oral CBCT data template in which the tooth position information of the teeth is respectively marked;
calculating a probability value for the degree of coincidence between the space occupied by the tooth in the actually processed image and the standard template space, so as to determine the tooth position to which the segmented tooth belongs; and
verifying the determined tooth position against the distribution rule of the dentition.
Optionally, the specific neural network model is a U-net neural network model,
the U-net model has 3-5 layers, and in the process of labeling the training data samples, the label categories include: dentine, enamel, background, alveolar cortical bone, alveolar cancellous bone, mandibular nerve canal, periodontal ligament, dental sclerite and pulp cavity, wisdom tooth, trabecular bone, adipose tissue, air, and maxillary sinus.
Optionally, when segmenting teeth, the selected input parameters are as follows: Patch size 32, 64, or 128; Batch size 32, 64, or 128; the loss function is Dice loss; the number of iterations is 30.
A patch is a small image block selected as input to the convolutional neural network during computation; patches are selected from the image with overlap. If the overlap rate is 30%, Patch 1 and Patch 2 share 30% of their volume.
Batch size refers to the amount of data the neural network computes at a time; it determines the time required to complete each epoch and the smoothness of the gradient between iterations during deep-learning training.
The number of training samples is increased by transforming existing training samples into new ones, the transformations including one or a combination of the following: magnification, reduction, rotation, translation, gray-value change, addition of Gaussian noise, and cropping of the data image.
According to yet another aspect of the present invention, there is also provided a computing device comprising a memory and a processor, the memory having stored thereon computer instructions which, when executed by the processor, perform the method of any of the preceding.
According to yet another aspect of the present invention, there is also provided a computer readable medium having stored thereon computer instructions which, when executed by a computer, perform the method of any of the preceding.
The invention relates to a method and a system for fully automatic segmentation of multiple tissues in oral CBCT images. During network training, a generative adversarial network is applied to the training data to increase the amount of training data, which can effectively improve the generality of the network. The method greatly advances medical image segmentation technology and provides effective technical support for the automatic diagnosis and prognosis of oral diseases.
The tooth segmentation method provided by this patent can not only provide a reference basis for the clinical diagnosis of the functional condition of different oral tissues, but can also predict treatment effects in the clinical care of patients, support the establishment of disease prediction models, and scientifically guide clinical treatment.
Drawings
FIG. 1 shows a general flow diagram of a computer-implemented method for fully automatically segmenting a CBCT image of an oral cavity into a plurality of tissues, according to an embodiment of the present invention.
FIG. 2 shows a diagram of a normalized classification process according to one embodiment of the invention.
Fig. 3 shows an example of a process for labeling a CBCT image data.
FIG. 4 shows a schematic diagram of a training data amplification mode according to an embodiment of the invention.
FIG. 5 illustrates a schematic diagram of training a particular neural network model using labeled CBCT image data, in accordance with an embodiment of the present invention.
Fig. 6 shows a block diagram of training the U-net model with cases 1 to n of raw data (CBCT data) and the corresponding labeled data, finally obtaining the parameters of the optimal segmentation model.
Fig. 7 shows an overall summary of the relevant parameter settings used when training the U-net network model.
Fig. 8 shows a classification block diagram of evaluation methods for multi-tissue segmentation results.
Fig. 9 shows a block diagram of the two-level classification and identification of oral CBCT data.
FIG. 10 shows a block diagram of the segmentation and labeling of all teeth identified at the first level.
Fig. 11 schematically shows the division of the dentition into individual teeth.
Fig. 12 illustrates several exemplary methods of segmentation and coding labeling methods, in particular, a rule-based segmentation method, a deep learning method, and a probability distribution method.
Fig. 13 illustrates several tooth coding methods.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
FIG. 1 illustrates a computer-implemented method for fully automatically segmenting a plurality of tissues from an oral CBCT image according to an embodiment of the present invention.
As shown in fig. 1, in step S110, a plurality of CBCT image data of the oral cavity are acquired.
In step S120, the spatial coordinate system of the CBCT data is adjusted to determine the origin of coordinates, and, based on the definitions of the origin and the spatial coordinate system, the spatial-position proximity of the same tissue is ensured to meet a predetermined standard. For example, when analyzing a periodontal-disease patient, the condition worsens as the disease progresses; therefore, when comparing the data of the first CBCT scan of the affected tooth with those of the second scan, the spatial coordinate systems of the same tissue should be kept as consistent as possible.
In one example, adjusting the spatial coordinate system of the CBCT data includes: taking the occlusal plane as the XOY plane and the left-right symmetry plane as the XOZ plane.
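By way of illustration only, the re-orientation described above can be sketched as a resampling of the volume into the new basis. The helper below is a hypothetical example, not part of the patent; it assumes the occlusal-plane normal and the symmetry-plane normal have already been estimated in voxel coordinates.

```python
import numpy as np
from scipy.ndimage import affine_transform

def reorient_volume(vol, occlusal_normal, symmetry_normal, origin):
    """Resample vol so the occlusal plane becomes XOY and the symmetry plane XOZ.
    Normals and origin are given in voxel coordinates (hypothetical inputs)."""
    z = np.asarray(occlusal_normal, float)
    z /= np.linalg.norm(z)                    # new Z axis (occlusal-plane normal)
    y = np.asarray(symmetry_normal, float)
    y -= np.dot(y, z) * z                     # orthogonalize against Z
    y /= np.linalg.norm(y)                    # new Y axis (symmetry-plane normal)
    x = np.cross(y, z)                        # new X axis
    R = np.stack([x, y, z])                   # rows are the new basis vectors
    # affine_transform maps output coords to input coords: in = R.T @ out + offset
    offset = np.asarray(origin, float) - R.T @ np.asarray(origin, float)
    return affine_transform(vol, R.T, offset=offset, order=1)
```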
In step S130, the gradation value of each CBCT image data is normalized.
It should be noted that, before the data is normalized, filtering may be optionally performed on the data.
One normalization method sets a lower and an upper gray-value threshold: in the CBCT image data, gray values above the upper threshold are set to the upper threshold and values below the lower threshold to the lower threshold; the gray values can then be normalized to the [0, 1] interval by dividing the difference between the adjusted gray value and the lower threshold by the difference between the upper and lower thresholds.
It should be noted that CBCT data vary widely: there are large field-of-view data, such as full-head data, as well as medium and small field-of-view data that cover only part of the head, and normalization of medium- and small-field data requires special processing. Relative to the large-field (full-head) data, the medium- and small-field data correspond to predetermined smaller size ranges.
In one example, a standard template (Model) of the oral-cranial gray-value distribution is pre-established; it may be an average distribution template with gray information built from hundreds of large field-of-view data samples (e.g., whole-cranial data).
Normalizing the gray values of the CBCT image data may then include: before normalization, judging whether the acquired data are large-field data or medium/small-field data; if large-field data such as whole-cranial data, normalizing with the conventional method; if medium- or small-field data, registering them to the Model with the standard oral-cranial gray-value distribution template map as the reference, and then normalizing against the Model's average gray-information distribution template.
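As an illustrative sketch of this registration step, the following uses SimpleITK; the patent does not name a library, and the file names and parameter values are placeholder assumptions.

```python
import SimpleITK as sitk

# File names are placeholders.
fixed = sitk.ReadImage("cranial_template.nii.gz", sitk.sitkFloat32)   # standard template
moving = sitk.ReadImage("small_fov_cbct.nii.gz", sitk.sitkFloat32)    # small-field scan

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0, minStep=1e-4,
                                             numberOfIterations=200)
reg.SetInitialTransform(sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY))
reg.SetInterpolator(sitk.sitkLinear)

transform = reg.Execute(fixed, moving)
# Resample the small-field data into template space; gray-value normalization
# against the template's average distribution would follow.
aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```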
FIG. 2 shows a diagram of a normalized classification process according to one embodiment of the invention.
As shown in fig. 2, normalization can be divided into global normalization, partial area normalization, and normalization after filtering partial area.
Global normalization means that all the data are computed according to the formula:
Y=aX+b;
x is the corresponding gray value of the original image, Y is the corresponding gray value of the processed image, and Y belongs to (0, 1).
The partial area normalization means that when original data are shot, only local data of an oral cavity, such as a certain periodontal disease, are included, only diseased tissues are concerned, therefore, shot images only include local data of the oral cavity, and therefore, in the calculation process, the small visual field data are registered to a Model by taking an oral cavity skull gray value distribution standard template map as a reference standard and a Model as a reference template, and the small visual field data are normalized;
as described above, there is an artifact in some data before normalization, and a filtering method is required to remove the corresponding artifact.
In step S140, a plurality of tissue regions on each CBCT image data are segmented and labeled.
As a preferred example, segmenting and labeling multiple tissue regions on each CBCT image data includes:
labeling multiple tissue regions on each of m CBCT image data samples;
training an initial U-net network model A with the m labeled CBCT image data samples, and then gradually optimizing the model by increasing the number of training samples;
specifically, applying network model A to the (m+1)th unlabeled CBCT image data sample to obtain the (m+1)th labeled CBCT image data sample, with semi-automatic or manual correction performed after machine labeling;
optimizing the network model using the m+1 labeled CBCT image data samples (i.e., CBCT image data together with the corresponding annotation data; the previous m labeled samples plus the (m+1)th sample labeled afterwards);
and repeating the above steps until all CBCT image data samples are labeled.
Labeling a CBCT image data sample (three-dimensional multi-slice volumetric data) may include labeling the data of each slice (hereinafter referred to as a picture). Fig. 3 shows an example of a process for labeling CBCT image data.
As shown in fig. 3, labeling multiple tissue regions on a single CBCT image data includes:
selecting representative pictures (called the 1st to the nth, n being less than the total number of slices of a single sample) from the same CBCT data and labeling them, the labeled content being the information of a picture displayed on a single CBCT slice (sagittal, coronal, or axial) or the multi-slice picture information of a certain region of the CBCT. Illustratively, regions containing the data of interest may be selected in each slice; 1 or more regions may be selected per slice, and each selected region may be a square, a circle, a trapezoid, or any other shape, the selected regions mapping onto the computation regions of the subsequent deep learning.
The labeling may be entirely manual, or may cover all or part of the data of q regions (q being an integer of 1 or more; the size and range of the data are not limited). Neural network model B is trained with a neural-network method to obtain its relevant parameters. The (q+1)th region is labeled automatically with neural network model B, and the result is corrected semi-automatically or manually. Model B is then optimized with the q+1 labeled regions. Next, based on model B, the (q+2)th region is labeled. This process is repeated until all n pictures are labeled. The whole CBCT data is then tested with the established neural network to segment the required tissue; if the segmentation effect is poor, labeling and optimization of network model B are repeated. Finally, automatic segmentation of the CBCT sample is achieved.
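The iterative "label, train, correct, re-train" loop described above can be summarized schematically as follows; train, predict, and correct are caller-supplied placeholders standing in for U-net training, machine labeling, and semi-automatic or manual correction, and are not functions defined by the patent.

```python
def bootstrap_labeling(samples, initial_labels, train, predict, correct):
    """samples: list of CBCT volumes; initial_labels: labels of the first m samples.
    train(samples, labels) -> model, predict(model, sample) -> labels, and
    correct(labels) -> labels are caller-supplied placeholder callables."""
    labels = list(initial_labels)                    # the first m labeled samples
    model = train(samples[:len(labels)], labels)     # initial network model A
    for i in range(len(labels), len(samples)):
        proposal = predict(model, samples[i])        # machine-label the next sample
        labels.append(correct(proposal))             # semi-automatic / manual fix
        model = train(samples[:i + 1], labels)       # re-optimize A on i+1 samples
    return model, labels
```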
In another example, segmenting and labeling the plurality of tissue regions on the respective CBCT image data includes:
training a generic network model using a first number of labeled CBCT image data samples;
applying the generic network model to a second number of unlabeled CBCT image data samples to obtain the second number of labeled CBCT image data samples;
training the generic network model using the first number of labeled CBCT image data samples and the second number of labeled CBCT image data samples,
and so on until all CBCT image data samples have been labeled.
By way of example, the categories of annotations include: dentine, enamel, background, alveolar cortical bone, alveolar cancellous bone, mandibular nerve canal, periodontal ligament, dental sclerite and pulp chamber, wisdom tooth, trabecular bone, adipose tissue, air, maxillary sinus.
In this way, labeling of a single CBCT sample can be achieved.
It should be noted that, when training network model B, the training data may be augmented using a generative adversarial network (GAN) and iteratively optimized, so that a model initially used on the same data can gradually also be used on data with different scanning parameters, different CBCT scanners, and different scan regions. Fig. 4 shows a schematic diagram of training-data augmentation according to an embodiment of the present invention; as can be seen, sample augmentation may be one or a combination of the following: magnification, reduction, rotation, translation, gray-value change, addition of Gaussian noise, and cropping of the data image.
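Alongside GAN-based amplification, the classical transformations listed above can be sketched as below (NumPy/SciPy assumed; the ranges are illustrative, and label volumes need the same geometric transforms):

```python
import numpy as np
from scipy.ndimage import rotate, shift

def augment(vol, rng):
    """Return one randomly transformed copy of a normalized CBCT volume.
    Scaling (zoom) and cropping variants are omitted for brevity."""
    vol = rotate(vol, angle=rng.uniform(-10, 10), axes=(1, 2),
                 reshape=False, order=1)                          # rotation
    vol = shift(vol, shift=rng.integers(-5, 6, size=3), order=1)  # translation
    vol = vol * rng.uniform(0.9, 1.1)                             # gray-value change
    vol = vol + rng.normal(0.0, 0.01, vol.shape)                  # Gaussian noise
    return np.clip(vol, 0.0, 1.0)

rng = np.random.default_rng(0)
volume = np.random.random((64, 64, 64)).astype(np.float32)   # stand-in for a CBCT volume
augmented = [augment(volume, rng) for _ in range(8)]         # 8 new samples from one volume
```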
In step S150, a specific neural network model is trained using the labeled CBCT image data.
In one example, the particular neural network model is a U-net neural network model. FIG. 5 shows a schematic diagram of training a particular neural network model using labeled CBCT image data.
In one example, the U-net model has 3-5 layers and the training data sample size is 1 to n. In the process of training on the data with 3D U-net, the identification categories are, for example, different combinations of: tooth (dentine, enamel), background, alveolar cortical bone, alveolar cancellous bone, mandibular nerve canal, periodontal ligament, dental sclerite and pulp cavity, wisdom tooth, trabecular bone, adipose tissue, air, and maxillary sinus.
When segmenting teeth, as an example, the selected input parameters are as follows: Patch size 64; Batch size 32, 64, or 128; and the loss function is Dice loss (only an example; the loss function can also be adjusted). Meanwhile, the number of cycles may be set to fewer than 100; for example, training is cut off if the segmentation accuracy does not increase within 10 cycles. Here, Patch size denotes the size of an image block, generally the edge length of a cube, in voxels, and Batch size denotes the amount of data put into the deep-learning network per training step. Equation 1 below is an example of a loss function,
Dice = 2TP / (2TP + FP + FN) (1)
where TP is the number of true positives, TN of true negatives, FP of false positives, and FN of false negatives.
A patch is a small image block selected as input to the convolutional neural network during computation; patches are selected from the image with overlap. If the overlap rate is 30%, Patch 1 and Patch 2 share 30% of their volume.
Batch size refers to the amount of data the neural network computes at a time; it determines the time required to complete each epoch and the smoothness of the gradient between iterations during deep-learning training.
As described above, to make training on the existing samples more robust, the number of training samples is increased by transforming them into new training samples; a data augmentation method is used during training, with augmentation parameters including magnification, reduction, rotation, translation, gray-value change, addition of Gaussian noise, cropping, and the like, applied to the data image.
Fig. 6 shows a block diagram of training the U-net model with cases 1 to n of raw data (CBCT data) and the corresponding labeled data, finally obtaining the parameters of the optimal segmentation model.
Fig. 7 shows an overall summary of the relevant parameter settings used when training the U-net network model.
In step S160, the trained neural network model is used to test the new CBCT image data sample, so as to obtain a full-automatic segmentation result of the tissue.
It should be noted that, after the test result for a new CBCT image data sample has been obtained and manually evaluated, the sample can be added to the training sample set as a training sample and returned to the U-net network model for iterative training.
In the foregoing examples, the oral CBCT image is subjected to fully automatic multi-tissue segmentation, but this is merely an example; fully automatic multi-tissue segmentation may also be applied to other three-dimensional slice images or time-series images, such as magnetic resonance imaging (MRI), computed tomography (CT), micro-computed tomography (micro-CT), confocal laser scanning microscopy (CLSM), and the like.
It should be noted that, before the CBCT data are normalized in fig. 1, the data may be filtered to remove metal artifacts, dental-restorative-material artifacts, cavity artifacts due to tissue distribution, motion artifacts generated by the machine during operation, and so on. Metal artifacts in the mouth are removed on the principle of filtering, using methods such as Gaussian filtering, median filtering, mean shift, anisotropic diffusion, bilateral filtering, Kuwahara filtering, and the like.
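A possible filtering sketch follows; the threshold and filter sizes are illustrative assumptions, not values from the patent.

```python
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter

def suppress_artifacts(vol, metal_threshold=0.95):
    """Median-filter streak noise, then replace extreme 'metal' voxels with
    smoothed context (illustrative parameters on a [0, 1]-normalized volume)."""
    vol = median_filter(vol.astype(np.float32), size=3)
    metal = vol > metal_threshold                       # crude metal-voxel mask
    vol[metal] = gaussian_filter(vol, sigma=2)[metal]   # fill from smoothed context
    return vol
```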
After the CBCT image has been segmented with the computer-implemented method for fully automatic multi-tissue segmentation of oral CBCT images according to the embodiment of the present invention, the segmentation results (e.g., gingiva, teeth) may be subjected to morphological analysis and pathological analysis, manually or automatically.
The multi-tissue segmentation results can be evaluated in various forms, such as training-set self-evaluation, validation-set evaluation, and evaluation against reference-standard data imported from other formats, as shown in fig. 8.
According to another embodiment of the invention, the classes obtained from the trained neural network model are further subdivided, for example using a watershed algorithm.
Classification and identification of oral CBCT data can be divided into two levels: first, distinguishing different classes, such as teeth and air; and second, further subdividing within the same class. For example, the segmented teeth are connected and comprise multiple teeth, and further classification allows the teeth to be classified and identified individually, as shown in fig. 9.
How is accurate segmentation of individual teeth achieved after the teeth have been segmented as a whole, given that the segmented result contains both single teeth and multiple connected teeth? As a preferred embodiment, the pulp of each tooth is identified, and the pulps of the contacting teeth are grown simultaneously up to the ROI boundary using the region-growing method.
In one example, a segmented region formed by multiple connected teeth is divided into individual teeth; the specific process may include:
obtaining the pulp chamber of each tooth using the multi-tissue fully automatic segmentation method described in connection with fig. 1;
segmenting the teeth with a watershed algorithm, taking the pulp as the center;
searching for the seed points of the watershed algorithm, taking the pulp-cavity voxel information of each tooth as the seed information and the boundary information of the tooth as the termination information;
and segmenting single teeth with a watershed algorithm based on distance transform or gradient transform: the pulp voxels serve as seed points, and the watershed algorithm labels the contacting teeth individually.
The segmented teeth may then be boundary smoothed to smooth the outer contour of the tooth model.
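A minimal sketch of the pulp-seeded watershed separation, assuming SciPy and scikit-image:

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

def split_teeth(tooth_mask, pulp_labels):
    """tooth_mask: binary volume of the connected teeth; pulp_labels: integer
    volume with one label per pulp cavity (the seed regions)."""
    distance = ndimage.distance_transform_edt(tooth_mask)
    # Flood from the pulp seeds over the inverted distance map, confined to the
    # teeth; ridge lines between basins become the boundaries between teeth.
    return watershed(-distance, markers=pulp_labels, mask=tooth_mask)
```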
In one example, after the multi-tissue segmentation, a segmented region formed by multiple connected teeth is further divided into single teeth and a three-dimensional mesh reconstruction is performed, including:
first, training a tooth recognition model that is one size, for example 10%, smaller than the actual teeth;
searching for the seed points of the region-growing method, taking the pulp-cavity voxel information of each tooth as the seed information and the boundary information of the tooth as the termination information;
growing by the region-growing method with a pulp-cavity voxel as the center; specifically, with the pulp as the seed point, growth stops upon reaching the boundary of the tooth model that is one size smaller than the actual tooth. Each tooth then exists as an independent ROI, albeit one size smaller than the actual tooth; when teeth come into contact during growing, analysis of the shortest distance to the boundary determines to which of the adjacent teeth a given point belongs.
Boundary smoothing is then performed on the segmented teeth, the outer contour of the tooth model is smoothed, and a three-dimensional surface model of the teeth is reconstructed; for the boundary smoothing, the Laplacian operator may be applied 2-5 times.
In one example, training a tooth recognition model one size smaller than the actual teeth may be accomplished as follows: a U-net neural network model is trained to identify boundaries 10% smaller than those of the teeth, and the teeth are then separated individually using this network, so that each tooth is a separate ROI with no contact points.
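The contact-point rule above (shortest distance decides ownership) can be approximated by assigning every voxel inside the real tooth boundary to its nearest labeled seed voxel; a sketch follows (SciPy assumed; this uses Euclidean rather than geodesic distance from the pulp):

```python
import numpy as np
from scipy import ndimage

def grow_to_boundary(seed_labels, full_mask):
    """seed_labels: integer labels from the 'one size smaller' tooth model
    (0 = unlabeled); full_mask: binary mask of the real tooth boundary."""
    # Index arrays of the nearest labeled voxel for every position in the volume.
    _, (ix, iy, iz) = ndimage.distance_transform_edt(seed_labels == 0,
                                                     return_indices=True)
    grown = seed_labels[ix, iy, iz]          # label of the nearest seed voxel
    return np.where(full_mask, grown, 0)     # stop at the real tooth boundary
```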
FIG. 10 shows a block diagram of the segmentation and labeling of all teeth identified at the first level. The teeth are first divided into independent teeth with spatial positions, and the segmented teeth are then encoded according to rules.
In one example, the method further comprises identifying the position of each segmented individual tooth, for example using a probability distribution, including: establishing a standard oral CBCT data template in which the tooth position information is marked, i.e., the spatial positions belonging to 11, 12, 13, 14, and so on, up to 48; and calculating a probability value for the degree of coincidence between the space occupied by the tooth in the actually processed image and the standard template space, so as to determine the tooth position to which the segmented tooth belongs (the larger the probability value, the more likely the tooth belongs to that position). Meanwhile, because each person's teeth are distributed according to a fixed rule, i.e., sequentially from 11, 12, 13, 14, 15, 16, 17, the real tooth position of a given tooth can be determined and verified by combining the distribution rule with the computed probability values.
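A schematic sketch of the coincidence-probability computation, assuming the tooth mask has already been registered to the standard template space (template_masks is a hypothetical structure, not defined by the patent):

```python
import numpy as np

def identify_position(tooth_mask, template_masks):
    """tooth_mask: binary mask of one segmented tooth in template space;
    template_masks: dict mapping FDI positions (11..48) to binary masks."""
    scores = {}
    for position, tmpl in template_masks.items():
        overlap = np.logical_and(tooth_mask, tmpl).sum()
        scores[position] = overlap / max(tooth_mask.sum(), 1)  # coincidence probability
    best = max(scores, key=scores.get)   # the most likely tooth position
    return best, scores
```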
Fig. 11 schematically shows the division of the dentition into individual teeth.
Fig. 12 illustrates several exemplary methods of segmentation and coding labeling methods, in particular, a rule-based segmentation method, a deep learning method, and a probability distribution method.
Fig. 13 illustrates several tooth coding methods: the FDI tooth-position notation, the Palmer notation, and the universal numbering system. The FDI notation, also called the two-digit notation and used internationally, records each tooth with 2 Arabic numerals. The Palmer notation records each tooth with a quadrant symbol and one Arabic numeral. The universal numbering system numbers the teeth from 1 to 32.
As a preferred example, when analyzing single-tooth data, a local coordinate system may be set: the centroid of the single tooth is selected as the origin, the long axis of the tooth is the Z-axis, and the bucco-lingual direction, pointing from the lingual side to the labial side, is the X-axis.
As a preferred example, the embodiment of the invention can also review a patient's oral multi-tissue segmentation results over time for longitudinal analysis, for example analyzing the pattern of alveolar bone recession and the change in alveolar bone density in periodontal patients, or the positional changes of teeth and roots and the changes in alveolar bone density during orthodontic treatment. The invention provides accurate multi-tissue segmentation results for oral CBCT images, thereby not only providing a reference basis for the clinical diagnosis of the functional condition of different oral tissues, but also predicting treatment effects in clinical practice, establishing disease prediction models, and scientifically guiding the clinical treatment of patients.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A computer-implemented method for fully automatically segmenting a plurality of tissues from CBCT images of an oral cavity, comprising:
acquiring a plurality of CBCT image data of the oral cavity;
adjusting the spatial coordinate system of the CBCT data, determining an origin of coordinates, and ensuring, based on the definitions of the origin of coordinates and the spatial coordinate system, that the spatial-position similarity of the same tissue meets a predetermined standard;
normalizing the gray value of each CBCT image data;
segmenting and labeling a plurality of tissue regions on each CBCT image data;
training a specific neural network model by using the labeled CBCT image data;
and testing the new CBCT image data sample by using the trained neural network model to obtain a full-automatic segmentation result of the tissue.
2. The method of claim 1, the segmenting and labeling a plurality of tissue regions on each CBCT image data comprising:
labeling a plurality of tissue regions of each CBCT image data sample in m CBCT image data, wherein m is an integer greater than or equal to 1;
training a U-net network model A by using the m labeled CBCT image data samples;
applying the U-net network model A to the (m+1)th unlabeled CBCT image data sample to obtain the (m+1)th labeled CBCT image data sample, with semi-automatic or manual correction performed after machine labeling;
optimizing the U-net network model A using the m +1 labeled CBCT image data samples; and
and repeating the steps until all CBCT image data samples are labeled.
3. The method of claim 2, labeling a plurality of tissue regions on each CBCT image data sample of the m CBCT image data comprising:
selecting representative n pictures in the same CBCT data, marking the pictures, wherein the marked content is information of a certain area displayed by a CBCT single layer, n is less than the total layer number of the same CBCT data, n is an integer greater than or equal to 1, and each picture corresponds to one layer; wherein the CBCT monolayer is sagittal, coronal, or axial;
the marking of the n pictures is manual marking, or is performed as follows: determining an attention area in each picture, labeling q attention areas, training a neural network model B by using the labeled q attention areas, automatically labeling a (q + 1) th area by using the neural network model B, manually or semi-automatically correcting a labeling result, optimizing the neural network model B by using the (q + 1) th labeled area, and repeating the steps until all the n pictures are labeled;
and testing the same CBCT data as a whole by using the established neural network model B, segmenting a required tissue, and if the segmentation effect does not meet the preset standard, repeatedly marking and optimizing the network model B until a termination condition is reached, thereby realizing the automatic segmentation of the CBCT sample.
4. The method of claim 3, wherein the data is augmented by a generative confrontation network method when training the network model B.
5. The method of claim 1, the spatial coordinate system adjustment of the CBCT data comprising:
the occlusal plane is taken as an XOY plane, and the left and right symmetrical planes are taken as XOZ planes.
6. The method of claim 1, the normalizing the gray scale values of the respective CBCT image data comprising:
the gray values of the whole image are counted and normalized to be between (0, 1),
before the data is normalized, if the shot data is the data of the whole skull, an oral skull gray value distribution standard template map can be established, the standard template can be an average distribution template with gray information established on the basis of hundreds of samples,
in the data normalization process, if the small-field data are shot, the oral cavity skull gray value distribution standard template picture is taken as a standard, the small-field data and the distribution standard template picture are registered, and then normalization is carried out according to the average distribution template of the gray information of the distribution standard template picture.
7. The method of claim 1, the labeled categories comprising:
dentine, enamel, background, alveolar cortical bone, alveolar cancellous bone, mandibular nerve canal, periodontal ligament, dental sclerite and pulp cavities, wisdom teeth, trabecular bone, adipose tissue, air and maxillary sinuses.
8. A computer-implemented method for fully automatically segmenting a plurality of tissues and/or regions of an oral cavity image, the oral cavity image being a three-dimensional slice image or a time series image, the method comprising:
acquiring three-dimensional tomographic data having a spatial structure of the oral cavity from the oral cavity image;
adjusting the coordinate system of the three-dimensional tomographic data, determining the coordinate origin of the spatial coordinate system, and ensuring that the spatial-position similarity of the same tissue meets a predetermined standard;
normalizing the gray values of the three-dimensional tomographic data;
segmenting and labeling a plurality of tissues on the three-dimensional tomographic data;
training a specific neural network model using the labeled three-dimensional tomographic data;
and testing a new sample with the trained neural network model to obtain a fully automatic tissue segmentation result.
9. A computing device comprising a memory and a processor, the memory having stored thereon computer instructions that, when executed by the processor, perform the method of any of the preceding claims 1 to 7.
10. A computer readable medium having stored thereon computer instructions which, when executed by a computer, perform the method of any of the preceding claims 1 to 7.
CN202110435626.5A 2021-04-22 2021-04-22 Method and system for multi-tissue full-automatic segmentation of oral cavity image Active CN113223010B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110435626.5A CN113223010B (en) 2021-04-22 2021-04-22 Method and system for multi-tissue full-automatic segmentation of oral cavity image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110435626.5A CN113223010B (en) 2021-04-22 2021-04-22 Method and system for multi-tissue full-automatic segmentation of oral cavity image

Publications (2)

Publication Number Publication Date
CN113223010A true CN113223010A (en) 2021-08-06
CN113223010B CN113223010B (en) 2024-02-27

Family

ID=77088459

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110435626.5A Active CN113223010B (en) 2021-04-22 2021-04-22 Method and system for multi-tissue full-automatic segmentation of oral cavity image

Country Status (1)

Country Link
CN (1) CN113223010B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190197691A1 (en) * 2016-08-24 2019-06-27 Carestream Dental Technology Topco Limited Method and system for hybrid mesh segmentation
CN109712703A (en) * 2018-12-12 2019-05-03 上海牙典软件科技有限公司 A kind of correction prediction technique and device based on machine learning
CN110189352A (en) * 2019-05-21 2019-08-30 重庆布瑞斯科技有限公司 A kind of root of the tooth extracting method based on oral cavity CBCT image
CN110503654A (en) * 2019-08-01 2019-11-26 中国科学院深圳先进技术研究院 A kind of medical image cutting method, system and electronic equipment based on generation confrontation network
CN110930421A (en) * 2019-11-22 2020-03-27 电子科技大学 Segmentation method for CBCT (Cone Beam computed tomography) tooth image
CN110974288A (en) * 2019-12-26 2020-04-10 北京大学口腔医学院 Periodontal disease CBCT longitudinal data recording and analyzing method
CN111161290A (en) * 2019-12-27 2020-05-15 西北大学 Image segmentation model construction method, image segmentation method and image segmentation system
CN111388125A (en) * 2020-03-05 2020-07-10 深圳先进技术研究院 Method and device for calculating tooth movement amount before and after orthodontic treatment
CN112381098A (en) * 2020-11-19 2021-02-19 上海交通大学 Semi-supervised learning method and system based on self-learning in target segmentation field

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HUIFANG YANG ET AL.: "Research on a segmentation and evaluation method combining tooth morphology features", Int. J. Morphol., vol. 38, no. 5
WEI DUAN ET AL.: "Refined Tooth and Pulp Segmentation using U-Net in CBCT image", Dentomaxillofacial Radiology, vol. 6, pages 176-178

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113643446A (en) * 2021-08-11 2021-11-12 北京朗视仪器股份有限公司 Automatic marking method and device for mandibular neural tube and electronic equipment
CN113822904A (en) * 2021-09-03 2021-12-21 上海爱乐慕健康科技有限公司 Image labeling device and method and readable storage medium
CN113822904B (en) * 2021-09-03 2023-08-08 上海爱乐慕健康科技有限公司 Image labeling device, method and readable storage medium
CN114496254A (en) * 2022-01-25 2022-05-13 首都医科大学附属北京同仁医院 Gingivitis evaluation system construction method, gingivitis evaluation system and gingivitis evaluation method
CN114187293A (en) * 2022-02-15 2022-03-15 四川大学 Oral cavity palate part soft and hard tissue segmentation method based on attention mechanism and integrated registration
WO2023246463A1 (en) * 2022-06-24 2023-12-28 杭州朝厚信息科技有限公司 Oral panoramic radiograph segmentation method
CN115661141A (en) * 2022-12-14 2023-01-31 上海牙典医疗器械有限公司 Tooth and alveolar bone segmentation method and system based on CBCT image
CN115619810A (en) * 2022-12-19 2023-01-17 中国医学科学院北京协和医院 Prostate partition method, system and equipment
CN115619810B (en) * 2022-12-19 2023-10-03 中国医学科学院北京协和医院 Prostate partition segmentation method, system and equipment

Also Published As

Publication number Publication date
CN113223010B (en) 2024-02-27

Similar Documents

Publication Publication Date Title
CN113223010B (en) Method and system for multi-tissue full-automatic segmentation of oral cavity image
US11651494B2 (en) Apparatuses and methods for three-dimensional dental segmentation using dental image data
Jader et al. Deep instance segmentation of teeth in panoramic X-ray images
US20200402647A1 (en) Dental image processing protocol for dental aligners
US11464467B2 (en) Automated tooth localization, enumeration, and diagnostic system and method
US9439610B2 (en) Method for teeth segmentation and alignment detection in CBCT volume
US11443423B2 (en) System and method for constructing elements of interest (EoI)-focused panoramas of an oral complex
Duan et al. Refined tooth and pulp segmentation using U-Net in CBCT image
CN109712703B (en) Orthodontic prediction method and device based on machine learning
Kim et al. Automatic extraction of inferior alveolar nerve canal using feature-enhancing panoramic volume rendering
US20220084267A1 (en) Systems and Methods for Generating Quick-Glance Interactive Diagnostic Reports
CN110889850B (en) CBCT tooth image segmentation method based on central point detection
CN114757960B (en) Tooth segmentation and reconstruction method based on CBCT image and storage medium
CN111260672B (en) Method for guiding and segmenting teeth by using morphological data
Cristian et al. A cone beam computed tomography annotation tool for automatic detection of the inferior alveolar nerve canal
Jang et al. Fully automatic integration of dental CBCT images and full-arch intraoral impressions with stitching error correction via individual tooth segmentation and identification
Yau et al. An adaptive region growing method to segment inferior alveolar nerve canal from 3D medical images for dental implant surgery
Kakehbaraei et al. 3D tooth segmentation in cone-beam computed tomography images using distance transform
CN116958169A (en) Tooth segmentation method for three-dimensional dental model
Pavaloiu et al. Knowledge based segmentation for fast 3D dental reconstruction from CBCT
CN114241173B (en) Tooth CBCT image three-dimensional segmentation method and system
Zhang et al. Advancements in oral and maxillofacial surgery medical images segmentation techniques: An overview
Harrison et al. Segmentation and 3D-modelling of single-rooted teeth from CBCT data: an automatic strategy based on dental pulp segmentation and surface deformation
Hao et al. Ai-enabled automatic multimodal fusion of cone-beam ct and intraoral scans for intelligent 3d tooth-bone reconstruction and clinical applications
Zhu et al. An algorithm for automatically extracting dental arch curve

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant