CN111685899A - Dental orthodontic treatment monitoring method based on intraoral images and three-dimensional models - Google Patents


Publication number
CN111685899A
Authority
CN
China
Legal status
Pending
Application number
CN202010533803.9A
Other languages
Chinese (zh)
Inventor
苏剑
王贺升
顾晨岚
Current Assignee
Shanghai Yinma Technology Co ltd
Original Assignee
Shanghai Yinma Technology Co ltd
Application filed by Shanghai Yinma Technology Co ltd filed Critical Shanghai Yinma Technology Co ltd
Priority to CN202010533803.9A
Publication of CN111685899A

Classifications

    • A61C7/002 — Orthodontic computer-assisted systems (A61C7/00 Orthodontics)
    • A61C19/04 — Measuring instruments specially adapted for dentistry (A61C19/00 Dental auxiliary appliances)
    • G06N3/045 — Combinations of networks (neural-network architectures)
    • G06N3/08 — Learning methods (neural networks)
    • G06T17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/136 — Segmentation; edge detection involving thresholding
    • G06T2200/04 — Image data processing or generation involving 3D image data
    • G06T2207/20081 — Training; learning
    • G06T2207/20084 — Artificial neural networks [ANN]
    • G06T2207/30036 — Dental; teeth
    • G06T2207/30204 — Marker
    • G06T2210/41 — Medical

Abstract

The invention discloses a method for monitoring orthodontic treatment based on intraoral images and a three-dimensional model, comprising the following steps: step one, acquiring a preprocessed P1-stage intraoral image and a three-dimensional digital dental model of a patient; step two, performing tooth-gum and tooth-tooth segmentation of the three-dimensional digital dental model based on the P1-stage intraoral image, obtaining a segmented three-dimensional digital dental model; step three, acquiring a preprocessed P2-stage intraoral image of the patient; step four, transforming the segmented model according to the P2-stage intraoral image, and generating the P2-stage three-dimensional digital dental model together with the coordinate transformation data and morphological change data of the dental model from stage P1 to stage P2. Merely by photographing intraoral images, the invention obtains the coordinate transformation and morphological changes of the patient's teeth after orthodontic treatment, avoiding a post-treatment scan of the oral cavity with a scanner.

Description

Dental orthodontic treatment monitoring method based on intraoral images and three-dimensional models
Technical Field
The invention belongs to the technical field of tooth monitoring, and particularly relates to a tooth orthodontic treatment monitoring method based on an intraoral image and a three-dimensional model.
Background
In the field of orthodontic treatment, dental monitoring technology allows orthodontists to monitor a patient's treatment more conveniently, quickly, and effectively. With the help of devices such as smartphones, doctors can monitor the state of the teeth remotely, without the patient having to travel to a hospital or clinic. This not only saves the patient the time and cost of repeated visits, but also greatly facilitates the doctor's tracking and monitoring of the treatment process, and helps the treatment plan to be advanced, adjusted, and refined in a timely manner.
With the development of computer technology, techniques for acquiring and processing two-dimensional images and three-dimensional digital models have also advanced rapidly. Image segmentation and recognition methods, as well as methods for detecting objects in images, are increasingly mature, and advances in hardware have made three-dimensional digital models ever easier to acquire. In a typical workflow, a scanner scans the interior of the patient's oral cavity; the resulting three-dimensional digital model is imported into a computer and segmented by geometry-based or machine-learning methods; finally, the result is presented visually to the doctor and the patient, assisting (or even fully taking over) the formulation and advancement of the treatment plan. Orthodontic treatment in this new mode is being applied ever more widely in clinical practice.
At present, however, acquiring the three-dimensional digital model still depends on scanner data acquisition. During scanning, a cheek retractor must hold the patient's mouth open and fixed, and the whole acquisition process causes the patient a certain amount of discomfort, so repeated scanner sessions are unsuitable for monitoring orthodontic treatment.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a method for monitoring orthodontic treatment based on intraoral images and a three-dimensional model, which obtains the coordinate transformation and morphological changes of the patient's teeth after orthodontic treatment merely by photographing intraoral images, avoiding a post-treatment scan of the oral cavity with a scanner. The patient's orthodontic state can thus be monitored without requiring an in-clinic follow-up visit; problems are found and corrected in time, shortening the orthodontic treatment period.
The invention provides a tooth orthodontic treatment monitoring method based on an intraoral image and a three-dimensional model in a first aspect, which comprises the following steps: step one, acquiring a preprocessed P1-stage intraoral image and a dental three-dimensional digital model of a patient;
secondly, carrying out tooth-gum and tooth-tooth segmentation on the dental three-dimensional digital model based on the P1-stage patient intraoral image, obtaining a segmented dental three-dimensional digital model;
step three, acquiring a preprocessed P2-stage intraoral image of the patient;
step four, transforming the segmented dental three-dimensional digital model according to the intraoral image of the patient in the stage P2; generating a three-dimensional digital model of the jaw in the stage P2 and coordinate transformation data and morphological change data of the jaw model from the stage P1 to the stage P2;
the stage P1 refers to the pre-treatment stage, and the stage P2 refers to the post-treatment stage.
In the above method for monitoring orthodontic treatment based on the intraoral image and the three-dimensional model, the tooth-gum and tooth-tooth segmentation of the three-dimensional digital model of the jaw in the second step includes the following steps:
step A, carrying out region segmentation processing on the intraoral image in the stage P1, wherein each tooth is an independent region, and obtaining a tooth intraoral image; converting the tooth intraoral image into an intraoral image binary image, wherein the tooth in the intraoral image binary image is 1, and the rest part is 0;
b, acquiring a projection drawing of the three-dimensional digital model of the jaw on a two-dimensional plane, and converting the projection drawing into a projection binary drawing, wherein the projection coverage area in the projection binary drawing is 1, and the rest part is 0; outputting an initial value of the projection matrix;
c, traversing all projection matrixes in the fluctuation range of the initial values of the projection matrixes, calculating the correlation matching degree of the projection binary image and the intra-oral image binary image, and selecting the projection matrix under the condition of the highest matching degree to output;
d, selecting the area central point of each tooth in the intraoral image of the tooth, mapping the selected area central point into the three-dimensional digital model of the jaw according to the projection matrix information output in the step C, searching the nearest matching point, and taking the point obtained by searching as a seed point;
step E, utilizing a region growing algorithm to perform tooth-gum segmentation and tooth-tooth segmentation in the dental jaw three-dimensional digital model according to the seed points; and obtaining the segmented three-dimensional digital model.
According to the above monitoring method, the region growing algorithm uses three parameters: the curvature q, the curvature change dq, and the curvature change rate dq/dl. When all three parameters of a neighborhood point fall within the set threshold ranges, the neighborhood point is judged to belong to the same component as the seed point, is marked with the label corresponding to the seed point, and is added to the seed-point set as a new seed.
According to the above monitoring method, when region growing is performed, once all neighborhood points of a seed point have been searched and grown, that seed point is discarded and the growing process proceeds from the next seed point.
According to the above monitoring method, the region growing algorithm also sets a threshold on the number of grown points; when the number of grown points exceeds this threshold, the threshold parameters for q, dq and dq/dl are updated and growing is performed again.
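As a concrete illustration, the region-growing scheme of the preceding claims (parameters q, dq, dq/dl plus a grown-point-count threshold) can be sketched as follows. This is a toy sketch over a mesh adjacency graph, not the patent's implementation: the threshold values, the tightening factor, the label names (e.g. "tooth_11"), and the data layout are all assumptions, and the patent does not specify in which direction the thresholds are updated when the count is exceeded — the sketch tightens them and regrows.

```python
from collections import deque

def region_grow(adjacency, curvature, seeds, q_max, dq_max, dqdl_max,
                max_points, tighten=0.8):
    """Grow labeled regions from seed vertices over a mesh adjacency graph.

    adjacency: vertex -> list of (neighbor, edge_length)
    curvature: vertex -> curvature q
    seeds:     vertex -> label
    Returns a dict vertex -> label for all grown vertices.
    """
    while True:
        labels = dict(seeds)
        frontier = deque(seeds.items())
        grown, exceeded = 0, False
        while frontier and not exceeded:
            v, lab = frontier.popleft()  # a seed is discarded once its neighborhood is exhausted
            for n, dl in adjacency[v]:
                if n in labels:
                    continue
                q = curvature[n]
                dq = abs(q - curvature[v])
                dqdl = dq / dl if dl > 0 else float("inf")
                # all three parameters must fall within their threshold ranges
                if q <= q_max and dq <= dq_max and dqdl <= dqdl_max:
                    labels[n] = lab            # same component as the seed
                    frontier.append((n, lab))  # neighborhood point becomes a new seed
                    grown += 1
                    if grown > max_points:
                        exceeded = True
                        break
        if not exceeded:
            return labels
        # too many points grown: update the q, dq, dq/dl thresholds and regrow
        q_max *= tighten; dq_max *= tighten; dqdl_max *= tighten
```

On a line of five vertices where the last two have high curvature (a gum margin, say), a seed at vertex 0 grows only over the low-curvature vertices.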
In the above method for monitoring orthodontic treatment based on intraoral images and three-dimensional models, after completing the tooth-gum and tooth-tooth segmentation of step two, the method further comprises optimizing the segmentation result, the optimization comprising the following steps:
step 1, selecting an unmarked vertex;
step 2, searching the neighborhood points of the selected unmarked vertex;
step 3, calculating the label proportions among the neighborhood points, the labels comprising a gum label, tooth labels and a gap label, the tooth labels carrying tooth-number information;
step 4, judging whether the highest label proportion is greater than a threshold; if yes, proceeding to step 5, otherwise proceeding to step 6;
step 5, marking the vertex with the label of highest proportion;
step 6, judging whether any unselected unmarked vertex remains in the vertex queue, and if so, returning to step 1.
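The first optimization above amounts to a majority vote over neighborhood labels. A minimal runnable sketch, where the data layout, the threshold value, and the label names are illustrative assumptions rather than the patent's:

```python
from collections import Counter

def vote_relabel(adjacency, labels, threshold=0.6):
    """Mark each unmarked vertex with its neighbors' majority label, but only
    when the winning label's share among labeled neighbors exceeds `threshold`.

    adjacency: vertex -> list of neighbor vertices
    labels:    vertex -> label string, or None if unmarked
    """
    for v in list(labels):
        if labels[v] is not None:
            continue  # already marked
        neigh = [labels[n] for n in adjacency[v] if labels[n] is not None]
        if not neigh:
            continue  # no labeled neighbors to vote with
        lab, cnt = Counter(neigh).most_common(1)[0]
        if cnt / len(neigh) > threshold:  # highest proportion above threshold
            labels[v] = lab
    return labels
```

A vertex whose labeled neighbors split two-to-one adopts the majority label; an even split below the threshold leaves it unmarked, to be handled by the second optimization.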
In the above method for monitoring orthodontic treatment based on intraoral images and three-dimensional models, after completing the tooth-gum and tooth-tooth segmentation of step two, the method further comprises optimizing the segmentation result, the optimization comprising the following steps:
step 1, selecting an unmarked vertex;
step 2, searching for the nearest marked point to the selected unmarked vertex;
step 3, marking the vertex with the label of the found marked point; the labels comprise a gum label, tooth labels and a gap label, the tooth labels carrying tooth-number information;
step 4, judging whether any unselected unmarked vertex remains in the vertex queue, and if so, returning to step 1.
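The second optimization is a nearest-labeled-vertex fallback. A sketch with a brute-force Euclidean search; the vertex positions and label names here are assumed for illustration:

```python
import math

def nearest_relabel(positions, labels):
    """Assign each still-unmarked vertex the label of the nearest marked vertex.

    positions: vertex -> (x, y, z)
    labels:    vertex -> label string, or None if unmarked
    """
    marked = [v for v in positions if labels[v] is not None]
    for v in positions:
        if labels[v] is not None:
            continue
        # brute-force nearest marked vertex; a k-d tree would speed this up
        nearest = min(marked, key=lambda m: math.dist(positions[v], positions[m]))
        labels[v] = labels[nearest]
    return labels
```

Unlike the vote-based pass, this pass always resolves a vertex, so running it after the first pass leaves no vertex unmarked.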
In the above method for monitoring orthodontic treatment based on intraoral images and three-dimensional models, after completing the tooth-gum and tooth-tooth segmentation of step two, the method further comprises optimizing the segmentation result, the optimization comprising a first optimization and a second optimization;
the first optimization comprises the following steps:
step 1, selecting an unmarked vertex;
step 2, searching the neighborhood points of the selected unmarked vertex;
step 3, calculating the label proportions among the neighborhood points, the labels comprising a gum label, tooth labels and a gap label, the tooth labels carrying tooth-number information;
step 4, judging whether the highest label proportion is greater than a threshold; if yes, proceeding to step 5, otherwise proceeding to step 6;
step 5, marking the vertex with the label of highest proportion;
step 6, judging whether any unselected unmarked vertex remains in the vertex queue, and if so, returning to step 1;
after the first optimization is finished, performing the second optimization;
the second optimization comprises the following steps:
step 1, selecting an unmarked vertex;
step 2, searching for the nearest marked point to the selected unmarked vertex;
step 3, marking the vertex with the label of the found marked point; the labels comprise a gum label, tooth labels and a gap label, the tooth labels carrying tooth-number information;
step 4, judging whether any unselected unmarked vertex remains in the vertex queue, and if so, returning to step 1.
According to the tooth orthodontic treatment monitoring method based on the intraoral image and the three-dimensional model, in the fourth step, the segmented dentognathic three-dimensional digital model is transformed according to the intraoral image of the patient in the stage P2; generating a P2 stage dental three-dimensional digital model, comprising the steps of:
step 401, traversing the transformable postures of the segmented dental three-dimensional digital model;
step 402, projecting the three-dimensional digital model of the jaw under each posture according to a projection direction beta to obtain a projection drawing; the projection direction β is: the projection direction with the highest degree of matching of the projection drawing of the P1 stage dental three-dimensional digital model and the intra-oral image correlation of the P1 stage;
step 403, searching all the projection graphs obtained in step 402, and searching out a projection graph with the highest degree of matching with the intra-oral image correlation in the stage P2;
and step 404, outputting the segmented dental three-dimensional digital model in the posture corresponding to the projection diagram searched in the step 403 as a dental three-dimensional digital model at a stage P2, and outputting coordinate transformation data and morphological change data of the dental three-dimensional digital model from the stage P1 to the stage P2.
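Steps 401 to 404 above enumerate candidate poses, project each one, and score the projection against the P2-stage intraoral mask. The real 3D search space is large; the toy sketch below reduces it to integer 2D translations of a point set scored by intersection-over-union, purely to illustrate the traverse-project-match loop. The grid size, the IoU score, and the pose space are all illustrative assumptions, not the patent's choices.

```python
import numpy as np

def render_mask(points, shape=(16, 16)):
    """Stand-in for projecting the model in one pose onto a binary image."""
    mask = np.zeros(shape, dtype=bool)
    for x, y in points:
        if 0 <= x < shape[0] and 0 <= y < shape[1]:
            mask[int(x), int(y)] = True
    return mask

def iou(a, b):
    """Matching degree between two binary images (here: intersection over union)."""
    union = (a | b).sum()
    return (a & b).sum() / union if union else 0.0

def best_pose(model_pts, target_mask, candidate_shifts):
    """Traverse candidate poses, project each, and keep the best-matching one."""
    def score(s):
        moved = [(x + s[0], y + s[1]) for x, y in model_pts]
        return iou(render_mask(moved, target_mask.shape), target_mask)
    return max(candidate_shifts, key=score)
```

If the P1-stage model occupies pixels (4, 4) and (4, 5) while the P2 mask shows the same tooth two pixels over, the search recovers the shift (2, 0).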
According to the tooth orthodontic treatment monitoring method based on the intraoral image and the three-dimensional model, in the fourth step, the segmented dentognathic three-dimensional digital model is transformed according to the intraoral image of the patient in the stage P2; generating a P2 stage dental three-dimensional digital model, comprising the steps of:
step 401, traversing the transformable postures of the segmented dental three-dimensional digital model;
step 402, traversing the projection drawing of the three-dimensional digital model of the jaw in each posture in each projection direction;
step 403, searching all the projection graphs obtained in step 402, and searching out a projection graph with the highest degree of matching with the intra-oral image correlation in the stage P2;
and step 404, outputting the segmented dental three-dimensional digital model in the posture corresponding to the projection diagram searched in the step 403 as a dental three-dimensional digital model at a stage P2, and outputting coordinate transformation data and morphological change data of the dental three-dimensional digital model from the stage P1 to the stage P2.
In the above method for monitoring orthodontic treatment based on intraoral images and three-dimensional models, the tooth-gum and tooth-tooth segmentation of the dental three-dimensional digital model in step two comprises the following steps:
step (1), obtaining a plurality of training sample data, wherein each training sample comprises a dental three-dimensional digital model before segmentation and the corresponding model after segmentation;
step (2), training a deep neural network model or a support vector machine model with the sample data;
step (3), testing the trained deep neural network model or support vector machine model with a plurality of test sample data, wherein each test sample comprises a dental three-dimensional digital model before segmentation and the corresponding model after segmentation; if the test passes, outputting the deep neural network model or support vector machine model; if the test fails, adjusting the parameters of the model and repeating steps (1) to (3);
step (4), inputting the P1-stage dental three-dimensional digital model into the model that passed the test, and outputting the segmented dental three-dimensional digital model.
In a second aspect, the present invention provides a method for monitoring orthodontic treatment based on intraoral images and three-dimensional models, comprising the following steps:
step one, acquiring a plurality of training sample data, wherein each training sample comprises an intraoral image of the patient at stage P1, an intraoral image at stage P2, and the coordinate transformation data and morphological change data of the dental model from stage P1 to stage P2;
step two, training a deep neural network model or a support vector machine model by using sample data;
testing the trained deep neural network model or support vector machine model by using a plurality of test sample data, wherein each test sample data comprises an intraoral image of a patient at the stage P1, an intraoral image of a patient at the stage P2, and coordinate transformation data and morphological change data of the dental model from the stage P1 to the stage P2; if the test is passed, outputting a deep neural network model or a support vector machine model; if the test is not passed, adjusting parameters of the deep neural network model or the support vector machine model, and repeating the first step to the third step;
step four, acquiring the current patient's P1-stage and P2-stage intraoral images, inputting them into the deep neural network model or support vector machine model that passed the test, and outputting the coordinate transformation data and morphological change data of the current patient's dental model from stage P1 to stage P2;
step five, acquiring a dental three-dimensional digital model of the current patient at the stage P1; and according to the coordinate transformation data and the form change data of the dental model from the P1 stage to the P2 stage of the current patient, carrying out coordinate transformation and form change on the dental three-dimensional digital model from the P1 stage of the current patient, and generating and outputting the dental three-dimensional digital model from the P2 stage.
According to the method for monitoring the orthodontic treatment based on the intraoral image and the three-dimensional model, the deep neural network model or the support vector machine model is a model with an attention mechanism.
According to the tooth orthodontic treatment monitoring method based on the intraoral image and the three-dimensional model, the deep neural network model is a convolutional neural network model.
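The second-aspect pipeline (steps one to five) trains a model to map the P1/P2 image pair directly to per-tooth transform data. As a minimal runnable stand-in, the sketch below replaces the deep neural network or support vector machine with linear least squares on synthetic features; every array here (the feature size, the targets, the acceptance threshold) is fabricated purely to show the train-test-accept loop, not the patent's data or model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: one feature vector per P1/P2 image pair, and a
# target vector of per-tooth transform parameters, e.g. (dx, dy, dtheta).
X = rng.normal(size=(200, 8))
W_true = rng.normal(size=(8, 3))
Y = X @ W_true + 0.01 * rng.normal(size=(200, 3))

X_train, Y_train = X[:150], Y[:150]   # training sample data (steps one/two)
X_test, Y_test = X[150:], Y[150:]     # test sample data (step three)

# "Training" the stand-in model: least squares instead of a deep network/SVM.
W, *_ = np.linalg.lstsq(X_train, Y_train, rcond=None)

# "Testing": accept only if held-out error is small enough; otherwise one
# would adjust parameters and repeat steps one to three, as the claim states.
test_err = float(np.abs(X_test @ W - Y_test).mean())
model_accepted = test_err < 0.1

# Step four: predict the transform data for a new patient's image pair.
new_features = rng.normal(size=(1, 8))
predicted_transform = new_features @ W
```

The predicted transform would then drive step five: applying the coordinate and morphology changes to the patient's P1-stage three-dimensional model.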
Compared with the prior art, the invention has the following advantages:
1. according to the invention, the association between the intraoral image in the P1 stage and the dental three-dimensional digital model is utilized to carry out tooth-gum and tooth-tooth segmentation work on the dental three-dimensional digital model, so that the segmentation efficiency, accuracy and robustness are improved.
2. The invention transforms the segmented three-dimensional dental model according to the P2-stage intraoral image, generating the P2-stage three-dimensional dental model together with the coordinate transformation data and morphological change data of the dental model from stage P1 to stage P2. The generated P2-stage model is highly accurate and the transformation and morphology data are more precise; moreover, scanning the patient's oral cavity during actual orthodontic treatment can be avoided.
3. When performing tooth-gum and tooth-tooth segmentation, the invention performs tooth-gum segmentation first and tooth-tooth segmentation afterwards; this hierarchical segmentation strategy also helps improve the segmentation result.
4. The invention can further improve the segmentation result by optimizing the segmentation.
5. The invention combines the pre- and post-treatment intraoral images with the segmented three-dimensional dental model, can accurately calculate the coordinate transformation and morphological changes of the teeth at different stages of treatment, and finally generates the three-dimensional dental model at each stage, thereby providing intuitive, effective and easily quantified feedback and monitoring of tooth state during orthodontic treatment and assisting the completion of the orthodontic process. Combined with computer vision technology, the whole orthodontic process becomes more intelligent, humanized and convenient: it not only provides a data basis and assistance for the doctor's treatment, but also spares the patient trouble and discomfort during treatment as far as possible.
6. In the second monitoring method provided by the invention, the introduction of machine learning avoids complicated manual feature design and extraction, making the implementation more convenient and faster.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
FIG. 1 is a flow chart of a method according to an embodiment of the present invention.
FIG. 2 is a flow chart of a segmentation method according to an embodiment of the present invention.
FIG. 3 is a flowchart of a region growing parameter adaptive method according to an embodiment of the present invention.
FIG. 4 is a flowchart of an optimization method according to an embodiment of the invention.
FIG. 5 is a flowchart of an optimization method according to another embodiment of the present invention.
FIG. 6 is a flow chart of a segmentation method according to another embodiment of the present invention.
Fig. 7 is a flow chart of a method according to another embodiment of the present invention.
FIG. 8 is a schematic diagram of a convolutional neural network structure.
FIG. 9 is an exemplary diagram of a set of intraoral images provided by the present invention.
FIG. 10 is a schematic diagram of a set of initial three-dimensional digital models provided by the present invention.
FIG. 11 is a schematic diagram of a set of preprocessed three-dimensional models provided by the present invention.
FIG. 12 is a schematic diagram of a set of segmented individual tooth models (with virtual roots) provided by the present invention.
Detailed Description
As shown in fig. 1, the present invention comprises the steps of:
step one, acquiring a preprocessed P1-stage intraoral image and a dental three-dimensional digital model of a patient;
secondly, carrying out tooth-gum and tooth-tooth segmentation on the dental jaw three-dimensional digital model based on the P1-stage patient intraoral image; obtaining a segmented tooth jaw three-dimensional digital model;
step three, acquiring a preprocessed P2-stage intraoral image of the patient;
step four, transforming the segmented dental three-dimensional digital model according to the intraoral image of the patient in the stage P2; and generating a three-dimensional digital model of the jaw in the stage P2 and coordinate transformation data and morphological change data of the jaw model from the stage P1 to the stage P2.
It should be noted that the stage P1 refers to before treatment, and the stage P2 refers to after treatment.
It should be noted that intraoral image acquisition and preprocessing comprise the following steps:
step 1: opening and fixing the oral cavity with a cheek retractor;
step 2: taking images from five viewing angles of the oral cavity (upper jaw, lower jaw, left view, right view, front view) at the specified angles and distances, as shown in fig. 9, and importing the data into a computer;
step 3: preprocessing the acquired intraoral images using methods from the OpenCV library, such as filtering and morphological operations.
It should be noted that the acquisition and preprocessing of the P1-stage three-dimensional dental model comprise the following steps:
step 1: opening and fixing the oral cavity with a cheek retractor;
step 2: scanning the oral cavity with a specified scanner (for example, an iTero Element intraoral scanner), generating a three-dimensional digital model by three-dimensional imaging, storing it in STL or PLY format, and importing it into a computer, as shown in figure 10;
step 3: for the initial dental three-dimensional model obtained from the scanner, performing model cleaning with software such as Geomagic Studio or MeshLab, cutting away unnecessary parts and retaining the model data of the dental region useful for subsequent processing;
step 4: preprocessing the cleaned model by vertex-contraction simplification, smoothing and the like, and computing geometric features (such as normal vectors and curvatures); as shown in fig. 11, the final result is stored in PLY format.
It should be further noted that, as can be seen from steps one to four above, the scanner is needed only to scan the patient's oral cavity at stage P1; no scan is needed at stage P2, since the P2-stage dental three-dimensional digital model is obtained by computation. This avoids a second scan of the patient's oral cavity, reduces the patient's discomfort during orthodontic treatment, and improves the doctor's working efficiency.
As shown in fig. 2, when the tooth-gum and tooth-tooth segmentation is performed on the three-dimensional digital model of the jaw in the second step, the method comprises the following steps:
step A, carrying out region segmentation processing on the intraoral image in the stage P1, wherein each tooth is an independent region, and obtaining a tooth intraoral image; converting the tooth intraoral image into an intraoral image binary image, wherein the tooth in the intraoral image binary image is 1, and the rest part is 0;
b, acquiring a projection drawing of the three-dimensional digital model of the jaw on a two-dimensional plane, and converting the projection drawing into a projection binary drawing, wherein the projection coverage area in the projection binary drawing is 1, and the rest part is 0; outputting an initial value of the projection matrix;
c, traversing all projection matrices within the fluctuation range of the initial projection matrix, calculating the correlation matching degree between the projection binary image and the intraoral-image binary image, and outputting the projection matrix with the highest matching degree. (The correlation-matching evaluation function can be designed from per-pixel differences and overall geometric features: the matching degree is computed between the projection binary image and the intraoral binary image, and the optimal matching relationship between the two binary images, i.e. the correspondence of matching elements such as pixels, is found through the optimization of the correlation matching.)
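The note above leaves the evaluation function open. One simple instance, offered as an assumption rather than the patent's choice, scores two binary images by the fraction of agreeing pixels and picks the best candidate from a set of pre-rendered projections:

```python
import numpy as np

def match_degree(proj_bin, intra_bin):
    """Fraction of pixels on which the projection binary image and the
    intraoral binary image agree: one possible correlation matching degree."""
    return float((proj_bin == intra_bin).mean())

def best_projection(candidate_proj_bins, intra_bin):
    """Step C: traverse projections within the fluctuation range of the
    initial projection matrix (here already rendered as binary masks) and
    return the index of the best-matching one."""
    return max(range(len(candidate_proj_bins)),
               key=lambda i: match_degree(candidate_proj_bins[i], intra_bin))
```

In practice each candidate mask would come from re-projecting the model with a perturbed projection matrix; here they are supplied directly to keep the sketch self-contained.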
D, selecting the area central point of each tooth in the intraoral image of the tooth, mapping the selected area central point into the three-dimensional digital model of the jaw according to the projection matrix information output in the step C, searching the nearest matching point, and taking the point obtained by searching as a seed point;
step E, utilizing a region growing algorithm to perform tooth-gum segmentation and tooth-tooth segmentation in the dental jaw three-dimensional digital model according to the seed points; and obtaining the segmented three-dimensional digital model.
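Steps A–C above can be sketched with binary masks. The following minimal illustration (not the patent's actual implementation) takes the "correlation matching degree" to be intersection-over-union and reduces the "fluctuation range of the projection matrix" to small integer pixel shifts; the names `match_score` and `best_shift` are illustrative.

```python
import numpy as np

def match_score(proj_bin, intra_bin):
    # Correlation matching degree, here: intersection-over-union of the two masks
    inter = np.logical_and(proj_bin, intra_bin).sum()
    union = np.logical_or(proj_bin, intra_bin).sum()
    return inter / union if union else 0.0

def best_shift(proj_bin, intra_bin, radius=2):
    # Traverse a small "fluctuation range" around the initial projection,
    # reduced here to integer pixel shifts, and keep the best-scoring one
    best, best_s = (0, 0), -1.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(np.roll(proj_bin, dy, axis=0), dx, axis=1)
            s = match_score(shifted, intra_bin)
            if s > best_s:
                best, best_s = (dy, dx), s
    return best, best_s

intra = np.zeros((16, 16), dtype=bool)
intra[4:10, 4:10] = True            # "tooth" region of the intraoral binary image
proj = np.roll(intra, 1, axis=1)    # projection misaligned by one pixel
shift, score = best_shift(proj, intra)
print(shift, score)                 # the best shift undoes the misalignment
```

A real implementation would search over full projection-matrix parameters rather than pixel shifts, but the optimize-then-pick-the-best-match structure is the same.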
In this embodiment, the region growing algorithm uses three parameters: the curvature q, the curvature change dq, and the curvature change rate dq/dl. When all three parameters of a neighborhood point fall within the set threshold ranges, the neighborhood point is judged to belong to the same component as the seed point; it is marked with the corresponding label and becomes a new seed point.
It should be noted that during region growing, once all neighborhood points of a seed point have been searched and grown, that seed point is discarded and growing proceeds from the next seed point. The growing condition can be judged on q, dq and dq/dl independently, or jointly, by designing a joint evaluation function that combines the three and balances them through weights. When selecting neighborhood points, the information of the three-dimensional geometric model is fully exploited to find adjacent points that lie on the same surface as the seed point.
As shown in fig. 3, the region growing algorithm in this embodiment also sets a threshold on the number of grown points; when that number exceeds the threshold, the threshold parameters for q, dq and dq/dl are updated and growing restarts.
It should be noted that, to adapt to differences between models and reduce manual tuning, the threshold on the number of grown points exploits the fact that over-growth (leaking across a region boundary) sharply increases the number of grown points, so that all seed points can ultimately finish growing correctly.
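The region growing described above can be sketched as follows: a simplified 2-D grid stands in for the triangle-mesh neighborhood, and only q and the change dq are checked (the dq/dl term and the regrow-on-overflow rule are omitted); all names and threshold values are illustrative.

```python
import numpy as np
from collections import deque

def region_grow(curv, seed, q_max=0.2, dq_max=0.15):
    # Grow from the seed; a neighbour joins when its curvature q and the
    # curvature change dq relative to the current point fall inside thresholds.
    h, w = curv.shape
    labels = np.zeros((h, w), dtype=bool)
    labels[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()   # a seed is discarded once its neighbours are grown
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not labels[ny, nx]:
                q = curv[ny, nx]
                dq = abs(curv[ny, nx] - curv[y, x])
                if q < q_max and dq < dq_max:
                    labels[ny, nx] = True     # mark and take as a new seed point
                    queue.append((ny, nx))
    return labels

curv = np.full((5, 5), 0.05)
curv[:, 2] = 0.9                  # high-curvature "boundary" column stops the growth
grown = region_grow(curv, (2, 0))
print(int(grown.sum()))           # only the two columns left of the boundary grow
```

On a mesh, the neighbourhood would be the vertices sharing an edge or triangle with the current point rather than the four grid neighbours used here.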
In practice, after tooth-gum and tooth-tooth segmentation is completed, the result may exhibit unsmooth boundaries, wrong segmentation, or missed segmentation near region boundaries, on occlusal surfaces (where curvature changes sharply), and around outliers and noise points, so the segmentation result needs further optimization.
This embodiment therefore provides two optimization methods that exploit neighborhood label information in the three-dimensional model.
As shown in fig. 4, the first segmentation optimization method includes the following steps:
Step 1: select an unmarked vertex;
Step 2: search for the neighborhood points of the selected unmarked vertex;
Step 3: compute the label proportions among the neighborhood points, where the labels comprise gum labels, tooth labels and gap labels, and each tooth label carries the tooth number;
Step 4: judge whether the proportion of the most frequent label exceeds a threshold; if yes, go to Step 5, otherwise go to Step 6;
Step 5: mark the vertex with the most frequent label;
Step 6: judge whether the vertex queue still contains unselected, unmarked vertices; if so, return to Step 1.
To illustrate the first segmentation optimization method: suppose an unmarked vertex α is selected whose neighborhood labels are 10% gum labels, 80% tooth labels and 10% gap labels; among the tooth labels, the No. 8 tooth label accounts for 80% and the other tooth labels for 20%. The No. 8 tooth label thus has the highest proportion, which exceeds the threshold of 20%, so vertex α is marked with the No. 8 tooth label.
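The voting rule of the first optimization method can be sketched as follows (illustrative names; the 20% threshold follows the worked example above, and the neighborhood is simplified so the dominant label is the No. 8 tooth outright):

```python
from collections import Counter

def vote_label(neigh_labels, threshold=0.2):
    # First optimization: give an unmarked vertex the dominant neighbourhood label
    # when its proportion exceeds the threshold; otherwise leave it unmarked.
    counts = Counter(neigh_labels)
    label, n = counts.most_common(1)[0]
    return label if n / len(neigh_labels) > threshold else None

# Simplified neighbourhood: 1 gum point, 8 points of tooth No. 8, 1 gap point
neigh = ["gum"] + ["tooth_8"] * 8 + ["gap"]
print(vote_label(neigh))   # vertex alpha gets the No. 8 tooth label
```

Returning `None` below the threshold corresponds to leaving the vertex unmarked so that a later pass (or the second optimization method) can handle it.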
As shown in fig. 5, the second segmentation optimization method includes the following steps:
Step 1: select an unmarked vertex;
Step 2: search for the marked point nearest to the selected unmarked vertex;
Step 3: mark the vertex with the label of the found marked point; the labels comprise gum labels, tooth labels and gap labels, and each tooth label carries the tooth number;
Step 4: judge whether the vertex queue still contains unselected, unmarked vertices; if so, return to Step 1.
It should be noted that the segmentation result can be further optimized using the first optimization method alone, the second alone, or the second after the first has finished. The first optimization method may be iterated multiple times; the second is then applied once after those iterations are complete.
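A minimal sketch of the second optimization method's nearest-marked-point rule (coordinates and labels are illustrative; a real implementation would use a spatial index over the mesh vertices rather than a linear scan):

```python
def nearest_label(unmarked, marked):
    # Second optimization: copy the label of the nearest already-marked vertex.
    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    _, label = min(marked.items(), key=lambda kv: d2(kv[0], unmarked))
    return label

# marked vertices: position -> label (tooth labels carry the tooth number)
marked = {
    (0.0, 0.0, 0.0): "gum",
    (1.0, 0.0, 0.0): "tooth_3",
    (5.0, 0.0, 0.0): "gap",
}
print(nearest_label((0.9, 0.1, 0.0), marked))   # closest marked point wins
```

Unlike the voting rule, this always assigns a label, which is why it is suited to a single final pass after the voting iterations.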
As shown in fig. 12, in this embodiment, after optimization of the segmentation result is completed, the three-dimensional digital models of the individual teeth and a virtual three-dimensional digital model of the gum are obtained from the segmented three-dimensional digital model of the jaw.
In this embodiment, step four transforms the segmented three-dimensional digital model of the jaw according to the patient's stage-P2 intraoral image to generate the stage-P2 three-dimensional digital model of the jaw, comprising the following steps:
Step 401: traverse the transformable postures of the segmented three-dimensional digital model of the jaw;
Step 402: project the three-dimensional digital model of the jaw in each posture along a projection direction β to obtain a projection image, where β is the projection direction in which the projection of the stage-P1 three-dimensional digital model of the jaw has the highest correlation matching degree with the stage-P1 intraoral image;
Step 403: search all projection images obtained in step 402 for the one with the highest correlation matching degree with the stage-P2 intraoral image;
Step 404: output the segmented three-dimensional digital model of the jaw in the posture corresponding to the projection image found in step 403 as the stage-P2 three-dimensional digital model of the jaw, together with the coordinate transformation data and morphological change data of the model from stage P1 to stage P2.
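The pose search of steps 401–404 can be sketched in one dimension: candidate transforms are traversed, each "posture" is projected along a fixed direction (standing in for β), and the transform whose projection best overlaps the stage-P2 silhouette is kept. The 1-D projection, the integer shift range, and all names are illustrative simplifications.

```python
import numpy as np

def project(points, width=20):
    # 1-D stand-in for projecting along direction beta: tooth centres -> binary strip
    strip = np.zeros(width, dtype=bool)
    strip[np.clip(np.round(points).astype(int), 0, width - 1)] = True
    return strip

def fit_pose(p1_points, p2_strip, shifts=range(-3, 4)):
    # Traverse candidate transforms of the segmented model and keep the one
    # whose projection has the highest overlap with the stage-P2 silhouette
    best_shift, best_score = None, -1
    for s in shifts:
        score = np.logical_and(project(p1_points + s), p2_strip).sum()
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

p1 = np.array([4.0, 8.0, 12.0])   # tooth centres at stage P1
p2_strip = project(p1 + 2.0)      # observed stage-P2 silhouette: teeth moved by +2
print(fit_pose(p1, p2_strip))     # the search recovers the applied transform
```

In the patent's setting the traversal runs over per-tooth rigid transforms and full 2-D projections, but the search structure is the same: score every candidate posture, output the argmax.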
In another embodiment, step four transforms the segmented three-dimensional digital model of the jaw according to the patient's stage-P2 intraoral image to generate the stage-P2 three-dimensional digital model of the jaw, comprising the following steps:
Step 401: traverse the transformable postures of the segmented three-dimensional digital model of the jaw;
Step 402: traverse the projection images of the three-dimensional digital model of the jaw in each posture along each projection direction;
Step 403: search all projection images obtained in step 402 for the one with the highest correlation matching degree with the stage-P2 intraoral image;
Step 404: output the segmented three-dimensional digital model of the jaw in the posture corresponding to the projection image found in step 403 as the stage-P2 three-dimensional digital model of the jaw, together with the coordinate transformation data and morphological change data of the model from stage P1 to stage P2.
In another embodiment of the invention, the tooth-gum and tooth-tooth segmentation of the three-dimensional digital model of the jaw in step two uses a machine-learning-based method, such as a support vector machine or a deep neural network. Specifically, as shown in fig. 6, the method comprises the following steps:
Step 1: obtain multiple training samples, each comprising a three-dimensional digital model of the jaw before segmentation and the corresponding segmented model; the model before segmentation is composed of geometric data, and the segmented model is formed by labeling that geometric data.
Step 2: train a deep neural network model or a support vector machine model with the sample data.
Step 3: test the trained deep neural network model or support vector machine model with multiple test samples, each comprising a three-dimensional digital model of the jaw before segmentation and the corresponding segmented model; if the test passes, output the model; if the test fails, adjust the model parameters and repeat steps 1 to 3.
Step 4: input the stage-P1 three-dimensional digital model of the jaw into the model that passed the test, and output the segmented three-dimensional digital model of the jaw.
It should be noted that when the tooth-gum and tooth-tooth segmentation adopts a machine-learning-based method, tooth-gum segmentation and tooth-tooth segmentation are trained separately when training the deep neural network model or support vector machine model. The basic data units of the three-dimensional digital model of the jaw are geometric elements such as edges and triangular faces, which fully exploits the characteristics of the three-dimensional model and avoids complicated manual feature design and extraction.
In another embodiment, as shown in fig. 7, the method comprises the following steps:
Step one: obtain multiple training samples, each comprising a patient's stage-P1 intraoral image, the patient's stage-P2 intraoral image, and the coordinate transformation data and morphological change data of the dental model from stage P1 to stage P2;
Step two: train a deep neural network model or a support vector machine model with the sample data;
Step three: test the trained deep neural network model or support vector machine model with multiple test samples of the same composition as the training samples; if the test passes, output the model; if the test fails, adjust the model parameters and repeat steps one to three;
Step four: acquire the current patient's stage-P1 and stage-P2 intraoral images, input them into the model that passed the test, and output the current patient's coordinate transformation data and morphological change data of the dental model from stage P1 to stage P2;
Step five: acquire the current patient's stage-P1 three-dimensional digital model of the jaw, apply to it the coordinate transformation and morphological change data of the dental model from stage P1 to stage P2, and generate and output the stage-P2 three-dimensional digital model of the jaw.
It should be noted that the deep neural network model or support vector machine model is a model with an attention mechanism.
In practice, the three-dimensional models of the jaw during the orthodontic treatment process are computed and generated from the segmented three-dimensional digital model of the jaw and the coordinate transformations and morphological changes of the teeth at the different treatment stages, and the model for each stage is stored in STL or PLY format.
In this embodiment, the deep neural network model is a convolutional neural network model. Fig. 8 is a schematic diagram of a convolutional neural network structure.
Those skilled in the art will appreciate that, in addition to implementing the systems, apparatus, and various modules thereof provided by the present invention in purely computer readable program code, the same procedures can be implemented entirely by logically programming method steps such that the systems, apparatus, and various modules thereof are provided in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system, the device and the modules thereof provided by the present invention can be considered as a hardware component, and the modules included in the system, the device and the modules thereof for implementing various programs can also be considered as structures in the hardware component; modules for performing various functions may also be considered to be both software programs for performing the methods and structures within hardware components.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (14)

1. A method for monitoring orthodontic treatment based on intraoral images and three-dimensional models is characterized by comprising the following steps:
step one, acquiring the preprocessed stage-P1 intraoral image and three-dimensional digital model of the jaw of a patient;
step two, performing tooth-gum and tooth-tooth segmentation on the three-dimensional digital model of the jaw based on the stage-P1 patient intraoral image, obtaining a segmented three-dimensional digital model of the jaw;
step three, acquiring the preprocessed stage-P2 patient intraoral image;
step four, transforming the segmented three-dimensional digital model of the jaw according to the stage-P2 patient intraoral image, generating the stage-P2 three-dimensional digital model of the jaw and the coordinate transformation data and morphological change data of the jaw model from stage P1 to stage P2;
wherein stage P1 refers to the pre-treatment stage and stage P2 refers to the post-treatment stage.
2. The method for monitoring orthodontic treatment based on intraoral images and three-dimensional models according to claim 1, wherein the tooth-gum and tooth-tooth segmentation of the three-dimensional digital model of the jaw in the second step comprises the following steps:
step A, performing region segmentation on the stage-P1 intraoral image so that each tooth forms an independent region, obtaining a tooth intraoral image; converting the tooth intraoral image into an intraoral binary image in which tooth pixels are 1 and all other pixels are 0;
step B, obtaining a projection of the three-dimensional digital model of the jaw onto a two-dimensional plane and converting it into a projection binary image in which the projected area is 1 and the rest is 0; outputting an initial value of the projection matrix;
step C, traversing all projection matrices within a fluctuation range around the initial value, computing the correlation matching degree between the projection binary image and the intraoral binary image, and outputting the projection matrix with the highest matching degree;
step D, selecting the region center point of each tooth in the tooth intraoral image, mapping the selected center points into the three-dimensional digital model of the jaw using the projection matrix output in step C, searching for the nearest matching points, and taking the found points as seed points;
step E, performing tooth-gum and tooth-tooth segmentation of the three-dimensional digital model of the jaw from the seed points using a region growing algorithm, obtaining the segmented three-dimensional digital model.
3. A method of monitoring orthodontic treatment based on intraoral images and three-dimensional models according to claim 2, characterized in that: the region growing algorithm uses three parameters, namely the curvature q, the curvature change dq and the curvature change rate dq/dl; when all three parameters of a neighborhood point fall within the set threshold ranges, the neighborhood point is judged to belong to the same component as the seed point, is marked with the label corresponding to the seed point, and is taken as a new seed point.
4. A method of monitoring orthodontic treatment based on intraoral images and three-dimensional models according to claim 3, characterized in that: during region growing with the region growing algorithm, once all neighborhood points of a seed point have been searched and grown, that seed point is discarded and the growing process proceeds from the next seed point.
5. A method for monitoring orthodontic treatment based on intraoral images and three-dimensional models according to claim 3 or 4, characterized in that: in the region growing algorithm, a threshold value of the number of growing points is also set, and when the number of growing points exceeds the threshold value, threshold parameters of q, dq and dq/dl are updated to grow again.
6. A method for monitoring orthodontic treatment based on intraoral images and three-dimensional models according to claim 1, characterized in that: in the second step, after the tooth-gum and tooth-tooth segmentation is completed, optimization processing is further performed on the segmentation result, and the optimization processing comprises the following steps:
step 1, selecting an unmarked vertex;
step 2, searching for the neighborhood points of the selected unmarked vertex;
step 3, computing the label proportions among the neighborhood points, the labels comprising gum labels, tooth labels and gap labels, each tooth label carrying the tooth number;
step 4, judging whether the proportion of the most frequent label exceeds a threshold; if yes, going to step 5, otherwise going to step 6;
step 5, marking the vertex with the most frequent label;
step 6, judging whether the vertex queue still contains unselected, unmarked vertices, and if so, returning to step 1.
7. A method for monitoring orthodontic treatment based on intraoral images and three-dimensional models according to claim 1, characterized in that: in the second step, after the tooth-gum and tooth-tooth segmentation is completed, optimization processing is further performed on the segmentation result, and the optimization processing comprises the following steps:
step 1, selecting an unmarked vertex;
step 2, searching for the marked point nearest to the selected unmarked vertex;
step 3, marking the vertex with the label of the found marked point, the labels comprising gum labels, tooth labels and gap labels, each tooth label carrying the tooth number;
step 4, judging whether the vertex queue still contains unselected, unmarked vertices, and if so, returning to step 1.
8. A method for monitoring orthodontic treatment based on intraoral images and three-dimensional models according to claim 1, characterized in that: after the tooth-gum and tooth-tooth segmentation is completed in step two, the segmentation result is optimized, the optimization comprising a first optimization and a second optimization;
the first optimization comprises the following steps:
step 1, selecting an unmarked vertex;
step 2, searching for the neighborhood points of the selected unmarked vertex;
step 3, computing the label proportions among the neighborhood points, the labels comprising gum labels, tooth labels and gap labels, each tooth label carrying the tooth number;
step 4, judging whether the proportion of the most frequent label exceeds a threshold; if yes, going to step 5, otherwise going to step 6;
step 5, marking the vertex with the most frequent label;
step 6, judging whether the vertex queue still contains unselected, unmarked vertices, and if so, returning to step 1;
after the first optimization is finished, performing the second optimization;
the second optimization comprises the following steps:
step 1, selecting an unmarked vertex;
step 2, searching for the marked point nearest to the selected unmarked vertex;
step 3, marking the vertex with the label of the found marked point, the labels comprising gum labels, tooth labels and gap labels, each tooth label carrying the tooth number;
step 4, judging whether the vertex queue still contains unselected, unmarked vertices, and if so, returning to step 1.
9. A method for monitoring orthodontic treatment based on intraoral images and three-dimensional models according to claim 1, characterized in that: in the fourth step, the segmented three-dimensional digital model of the jaw is transformed according to the intraoral image of the patient in the stage P2; generating a P2 stage dental three-dimensional digital model, comprising the steps of:
step 401, traversing the transformable postures of the segmented three-dimensional digital model of the jaw;
step 402, projecting the three-dimensional digital model of the jaw in each posture along a projection direction β to obtain a projection image, where β is the projection direction in which the projection of the stage-P1 three-dimensional digital model of the jaw has the highest correlation matching degree with the stage-P1 intraoral image;
step 403, searching all projection images obtained in step 402 for the one with the highest correlation matching degree with the stage-P2 intraoral image;
step 404, outputting the segmented three-dimensional digital model of the jaw in the posture corresponding to the projection image found in step 403 as the stage-P2 three-dimensional digital model of the jaw, and outputting the coordinate transformation data and morphological change data of the model from stage P1 to stage P2.
10. A method for monitoring orthodontic treatment based on intraoral images and three-dimensional models according to claim 1, characterized in that: in the fourth step, the segmented three-dimensional digital model of the jaw is transformed according to the intraoral image of the patient in the stage P2; generating a P2 stage dental three-dimensional digital model, comprising the steps of:
step 401, traversing the transformable postures of the segmented three-dimensional digital model of the jaw;
step 402, traversing the projection images of the three-dimensional digital model of the jaw in each posture along each projection direction;
step 403, searching all projection images obtained in step 402 for the one with the highest correlation matching degree with the stage-P2 intraoral image;
step 404, outputting the segmented three-dimensional digital model of the jaw in the posture corresponding to the projection image found in step 403 as the stage-P2 three-dimensional digital model of the jaw, and outputting the coordinate transformation data and morphological change data of the model from stage P1 to stage P2.
11. A method for monitoring orthodontic treatment based on intraoral images and three-dimensional models according to claim 1, characterized in that: in step two, the tooth-gum and tooth-tooth segmentation of the three-dimensional digital model of the jaw comprises the following steps:
step 1, obtaining a plurality of training samples, each comprising a three-dimensional digital model of the jaw before segmentation and the corresponding segmented model;
step 2, training a deep neural network model or a support vector machine model with the sample data;
step 3, testing the trained deep neural network model or support vector machine model with a plurality of test samples, each comprising a three-dimensional digital model of the jaw before segmentation and the corresponding segmented model; if the test passes, outputting the model; if the test fails, adjusting the model parameters and repeating steps 1 to 3;
step 4, inputting the stage-P1 three-dimensional digital model of the jaw into the model that passed the test, and outputting the segmented three-dimensional digital model of the jaw.
12. A method for monitoring orthodontic treatment based on intraoral images and three-dimensional models, comprising the steps of:
step one, obtaining a plurality of training samples, each comprising a patient's stage-P1 intraoral image, the patient's stage-P2 intraoral image, and the coordinate transformation data and morphological change data of the dental model from stage P1 to stage P2;
step two, training a deep neural network model or a support vector machine model with the sample data;
step three, testing the trained deep neural network model or support vector machine model with a plurality of test samples, each comprising a patient's stage-P1 intraoral image, the patient's stage-P2 intraoral image, and the coordinate transformation data and morphological change data of the dental model from stage P1 to stage P2; if the test passes, outputting the model; if the test fails, adjusting the model parameters and repeating steps one to three;
step four, acquiring the current patient's stage-P1 and stage-P2 intraoral images, inputting them into the model that passed the test, and outputting the current patient's coordinate transformation data and morphological change data of the dental model from stage P1 to stage P2;
step five, acquiring the current patient's stage-P1 three-dimensional digital model of the jaw, performing the coordinate transformation and morphological change on it according to the current patient's data from stage P1 to stage P2, and generating and outputting the stage-P2 three-dimensional digital model of the jaw.
13. The method for monitoring orthodontic treatment based on intraoral images and three-dimensional models according to claim 12, characterized in that: the deep neural network model or the support vector machine model is a model with an attention mechanism.
14. The method for monitoring orthodontic treatment based on intraoral images and three-dimensional models according to claim 13, characterized in that: the deep neural network model is a convolutional neural network model.
CN202010533803.9A 2020-06-12 2020-06-12 Dental orthodontic treatment monitoring method based on intraoral images and three-dimensional models Pending CN111685899A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010533803.9A CN111685899A (en) 2020-06-12 2020-06-12 Dental orthodontic treatment monitoring method based on intraoral images and three-dimensional models


Publications (1)

Publication Number Publication Date
CN111685899A true CN111685899A (en) 2020-09-22

Family

ID=72480633



Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112807108A (en) * 2021-01-27 2021-05-18 清华大学 Method for detecting tooth correction state in orthodontic correction process
CN113712587A (en) * 2021-09-06 2021-11-30 吉林大学 Invisible orthodontic progress monitoring method, system and device based on oral scanning model
CN114491700A (en) * 2022-02-15 2022-05-13 杭州雅智医疗技术有限公司 Display coordinate system calculation method and device of three-dimensional tooth model and application
CN115983082A (en) * 2023-03-20 2023-04-18 佛山科学技术学院 Tooth model generation method for predicting orthodontic treatment
CN116168185A (en) * 2022-12-02 2023-05-26 广州黑格智造信息科技有限公司 Three-dimensional tooth model segmentation method and device
CN116649995A (en) * 2023-07-25 2023-08-29 杭州脉流科技有限公司 Method and device for acquiring hemodynamic parameters based on intracranial medical image
CN116712193A (en) * 2023-06-19 2023-09-08 佛山科学技术学院 Treatment course prediction method for orthodontic treatment
CN117095145A (en) * 2023-10-20 2023-11-21 福建理工大学 Training method and terminal of tooth grid segmentation model
CN117315161A (en) * 2023-10-31 2023-12-29 广州穗华口腔门诊部有限公司 Image acquisition and processing system for digital tooth model

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170128163A1 (en) * 2015-11-09 2017-05-11 Naif Bindayel Orthodontic systems
CN109363786A * 2018-11-06 2019-02-22 Shanghai Yadian Software Technology Co., Ltd. Orthodontic correction data acquisition method and device
EP3461457A1 (en) * 2017-09-28 2019-04-03 Otmar Kronenberg AG Sensor and system for monitoring the wearing period of orthodontic elastic traction devices

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112807108B * 2021-01-27 2022-03-01 Tsinghua University Method for detecting tooth correction state in orthodontic correction process
CN112807108A * 2021-01-27 2021-05-18 Tsinghua University Method for detecting tooth correction state in orthodontic correction process
CN113712587A * 2021-09-06 2021-11-30 Jilin University Invisible orthodontic progress monitoring method, system and device based on oral scanning model
CN113712587B * 2021-09-06 2023-07-18 Jilin University Invisible orthodontic progress monitoring method, system and device based on oral scanning model
CN114491700A * 2022-02-15 2022-05-13 Hangzhou Yazhi Medical Technology Co., Ltd. Display coordinate system calculation method and device of three-dimensional tooth model and application
CN116168185A * 2022-12-02 2023-05-26 Guangzhou HeyGears Information Technology Co., Ltd. Three-dimensional tooth model segmentation method and device
CN115983082A * 2023-03-20 2023-04-18 Foshan University Tooth model generation method for predicting orthodontic treatment
CN115983082B * 2023-03-20 2023-05-23 Foshan University Method for generating tooth model after predictive orthodontic treatment
CN116712193B * 2023-06-19 2024-01-23 Foshan University Treatment course prediction method for orthodontic treatment
CN116712193A * 2023-06-19 2023-09-08 Foshan University Treatment course prediction method for orthodontic treatment
CN116649995A * 2023-07-25 2023-08-29 Hangzhou ArteryFlow Technology Co., Ltd. Method and device for acquiring hemodynamic parameters based on intracranial medical image
CN116649995B * 2023-07-25 2023-10-27 Hangzhou ArteryFlow Technology Co., Ltd. Method and device for acquiring hemodynamic parameters based on intracranial medical image
CN117095145A * 2023-10-20 2023-11-21 Fujian University of Technology Training method and terminal of tooth mesh segmentation model
CN117095145B * 2023-10-20 2023-12-19 Fujian University of Technology Training method and terminal of tooth mesh segmentation model
CN117315161A * 2023-10-31 2023-12-29 Guangzhou Suihua Dental Clinic Co., Ltd. Image acquisition and processing system for digital tooth model
CN117315161B * 2023-10-31 2024-03-29 Guangzhou Suihua Dental Clinic Co., Ltd. Image acquisition and processing system for digital tooth model

Similar Documents

Publication Title
CN111685899A (en) Dental orthodontic treatment monitoring method based on intraoral images and three-dimensional models
US11232573B2 (en) Artificially intelligent systems to manage virtual dental models using dental images
CN109310488B (en) Method for estimating at least one of shape, position and orientation of a dental restoration
US11735306B2 (en) Method, system and computer readable storage media for creating three-dimensional dental restorations from two dimensional sketches
CN109785374B (en) Automatic real-time unmarked image registration method for navigation of dental augmented reality operation
US11464467B2 (en) Automated tooth localization, enumeration, and diagnostic system and method
US9191648B2 (en) Hybrid stitching
US20190148005A1 (en) Method and system of teeth alignment based on simulating of crown and root movement
CN111862171B (en) CBCT and laser scanning point cloud data tooth registration method based on multi-view fusion
US11443423B2 (en) System and method for constructing elements of interest (EoI)-focused panoramas of an oral complex
US20220084267A1 (en) Systems and Methods for Generating Quick-Glance Interactive Diagnostic Reports
JP2023552589A (en) Automatic processing of dental scans using geometric deep learning
CN111265317B (en) Tooth orthodontic process prediction method
CN115619773A (en) Three-dimensional tooth multi-mode data registration method and system
CN116958169A (en) Tooth segmentation method for three-dimensional dental model
WO2023194500A1 (en) Tooth position determination and generation of 2d reslice images with an artificial neural network
Hao et al. AI-enabled automatic multimodal fusion of cone-beam CT and intraoral scans for intelligent 3D tooth-bone reconstruction and clinical applications
Hosseinimanesh et al. Improving the quality of dental crown using a transformer-based method
CN115830287B (en) Tooth point cloud fusion method, device and medium based on laser mouth scanning and CBCT reconstruction
EP4307229A1 (en) Method and system for tooth pose estimation
US20230419631A1 (en) Guided Implant Surgery Planning System and Method
US20230298272A1 (en) System and Method for an Automated Surgical Guide Design (SGD)
KR20230041560A (en) Inferior alveolar nerve inference apparatus and method through artificial neural network learning
CN116749522A (en) 3D printing system and method for orthodontic correction tool
Dhar et al. A Deep Learning Approach to Teeth Segmentation and Orientation from Panoramic X-rays

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 2020-09-22)