WO2020133180A1 - An artificial intelligence-based orthodontic method and device - Google Patents


Info

Publication number
WO2020133180A1
Authority
WO
WIPO (PCT)
Prior art keywords
orthodontic
image data
generator
tooth
oral
Prior art date
Application number
PCT/CN2018/124746
Other languages
English (en)
French (fr)
Inventor
田烨
李鹏
周迪曦
Original Assignee
上海牙典软件科技有限公司
Priority date
Filing date
Publication date
Application filed by 上海牙典软件科技有限公司
Priority to PCT/CN2018/124746
Publication of WO2020133180A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C: DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C7/00: Orthodontics, i.e. obtaining or maintaining the desired position of teeth, e.g. by straightening, evening, regulating, separating, or by correcting malocclusions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00: ICT specially adapted for the handling or processing of medical images

Definitions

  • the invention relates to the technical field of orthodontics, in particular to an orthodontic method and device based on artificial intelligence.
  • Oral diseases are common and frequently occurring. According to statistics from the World Health Organization, malocclusion has become one of the three major oral diseases (caries, periodontal disease, and malocclusion). Dental deformities have a great influence on oral health, oral function, the development of the maxillofacial bones, and appearance, so orthodontics has long been regarded as an indispensable part of oral health treatment.
  • Orthodontics fixes crooked or misaligned teeth using fixed appliances composed of archwires, brackets, and the like, or removable invisible appliances such as clear aligners, to apply three-dimensional orthodontic forces and torques to the teeth and adjust the balance and coordination among the facial bones, the teeth, and the maxillofacial muscles; after a period of correction, this improves the face shape, aligns the dentition, and improves chewing efficiency.
  • Traditional orthodontic treatment mainly relies on the experience of doctors to formulate correction plans.
  • A manual tooth arrangement (setup) experiment can help the orthodontist predict the course of orthodontic treatment and inform the patient about how the teeth will move and the final treatment effect.
  • The main disadvantages of the manual tooth arrangement process are that each tooth must be manipulated separately, the degree of automation and the efficiency of tooth arrangement are low, and it consumes a large amount of material.
  • The result is also hard to observe, making it difficult for patients to clearly understand the effect of orthodontics.
  • The manual orthodontic solutions of the prior art are limited by the professional skill of individual doctors, so the effect of the orthodontic solution is difficult to guarantee; the solution cannot be displayed to the user as a dynamic image, which does not help the user understand the orthodontic process.
  • the present invention provides an orthodontic method and device based on artificial intelligence, which specifically includes the following:
  • An orthodontic method based on artificial intelligence including:
  • the tooth area on each frame of the image is enclosed in the form of an annotation; each enclosed area is marked with the corresponding tooth number, and the non-tooth area is set to 0;
  • the appliance is made based on the dentition model.
  • if the error between the obtained three-dimensional digital model of the teeth and the real crown surface meets the preset precision requirement, the model is used directly to build the printable dentition model; otherwise, a three-dimensional impression is taken to obtain a crown surface model, which is then used to obtain the printable dentition model.
  • the orthodontic solution includes the stages into which the orthodontic process is divided,
  • each orthodontic stage includes one or more treatment measures.
  • the entire orthodontic treatment plan is represented by a coding sequence; one coding element in the sequence is a treatment measure, and each coding element contains 16 digits corresponding to the 16 teeth of a single jaw, each digit indicating the treatment applied to its corresponding tooth.
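The 16-digit per-jaw coding described above can be sketched in a few lines. The digit-to-operation mapping below is taken from the examples given later in this specification (2 = align along the arch, 6 = push molar back, 7 = retract, 8 = adduct anterior teeth, 9 = fine-tune); the function name and dictionary layout are illustrative, not part of the patent:

```python
# Sketch of the 16-digit per-jaw stage coding described above.
# Operation digits follow the examples later in this specification;
# names and helpers here are illustrative assumptions.
OPS = {
    "0": "no movement",
    "2": "align along dental arch",
    "6": "push molar back",
    "7": "retract posterior teeth",
    "8": "adduct anterior teeth",
    "9": "fine-tune alignment",
}

def decode_stage(code: str) -> dict:
    """Map a 16-digit stage code to {tooth_number: operation}."""
    assert len(code) == 16 and code.isdigit(), "one digit per tooth of a single jaw"
    return {i + 1: OPS.get(d, "unknown") for i, d in enumerate(code) if d != "0"}

stage = decode_stage("0660000000000660")  # second-stage example from the text
# teeth 2, 3, 14 and 15 are assigned "push molar back"
```

Decoding the codes quoted later in the text (e.g. `0062220000222770`) with this helper recovers exactly the per-tooth operations the examples describe.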
  • a preset machine learning model is used for training to obtain a second generator;
  • the preset machine learning model includes a neural network machine learning model with two convolutional layers, two pooling layers, two fully connected layers, and one output layer;
  • the training set of the second generator includes two parts: the first part is labeled oral CT image data before orthodontics, and the second part is the orthodontic solution in coded form corresponding to that labeled data. During training, the model parameters of the preset machine learning model are adjusted until it can output a reasonable orthodontic solution expressed in coded form for any labeled oral CT image data before orthodontics.
  • it also includes:
  • the three-dimensional digital model of the tooth and the orthodontic solution characterized in coded form are input to a third generator so as to obtain a prediction result of the orthodontic solution, and the prediction result is displayed in the form of animation.
  • the training method of the third generator includes:
  • the training data includes a three-dimensional digital model of the tooth, the corresponding orthodontic scheme in the form of encoding, the movement data of each stage of the orthodontic scheme, and the time required to complete each stage;
  • the third generator takes the three-dimensional digital model of the teeth and the corresponding orthodontic solution in coded form as input, and outputs the movement data of each stage of the orthodontic solution and the time required to complete each stage.
  • An orthodontic device based on artificial intelligence including:
  • the labeled oral CT image data acquisition module is used to acquire labeled oral CT image data, in which the tooth area on each frame of the image is enclosed in the form of an annotation, each enclosed area is marked with the corresponding tooth number, and the non-tooth area is set to 0; the labeled oral CT image data is obtained by inputting the original oral CT image data into a pre-trained first generator;
  • an orthodontic solution acquisition module, used to input the labeled oral CT image data into the second generator to obtain an orthodontic solution characterized in coded form;
  • a printable dentition model acquisition module configured to obtain a printable dentition model according to the orthodontic solution and the three-dimensional digital model of the tooth obtained by labeling the oral CT image data;
  • the appliance making module is used for making appliances based on the dentition model.
  • it also includes:
  • the prediction module is used to input the three-dimensional digital model of the tooth and the orthodontic solution in coded form into a third generator so as to obtain the prediction result of the orthodontic solution, and the prediction result is displayed in the form of animation.
  • it also includes:
  • a second generator training module used for training using a preset machine learning model to obtain a second generator
  • the third generator training module is used to train the third generator.
  • the third generator training module includes:
  • the training data unit is used to obtain training data, the training data includes a three-dimensional digital model of the tooth, the corresponding orthodontic scheme in the form of coding, the movement data of each stage of the orthodontic scheme, and the time required for the completion of each stage;
  • the training unit is used to train a preset neural network according to the training data to obtain the third generator; the third generator takes the three-dimensional digital model of the teeth and the corresponding orthodontic solution in coded form as input, and outputs the movement data of each stage of the orthodontic solution and the time required for each stage to complete.
  • the present invention provides an artificial-intelligence-based orthodontic method and device that use a variety of machine learning models to automate the entire process from three-dimensional digital acquisition of the teeth, through orthodontic solution generation, to model manufacturing, thereby fully or partially replacing the physician's judgment and decision-making process.
  • the present invention is less susceptible to the subjective influence of doctors and can also improve the efficiency of diagnosis and treatment.
  • FIG. 1 is a flow chart of a method for acquiring three-dimensional digital teeth based on machine learning provided by an embodiment of the present specification
  • FIG. 2 is a flowchart of a training method of a first generator provided by an embodiment of this specification
  • FIG. 5(a) is a schematic diagram of the original state of the upper teeth before orthodontics provided by an embodiment of the present specification;
  • FIG. 5(b) is a schematic diagram of the operation of aligning the upper teeth along the direction of the dental arch provided by an embodiment of the present specification;
  • FIG. 5(c) is a schematic diagram of the operation of pushing the molars back in the second stage of orthodontics of the upper teeth provided by an embodiment of the present specification;
  • FIG. 5(d) is a schematic diagram of the comprehensive adjustment and alignment of the upper teeth (multiple operations occurring simultaneously) provided by an embodiment of the present specification;
  • FIG. 5(e) is a schematic diagram of the anterior teeth adduction operation provided by an embodiment of the present specification;
  • FIG. 5(f) is a schematic diagram of the overall fine-tuning alignment of the upper teeth provided by an embodiment of the present specification;
  • FIG. 6(a) is a schematic diagram of the original state of the lower teeth before orthodontics provided by an embodiment of the present specification;
  • FIG. 6(b) is a schematic diagram of the inward alignment of the anterior lower teeth provided by an embodiment of the present specification;
  • FIG. 6(c) is a schematic diagram of the adduction and alignment of the posterior lower teeth provided by an embodiment of the present specification;
  • FIG. 6(d) is a schematic diagram of pushing the lower molars back and aligning them provided by an embodiment of the present specification;
  • FIG. 6(e) is a schematic diagram of aligning the lower teeth along the dental arch provided by an embodiment of the present specification;
  • FIG. 6(f) is a schematic diagram of the overall fine-tuning and alignment of the lower teeth provided by an embodiment of the present specification;
  • FIG. 7 is a schematic structural diagram of a two-layer neural network provided by an embodiment of this specification.
  • FIG. 8 is a flow chart of a method for manufacturing a bracketless appliance required for invisible correction provided by an embodiment of this specification
  • FIG. 9 is a block diagram of an orthodontic device based on artificial intelligence provided by an embodiment of the present specification.
  • Oral CT image data has the advantages of low cost and easy access, but compared with professional oral 3D printing equipment its accuracy is not high, and it is difficult to obtain a three-dimensional digital model of the teeth directly from oral CT image data; this is also why oral CT image data has been difficult to use on a large scale.
  • an embodiment of the present invention provides a method for acquiring three-dimensional digital teeth based on machine learning. As shown in FIG. 1, the method includes:
  • the original oral CT image data contains complete dental information (including the overall root and crown).
  • the tooth area on each frame of the image is circled in the form of a label.
  • the circled area is marked with the corresponding tooth number, and the non-tooth area is set to 0.
  • the first generator is a neural network with image identification function obtained through machine learning.
  • the first generator can take the original oral CT image as input and output the recognition result of the tooth area in the original oral CT image;
  • the recognition result is output in the form of annotated oral CT image data.
  • the training process of the first generator will be described in detail below.
  • the two-dimensional image corresponding to the original oral CT image data may be zoomed and normalized.
  • the scaling process refers to scaling the image to a predetermined size (such as 512*512).
  • Normalization refers to mapping pixel values to a standard data range (such as 0-1) through a linear transformation; correspondingly, the areas identified in the labeled oral CT image data are also scaled and normalized.
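A minimal sketch of the scaling and normalization step above, where nearest-neighbour resampling stands in for proper image interpolation (the names and the default target size of 512 mirror the text; everything else is an illustrative assumption):

```python
def preprocess(slice2d, size=512):
    """Nearest-neighbour rescale of a 2-D CT slice to size*size, then min-max
    normalize pixel values to the 0-1 range via a linear transformation."""
    h, w = len(slice2d), len(slice2d[0])
    scaled = [[slice2d[r * h // size][c * w // size] for c in range(size)]
              for r in range(size)]
    flat = [v for row in scaled for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1           # avoid division by zero on flat images
    return [[(v - lo) / span for v in row] for row in scaled]
```

A production pipeline would use a library resampler with real interpolation; this only shows the shape of the transformation.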
  • the labeled oral CT image data records the annotated tooth information, from which the three-dimensional voxel data of the teeth is obtained.
  • this data can also be converted into three-dimensional surface data through common surface reconstruction algorithms (such as the marching cubes method) to obtain a three-dimensional digital model of the teeth.
  • the embodiment of the present invention provides a method for intelligently acquiring a complete three-dimensional digital model of the teeth from raw oral CT image data based on a neural network. Further, in a preferred embodiment, other characteristic points or characteristic lines may additionally be marked in the obtained three-dimensional digital model, such as the position of the ear point or the outline of the face.
  • the GAN network is specifically used to obtain the CT image data of the labeled oral cavity.
  • the embodiment of the present invention first introduces the GAN network.
  • the main idea of a GAN is to use the generator network to generate the two-dimensional images corresponding to the oral CT image data, and the discriminator network to judge the authenticity of the images the generator produces, repeating this process until the discriminator network can no longer judge the authenticity of the two-dimensional images; concretely, this is reflected in the authenticity probability the discriminator network assigns to any piece of raw oral CT image data input to it.
  • the generator network at that point is the first generator required by the embodiment of the present invention, and the discriminator can be discarded; that is, the embodiment uses this trained first generator to obtain labeled CT image data.
  • the embodiment of the present invention embeds a residual network in the generator network.
  • a deep network can achieve at least the effect of a shallower network as long as its upper layers learn an identity mapping on top of the shallow network; the residual structure makes such a mapping easy to learn and thereby significantly reduces training difficulty.
  • the residual network in the embodiment of the present invention is designed with a plurality of residual blocks, each of which includes a convolution layer (Conv) and a batch normalization layer (BatchNorm).
  • the number of residual blocks can be adjusted according to the complexity of the task before network training; the more complex the task, the more blocks can be designed in.
  • the generator network takes raw oral CT image data as input and outputs labeled oral CT image data;
  • the generator network includes a convolution layer (Conv), a batch normalization layer (BatchNorm), an activation layer (PReLU), and a residual network (N residual blocks).
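The skip connection that gives residual blocks their training benefit can be sketched as follows. The block's convolution is replaced here by a plain matrix multiply and the batch statistics are computed over a single feature vector, so this is only an illustration of the idea, not the patent's network:

```python
import math

def batch_norm(xs, eps=1e-5):
    """Normalize a feature vector to zero mean and unit variance
    (a single-vector stand-in for the batch normalization layer)."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [(x - mean) / math.sqrt(var + eps) for x in xs]

def residual_block(x, weights):
    """y = x + BN(Wx). The skip connection means the block only needs to learn
    a residual on top of the identity mapping, which is what makes deeper
    networks no harder to train than shallow ones. The matrix multiply stands
    in for the block's convolution."""
    wx = [sum(w * xi for w, xi in zip(row, x)) for row in weights]
    return [a + b for a, b in zip(x, batch_norm(wx))]
```

With all-zero weights the block reduces exactly to the identity mapping, which is the fallback behavior described above.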
  • an embodiment of the present invention provides a training method for a first generator, as shown in FIG. 2, including:
  • the training data includes the original oral CT image data of the pre-stored patient and the labeled oral CT image data corresponding to the original oral CT image data.
  • the labeled oral CT image data is obtained by a professional doctor manually labeling the original oral CT image data.
  • a professional doctor marks each dentition according to the standard dental position representation method.
  • tooth position notation is a method of numbering each human tooth; the upper and lower dentitions are divided into four quadrants (upper/lower, left/right) by a cross symbol. The upper right quadrant is also called area A, the upper left area B, the lower right area C, and the lower left area D.
  • the common tooth position notation is the FDI notation (two-digit system), in which each tooth is recorded with two Arabic numerals. The first digit indicates the quadrant in which the tooth is located, from the patient's perspective: upper right, upper left, lower left, and lower right correspond to 1, 2, 3, 4 for permanent teeth and 5, 6, 7, 8 for deciduous teeth. The second digit indicates the position of the tooth within the quadrant: 1-8 from the central incisor to the third molar.
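The FDI two-digit scheme just described can be expanded mechanically; this sketch covers the permanent and deciduous quadrant digits (the position names assume the permanent dentition, and all helper names are illustrative):

```python
# Quadrant digit of an FDI code: 1-4 permanent, 5-8 deciduous.
QUADRANTS = {1: "upper right", 2: "upper left", 3: "lower left", 4: "lower right",
             5: "upper right", 6: "upper left", 7: "lower left", 8: "lower right"}

# Position digit: 1-8 from the central incisor out to the third molar.
POSITIONS = {1: "central incisor", 2: "lateral incisor", 3: "canine",
             4: "first premolar", 5: "second premolar", 6: "first molar",
             7: "second molar", 8: "third molar"}

def describe_fdi(code):
    """Expand a two-digit FDI tooth number, e.g. 16 -> 'permanent upper right first molar'."""
    quadrant, position = divmod(code, 10)
    dentition = "permanent" if quadrant <= 4 else "deciduous"
    return f"{dentition} {QUADRANTS[quadrant]} {POSITIONS[position]}"
```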
  • Table 1 is laid out from the dentist's viewpoint (its left side corresponds to the patient's right side); relative to the patient's actual teeth, the left/right distinction is therefore reversed.
  • a standard set of original dentition models is a model of 16 teeth in the upper jaw and 16 teeth in the lower jaw.
  • the position that is not recognized as a crown is assigned a value of 0; the position that is recognized as a crown is marked with a corresponding number according to the tooth position notation; at the same time, the corresponding dentition shape information is further identified and matched.
  • a semi-automatic method may also be used to obtain annotated oral CT image data.
  • the semi-automatic annotation method is shown in FIG. 3 and includes:
  • the pixel data of each point on the original oral CT image data is called voxel data in three-dimensional space.
  • adjacent and homogeneous voxel data are grouped into several groups.
  • the preset rules include:
  • the pixel values of an entire tooth (excluding non-bony tissue such as the dental pulp) lie within a small threshold interval and can therefore be grouped into one category, so a complete tooth must fall within a single group of data.
  • Adjacent teeth may be grouped in the same group due to overcrowding.
  • Teeth and alveolar bone tissue may be grouped in the same group due to their similar density and contact.
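The grouping of adjacent, homogeneous voxels described above can be illustrated with a connected-component sweep over a thresholded slice. This 2-D sketch is a simplification of the 3-D case (real voxel data has a third axis), and the names are illustrative:

```python
from collections import deque

def group_voxels(grid, lo, hi):
    """Group adjacent pixels whose value lies in [lo, hi] into connected
    components, returning a label grid and the number of groups found.
    A 2-D sketch of the threshold-based voxel grouping rule above."""
    h, w = len(grid), len(grid[0])
    label = [[0] * w for _ in range(h)]
    groups = 0
    for sr in range(h):
        for sc in range(w):
            if label[sr][sc] or not (lo <= grid[sr][sc] <= hi):
                continue
            groups += 1                       # start a new group
            label[sr][sc] = groups
            queue = deque([(sr, sc)])
            while queue:                      # breadth-first flood fill
                r, c = queue.popleft()
                for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if (0 <= nr < h and 0 <= nc < w
                            and not label[nr][nc]
                            and lo <= grid[nr][nc] <= hi):
                        label[nr][nc] = groups
                        queue.append((nr, nc))
    return label, groups
```

The two caveats above fall out naturally: touching teeth, or a tooth touching alveolar bone of similar density, end up in the same component and need further interactive separation.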
  • in the trained GAN network, after the original oral CT image data is input into the generator network to obtain labeled oral CT image data, that output is fed into the discriminator network, which judges its authenticity;
  • labeled oral image data generated by the generator network counts as false, while labeled oral image data marked by a professional doctor or obtained by the semi-automatic method counts as true. When the discriminator network can no longer distinguish true from false, the generator network at that point can be used as the first generator.
  • the discriminator network and the generator network have similar structures.
  • the advantage that the first generator is applied to acquiring the labeled CT image data is obvious.
  • One set of original oral CT image data contains many slices, and each slice contains several tooth images.
  • Manual methods require processing hundreds of images, which must be done by doctors with the relevant professional experience and anatomical knowledge and is very complicated and time-consuming.
  • Even in the semi-automatic mode it is necessary to select appropriate threshold data and additionally perform some interactive processing to complete the labeling work.
  • an artificial intelligence model, the first generator, can complete accurate labeling intelligently and reduce the dependence on professional doctors.
  • the embodiment of the present invention provides a machine-learning-based method for three-dimensional digital acquisition of teeth that can automatically obtain labeled oral image data with the first generator, which reduces cost compared with professional oral 3D printing equipment and improves efficiency.
  • Orthodontics, or orthodontic treatment, refers to gradually adjusting the relative positions of the teeth through a series of medical means so as to align them correctly and adjust the bite, thereby straightening the teeth, improving their function, and beautifying the face. Orthodontic treatment usually requires multiple stages, each of which solves one or more problems, gradually completing the treatment. The multiple stages together form a complete treatment plan.
  • An embodiment of the present invention provides an artificial intelligence-based orthodontic solution automatic planning method. As shown in FIG. 4, the method includes:
  • the three-dimensional digital model of the tooth may be obtained in step S103, or may be obtained by a professional doctor or based on a semi-automatic labeling method.
  • the orthodontic solution includes the following:
  • Orthodontic treatment measures carried out at different stages (expansion, adduction, tooth extraction, alignment of midline, fine adjustment of alignment, etc.); specifically, there can be multiple treatment measures for each stage.
  • the orthodontic solution can be divided into multiple stages, each stage can perform multiple operations, each operation can contain one or more teeth.
  • the operation content of each tooth in each stage can be represented by coding.
  • the entire orthodontic treatment plan can be represented by one coding sequence.
  • the teeth are numbered from 1 to 16 from left to right in the following numbering process.
  • Orthodontics is divided into upper and lower teeth, which can be performed simultaneously or on one side.
  • when a tooth does not move, its code is 0; when it moves, its code is the operation code corresponding to that movement.
  • FIG. 5(a) shows the original state before orthodontics.
  • FIG. 5(b) shows the operation of aligning the teeth along the direction of the dental arch.
  • FIG. 5(c) shows the operation of pushing the molar back in the second stage of orthodontics.
  • the corresponding operation code is 0660000000000660, and the number 6 represents the action of pushing back the molars.
  • the corresponding teeth are teeth 2, 3, 14, and 15. FIG. 5(d) shows the operation of comprehensive adjustment and alignment (multiple operations occur simultaneously); the corresponding operation code is 0062220000222770. At this stage, multiple operations are performed at the same time: after the molars have been pushed back, the shape of the dental arch has changed, so the adjacent teeth 4, 5, 6, 11, 12, and 13 need to be aligned (moved toward the pushed-back molars), corresponding to operation number 2; tooth 3 has not yet receded and still needs to be pushed back (operation number 6); teeth 14 and 15 deviate toward the buccal side and need to be retracted (operation number 7). FIG. 5(e) shows the anterior teeth adduction operation.
  • the corresponding code is 0000088888800000.
  • the inward alignment of the front teeth is indicated by operation number 8.
  • FIG. 5(f) shows the overall fine-tuning alignment operation.
  • the corresponding code is 0009999999999990.
  • the result of the previous stage is already close to the target effect; at this stage fine-tuning is performed to achieve the target arrangement. Operation number 9 indicates the fine-tuning operation.
  • FIG. 6(a) shows the original state before orthodontics.
  • FIG. 6(b) shows the inward alignment operation of the anterior teeth, and the corresponding code is 0000088888800000.
  • the inward alignment (aligning the center line) corresponds to 8.
  • FIG. 6(c) shows the posterior teeth adduction and alignment operation, and the corresponding code is 077700000077700.
  • the adduction and alignment (aligning the midline) corresponds to the operation 7.
  • FIG. 6(d) shows the operation of pushing the molars back and aligning them.
  • the corresponding code is 0630000000000000.
  • tooth No. 3 is blocked by the No. 2 molar.
  • FIG. 6(e) shows the alignment operation along the dental arch, which corresponds to the code 0000200000000000.
  • FIG. 6(f) shows the overall fine-tuning alignment, which corresponds to the code 0009999999999000.
  • the last step is to fine-tune the alignment to achieve the desired goal.
  • the output is a coding sequence (upper teeth on the left, lower teeth on the right)
  • the combination of the upper and lower teeth forms the overall plan; when the teeth of one jaw do not move, that jaw's code can be set to all zeros (0000000000000000).
  • three-dimensional tooth digital information is used as an input to automatically obtain an orthodontic solution represented by a coding sequence.
  • a preset machine learning model is used for training to obtain a second generator.
  • the training set of the second generator includes two parts: the first part is labeled oral CT image data before orthodontics, and the second part is the orthodontic solution expressed in coded form corresponding to that labeled data.
  • the model parameters of the preset machine learning model are adjusted until a reasonable orthodontic solution expressed in coded form can be output for any labeled oral CT image data before orthodontics.
  • the general learning model can be set to include:
  • each hidden layer has corresponding model parameters, and each layer may have multiple parameters. Each parameter in a hidden layer transforms the input data linearly or nonlinearly to produce an operation result; each hidden layer receives the operation result of the previous hidden layer and, after its own operation, passes its result on to the next layer.
  • the parameters of the network are the weights W and biases b; the process of fine-tuning the weights and biases according to the input data is called the neural network training process, and the optimal weights and biases of the neural network are obtained during training.
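The weight-and-bias fine-tuning loop described above, reduced to its simplest possible case: a single linear unit trained by gradient descent on squared error. This is purely illustrative of the training process, not the patent's model:

```python
def train_linear(samples, lr=0.05, epochs=500):
    """Fit y = w*x + b by gradient descent on mean squared error:
    the weight/bias fine-tuning described above, in miniature."""
    w, b = 0.0, 0.0
    n = len(samples)
    for _ in range(epochs):
        # gradients of mean((w*x + b - y)**2) w.r.t. w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in samples) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in samples) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

w, b = train_linear([(0, 1), (1, 3), (2, 5)])  # converges near w=2, b=1
```

A real network repeats exactly this update, layer by layer, over millions of parameters; only the gradient computation (backpropagation) is more involved.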
  • the neural network model in this embodiment may be implemented with existing machine learning algorithms, including but not limited to convolutional neural networks, recurrent neural networks, or logistic regression networks.
  • the preset machine learning model in the embodiment of the present invention may include a neural network machine learning model of two convolutional layers, two pooling layers, two fully connected layers, and one output layer.
  • the convolutional layer may perform convolution processing on the input orthodontic input training data to realize feature extraction.
  • the pooling layer may perform a downsampling operation on the output of the previous layer, that is, return the maximum value in the sampling window as the output of the downsampling.
  • on the one hand, this simplifies the computational complexity; on the other hand, it compresses the features so that the main features are extracted.
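The max-pooling downsampling described above (returning the maximum value in each sampling window) can be sketched directly:

```python
def max_pool(matrix, window=2):
    """Downsample a 2-D feature map by taking the maximum over each
    non-overlapping window*window block, as the pooling layer above does."""
    h, w = len(matrix), len(matrix[0])
    return [[max(matrix[r + dr][c + dc]
                 for dr in range(window) for dc in range(window))
             for c in range(0, w - window + 1, window)]
            for r in range(0, h - window + 1, window)]
```

A 4x4 map shrinks to 2x2, keeping only the strongest activation per window — which is both the complexity reduction and the feature compression mentioned above.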
  • the fully connected layer serves as a connection layer between the nodes of adjacent layers, establishes the relationships between the node data obtained by those layers, and sends the output value to a classifier (such as a softmax classifier).
  • without an activation function, each layer's output is a linear function of the previous layer's input.
  • a nonlinear factor can be introduced by adding an activation function, that is, by adding a linear rectification layer.
  • the output layer may use a softmax function to output orthodontic output training data.
  • the softmax function serves as a nonlinear classifier trained on the orthodontic input training data; specifically, it can determine the probability that the orthodontic input training data matches the orthodontic output training data.
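The output layer's softmax can be sketched as follows; shifting by the maximum logit is a standard trick to keep the exponentials numerically stable:

```python
import math

def softmax(logits):
    """Map raw scores to a probability distribution, as the output layer's
    softmax described above. Subtracting the max keeps exp() from overflowing."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
# the probabilities sum to 1 and the largest logit gets the largest probability
```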
  • the machine learning model described in the embodiments of the present invention is not limited to the above neural network machine learning model; in practical applications it may also be another machine learning model, such as a decision tree machine learning model.
  • the preset machine learning model may be set to include:
  • a first convolutional layer; a first pooling layer connected to the first convolutional layer; a second convolutional layer connected to the first pooling layer; a second pooling layer connected to the second convolutional layer; a first fully connected layer connected to the second pooling layer; a second fully connected layer connected to the first fully connected layer; a linear correction layer connected to the second fully connected layer; and an output layer connected to the linear correction layer — together forming the neural network machine learning model.
  • without an activation function, each layer's output is a linear function of the previous layer's input; a nonlinear factor can be introduced by adding an activation function.
  • the first generator and the second generator can be used in combination, and the combination as a whole realizes a technical solution that takes raw oral CT image data as input and outputs the orthodontic solution in coded form.
  • the artificial intelligence-based automatic planning method for orthodontic solutions proposed in the embodiments of the present invention can rely on the second generator to quickly and automatically derive the orthodontic solution.
  • the orthodontic solution has objective criteria and is not affected by subjective or external factors.
  • the main goal of each orthodontic stage is the expected movement of the teeth, which can be represented by the spatial transformation of the teeth relative to the previous stage (mathematically, a three-dimensional spatial transformation matrix). Each stage accomplishes one or more treatment purposes, such as closing the gaps between teeth, expanding the arch to obtain space, or pushing the molars back to create space.
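One concrete way to express the per-stage tooth movement as a three-dimensional spatial transformation matrix, as the text suggests, is a 4x4 homogeneous transform. The particular rotation-then-translation decomposition here is an illustrative assumption, not the patent's parameterization:

```python
import math

def stage_transform(tx, ty, tz, yaw_deg):
    """Build a 4x4 homogeneous matrix: rotation about the z axis followed by a
    translation — one possible concrete form of the per-stage spatial
    transformation mentioned above."""
    c, s = math.cos(math.radians(yaw_deg)), math.sin(math.radians(yaw_deg))
    return [[c, -s, 0.0, tx],
            [s,  c, 0.0, ty],
            [0.0, 0.0, 1.0, tz],
            [0.0, 0.0, 0.0, 1.0]]

def apply_transform(m, point):
    """Apply the homogeneous transform to a 3-D point."""
    v = (*point, 1.0)
    return tuple(sum(m[r][i] * v[i] for i in range(4)) for r in range(3))

# rotate (1, 0, 0) by 90 degrees about z, then translate by (1, 0, 0)
moved = apply_transform(stage_transform(1.0, 0.0, 0.0, 90.0), (1.0, 0.0, 0.0))
```

Composing one such matrix per tooth per stage gives exactly the "spatial transformation relative to the previous stage" the text describes.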
  • an embodiment of the present invention further provides a method for predicting the treatment effect of an orthodontic solution, the prediction method includes:
  • the traditional method uses three-dimensional scanning to obtain a digital model of the teeth, obtains separate crown models after digital segmentation, then displays the three-dimensional arrangement of the crowns (dentition model) with three-dimensional visualization and interactively moves the position of each targeted crown (including translation and torsion) to obtain the subjectively predicted (or expected) target arrangement (dentition model).
  • the CT-based three-dimensional tooth model obtained in the embodiments of the present invention (containing both crown and root), together with an intelligent algorithm, can achieve the same effect.
  • an embodiment of the present invention provides a training method of a third generator.
  • the method includes:
  • the training data includes a three-dimensional digital model of the tooth, a corresponding coding form of the orthodontic solution, movement data of each stage of the orthodontic solution, and the time required to complete each stage.
  • the three-dimensional digital model of the tooth may be obtained based on the method in steps S101-S105 in the embodiment of the present invention, or may be a crown model in a traditional three-dimensional scanning mode.
  • the third generator may also draw an animation based on the movement data of each stage of the orthodontic solution and the time required to complete each stage.
  • the first generator, the second generator, and the third generator can be used in combination; the combined pipeline, as a whole, takes the raw oral CT image data as input and outputs the prediction result.
  • the second generator and the third generator may use the same or different neural networks for training.
  • the method for predicting the treatment effect of an orthodontic plan provided by the embodiments of the present invention can obtain the predicted effect quickly and conveniently, lowering the difficulty of practitioners' work, significantly reducing the burden on doctors, and giving the patient a vivid impression of the orthodontic plan he or she will receive.
  • the bracketless appliance required for invisible correction can be made.
  • the manufacturing method is shown in FIG. 8 and includes:
  • a printable dentition model is obtained by combining the orthodontic plan with the three-dimensional digital tooth model derived from the labeled oral CT image data.
  • the animation data corresponding to the orthodontic plan can be assembled and the model output as a 3D-printable dentition model.
  • the bracketless appliance is manufactured by thermoforming a polymer film.
  • the labeled oral CT image data, the three-dimensional digital model of the tooth obtained based on the labeled oral CT image data, the orthodontic solution, and the animation data of the orthodontic solution can all be obtained using the method provided in the embodiments of the present invention.
  • the process of making the mold and making the appliance can be combined into one step, that is, printing directly in the shape of the appliance, eliminating the film-pressing step and further improving the manufacturing efficiency of the appliance.
  • if the error between the obtained three-dimensional digital tooth model and the real crown surface meets the requirement, the model can be used directly for modeling the printable dentition.
  • the error may be related to a specific orthodontic solution, and different orthodontic solutions have different requirements for the error of the tooth model crown surface and the actual crown surface.
  • because the labeled CT image data may contain noise such as soft tissue, it may not be sufficiently clear; a professional doctor can therefore evaluate whether the obtained printable dentition model can be used to make an appliance.
  • the crown outer surface model can be obtained by three-dimensional model acquisition.
  • the manufacture of bracketless appliances has higher requirements on the conformity of the outer surface of the crown.
  • the outer-surface model of the crown is obtained by intraoral scanning, or by taking a plaster model of the teeth outside the mouth and scanning it.
  • the crown model obtained by scanning and the three-dimensional digital tooth model obtained in the embodiments of the present invention differ very little in crown shape.
  • due to threshold selection, the crown of the three-dimensional digital tooth model may be slightly smaller than the real crown.
  • due to scanning error, the 3D scan may differ locally from the three-dimensional digital tooth model.
  • the three-dimensional digital tooth model based on the CT scan and the model obtained by three-dimensional scanning are two representations in three-dimensional space; because their coordinate systems differ, they do not coincide. Using a common three-dimensional spatial data-matching algorithm (such as the ICP algorithm), the scanned crown model can be matched to the crown position in the three-dimensional digital tooth model so that they coincide.
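The matching step named above (ICP) iterates nearest-neighbour pairing and a rigid-transform solve until the clouds coincide. The sketch below shows only the coarse first move of such an alignment — translating one cloud so the centroids match — under the simplifying assumption that the two clouds differ by a pure translation; it is not a full ICP implementation.

```python
# Sketch: coarse alignment of two point clouds representing the same crown
# in different coordinate systems (translation only; a full ICP would also
# solve for rotation and iterate over point correspondences).

def centroid(points):
    n = float(len(points))
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def align_by_centroid(scan, model):
    """Translate the scanned cloud so its centroid matches the model's."""
    cs, cm = centroid(scan), centroid(model)
    shift = tuple(cm[i] - cs[i] for i in range(3))
    return [tuple(p[i] + shift[i] for i in range(3)) for p in scan]

model = [(0, 0, 0), (2, 0, 0), (0, 2, 0)]
scan = [(10, 5, 1), (12, 5, 1), (10, 7, 1)]  # same shape, offset coordinates
print(align_by_centroid(scan, model))        # ~ the model's coordinates
```

In practice a library implementation (e.g. an off-the-shelf ICP routine) would be used instead of hand-rolled code.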
  • the methods of the present invention can be freely combined to achieve the purpose of automated orthodontics.
  • in the prior art, the path from medical imaging to a treatment plan usually depends heavily on the doctor, and the degree of automation is difficult to improve.
  • machine learning methods are mostly used, and an intelligent calculation model for orthodontic treatment is obtained through big data training, thereby replacing or partially replacing the judgment and decision-making process of physicians.
  • the present invention is less susceptible to the doctor's subjective influence and can also improve the efficiency of diagnosis and treatment.
  • the present invention decomposes the entire diagnosis and treatment process into a plurality of independent calculation processes.
  • the first, second, and third generators used as agents can each be trained with deep-learning or neural-network machine learning.
  • decoupling the various medical steps reduces the dependence on raw data and improves the accuracy of each stage of diagnosis and treatment.
  • An embodiment of the present invention also discloses an orthodontic device based on artificial intelligence, as shown in FIG. 9, including:
  • the labeled oral CT image data acquisition module 501 is used to acquire labeled oral CT image data; in the labeled oral CT image data, the tooth area on each image frame is enclosed in the form of annotations; each enclosed area is labeled with the corresponding tooth number, and non-tooth areas are set to 0; the labeled oral CT image data is obtained by inputting the original oral CT image data into the pre-trained first generator;
  • the orthodontic solution acquisition module 502 is used to input the labeled oral CT image data into the second generator to obtain an orthodontic solution characterized in coded form;
  • the prediction module 503 is configured to input the three-dimensional digital model of the tooth and the orthodontic solution characterized in coded form into a third generator to obtain a prediction result of the orthodontic solution, and the prediction result is displayed in an animation form.
  • a printable dentition model acquisition module 504 configured to obtain a printable dentition model according to the orthodontic solution and the three-dimensional digital model of the tooth obtained by labeling the oral CT image data;
  • the appliance making module 505 is used to make appliances based on the dentition model.
  • a second generator training module used for training using a preset machine learning model to obtain a second generator
  • the third generator training module is used to train the third generator.
  • the third generator training module includes:
  • the training data unit is used to obtain training data, the training data includes a three-dimensional digital model of the tooth, the corresponding orthodontic scheme in the form of coding, the movement data of each stage of the orthodontic scheme, and the time required for the completion of each stage;
  • the training unit is used to train a preset neural network according to the training data to obtain the third generator; the third generator takes the three-dimensional digital tooth model and the corresponding coded orthodontic plan as input, and outputs the movement data of each stage of the orthodontic plan and the time required to complete each stage.
  • the division of the modules/units in the present invention is only a division of logical functions; in actual implementation there may be other divisions — for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Some or all of the modules/units can be selected according to actual needs to achieve the purpose of the solution of the present invention.
  • each module/unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or software function unit.

Landscapes

  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Dentistry (AREA)
  • Epidemiology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present invention provides an artificial-intelligence-based orthodontic method and device. The method includes: acquiring labeled oral CT image data, in which the tooth region on each image frame is enclosed in the form of annotations, each enclosed region is labeled with the corresponding tooth number, and non-tooth regions are set to 0; inputting the labeled oral CT image data into a second generator to obtain an orthodontic plan expressed in coded form; obtaining a printable dentition model according to the orthodontic plan combined with the three-dimensional digital tooth model derived from the labeled oral CT image data; and manufacturing an appliance based on the dentition model. The invention uses multiple machine-learning models to automate the entire process from three-dimensional tooth digitization to model manufacturing; it is not easily affected by an individual doctor's subjectivity and can also improve the efficiency of diagnosis and treatment.

Description

Artificial-Intelligence-Based Orthodontic Method and Device — Technical Field
The present invention relates to the technical field of orthodontics, and in particular to an artificial-intelligence-based orthodontic method and device.
Background
Oral disease is a common, frequently occurring disease. According to World Health Organization statistics, malocclusion has become one of the three major oral diseases (dental caries, periodontal disease, and malocclusion). Dental deformities greatly affect oral health, oral function, the development of the maxillofacial bones, and appearance. Orthodontics is considered an essential part of oral health care. Orthodontic treatment targets misaligned teeth or malocclusion: fixed appliances composed of archwires and brackets, or removable invisible appliances such as aligners, apply three-dimensional corrective forces and moments to the teeth, adjusting the balance and coordination among the facial bones, teeth, and maxillofacial muscles; after a period of treatment, the facial profile is improved, the dentition is aligned, and chewing efficiency is increased. Traditional orthodontic treatment relies mainly on the doctor's experience to formulate a plan.
Before a treatment plan is determined, a manual tooth-arrangement experiment can help the orthodontist anticipate the treatment process involved and inform the patient of the likely tooth movements and final effect. Its main disadvantages are that each tooth must be handled individually, automation and tooth-arrangement efficiency are low, large amounts of material are consumed, the result is not very observable, and the patient can hardly obtain a clear understanding of the orthodontic effect.
With the development of computer graphics and machine learning, automated orthodontic treatment is developing rapidly. To obtain the three-dimensional tooth model data needed for orthodontic treatment, the prior art usually relies on professional 3D scanning equipment to acquire tooth image data; such equipment is expensive, and the high cost of image acquisition inevitably burdens medical institutions and users. CT images, which are highly accessible and relatively cheap, are not accurate enough, so it is difficult to obtain accurate three-dimensional model data from CT images without relying on manual intervention.
Furthermore, manual orthodontic plans in the prior art are limited by the doctor's professional competence; the effect of a plan is therefore hard to guarantee, and it cannot be presented to the user as a dynamic image, which is also unfavorable for the user's understanding of the orthodontic process.
Summary of the Invention
The present invention proposes an artificial-intelligence-based orthodontic method and device, specifically including the following.
An artificial-intelligence-based orthodontic method, including:
acquiring labeled oral CT image data, in which the tooth region on each image frame is enclosed in the form of annotations; each enclosed region is labeled with the corresponding tooth number, and non-tooth regions are set to 0;
inputting the labeled oral CT image data into a second generator to obtain an orthodontic plan expressed in coded form;
obtaining a printable dentition model according to the orthodontic plan, combined with the three-dimensional digital tooth model derived from the labeled oral CT image data;
manufacturing an appliance based on the dentition model.
Preferably, if the labeled oral CT image data meets a preset clarity requirement and the error between the obtained three-dimensional digital tooth model and the real crown surface meets a preset accuracy requirement, the model is used directly for modeling the printable dentition; otherwise, a crown outer-surface model is obtained by three-dimensional impression taking to obtain the printable dentition model.
Preferably, the orthodontic plan includes the stages into which the orthodontic process is divided,
the numbers of the teeth that undergo orthodontic movement in each stage,
and the orthodontic treatment measures performed in each stage, each orthodontic stage including one or more treatment measures.
Preferably, the whole orthodontic treatment plan is represented by a code sequence; one code element in the sequence is one treatment measure, each code element includes 16 digits corresponding to the 16 teeth of a single jaw, and each digit indicates how the corresponding tooth is handled.
Preferably, a preset machine-learning model is trained to obtain the second generator; the preset model is a neural-network machine-learning model with two convolutional layers, two pooling layers, two fully connected layers, and one output layer;
the training set of the second generator includes two parts: the first part is labeled oral CT image data before orthodontic treatment, and the second part is the corresponding orthodontic plan expressed in coded form; during training, the model parameters of the preset machine-learning model are adjusted until a reasonable coded orthodontic plan can be output for any pre-treatment labeled oral CT image data.
Preferably, the method further includes:
inputting the three-dimensional digital tooth model and the coded orthodontic plan into a third generator to obtain a prediction result of the orthodontic plan, the prediction result being displayed as an animation.
Preferably, the training method of the third generator includes:
acquiring training data, which includes three-dimensional digital tooth models, the corresponding coded orthodontic plans, the movement data of each stage of the plans, and the time required to complete each stage;
training a preset neural network with the training data to obtain the third generator, which takes a three-dimensional digital tooth model and the corresponding coded orthodontic plan as input and outputs the movement data of each stage of the plan and the time required to complete each stage.
An artificial-intelligence-based orthodontic device, including:
a labeled-oral-CT-image-data acquisition module for acquiring labeled oral CT image data, in which the tooth region on each image frame is enclosed in the form of annotations; each enclosed region is labeled with the corresponding tooth number, and non-tooth regions are set to 0; the labeled oral CT image data is obtained by inputting the original oral CT image data into a pre-trained first generator;
an orthodontic-plan acquisition module for inputting the labeled oral CT image data into a second generator to obtain an orthodontic plan expressed in coded form;
a printable-dentition-model acquisition module for obtaining a printable dentition model according to the orthodontic plan combined with the three-dimensional digital tooth model derived from the labeled oral CT image data;
an appliance manufacturing module for manufacturing an appliance based on the dentition model.
Preferably, the device further includes:
a prediction module for inputting the three-dimensional digital tooth model and the coded orthodontic plan into a third generator to obtain a prediction result of the plan, the prediction result being displayed as an animation.
Preferably, the device further includes:
a second-generator training module for training a preset machine-learning model to obtain the second generator;
a third-generator training module for training the third generator, the third-generator training module including:
a training-data unit for acquiring training data, which includes three-dimensional digital tooth models, the corresponding coded orthodontic plans, the movement data of each stage of the plans, and the time required to complete each stage;
a training unit for training a preset neural network with the training data to obtain the third generator, which takes a three-dimensional digital tooth model and the corresponding coded orthodontic plan as input and outputs the movement data of each stage of the plan and the time required to complete each stage.
The artificial-intelligence-based orthodontic method and device provided by the present invention use multiple machine-learning models to automate the whole process from three-dimensional tooth digitization and orthodontic-plan generation to model manufacturing, thereby replacing or partially replacing the physician's judgment and decision-making. Compared with the prior art, the invention is less susceptible to an individual doctor's subjectivity and can also improve the efficiency of diagnosis and treatment.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed for the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of a machine-learning-based method for three-dimensional tooth digitization provided by an embodiment of this specification;
FIG. 2 is a flowchart of the training method of the first generator provided by an embodiment of this specification;
FIG. 3 is a flowchart of the semi-automatic labeling method provided by an embodiment of this specification;
FIG. 4 is a flowchart of an artificial-intelligence-based automatic orthodontic-plan planning method provided by an embodiment of this specification;
FIG. 5(a) shows the original state of the upper teeth before orthodontic treatment;
FIG. 5(b) shows the operation of aligning the upper teeth along the dental arch;
FIG. 5(c) shows the second-stage operation of pushing the upper molars backward;
FIG. 5(d) shows the comprehensive adjustment and alignment of the upper teeth (multiple operations occurring at once);
FIG. 5(e) shows the retraction operation of the upper anterior teeth;
FIG. 5(f) shows the overall fine-tuning alignment operation of the upper teeth;
FIG. 6(a) shows the original state of the lower teeth before orthodontic treatment;
FIG. 6(b) shows the retraction-alignment operation of the lower anterior teeth;
FIG. 6(c) shows the retraction-alignment operation of the lower posterior teeth;
FIG. 6(d) shows the operation of pushing a lower molar backward together with alignment;
FIG. 6(e) shows the operation of aligning the lower teeth along the dental arch;
FIG. 6(f) shows the overall fine-tuning alignment operation of the lower teeth;
FIG. 7 is a schematic diagram of the structure of a two-layer neural network provided by an embodiment of this specification;
FIG. 8 is a flowchart of the method for making the bracketless appliance required for invisible correction, provided by an embodiment of this specification;
FIG. 9 is a block diagram of an artificial-intelligence-based orthodontic device provided by an embodiment of this specification.
Detailed Description
To help those skilled in the art better understand the solutions of the present invention, the technical solutions in the embodiments are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
It should be noted that the terms "comprising" and "having" and any variants thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or units is not necessarily limited to the steps or units explicitly listed, but may include other steps or units not explicitly listed or inherent to the process, method, product, or device.
Oral CT image data has the advantages of low cost and easy availability, but its accuracy is lower than that of professional dental 3D scanning equipment, so it is difficult to derive a three-dimensional digital tooth model directly from it; this is also why oral CT image data is hard to use at scale. To obtain a high-precision three-dimensional digital tooth model from oral CT image data, an embodiment of the present invention provides a machine-learning-based method for three-dimensional tooth digitization, as shown in FIG. 1, including:
S101. Acquire original oral CT image data.
The original oral CT image data contains complete tooth information (including the whole root and crown).
S103. Input the original oral CT image data into a pre-trained first generator to obtain labeled oral CT image data.
Specifically, in the labeled oral CT image data the tooth region on each image frame is enclosed in the form of annotations; each enclosed region is labeled with the corresponding tooth number, and non-tooth regions are set to 0.
The first generator is a neural network with an image-labeling capability obtained through machine learning. It takes an original oral CT image as input and outputs the recognition result for the tooth regions in that image, in the form of labeled oral CT image data. The training process of the first generator is detailed below.
In a preferred embodiment, before the original oral CT image data is input into the first generator, the corresponding two-dimensional images may also be scaled and normalized. Scaling means resizing the image to a preset size (for example, 512x512). Normalization means linearly transforming the pixel values into a standard data range (for example, 0-1); correspondingly, the regions identified in the labeled oral CT image data are also scaled and normalized.
S105. Generate a three-dimensional digital tooth model from the labeled oral CT image data.
The labeled oral CT image data records the labeled tooth information, yielding three-dimensional voxel data of the teeth. Three-dimensional surface data can then be obtained from this data with a common surface-reconstruction algorithm (such as marching cubes), giving the three-dimensional digital tooth model.
The embodiment of the present invention thus provides a neural-network-based method for intelligently obtaining a complete three-dimensional digital tooth model from original oral CT image data. Further, in a preferred embodiment, additional feature points or feature lines may be annotated on the obtained model, such as ear-point positions or the facial outer contour.
The embodiment of the present invention specifically uses a GAN to obtain the labeled oral CT image data. Before detailing the training method of the first generator below, the GAN is first introduced. The main idea of a GAN is to use a generator network to generate the two-dimensional images corresponding to labeled oral CT image data, and a discriminator network to judge whether the images generated by the generator are real or fake; the loop continues until the discriminator network cannot tell — concretely, until, for any input original oral CT image, the probability of real or fake given by the discriminator is 0.5. The generator network at that point is what the embodiment needs, namely the first generator, and the discriminator can be discarded. That is, the embodiment uses this trained first generator to obtain labeled oral CT image data.
However, in concrete GAN training, once the number of network layers reaches a certain depth the network's performance saturates, and adding layers makes it degrade, with both training accuracy and test accuracy falling. To maintain training accuracy while keeping time and computational complexity from rising sharply as depth increases, to ensure fast convergence, and to avoid vanishing and diffusing gradients, the embodiment embeds a residual network in the generator network.
The residual network uses skip connections as its basic structure and, through them, converts the optimization target from H(x) to H(x) - x, where H(x) = F(x) + x. A deep network can then match a shallow one as long as its upper layers learn an identity mapping, which significantly reduces training difficulty.
Specifically, the residual network in the embodiment is designed with multiple residual blocks, each including a convolutional layer (Conv) and a normalization layer (BatchNorm). The number of residual blocks can be adjusted before training according to task complexity: the more complex the task, the more blocks can be used.
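The residual idea H(x) = F(x) + x described above can be sketched in a few lines. The stand-in function below is a placeholder for the block's conv+batchnorm pair, which is an assumption for illustration only; the point is that the block only has to learn the residual F(x), and if the best mapping is the identity, F can simply stay near zero.

```python
# Sketch of a residual (skip) connection: output = F(x) + x.
# F here is an arbitrary callable standing in for conv + batchnorm.

def residual_block(x, f):
    """Add the learned residual f(x) elementwise back onto the input x."""
    return [fi + xi for fi, xi in zip(f(x), x)]

# If F(x) = 0, the block reduces to an identity mapping: H(x) = x.
identity_residual = lambda x: [0.0] * len(x)
print(residual_block([1.0, 2.0, 3.0], identity_residual))  # the input, unchanged
```

This is why stacking residual blocks does not make the network harder to optimize than its shallower counterpart: the extra layers can default to identity.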
The generator network takes original oral CT image data as input and outputs labeled oral CT image data; it includes convolutional layers (Conv), normalization layers (BatchNorm), activation layers (PReLU), and the residual network (N residual blocks).
Specifically, an embodiment of the present invention provides a training method for the first generator, as shown in FIG. 2, including:
S10. Acquire training data, including pre-stored patients' original oral CT image data and the corresponding labeled oral CT image data.
In one feasible embodiment, the labeled oral CT image data is obtained by professional doctors manually annotating the original oral CT image data.
In this embodiment, professional doctors mark each dentition according to the standard dental tooth-numbering notation. Tooth notation is a method of assigning a number to each human tooth; a cross symbol divides the upper and lower dentitions into four quadrants: the upper right (also called region A), upper left (region B), lower right (region C), and lower left (region D). The common notation is the FDI notation (a digital notation), in which each tooth is recorded with two Arabic numerals: the first indicates the quadrant in which the tooth lies — the patient's upper right, upper left, lower left, and lower right are 1, 2, 3, 4 for permanent teeth and 5, 6, 7, 8 for deciduous teeth; the second indicates the tooth's position, from the central incisor to the third molar, 1-8. Table 1 is shown from the dentist's viewpoint (the left side corresponds to the patient's right side), but the left/right distinction is reversed accordingly, and the patient's actual teeth prevail.
Table 1 (the original document shows this as an image; reconstructed here as the standard FDI chart for permanent teeth, dentist's view):
18 17 16 15 14 13 12 11 | 21 22 23 24 25 26 27 28
48 47 46 45 44 43 42 41 | 31 32 33 34 35 36 37 38
It should be noted that a standard set of original dentition models comprises 16 upper-jaw teeth and 16 lower-jaw teeth. Positions not recognized as a crown (missing teeth) are assigned the value 0; positions recognized as a crown are marked with the corresponding number according to the tooth notation, and the corresponding dentition shape information is further identified and matched.
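The FDI two-digit scheme described above lends itself to a small decoding helper. The function and its quadrant table below are an illustrative sketch, not part of the patent's method.

```python
# Sketch: decode a two-digit FDI tooth number into quadrant, dentition,
# and position (quadrants 1-4 permanent, 5-8 deciduous; positions 1-8).

QUADRANTS = {1: "upper right", 2: "upper left", 3: "lower left", 4: "lower right",
             5: "upper right", 6: "upper left", 7: "lower left", 8: "lower right"}

def decode_fdi(code):
    quadrant, position = divmod(code, 10)
    if quadrant not in QUADRANTS or not 1 <= position <= 8:
        raise ValueError("not a valid FDI tooth number: %d" % code)
    dentition = "permanent" if quadrant <= 4 else "deciduous"
    return QUADRANTS[quadrant], dentition, position

print(decode_fdi(11))  # upper right permanent central incisor
print(decode_fdi(36))  # lower left permanent first molar
```

A labeling pipeline could use such a helper to validate the tooth numbers assigned to the enclosed regions.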
In another feasible embodiment, the labeled oral CT image data can also be obtained with a semi-automatic method; the semi-automatic labeling method, shown in FIG. 3, includes:
S01. Obtain multiple slices from the original oral CT image data, yielding voxel data in a three-dimensional coordinate system.
The pixel data of each point of the original oral CT image data is called the voxel data of the three-dimensional space.
S02. Obtain a classification threshold used to classify the voxel data.
S03. Group the voxel data based on positional relationships and the classification threshold.
Specifically, based on adjacency, adjacent voxels of the same class are collected into groups.
S04. Analyze independent tooth regions from the grouping result according to preset rules.
Specifically, the preset rules include:
a. By the anatomy of a tooth, the pixel values of the whole tooth (excluding non-bony tissue such as pulp) fall within a small threshold interval — that is, into one class — so a complete tooth necessarily lies within one group of data.
b. Adjacent teeth may be collected into the same group because they are too crowded.
c. Teeth and alveolar bone tissue may be collected into the same group because their densities are close and they are in contact.
Further, for cases b and c, segmentation is needed to obtain independent tooth regions. The specific method may be:
1) Obtain the three-dimensional surface data of the above data with a surface-reconstruction algorithm (such as marching cubes) and assign its tooth number.
2) Match it against the standard three-dimensional tooth surface data of the corresponding number (for example, with the ICP matching algorithm, or by manual matching adjustment), transforming the standard model to the reconstructed model's position by three-dimensional spatial transformations (translation, rotation, affine, etc.) to obtain the expected model form.
3) Voxels outside the extent of the matched model can be deleted, yielding the independent tooth region.
S05. Mark the independent tooth regions.
Compared with the previous embodiment, in which doctors produce the labeled oral CT image data, this embodiment does not require marking hundreds of images one by one. With an appropriately chosen threshold, the automatic analysis can obtain the vast majority of tooth regions automatically, and the few unrecognized teeth can be handled with little manual work.
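Step S03 above — collecting adjacent voxels of the same threshold class into groups — is essentially connected-component labeling. The sketch below shows one minimal way to do it with a breadth-first search over 6-connected neighbours; the sparse dict representation of voxels is an illustrative assumption.

```python
from collections import deque

# Sketch of step S03: group voxels that are face-adjacent (6-connectivity)
# and share the same threshold class. `labels` maps (x, y, z) -> class label.

def group_voxels(labels):
    seen, groups = set(), []
    neighbours = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    for start in labels:
        if start in seen:
            continue
        queue, comp = deque([start]), []
        seen.add(start)
        while queue:                      # flood-fill one connected component
            v = queue.popleft()
            comp.append(v)
            for d in neighbours:
                w = (v[0]+d[0], v[1]+d[1], v[2]+d[2])
                if w in labels and w not in seen and labels[w] == labels[start]:
                    seen.add(w)
                    queue.append(w)
        groups.append(sorted(comp))
    return groups

voxels = {(0,0,0): 1, (1,0,0): 1, (5,0,0): 1}  # two separate class-1 blobs
print(group_voxels(voxels))                    # two groups
```

On real CT volumes one would use an optimized array-based labeling routine, but the grouping rule is the same.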
S20. Input the training data into the GAN to train its generator network and discriminator network, until the discriminator cannot distinguish the labeled oral CT image data produced by the generator network from that annotated by professional doctors or by the semi-automatic method.
In the trained GAN, after the original oral CT image data is input into the generator network to obtain labeled oral image data, the discriminator network receives the generator's output and judges whether it is real or fake (labeled data generated by the generator network is fake; labeled data annotated by professional doctors or by the semi-automatic method is real). If the discriminator network can no longer tell real from fake, the generator network at that point can serve as the first generator. In this embodiment, the discriminator network and the generator network have similar structures.
S30. Use the trained generator network as the first generator.
The advantage of applying the first generator to obtain labeled oral CT image data in the embodiment is obvious. One set of original oral CT image data has many slices, and one slice contains several tooth images. Manual labeling requires processing hundreds of images and must be done by doctors with relevant professional experience and anatomical knowledge, which is tedious and time-consuming. Even the semi-automatic method requires choosing appropriate threshold data and some additional interactive work to complete the labeling. With the artificial-intelligence model — the first generator — accurate labeling can be completed intelligently, reducing dependence on professional doctors.
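The stopping criterion above — the discriminator's real/fake estimate settling at 0.5 — can be illustrated with a deliberately toy simulation. The scalar "networks" and update rule below are stand-ins invented for illustration, not an actual GAN training loop.

```python
# Toy sketch of the GAN equilibrium criterion: alternate updates drive the
# discriminator's estimate for generated samples toward 0.5, at which point
# the generator is kept (the first generator) and the discriminator dropped.

def train_gan(steps, lr=0.1):
    d_estimate = 0.9  # discriminator starts confident that fakes are fake
    for _ in range(steps):
        # each generator update nudges the discriminator's estimate toward 0.5
        d_estimate -= lr * (d_estimate - 0.5)
    return d_estimate

print(round(train_gan(100), 3))  # ~0.5: the discriminator can no longer tell
```

A real implementation would alternate gradient steps on the generator and discriminator losses over image batches; only the convergence target is shown here.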
The embodiment of the present invention thus provides a machine-learning-based method for three-dimensional tooth digitization that can obtain labeled oral image data automatically from the first generator, lowering cost compared with professional dental 3D scanning equipment while using machine learning to improve the accuracy and automation of extracting tooth-model voxel data from CT image data.
With the three-dimensional digital tooth model obtained, an orthodontic plan can be derived. Orthodontic or corrective treatment gradually adjusts the relative positions of the teeth through a sequence of medical measures, aligning the dentition and adjusting the occlusion, so as to align the teeth, improve function, and beautify the face. Orthodontic treatment usually requires multiple stages, each solving one or more problems and progressively completing the treatment effect; together the stages form a complete treatment plan. An embodiment of the present invention provides an artificial-intelligence-based automatic orthodontic-plan planning method, as shown in FIG. 4, including:
S201. Acquire labeled oral CT image data.
Specifically, the data can be obtained in step S103, or from professional doctors, or by the semi-automatic labeling method.
S202. Input the labeled oral CT image data into the second generator to obtain an orthodontic plan expressed in coded form.
Specifically, the orthodontic plan includes the following:
(1) the several stages (their number) into which the orthodontic process can be divided;
(2) the numbers of the teeth that undergo orthodontic movement in each stage (some teeth move, some do not);
(3) the orthodontic treatment measures performed in each stage (arch expansion, retraction, extraction, midline alignment, fine-tuning alignment, etc.); specifically, each stage may have multiple treatment measures.
In fact, an orthodontic plan can be divided into multiple stages, each stage can perform multiple operations, and each operation can involve one or more teeth. In this embodiment, the operation applied to each tooth in each stage can be expressed by a code, so the whole orthodontic treatment plan can be represented by a code sequence. A single jaw usually has 16 teeth, which can be represented by 16 digits. For convenience, in the numbering below the teeth are numbered 1 to 16 from left to right.
An example follows.
Orthodontics covers the upper and lower teeth, which can be treated simultaneously or one jaw at a time. A tooth that does not move is coded 0; a moving tooth is coded with the operation code corresponding to its movement.
Upper-teeth example:
FIG. 5(a) shows the original state before treatment. FIG. 5(b) shows alignment along the dental arch: the operation code for aligning the anterior teeth toward the midline is 0000001111100000 — 16 teeth, one digit per tooth, where 0 means no operation and 1 means alignment along the arch (toward the midline); this string represents aligning teeth 7, 8, 9, 10, and 11 simultaneously. FIG. 5(c) shows the second stage, pushing the molars backward, with operation code 0660000000000660, where 6 denotes pushing a molar back, applied to teeth 2, 3, 14, and 15. FIG. 5(d) shows comprehensive adjustment and alignment (multiple simultaneous operations), with operation code 0062220000222770: because the arch shape changes after the molars are pushed back, the adjacent teeth 4, 5, 6, 11, 12, and 13 need arch alignment (moving toward the posterior molars, operation code 2); teeth 2 and 3 have not moved back far enough and must keep being pushed (operation code 6); teeth 13 and 15 deviate buccally and need retraction alignment of the posterior teeth (operation code 7). FIG. 5(e) shows the anterior retraction operation, coded 0000088888800000: after the previous stage's adjustment, retraction alignment of the anterior teeth is needed (operation code 8). FIG. 5(f) shows the overall fine-tuning alignment operation, coded 0009999999999990: the previous stage's result is already close to the target, and this stage fine-tunes to reach the target arrangement (9 denotes the fine-tuning operation).
Lower-teeth example:
FIG. 6(a) shows the original state before treatment. FIG. 6(b) shows the anterior retraction-alignment operation, coded 0000088888800000; as described above, retraction alignment (toward the midline) corresponds to operation 8. FIG. 6(c) shows the posterior retraction-alignment operation, coded 077700000077700; as described above, this retraction alignment corresponds to operation 7. FIG. 6(d) shows pushing a molar backward together with alignment, coded 0630000000000000: in the previous stage tooth 3 was blocked by molar 2, so molar 2 must be pushed backward while tooth 3 is separately rotated into alignment (denoted by operation 3). FIG. 6(e) shows alignment along the dental arch, coded 0000200000000000. FIG. 6(f) shows overall fine-tuning alignment, coded 0009999999999000. As with the upper teeth, the final fine-tuning alignment reaches the expected target.
Finally, the output is a code sequence (upper teeth on the left, lower teeth on the right).
(The code-sequence table appears as an image in the original document.)
Combining the upper and lower teeth forms the overall plan. In some cases, when the teeth of one jaw do not move, the code can be set to 0000000000000000.
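The 16-digit per-jaw codes walked through above can be built and read with a small helper. The operation digits follow the examples in the text (e.g. 1 for alignment along the arch, 6 for pushing a molar back); the function names are illustrative, not from the source.

```python
# Sketch: encode/decode one stage's 16-digit operation string for a single jaw.
# Teeth are numbered 1-16 from left to right; 0 means "no operation".

def encode_stage(ops):
    """ops maps tooth number (1-16) -> operation digit; unlisted teeth get 0."""
    return "".join(str(ops.get(tooth, 0)) for tooth in range(1, 17))

def decode_stage(code):
    """Return {tooth number: operation digit} for every non-zero digit."""
    return {tooth: int(d) for tooth, d in enumerate(code, start=1) if d != "0"}

code = encode_stage({7: 1, 8: 1, 9: 1, 10: 1, 11: 1})
print(code)                 # matches the text's midline-alignment example
print(decode_stage("0660000000000660"))  # the molar push-back example
```

A full plan is then a sequence of such stage codes, one pair (upper, lower) per stage.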
In this embodiment, taking three-dimensional tooth digital information as input, the orthodontic plan represented by a code sequence can be obtained automatically.
Specifically, in this embodiment a preset machine-learning model is trained to obtain the second generator. Specifically, the training set of the second generator includes two parts: the first part is labeled oral CT image data before orthodontic treatment, and the second part is the corresponding orthodontic plan expressed in coded form. During training, the model parameters of the preset machine-learning model are adjusted until a reasonable coded orthodontic plan can be output for any pre-treatment labeled oral CT image data.
In general, a learning model can be set up to include:
an input layer, x;
any number of hidden layers; each hidden layer has corresponding model parameters (possibly several per layer); each model parameter in a hidden layer applies a linear or nonlinear transformation to the input data to produce a computation result; each hidden layer receives the previous hidden layer's result and, after its own computation, passes this layer's result to the next;
an output layer, ŷ;
and a set of weights and biases (W and b) between every two layers,
as shown in the neural-network structure of FIG. 7, where the weights W and biases b affect the output ŷ. The process of fine-tuning the weights and biases according to the input data is called neural-network training, so the optimal weights and biases of the neural network are obtained during training.
The neural-network model in this embodiment can use existing machine-learning algorithms that implement the training process, including but not limited to convolutional neural networks, recurrent neural networks, or logistic-regression networks.
Specifically, the preset machine-learning model in the embodiment of the present invention may be a neural-network machine-learning model with two convolutional layers, two pooling layers, two fully connected layers, and one output layer.
Specifically, the convolutional layers can convolve the input orthodontic training data to extract features.
Specifically, the pooling layers can downsample the previous layer's output, returning the maximum value in each sampling window as the downsampled output. This both reduces computational complexity and compresses the features, extracting the main ones.
Specifically, a fully connected layer can serve as the connection between the nodes of the adjacent layers, establishing connections among the node data obtained from those layers and sending the output values to a classifier (such as a softmax classifier).
In the preset machine-learning model above, each layer's output is a linear function of the previous layer's input; since in practice the data is often not linearly separable, a nonlinear factor can be introduced by adding an activation function, that is, by adding a linear rectification layer.
Specifically, the output layer can use a softmax function to output the orthodontic output training data; softmax contains a nonlinear classifier, which is trained on the orthodontic input training data. Specifically, the probability that the orthodontic input training data matches the orthodontic output training data can be determined.
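The softmax output layer mentioned above turns the final layer's raw scores into a probability distribution. A minimal, numerically stable sketch (the example scores are made up):

```python
import math

# Sketch of a softmax output layer: map raw scores to probabilities that
# sum to 1, e.g. probabilities over candidate plan codes.

def softmax(scores):
    m = max(scores)                        # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print([round(p, 3) for p in probs])  # largest score gets the largest probability
```

Subtracting the maximum score before exponentiating leaves the result unchanged mathematically but avoids overflow for large scores.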
It should also be noted that the machine-learning model in the embodiments of the present invention is not limited to the above neural-network machine-learning model; in practical applications other machine-learning models, such as decision-tree models, may also be used, and the embodiments are not limited to the above.
In a specific embodiment, the preset machine-learning model can be set up to include:
a first convolutional layer; a first pooling layer connected to the first convolutional layer; a second convolutional layer connected to the first pooling layer; a second pooling layer connected to the second convolutional layer; a first fully connected layer connected to the second pooling layer; a second fully connected layer connected to the first fully connected layer; a linear rectification layer connected to the first fully connected layer; and an output layer connected to the second fully connected layer — a neural-network machine-learning model.
In the preset machine-learning model above, each layer's output is a linear function of the previous layer's input; since in practice the data is often not linearly separable, a nonlinear factor can be introduced by adding an activation function.
It should further be noted that the above is merely one example of the preset machine-learning model used by the present invention for model training; in practical applications, more or fewer layers can be included according to application needs.
In a preferred embodiment, the first generator and the second generator can be used in combination; the combination, as a whole, takes original oral CT image data as input and outputs the coded orthodontic plan.
The artificial-intelligence-based automatic orthodontic-plan planning method proposed in the embodiments of the present invention can rely on the second generator to derive the orthodontic plan quickly and automatically; the plan is based on objective criteria and is not affected by subjective or external factors.
During implementation of the orthodontic plan, the main goal of orthodontic treatment is the expected movement of the teeth, which can be represented by each tooth's spatial transformation relative to the previous stage (mathematically, a three-dimensional spatial transformation matrix). Each stage accomplishes one or more treatment purposes, such as closing gaps between teeth, expanding the arch to create space, or pushing the molars backward to create space.
From the description of obtaining the coded orthodontic plan, the treatment process is long and can be decomposed into multiple stages; in each stage, standard operation methods can be applied separately to certain teeth, using specific force measures to move the teeth and achieve the stage's corrective effect. To design the treatment plan more precisely, expressing the stage's corrective targets quantitatively or in three-dimensional visualization helps the doctor judge whether the plan is reasonable and helps the patient know what tooth changes to expect. On this basis, an embodiment of the present invention further provides a method for predicting the treatment effect of an orthodontic plan, the prediction method including:
S301. Obtain the three-dimensional digital tooth model.
S303. Obtain the orthodontic plan expressed in coded form.
S305. Input the three-dimensional digital tooth model and the coded orthodontic plan into the third generator to obtain the prediction result of the plan, displayed as an animation.
The traditional method uses three-dimensional scanning to obtain a digital model of the teeth, obtains separate crown models after digital segmentation, displays the three-dimensional arrangement of the crowns (dentition model) with three-dimensional visualization, and interactively moves the position of each targeted crown (including translation and torsion) to obtain the subjectively predicted (or expected) target arrangement (dentition model). Clearly, the CT-based three-dimensional tooth model of the embodiments of the present invention (containing both crown and root), together with an intelligent algorithm, can achieve the same effect.
Specifically, an embodiment of the present invention provides a training method for the third generator, including:
S100. Acquire training data, which includes three-dimensional digital tooth models, the corresponding coded orthodontic plans, the movement data of each stage of the plans, and the time required to complete each stage.
Specifically, the three-dimensional digital tooth model can be obtained by the method of steps S101-S105 of the embodiments, or can be a crown model obtained by traditional three-dimensional scanning.
The three-dimensional crown model is obtained and the teeth are arranged by the traditional method to obtain each stage's movement data; the time needed to complete each stage is obtained empirically.
S200. Train a preset neural network with the training data to obtain the third generator, which takes a three-dimensional digital tooth model and the corresponding coded orthodontic plan as input and outputs the movement data of each stage of the plan and the time required to complete each stage.
Preferably, the third generator can also render an animation from each stage's movement data and completion time.
In a preferred embodiment, the first, second, and third generators can be used in combination; the combination, as a whole, takes original oral CT image data as input and outputs the prediction result. The second and third generators can be trained with the same or different neural networks.
Compared with the prior art, the treatment-effect prediction method provided by the embodiments of the present invention can obtain the predicted effect of an orthodontic plan quickly, conveniently, and vividly, lowering the difficulty of practitioners' work, significantly reducing doctors' burden, and giving patients a vivid impression of the orthodontic plan they will receive.
To implement the orthodontic plan, the corresponding corrective appliance must be made according to it. Specifically, the embodiments of the present invention can make the bracketless appliance required for invisible correction; the manufacturing method, shown in FIG. 8, includes:
S401. Obtain labeled oral CT image data and the three-dimensional digital tooth model derived from it.
S403. Obtain the orthodontic plan from the labeled oral CT image data.
S405. Obtain a printable dentition model according to the orthodontic plan, combined with the three-dimensional digital tooth model derived from the labeled oral CT image data.
Specifically, based on the manufacturing principle of bracketless appliances, the animation data corresponding to the orthodontic plan can be assembled and the model output as a 3D-printable dentition model. The bracketless appliance is then made by thermoforming a polymer film.
Specifically, the labeled oral CT image data, the three-dimensional digital tooth model derived from it, the orthodontic plan, and the plan's animation data can all be obtained with the methods provided by the embodiments of the present invention.
S407. Make the appliance based on the dentition model.
The processes of making the mold and making the appliance can be merged into one step — printing directly in the shape of the appliance — eliminating the film-pressing step and further improving the manufacturing efficiency of the appliance.
In one feasible embodiment, if the labeled oral CT image data is sufficiently clear and the error between the obtained three-dimensional digital tooth model and the real crown surface meets the requirement, the model can be used directly for printable-dentition modeling. Specifically, the error requirement may depend on the specific orthodontic plan: different plans require different tolerances between the model's crown surface and the real crown surface.
Given that the labeled oral CT image data may contain noise such as soft tissue, it may not be clear enough; therefore, a professional doctor can also evaluate whether the obtained printable dentition model can be used to make an appliance.
In another feasible embodiment, to reflect the form of the crown's outer surface more precisely, the crown outer-surface model can be obtained by three-dimensional impression taking. The making of bracketless appliances demands high conformity to the crown's outer surface, usually requiring 3D-scan impressions and post-processing: the crown outer-surface model is obtained by intraoral scanning, or by taking a plaster model of the teeth outside the mouth and scanning it. The scanned crown model and the crown portion of the three-dimensional digital tooth model obtained in the embodiments differ very little in shape; in some cases, because of threshold selection, the digital model's crown may be slightly smaller than the real crown, or, because of scanning error, the 3D scan may differ locally from the digital model.
Further, the CT-based three-dimensional digital tooth model of the embodiments and the model obtained by three-dimensional scanning are two representations in three-dimensional space; since their coordinate systems differ, they do not coincide. A common three-dimensional spatial data-matching algorithm (such as the ICP algorithm) can match the scanned crown model to the crown position in the digital tooth model so that they coincide.
Further, the methods of the present invention can be freely combined to achieve automated orthodontics. In the prior art, the path from medical imaging to a treatment plan usually depends heavily on the doctor and is hard to automate. The embodiments of the present invention make extensive use of machine-learning methods, obtaining intelligent computation models for orthodontic treatment through big-data training, thereby replacing or partially replacing the physician's judgment and decision-making. Compared with the prior art, the invention is less susceptible to individual doctors' subjectivity and can also improve the efficiency of diagnosis and treatment.
The invention decomposes the whole diagnosis-and-treatment process into multiple independent computation processes; the agents used — the first generator, second generator, and third generator — can each be trained with deep-learning or neural-network machine learning, decoupling the various medical steps required for diagnosis and treatment, reducing the dependence on raw data, and improving the accuracy of each stage.
An embodiment of the present invention also discloses an artificial-intelligence-based orthodontic device, as shown in FIG. 9, including:
a labeled-oral-CT-image-data acquisition module 501 for acquiring labeled oral CT image data, in which the tooth region on each image frame is enclosed in the form of annotations; each enclosed region is labeled with the corresponding tooth number, and non-tooth regions are set to 0; the labeled oral CT image data is obtained by inputting original oral CT image data into a pre-trained first generator;
an orthodontic-plan acquisition module 502 for inputting the labeled oral CT image data into a second generator to obtain an orthodontic plan expressed in coded form;
a prediction module 503 for inputting the three-dimensional digital tooth model and the coded orthodontic plan into a third generator to obtain a prediction result of the plan, the prediction result being displayed as an animation;
a printable-dentition-model acquisition module 504 for obtaining a printable dentition model according to the orthodontic plan combined with the three-dimensional digital tooth model derived from the labeled oral CT image data;
an appliance manufacturing module 505 for making the appliance based on the dentition model.
Further, the device includes:
a second-generator training module for training a preset machine-learning model to obtain the second generator;
a third-generator training module for training the third generator, the third-generator training module including:
a training-data unit for acquiring training data, which includes three-dimensional digital tooth models, the corresponding coded orthodontic plans, the movement data of each stage of the plans, and the time required to complete each stage;
a training unit for training a preset neural network with the training data to obtain the third generator, which takes a three-dimensional digital tooth model and the corresponding coded orthodontic plan as input and outputs the movement data of each stage of the plan and the time required to complete each stage.
It should be noted that the device embodiment shares the same inventive concept as the method embodiment.
The division of modules/units in the present invention is only a division of logical functions; in actual implementation there may be other divisions — for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Some or all of the modules/units can be selected according to actual needs to achieve the purpose of the solution of the present invention.
In addition, the modules/units in the embodiments of the present invention may be integrated into one processing unit, may exist physically alone, or two or more units may be integrated into one unit; the integrated unit can be implemented in the form of hardware or of a software functional unit.
The above are only preferred embodiments of the present invention. It should be pointed out that those of ordinary skill in the art can make several improvements and refinements without departing from the principles of the invention, and these improvements and refinements shall also be regarded as falling within its protection scope.

Claims (10)

  1. An artificial-intelligence-based orthodontic method, characterized by comprising:
    acquiring labeled oral CT image data, in which the tooth region on each image frame is enclosed in the form of annotations; each enclosed region is labeled with the corresponding tooth number, and non-tooth regions are set to 0;
    inputting the labeled oral CT image data into a second generator to obtain an orthodontic plan expressed in coded form;
    obtaining a printable dentition model according to the orthodontic plan, combined with the three-dimensional digital tooth model derived from the labeled oral CT image data;
    manufacturing an appliance based on the dentition model.
  2. The method according to claim 1, characterized in that:
    if the labeled oral CT image data meets a preset clarity requirement and the error between the obtained three-dimensional digital tooth model and the real crown surface meets a preset accuracy requirement, the model is used directly for modeling the printable dentition; otherwise, a crown outer-surface model is obtained by three-dimensional impression taking to obtain the printable dentition model.
  3. The method according to claim 1, characterized in that:
    the orthodontic plan includes the stages into which the orthodontic process is divided,
    the numbers of the teeth that undergo orthodontic movement in each stage,
    and the orthodontic treatment measures performed in each stage, each orthodontic stage including one or more treatment measures.
  4. The method according to claim 3, characterized in that:
    the whole orthodontic treatment plan is represented by a code sequence; one code element in the sequence is one treatment measure, each code element includes 16 digits corresponding to the 16 teeth of a single jaw, and each digit indicates how the corresponding tooth is handled.
  5. The method according to claim 1, characterized in that:
    a preset machine-learning model is trained to obtain the second generator; the preset model is a neural-network machine-learning model with two convolutional layers, two pooling layers, two fully connected layers, and one output layer;
    the training set of the second generator includes two parts: the first part is labeled oral CT image data before orthodontic treatment, and the second part is the corresponding orthodontic plan expressed in coded form; during training, the model parameters of the preset machine-learning model are adjusted until a reasonable coded orthodontic plan can be output for any pre-treatment labeled oral CT image data.
  6. The method according to claim 1, characterized by further comprising:
    inputting the three-dimensional digital tooth model and the coded orthodontic plan into a third generator to obtain a prediction result of the orthodontic plan, the prediction result being displayed as an animation.
  7. The method according to claim 6, characterized in that the training method of the third generator includes:
    acquiring training data, which includes three-dimensional digital tooth models, the corresponding coded orthodontic plans, the movement data of each stage of the plans, and the time required to complete each stage;
    training a preset neural network with the training data to obtain the third generator, which takes a three-dimensional digital tooth model and the corresponding coded orthodontic plan as input and outputs the movement data of each stage of the plan and the time required to complete each stage.
  8. An artificial-intelligence-based orthodontic device, characterized by comprising:
    a labeled-oral-CT-image-data acquisition module for acquiring labeled oral CT image data, in which the tooth region on each image frame is enclosed in the form of annotations; each enclosed region is labeled with the corresponding tooth number, and non-tooth regions are set to 0; the labeled oral CT image data is obtained by inputting the original oral CT image data into a pre-trained first generator;
    an orthodontic-plan acquisition module for inputting the labeled oral CT image data into a second generator to obtain an orthodontic plan expressed in coded form;
    a printable-dentition-model acquisition module for obtaining a printable dentition model according to the orthodontic plan combined with the three-dimensional digital tooth model derived from the labeled oral CT image data;
    an appliance manufacturing module for manufacturing an appliance based on the dentition model.
  9. The device according to claim 8, characterized by further comprising:
    a prediction module for inputting the three-dimensional digital tooth model and the coded orthodontic plan into a third generator to obtain a prediction result of the plan, the prediction result being displayed as an animation.
  10. The device according to claim 8, characterized by further comprising:
    a second-generator training module for training a preset machine-learning model to obtain the second generator;
    a third-generator training module for training the third generator, the third-generator training module including:
    a training-data unit for acquiring training data, which includes three-dimensional digital tooth models, the corresponding coded orthodontic plans, the movement data of each stage of the plans, and the time required to complete each stage;
    a training unit for training a preset neural network with the training data to obtain the third generator, which takes a three-dimensional digital tooth model and the corresponding coded orthodontic plan as input and outputs the movement data of each stage of the plan and the time required to complete each stage.
PCT/CN2018/124746 2018-12-28 2018-12-28 Artificial-intelligence-based orthodontic method and device WO2020133180A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/124746 WO2020133180A1 (zh) 2018-12-28 2018-12-28 Artificial-intelligence-based orthodontic method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/124746 WO2020133180A1 (zh) 2018-12-28 2018-12-28 Artificial-intelligence-based orthodontic method and device

Publications (1)

Publication Number Publication Date
WO2020133180A1 true WO2020133180A1 (zh) 2020-07-02

Family

ID=71127331

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/124746 WO2020133180A1 (zh) 2018-12-28 2018-12-28 Artificial-intelligence-based orthodontic method and device

Country Status (1)

Country Link
WO (1) WO2020133180A1 (zh)

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1686058A * 2005-04-28 2005-10-26 上海隐齿丽医学技术有限公司 Computer-aided invisible tooth orthodontic system
CN101427256A * 2006-02-28 2009-05-06 奥姆科公司 Software and methods for dental treatment planning
CN101977564A * 2008-01-29 2011-02-16 矫正技术公司 Method and system for optimizing dental appliance geometry
CN103405276A * 2013-07-10 2013-11-27 浙江工业大学 Digital manufacturing method of an orthodontic appliance and fixed appliance thereof
CN103932807A * 2013-01-18 2014-07-23 无锡时代天使医疗器械科技有限公司 Method for obtaining a target tooth correction state, method for manufacturing a dental appliance, and dental appliance
US20150006465A1 * 2010-03-17 2015-01-01 ClearCorrect Holdings, Inc. Methods and systems for employing artificial intelligence in automated orthodontic diagnosis & treatment planning
CN105726142A * 2016-02-01 2016-07-06 北京正齐口腔医疗技术有限公司 Method and device for automated simulated tooth arrangement
CN105761252A * 2016-02-02 2016-07-13 北京正齐口腔医疗技术有限公司 Image segmentation method and device
US9504538B2 * 2001-04-13 2016-11-29 Orametrix, Inc. Unified three dimensional virtual craniofacial and dentition model and uses thereof
CN106510867A * 2016-12-08 2017-03-22 上海牙典医疗器械有限公司 Tooth model matching method
US20170086943A1 * 2010-03-17 2017-03-30 ClearCorrect Holdings, Inc. Methods and Systems for Employing Artificial Intelligence in Automated Orthodontic Diagnosis and Treatment Planning
CN107260335A * 2017-06-26 2017-10-20 达理 Artificial-intelligence-based automated classification and design method for dentition deformities
CN107530141A * 2015-04-24 2018-01-02 阿莱恩技术有限公司 Comparative orthodontic treatment planning tool
WO2018022752A1 * 2016-07-27 2018-02-01 James R. Glidewell Dental Ceramics, Inc. Dental cad automation using deep learning
CN107909630A * 2017-11-06 2018-04-13 南京齿贝犀科技有限公司 Tooth position map generation method
CN108210095A * 2017-11-24 2018-06-29 上海牙典医疗器械有限公司 Orthodontic tooth arrangement method
CN109528323A * 2018-12-12 2019-03-29 上海牙典软件科技有限公司 Artificial-intelligence-based orthodontic method and device
CN109712703A * 2018-12-12 2019-05-03 上海牙典软件科技有限公司 Machine-learning-based orthodontic prediction method and device


Similar Documents

Publication Publication Date Title
CN109528323B (zh) Artificial-intelligence-based orthodontic method and device
US11651494B2 (en) Apparatuses and methods for three-dimensional dental segmentation using dental image data
CN109712703B (zh) Machine-learning-based orthodontic prediction method and device
CN109363786B (zh) Method and device for acquiring orthodontic correction data
US20200350059A1 (en) Method and system of teeth alignment based on simulating of crown and root movement
EP2134290B1 (en) Computer-assisted creation of a custom tooth set-up using facial analysis
US20210118132A1 (en) Artificial Intelligence System For Orthodontic Measurement, Treatment Planning, And Risk Assessment
Imak et al. Dental caries detection using score-based multi-input deep convolutional neural network
CN111274666B (zh) Design of digital tooth pose change, and simulated tooth-arrangement method and device
CN113052902B (zh) Dental treatment monitoring method
US20210357688A1 (en) Artificial Intelligence System For Automated Extraction And Processing Of Dental Claim Forms
US20210358604A1 (en) Interface For Generating Workflows Operating On Processing Dental Information From Artificial Intelligence
KR20100016180A (ko) Method for obtaining shape information
CN112419476A (zh) Method and system for creating a three-dimensional virtual image of a dental patient
Singi et al. Extended arm of precision in prosthodontics: Artificial intelligence
Jang et al. Fully automatic integration of dental CBCT images and full-arch intraoral impressions with stitching error correction via individual tooth segmentation and identification
Deleat-Besson et al. Automatic segmentation of dental root canal and merging with crown shape
Orhan et al. Artificial intelligence in dentistry
CN111275808B (zh) Method and device for building an orthodontic tooth model
CN112201349A (zh) Artificial-intelligence-based orthodontic surgical plan generation system
WO2020133180A1 (zh) Artificial-intelligence-based orthodontic method and device
JP7269587B2 (ja) Segmentation device
Bhatia et al. Artificial intelligence: an advancing front of dentistry
Sornam Artificial Intelligence in Orthodontics-An exposition
Widiasri et al. Alveolar Bone and Mandibular Canal Segmentation on Cone Beam Computed Tomography Images Using U-Net

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18944409

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18944409

Country of ref document: EP

Kind code of ref document: A1