CN112989954B - Three-dimensional tooth point cloud model data classification method and system based on deep learning - Google Patents

Three-dimensional tooth point cloud model data classification method and system based on deep learning

Info

Publication number
CN112989954B
Authority
CN
China
Prior art keywords
tooth
point cloud
classification
cloud model
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110192628.6A
Other languages
Chinese (zh)
Other versions
CN112989954A (en)
Inventor
周元峰
马乾
魏广顺
马龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN202110192628.6A priority Critical patent/CN112989954B/en
Publication of CN112989954A publication Critical patent/CN112989954A/en
Application granted granted Critical
Publication of CN112989954B publication Critical patent/CN112989954B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a three-dimensional tooth point cloud model data classification method and system based on deep learning, comprising the following steps: acquiring a tooth three-dimensional model, and extracting a whole set of tooth point cloud models from the tooth three-dimensional model; segmenting the whole set of tooth point cloud models to obtain a plurality of single-tooth point cloud models; extracting the point cloud model characteristics, the relative position characteristics and the adjacent similarity characteristics of the single tooth, and respectively inputting these characteristics into a classifier; the input characteristics are classified in the classifier, and the primary classification result of the single tooth is output.

Description

Three-dimensional tooth point cloud model data classification method and system based on deep learning
Technical Field
The invention relates to the technical field of point cloud data processing, in particular to a three-dimensional tooth point cloud model data classification method and system based on deep learning.
Background
The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
'Digital dentistry' is a concept that has been proposed for many years: digitalizing the dental diagnosis and treatment process and, by means of CAD/CAM technology, reducing the time and energy a dentist spends on basic diagnosis and treatment steps, thereby practically improving diagnosis and treatment efficiency.
To perform dental digitization, data acquisition is performed first. Many medical imaging technologies are widely applied in current dentistry, such as X-ray film and CT, and these imaging technologies produce two-dimensional images. However, dental treatment requires establishing spatial relationships, and such imaging techniques are not intuitive. CBCT and intra-oral scanning technology solve this problem: the medical images they establish are three-dimensional, which has greatly promoted the development of oral medical technology, and both are now widely applied. For a digital orthodontic procedure, the oral cavity model is the data base, and on this base the first processing step is segmentation and classification of the tooth model. Since the parts other than the teeth are not important to the dentist, they are first cut off from the model; and since the teeth are connected in the three-dimensional model, the invention also segments the teeth to obtain a single-tooth model of each tooth. The segmentation position can be determined from the surface normal direction; a large number of related techniques exist here and are not described in detail.
Most existing tooth classification methods revolve around two-dimensional dental images. Miki Y., Muramatsu C., Hayashi T., Zhou X., Hara T., Katsumata A., Fujita H.: Classification of teeth on CT data using a deep convolutional neural network, and Kuo Y.-F., Lin S.-Y., Wu C.H., Chen S.-L., Lin T.-L., Lin N.-H., Mai C.-H., Villaverde J.F.: A convolutional neural network approach for dental panoramic radiograph classification, Journal of Medical Imaging and Health Informatics 7, 8 (2017), 1693-1704, classify dental panoramas; Eun H., Kim C.: Oriented tooth localization for periapical dental X-ray images via convolutional neural network, in 2016 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA) (2016), IEEE, pp. 1-7, classifies X-ray images. These are all classifications on two-dimensional medical images, which are comparatively simple and not as intuitive as three-dimensional medical images. Pavaloiu I.-B., Vasilateanu A., Goga N., Marin I., Ungar A., Patrascu I.: Teeth labeling from CBCT data using the circular Hough transform, in 2016 International Symposium on Fundamentals of Electrical Engineering (ISFEE) (2016), IEEE, pp. 1-4, uses a circular Hough transform for tooth classification, which requires manual assistance and increases learning and operating costs, and is therefore not conducive to clinical use.
However, even three-dimensional medical images require analysis by a doctor, so preliminary intelligent processing of these images is a problem to be solved, the first step of which is identifying each tooth in the image. Human teeth follow a distribution rule: the teeth are distributed along an arch line, with the various tooth types arranged from the middle to the two sides in a fixed order. But not all teeth follow this rule completely. According to relevant statistics, the prevalence of dental disease in China astonishingly exceeds ninety percent, and many people have problems such as tooth deformity, malposition, absence and damage for various reasons. Teeth can therefore present various abnormal conditions in medical images, which causes great difficulty for tooth classification.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a three-dimensional tooth point cloud model data classification method and system based on deep learning;
in a first aspect, the invention provides a deep learning-based three-dimensional tooth point cloud model data classification method;
the three-dimensional tooth point cloud model data classification method based on deep learning comprises the following steps:
acquiring a tooth three-dimensional model, and extracting a whole set of tooth point cloud model from the tooth three-dimensional model;
dividing the whole set of tooth point cloud models to obtain a plurality of single tooth point cloud models;
extracting the point cloud model characteristics, the relative position characteristics and the adjacent similarity characteristics of the single tooth, and respectively inputting these characteristics into a classifier; the input characteristics are classified in the classifier, and the primary classification result of the single tooth is output.
In a second aspect, the invention provides a deep learning-based three-dimensional tooth point cloud model data classification system;
three-dimensional tooth point cloud model data classification system based on deep learning includes:
an acquisition module configured to: acquiring a tooth three-dimensional model, and extracting a whole set of tooth point cloud model from the tooth three-dimensional model;
a segmentation module configured to: dividing the whole set of tooth point cloud models to obtain a plurality of single tooth point cloud models;
a classification module configured to: extracting the point cloud model characteristics, the relative position characteristics and the adjacent similarity characteristics of the single tooth point cloud model, and inputting the characteristics into a classifier respectively; the input multiple characteristics are classified in the classifier, and the primary classification result of the single tooth is output.
In a third aspect, the present invention further provides an electronic device, including: one or more processors, one or more memories, and one or more computer programs; wherein a processor is connected to the memory, the one or more computer programs being stored in the memory, and when the electronic device is running, the processor executes the one or more computer programs stored in the memory, so as to make the electronic device execute the method according to the first aspect.
In a fourth aspect, the present invention also provides a computer-readable storage medium for storing computer instructions which, when executed by a processor, perform the method of the first aspect.
Compared with the prior art, the invention has the beneficial effects that:
firstly, a set of tooth point cloud models are segmented, independent single tooth models are separated from the segmented tooth point cloud models, and meanwhile, the relative position and the adjacency relation characteristics of the models are calculated on the set of tooth point cloud models. The method comprises the steps of inputting point cloud of a single tooth into a point cloud feature extraction network, extracting features of a point cloud model, combining the point cloud feature extraction network with the relative position and the adjacent relation features of the model to serve as features finally used for classification, inputting a classification network formed by full connection layers, and adding two kinds of additional feature information into each layer of the full connection layers to strengthen the effect of the two kinds of additional feature information so as to improve classification accuracy. Because the number of various tooth models in the data set has an unbalanced condition, the invention also uses a FocalLoss loss function to supervise and predict information and realize tooth classification. The classification result obtained from the network model can be subjected to the abnormal detection of the tooth abnormal detection processing module, so that the classification error caused by the deformity in the tooth model can be corrected, and the accurate classification of each tooth can be finally obtained.
Originally, tooth categories could be judged only by the experience of a doctor; by learning the characteristics of the teeth in the image, a better classification result can be obtained, and the invention solves the tooth classification problem based on this idea. For current three-dimensional dental images, whether CBCT imaging or a three-dimensional model obtained by oral scanning, the model can be converted into a point cloud model through operations such as model reconstruction, surface smoothing and surface subdivision; the point cloud model is simple in structure and can describe the main three-dimensional geometric information of the teeth. The invention therefore classifies teeth based on point cloud models.
The invention defines two new characteristics for tooth point cloud model classification: a relative position feature vector and a neighborhood similarity vector. Both vectors provide additional feature information for tooth classification based on prior information of tooth arrangement.
The invention, based on the deep learning method, effectively extracts and utilizes the three-dimensional geometric feature information of the point cloud model, and performs high abstraction and summarization of features through sampling and multilayer perceptrons.
The method can realize high-precision classification of the tooth point cloud model, and can obtain better effect compared with other point cloud classification methods.
The invention relates to an orthodontic tooth classification method that can handle various abnormal conditions in the classification task, including missing, deformed, malposed and adhered teeth, and has good robustness.
The invention implements a basic data processing step of digital dental technology, plays an important role in it, and can greatly promote its development.
Advantages of additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the invention and together with the description serve to explain the invention, and do not limit the invention.
FIG. 1 is a flow chart of a method of the first embodiment;
FIG. 2 illustrates the tooth categories and their corresponding labels of the first embodiment;
FIG. 3 is a relative position feature vector and an adjacency similarity vector for the first embodiment;
FIG. 4 is a point cloud feature abstraction module of the first embodiment;
FIG. 5 is a classifier structure for classification of a dental point cloud model according to a first embodiment;
FIG. 6 is a first embodiment of a method for correcting abnormal situations in tooth classification;
FIG. 7 (a) is a classification ambiguity resulting from missing teeth for the first embodiment;
FIG. 7 (b) is a graph showing that missing teeth of the first embodiment do not lead to classification ambiguity;
FIG. 8 is a flowchart of a tooth classification method according to a first embodiment;
FIG. 9 is a diagram illustrating a structure of a point cloud feature extraction module according to a first embodiment;
FIG. 10 is a diagram of a point cloud classifier of the first embodiment;
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the invention. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the invention. As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise, and it should be understood that the terms "comprises" and "comprising", and any variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiments and features of the embodiments of the present invention may be combined with each other without conflict.
Example one
The embodiment provides a three-dimensional tooth point cloud model data classification method based on deep learning;
as shown in fig. 1 and 8, the method for classifying data of a three-dimensional tooth point cloud model based on deep learning includes:
s101: acquiring a tooth three-dimensional model, and extracting a whole set of tooth point cloud model from the tooth three-dimensional model;
s102: dividing the whole set of tooth point cloud models to obtain a plurality of single tooth point cloud models;
s103: extracting the point cloud model characteristics, the relative position characteristics and the adjacent similarity characteristics of the single tooth point cloud model, and inputting the characteristics into a classifier respectively; the input various characteristics are classified in the classifier, and the primary classification result of the single tooth is output.
Further, the method further comprises:
s104: and carrying out tooth category abnormity detection on the classification result of the single tooth, and correcting the abnormal tooth category to obtain a final tooth classification result.
Further, in step S101 the tooth three-dimensional model is obtained by intra-oral scanning or by CBCT reconstruction.
Further, after the step of obtaining the tooth three-dimensional model in S101 and before the step of extracting the entire tooth point cloud model from the tooth three-dimensional model, the method further includes:
and performing curved surface smoothing treatment on the tooth three-dimensional model.
Further, the S101: acquiring a tooth three-dimensional model, and extracting a whole set of tooth point cloud model from the tooth three-dimensional model; the method specifically comprises the following steps:
reading tooth mouth-scan data, performing surface shaping, filling holes in the scanned model, and performing surface smoothing; if the read data is CBCT data, directly reading the tooth DICOM files and converting the file format to obtain a bitmap file for each slice, reconstructing three-dimensional data from the bitmap files, obtaining complete tooth three-dimensional information from the three-dimensional data through the ToothNet network model, and reconstructing the tooth three-dimensional model.
Illustratively, DICOM files storing CBCT data are read, and the tooth model is reconstructed, completed and smoothed. If the input data is oral scanning data, the STL data obtained by scanning is read first; a three-dimensional model can be obtained directly, but this model data is unprocessed and needs denoising, completion, smoothing, rough segmentation and other processing before a three-dimensional model of practical application value is obtained.
A test environment for the ToothNet network model is built, including installing the PyTorch deep learning framework and downloading the pre-trained network model.
The processed data is directly input into ToothNet for testing, and predicted segmentation data is output. The segmentation result is in voxel form, while the tooth classification task requires point cloud data; through three-dimensional spatial interpolation and other operations, the voxel data is converted into a tooth point cloud model.
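As an illustration of this voxel-to-point conversion step, the following is a minimal Python sketch; the function name, the voxel spacing, and the use of plain occupied-voxel coordinates (rather than the spatial interpolation operations mentioned above) are illustrative assumptions, not the patent's actual pipeline.

```python
# Hedged sketch: turning a binary voxel segmentation (as a ToothNet-style
# network might output) into point-cloud form. Spacing values are assumed.
import numpy as np

def voxels_to_point_cloud(voxel_mask, spacing=(0.3, 0.3, 0.3)):
    """Convert occupied voxel indices into 3D points scaled by voxel spacing."""
    indices = np.argwhere(voxel_mask > 0)            # (M, 3) integer voxel coords
    return indices.astype(np.float64) * np.asarray(spacing)

mask = np.zeros((8, 8, 8), dtype=np.uint8)
mask[2:4, 2:4, 2:4] = 1
print(voxels_to_point_cloud(mask).shape)             # (8, 3)
```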
Further, as shown in fig. 2, the S102: dividing the whole set of tooth point cloud models to obtain a plurality of single tooth point cloud models; the method comprises the following specific steps:
and (3) segmenting the whole set of tooth point cloud model by using the normal information of the point cloud, separating teeth from other tissues and teeth, and separating a single tooth model from the whole set of tooth point cloud model.
Illustratively, in the whole set of tooth point cloud models, the curvature is calculated from the normal at each point of the point cloud. The boundaries between the tooth model and the gum, and between teeth, lie in furrow-like areas of the model where the curvature is negative, so points with negative curvature in the tooth model are regarded as boundary points, and the boundary is refined with an optimization method to obtain the boundary line. The tooth model is then divided along the boundary to obtain the single-tooth models.
Illustratively, an STL file storing point cloud data is read, the point cloud normals are estimated from local feature information of the point cloud, and point cloud model segmentation is performed according to the normals. The point cloud is divided into single-tooth models and other tissue parts; professionals are asked to inspect and label the data, marking each segmented single-tooth model with the category of the tooth to which it belongs. A data set usable for the tooth classification learning task is thus constructed.
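To make the negative-curvature boundary test concrete, here is a small numpy/scipy sketch of one possible concavity proxy; it assumes outward-oriented normals have already been estimated, and the neighbourhood size and threshold are illustrative, not the patent's parameters.

```python
import numpy as np
from scipy.spatial import cKDTree

def concave_boundary_points(points, normals, k=16, threshold=0.0):
    """Flag points in concave (furrow-like) regions: neighbours that sit above
    the local tangent plane along the outward normal indicate concavity, so the
    mean signed offset serves as a crude sign-of-curvature proxy."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)      # nearest neighbours (self included)
    flags = np.zeros(len(points), dtype=bool)
    for i in range(len(points)):
        offsets = points[idx[i, 1:]] - points[i]   # vectors to the k neighbours
        signed = offsets @ normals[i]              # offset along the outward normal
        flags[i] = signed.mean() > threshold       # concave -> positive mean offset
    return flags
```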
Further, as shown in fig. 9, the S103: extracting the point cloud model characteristics of the single tooth from the point cloud model of the single tooth; the method comprises the following specific steps:
s1031: respectively performing up-sampling on the single tooth point cloud model by adopting a first sampling radius, a second sampling radius and a third sampling radius to obtain first sampling data, second sampling data and third sampling data;
s1032: inputting the first sampling data into a first multilayer perceptron, and outputting a first feature vector;
inputting the second sampling data into a second multilayer perceptron, and outputting a second feature vector;
inputting the third sampling data into a third multilayer perceptron, and outputting a third feature vector;
fusing the first, second and third feature vectors to obtain a first fused vector;
s1033: respectively performing upsampling on the first fusion vector by adopting fourth, fifth and sixth sampling radii to obtain fourth sampling data, fifth sampling data and sixth sampling data;
s1034: inputting the fourth sampling data into a fourth multilayer perceptron, and outputting a fourth feature vector;
inputting the fifth sampling data into a fifth multilayer perceptron, and outputting a fifth feature vector;
inputting the sixth sampling data into a sixth multilayer perceptron, and outputting a sixth feature vector;
fusing the fourth, fifth and sixth feature vectors to obtain a second fused vector;
s1035: and inputting the second fusion vector into a seventh multilayer perceptron to obtain the point cloud model characteristic of the single tooth.
Illustratively, the point cloud data input into the network is first sampled at several different radii, the sampling radii being 0.1, 0.2 and 0.3. R in the figure is the sampling radius. The sampled point cloud data is input into multilayer perceptrons (MLP), whose parameters are determined by the sizes of the input and output vectors; the sampling points are input into different three-layer MLPs, yielding point cloud features of different sizes, which are then combined into a merged feature vector.

Illustratively, the feature point cloud is then sampled again at several different radii, each sampling radius doubled to 0.2, 0.4 and 0.6. The sampling points are input into different multilayer perceptrons and the features are merged; finally, the merged vectors undergo feature fusion through a multilayer perceptron to obtain the features extracted from the point cloud.
As shown in fig. 4 and 9, the point cloud feature abstraction module used in the present invention can be divided into three function operation modules connected in a multi-layer manner.
(1) First feature abstraction: the input point cloud is sampled at different scales, with sampling radii of 0.1, 0.2 and 0.4 and 16, 32 and 128 sampled points respectively, giving different sampling results; these are input into three-layer perceptrons with different neuron counts, the three perceptrons being (32, 32, 64), (64, 64, 128) and (64, 96, 128). The vector finally output by the perceptrons is thus a 320-dimensional vector (64 + 128 + 128).
(2) Second feature abstraction: a 320-dimensional feature point cloud is obtained from the result of the first feature abstraction. A second feature abstraction is performed on this data, sampling again with radii of 0.2, 0.4 and 0.8 and 32, 64 and 128 sampled points respectively to obtain sampling results on the different feature point clouds; these are input into three-layer perceptrons with different neuron counts, the three perceptrons being (64, 64, 128), (128, 128, 256) and (128, 128, 256). The output is a 640-dimensional feature vector (128 + 256 + 256).
(3) Feature fusion: the result of the second feature abstraction undergoes feature fusion, with exchange and fusion between features through a three-layer perceptron of (256, 512, 1024); the final point cloud is abstracted into a 1024-dimensional feature vector.
Through multiple rounds of feature extraction and fusion, features of different scales are encoded, finally obtaining a feature vector and outputting a feature point cloud.
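A compact PyTorch sketch of one such multi-scale abstraction level is given below. It assumes the point grouping (farthest point sampling plus ball query) has already been done and uses random tensors as stand-ins for the grouped neighbourhoods, and it shows only xyz inputs (3 channels) rather than the 6-channel coordinates-plus-normals input described later; it is a sketch of the technique, not the patent's exact network.

```python
import torch
import torch.nn as nn

class RadiusGroupMLP(nn.Module):
    """One sampling branch: run a shared MLP over the K grouped neighbours of
    each centroid, then max-pool over the group to one code per centroid."""
    def __init__(self, in_dim, mlp_dims):
        super().__init__()
        layers, d = [], in_dim
        for out_d in mlp_dims:
            layers += [nn.Conv2d(d, out_d, 1), nn.BatchNorm2d(out_d), nn.ReLU()]
            d = out_d
        self.mlp = nn.Sequential(*layers)

    def forward(self, grouped):                       # (B, in_dim, centroids, K)
        return self.mlp(grouped).max(dim=-1).values   # (B, out_dim, centroids)

# First abstraction level as described above: three radii, three MLP widths,
# concatenated into a 320-dimensional (64 + 128 + 128) per-centroid feature.
branches = nn.ModuleList([
    RadiusGroupMLP(3, (32, 32, 64)),
    RadiusGroupMLP(3, (64, 64, 128)),
    RadiusGroupMLP(3, (64, 96, 128)),
])
grouped = [torch.randn(1, 3, 512, k) for k in (16, 32, 128)]  # stand-in groupings
fused = torch.cat([b(g) for b, g in zip(branches, grouped)], dim=1)
print(fused.shape)                                    # torch.Size([1, 320, 512])
```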
Further, the step S103: extracting relative position characteristics of the point cloud model of the single tooth; the method comprises the following specific steps:
calculating the center of each single-tooth model, wherein the calculation method comprises the following steps:
$C(i) = \frac{1}{n} \sum_{j=1}^{n} p(j)$

where C(i) is the center of the i-th tooth; if the single-tooth point cloud model contains n points and p(j) denotes the coordinates of the j-th point, the center of the single-tooth point cloud model is the mean of the coordinates of its points.
For each whole set of tooth models, the tongue-root point is set as the origin; the direction from the origin to the central point of the incisors is the positive direction of the ordinate axis, the leftward direction from the origin is the positive direction of the abscissa axis, and the direction perpendicular to these two axes and pointing upward is the positive direction of the vertical axis. A coordinate system is thus established with the tooth-arrangement midline as a coordinate axis.
$F_{pos}(i) = C(i) - P_0 \quad (1)$

where C(i) is the center of the i-th tooth, $F_{pos}(i)$ is the relative position feature corresponding to the i-th tooth, and $P_0$ is the origin of coordinates.
And calculating a vector from the origin of the reference system to the center of each tooth point cloud model, and taking the vector as a relative position feature vector of the tooth.
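A minimal sketch of the center computation and equation (1) follows, assuming the reference origin $P_0$ has already been placed at the tongue-root point as described above.

```python
import numpy as np

def tooth_center(points):
    """C(i): mean of the coordinates of one single-tooth point cloud."""
    return points.mean(axis=0)

def relative_position_feature(points, origin):
    """F_pos(i) = C(i) - P_0: vector from the reference origin to the tooth center."""
    return tooth_center(points) - origin

tooth = np.array([[1.0, 2.0, 0.0], [1.2, 2.1, 0.1],   # toy 4-point "tooth"
                  [0.8, 1.9, 0.0], [1.0, 2.0, 0.3]])
print(relative_position_feature(tooth, np.zeros(3)))
```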
Further, as shown in fig. 3, the S103: extracting the adjacent similarity characteristic of the point cloud model of the single tooth; the method comprises the following specific steps:
for the teeth arranged in order from left to right, the center positions of the tooth with index i and of the adjacent teeth with indices i-1 and i+1 are calculated to obtain the vectors from the middle tooth to its neighbours; the cosines of the angles between these vectors and the opposite of the middle tooth's relative position vector form the adjacency similarity vector:
$S_1(i) = \cos(-F_{pos}(i),\ F_{pos}(i-1) - F_{pos}(i)) \quad (2\text{-}1)$

$S_2(i) = \cos(-F_{pos}(i),\ F_{pos}(i+1) - F_{pos}(i)) \quad (2\text{-}2)$

$F_{adj}(i) = [S_1(i), S_2(i)] \quad (2\text{-}3)$

where $F_{adj}(i)$ is the adjacency similarity feature vector corresponding to the i-th tooth, $F_{pos}(i)$ is the relative position feature of the i-th tooth, and $S_1$ and $S_2$ are the first and second components of the adjacency similarity vector.
The adjacency similarity vector is calculated using the triangle rule of vectors. The adjacency similarity of each tooth is thus obtained; for the extreme teeth on the left and right sides, the missing component of the vector is padded with 0.
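The following is a small sketch of equations (2-1)-(2-3), including the zero-padding for the leftmost and rightmost teeth; the list-of-vectors interface is an illustrative assumption.

```python
import numpy as np

def cos_angle(u, v):
    """Cosine of the angle between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def adjacency_similarity(f_pos, i):
    """F_adj(i) = [S1, S2]; components for missing neighbours (the extreme
    left/right teeth) are padded with 0, as described above."""
    s = []
    for j in (i - 1, i + 1):
        if 0 <= j < len(f_pos):
            s.append(cos_angle(-f_pos[i], f_pos[j] - f_pos[i]))
        else:
            s.append(0.0)
    return np.array(s)

f_pos = [np.array([-10.0, 20.0, 0.0]),
         np.array([0.0, 25.0, 0.0]),
         np.array([10.0, 20.0, 0.0])]
print(adjacency_similarity(f_pos, 1))   # [S1, S2] for the middle tooth
```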
Further, the S103 further includes: and storing the relative position coordinate vector and the adjacent similarity vector of each tooth, and matching the relative position coordinate vector and the adjacent similarity vector with the point cloud model data of the single tooth one by one.
Further, as shown in fig. 7 (a), 7 (b) and 10, the S103: extracting the point cloud model characteristics, the relative position characteristics and the adjacent similarity characteristics of the single tooth, and respectively inputting these characteristics into a classifier; the input characteristics are classified in the classifier, and the primary classification result of the single tooth is output; the method comprises the following specific steps:
the classifier comprises three full connection layers and a softmax function layer which are sequentially connected in series;
fusing the point cloud model feature of the single tooth, the relative position feature and the adjacent similarity feature to obtain a fusion feature;
the fusion characteristics are input into a first full-connection layer for processing to obtain a first processing result;
fusing the first processing result with the relative position characteristic and the adjacent similarity characteristic, and sending the fused result into a second full-connection layer to obtain a second processing result;
fusing the second processing result with the relative position characteristic and the adjacent similarity characteristic, and sending the fused result into a third full-connection layer to obtain a third processing result;
and sending the third processing result into a softmax function layer, and outputting a primary classification result of the single tooth.
Illustratively, the feature vector passes through multiple fully connected layers, the input of each layer combining the output of the previous layer with the relative position feature vector and the adjacency similarity vector; i.e., these two features are reinforced layer by layer.
Illustratively, the dimensionality of the feature vector obtained from the last layer equals the number of tooth classes to be distinguished; it is input into a Softmax layer to obtain the probability of each class for the input point cloud model, and the class with the maximum probability is taken as the primary classification result of the point cloud.
As shown in fig. 5, the training step of the trained classifier includes:
constructing a classifier; the classifier comprises three full-connection layers and a softmax function layer which are sequentially connected in series;
constructing a training set; the training set includes tooth features of known tooth classification labels; wherein, the tooth characteristic refers to fusion characteristic, relative position characteristic and adjacent similarity characteristic; the fusion characteristic refers to a fusion result of the point cloud model characteristic of the single tooth, the relative position characteristic and the adjacent similarity characteristic;
and inputting the training set into a classifier, and training the classifier to obtain the trained classifier.
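A hedged PyTorch sketch of this classifier structure follows; the hidden-layer widths, the 1024-dimensional point feature, the 5-dimensional extra feature (3-d relative position plus 2-d adjacency similarity) and the 32 output classes are illustrative assumptions, not the patent's fixed settings.

```python
import torch
import torch.nn as nn

class ToothClassifier(nn.Module):
    """Three fully connected layers plus softmax; the relative-position and
    adjacency-similarity features are re-concatenated before every layer so
    the two additional features are reinforced layer by layer."""
    def __init__(self, feat_dim=1024, extra_dim=5, n_classes=32, hidden=(512, 256)):
        super().__init__()
        self.fc1 = nn.Linear(feat_dim + extra_dim, hidden[0])
        self.fc2 = nn.Linear(hidden[0] + extra_dim, hidden[1])
        self.fc3 = nn.Linear(hidden[1] + extra_dim, n_classes)
        self.act = nn.ReLU()

    def forward(self, point_feat, extra):             # extra = [F_pos ; F_adj]
        x = self.act(self.fc1(torch.cat([point_feat, extra], dim=1)))
        x = self.act(self.fc2(torch.cat([x, extra], dim=1)))
        logits = self.fc3(torch.cat([x, extra], dim=1))
        return torch.softmax(logits, dim=1)           # per-class probabilities

model = ToothClassifier()
probs = model(torch.randn(4, 1024), torch.randn(4, 5))
print(probs.argmax(dim=1))                            # primary class per tooth
```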
Further, the S104: carrying out tooth category abnormity detection on the classification result of the single tooth, and correcting the abnormal tooth category to obtain a final tooth classification result; the method comprises the following specific steps:
s1041: calculating the position mean value of the center of each type of tooth point cloud model in the data set, and taking the position mean value as a template;
s1042: comparing each classified tooth point cloud model with the template: the relative position feature at the corresponding position in the tooth arrangement order is compared for similarity with the relative position feature of the template; if the similarity exceeds the set threshold, the classification result is deemed acceptable, otherwise proceed to S1043;
s1043: checking the classification condition, and processing the condition of the adhered teeth;
s1044: and checking the classification condition and processing the tooth missing condition.
Further, as shown in fig. 6, the S1043: checking the classification condition, and processing the adhesion teeth; the method comprises the following specific steps:
and reclassifying the results which are not in the acceptable classification result range, and calculating the similarity between the relative position coordinate vector of the current tooth model and each template vector to classify the results into the most similar class.
The process is expressed as follows:
$j^{*} = \arg\max_{j} \cos(F_{pos}(k),\ F_T(j)) \quad (3)$
If the preliminary classification prediction is correct, the relative position feature vector is $F_{pos}(i)$ and the corresponding template vector is $F_T(i)$; the cosine of the angle between $F_{pos}(i)$ and $F_T(i)$ is calculated, and if it is greater than 0.85 the classification result is deemed acceptable.

If there is a problem in the prediction, e.g. the relative position feature vector is $F_{pos}(k)$ with corresponding template vector $F_T(k)$ and the cosine of the angle between the two vectors is not greater than 0.85, the prediction falls outside the acceptable range and is redone: the similarity between $F_{pos}(k)$ and all template vectors is calculated and the most similar one, say $F_T(j)$, is selected; the tooth is then re-classified into the j-th category.
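A short sketch of this heuristic correction, assuming template vectors have already been computed as per-class means of the relative position vectors; the 0.85 cosine threshold follows the text above, while the toy template values are illustrative.

```python
import numpy as np

def correct_prediction(f_pos_k, pred, templates, accept_cos=0.85):
    """Accept the primary class if cos(F_pos(k), F_T(pred)) > accept_cos,
    otherwise reassign to the class with the most similar template vector."""
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))
    if cos(f_pos_k, templates[pred]) > accept_cos:
        return pred
    return int(np.argmax([cos(f_pos_k, t) for t in templates]))

templates = [np.array([-12.0, 18.0, 1.0]),   # per-class mean F_pos (toy values)
             np.array([0.0, 26.0, 1.0]),
             np.array([12.0, 18.0, 1.0])]
print(correct_prediction(np.array([11.5, 17.0, 0.8]), pred=0, templates=templates))  # -> 2
```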
Further, the S1044: checking the classification condition and processing the tooth missing condition; the method comprises the following specific steps:
firstly, checking whether the missing tooth causes semantic ambiguity of classification;
as shown in fig. 7 (b), if no ambiguity is caused, the original classification result is retained;
as shown in fig. 7 (a), if an ambiguity occurs, the classification result of the neighboring tooth is used as a classification basis to classify the tooth of the tooth-missing part according to the similarity with the template.
Further, the S104: carrying out tooth category abnormity detection on the classification result of the single tooth, and correcting the abnormal tooth category to obtain a final tooth classification result; further comprising:
s1045: and checking the classification condition and processing the wisdom tooth condition.
Further, in S1045: checking the classification condition and processing the wisdom tooth condition; the method comprises the following specific steps:
classifying the wisdom teeth using an improved convolutional neural network;
The improvement to the convolutional neural network is that its loss function is replaced with the Focal Loss function.
Wisdom teeth, known in oral medicine as the third molars, are not in themselves considered a manifestation of tooth deformity. Unlike other teeth, wisdom teeth develop late: depending on genetics, they generally erupt between the ages of 16 and 35 and are the last teeth to grow in a person's life; they lie closest to the throat and are named for the relative maturity of people at that age. Not everyone grows wisdom teeth, and those who do may not grow all four, so the number of wisdom teeth in the tooth point cloud model data collected by the method is small.
Moreover, because wisdom teeth develop late, the gum often leaves insufficient space for their growth, so wisdom teeth easily develop malformed or even become impacted teeth that cannot erupt and that push against other teeth, deforming other teeth or the gum bone. This phenomenon causes great difficulty for tooth classification: since not everyone has wisdom teeth, the number of wisdom tooth model samples that can be collected is extremely limited, and these wisdom tooth models are often inconsistent and irregular in shape, which is extremely unfavorable to the learning process of a deep learning model.
Under these conditions, to obtain a better classification effect on wisdom teeth from the deep learning model, the loss function is improved. General classification tasks use multi-class cross entropy as the loss function, but the multi-class cross entropy loss function is better suited to cases where the sample numbers are relatively balanced.

The method selects Focal Loss as the loss function. This loss function was originally proposed for separating objects from the background in object detection: for most images the object occupies fewer pixels than the background, so that task also suffers from class imbalance, and Focal Loss achieves a better effect on it. Its basic principle is to reduce, through adjustable parameters, the loss of easily classified samples so that hard-to-classify samples receive more attention. The mathematical expression of the Focal Loss function in the classification task is:
$FL(p_t) = -\alpha (1 - p_t)^{\gamma} \log(p_t) \quad (4)$

where α and γ are hyper-parameters: α balances the differences between classes, γ controls the rate at which the loss decreases, and $p_t$ is the output of the Softmax layer.
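A minimal PyTorch sketch of equation (4) for the multi-class case follows; the α = 0.25, γ = 2 values are common defaults from the object-detection literature, not necessarily the patent's settings.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t), averaged over the batch."""
    log_p = F.log_softmax(logits, dim=1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p_t per sample
    pt = log_pt.exp()
    return (-alpha * (1.0 - pt) ** gamma * log_pt).mean()

loss = focal_loss(torch.randn(8, 32), torch.randint(0, 32, (8,)))
print(loss)   # easy samples (large p_t) contribute little to this loss
```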
The general concept of the invention is as follows: CBCT data or oral-scan model data is read, and a three-dimensional tooth point cloud model is obtained through data preprocessing. The three-dimensional model is uniformly standardized and placed in a unified three-dimensional coordinate system; tooth segmentation is performed, separating the single-tooth models from the other parts. The relative position feature vector and adjacency similarity vector of each single-tooth model are then calculated. Corresponding to the three-dimensional tooth point cloud model, dental professionals label the data in a semi-automatic manner to form a data set for learning. The data is input into the network: point cloud up-sampling to a uniform number of points first brings the point cloud models to a uniform scale, and point cloud features are then extracted through the point cloud feature abstraction network. These features are combined with the relative position feature vectors and adjacency similarity vectors used for tooth classification, the combined features undergo feature fusion through a multilayer perceptron, and the result is input into a classifier composed of fully connected layers and softmax, with feature combination at each fully connected layer, to obtain a primary classification result. The primary classification result is checked against the prior knowledge of tooth arrangement, and problem teeth are processed.
A simple visual interactive system is also constructed. Written on the basis of OpenGL, the system is required to include the following simple functions:
(1) A data reading function: a full set of dental data can be read and displayed in the system.
(2) Tooth movement function: rigid operations such as rotation and translation can be carried out on a single tooth, rotation and movement operations on the whole set of teeth can also be realized, and a doctor can conveniently observe the shape of the single-tooth model at a three-dimensional visual angle and carry out correct marking.
(3) Single-tooth category marking function: the professional clicks on a single-tooth model and inputs the digital label of the corresponding tooth class, using the FDI international standard two-digit notation; the specific representation method is shown in FIG. 2.
Training and learning are mainly based on the PyTorch deep learning framework; PyTorch version 1.7.0 needs to be installed in an Ubuntu 18.04 environment.
The data input of the network model comprises the spatial position information and normal direction information of the surface points of the tooth model, and the network model is designed by classifying the whole model, so that the coordinates and the normal direction of the points of the tooth model need to be read in sequence in a data reading part.
The input data of size N×6 (N, the number of surface points of the single-tooth model, is set to 2048) is fed into a sampling layer. The sampling layer selects a series of points of the output point cloud; the sampling algorithm uses iterative farthest point sampling, thereby defining the centers of local regions, and the features to be abstracted and summarized are finally determined by setting different radius sizes.
The sampling mode specifically used in the method is iterative farthest point sampling: a point on the surface of the model is selected at random and sampling is performed within a certain radius; the point farthest from it is then selected as the next starting point and sampling is performed over the same range, iterating until the required number of points has been selected. Compared with random sampling, farthest point sampling covers a larger acquisition area and generalizes better over the point cloud.
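A plain numpy sketch of iterative farthest point sampling as just described; the random starting point matches the text, while the distance bookkeeping is the standard formulation of the algorithm.

```python
import numpy as np

def farthest_point_sampling(points, n_samples):
    """Iteratively pick the point farthest from everything chosen so far,
    spreading samples more evenly over the surface than random sampling."""
    n = len(points)
    chosen = np.zeros(n_samples, dtype=int)
    chosen[0] = np.random.randint(n)          # random starting point on the surface
    dist = np.linalg.norm(points - points[chosen[0]], axis=1)
    for s in range(1, n_samples):
        chosen[s] = int(np.argmax(dist))      # current farthest point
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[s]], axis=1))
    return chosen

pts = np.random.rand(2048, 3)                 # N = 2048 surface points, as above
print(farthest_point_sampling(pts, 16))
```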
The relative position feature vector and the adjacency similarity feature are added to each layer of the fully connected network in the classifier, so that they are enhanced layer by layer.
Through the feature fusion and enhancement of the fully connected layers, the features are compressed into a vector whose dimensionality equals the number of tooth classes; this vector is input into a Softmax layer for classification prediction, yielding class probabilities of the same dimensionality. The class with the highest probability is taken as the primary classification class.
The classification result obtained from the deep learning model is based on the single-tooth model and the two additional features, and some special cases remain difficult to handle. First, an examination for classification abnormalities is performed using the template feature vectors of the teeth. The template vectors are computed over the whole data set: the relative position feature vectors of all teeth are calculated, and the mean of the relative position feature vectors of each class of teeth is taken as that class's template vector. For the classification prediction obtained for each tooth, the similarity between its relative position feature vector and the corresponding class template vector is compared; if the similarity is high, the prediction can be accepted.
Abnormality correction of tooth classification: the classification result is checked, and when an abnormal phenomenon appears, the classification result is corrected. A heuristic method is chosen for correction, namely comparing similarity with the template vectors and assigning the tooth to the category whose relative position feature vector is most similar.
The repeated manual operation of the traditional mode wastes a large amount of doctors' working time, limits the number of patients who can be treated each year, and hinders the rapid development of the industry; moreover, in the classification process, the standard a doctor uses is usually determined by experience, without a strict quantitative index. The invention provides a classification method for three-dimensional teeth, in which the relative position features and adjacency similarity features present in the tooth point cloud model are used for point cloud model classification. The data of the invention is labeled by experienced orthodontists, and the deep learning method then trains on this data and learns the orthodontists' experience and professional knowledge. This mode greatly improves the working efficiency of dentists, assists dental treatment, and benefits the development of rapid, automated digital dental technology. In early training experiments, using only the spatial geometric information of a single tooth model gave poor results that were not robust to malformed teeth; the invention therefore adds additional spatial position and relation information and uses an improved loss function, improving the accuracy of the method. On the results initially predicted by the deep learning model, the method further uses the distribution of the tooth models to calculate template vectors, checks the primary classification results, and corrects obvious classification errors. In the training process, point cloud data of the tooth model surface is given and segmented, single-tooth point cloud models are obtained through up-sampling, and the models can be input into the network in batches for learning and prediction.
The invention further discloses a three-dimensional tooth feature detection method and device, the method comprising: given three-dimensional tooth model data, tooth categories are labeled by the practitioner; features are then calculated for each tooth model, including the point cloud model features and the features arising from the fixed arrangement of the teeth, namely the relative position feature vector and the adjacency similarity vector; classification is performed through the fully connected layers and softmax; and finally the classification prediction results are detected and corrected.
Example two
The embodiment provides a three-dimensional tooth point cloud model data classification system based on deep learning;
three-dimensional tooth point cloud model data classification system based on deep learning includes:
an acquisition module configured to: acquiring a tooth three-dimensional model, and extracting a whole set of tooth point cloud model from the tooth three-dimensional model;
a segmentation module configured to: dividing the whole set of tooth point cloud models to obtain a plurality of single tooth point cloud models;
a classification module configured to: extracting the point cloud model characteristics, the relative position characteristics and the adjacent similarity characteristics of the single tooth point cloud model, and inputting the characteristics into a classifier respectively; the input various characteristics are classified in the classifier, and the primary classification result of the single tooth is output.
It should be noted here that the above acquisition module, segmentation module and classification module correspond to steps S101 to S103 of the first embodiment; the examples and application scenarios implemented by these modules are the same as those of the corresponding steps, but are not limited to the disclosure of the first embodiment. It should be noted that the modules described above, as part of a system, may be implemented in a computer system such as a set of computer-executable instructions.
In the foregoing embodiments, the descriptions of the embodiments have different emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The proposed system can be implemented in other ways. For example, the above-described system embodiments are merely illustrative, and for example, the division of the above-described modules is merely a logical functional division, and in actual implementation, there may be another division, for example, a plurality of modules may be combined or may be integrated into another system, or some features may be omitted, or not executed.
EXAMPLE III
The present embodiment further provides an electronic device, including: one or more processors, one or more memories, and one or more computer programs; wherein, a processor is connected to the memory, the one or more computer programs are stored in the memory, and when the electronic device runs, the processor executes the one or more computer programs stored in the memory, so as to make the electronic device execute the method according to the first embodiment.
Example four
The present embodiments also provide a computer-readable storage medium for storing computer instructions, which when executed by a processor, perform the method of the first embodiment.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. The three-dimensional tooth point cloud model data classification method based on deep learning is characterized by comprising the following steps of:
acquiring a tooth three-dimensional model, and extracting a whole set of tooth point cloud model from the tooth three-dimensional model;
dividing the whole set of tooth point cloud models to obtain a plurality of single tooth point cloud models;
extracting the point cloud model characteristics, the relative position characteristics and the adjacent similarity characteristics of the single tooth, and respectively inputting these characteristics into a classifier; the input characteristics are classified in the classifier, and the primary classification result of the single tooth is output;
extracting the point cloud model characteristics of the single tooth from the point cloud model of the single tooth; the method comprises the following specific steps:
(1) Respectively performing up-sampling on the single tooth point cloud model by adopting a first sampling radius, a second sampling radius and a third sampling radius to obtain first sampling data, second sampling data and third sampling data;
(2) Inputting the first sampling data into a first multilayer perceptron, and outputting a first feature vector;
inputting the second sampling data into a second multilayer perceptron and outputting a second feature vector;
inputting the third sampling data into a third multilayer perceptron, and outputting a third feature vector;
fusing the first, second and third feature vectors to obtain a first fused vector;
(3) Respectively performing upsampling on the first fusion vector by adopting fourth, fifth and sixth sampling radii to obtain fourth sampling data, fifth sampling data and sixth sampling data;
(4) Inputting the fourth sampling data into a fourth multilayer perceptron, and outputting a fourth feature vector;
inputting the fifth sampling data into a fifth multilayer perceptron, and outputting a fifth feature vector;
inputting sixth sampling data into a sixth multilayer perceptron, and outputting a sixth feature vector;
fusing the fourth, fifth and sixth feature vectors to obtain a second fused vector;
(5) Inputting the second fusion vector into a seventh multilayer perceptron to obtain the point cloud model characteristic of the single tooth;
extracting the point cloud model characteristics, the relative position characteristics and the adjacent similarity characteristics of the single tooth, and respectively inputting these characteristics into a classifier; the input characteristics are classified in the classifier, and the primary classification result of the single tooth is output; the method comprises the following specific steps:
the classifier comprises three full-connection layers and a softmax function layer which are sequentially connected in series;
fusing the point cloud model feature of the single tooth, the relative position feature and the adjacent similarity feature to obtain a fusion feature;
the fusion characteristics are input into a first full-connection layer for processing to obtain a first processing result;
fusing the first processing result with the relative position characteristic and the adjacent similarity characteristic, and sending the fused result into a second full-connection layer to obtain a second processing result;
fusing the second processing result with the relative position characteristic and the adjacent similarity characteristic, and sending the fused result into a third full-connection layer to obtain a third processing result;
sending the third processing result into a softmax function layer, and outputting a primary classification result of the single tooth;
S_1(i) = cos(-F_pos(i), F_pos(i-1) - F_pos(i));
S_2(i) = cos(-F_pos(i), F_pos(i+1) - F_pos(i));
F_adj(i) = [S_1(i), S_2(i)];
wherein F_adj(i) is the adjacent similarity feature vector of the i-th tooth, F_pos(i) is the relative position feature of the i-th tooth, cos(·,·) denotes the cosine similarity of two vectors, and S_1(i) and S_2(i) are the first and second components of the adjacent similarity vector.
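By way of illustration only (not part of the claims): steps (1)-(5) read like a multi-scale grouping stage, with one multilayer perceptron per sampling radius followed by feature fusion. The PyTorch sketch below is a minimal approximation under that reading; the class name `MultiScaleFeature`, the radii, the layer widths and the neighbour-averaging shortcut are assumptions, not the patented implementation.

```python
import torch
import torch.nn as nn

class MultiScaleFeature(nn.Module):
    """Hypothetical sketch of steps (1)-(2): for each sampling radius,
    gather neighbours, build a crude local descriptor, and pass it
    through that radius's own multilayer perceptron; then fuse by
    concatenation (the 'first fused vector')."""

    def __init__(self, in_dim=3, out_dim=64, radii=(0.1, 0.2, 0.4)):
        super().__init__()
        self.radii = radii
        self.scale_mlps = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU(),
                          nn.Linear(out_dim, out_dim))
            for _ in radii)

    def forward(self, pts):                       # pts: (N, 3) single-tooth cloud
        dists = torch.cdist(pts, pts)             # (N, N) pairwise distances
        per_scale = []
        for r, mlp in zip(self.radii, self.scale_mlps):
            mask = (dists <= r).float()           # neighbours within this radius
            # Average neighbour coordinates as a simple local descriptor.
            local = (mask @ pts) / mask.sum(dim=1, keepdim=True).clamp(min=1)
            per_scale.append(mlp(local))          # (N, out_dim) per radius
        return torch.cat(per_scale, dim=1)        # fused vector, (N, 3 * out_dim)
```

A second stage of the same shape, applied to the first fused vector with three further radii and a final (seventh) perceptron, would mirror steps (3)-(5).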
2. The deep-learning-based three-dimensional tooth point cloud model data classification method according to claim 1, further comprising:
performing tooth category anomaly detection on the classification result of the single tooth, and correcting anomalous tooth categories to obtain a final tooth classification result.
3. The deep-learning-based three-dimensional tooth point cloud model data classification method according to claim 1, wherein segmenting the whole-set tooth point cloud model to obtain a plurality of single-tooth point cloud models specifically comprises:
using the normal information of the point cloud to segment the whole-set tooth point cloud model, separating the teeth from the surrounding tissues and from one another, and thereby extracting each single-tooth model from the whole-set tooth point cloud model.
4. The deep-learning-based three-dimensional tooth point cloud model data classification method according to claim 2, wherein performing tooth category anomaly detection on the classification result of the single tooth and correcting anomalous tooth categories to obtain the final tooth classification result specifically comprises:
(a) calculating the mean centre position of each tooth category's point cloud model over the data set, and taking these mean positions as a template;
(b) comparing each classified tooth point cloud model with the template, matching the relative position feature of each tooth against the corresponding template position according to the tooth arrangement order; if the deviation is smaller than a set threshold, the classification result is deemed acceptable; otherwise, proceeding to (c);
(c) checking the classification and handling the case of adhered teeth;
(d) checking the classification and handling the case of missing teeth.
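A minimal numpy sketch of steps (a)-(b), with the reclassification rule of claim 5 included for context; `tol`, the Euclidean distance metric and all function names are assumptions introduced for illustration.

```python
import numpy as np

def check_against_templates(centres, templates, tol=0.15):
    """Steps (a)-(b): 'templates[c]' holds the mean centre position of
    tooth class c over the data set; any classified tooth whose centre
    deviates from its template by more than 'tol' is flagged for the
    handling in steps (c)/(d)."""
    flagged = []
    for c, centre in enumerate(centres):          # teeth in arch order
        if np.linalg.norm(centre - templates[c]) > tol:
            flagged.append(c)
    return flagged

def reclassify(centre, templates):
    """Claim 5's correction rule: assign a flagged tooth to the class
    whose template position is most similar to the tooth's position."""
    return int(np.argmin(np.linalg.norm(templates - centre, axis=1)))
```

Here `centres` is a (T, 3) array of classified tooth centres and `templates` a (C, 3) array of per-class mean positions computed from the training set.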
5. The deep-learning-based three-dimensional tooth point cloud model data classification method according to claim 4, wherein checking the classification and handling adhered teeth specifically comprises:
reclassifying any result that falls outside the acceptable range by calculating the similarity between the relative position coordinate vector of the current tooth model and each template vector, and assigning the tooth to the most similar class;
and wherein checking the classification and handling missing teeth specifically comprises:
first checking whether the missing tooth causes semantic ambiguity in the classification;
if no ambiguity is caused, keeping the original classification result;
if ambiguity is caused, using the classification results of the adjacent teeth as the classification basis, and classifying the teeth around the missing position according to their similarity to the template.
6. A three-dimensional tooth point cloud model data classification system based on deep learning, characterized by comprising:
an acquisition module configured to acquire a three-dimensional tooth model and extract a whole-set tooth point cloud model from it;
a segmentation module configured to segment the whole-set tooth point cloud model to obtain a plurality of single-tooth point cloud models;
a classification module configured to extract the point cloud model feature, the relative position feature and the adjacent similarity feature of each single-tooth point cloud model and input these features into a classifier, the classifier classifying the input features and outputting a primary classification result for the single tooth;
wherein extracting the point cloud model feature of the single tooth from the single-tooth point cloud model specifically comprises:
(1) up-sampling the single-tooth point cloud model with a first, a second and a third sampling radius respectively, to obtain first, second and third sampling data;
(2) inputting the first sampling data into a first multilayer perceptron and outputting a first feature vector;
inputting the second sampling data into a second multilayer perceptron and outputting a second feature vector;
inputting the third sampling data into a third multilayer perceptron and outputting a third feature vector;
fusing the first, second and third feature vectors to obtain a first fused vector;
(3) up-sampling the first fused vector with a fourth, a fifth and a sixth sampling radius respectively, to obtain fourth, fifth and sixth sampling data;
(4) inputting the fourth sampling data into a fourth multilayer perceptron and outputting a fourth feature vector;
inputting the fifth sampling data into a fifth multilayer perceptron and outputting a fifth feature vector;
inputting the sixth sampling data into a sixth multilayer perceptron and outputting a sixth feature vector;
fusing the fourth, fifth and sixth feature vectors to obtain a second fused vector;
(5) inputting the second fused vector into a seventh multilayer perceptron to obtain the point cloud model feature of the single tooth;
and wherein extracting the point cloud model feature, the relative position feature and the adjacent similarity feature of the single tooth, inputting these features into the classifier, and outputting a primary classification result for the single tooth specifically comprises (an illustrative sketch follows this claim):
the classifier comprises three fully-connected layers and a softmax function layer connected in series;
fusing the point cloud model feature, the relative position feature and the adjacent similarity feature of the single tooth to obtain a fused feature;
inputting the fused feature into the first fully-connected layer for processing to obtain a first processing result;
fusing the first processing result with the relative position feature and the adjacent similarity feature, and feeding the result into the second fully-connected layer to obtain a second processing result;
fusing the second processing result with the relative position feature and the adjacent similarity feature, and feeding the result into the third fully-connected layer to obtain a third processing result;
feeding the third processing result into the softmax function layer, and outputting the primary classification result of the single tooth;
the adjacent similarity feature is computed as:
S_1(i) = cos(-F_pos(i), F_pos(i-1) - F_pos(i));
S_2(i) = cos(-F_pos(i), F_pos(i+1) - F_pos(i));
F_adj(i) = [S_1(i), S_2(i)];
wherein F_adj(i) is the adjacent similarity feature vector of the i-th tooth, F_pos(i) is the relative position feature of the i-th tooth, cos(·,·) denotes the cosine similarity of two vectors, and S_1(i) and S_2(i) are the first and second components of the adjacent similarity vector.
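A minimal sketch, assuming a PyTorch realisation, of the adjacency formulas above and of the classifier that re-fuses the relative-position and adjacency features before each fully-connected layer. The layer width `hidden`, the class count `n_classes=32` (one per permanent tooth position) and all identifier names are assumptions; the patent does not specify these values.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def adjacency_feature(f_pos, i):
    """F_adj(i) = [S_1(i), S_2(i)] from the claim, where cos(a, b) is the
    cosine similarity of vectors a and b; f_pos is a (T, 3) tensor of
    relative position features ordered along the arch. Valid for interior
    teeth (0 < i < T - 1)."""
    s1 = F.cosine_similarity(-f_pos[i], f_pos[i - 1] - f_pos[i], dim=0)
    s2 = F.cosine_similarity(-f_pos[i], f_pos[i + 1] - f_pos[i], dim=0)
    return torch.stack([s1, s2])                  # shape (2,)

class ToothClassifier(nn.Module):
    """Three fully-connected layers plus a softmax layer, with the relative
    position and adjacency features concatenated back in before every
    layer, as the claim recites."""

    def __init__(self, feat_dim, pos_dim=3, adj_dim=2, n_classes=32, hidden=256):
        super().__init__()
        aux = pos_dim + adj_dim
        self.fc1 = nn.Linear(feat_dim + aux, hidden)
        self.fc2 = nn.Linear(hidden + aux, hidden)
        self.fc3 = nn.Linear(hidden + aux, n_classes)

    def forward(self, feat, f_pos_i, f_adj_i):
        aux = torch.cat([f_pos_i, f_adj_i], dim=-1)
        x = F.relu(self.fc1(torch.cat([feat, aux], dim=-1)))   # first layer
        x = F.relu(self.fc2(torch.cat([x, aux], dim=-1)))      # second layer
        x = self.fc3(torch.cat([x, aux], dim=-1))              # third layer
        return F.softmax(x, dim=-1)               # primary classification result
```

For tooth i of an arch, `adjacency_feature(f_pos, i)`, the relative position feature and the point cloud model feature would be fed to `ToothClassifier.forward` to obtain the per-class probabilities.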
7. An electronic device, comprising: one or more processors, one or more memories, and one or more computer programs; wherein the processor is coupled to the memory and the one or more computer programs are stored in the memory, and when the electronic device runs, the processor executes the one or more computer programs stored in the memory to cause the electronic device to perform the method of any one of claims 1-5.
8. A computer-readable storage medium storing computer instructions which, when executed by a processor, perform the method of any one of claims 1 to 5.
CN202110192628.6A 2021-02-20 2021-02-20 Three-dimensional tooth point cloud model data classification method and system based on deep learning Active CN112989954B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110192628.6A CN112989954B (en) 2021-02-20 2021-02-20 Three-dimensional tooth point cloud model data classification method and system based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110192628.6A CN112989954B (en) 2021-02-20 2021-02-20 Three-dimensional tooth point cloud model data classification method and system based on deep learning

Publications (2)

Publication Number Publication Date
CN112989954A (en) 2021-06-18
CN112989954B (en) 2022-12-16

Family

ID=76393742

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110192628.6A Active CN112989954B (en) 2021-02-20 2021-02-20 Three-dimensional tooth point cloud model data classification method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN112989954B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114782343A (en) * 2022-04-12 2022-07-22 先临三维科技股份有限公司 Oral cavity detection method, device, electronic equipment and medium based on artificial intelligence
CN114757960B (en) * 2022-06-15 2022-09-09 汉斯夫(杭州)医学科技有限公司 Tooth segmentation and reconstruction method based on CBCT image and storage medium
CN115796306B (en) * 2023-02-07 2023-04-18 四川大学 Training of permanent tooth maturity grading model and permanent tooth maturity grading method
CN115953583B (en) * 2023-03-15 2023-06-20 山东大学 Tooth segmentation method and system based on iterative boundary optimization and deep learning

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110428021A (en) * 2019-09-26 2019-11-08 上海牙典医疗器械有限公司 Correction attachment planing method based on oral cavity voxel model feature extraction
CN112017196A (en) * 2020-08-27 2020-12-01 重庆邮电大学 Three-dimensional tooth model mesh segmentation method based on local attention mechanism
CN112200843A (en) * 2020-10-09 2021-01-08 福州大学 CBCT and laser scanning point cloud data tooth registration method based on hyper-voxels

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10032271B2 (en) * 2015-12-10 2018-07-24 3M Innovative Properties Company Method for automatic tooth type recognition from 3D scans
JP6658308B2 (en) * 2016-05-30 2020-03-04 富士通株式会社 Tooth type determination program, crown position determination apparatus and method
EP3462373A1 (en) * 2017-10-02 2019-04-03 Promaton Holding B.V. Automated classification and taxonomy of 3d teeth data using deep learning methods
CN108776992B (en) * 2018-05-04 2022-08-05 正雅齿科科技(上海)有限公司 Tooth type identification method and device, user terminal and storage medium
JP6831432B2 (en) * 2019-10-17 2021-02-17 株式会社モリタ製作所 Identification device, tooth type identification system, identification method, and identification program


Also Published As

Publication number Publication date
CN112989954A (en) 2021-06-18

Similar Documents

Publication Publication Date Title
CN112989954B (en) Three-dimensional tooth point cloud model data classification method and system based on deep learning
JP7412334B2 Automated classification and taxonomy of 3D tooth data using deep learning methods
KR102559819B1 (en) Automated 3D root shape prediction system and method using deep learning method
Tian et al. Automatic classification and segmentation of teeth on 3D dental model using hierarchical deep learning networks
US11651494B2 (en) Apparatuses and methods for three-dimensional dental segmentation using dental image data
EP2598034B1 (en) Adaptive visualization for direct physician use
CN112991273B (en) Orthodontic feature automatic detection method and system of three-dimensional tooth model
Kong et al. Automated maxillofacial segmentation in panoramic dental x-ray images using an efficient encoder-decoder network
CN114757960B (en) Tooth segmentation and reconstruction method based on CBCT image and storage medium
CN114638852A (en) Jaw bone and soft tissue identification and reconstruction method, device and medium based on CBCT image
CN114004970A (en) Tooth area detection method, device, equipment and storage medium
WO2023242757A1 (en) Geometry generation for dental restoration appliances, and the validation of that geometry
Tian et al. Efficient tooth gingival margin line reconstruction via adversarial learning
Ben-Hamadou et al. 3DTeethSeg'22: 3D Teeth Scan Segmentation and Labeling Challenge
Liu et al. Tracking-based deep learning method for temporomandibular joint segmentation
CN115761226A (en) Oral cavity image segmentation identification method and device, electronic equipment and storage medium
Khan et al. TOOTH SEGMENTATION IN 3D CONE-BEAM CT IMAGES USING DEEP CONVOLUTIONAL NEURAL NETWORK.
Xie et al. Automatic Individual Tooth Segmentation in Cone-Beam Computed Tomography Based on Multi-Task CNN and Watershed Transform
EP4307229A1 (en) Method and system for tooth pose estimation
WO2024127311A1 (en) Machine learning models for dental restoration design generation
WO2024127316A1 (en) Autoencoders for the processing of 3d representations in digital oral care
Kumar et al. Dental Disease Detection and Classification in Radiograph Images using Deep Learning Model
WO2023242774A1 (en) Validation for rapid prototyping parts in dentistry
WO2023242763A1 (en) Mesh segmentation and mesh segmentation validation in digital dentistry
WO2024127308A1 (en) Classification of 3d oral care representations

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant