CN114782343A - Oral cavity detection method, device, electronic equipment and medium based on artificial intelligence - Google Patents

Oral cavity detection method, device, electronic equipment and medium based on artificial intelligence

Info

Publication number
CN114782343A
CN114782343A
Authority
CN
China
Prior art keywords
tooth
points
teeth
dimensional
artificial intelligence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210382053.9A
Other languages
Chinese (zh)
Inventor
邱凯佳
王嘉磊
张健
江腾飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shining 3D Technology Co Ltd
Original Assignee
Shining 3D Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shining 3D Technology Co Ltd filed Critical Shining 3D Technology Co Ltd
Priority application: CN202210382053.9A
Publication: CN114782343A
Related PCT application: PCT/CN2023/087795 (WO2023198099A1)
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/0002 — Inspection of images, e.g. flaw detection
    • G06T 7/0012 — Biomedical image inspection
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/24 — Classification techniques
    • G06F 18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/04 — Architecture, e.g. interconnection topology
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/08 — Learning methods
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20 — Finite element generation, e.g. wire-frame surface description, tesselation
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/70 — Determining position or orientation of objects or cameras
    • G06T 7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 — Indexing scheme for image data processing or generation, in general
    • G06T 2200/04 — Indexing scheme for image data processing or generation, in general, involving 3D image data
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 — Special algorithmic details
    • G06T 2207/20081 — Training; Learning
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 — Special algorithmic details
    • G06T 2207/20084 — Artificial neural networks [ANN]
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 — Subject of image; Context of image processing
    • G06T 2207/30004 — Biomedical image processing
    • G06T 2207/30036 — Dental; Teeth

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Medical Informatics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Computer Graphics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)

Abstract

Embodiments of the present disclosure relate to an artificial-intelligence-based oral cavity detection method, apparatus, electronic device, and medium. The method includes: inputting a tooth three-dimensional mesh to be detected into a pre-trained deep neural network for processing to obtain three-dimensional feature points, and performing processing based on the three-dimensional feature points to obtain a detection result. With this technical solution, relevant calculations are performed on the three-dimensional feature points during oral cavity detection, so that conditions such as missing teeth and crowding can be determined, avoiding the high cost and low efficiency of manual detection. Because the three-dimensional feature points are identified by a pre-trained deep neural network, the identification precision of the feature points and the accuracy of the detection results are greatly improved, further improving detection efficiency and effectiveness in oral cavity detection scenarios.

Description

Oral cavity detection method and device based on artificial intelligence, electronic equipment and medium
Technical Field
The present disclosure relates to the field of intelligent oral medicine technologies, and in particular, to an oral cavity detection method, apparatus, electronic device, and medium based on artificial intelligence.
Background
As living standards continue to improve, people pay increasing attention to the condition of their teeth and oral cavity.
In the related art, oral conditions such as missing teeth and crowding are measured through manual identification by a dental practitioner, which takes a long time and is inefficient.
Disclosure of Invention
To solve the above technical problems, or at least partially solve the above technical problems, the present disclosure provides an artificial intelligence based oral cavity detection method, apparatus, electronic device, and medium.
The embodiment of the disclosure provides an oral cavity detection method based on artificial intelligence, which comprises the following steps:
acquiring a three-dimensional grid of a tooth to be detected;
inputting the tooth three-dimensional grid to be detected into a pre-trained deep neural network for processing to obtain three-dimensional characteristic points;
and processing based on the three-dimensional feature points to obtain a detection result.
An embodiment of the present disclosure further provides an artificial-intelligence-based oral cavity detection apparatus, the apparatus including:
the acquisition module is used for acquiring a three-dimensional grid of the tooth to be detected;
the processing module is used for inputting the tooth three-dimensional grid to be detected into a pre-trained deep neural network for processing to obtain three-dimensional characteristic points;
and the generating module is used for processing based on the three-dimensional characteristic points to obtain a detection result.
An embodiment of the present disclosure further provides an electronic device, which includes: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to read the executable instructions from the memory and execute the instructions to implement the artificial-intelligence-based oral cavity detection method provided by the embodiments of the present disclosure.
The disclosed embodiments also provide a computer-readable storage medium storing a computer program for executing the artificial intelligence based oral cavity detection method provided by the disclosed embodiments.
Compared with the prior art, the technical solution provided by the embodiments of the present disclosure has the following advantages. According to the artificial-intelligence-based oral cavity detection scheme, a tooth three-dimensional mesh to be detected is obtained and input into a pre-trained deep neural network for processing to obtain three-dimensional feature points, and processing is performed based on the three-dimensional feature points to obtain a detection result. With this technical solution, relevant calculations are performed on the three-dimensional feature points during oral cavity detection, so that conditions such as missing teeth and crowding can be determined and the high cost and low efficiency of manual detection are avoided. Because the three-dimensional feature points are identified by the pre-trained deep neural network, the identification precision of the feature points and the accuracy of the detection results are greatly improved, further improving detection efficiency and effectiveness in oral cavity detection scenarios.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a schematic flow chart of an artificial intelligence-based oral cavity detection method according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a three-dimensional feature point provided in an embodiment of the present disclosure;
FIG. 3 is a schematic view of an archwire provided in accordance with an embodiment of the present disclosure;
FIG. 4 is a schematic view of an occlusal plane provided by embodiments of the present disclosure;
FIG. 5 is a schematic diagram of a detection result provided by an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of another detection result provided by an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of another detection result provided by an embodiment of the present disclosure;
fig. 8 is a schematic flow chart of another artificial intelligence-based oral detection method provided by the embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of an artificial intelligence-based oral cavity detection device according to an embodiment of the present disclosure;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and the embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein is intended to be open-ended, i.e., "including but not limited to". The term "based on" is "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence of the functions performed by the devices, modules or units.
It is noted that references to "a" or "an" in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will appreciate that references to "one or more" are intended to be exemplary and not limiting unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
In practical applications, the detection and identification of common oral conditions such as missing teeth, crowding, and coverage require manual identification and measurement by a dentist, which takes a long time and is inefficient.
To solve these problems, the present disclosure provides an artificial-intelligence-based oral cavity detection method, which obtains a tooth three-dimensional mesh to be detected, inputs it into a pre-trained deep neural network for processing to obtain three-dimensional feature points, and performs processing based on the three-dimensional feature points to obtain a detection result. The oral condition can thus be identified quickly and accurately and provided to the user, improving oral cavity detection efficiency.
Specifically, fig. 1 is a schematic flowchart of an artificial intelligence based oral cavity detection method provided by an embodiment of the present disclosure, which may be executed by an artificial intelligence based oral cavity detection apparatus, where the apparatus may be implemented by software and/or hardware, and may be generally integrated in an electronic device. As shown in fig. 1, the method includes:
step 101, obtaining a tooth three-dimensional grid to be detected.
The tooth three-dimensional mesh to be detected may be obtained by scanning the upper or lower row of teeth in a person's mouth in real time with a scanning device, or it may be a tooth three-dimensional mesh corresponding to the upper or lower row of teeth obtained from a download address or sent by another device. A tooth three-dimensional mesh is a set of three-dimensional points together with the connections among the points in the set.
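The "three-dimensional point set plus connectivity" structure described above can be sketched minimally as follows. This is an illustrative assumption about the data layout (the class and field names are not from the patent); the patent only specifies that a mesh consists of three-dimensional points and the connections among them.

```python
from dataclasses import dataclass

@dataclass
class ToothMesh:
    """Minimal sketch of a tooth three-dimensional mesh: a point set plus connectivity."""
    vertices: list  # three-dimensional points, each an (x, y, z) tuple
    faces: list     # connectivity: triangles given as triples of vertex indices

# A single-triangle mesh, purely for illustration.
mesh = ToothMesh(
    vertices=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
    faces=[(0, 1, 2)],
)
```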
And 102, inputting the three-dimensional mesh of the tooth to be detected into a pre-trained deep neural network for processing to obtain three-dimensional feature points.
The three-dimensional feature points include, but are not limited to, one or more of: a tooth identifier for each three-dimensional point (each tooth has a unique tooth identifier), the mesial and distal points of the incisors, the cusp points of the cuspids, the multiple cusp and pit points of the molars, and a tooth local coordinate system. The mesial and distal points of an incisor are two three-dimensional points taken from the three-dimensional point set corresponding to that incisor, one serving as the mesial point and the other as the distal point.
In the embodiments of the present disclosure, tooth three-dimensional mesh samples (i.e., three-dimensional point sets together with the connections inside the point sets) are obtained, with one sample each for the upper jaw and the lower jaw, and the deep neural network is trained to identify the three-dimensional feature points of the teeth.
In some embodiments, the required three-dimensional feature points are marked in a training set (a plurality of tooth three-dimensional mesh samples), and the marked training set is directly input into the deep neural network for learning to obtain the pre-trained deep neural network.
In other embodiments, the tooth three-dimensional mesh samples are rendered into two-dimensional pictures at each sampling angle, including a depth map, a curvature map, a texture map, and the like; characteristic two-dimensional points are then marked on each picture and input into the deep neural network for learning, and the learned two-dimensional points are mapped back to three-dimensional feature points through the rendering mapping relationship.
The above two manners are only examples of pre-training the deep neural network, and the disclosure does not specifically limit the manner of pre-training the deep neural network.
Further, the tooth three-dimensional mesh to be detected is input into the pre-trained deep neural network for processing to obtain three-dimensional feature points such as the tooth identifier, incisal points, and cusps of each tooth.
And 103, processing based on the three-dimensional feature points to obtain a detection result.
In some embodiments, the tooth identifier of each tooth is obtained based on the three-dimensional feature points, matching is performed between the tooth identifier of each tooth and the tooth identifiers corresponding to a standard tooth three-dimensional mesh, and when a target tooth identifier cannot be matched, it is determined that the tooth corresponding to that target tooth identifier is missing. The standard tooth three-dimensional mesh corresponds to the thirty-two teeth of a normal person's upper and lower rows of teeth, and therefore has thirty-two tooth identifiers.
In other embodiments, the incisal points and cusps of the teeth (including the incisal points of the incisors, the cusp points of the cuspids, and the multiple cusp points of the molars) are obtained based on the three-dimensional feature points; fitting is performed on these incisal points and cusps to obtain an archwire; the arch length is calculated from the archwire; and the crowding value of the teeth is determined from the arch length and the sum of the widths of all the teeth.
In still other embodiments, the incisal points and cusps of the teeth are obtained based on the three-dimensional feature points; fitting is performed on them to obtain an occlusal plane; the incisal points of the corresponding teeth of the upper and lower jaws are projected onto the occlusal plane to obtain first projection points; and the vector distance between the first projection points is calculated to obtain the coverage value.
In still other embodiments, the incisal points and cusps of the teeth are obtained based on the three-dimensional feature points; fitting is performed on them to obtain an occlusal plane and its plane normal; the incisal points of the corresponding teeth of the upper and lower jaws and the farthest maxillofacial target points are projected onto the plane normal to obtain second projection points; a first line segment and a second line segment are determined based on the second projection points; and the overbite value is determined based on the first line segment and the second line segment.
In still other embodiments, the incisal points and cusps of the teeth are obtained based on the three-dimensional feature points; fitting is performed on them to obtain an occlusal plane; any pair of mutually occluding target teeth of the upper and lower jaws (i.e., a pair of upper and lower teeth that occlude with each other) is projected onto the occlusal plane to obtain third projection points; and the relationship among the third projection points is used to obtain the occlusion type.
In still other embodiments, the incisal points and cusps of the teeth are obtained based on the three-dimensional feature points; fitting is performed on them to obtain an occlusal plane; the cusps of the upper and lower teeth are projected onto the occlusal plane to obtain fourth projection points; and a judgment is made based on the positions of the fourth projection points in the direction of the archwire normal to determine whether the teeth are in crossbite or locked jaw.
The above manners are merely examples, and different three-dimensional feature points may be selected for processing according to an application scenario to obtain different detection results.
Based on the above description of the embodiments, the tooth three-dimensional mesh to be detected is obtained and input into the pre-trained deep neural network for processing to obtain the three-dimensional feature points, and processing is performed based on the three-dimensional feature points to obtain the detection result. With this technical solution, relevant calculations are performed on the three-dimensional feature points during oral cavity detection, conditions such as missing teeth and crowding can be determined, and the high cost and low efficiency of manual detection are avoided. Because the three-dimensional feature points are identified by the pre-trained deep neural network, the identification precision of the feature points and the accuracy of the detection results are greatly improved, further improving detection efficiency and effectiveness in oral cavity detection scenarios.
In some embodiments, inputting the tooth three-dimensional mesh to be detected into the pre-trained deep neural network for processing to obtain the three-dimensional feature points includes: inputting the mesh into the pre-trained deep neural network to obtain, as three-dimensional feature points, the tooth identifier of each tooth (each tooth has a unique tooth identifier), the mesial and distal points of the incisors, the cusp points of the cuspids, the multiple cusp and pit points of the molars, and the tooth local coordinate system.
As an example, as shown in fig. 2, the tooth three-dimensional meshes to be detected corresponding to the upper and lower jaws are input into the pre-trained deep neural network for processing to obtain the three-dimensional feature points: the tooth identifiers of the teeth (1, 2, 3, and so on), each of which identifies a unique tooth; the mesial and distal points of the incisors, such as the mesial points (a1, a2, a3, a4) and distal points (b1, b2, b3, b4) on the four incisors shown in fig. 2; the cusp points of the cuspids, j1-j6 on the six cuspids shown in fig. 2; the cusp and pit points of the six molars, for example cusps m1-m4 and pits w1 and w2 on one molar; and the tooth local coordinate system xy shown in fig. 2.
In some embodiments, processing based on the three-dimensional feature points to obtain the detection result includes: obtaining the tooth identifier of each tooth based on the three-dimensional feature points, matching the tooth identifier of each tooth against the tooth identifiers corresponding to the standard tooth three-dimensional mesh, and, when a target tooth identifier cannot be matched, determining that the tooth corresponding to that target tooth identifier is missing.
In the embodiments of the present disclosure, a tooth identifier uniquely represents one tooth. The tooth three-dimensional mesh to be detected is therefore input into the pre-trained deep neural network to obtain the tooth identifier of each tooth in the mesh, and these identifiers are matched against those of the standard tooth three-dimensional mesh. For example, if the detected tooth identifiers are 1 to 16 and the identifiers of the standard mesh are also 1 to 16, it is determined that no tooth is missing. If the detected tooth identifiers are 1 to 7 and 9 to 16 while the standard identifiers are 1 to 16, target tooth identifier 8 cannot be matched, so it is determined that the tooth corresponding to identifier 8 is missing. In other words, the pre-trained deep neural network determines all existing tooth identifiers, and any identifier that does not exist corresponds to a missing tooth.
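The matching described above reduces to a set difference between the standard identifiers and the detected ones. The sketch below assumes, following the example in this description, that a single jaw carries identifiers 1 to 16; the function name is illustrative, not the patent's implementation.

```python
# Standard tooth identifiers for one jaw (1-16 per the example in the description).
STANDARD_JAW_IDS = set(range(1, 17))

def find_missing_teeth(detected_ids, standard_ids=STANDARD_JAW_IDS):
    """Match detected tooth identifiers against the standard set; every
    identifier present in the standard mesh but absent from the detection
    corresponds to a missing tooth."""
    return sorted(standard_ids - set(detected_ids))
```

For the example above, detecting identifiers 1-7 and 9-16 against a standard of 1-16 yields tooth 8 as missing.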
In some embodiments, processing based on the three-dimensional feature points to obtain the detection result includes: obtaining the incisal points and cusps of the teeth (including the incisal points of the incisors, the cusp points of the cuspids, and the multiple cusp points of the molars) based on the three-dimensional feature points; fitting the incisal points and cusps to obtain an archwire; calculating the arch length from the archwire; and determining the crowding value of the teeth from the arch length and the sum of the widths of all the teeth.
Specifically, the incisal points and cusps are fitted with a least-squares fit to obtain the archwire, for example using a biquadratic equation, with the fitting objective of minimizing the sum of the squared distances from all the incisal points and cusps to the archwire, as shown in fig. 3 (partially shown).
Further, after the archwire is obtained, the arch length is calculated from the archwire, and the crowding value of the teeth is determined from the arch length and the sum of the widths of all the teeth. For example, the arch length from the left tooth numbered six to the right tooth numbered six (i.e., the length of the archwire curve between those teeth) is calculated, and the sum of the widths of all the teeth is subtracted from it to obtain the crowding value of the single jaw. The width of each tooth is calculated as follows: on the mesial-distal axis shown in fig. 3, the maximum and minimum coordinate values over all points of the tooth are computed, and their difference gives the tooth width.
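The crowding computation above can be sketched numerically: fit a degree-4 curve to the in-plane incisal points and cusps, sum chord lengths along the curve for the arch length, and subtract the sum of tooth widths. This is a sketch under stated assumptions — the curve is parameterized as y = f(x), and the function names are illustrative; the patent does not fix these details beyond "biquadratic least-squares fit".

```python
import numpy as np

def fit_archwire(points_xy, degree=4):
    """Least-squares fit of a degree-4 (biquadratic) curve y = f(x) through
    the incisal points and cusps projected into the occlusal plane."""
    x, y = points_xy[:, 0], points_xy[:, 1]
    return np.polynomial.Polynomial.fit(x, y, degree)

def arch_length(curve, x_min, x_max, samples=2000):
    """Approximate the arc length of the fitted archwire by summing chord
    lengths over a dense sampling of the curve."""
    xs = np.linspace(x_min, x_max, samples)
    ys = curve(xs)
    return float(np.sum(np.hypot(np.diff(xs), np.diff(ys))))

def crowding_value(arch_len, tooth_widths):
    """Per the description, the sum of all tooth widths is subtracted from
    the arch length to obtain the single-jaw crowding value."""
    return arch_len - float(sum(tooth_widths))
```

For a sanity check, feature points lying on a straight segment of length 10 give an arch length of approximately 10, and three teeth of width 2 then leave a crowding value of 4.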
In some embodiments, processing based on the three-dimensional feature points to obtain the detection result includes: obtaining the incisal points and cusps of the teeth based on the three-dimensional feature points; fitting the incisal points and cusps to obtain an occlusal plane; projecting the incisal points of the corresponding teeth of the upper and lower jaws onto the occlusal plane to obtain first projection points; and calculating the vector distance between the first projection points to obtain the coverage value.
Specifically, the incisal points and cusps are fitted to the occlusal plane using a least-squares method, so that the sum of the squared distances from all the incisal points and cusps to the occlusal plane is minimized, as shown in fig. 4.
Further, the incisal points of the corresponding teeth of the upper and lower jaws are projected onto the occlusal plane to obtain first projection points, and the vector distance between the first projection points is calculated to obtain the coverage value. For example, as shown in fig. 5, the incisal points of the upper and lower jaws (a1 and a2 in fig. 5) are projected onto the occlusal plane, and the vector distance between the first projection points (B1 and B2) is taken as the coverage value; in fig. 5 the distance is positive when B2 lies to the right of B1 and negative otherwise. From this value, the detection results of reverse bite, degree-1, degree-2, and degree-3 coverage, and normal coverage can be obtained: the larger the positive vector distance between the upper-jaw projection B1 and the lower-jaw projection B2, the higher the coverage degree (1, 2, or 3); a small positive vector distance indicates normal coverage; and a negative vector distance indicates reverse bite.
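The plane fit and projection steps above can be sketched with standard linear algebra: the least-squares plane passes through the centroid of the points, with the singular vector of smallest singular value as its normal, and the coverage value is the signed in-plane distance between the two projections. The function names and the explicit "forward" direction parameter are illustrative assumptions, not the patent's interface.

```python
import numpy as np

def fit_occlusal_plane(points):
    """Least-squares plane through the incisal points and cusps: the centroid
    lies on the plane, and the unit normal is the right singular vector with
    the smallest singular value (minimizing the sum of squared distances)."""
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def project_onto_plane(p, centroid, normal):
    """Orthogonal projection of a point onto the fitted occlusal plane."""
    p = np.asarray(p, dtype=float)
    return p - np.dot(p - centroid, normal) * normal

def coverage_value(upper_incisal, lower_incisal, centroid, normal, forward):
    """Signed distance between the two first projection points (B1, B2 in
    fig. 5) along an in-plane 'forward' direction, per the sign convention
    described above."""
    b1 = project_onto_plane(upper_incisal, centroid, normal)
    b2 = project_onto_plane(lower_incisal, centroid, normal)
    return float(np.dot(b1 - b2, forward))
```

With four coplanar points in the z = 0 plane, the fitted normal is the z axis and a point at height 3 projects straight down onto the plane.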
In some embodiments, the method further includes: obtaining the plane normal of the occlusal plane; projecting the incisal points of the corresponding teeth of the upper and lower jaws and the farthest maxillofacial target points onto the plane normal to obtain second projection points; determining a first line segment and a second line segment based on the second projection points; and determining the overbite value based on the first line segment and the second line segment.
Specifically, continuing with the example of fig. 5, the incisal points of the upper and lower incisors (a1 and a2 in fig. 5) and the points farthest from the incisal edges (C1 and C2 in fig. 5) are projected onto the plane normal Y of the occlusal plane, giving second projection points D1, D2, D3, and D4. The upper and lower teeth then each correspond to a line segment on the plane normal. The position of the upper-jaw incisal projection on the lower-jaw segment (the longer arrow) is calculated, i.e., the ratio of the first line segment between D1 and D2 (the shorter arrow) to the second line segment between D1 and D4. From this ratio, five results can be obtained: open bite, degree-1, degree-2, and degree-3 overbite, and normal (the larger the ratio, the higher the overbite degree 1, 2, or 3; a small ratio is normal; and a negative ratio indicates open bite). The arrow directions in fig. 5 are only examples: the ratio is positive when the first segment D1-D2 and the second segment D1-D4 point in the same direction, and negative when they point in opposite directions.
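The ratio described above can be sketched with scalar projections onto the plane normal. Which projection corresponds to which of D1-D4 is an interpretive assumption here (upper incisal edge, lower incisal edge, and the far end of the lower segment); the function name is illustrative.

```python
import numpy as np

def overbite_ratio(upper_incisal, lower_incisal, lower_far, normal):
    """Sketch of the overbite ratio: project the incisal edges and the far
    end of the lower-tooth segment onto the occlusal-plane normal, then take
    the overlap segment over the full lower segment. A negative result
    corresponds to open bite; larger positive values to higher overbite."""
    normal = np.asarray(normal, dtype=float)
    s_upper = float(np.dot(upper_incisal, normal))  # upper incisal edge
    s_lower = float(np.dot(lower_incisal, normal))  # lower incisal edge
    s_far = float(np.dot(lower_far, normal))        # far end of lower segment
    return (s_lower - s_upper) / (s_lower - s_far)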
In some embodiments, further comprising: and projecting any one of the mutually occluded target tooth pairs corresponding to the upper jaw and the lower jaw to an occlusion plane to obtain a third projection point, and obtaining a correlation between the third projection points to obtain an occlusion type.
Specifically, the cusp points (E, F1 and F2 shown in fig. 6) of any one pair of mutually occluding target teeth of the upper and lower jaws (such as the number 6 tooth; other teeth are treated similarly) are projected onto the occlusal plane to obtain third projection points. Based on the relationship between these third projection points, the Class 1, Class 2 and Class 3 occlusions of the Angle classification are obtained: as shown in fig. 6, when the third projection point corresponding to E lies between the third projection points corresponding to F1 and F2, the result is Class 1 occlusion (neutral occlusion); when the third projection point corresponding to E lies beyond either side of the third projection points corresponding to F1 and F2, the result is Class 2 (mesial direction) or Class 3 (distal direction) occlusion, respectively.
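The ordering test on the projected points can be sketched as follows; reducing the in-plane positions to scalar coordinates along a mesiodistal direction vector is an assumption for illustration:

```python
import numpy as np

def classify_occlusion(e_point, f1_point, f2_point, plane_point, plane_normal, mesiodistal_dir):
    """Project cusp points E, F1, F2 onto the occlusal plane and compare their
    order along the mesiodistal direction: E between F1 and F2 -> Class 1
    (neutral); E beyond the mesial side -> Class 2; beyond the distal side ->
    Class 3. The direction convention is assumed, not taken from the patent."""
    n = np.asarray(plane_normal, float)
    n /= np.linalg.norm(n)
    d = np.asarray(mesiodistal_dir, float)
    d = d - np.dot(d, n) * n          # keep only the in-plane component
    d /= np.linalg.norm(d)
    coord = lambda p: float(np.dot(np.asarray(p, float) - np.asarray(plane_point, float), d))
    e, f1, f2 = coord(e_point), coord(f1_point), coord(f2_point)
    lo, hi = min(f1, f2), max(f1, f2)
    if lo <= e <= hi:
        return "class 1"
    return "class 2" if e < lo else "class 3"
```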
In some embodiments, the cusps of the upper and lower teeth are projected onto the occlusal plane to obtain fourth projection points, and a judgment is made according to the position information of the fourth projection points in the direction of the archwire normal to determine whether the teeth are in an anti-jaw (crossbite) or locked jaw (scissors bite) state.
Specifically, each of the maxillary and mandibular cusps (buccal cusps and lingual cusps) is projected onto the occlusal plane to obtain fourth projection points; as shown in fig. 7, G1-G4 are such projections. Based on the positions of the fourth projection points in the direction of the archwire normal, it is determined whether the teeth are in an anti-jaw state (the maxillary buccal cusp lies inside, i.e. lingual to, the mandibular buccal cusp) or a locked jaw state (the maxillary lingual cusp lies outside, i.e. buccal to, the mandibular buccal cusp). That is, if the maxillary lingual cusp is further to the buccal side than the mandibular buccal cusp (for example, G3 lying to the left of G2 in fig. 7), a locked jaw is determined; and if the maxillary buccal cusp is further to the lingual side than the mandibular buccal cusp (for example, G1 lying to the right of G2 in fig. 7), an anti-jaw is determined.
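For a single tooth pair, this test can be sketched as below, assuming a buccal direction vector in which larger coordinates lie further to the buccal side, and assuming G1 is the upper buccal cusp, G2 the lower buccal cusp and G3 the upper lingual cusp (this mapping to fig. 7 is an assumption):

```python
import numpy as np

def check_cross_or_locked(upper_buccal, upper_lingual, lower_buccal,
                          plane_point, plane_normal, buccal_dir):
    """Project cusps onto the occlusal plane and compare their coordinates
    along the archwire normal (buccolingual) direction:
    - locked jaw: upper lingual cusp buccal to the lower buccal cusp;
    - anti-jaw:   upper buccal cusp lingual to the lower buccal cusp."""
    n = np.asarray(plane_normal, float)
    n /= np.linalg.norm(n)
    d = np.asarray(buccal_dir, float)
    d = d - np.dot(d, n) * n          # in-plane buccolingual axis
    d /= np.linalg.norm(d)
    coord = lambda p: float(np.dot(np.asarray(p, float) - np.asarray(plane_point, float), d))
    g1, g3 = coord(upper_buccal), coord(upper_lingual)   # G1, G3
    g2 = coord(lower_buccal)                             # G2
    if g3 > g2:
        return "locked jaw"
    if g1 < g2:
        return "anti-jaw"
    return "normal"
```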
Specifically, fig. 8 is a schematic flow chart of another artificial intelligence-based oral cavity detection method according to an embodiment of the present disclosure, and the present embodiment further optimizes the artificial intelligence-based oral cavity detection method based on the above embodiment. As shown in fig. 8, the method includes:
step 201, obtaining a tooth three-dimensional grid to be detected.
Step 202, inputting the tooth three-dimensional grid to be detected into a pre-trained deep neural network for processing to obtain three-dimensional feature points.
And 203, acquiring a tooth identifier of each tooth based on the three-dimensional feature points, matching based on the tooth identifier of each tooth and a tooth identifier corresponding to the standard tooth three-dimensional grid, and determining that the tooth corresponding to the target tooth identifier does not exist under the condition of matching the nonexistent target tooth identifier.
And 204, acquiring tangent points and cusps of the teeth based on the three-dimensional characteristic points, and performing fitting processing based on the tangent points and the cusps of the teeth to obtain an arch line and an occlusal plane.
After step 204, any one or more of steps 205 to 209 may be performed.
And step 205, calculating based on the arch line to obtain the length of the dental arch, and determining the crowdedness numerical value of the teeth based on the length of the dental arch and the sum of the widths of all teeth.
And step 206, respectively projecting tangent points of the corresponding teeth of the upper jaw and the lower jaw to an occlusal plane to obtain first projection points, and calculating the vector distance between the first projection points to obtain a coverage quantity value.
And step 207, acquiring a plane normal of an occlusal plane, projecting tangent points of teeth corresponding to the upper jaw and the lower jaw, an upper maxillofacial target point and a lower maxillofacial target point to the plane normal respectively to obtain a second projection point, determining a first line segment and a second line segment based on the second projection point, and determining a jaw covering quantity value based on the first line segment and the second line segment.
And 208, projecting any one of the mutually occluded target tooth pairs corresponding to the upper jaw and the lower jaw to an occlusion plane to obtain a third projection point, and obtaining a correlation between the third projection points to obtain an occlusion type.
And 209, projecting the cusps of the upper and lower teeth to an occlusion plane to obtain a fourth projection point, and judging according to the position information of the fourth projection point in the direction of the archwire normal to obtain whether the teeth are in an anti-jaw state or a locked jaw state.
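Step 205 above can be sketched as follows, approximating the fitted arch line by a polyline through sampled points; the polyline approximation and the sign convention (positive means crowding) are assumptions for illustration:

```python
import numpy as np

def arch_length(arch_points):
    """Approximate the fitted arch line by a polyline through sampled points
    and sum the segment lengths."""
    pts = np.asarray(arch_points, float)
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

def crowding_value(arch_points, tooth_widths):
    """Crowdedness value as in step 205: sum of tooth widths minus the dental
    arch length. Positive -> insufficient space (crowding); negative -> spacing."""
    return sum(tooth_widths) - arch_length(arch_points)
```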
It should be noted that, for the specific implementation of fig. 8 of the present disclosure, reference is made to the descriptions of figs. 1-7 in the foregoing embodiments, which are not repeated here.
Therefore, all oral conditions can be quickly and accurately provided for the user, the oral detection efficiency is further improved, and the use requirements and experience of the user are met.
Fig. 9 is a schematic structural diagram of an artificial intelligence-based oral cavity detection apparatus provided in an embodiment of the present disclosure, which may be implemented by software and/or hardware, and may be generally integrated in an electronic device. As shown in fig. 9, the apparatus includes:
an obtaining module 301, configured to obtain a three-dimensional mesh of a tooth to be detected;
a processing module 302, configured to input the three-dimensional mesh of the tooth to be detected into a pre-trained deep neural network for processing, so as to obtain three-dimensional feature points;
and a generating module 303, configured to perform processing based on the three-dimensional feature points to obtain a detection result.
Optionally, the generating module 303 is specifically configured to:
acquiring tooth identification of each tooth based on the three-dimensional feature points;
matching based on the tooth identification of each tooth and the tooth identification corresponding to the standard tooth three-dimensional grid;
in the event that a matching does not exist for a target tooth identification, it is determined that there is no tooth corresponding to the target tooth identification.
Optionally, the generating module 303 is further specifically configured to:
acquiring tangent points and cusp points of the teeth based on the three-dimensional feature points;
fitting processing is carried out on the basis of the tangent points and the cusps of the teeth to obtain an arch line;
calculating based on the arch wire to obtain the length of the dental arch;
determining a crowdedness value for the tooth based on the arch length and the sum of the widths of all teeth.
Optionally, the generating module 303 is further specifically configured to:
acquiring tangent points and cusp points of the teeth based on the three-dimensional feature points;
fitting processing is carried out on the basis of the tangent points and the cusps of the teeth to obtain an occlusion plane;
respectively projecting tangent points of the corresponding teeth of the upper jaw and the lower jaw to the occlusion plane to obtain first projection points;
and calculating the vector distance between the first projection points to obtain a coverage quantity value.
Optionally, the generating module 303 is further specifically configured to:
acquiring a plane normal of the occlusion plane;
respectively projecting tangent points of the corresponding teeth of the upper jaw and the lower jaw, the upper maxillofacial target point and the lower maxillofacial target point to the plane normal to obtain second projection points;
determining a first line segment and a second line segment based on the second projection point;
determining an overbite quantity value based on the first line segment and the second line segment.
Optionally, the generating module 303 is further specifically configured to:
projecting any one of the mutually occluded target tooth pairs corresponding to the upper jaw and the lower jaw to the occlusion plane to obtain a third projection point;
and acquiring the correlation among the third projection points to obtain the occlusion type.
Optionally, the generating module 303 is further specifically configured to:
projecting cusps of the upper and lower jaw teeth to the occlusion plane to obtain a fourth projection point;
and judging according to the position information of the fourth projection point in the direction of the archwire normal line to obtain whether the tooth is in an anti-jaw state or a locked jaw state.
The oral cavity detection device based on artificial intelligence provided by the embodiment of the disclosure can execute the oral cavity detection method based on artificial intelligence provided by any embodiment of the disclosure, and has corresponding functional modules and beneficial effects of the execution method.
Embodiments of the present disclosure also provide a computer program product comprising computer programs/instructions that, when executed by a processor, implement the artificial intelligence based oral detection method provided by any of the embodiments of the present disclosure.
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. Referring now specifically to fig. 10, a schematic diagram of an electronic device 400 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device 400 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle-mounted terminal (e.g., a car navigation terminal), and the like, and fixed terminals such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 10 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 10, the electronic device 400 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 401 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 402 or a program loaded from a storage means 408 into a random access memory (RAM) 403. In the RAM 403, various programs and data necessary for the operation of the electronic device 400 are also stored. The processing device 401, the ROM 402, and the RAM 403 are connected to each other through a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 408 including, for example, tape, hard disk, etc.; and a communication device 409. The communication means 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. While fig. 10 illustrates an electronic device 400 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 409, or installed from the storage device 408, or installed from the ROM 402. The computer program, when executed by the processing device 401, performs the above-described functions defined in the artificial intelligence based oral cavity detection method of the embodiments of the present disclosure.
It should be noted that the computer readable medium of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may be separate and not incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the artificial intelligence based oral cavity detection method provided by any of the embodiments of the present disclosure.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including, but not limited to, object oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), system on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In accordance with one or more embodiments of the present disclosure, there is provided an electronic device including:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement any of the artificial intelligence based oral cavity detection methods provided by the present disclosure.
According to one or more embodiments of the present disclosure, there is provided a computer-readable storage medium storing a computer program for performing any of the artificial intelligence based oral detection methods provided by the present disclosure.
The foregoing description is only illustrative of the preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combination of the above features, but also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, a technical solution formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (10)

1. An oral cavity detection method based on artificial intelligence, which is characterized by comprising the following steps:
acquiring a three-dimensional grid of a tooth to be detected;
inputting the tooth three-dimensional grid to be detected into a pre-trained deep neural network for processing to obtain three-dimensional characteristic points;
and processing based on the three-dimensional feature points to obtain a detection result.
2. The artificial intelligence-based oral cavity detection method according to claim 1, wherein the processing based on the three-dimensional feature points to obtain a detection result comprises:
acquiring tooth identification of each tooth based on the three-dimensional feature points;
matching based on the tooth identification of each tooth and the tooth identification corresponding to the standard tooth three-dimensional grid;
in the event that no match exists for a target tooth identification, it is determined that there is no tooth corresponding to the target tooth identification.
3. The artificial intelligence-based oral cavity detection method according to claim 1, wherein the processing based on the three-dimensional feature points to obtain a detection result comprises:
acquiring tangent points and cusps of the teeth based on the three-dimensional feature points;
fitting processing is carried out on the basis of the tangent points and the cusps of the teeth to obtain an arch line;
calculating based on the arch wire to obtain the length of the dental arch;
determining a crowdedness value of the teeth based on the arch length and the sum of the widths of all teeth.
4. The artificial intelligence based oral cavity detection method according to claim 1, wherein the processing based on the three-dimensional feature points to obtain a detection result comprises:
acquiring tangent points and cusp points of the teeth based on the three-dimensional feature points;
fitting processing is carried out on the basis of the tangent point and the cusp of the tooth, and an occlusion plane is obtained;
respectively projecting tangent points of the corresponding teeth of the upper jaw and the lower jaw to the occlusion plane to obtain first projection points;
and calculating the vector distance between the first projection points to obtain a coverage quantity value.
5. The artificial intelligence-based oral detection method of claim 4, further comprising:
acquiring a plane normal of the occlusion plane;
respectively projecting tangent points of the corresponding teeth of the upper jaw and the lower jaw, an upper maxillofacial target point and a lower maxillofacial target point to the plane normal to obtain second projection points;
determining a first line segment and a second line segment based on the second projection point;
determining an overbite amount value based on the first line segment and the second line segment.
6. The artificial intelligence based oral detection method of claim 4, further comprising:
projecting any one of the mutually occluded target tooth pairs corresponding to the upper jaw and the lower jaw to the occlusion plane to obtain a third projection point;
and acquiring the correlation between the third projection points to obtain the occlusion type.
7. The artificial intelligence-based oral detection method of claim 4, further comprising:
projecting cusps of the upper and lower jaw teeth to the occlusion plane to obtain a fourth projection point;
and judging according to the position information of the fourth projection point in the direction of the archwire normal to obtain whether the tooth is in an anti-jaw state or a locked jaw state.
8. An oral cavity detection device based on artificial intelligence, comprising:
the acquisition module is used for acquiring a three-dimensional grid of the tooth to be detected;
the processing module is used for inputting the tooth three-dimensional grid to be detected into a pre-trained deep neural network for processing to obtain three-dimensional characteristic points;
and the generating module is used for processing based on the three-dimensional characteristic points to obtain a detection result.
9. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing the processor-executable instructions;
the processor configured to read the executable instructions from the memory and execute the instructions to implement the artificial intelligence based oral detection method of any one of claims 1-7.
10. A computer-readable storage medium, characterized in that the storage medium stores a computer program for executing the artificial intelligence based oral cavity detection method according to any one of the preceding claims 1-7.
CN202210382053.9A 2022-04-12 2022-04-12 Oral cavity detection method, device, electronic equipment and medium based on artificial intelligence Pending CN114782343A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210382053.9A CN114782343A (en) 2022-04-12 2022-04-12 Oral cavity detection method, device, electronic equipment and medium based on artificial intelligence
PCT/CN2023/087795 WO2023198099A1 (en) 2022-04-12 2023-04-12 Artificial-intelligence-based oral cavity detection method and apparatus, and electronic device and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210382053.9A CN114782343A (en) 2022-04-12 2022-04-12 Oral cavity detection method, device, electronic equipment and medium based on artificial intelligence

Publications (1)

Publication Number Publication Date
CN114782343A true CN114782343A (en) 2022-07-22

Family

ID=82428588

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210382053.9A Pending CN114782343A (en) 2022-04-12 2022-04-12 Oral cavity detection method, device, electronic equipment and medium based on artificial intelligence

Country Status (2)

Country Link
CN (1) CN114782343A (en)
WO (1) WO2023198099A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023198099A1 (en) * 2022-04-12 2023-10-19 先临三维科技股份有限公司 Artificial-intelligence-based oral cavity detection method and apparatus, and electronic device and medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108491850B (en) * 2018-03-27 2020-04-10 北京正齐口腔医疗技术有限公司 Automatic feature point extraction method and device of three-dimensional tooth mesh model
CN109363786B (en) * 2018-11-06 2021-03-05 上海牙典软件科技有限公司 Tooth orthodontic correction data acquisition method and device
CN112991273B (en) * 2021-02-18 2022-12-16 山东大学 Orthodontic feature automatic detection method and system of three-dimensional tooth model
CN112989954B (en) * 2021-02-20 2022-12-16 山东大学 Three-dimensional tooth point cloud model data classification method and system based on deep learning
CN114782343A (en) * 2022-04-12 2022-07-22 先临三维科技股份有限公司 Oral cavity detection method, device, electronic equipment and medium based on artificial intelligence


Also Published As

Publication number Publication date
WO2023198099A1 (en) 2023-10-19

Similar Documents

Publication Publication Date Title
US20220218449A1 (en) Dental cad automation using deep learning
US11234798B2 (en) Scanning sequence for an intra-oral imaging system
US11357602B2 (en) Monitoring of dentition
US11107218B2 (en) Method for analyzing an image of a dental arch
US20190026893A1 (en) Method for analyzing an image of a dental arch
US20200405447A1 (en) Method for monitoring the position of teeth
CN114219897A (en) Tooth orthodontic result prediction method and system based on feature point recognition
CN114782343A (en) Oral cavity detection method, device, electronic equipment and medium based on artificial intelligence
CN115457196A (en) Occlusion adjustment method, device, equipment and medium
WO2024067033A1 (en) Mark point recognition method and apparatus, and device and storage medium
CN113516639B (en) Training method and device for oral cavity abnormality detection model based on panoramic X-ray film
WO2024087910A1 (en) Orthodontic treatment monitoring method and apparatus, device, and storage medium
WO2023198101A1 (en) Artificial intelligence-based oral cavity examination method and apparatus, electronic device, and medium
CN117338456A (en) Virtual jaw frame transfer method, device, equipment and storage medium
CN117152407A (en) Automatic positioning method for head shadow measurement mark points
US20170228924A1 (en) Method, apparatus and computer program for obtaining images
CN115546413A (en) Method and device for monitoring orthodontic effect based on portable camera and storage medium
CN114748195A (en) Scanning device, connection method and device thereof, electronic device and medium
EP4376694A1 (en) Method and system for presenting dental scan
CN115641325A (en) Tooth width calculation method of oral tooth scanning model, storage medium and electronic equipment
CN118161283A (en) Scanning processing method, device, equipment and medium
CN111292320A (en) Occlusion evaluation method and system based on three-dimensional digital model and machine learning
KR102377629B1 (en) Artificial Intelligence Deep learning-based orthodontic diagnostic device for analyzing patient image and generating orthodontic diagnostic report and operation method thereof
Popović et al. Teledentistry in dental care of children
CN117257499A (en) Occlusion cross section-based occlusion measurement method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination