CN111612795B - Method for automatically extracting feature points of preoperative nasal-alveolar process appliance digital model

Info

Publication number
CN111612795B
CN111612795B (application CN202010338450.7A)
Authority
CN
China
Prior art keywords
points
model
coordinates
point
plane
Prior art date
Legal status
Active
Application number
CN202010338450.7A
Other languages
Chinese (zh)
Other versions
CN111612795A (en)
Inventor
李立
周志鹏
彭天翔
Current Assignee
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202010338450.7A priority Critical patent/CN111612795B/en
Publication of CN111612795A publication Critical patent/CN111612795A/en
Application granted granted Critical
Publication of CN111612795B publication Critical patent/CN111612795B/en

Classifications

    • G06T 7/12 - Image analysis; segmentation; edge detection; edge-based segmentation
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 3/08 - Geometric image transformations in the plane of the image; projecting images onto non-planar surfaces, e.g. geodetic screens
    • G06T 7/11 - Image analysis; segmentation; edge detection; region-based segmentation
    • G06T 7/187 - Image analysis; segmentation involving region growing, region merging or connected component labelling
    • G06T 7/73 - Image analysis; determining position or orientation of objects or cameras using feature-based methods
    • G16H 30/40 - ICT specially adapted for the handling or processing of medical images, e.g. editing
    • G06T 2207/30036 - Indexing scheme for image analysis; biomedical image processing; dental; teeth


Abstract

The invention discloses a method for automatically extracting feature points from a preoperative nasal-alveolar process appliance digital model, comprising the following steps: S1, obtaining an oral impression model file and preprocessing it; S2, transforming the coordinate system of the preprocessed oral impression model; S3, segmenting the transformed oral impression model; S4, projecting the digital models of the alveolar bone on both sides onto a two-dimensional plane and fitting the projected images of both sides to obtain a fitted figure; and S5, projecting the healthy-side and affected-side alveolar bone onto side planes according to the fitted figure, segmenting out the ridge lines from the side projections, and extracting the feature-point coordinates from the trend of the ridge lines. The method reduces the three-dimensional digital model data to two dimensions and extracts the feature points with image fitting and feature recognition techniques. It can guide the design of the preoperative cleft lip and palate correction model, reduce surgical risk, and improve the correction success rate, and therefore has good commercial prospects and value.

Description

Method for automatically extracting feature points of preoperative nasal-alveolar process appliance digital model
Technical Field
The invention relates to the field of medical image processing, in particular to an automatic extraction method for feature points of a preoperative nasal-alveolar process appliance digital model.
Background
The digital model of the preoperative nasal-alveolar process appliance (PNAM) is a correction model for infant cleft lip and palate patients; it helps reduce the difficulty of cleft lip and palate surgery, improve surgical outcomes, and reduce the number of operations required. The PNAM model is stored as a three-dimensional file with the .stl suffix, which contains a finite number of triangular patches.
The feature points of the PNAM digital model are points marked to evaluate its correction effect: the end points T and T' of the bilateral alveolar ridges, the healthy-side buccal frenulum C, the affected-side buccal frenulum C', and the labial frenulum I. At the final stage of correction these feature points should lie in a symmetrical relationship.
The existing way of processing the digital model is to mark the points manually on the three-dimensional model with engineering software. This depends heavily on the physician's experience, offers a low degree of automation and poor accuracy, and cannot extract the points automatically. To improve the marking accuracy of the feature points and to extract their coordinates automatically, the present feature-point extraction method is proposed.
Disclosure of Invention
Aiming at the defects of the prior art, the technical problem to be solved by the invention is to provide an automatic feature-point extraction method for a preoperative nasal-alveolar process appliance digital model, which can be used to objectively evaluate the correction effect of a PNAM appliance.
The technical scheme adopted by the invention for solving the technical problems is as follows:
The invention provides a method for automatically extracting feature points of a preoperative nasal-alveolar process appliance digital model, comprising the following steps:
S1, obtaining an oral impression model file and preprocessing it, the oral impression model being a digital model composed of a number of triangular patches;
S2, transforming the coordinate system of the preprocessed oral impression model to eliminate the offset angle of the base plane;
S3, segmenting the transformed oral impression model, removing the redundant parts and keeping only the digital models of the healthy-side and affected-side alveolar bone to be processed;
S4, projecting the digital models of the alveolar bone on both sides onto a two-dimensional plane, and fitting the projected images of both sides to obtain a fitted figure;
and S5, projecting the healthy-side and affected-side alveolar bone onto side planes according to the fitted figure, segmenting out the ridge lines from the side projections, and extracting the feature-point coordinates from the trend of the ridge lines.
Further, the method for preprocessing the oral impression model in step S1 of the invention comprises:
acquiring the .stl file of the oral impression model, in which triangular patch data are stored; in the finite-element manner, the three-dimensional digital model is approximated by a finite number of triangular facets; the oral impression model is parsed with the mathematical software MATLAB, the coordinates of each vertex of the triangular patches are separated out, and the model is redrawn in the software with the patch function.
Further, the method for transforming the coordinate system in step S2 of the invention comprises:
calculating the base-plane offset angle: take any two points p1(x1, y0, z1) and p2(x2, y0, z2) on the base plane with the same y coordinate, and compute the offset angle θ1 = arctan((z2 - z1)/(x2 - x1)) in the xoz direction; traverse all triangle vertices p(x, y, z0) of the oral impression model and modify their z coordinates so that the z coordinate of each point p becomes z0 - (x - x1)·tan θ1, the transformed point being denoted p3(x, y, z3);
take any two points p1(x, y1, z1) and p2(x, y2, z2) on the base plane with the same x coordinate, and compute the offset angle θ2 = arctan((z2 - z1)/(y2 - y1)) in the yoz direction; traverse all triangle vertices p3(x, y, z3) of the oral impression model and modify their z coordinates so that the z coordinate of each point p3 becomes z3 - (y - y1)·tan θ2, the transformed point being denoted p4(x, y, z4).
Further, the method for segmenting the oral impression model in step S3 of the invention comprises:
marking three points on the base plane with MATLAB, from which a plane Ax + By + Cz = 1 is determined, and keeping the part of the model above this plane, i.e. the vertices satisfying
Ax + By + Cz ≥ 1;
the digital model is then divided into a healthy side and an affected side at a boundary coordinate value, completing the segmentation.
Further, the method for performing image fitting in step S4 of the invention is:
S41, projecting the segmented digital models of the healthy-side and affected-side alveolar bone onto a two-dimensional plane;
S42, binarizing the projected images;
S43, traversing every pixel of the binary images, finding the edge pixels of each image, and storing the edge point set in a sequential list;
and S44, performing ellipse fitting on each of the two images, with the least-squares objective of minimizing the sum of squared errors over the data points, determining the position of each fitted ellipse, and restoring the two ellipses to the original three-dimensional digital model to obtain the fitted figure.
Further, the method for extracting the feature points in step S5 of the invention comprises:
S51, projecting the fitted healthy-side and affected-side alveolar bone onto side planes and binarizing the projections;
S52, separating the lower boundary line of each projection to obtain the ridge line of the healthy side and the ridge line of the affected side;
and S53, analyzing the gradient of each ridge line and determining the feature-point coordinates from the trend of the ridge line.
Further, the specific method for projecting the model onto a two-dimensional plane in step S41 of the invention is as follows: read all vertex coordinates of the model, store them in an array, traverse the points (x, y, z) in the array, and transform each coordinate point to generate new coordinates:
x' = x, y' = y, z' = 0;
the figure formed by the transformed point set is then the projection H1 of the model onto the xoy plane;
the specific method for binarizing the projected image in step S42 is as follows: traverse the pixels of the picture H1 and convert the value p of each pixel (x, y) to generate the binary image H2:
p(x, y) = 255 if (x, y) lies in the projected region, and p(x, y) = 0 otherwise;
the specific method for acquiring the boundary points of the binarized image in step S43 is: scan the binary image row by row; in each row the first scanned point with pixel value 255 is a boundary point, and the boundary points are stored as the point set Ω(x, y);
the specific method for performing ellipse fitting on the image in step S44 is: assume the equation of an ellipse at an arbitrary position in the plane is x^2 + Axy + By^2 + Cx + Dy + E = 0; for all points in the point set Ω(x, y), the objective function to be fitted is
F(A, B, C, D, E) = Σ (x_i^2 + A·x_i·y_i + B·y_i^2 + C·x_i + D·y_i + E)^2, summed over all (x_i, y_i) in Ω;
to minimize F it is required that
∂F/∂A = ∂F/∂B = ∂F/∂C = ∂F/∂D = ∂F/∂E = 0;
solving these equations gives the parameters A, B, C, D and E, and hence the ellipse equation; the ellipses are restored into the three-dimensional model, and the two vertex coordinates found there are the two feature points T and T'.
Further, the specific method for performing the side-plane projection and binarization of the healthy side and the affected side in step S51 of the invention is as follows: store all the points of the healthy side in an array and traverse the array with the following coordinate transformation to obtain the projection of the model onto the xoz plane:
x' = x, y' = 0, z' = z;
then traverse the array with the following coordinate transformation to obtain the projection of the model onto the yoz plane:
x' = 0, y' = y, z' = z;
the two projection planes are then binarized in the same way as in step S42, and the binarized projection images are denoted P1 (the xoz projection) and P2 (the yoz projection);
the specific method for obtaining the ridge lines of the healthy side and the affected side in step S52 is: scan the binary images P1 and P2 column by column from bottom to top; in each column the first scanned point with pixel value 255 is a boundary point, the boundary points are stored as the point sets Ω1(x, z) and Ω2(y, z), and the line formed by each point set is a ridge line;
the specific method for analyzing the ridge-line gradient in step S53 is as follows: feed the point set Ω1(x, z) into the mathematical software MATLAB, fit a polynomial with the polyfit function, and differentiate the curve with the polyder function; obtain the points p1_1(x1, z1) and p1_2(x2, z2) where the derivative is 0; the same operation on the point set Ω2 gives the points p2_1(y1, z1) and p2_2(y2, z2) where the derivative is 0; the coordinates (x1, y1, z1) and (x2, y2, z2) are therefore extracted as the coordinates of the two feature points C and I;
and the same operations of steps S51 to S53 are performed on the affected side to obtain the coordinates of the feature point C'.
The invention has the following beneficial effects: the method for automatically extracting feature points of the preoperative nasal-alveolar process appliance digital model combines three-dimensional digital model processing with two-dimensional image segmentation and fitting techniques to extract the feature points of the PNAM digital model, and can be used to test the effect of the orthodontic treatment. It helps reduce the difficulty of cleft lip and palate surgery, lower the surgical risk, reduce the number of operations, and improve the correction success rate.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
FIG. 1 shows projection results of healthy and affected side models according to an embodiment of the present invention.
FIG. 2 shows the result of ellipse fitting of two alveolar bones according to an embodiment of the present invention.
FIG. 3 shows an embodiment of the present invention in which a two-dimensional image is restored to three-dimensional feature point positions.
FIG. 4 is a healthy-side ridge line diagram according to an embodiment of the present invention.
FIG. 5 shows the feature points I and C obtained after the healthy-side ridge line is acquired according to the embodiment of the present invention.
FIG. 6 shows the feature point C' obtained after the affected-side ridge line is acquired according to the embodiment of the present invention.
FIG. 7 is a flowchart of a method according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
The method for automatically extracting feature points of a preoperative nasal-alveolar process appliance digital model according to an embodiment of the invention comprises the following steps:
S1, obtaining an oral impression model file and preprocessing it, the oral impression model being a digital model composed of a number of triangular patches;
S2, transforming the coordinate system of the preprocessed oral impression model to eliminate the offset angle of the base plane;
S3, segmenting the transformed oral impression model, removing the redundant parts and keeping only the digital models of the healthy-side and affected-side alveolar bone to be processed;
S4, projecting the digital models of the alveolar bone on both sides onto a two-dimensional plane, and fitting the projected images of both sides to obtain a fitted figure;
and S5, projecting the healthy-side and affected-side alveolar bone onto side planes according to the fitted figure, segmenting out the ridge lines from the side projections, and extracting the feature-point coordinates from the trend of the ridge lines.
The method for automatically extracting the feature points of a preoperative nasal-alveolar process appliance (PNAM) digital model comprises five parts, namely oral impression pretreatment, model coordinate system transformation, segmentation of a healthy side and an affected side of the model, fitting of alveolar bone images on two sides and extraction of the feature points.
S1, preprocessing the oral impression.
The oral impression is provided as an .stl file whose content is a set of triangular patches: in the finite-element manner, the three-dimensional digital model is approximated by a finite number of triangular facets, and the more patches there are, the finer the model and the larger the file. To operate on the position coordinates of every point in the file, the digital model file is parsed with the mathematical software MATLAB, the coordinates of each triangle vertex are separated out, and the triangles are then redrawn in the software with the patch function.
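As an illustration of this preprocessing step, the following minimal MATLAB sketch reads an .stl impression, separates the vertex and face data, and redraws it with the patch function; it assumes MATLAB R2018b or later (where stlread returns a triangulation object), and the file name impression.stl is a placeholder.

% Minimal sketch of step S1: read the .stl impression and redraw it.
TR = stlread('impression.stl');      % placeholder file name
V  = TR.Points;                      % N-by-3 array of vertex coordinates
F  = TR.ConnectivityList;            % M-by-3 array of triangle (patch) indices

figure;
patch('Faces', F, 'Vertices', V, ...
      'FaceColor', [0.8 0.8 0.9], 'EdgeColor', 'none');
axis equal; camlight; lighting gouraud;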
S2, transforming the coordinate system.
Since the impression is made manually, the base plane will have an offset angle.
Calculate the base-plane offset angle: take any two points p1(x1, y0, z1) and p2(x2, y0, z2) on the base plane with the same y coordinate and compute the offset angle θ1 = arctan((z2 - z1)/(x2 - x1)) in the xoz direction. Traverse all triangle vertices p(x, y, z0) of the model and modify their z coordinates so that the z coordinate of each point p becomes z0 - (x - x1)·tan θ1; the transformed point is denoted p3(x, y, z3).
Take any two points p1(x, y1, z1) and p2(x, y2, z2) on the base plane with the same x coordinate and compute the offset angle θ2 = arctan((z2 - z1)/(y2 - y1)) in the yoz direction. Traverse all triangle vertices p3(x, y, z3) of the model and modify their z coordinates so that the z coordinate of each point p3 becomes z3 - (y - y1)·tan θ2; the transformed point is denoted p4(x, y, z4).
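A minimal MATLAB sketch of this deskewing step is given below; it assumes V is the N-by-3 vertex array from step S1, and the base-plane points p1, p2, q1, q2 are hypothetical sample values standing in for points picked on the actual base plane.

% Sketch of step S2: remove the base-plane offset angles.
p1 = [10, 0, 1.2];   p2 = [60, 0, 2.7];    % same y coordinate (hypothetical values)
q1 = [30, 5, 1.5];   q2 = [30, 45, 3.0];   % same x coordinate (hypothetical values)

theta1 = atan((p2(3) - p1(3)) / (p2(1) - p1(1)));   % deflection in the xoz direction
V(:,3) = V(:,3) - (V(:,1) - p1(1)) * tan(theta1);   % z := z - (x - x1)*tan(theta1)

theta2 = atan((q2(3) - q1(3)) / (q2(2) - q1(2)));   % deflection in the yoz direction
V(:,3) = V(:,3) - (V(:,2) - q1(2)) * tan(theta2);   % z := z - (y - y1)*tan(theta2)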
and S3, segmenting the healthy side and the affected side of the model.
A digital model of a preoperative nasal-alveolar process appliance (PNAM) is generally used for preoperative orthodontics in cleft lip and palate, so the region of interest in the model is the two impression portions of the alveolar bone, which determine the quality of the preoperative cleft lip and palate repair. Three points on the base plane (the vertices on both sides of the semicircle and the midpoint of the circular arc) are marked using MATLAB, and a plane Ax + By + Cz = 1 is determined from these three points. The part of the model above this plane is kept, i.e. the vertices satisfying
Ax + By + Cz ≥ 1,
and the resulting model is called M2. The digital model is then divided into a healthy side and an affected side at a boundary coordinate value, the healthy side being named M2_0 and the affected side M2_1.
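The segmentation step can be sketched in MATLAB as follows, assuming V is the deskewed vertex array from step S2; the three marked base-plane points in P and the x-coordinate boundary used to split the two sides are hypothetical stand-ins for the manually marked values.

% Sketch of step S3: cut away everything below the base plane and split the sides.
P = [ 5  5 0.10;                 % hypothetical marked base-plane points (one per row)
     70  5 0.20;
     38 60 0.15];
abc  = P \ ones(3,1);            % solve A*x + B*y + C*z = 1 for [A; B; C]
keep = V * abc >= 1;             % vertices on the upper side of the plane
M2   = V(keep, :);

x_split = median(M2(:,1));       % assumed boundary; the patent marks it manually
M2_0 = M2(M2(:,1) <  x_split, :);   % healthy side
M2_1 = M2(M2(:,1) >= x_split, :);   % affected side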
S4, fitting the alveolar bone images on both sides.
The model M2 is projected onto a two-dimensional plane; after binarization the background is black and the projection is white. The point set of M2 is transformed with the following coordinates, and the resulting picture is named H1:
x' = x, y' = y, z' = 0.
The picture is then binarized: the pixels of H1 are traversed and the value p of each pixel (x, y) is converted to generate the binary image H2:
p(x, y) = 255 if (x, y) lies in the projected region, and p(x, y) = 0 otherwise.
The projection results are shown in FIG. 1.
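A minimal MATLAB sketch of the projection and binarization is shown below; it assumes M2 is the vertex array kept in step S3, and the pixel resolution res is an assumption rather than a value from the patent.

% Sketch of step S4 (projection to xoy and binarization).
res = 0.25;                                    % assumed pixel size in model units
xy  = M2(:, 1:2);                              % projection onto xoy (z dropped)
col = floor((xy(:,1) - min(xy(:,1))) / res) + 1;
row = floor((xy(:,2) - min(xy(:,2))) / res) + 1;

H2 = zeros(max(row), max(col), 'uint8');       % black background
H2(sub2ind(size(H2), row, col)) = 255;         % white where the model projects
figure; imagesc(H2); axis image; colormap(gray);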
Observation of the alveolar bone parts on both sides suggests fitting each of them with an ellipse. Ellipse fitting is performed separately on the two figures, using a fitting scheme based on the least-squares method. The binarized image H2 is scanned row by row; the first point with pixel value 255 scanned in each row is a boundary point, and the boundary points are stored as the point set Ω(x, y). Following the least-squares method, the optimization objective is to minimize the sum of squared errors over all data points.
Let the equation of an ellipse at an arbitrary position in the plane be x^2 + Axy + By^2 + Cx + Dy + E = 0; for all points in the point set Ω(x, y), the objective function to be fitted is
F(A, B, C, D, E) = Σ (x_i^2 + A·x_i·y_i + B·y_i^2 + C·x_i + D·y_i + E)^2, summed over all (x_i, y_i) in Ω.
To minimize F, the following must hold:
∂F/∂A = ∂F/∂B = ∂F/∂C = ∂F/∂D = ∂F/∂E = 0.
Solving these equations gives the parameters A, B, C, D and E, and hence the ellipse equation. Once the position of each fitted ellipse is determined, it is superimposed on the original image as a black ellipse, as shown in FIG. 2. The two ellipses are then restored to the original three-dimensional digital model, as shown in FIG. 3. From this fitted figure, the lower vertices of the major axes of the two ellipses are taken as two feature points: the end point T of the alveolar ridge on the healthy side and the end point T' of the alveolar ridge on the affected side.
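The boundary scan and least-squares ellipse fit can be sketched in MATLAB as follows, assuming H2 is the binary image built above; the sketch solves the linear least-squares problem for the implicit ellipse x^2 + Axy + By^2 + Cx + Dy + E = 0 directly with the backslash operator instead of writing out the partial-derivative equations.

% Sketch of step S4 (boundary scan and ellipse fit) for one side.
Omega = [];                                    % boundary point set (x = column, y = row)
for r = 1:size(H2, 1)
    c = find(H2(r, :) == 255, 1, 'first');     % first white pixel in this row
    if ~isempty(c), Omega = [Omega; c, r]; end %#ok<AGROW>
end

x = Omega(:,1);  y = Omega(:,2);
M = [x.*y, y.^2, x, y, ones(size(x))];         % residual: x.^2 + M*[A;B;C;D;E]
coef = M \ (-x.^2);                            % least-squares solution
A = coef(1); B = coef(2); C = coef(3); D = coef(4); E = coef(5);

% Overlay the zero level set of the fitted ellipse on the binary image.
[X, Y] = meshgrid(1:size(H2,2), 1:size(H2,1));
ell = X.^2 + A*X.*Y + B*Y.^2 + C*X + D*Y + E;
figure; imagesc(H2); axis image; colormap(gray); hold on;
contour(X, Y, ell, [0 0], 'k');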
S5, extracting the feature points.
All the points of the healthy-side model are stored in an array, which is traversed with the following coordinate transformation to obtain the projection of the model onto the xoz plane:
x' = x, y' = 0, z' = z.
The array is traversed again with the following coordinate transformation to obtain the projection of the model onto the yoz plane:
x' = 0, y' = y, z' = z.
The binarized projection images are denoted P1 (the xoz projection) and P2 (the yoz projection). P1 and P2 are scanned column by column from bottom to top; the first point with pixel value 255 scanned in each column is a boundary point, the boundary points are stored as point sets Ω1(x, z) and Ω2(y, z), and the line formed by each point set is a ridge line, as shown in FIG. 4. The point set Ω1(x, z) is fed into the mathematical software MATLAB, a polynomial is fitted with the polyfit function, and the curve is differentiated with the polyder function; the points p1_1(x1, z1) and p1_2(x2, z2) where the derivative is 0 are obtained. The same operation on the point set Ω2 gives the points p2_1(y1, z1) and p2_2(y2, z2) where the derivative is 0. The coordinates (x1, y1, z1) and (x2, y2, z2) are thus extracted as the coordinates of the two feature points C (healthy-side buccal frenulum) and I (labial frenulum), as shown in FIG. 5.
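The ridge-line analysis for the healthy side can be sketched in MATLAB as follows; it assumes P1 is the binarized xoz projection built in the same way as H2, the polynomial degree deg is an assumption (the patent does not state one), and the image orientation is assumed such that scanning rows from index 1 corresponds to scanning from the bottom.

% Sketch of step S5 (ridge extraction and stationary-point search) on P1.
deg = 4;                                       % assumed polynomial degree

ridge_xz = [];                                 % one ridge point per column
for c = 1:size(P1, 2)
    r = find(P1(:, c) == 255, 1, 'first');     % first white pixel from the bottom (assumed orientation)
    if ~isempty(r), ridge_xz = [ridge_xz; c, r]; end %#ok<AGROW>
end

p  = polyfit(ridge_xz(:,1), ridge_xz(:,2), deg);   % fit z(x) along the ridge
dp = polyder(p);                                   % derivative of the fitted polynomial
x0 = roots(dp);
x0 = real(x0(abs(imag(x0)) < 1e-9));               % keep real stationary points
x0 = x0(x0 >= min(ridge_xz(:,1)) & x0 <= max(ridge_xz(:,1)));
z0 = polyval(p, x0);                               % candidate x/z positions of C and I

Repeating the same scan and fit on P2 supplies the matching y coordinates, and pairing the stationary points yields (x1, y1, z1) and (x2, y2, z2) for C and I.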
The coordinates of the feature point C' (affected-side buccal frenulum) are extracted by performing the same operations on the segmented affected-side model, as shown in FIG. 6.
It will be appreciated that modifications and variations are possible to those skilled in the art in light of the above teachings, and it is intended to cover all such modifications and variations as fall within the scope of the appended claims.

Claims (4)

1. A method for automatically extracting feature points of a preoperative nasal-alveolar process appliance digital model is characterized by comprising the following steps:
S1, obtaining an oral impression model file and preprocessing it, the oral impression model being a digital model composed of a number of triangular patches;
S2, transforming the coordinate system of the preprocessed oral impression model to eliminate the offset angle of the base plane;
S3, segmenting the transformed oral impression model, removing the redundant parts and keeping only the digital models of the healthy-side and affected-side alveolar bone to be processed;
S4, projecting the digital models of the alveolar bone on both sides onto a two-dimensional plane, and fitting the projected images of both sides to obtain a fitted figure;
S5, projecting the healthy-side and affected-side alveolar bone onto side planes according to the fitted figure, segmenting out the ridge lines from the side projections, and extracting the feature-point coordinates from the trend of the ridge lines;
the method for performing image fitting in step S4 is:
S41, projecting the segmented digital models of the healthy-side and affected-side alveolar bone onto a two-dimensional plane;
S42, binarizing the projected images;
S43, traversing every pixel of the binary images, finding the edge pixels of each image, and storing the edge point set in a sequential list;
S44, performing ellipse fitting on each of the two images, with the least-squares objective of minimizing the sum of squared errors over the data points, determining the position of each fitted ellipse, and restoring the two ellipses to the original three-dimensional digital model to obtain the fitted figure;
the method for extracting the feature points in step S5 comprises:
S51, projecting the fitted healthy-side and affected-side alveolar bone onto side planes and binarizing the projections;
S52, separating the lower boundary line of each projection to obtain the ridge line of the healthy side and the ridge line of the affected side;
S53, analyzing the gradient of each ridge line and determining the feature-point coordinates from the trend of the ridge line;
the specific method for projecting the model onto a two-dimensional plane in step S41 is as follows: read all vertex coordinates of the model, store them in an array, traverse the points (x, y, z) in the array, and transform each coordinate point to generate new coordinates:
x' = x, y' = y, z' = 0;
the figure formed by the transformed point set is then the projection H1 of the model onto the xoy plane;
the specific method for binarizing the projected image in step S42 is as follows: traverse the pixels of the picture H1 and convert the value p of each pixel (x, y) to generate the binary image H2:
p(x, y) = 255 if (x, y) lies in the projected region, and p(x, y) = 0 otherwise;
the specific method for acquiring the boundary points of the binarized image in step S43 is as follows: scan the binary image row by row; in each row the first scanned point with pixel value 255 is a boundary point, and the boundary points are stored as the point set Ω(x, y);
the specific method for performing ellipse fitting on the image in step S44 is: let the equation of an ellipse at an arbitrary position in the plane be x^2 + Axy + By^2 + Cx + Dy + E = 0; for all points in the point set Ω(x, y), the objective function to be fitted is
F(A, B, C, D, E) = Σ (x_i^2 + A·x_i·y_i + B·y_i^2 + C·x_i + D·y_i + E)^2, summed over all (x_i, y_i) in Ω;
to minimize F it is required that
∂F/∂A = ∂F/∂B = ∂F/∂C = ∂F/∂D = ∂F/∂E = 0;
solving these equations gives the parameters A, B, C, D and E, and hence the ellipse equation; the ellipses are restored into the three-dimensional model, and the two vertex coordinates found there are the two feature points T and T';
the specific method for performing the side-plane projection and binarization of the healthy side and the affected side in step S51 comprises: store all the points of the healthy side in an array and traverse the array with the following coordinate transformation to obtain the projection of the model onto the xoz plane:
x' = x, y' = 0, z' = z;
then traverse the array with the following coordinate transformation to obtain the projection of the model onto the yoz plane:
x' = 0, y' = y, z' = z;
the two projection planes are then binarized in the same way as in step S42, and the binarized projection images are denoted P1 (the xoz projection) and P2 (the yoz projection);
the specific method for obtaining the ridge lines of the healthy side and the affected side in step S52 is: scan the binary images P1 and P2 column by column from bottom to top; in each column the first scanned point with pixel value 255 is a boundary point, the boundary points are stored as the point sets Ω1(x, z) and Ω2(y, z), and the line formed by each point set is a ridge line;
the specific method for analyzing the ridge-line gradient in step S53 is as follows: feed the point set Ω1(x, z) into the mathematical software MATLAB, fit a polynomial with the polyfit function, and differentiate the curve with the polyder function; obtain the points p1_1(x1, z1) and p1_2(x2, z2) where the derivative is 0; the same operation on the point set Ω2 gives the points p2_1(y1, z1) and p2_2(y2, z2) where the derivative is 0; the coordinates (x1, y1, z1) and (x2, y2, z2) are therefore extracted as the coordinates of the two feature points C and I;
and the same operations of steps S51 to S53 are performed on the affected side to obtain the coordinates of the feature point C'.
2. The method for automatically extracting feature points of a preoperative nasal-alveolar process appliance digital model according to claim 1, wherein the method for preprocessing the oral impression model in step S1 comprises:
acquiring the .stl file of the oral impression model, in which triangular patch data are stored; in the finite-element manner, the three-dimensional digital model is approximated by a finite number of triangular facets; the oral impression model is parsed with the mathematical software MATLAB, the coordinates of each vertex of the triangular patches are separated out, and the model is redrawn in the software with the patch function.
3. The method for automatically extracting feature points of a preoperative nasal-alveolar process appliance digital model according to claim 1, wherein the method for transforming the coordinate system in step S2 comprises:
calculating the base-plane offset angle: take any two points p1(x1, y0, z1) and p2(x2, y0, z2) on the base plane with the same y coordinate, and compute the offset angle θ1 = arctan((z2 - z1)/(x2 - x1)) in the xoz direction; traverse all triangle vertices p(x, y, z0) of the oral impression model and modify their z coordinates so that the z coordinate of each point p becomes z0 - (x - x1)·tan θ1, the transformed point being denoted p3(x, y, z3);
take any two points p1(x, y1, z1) and p2(x, y2, z2) on the base plane with the same x coordinate, and compute the offset angle θ2 = arctan((z2 - z1)/(y2 - y1)) in the yoz direction; traverse all triangle vertices p3(x, y, z3) of the oral impression model and modify their z coordinates so that the z coordinate of each point p3 becomes z3 - (y - y1)·tan θ2, the transformed point being denoted p4(x, y, z4).
4. The method for automatically extracting feature points of a preoperative nasal-alveolar process appliance digital model according to claim 1, wherein the method for segmenting the oral impression model in step S3 comprises: marking three points on the base plane with MATLAB, namely the vertices on both sides of the semicircle and the midpoint of the circular arc, from which a plane Ax + By + Cz = 1 is determined; keeping the part of the model above this plane, i.e. the vertices satisfying
Ax + By + Cz ≥ 1;
and dividing the digital model into a healthy side and an affected side at a boundary coordinate value, completing the segmentation.
CN202010338450.7A 2020-04-26 2020-04-26 Method for automatically extracting feature points of preoperative nasal-alveolar process appliance digital model Active CN111612795B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010338450.7A CN111612795B (en) 2020-04-26 2020-04-26 Method for automatically extracting feature points of preoperative nasal-alveolar process appliance digital model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010338450.7A CN111612795B (en) 2020-04-26 2020-04-26 Method for automatically extracting feature points of preoperative nasal-alveolar process appliance digital model

Publications (2)

Publication Number Publication Date
CN111612795A (en) 2020-09-01
CN111612795B (en) 2022-10-18

Family

ID=72204684

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010338450.7A Active CN111612795B (en) 2020-04-26 2020-04-26 Method for automatically extracting feature points of preoperative nasal-alveolar process appliance digital model

Country Status (1)

Country Link
CN (1) CN111612795B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113205594B (en) * 2021-05-20 2022-08-02 合肥工业大学 STL-based bent pipe model skeleton extraction method and system


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105769352A (en) * 2014-12-23 2016-07-20 上海晖银信息科技有限公司 Direct step-by-step method for generating tooth correcting state
CN110619646A (en) * 2019-07-23 2019-12-27 同济大学 Single-tooth extraction method based on panoramic image

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Long-Term Effects of Clefts on Craniofacial Morphology in Patients with Unilateral Cleft Lip and Palate; Yu-Fang et al.; The Cleft Palate-Craniofacial Journal; 2005-11-01; full text *
Measurement study of plaster models of the nasolabial region in patients with unilateral cleft lip; Han Yali et al.; Journal of Ningxia Medical University; 2016-12-31; full text *
Digital presurgical correction method for neonatal unilateral cleft lip and palate; Liu Chen; China Master's Theses Full-text Database, Medicine and Health Sciences; 2014-02-15; Vol. 2014, No. 02; full text *

Also Published As

Publication number Publication date
CN111612795A (en) 2020-09-01

Similar Documents

Publication Publication Date Title
CN111415419B (en) Method and system for making tooth restoration model based on multi-source image
CN105447908B (en) Dental arch model generation method based on oral cavity scan data and CBCT data
AU2007224085B2 (en) Model- based dewarping method and apparatus
WO2018211361A1 (en) Automatic alignment and orientation of digital 3d dental arch pairs
CN106663327B (en) Automatic rejoining of 3-D surfaces
CN112200843A (en) CBCT and laser scanning point cloud data tooth registration method based on hyper-voxels
CN111798571A (en) Tooth scanning method, device, system and computer readable storage medium
JP2004040395A (en) Image distortion correction apparatus, method, and program
CN113160036B (en) Face changing method for image keeping face shape unchanged
KR101759188B1 (en) the automatic 3D modeliing method using 2D facial image
CN111612795B (en) Method for automatically extracting feature points of preoperative nasal-alveolar process appliance digital model
CN112308895A (en) Method for constructing realistic dentition model
KR102461343B1 (en) Automatic tooth landmark detection method and system in medical images containing metal artifacts
CN116309302A (en) Extraction method of key points of skull lateral position slice
CN107146232B (en) Data fusion method of oral CBCT image and laser scanning tooth grid
CN110889892B (en) Image processing method and image processing device
JP4013060B2 (en) Image correction method and image correction apparatus
CN111951216B (en) Automatic measuring method for balance parameters of spine coronal plane based on computer vision
CN110223396A (en) One kind is based on morphologic backbone simulation antidote and device
CN114322841B (en) Dynamic three-dimensional measurement method and system for projection grating phase shift generation
CN115944416B (en) Auxiliary determination method, system, medium and electronic equipment for oral implantation scheme
US20230048005A1 (en) Method of Operating Intraoral Scanner for Fast and Accurate Full Mouth Reconstruction
CN114848180B (en) Soft tissue surface type change prediction method and system based on self-adaptive orthodontic software
EP4280155A1 (en) Improved manufacturing of dental implants based on digital scan data alignment
CN112184743A (en) Segmentation and pre-labeling method for pectoral muscle and nipple area in breast molybdenum target image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant