CN112308895B - Method for constructing realistic dentition model - Google Patents


Info

Publication number: CN112308895B (application CN201910698286.8A)
Authority: CN (China)
Other versions: CN112308895A
Inventors: 柯永振, 赵文杰, 杨帅, 王凯
Assignee: Tianjin Polytechnic University
Application filed by Tianjin Polytechnic University
Legal status: Active

Classifications

    • G06T7/55 — Depth or shape recovery from multiple images
    • A61C19/04 — Measuring instruments specially adapted for dentistry
    • G06T17/10 — Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G06T2207/30036 — Biomedical image processing: Dental; Teeth
    • G06T2210/41 — Computer graphics indexing: Medical


Abstract

The invention provides a method for constructing a realistic dentition model, belonging to the field of oral medicine. The method comprises the following steps: (1) acquiring a three-dimensional dental model and at least three intraoral photographs, all of which are color photographs of the oral cavity; (2) obtaining the optimal camera pose corresponding to each intraoral photograph by using the three-dimensional dental model and the intraoral photographs; (3) mapping each intraoral photograph onto the three-dimensional dental model according to its optimal camera pose, finding the texture seams on the three-dimensional dental model, and then eliminating them to obtain the realistic dentition model. The method can semi-automatically construct a color dentition model with real image texture using only a small amount of simple manual operation, and the texture image on the dentition model shows no obvious distortion, dislocation, or seams, so a good simulation effect is obtained. The method improves the simulation effect of the dental model and reduces manual intervention in dental medical auxiliary systems such as aesthetic tooth restoration.

Description

Method for constructing realistic dentition model
Technical Field
The invention belongs to the field of oral medicine, and particularly relates to a method for constructing a realistic dentition model.
Background
With the development of oral medicine, requirements on digital technology keep rising; among them, improving data display and simulation effects is one of the key current research topics. Tooth color is important in the aesthetic restoration of teeth: it has been found to be one of the important factors affecting the aesthetic restorative result for the patient, and many patients express great interest in having realistic textures on digital restorations. To obtain a dentition model with realistic colors, one approach is to acquire it directly from an expensive intraoral scanner, which is clearly too costly and increases the operating costs of the oral care facility. Moreover, the color-textured dentition models produced by most manufacturers' intraoral scanners can only be used within the vendor's own software system, which is highly closed. Another approach is to obtain realistic color dentition models with less expensive equipment through effective methods from computer-aided techniques and computer graphics.
In computer graphics, obtaining a realistic model by overlaying multiple photographs onto a 3D model has received much attention; the general method mainly comprises three parts: 2D/3D registration, projection texture mapping, and texture fusion. Estimating the relative pose between a 3D object and its 2D projection is a fundamental problem of 2D/3D registration in computer vision and medical imaging, and establishing the correspondence between 2D and 3D points is not easy. Although extracting and matching features between 3D point clouds or between 2D images is a rather mature technology, matching 2D and 3D features is very difficult because 3D geometric attributes are lost in the 2D projection, which easily causes problems such as distortion and occlusion. For example, features on 3D models typically exploit the 3D geometry of the object, which is lost in 2D projection, while features on 2D images typically exploit image textures that are either unavailable on the 3D model or severely distorted in the 2D projection. In constructing the realistic three-dimensional model, finding the relative pose between the 3D object and its 2D projection amounts to calculating the internal and external parameters of the imaging camera. A perspective projection transformation matrix is then calculated from the camera's internal and external parameters, two-dimensional texture coordinates are obtained, a mapping is established between the 2D texture image and the 3D model, and color information is transferred from the image to the 3D model.
In the construction of realistic dental models, researchers have made targeted studies. However, current research mainly has the following problems:
The first problem is how to use as few manual interactions as possible while still achieving the desired simulation effect. The traditional method for constructing a realistic dentition model is to adjust a suitable camera view and then manually select feature points for registration. This requires labeling a large number of feature points; the operation is tedious and demands high professional skill, and the precision of the marked points directly affects the final registration precision: if the marked points do not correspond accurately, the final mapping result is obviously misaligned.
The second problem is how to perform multi-modal registration of 2D intraoral photographs with 3D dentition models given the limited features available. Unlike regular buildings or other objects with complete outlines, dentition models comprise both soft tissue and hard tissue, and in multi-view-based 2D/3D registration the 2D dental model projection images share too few similar features (points, contours, shapes, etc.) with the intraoral photographs; fully automatic registration therefore cannot be achieved by feature matching on a single feature or by similarity-measure calculation alone, which increases the registration difficulty of the multi-modal data.
The third problem is that obvious seams appear when multiple photographs are projection-texture-mapped, which degrades the simulation effect. Because the characteristics of intraoral photographs shot at different angles, such as illumination conditions, color, contrast, and brightness, differ greatly, obvious seams exist between adjacent textures even when each photograph corresponds correctly to the texture coordinates of the dental model, seriously affecting the final simulation effect.
Disclosure of Invention
The invention aims to solve the problems in the prior art, and provides a method for constructing a realistic dentition model, so that the simulation effect of the tooth model is improved.
The invention is realized by the following technical scheme:
a method of constructing a realistic dentition model, the method comprising:
(1) acquiring a three-dimensional dental model and at least three intraoral photographs; the oral photographs are color photographs of the oral cavity;
(2) obtaining the optimal camera pose corresponding to each intraoral photograph by utilizing the three-dimensional dental model and the intraoral photographs;
(3) and mapping each intraoral photograph onto the three-dimensional dental model according to the optimal camera posture corresponding to each intraoral photograph, finding texture seams on the three-dimensional dental model, and then eliminating the texture seams to obtain the realistic dentition model.
The operation of the step (1) comprises the following steps:
obtaining a tooth plaster model, and scanning the tooth plaster model to obtain a three-dimensional tooth model;
photographing the patient's oral cavity from the right side, directly in front, and the left side, respectively, to obtain 3 intraoral photographs: the right-view image P_R, the frontal image P_F, and the left-view image P_L.
The operation of the step (2) comprises the following steps:
the following treatments were performed for each oral photograph:
marking points on the same teeth on the intraoral picture and the three-dimensional dental model to respectively obtain a set of marking points of the intraoral picture and the three-dimensional dental model; each marking point set comprises at least three marking points;
for each intraoral photograph, performing projection sampling on the three-dimensional dental model to obtain all projection images corresponding to that photograph, and transferring the mark points on the three-dimensional dental model to each projection image by perspective projection to serve as mark points on the projection images;
acquiring the intraoral picture and a target point set on each projection image corresponding to the intraoral picture;
and calculating the optimal camera pose corresponding to the intraoral photograph according to the target point sets.
For each intraoral photograph, the operation of performing perspective projection sampling on the three-dimensional dental model to obtain all projection images corresponding to that photograph comprises:
A1, rotating the camera from the original position about the X axis and the Y axis, respectively, to the initial position corresponding to the photograph. The original position is defined as follows: the center point of the three-dimensional dental model coincides with the origin of the world coordinate system, and the Z axis of the world coordinate system points directly forward from the three-dimensional dental model; the camera is located directly in front of the three-dimensional dental model, its focal point is set at the center point of the model, the dentition on the model is symmetric about the Z axis, the plane in which the camera lies is perpendicular to the Z axis, and the projection direction of the camera lies on the Z axis. A sphere is drawn with the origin as center and, as radius, the distance from the camera's intersection with the Z axis at the original position to the origin; this sphere is the rotation sphere. When the camera rotates about the X and Y axes, its projection direction always points to its focal point and its distance to the focal point remains unchanged; rotating about the X and Y axes means moving the camera on the rotation sphere. The initial position comprises an initial angle of rotation about the X axis and an initial angle of rotation about the Y axis.
A2, rotating the camera about the X axis and the Y axis to obtain all projection images corresponding to the intraoral photograph: rotating the camera by θ_x about the X axis and θ_y about the Y axis yields a projection image of the three-dimensional dental model on the view plane at rotation angles θ_x and θ_y, with θ_x ∈ [α, β] and θ_y ∈ [γ, δ];
where Δθ is the rotation step length of the camera and s is the rotation period (the number of sampling steps per axis).
For the right-view image P_R:
θ_x ∈ [α, β] and θ_y ∈ [γ_R, δ_R],
where α is the initial angle of rotation of the camera about the X axis and γ_R is the initial angle of rotation about the Y axis when acquiring the projection images corresponding to the right-view image.
When acquiring the projection images corresponding to the right-view image, the rotation angles of the camera are:
θ_x = α + m_R · Δθ, where m_R ∈ (0, s];
θ_y = γ_R + n_R · Δθ, where n_R ∈ (0, s];
where m_R denotes the number of rotations of the camera about the X axis and n_R the number of rotations about the Y axis when acquiring the projection images corresponding to the right-view image. Finally, s × s projection images at different angles are obtained for the right-view image.
For the frontal image P_F:
θ_x ∈ [α, β] and θ_y ∈ [γ_F, δ_F],
where α is the initial angle of rotation of the camera about the X axis and γ_F is the initial angle of rotation about the Y axis when acquiring the projection images corresponding to the frontal image.
When acquiring the projection images corresponding to the frontal image, the rotation angles of the camera are:
θ_x = α + m_F · Δθ, where m_F ∈ (0, s];
θ_y = γ_F + n_F · Δθ, where n_F ∈ (0, s];
where m_F denotes the number of rotations of the camera about the X axis and n_F the number of rotations about the Y axis when acquiring the projection images corresponding to the frontal image. Finally, s × s projection images at different angles are obtained for the frontal image.
For the left-view image P_L:
θ_x ∈ [α, β] and θ_y ∈ [γ_L, δ_L],
where α is the initial angle of rotation of the camera about the X axis and γ_L is the initial angle of rotation about the Y axis when acquiring the projection images corresponding to the left-view image.
When acquiring the projection images corresponding to the left-view image, the rotation angles of the camera are:
θ_x = α + m_L · Δθ, where m_L ∈ (0, s];
θ_y = γ_L + n_L · Δθ, where n_L ∈ (0, s];
where m_L denotes the number of rotations of the camera about the X axis and n_L the number of rotations about the Y axis when acquiring the projection images corresponding to the left-view image. Finally, s × s projection images at different angles are obtained for the left-view image.
The rotation of the camera by θ_x about the X axis and θ_y about the Y axis is realized as follows:
first move the camera from the original position along the intersection line of the rotation sphere and the YZ plane until it makes an angle θ_x with the XZ plane, then move it along the intersection line of the rotation sphere with the plane that passes through the X axis at an angle θ_x to the XZ plane, until the camera makes an angle θ_y with the YZ plane;
or first move the camera from the original position along the intersection line of the rotation sphere and the XZ plane until it makes an angle θ_y with the YZ plane, then move it along the intersection line of the rotation sphere with the plane that passes through the Y axis at an angle θ_y to the YZ plane, until the camera makes an angle θ_x with the XZ plane.
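The projection sampling described above can be sketched in Python. This is an illustrative reconstruction, not code from the patent: the function names are hypothetical, and a per-axis step Δθ derived from the sampled range and the period s is assumed.

```python
import math

def camera_position(theta_x, theta_y, r):
    """Place the camera on the rotation sphere of radius r.

    Starting from (0, 0, r) on the +Z axis (the original position), rotate
    by theta_x about the X axis and theta_y about the Y axis; the camera
    keeps looking at the origin (the model's center point), so its
    distance to the focal point stays r.
    """
    x0, y0, z0 = 0.0, 0.0, r
    # rotate about the X axis
    y1 = y0 * math.cos(theta_x) - z0 * math.sin(theta_x)
    z1 = y0 * math.sin(theta_x) + z0 * math.cos(theta_x)
    x1 = x0
    # rotate about the Y axis
    x2 = x1 * math.cos(theta_y) + z1 * math.sin(theta_y)
    z2 = -x1 * math.sin(theta_y) + z1 * math.cos(theta_y)
    return (x2, y1, z2)

def sample_poses(alpha, beta, gamma, delta, s, r):
    """Return the s*s camera positions of one projection-sampling sweep,
    stepping m, n = 1..s over theta_x in (alpha, beta] and theta_y in
    (gamma, delta]."""
    d_theta_x = (beta - alpha) / s
    d_theta_y = (delta - gamma) / s
    return [camera_position(alpha + m * d_theta_x, gamma + n * d_theta_y, r)
            for m in range(1, s + 1) for n in range(1, s + 1)]
```

Because both moves are pure rotations about axes through the origin, every sampled position stays on the rotation sphere, i.e. its distance to the focal point is exactly r.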
Preferably, the operation of acquiring the intraoral photograph and the target point set on each projection image corresponding to the intraoral photograph includes:
the following processing is respectively carried out on the intraoral picture and each projection image corresponding to the intraoral picture:
performing binarization processing and obtaining all feature points on the binarized image, the feature points forming a point set B = {b_1, b_2, b_3, …, b_g};
grouping the point set B to obtain a number of sub-point sets B_i = {b_i, b_{i+1}, b_{i+2}, …, b_j};
finding, in each sub-point set B_i, the point with the smallest y value as the feature point b′_n of that sub-point set; the feature points b′_n of all sub-point sets B_i form a point set B′;
finding, in the point set B′, the point located to the left of each mark point at the smallest distance from it, i.e. the inflection point at the adjacent position of the teeth; the inflection points at the adjacent positions of all teeth form the target point set.
Preferably, the operation of grouping the point set B to obtain the sub-point sets B_i = {b_i, b_{i+1}, b_{i+2}, …, b_j} comprises:
searching backward from the first point of the point set B and putting into the first group every point whose Euclidean distance to the first point is less than D, until a point at Euclidean distance greater than or equal to D is reached; this completes the division of the first group, and the points in it form the first sub-point set;
starting a new backward search from the point whose Euclidean distance to the first point of the previous group is greater than or equal to D, taking it as a new starting point and putting into its group every point whose Euclidean distance to the new starting point is less than D, until a point at distance greater than or equal to D is reached; this completes the division of that group, and the points in it form a sub-point set;
and so on, obtaining the sub-point sets B_i = {b_i, b_{i+1}, b_{i+2}, …, b_j};
where D denotes the grouping distance threshold, D = L/p, L denotes the horizontal length of the whole dentition in the binarized image, and p denotes the number of segments into which the teeth are horizontally divided, 30 < p < 40.
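The grouping procedure above can be sketched as follows; a minimal Python version with an illustrative function name, assuming the point set is already ordered as described.

```python
import math

def group_points(points, D):
    """Split the ordered point set B into sub-point sets B_i.

    Scanning forward from a start point, every point whose Euclidean
    distance to the current start point is < D joins the current group;
    the first point at distance >= D becomes the start of a new group.
    """
    groups = []
    current = []
    start = None
    for p in points:
        if start is None or math.dist(p, start) >= D:
            if current:
                groups.append(current)  # close the finished group
            start = p                   # this point opens the next group
            current = [p]
        else:
            current.append(p)
    if current:
        groups.append(current)
    return groups
```

With D = L/p and 30 < p < 40, each group roughly spans one tooth-width's worth of detected feature points.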
Preferably, the operation of finding out the point which is located at the left side of each mark point and has the smallest distance from the mark point in the point set B' includes:
for each marking point, the following processing is carried out:
traversing the x coordinate values of all the points in the point set B', finding all the points of which the x coordinate values are smaller than the x coordinate value of the mark point, and then calculating the distances between the points and the mark point;
finding out the minimum value in the distances, wherein the point corresponding to the minimum value is the point which is positioned at the left side of the marking point and has the minimum distance with the marking point.
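The search for the inflection point to the left of a mark point can be sketched as below (Python, hypothetical names; points are (x, y) tuples).

```python
import math

def nearest_left_point(mark, candidates):
    """Among the candidate points (the set B'), return the one located to
    the left of the mark point (smaller x coordinate) that is closest to
    it, or None if no candidate lies to its left."""
    left = [c for c in candidates if c[0] < mark[0]]
    if not left:
        return None
    return min(left, key=lambda c: math.dist(c, mark))
```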
The operation of calculating the optimal camera pose corresponding to the intraoral photograph from the target point sets comprises:
calculating an affine transformation matrix N of the intraoral photograph from the target point sets on the intraoral photograph and on each projection image, then applying the affine transformation to the intraoral photograph to obtain its affine-transformed image;
extracting the contour Contour(a) of the affine-transformed intraoral image, and extracting the contour Contour(b) of the binarized image of each projection image corresponding to the photograph;
calculating the maximum similarity measure from Contour(a) and Contour(b):
for the right-view image P_R, the maximum similarity measure is calculated with the following formula:
C_R = max_{θ_x, θ_y} k / Σ_i ‖p_i − p_i′‖;
for the frontal image P_F, the maximum similarity measure is calculated with the following formula:
C_F = max_{θ_x, θ_y} k / Σ_i ‖p_i − p_i′‖;
for the left-view image P_L, the maximum similarity measure is calculated with the following formula:
C_L = max_{θ_x, θ_y} k / Σ_i ‖p_i − p_i′‖;
where k is a constant; C_R, C_F, C_L are the maximum similarity measures for P_R, P_F, P_L, respectively; p_i is a point on the contour Contour(a) with coordinates (x_i, y_i); p_i′ is the point on Contour(b) closest to p_i, with coordinates (x_i′, y_i′); and ‖p_i − p_i′‖ = √((x_i − x_i′)² + (y_i − y_i′)²).
The camera rotation angles θ_x, θ_y corresponding to C_R give the optimal camera pose for the right-view image; those corresponding to C_F give the optimal camera pose for the frontal image; and those corresponding to C_L give the optimal camera pose for the left-view image.
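The pose search over the sampled projections can be sketched as follows. The exact similarity formula appears only as images in the original, so the inverse summed closest-point distance is an assumed form consistent with the variables defined above; function names are illustrative.

```python
import math

def similarity(contour_a, contour_b, k=1.0):
    """Similarity between the aligned photo contour Contour(a) and a
    projection contour Contour(b): k over the sum of closest-point
    distances (assumed form; the patent gives the formula as an image)."""
    total = 0.0
    for p in contour_a:
        total += min(math.dist(p, q) for q in contour_b)
    return k / total if total > 0 else float("inf")

def best_pose(contour_a, projections, k=1.0):
    """Pick the camera angles whose projection contour maximises the
    similarity; `projections` maps (theta_x, theta_y) -> contour, i.e.
    the s*s sampled projection images of one view."""
    return max(projections,
               key=lambda pose: similarity(contour_a, projections[pose], k))
```

The maximising (θ_x, θ_y) pair is then the optimal camera pose for that view.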
The operation of the step (3) comprises:
(31) initial projection texture mapping: dividing the three-dimensional dental model into 3 areas: a right area, a middle area, and a left area; mapping the right-view image onto the right area of the three-dimensional dental model using its optimal camera pose, the frontal image onto the middle area using its optimal camera pose, and the left-view image onto the left area using its optimal camera pose, to obtain the initial projection texture mapping; in the initial projection texture mapping, texture seams lie at the junction of the right and middle areas and at the junction of the middle and left areas;
(32) finding texture seams: finding respective texture seams from the initial projected texture map obtained in step (31);
(33) and fusing images on two sides of each texture joint to eliminate obvious joints on the three-dimensional dental model so as to obtain a realistic dentition model.
The operation of step (33) comprises:
perspective-projecting the texture seam L on the three-dimensional dental model onto each of the two photographs forming the seam, obtaining the projections L′ and L″;
taking the top point of L′ as reference, horizontally shifting the other points on L′, together with q pixels on each side of every point, so that they are vertically aligned with the top point of L′, obtaining an image G_1;
taking the top point of L″ as reference, horizontally shifting the other points on L″, together with q pixels on each side of every point, so that they are vertically aligned with the top point of L″, obtaining an image G_2;
fusing image G_1 and image G_2 to obtain an image G;
inserting image G back into the two photographs forming the texture seam, at the positions of G_1 and G_2, to obtain two updated photographs;
and mapping the two updated photographs onto the corresponding areas of the three-dimensional dental model using their respective optimal camera poses.
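The fusion of G_1 and G_2 can be sketched as below. The patent does not spell out the fusion operator at this point (a mask image appears in FIG. 9), so a simple horizontal linear cross-fade over the straightened 2q-wide strips is used as a stand-in; names are illustrative.

```python
def fuse_strips(g1, g2):
    """Cross-fade two equally sized greyscale strips (lists of pixel rows)
    taken from the two photographs on either side of a texture seam: the
    weight slides linearly from strip G1 at the left edge to strip G2 at
    the right edge, so the blended strip G meets both sides smoothly."""
    assert len(g1) == len(g2) and len(g1[0]) == len(g2[0])
    width = len(g1[0])
    fused = []
    for row1, row2 in zip(g1, g2):
        row = []
        for j, (a, b) in enumerate(zip(row1, row2)):
            t = j / (width - 1) if width > 1 else 0.5  # 0 at left, 1 at right
            row.append((1 - t) * a + t * b)
        fused.append(row)
    return fused
```

Because the strip equals G_1 at its left edge and G_2 at its right edge, re-inserting G into both photographs leaves no intensity jump at the seam.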
Compared with the prior art, the invention has the following beneficial effects: it can semi-automatically construct a color dentition model with real image texture using only a small amount of simple manual operation, and the texture image on the dentition model shows no obvious distortion, dislocation, or seams, so a good simulation effect is obtained; in dental medical auxiliary systems such as aesthetic tooth restoration, the method improves the simulation effect of the dental model, reduces manual intervention, and improves communication efficiency and mutual understanding among doctors, patients, and technicians.
Drawings
FIG. 1 is a block diagram of the steps of the method of the present invention.
Fig. 2 is a block diagram of the steps of the multi-feature based 2D/3D registration method of the present invention.
FIG. 3 is a schematic representation of projection sampling in the method of the present invention.
Fig. 4(a) is a projection image;
FIG. 4(b) is a binarized image of the projected image;
FIG. 4(c) is the intraoral right-view image;
FIG. 4(d) is the binarized image corresponding to the intraoral right-view image; in the figure, solid dots are mark points, hollow dots are points detected by ORB, and solid triangles are the target points found.
FIG. 5 is a diagram illustrating contour information of two binarized images aligned according to feature points
FIG. 6 is a block diagram of the process of texture fusion in the method of the present invention.
FIG. 7 is a schematic diagram of three mapping regions corresponding to the intraoral shots obtained by rapid segmentation of the three-dimensional dental model.
FIG. 8(a) shows a seam L on a three-dimensional dental model.
Fig. 8(b) is the projection L′ of the three-dimensional seam L on the intraoral right-view image.
Fig. 8(c) is the projection L″ of the three-dimensional seam L on the intraoral frontal image.
FIG. 9(a) is image G_1.
FIG. 9(b) is image G_2.
FIG. 9(c) is the mask image.
FIG. 10(a) left side view of initial texture mapping results for a first male volunteer in an embodiment of the present invention
FIG. 10(b) an elevation view of the initial texture mapping result of the first male volunteer in an embodiment of the present invention
FIG. 10(c) Right side view of initial texture mapping results for a first male volunteer in an embodiment of the present invention
FIG. 11(a) left side view of texture mapping results for a first male volunteer in an embodiment of the present invention
FIG. 11(b) is an elevation view of the texture mapping result for the first male volunteer in an embodiment of the present invention
FIG. 11(c) Right side view of texture mapping results of the first male volunteer in an embodiment of the present invention
FIG. 12 shows the experimental results in the examples of the present invention.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings:
the invention provides a method for constructing a realistic dentition model based on tooth multi-features based on clear inflection points and local contour features found on a dental crown, accurately maps a plurality of intraoral photographs onto a 3D dental model based on a 2D/3D registration method and texture fusion of points and contour features on teeth, obtains a color dentition model with real image textures, and improves the simulation effect of the dentition model in an oral medical auxiliary system.
The basic idea of the invention is to find the optimal camera pose corresponding to the shot intraoral photograph, and then map the intraoral photograph to the three-dimensional dental model by projection texture mapping.
The basic principle, whether for the projection-based 2D/3D registration method or the projection texture mapping technique, is the classical pinhole camera model (ref. Sturm P. Pinhole Camera Model [M]. 2014). In homogeneous coordinates, the physical coordinates of the object take the form [x, y, z, 1]ᵀ and the coordinates of the corresponding point on the projection image take the form [u, v, 1]ᵀ, so the camera matrix P, which maps the object to the image, is a 3 × 4 matrix. The camera matrix can thus be written in the form:
P = [M | −MC]   formula (1)
where M is an invertible 3 × 3 matrix and C is a column vector giving the position of the camera in the world coordinate system. The projection matrix can be decomposed as:
P = K[R | −RC] = K[R | T]   formula (2)
Both the 2D/3D registration process and the projection texture mapping process assume that the camera's internal parameters K and external parameter T are fixed, and that only the rotation matrix R of the external parameters changes. Camera rotation involves 3 degrees of freedom, i.e. rotation about the X, Y, and Z axes; here the rotations about the X and Y axes in the world coordinate system are varied. As for the remaining rotation about the Z axis, the scaled color image is rotated during 2D image alignment so that the projection is aligned with the color photograph. The problem to be solved thus becomes a single-objective optimization problem:
(θ_x*, θ_y*) = argmax_{θ_x, θ_y} F(θ_x, θ_y)   formula (3)
where θ_x is the rotation angle of the camera about the X axis in the world coordinate system and θ_y is the rotation angle about the Y axis; F(θ_x, θ_y) is the similarity measure between the intraoral photograph and the sampled projection image obtained with the camera rotated by θ_x about the X axis and θ_y about the Y axis.
Thus, the optimal camera pose is obtained by a single-objective optimization algorithm, the texture mapping coordinates (u, v) are computed from the perspective projection transformation matrix corresponding to the optimal camera pose, and finally a projection texture mapping aligned with the intraoral photograph is achieved.
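The single-objective optimization above can be sketched as an exhaustive grid search over the two rotation angles. The sketch below is illustrative only; the similarity function, step size and search bounds are hypothetical stand-ins for the contour-based measure developed later:

```python
def grid_search_pose(similarity, alpha, beta, phi, omega, step=1.0):
    """Exhaustively search camera rotations theta_x in [alpha, beta] and
    theta_y in [phi, omega] (degrees), returning the pose that maximizes
    the similarity measure F(theta_x, theta_y)."""
    best = (None, None, float("-inf"))
    tx = alpha
    while tx <= beta:
        ty = phi
        while ty <= omega:
            score = similarity(tx, ty)
            if score > best[2]:
                best = (tx, ty, score)
            ty += step
        tx += step
    return best

# Toy similarity peaked at (3, 12), mimicking the example pose of fig. 3.
f = lambda tx, ty: -((tx - 3) ** 2 + (ty - 12) ** 2)
print(grid_search_pose(f, 0, 10, 0, 40)[:2])  # -> (3.0, 12.0)
```

In practice F would be evaluated by projecting the dental model at (θx, θy) and comparing contours, which is far more expensive than this toy function.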
Projection texture mapping is a well-established technique and is only described briefly here: based on the pinhole camera model, the rotation matrix corresponding to the projection texture mapping can be computed from the optimal camera pose, with the mathematical formulas:
K_n = [[0, -n_z, n_y], [n_z, 0, -n_x], [-n_y, n_x, 0]]    formula (4)
R = I + sin θ · K_n + (1 - cos θ) · K_n²    formula (5)
where n = (n_x, n_y, n_z) is the rotation axis, θ is the rotation angle, and K_n is the skew-symmetric cross-product matrix of n (Rodrigues' rotation formula).
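As a concrete check, the axis-angle rotation just described can be evaluated numerically. The sketch below implements Rodrigues' rotation formula with NumPy; it is an illustration, not the invention's code:

```python
import numpy as np

def axis_angle_to_matrix(n, theta):
    """Rodrigues' formula: rotation matrix for axis n (unit vector) and
    angle theta (radians): R = I + sin(theta)*K + (1 - cos(theta))*K@K,
    where K is the skew-symmetric cross-product matrix of n."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    K = np.array([[0, -n[2], n[1]],
                  [n[2], 0, -n[0]],
                  [-n[1], n[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

# A 90-degree rotation about the Y axis maps the Z axis onto the X axis.
R = axis_angle_to_matrix([0, 1, 0], np.pi / 2)
print(np.round(R @ np.array([0, 0, 1]), 6))
```

OpenCV's `cv2.Rodrigues` performs the same conversion between the axis-angle vector and the rotation matrix.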
The method for constructing the realistic dentition model provided by the invention is shown in fig. 1, and comprises the following steps:
(1) acquisition of dental optical scan data and intraoral color photograph data: scan a dental plaster model with an optical scanner to obtain a dentition triangular mesh model (i.e. the dental optical scan data), and take 3 intraoral photographs of the patient with an ordinary single-lens reflex camera; the intraoral photographs are all color photographs (i.e. the intraoral color photograph data);
(2) multi-feature-based 2D/3D registration: according to the marked points on the tooth images and the local contour features, compute the optimal camera pose corresponding to each of the 3 intraoral photographs with a multi-feature-based 2D/3D registration method, which specifically comprises:
(21) projection sampling;
(22) 2D image alignment based on tooth feature points;
(23) calculating the optimal camera pose based on the local crown contour.
(3) Texture fusion: find the texture seams and eliminate the obvious seams on the three-dimensional dental model, specifically comprising:
(31) initial projection texture mapping: performing initial projection texture mapping according to the optimal camera attitude obtained in the step (2);
(32) finding texture seams: searching texture seams from the initial projection texture mapping obtained in the step (31);
(33) texture fusion based on image pyramids: and (3) fusing the images on the two sides of the texture joint by using an image fusion method based on an image pyramid, eliminating the obvious joint on the three-dimensional dental model, and obtaining the realistic dentition model.
Through the steps, the realistic dentition model without distortion, dislocation and obvious seams can be constructed.
The specific implementation of each step is as follows:
the step (1) is specifically as follows:
a dental plaster model of the patient is first obtained with existing techniques, and the plaster model is then scanned to obtain a dentition triangular mesh model (hereinafter the three-dimensional dental model); this triangular mesh model is the dental optical scan data.
The patient's oral cavity is photographed from the right side, the front and the left side with an ordinary single-lens reflex camera (or other photographic equipment), yielding 3 intraoral photographs: the right-side image P_R, the front image P_F and the left-side image P_L. These 3 images capture the labial and buccal surface textures of the patient's crowns fairly completely. The reason for taking 3 intraoral photographs is as follows: first, a single intraoral photograph cannot capture complete dentition texture information, because the crowns clearly occlude one another; second, if only the right-side and left-side images were taken, the texture of the upper central and lateral incisors would be incomplete in the texture mapping. At least 3 intraoral photographs are therefore needed; in actual use, more than 3 may be taken according to actual demand.
The step (2) is specifically as follows:
the principle of step (2) is as follows: the main research object of the invention is adult dentition, and most adult dentitions share similar structural features, for example each person's teeth are arranged in a similar way and teeth of the same name have similar size and shape. According to these ubiquitous features of adult teeth, the method does not need a global search; a local rotation search within a certain range suffices, which avoids the cost of a global search. Rotation search ranges are therefore set for θx and θy: θx ∈ [α, β], θy ∈ [φ, ω]. Within these ranges, the projection image with the maximum similarity measure is found by the camera rotation search, yielding the optimal camera pose θx, θy for the projection texture mapping.
The key of the 2D/3D registration is to calculate the similarity measure of the projection images of the two-dimensional intraoral photographs and the three-dimensional dental model, search the projection image most similar to the intraoral photographs by rotating the camera, and calculate the projection pose of the camera corresponding to the intraoral photographs. Specifically, firstly, manually clicking a mark 3 on a three-dimensional dental model and a single intraoral picture to obtain a binary image with clear cusp contour characteristics by binarizing the intraoral picture and a projection image, then calculating an obvious inflection point on the dental crown image by an ORB (organized FAST and retrieved BRIEF) ORB rapid characteristic point extraction algorithm (please refer to references "Xuebing B, Jin C, Xiaokai M U, et al. And then searching adjacent inflection points on the two binary images for the manual marking points according to the 3 pairs of inflection points, and carrying out affine transformation on the intraoral illumination based on the 3 pairs of inflection points so as to align the intraoral illumination with the projection image. After alignment, the outer surrounding contour, i.e. the tooth tip contour, of the two aligned images is extracted by a classical contour extraction algorithm (please refer to the document "Suzuki S, Be K. topological structural analysis of differentiated binding images by bottle cutter following [ J ]. Computer Vision Graphics and Image Processing,1985,30(1): 32-46"), and the Euclidean distance D between the two contour point sets is calculated, and then the similarity measure C of the two images is calculated. And the camera respectively rotates m times around the X axis and n times around the Y axis, a projection image most similar to the intraoral photographs is searched, and the optimal camera projection posture corresponding to a single intraoral photograph is obtained. 
This procedure is repeated for each of the 3 intraoral photographs.
The specific implementation mode of the step (2) comprises the following steps:
(21) projection sampling;
in a typical three-dimensional processing system, each three-dimensional object has its own center point, whose coordinates represent the position of the object; this is a default attribute of the three-dimensional model, and changing the position of the center point changes the position of the model. Projection sampling obtains the projections of the three-dimensional dental model on the view plane at camera rotation angles θx and θy, in preparation for computing the optimal camera pose. In the projection sampling step, the three-dimensional dental model is first placed at the origin (0, 0, 0) of the world coordinate system, i.e. its center point is made to coincide with the origin (0, 0, 0); the focal point of the camera is then set at the center point of the three-dimensional dental model, and a number of sampled images are obtained by changing the rotation angles of the camera about the X and Y axes.
As shown in fig. 3, in the world coordinate system the center point of the three-dimensional dental model 1 and the focal point of the camera both lie at the origin (0, 0, 0); the Z axis points straight ahead of the three-dimensional dental model, and the dentition on the model is symmetric about the Z axis, i.e. the two central incisors lie on either side of the Z axis. The original position of the camera is directly in front of the three-dimensional dental model 1; the distance between the camera and the model is user-defined, as long as a complete projection of the dental model is obtained in the view window.
Before projection sampling, the original positions of the cameras are: the plane where the camera is located is perpendicular to the Z axis, the projection direction of the camera is located on the Z axis, a spherical surface is drawn by taking the origin (0,0,0) of a coordinate system as the center of a circle and the distance from the intersection point of the camera and the Z axis to the origin as the radius, and the spherical surface is taken as a rotating spherical surface. During the rotation process of the projection sampling, the projection direction of the camera always points to the focal point of the camera, namely the central point of the three-dimensional dental model 1, and the distance from the camera to the focal point is kept unchanged (namely the internal parameter of the camera is unchanged). Both rotation about the X-axis and rotation about the Y-axis move the camera over the rotating sphere, so that the focal point of the camera can always remain at the origin, regardless of where the camera is moved, since the camera is always on the rotating sphere. Rotation of the camera about the X-axis corresponds to adjustment of the pitch angle, and rotation about the Y-axis corresponds to adjustment of the left-right angle.
Moreover, the same camera is used when the three left, right and front images are respectively collected, and for different images, the camera is rotated from the original position to the initial position corresponding to each image, the initial position comprises an initial angle of rotation around the X axis and an initial angle of rotation around the Y axis, and for different images, the initial angle of rotation around the X axis of the camera is the same, but the initial angle of rotation around the Y axis is different, so that a plurality of projection images corresponding to the left image, the front image and the right image can be respectively collected.
The camera is rotated about the X axis by θx and about the Y axis by θy to obtain a projection image 2; the projection image of fig. 3 corresponds to rotation angles θx = 3, θy = 12 (in degrees). According to the possible situations of the patient's intraoral photographs, the camera is rotated by θx about the X axis and by θy about the Y axis, where θx ∈ [α, β] and θy ∈ [φ, ω]; these intervals contain all camera poses possible at the time the intraoral photographs were captured. For each rotation angle, the corresponding perspective projection transformation matrix is calculated with formula (2), and the three-dimensional dental model 1 is transformed by perspective projection onto the camera view plane to obtain a projection image 2, completing one projection sample.
The camera is rotated by θx about the X axis and by θy about the Y axis as follows: the camera at its original position is first moved along the intersection line of the rotation sphere with the YZ plane until it makes an angle θx with the XZ plane, and then moved along the intersection line of the rotation sphere with the plane through the X axis making an angle θx with the XZ plane until it makes an angle θy with the YZ plane; or the camera at its original position is first moved along the intersection line of the rotation sphere with the XZ plane until it makes an angle θy with the YZ plane, and then moved along the intersection line of the rotation sphere with the plane through the Y axis making an angle θy with the YZ plane until it makes an angle θx with the XZ plane.
Specifically, a fixed step size θ (unit is degree) and period s are set for the camera rotation, and the rotation angle of the camera is changed by changing the number of rotations (i.e., m, n mentioned later). The number of rotations of the camera about the X-axis is set to m, and the number of rotations about the Y-axis is set to n. The period s is a natural number and is a constant representing the maximum number of rotations, i.e., the total number of rotations. θ × s denotes the two ranges mentioned above
Figure BDA0002150041230000122
Length of (c):
Figure BDA0002150041230000123
Theoretically, the smaller θ and the larger s, the higher the calculation accuracy. In the experiments below, θ = 1 and s = 40, i.e. 1 degree per rotation and 40 rotations in total, i.e. a 40-degree range.
In practical use, the camera can rotate around the X axis for a first angle and then rotate around the Y axis for all angles, namely, the camera is located on the X axis and forms theta with the XZ plane (the plane formed by the X axis and the Z axis) x On the intersecting line of the plane of the angle and the rotating spherical surface
Figure BDA0002150041230000124
Move to ω, each move θ; then rotate a second angle around the X-axis (the second angle is different from the first angle by theta), rotate all the angles around the Y-axis, and so on. Or the camera can rotate around the Y axis by an angle and then rotate around the X axis by all angles, namely the camera is positioned on the Y axis and forms theta with the YZ plane (the plane formed by the Y axis and the Z axis) y Moving from alpha to beta on an intersecting line of the plane of the angle and the rotating spherical surface, and moving by theta each time; and then rotate a second angle (the second angle is different from the first angle by theta) around the Y axis, then rotate all the angles around the X axis, and so on. The steps shown in fig. 2 are given by way of example in the first way, and if the second way is adopted, only the X-axis, the Y-axis and the related angles and rotation times need to be exchanged. The projection image acquisition system can also rotate around the X axis by an angle, rotate around the Y axis by an angle, rotate around the X axis by an angle and rotate around the Y axis by an angle, and the projection images of s different angles can be obtained by ensuring that the projection image acquisition system rotates around the X axis for s times and rotates around the Y axis for s times. In either way, the position of the camera corresponding to each projection image is the intersection of the following two intersecting lines: the Y axis is positioned and forms an included angle theta with the YZ plane y The intersection line of the plane of the angle and the rotating spherical surface, the X axis and the XZ plane form an included angle theta x The plane of the angle intersects the spherical surface of revolution.
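Whichever traversal order is used, the search visits the same grid of poses. A minimal sketch of the schedule, with placeholder initial angles α and φ:

```python
def rotation_schedule(alpha, phi, theta, s):
    """Enumerate the s x s camera poses visited by the rotation search:
    theta_x = alpha + m*theta, theta_y = phi + n*theta for m, n in 1..s.
    (X-outer/Y-inner order; the Y-outer/X-inner order visits the same
    set of poses, only in a different sequence.)"""
    return [(alpha + m * theta, phi + n * theta)
            for m in range(1, s + 1)
            for n in range(1, s + 1)]

# Example with the experimental settings theta = 1 degree, s = 40;
# alpha = phi = -20 are invented initial angles for illustration.
poses = rotation_schedule(alpha=-20, phi=-20, theta=1, s=40)
print(len(poses))  # 1600 distinct poses
```

The alternating X/Y order mentioned above walks the diagonal of this grid instead of the full set, yielding s poses rather than s × s.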
Different rotation search ranges are set for the camera poses corresponding to each of the 3 intraoral photographs under study. For the right-side image P_R among the 3 intraoral photographs, the rotation search range of the camera is:
θx ∈ [α, β], θy ∈ [φ, ω],
where α is the initial angle of rotation of the camera about the X axis, and φ is the initial angle of rotation about the Y axis, when acquiring the projection images corresponding to the right-side image. In actual use, the camera at its original position may first be moved along the intersection line of the rotation sphere with the XZ plane to the angle φ with the YZ plane, and then moved along the intersection line of the rotation sphere with the plane through the Y axis making an angle φ with the YZ plane to the angle α with the XZ plane; or the camera may first be moved along the intersection line of the rotation sphere with the YZ plane to the angle α with the XZ plane, and then moved along the intersection line of the rotation sphere with the plane through the X axis making an angle α with the XZ plane to the angle φ with the YZ plane. For the other images below, the camera is moved from its original position to its initial position in a similar way.
The rotation angles of the camera are:
θx = α + m_R · θ, where m_R ∈ (0, s];
θy = φ + n_R · θ, where n_R ∈ (0, s];    formula (6)
where m_R is the number of rotations of the camera about the X axis, and n_R the number of rotations about the Y axis, when acquiring the projection images corresponding to the right-side image; that is, one projection image is obtained per rotation angle, and after running through the rotations about the X and Y axes, the projection images corresponding to the right-side image are obtained.
For the intraoral front image P_F, the rotation search range of the camera is:
θx ∈ [α, β], θy ∈ [φ, ω],
where α is the initial angle of rotation of the camera about the X axis, and φ is the initial angle of rotation about the Y axis, when acquiring the projection images corresponding to the front image.
The rotation angles of the camera are:
θx = α + m_F · θ, where m_F ∈ (0, s];
θy = φ + n_F · θ, where n_F ∈ (0, s];    formula (7)
where m_F is the number of rotations of the camera about the X axis, and n_F the number of rotations about the Y axis, when acquiring the projection images corresponding to the front image; finally s × s projection images at different angles corresponding to the front image are obtained.
When the front image is shot, the actual camera may not face the incisors exactly; it may be slightly to the left or slightly to the right, so all possibilities of the actual shot must be considered when setting the rotation search range of the three-dimensional virtual camera. If the start of the rotation search range were set to 0 while the actual photograph was taken from slightly left of the incisors, the situations to the left of the start position could never be searched. Therefore, by centering the search on the ideal camera position for acquiring the front image and allowing a certain range above, below, left and right of it, all possible cases can be covered.
For the intraoral left-side image P_L, the rotation search range of the camera is:
θx ∈ [α, β], θy ∈ [φ, ω],
where α is the initial angle of rotation of the camera about the X axis, and φ is the initial angle of rotation about the Y axis, when acquiring the projection images corresponding to the left-side image.
The rotation angles of the camera are:
θx = α + m_L · θ, where m_L ∈ (0, s];
θy = φ + n_L · θ, where n_L ∈ (0, s];    formula (8)
where m_L is the number of rotations of the camera about the X axis, and n_L the number of rotations about the Y axis, when acquiring the projection images corresponding to the left-side image; finally s × s projection images at different angles corresponding to the left-side image are obtained.
Substituting the experimental values θ = 1 and s = 40 into formulas (6) to (8) yields all the rotation angles of the projection sampling in the experiment.
(22) 2D image alignment based on tooth feature points (the tooth feature points are a collective name for all feature points found on the binary image by the ORB fast feature point detection algorithm and the inflection points at the tooth abutments found from the marker points)
The marked tooth pairs are mainly used to find the inflection points (i.e. target points) at the abutments between teeth on the front-view contour of the labial and buccal surfaces of the patient's crowns, as shown in figs. 4(a) to 4(d) (the original of fig. 4(c) is a color photograph; the color was removed for the drawings of the specification). First, tooth pairs are marked on the intraoral photograph and on the three-dimensional dental model (i.e. marker points are placed on the same teeth on both), as shown by the three solid round dots in figs. 4(a) and 4(c), giving two point sets A1 and A2 of 3 points each (the marker points are drawn as solid dots in figs. 4(b) and 4(d)). Three points are used because an affine transformation of a 2D image requires at least 3 points; more points may be used as needed.
After all marker points are placed, the intraoral photograph is binarized (in the invention the threshold method of OpenCV is used, with threshold type CV_THRESH_BINARY), i.e. the unwanted texture colors are removed. Then the set B = {b1, b2, b3, …, bg} of all feature points on the binarized image (shown as hollow round dots in figs. 4(b) and 4(d)) is found by the ORB fast feature point detection algorithm. Because there are many inflection points, they can be simply classified by the Euclidean distance D_E between points: neighboring points are placed in the same subset, giving sub-point sets B_i = {b_i, b_i+1, b_i+2, …, b_j}. By the characteristics of the local cusp contour, the smaller the y coordinate of a point, the closer it is to a target point. In all image processing of the invention, the origin of the image coordinate system is at the upper-left corner of the image; the point at the lower-left corner has minimum x and maximum y, and the point at the upper-right corner has maximum x and minimum y. As can be seen from fig. 4(d), the inflection point sought at the abutment between tooth and cusp contour is necessarily the point with minimum y value within its point set (in fig. 4(d), the solid triangular dots are the inflection points at the tooth abutments and the hollow round dots are the cusp points). The point with minimum y value in each point set B_i can therefore be taken as the feature point b′_n of the pixel region where that set lies; these points form the point set B′.
Finally, by the minimum-distance search of a k-d tree, the point in B′ that lies to the left of each marker point and is closest to it is found; in this way the three marker points yield the inflection points at the tooth abutments on the front-view contour of the labial and buccal crown surfaces, namely point1, point2 and point3. These three points form the target point set C1 on the intraoral photograph; in fig. 4(d) the three target points are marked with solid triangular dots.
The sub-point sets B_i = {b_i, b_i+1, b_i+2, …, b_j} are obtained as follows:
all points in B are divided into groups; the length and number of the groups are not fixed but determined by the points themselves. Starting from the first point in B, the search proceeds backwards, keeping points whose Euclidean distance is less than D in one group; when a distance greater than or equal to D is encountered, the current group is closed and the current point (the first point whose Euclidean distance from the previous group is greater than or equal to D) is taken as a new starting point, finally yielding several sub-point sets B_i = {b_i, b_i+1, b_i+2, …, b_j}. Here D represents the removed boundary length, determined from the inter-tooth distance in the image; by default D = L/p, where L is the horizontal length of the entire dentition in the binarized image and p is the number of segments into which the dentition is divided horizontally, p > 30. In the invention L is determined by the left and right boundaries of the white region of the binarized image, and p = 35.
After the point set B is divided into sub-point sets B_i, the point b′_n with the minimum y coordinate is found in each B_i as the representative point of that sub-point set B_i; this filters out a large portion of the irrelevant points. The representative points b′_n of the sub-point sets form the point set B′.
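A minimal sketch of this grouping-and-representative step; the distance threshold, the sorting of the points and the pixel coordinates are illustrative assumptions:

```python
import math

def group_by_distance(points, D):
    """Split feature points into sub-sets B_i: walk the points in x order
    and start a new group whenever the Euclidean distance from the
    previous point is >= D (the removed boundary length)."""
    points = sorted(points)
    groups = [[points[0]]]
    for prev, cur in zip(points, points[1:]):
        if math.dist(prev, cur) >= D:
            groups.append([cur])
        else:
            groups[-1].append(cur)
    return groups

def representatives(groups):
    """Pick from each sub-set the point with the smallest y value: the
    image origin is at the top-left, so the smallest y is the highest
    point, i.e. the candidate inflection point at the tooth abutment."""
    return [min(g, key=lambda p: p[1]) for g in groups]

# Two clusters of corner points around neighboring cusps (made-up pixels):
B = [(10, 50), (12, 40), (14, 52), (60, 55), (63, 45)]
print(representatives(group_by_distance(B, D=20)))  # -> [(12, 40), (63, 45)]
```

The sorting step is an assumption for the sketch; ORB returns keypoints in an arbitrary order, whereas the grouping walk needs spatially consecutive points.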
The point in B′ that lies to the left of a marker point and is closest to it is found as follows:
traverse the x coordinate of every point in B′ and find all points whose x coordinate is smaller than that of the marker point; then compute the distance from each such point to the marker point and take the minimum; the point attaining this minimum is the point to the left of the marker point closest to it. The three points so found form the target point set C1 on the intraoral photograph.
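The left-side nearest-point search can be sketched as follows; a linear scan replaces the k-d tree for clarity, and the coordinates are made up:

```python
def nearest_left_point(mark, candidates):
    """Among candidate points with an x coordinate smaller than the mark
    point's (i.e. to its left), return the one closest to the mark point.
    A k-d tree (e.g. scipy.spatial.cKDTree) accelerates this for large
    point sets; a linear scan is shown here for clarity."""
    left = [p for p in candidates if p[0] < mark[0]]
    if not left:
        return None
    return min(left, key=lambda p: (p[0] - mark[0]) ** 2 + (p[1] - mark[1]) ** 2)

B_prime = [(12, 40), (63, 45), (110, 42)]  # representative points (made up)
mark = (70, 44)                            # a manually clicked mark point
print(nearest_left_point(mark, B_prime))   # -> (63, 45)
```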
After the target points on the intraoral photograph are found, the target points on the projection image (i.e. the projection image obtained by projection sampling in step (21)) are found. Note that the marker points on the projection image are obtained from the marker points on the 3D dental model by perspective projection transformation. The target point search method above then yields point1′, point2′ and point3′, which form the target point set C2 on the projection image; these target points are marked with solid triangular dots in fig. 4(b).
From the target point sets C1 and C2 found on the intraoral photograph and the projection image, the affine transformation matrix N is calculated and the intraoral photograph is transformed affinely (calculating an affine transformation matrix from three point pairs and transforming an image are common prior-art techniques, not described further here; the affine transformation can be realized with the estimateRigidTransform and warpAffine functions of OpenCV). After the affine transformation, the intraoral photograph is aligned with the projection image. The affine transformation only transforms the image in the 3 degrees of freedom of rotation, scaling and translation, leaving the rest unchanged; it is a common image transformation method and is not described further here.
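For illustration, the exact 3-point affine matrix N can also be solved directly with NumPy; this is a hedged stand-in for the OpenCV calls, and the point coordinates are invented:

```python
import numpy as np

def affine_from_3_points(src, dst):
    """Solve the 2x3 affine matrix N with N @ [x, y, 1]^T = [x', y']^T
    for three point correspondences (the exact-fit counterpart of
    OpenCV's cv2.getAffineTransform)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = np.hstack([src, np.ones((3, 1))])  # 3x3 system: rows [x, y, 1]
    # Solve A @ N^T = dst for the 3x2 matrix N^T, then transpose.
    return np.linalg.solve(A, dst).T

C1 = [(10, 10), (50, 12), (90, 11)]  # target points on the intraoral photo
C2 = [(12, 14), (52, 16), (92, 15)]  # target points on the projection image
N = affine_from_3_points(C1, C2)
print(np.round(N @ np.array([10, 10, 1]), 6))  # maps C1[0] onto C2[0]
```

With more than 3 point pairs, a least-squares fit (e.g. `np.linalg.lstsq` or OpenCV's `cv2.estimateAffine2D`) would replace the exact solve.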
(23) Calculating the optimal camera pose based on the local crown contour
Point1, point2 and point3 in fig. 5 are the target points found on the binary image corresponding to the marker points; contour(a) is the contour extracted (by the aforementioned classical contour extraction algorithm) from the affinely transformed version of the binarized image in fig. 4(d); contour(b) is the contour extracted (by the same algorithm) from the binarized image in fig. 4(b); d_i and d_j in fig. 5 are two of the nearest distances from points on contour(a) to contour(b) (the method of the invention requires the nearest distance from each point on contour(a) to contour(b)). The cusp contour is used because the two images from different sources differ greatly in characteristics such as illumination conditions, color, contrast and brightness; for example, the projection image has completely different colors, highlights and shadows from the intraoral photograph, which seriously affects the extraction of image contours. Nevertheless, extensive experimental comparison shows that the cusp contours of the dentition can be extracted more easily and accurately in both images than feature lines such as the gingival margin line or the crown abutment line.
Next, for the local contour contour(a) of the intraoral right-side image and the contour contour(b) of the projection image obtained with the camera rotated by θx about the X axis and by θy about the Y axis in the world coordinate system, the mean Euclidean distance between the two contours is calculated:
D_{m,n} = (1/(f - e + 1)) · Σ_{i=e}^{f} √((x_i - x_i′)² + (y_i - y_i′)²)    formula (9)
where e and f are the starting and ending sequence numbers of the valid points on contour(a). (Referring to fig. 5: for the right-side image, the part where the two contours differ most lies in the left half of the image, so the left end of the contour is clipped to remove a possibly interfering contour at the leftmost end; the clipped proportion is set to 1/30 of the whole contour. By this rule, e is the sequence number of the point at L/30 in the horizontal direction of the contour, and f is the sequence number of the point at L/2.) p_i is a point on contour(a) with coordinates (x_i, y_i), and p_i′ is the point on contour(b) closest to p_i, with coordinates (x_i′, y_i′). From the mean Euclidean distance D_{m,n}, the similarity measure between the projection image at rotation angles θx, θy and the intraoral photograph, i.e. F(θx, θy) in formula (3), can be calculated. Since different numbers of rotations correspond to different angles, the rotation angles in F(θx, θy) are expressed by the rotation counts m and n; the similarity measure formula for the right-side image P_R can then be written as:
F(m_R, n_R) = k / D_{m_R,n_R}    formula (10)
The similarity measure formula corresponding to the front image P_F can be written as:
F(m_F, n_F) = k / D_{m_F,n_F}    formula (11)
The similarity measure formula corresponding to the left-side image P_L can be written as:
F(m_L, n_L) = k / D_{m_L,n_L}    formula (12)
Substituting formulas (10)-(12) into formula (3), the calculation formulas for the maximum similarity measures corresponding to the intraoral photographs P_R, P_F and P_L can be written as:
C_R = max_{m_R, n_R ∈ (0, s]} k / D_{m_R,n_R}    formula (13)
C_F = max_{m_F, n_F ∈ (0, s]} k / D_{m_F,n_F}    formula (14)
C_L = max_{m_L, n_L ∈ (0, s]} k / D_{m_L,n_L}    formula (15)
where k is a constant.
Here C_R, C_F and C_L are the maximum similarity measures corresponding to the intraoral photographs P_R, P_F and P_L respectively.
The camera rotation angle corresponding to C_R is the optimal camera pose for the right side bit image, the rotation angle corresponding to C_F is the optimal camera pose for the positive image, and the rotation angle corresponding to C_L is the optimal camera pose for the left side image.
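The nearest-neighbour contour distance D above can be sketched as follows; a minimal NumPy illustration in which the function name, the brute-force pairwise search, and the (N, 2) point layout are assumptions of this sketch, not the patent's implementation:

```python
import numpy as np

def mean_contour_distance(contour_a, contour_b, e, f):
    """Mean Euclidean distance from the valid points of contour A
    (indices e..f inclusive) to their nearest neighbours on contour B.
    Both contours are (N, 2) arrays of (x, y) points."""
    a = np.asarray(contour_a, dtype=float)[e:f + 1]   # valid segment of contour A
    b = np.asarray(contour_b, dtype=float)
    # pairwise distances: for each point of A, the distance to every point of B
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1).mean()                       # nearest neighbour, then mean
```

For two parallel horizontal contours one unit apart, every nearest-neighbour distance is 1, so the mean is 1.0.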
The flow in fig. 2 is as follows:
S1, set m = 1 and n = 1, and mark tooth pairs on the tooth triangular mesh model and a single intraoral color photograph;
S2, find the feature points on the intraoral photograph: all feature points on the binarized intraoral photograph are found with the ORB fast feature point detection algorithm;
S3, determine whether m <= s holds; if yes, go to step S4, if no, go to step S13;
S4, rotate the camera by θ_x around the X axis;
S5, determine whether n <= s holds; if yes, go to step S6, if no, go to step S11;
S6, rotate the camera by θ_y around the Y axis;
S7, find the feature points on the projection image;
S8, align the intraoral photograph with the projection image;
S9, calculate the similarity measure C of the two images from the contours;
S10, n = n + 1, then return to step S5;
S11, n = 1;
S12, m = m + 1, then return to step S3;
S13, compute the camera pose corresponding to the maximum similarity measure, i.e. the optimal camera pose;
S14, output the optimal camera pose and the affine transformation matrix; together they constitute the 2D/3D registration result.
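The loop S3-S13 is essentially an exhaustive grid search over the rotation counts m and n. A compact sketch, with the projection rendering and contour scoring (steps S6-S9) abstracted into a caller-supplied `score` function; the function name and parameters are illustrative:

```python
import math

def search_best_pose(score, s, alpha, gamma, step):
    """Exhaustive local search over camera rotations (cf. steps S3-S13).
    score(theta_x, theta_y) returns the similarity measure of the
    projection rendered at those angles (rendering itself is omitted).
    alpha/gamma are the initial X/Y angles, step is the rotation step.
    Returns (best_theta_x, best_theta_y, best_score)."""
    best = (None, None, -math.inf)
    for m in range(1, s + 1):          # rotations about the X axis
        theta_x = alpha + m * step
        for n in range(1, s + 1):      # rotations about the Y axis
            theta_y = gamma + n * step
            c = score(theta_x, theta_y)
            if c > best[2]:
                best = (theta_x, theta_y, c)
    return best
```

With a toy score peaking at (2, 3), the search recovers exactly that pose.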
The above-described procedure shown in fig. 2 is performed on each intraoral color photograph, and the optimal camera pose and affine transformation matrix corresponding to each intraoral color photograph are obtained.
The specific implementation mode of the step (3) is as follows: the principle of the step (3) is as follows:
Texture fusion aims to ensure a natural transition between the images on the two sides of each seam on the three-dimensional model. It consists of three basic steps. First, compute the texture coordinates of the different regions of the dental model from the optimal camera poses corresponding to the intraoral photographs, and find the seams between adjacent textures. Second, extract the images on the two sides of each seam between adjacent textures, construct the corresponding Laplacian residual pyramids while keeping the top images of the Gaussian downsampling pyramids, and combine the images of each pyramid level by weighting (this second step uses the existing image-pyramid-based fusion method). Third, insert the fused image back into the original image to reconstruct the fused texture. These steps are repeated for each of the 3 different intraoral photographs. The specific steps are as follows:
First, the three-dimensional dental model is quickly segmented, dividing its triangular mesh into 3 regions. The purpose is that, during projection texture mapping, only the image information with complete tooth texture in each intraoral photograph is mapped, avoiding excessive overlap between different textures and improving the texture fusion result. As shown in fig. 7, in the world coordinate system the cutting planes are placed perpendicular to the xz-plane, with the center of gravity of the three-dimensional dental model (the same point as its center) as the center of rotation. According to the general characteristics of tooth arrangement, the segmentation angle ε is set to 60° and the number of cutting planes g is set to 2, yielding the left, middle, and right regions.
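The region split can be illustrated by classifying vertices according to their angle about the center of gravity in the xz-plane. In this sketch the ±30° band around the +Z direction for the middle region and the mapping of +X to the right region are assumptions, not fixed by the text:

```python
import math

def split_regions(vertices, center, eps_deg=60.0):
    """Label each vertex 'left', 'middle' or 'right' by the angle of its
    (x, z) offset from the model's center of gravity, mimicking the two
    cutting planes perpendicular to the xz-plane. The middle region spans
    eps_deg degrees (eps_deg/2 on either side of the +Z direction)."""
    half = math.radians(eps_deg) / 2.0
    labels = []
    for x, y, z in vertices:
        ang = math.atan2(x - center[0], z - center[2])  # angle from +Z toward +X
        if ang > half:
            labels.append('right')   # +X side is 'right' by assumption here
        elif ang < -half:
            labels.append('left')
        else:
            labels.append('middle')
    return labels
```

A vertex straight ahead of the center lands in the middle region, while vertices far to either side land in the left and right regions.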
Based on the optimal camera poses corresponding to the different intraoral photographs in the 2D/3D registration result obtained in step (2), the 3 intraoral color photographs are mapped to the corresponding triangular mesh regions by initial projection texture mapping (i.e. the left side image is mapped to the triangular mesh region on the left of the segmented three-dimensional dental model, the positive image to the middle region, and the right side image to the right region). For the initial projection texture mapping method see the document "Segal M, Korobkin C, van Widenfelt R, et al." At this point a seam clearly appears on the three-dimensional dental model, as shown in fig. 8(a), where a line indicates the seam L.
The line in fig. 8(a) represents the seam L on the three-dimensional dental model; the line in fig. 8(b) represents the projection L' of the three-dimensional seam L on the intraoral right side bit image; the left line in fig. 8(c) represents the projection L'' of the three-dimensional seam L on the intraoral positive image (the originals of these three images obtained by the method of the present invention are color images; the colors are removed for the drawings of the specification). "Projection" here refers to transforming all points and lines of a three-dimensional object into a plane by perspective transformation, i.e. projecting a three-dimensional object onto a two-dimensional plane. The outer contours of the different mapping regions are extracted during projection texture mapping. Note that the contour is projected into the two-dimensional texture coordinate system with the same perspective projection transformation matrix, and the contour shape changes when that matrix changes. As shown in figs. 8(b) and 8(c), the line segments in the figures correspond to the projections L' and L'' of the seam L on different texture images, and their shapes are completely different.
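The perspective projection that maps the three-dimensional seam L to L' and L'' can be sketched with a generic 3x4 projection matrix; the matrix below is a hypothetical pinhole setup, whereas in the method the matrix comes from the optimal camera pose:

```python
import numpy as np

def project_points(P, pts3d):
    """Project 3-D points with a 3x4 perspective projection matrix P
    (world -> image), the same kind of transform that maps the seam L
    to L' and L''. Returns an (N, 2) array of image coordinates."""
    pts = np.asarray(pts3d, dtype=float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coordinates
    proj = homo @ P.T                                 # (N, 3) homogeneous image points
    return proj[:, :2] / proj[:, 2:3]                 # perspective divide by w
```

For the identity pinhole matrix, a point at depth 2 is simply scaled by 1/2 into the image plane.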
To ensure a natural transition across the texture seam L, the q columns of pixels to the left of L' must be fused with the q columns of pixels to the left of L''; similarly, the q columns to the right of L'' are fused with the q columns to the right of L' (q may be chosen according to actual needs; q is set to 50 in this embodiment). However, the two line shapes are quite different, and direct fusion would not make the seam transition natural, so before fusion L' and L'' are "straightened" into vertical segments. Straightening takes the vertex of L' (respectively L'') as the reference: the other points on the line segment (i.e. all points except the vertex), together with the q pixels to the left and right of each point, are shifted horizontally so that the other points align vertically with the vertex. This yields the image G_1 shown in fig. 9(a) (obtained by horizontally shifting each non-vertex point on the L' segment in fig. 8(b) together with its left and right q pixels) and the image G_2 shown in fig. 9(b) (obtained likewise from the L'' segment in fig. 8(c)); the originals of figs. 9(a) and 9(b) obtained by the method of the present invention are color images, with colors removed for the drawings of the specification. This guarantees that the pixel rows of the images on the two sides of the seam correspond exactly.
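The "straightening" step amounts to re-centering each pixel row on the seam's column. A small NumPy sketch; the row-wise seam positions `seam_xs` and the strip width `q` are illustrative names:

```python
import numpy as np

def straighten_seam(img, seam_xs, q):
    """Extract a (rows, 2q) strip around a curved, roughly vertical seam
    and straighten it: row r of the strip holds the 2q pixels centered
    on the seam position seam_xs[r], so the seam becomes a vertical line.
    seam_xs[0] plays the role of the seam's vertex in the description."""
    rows = len(seam_xs)
    strip = np.empty((rows, 2 * q) + img.shape[2:], dtype=img.dtype)
    for r, x in enumerate(seam_xs):
        strip[r] = img[r, x - q:x + q]   # q pixels left + q pixels right of the seam
    return strip
```

After this, the same pixel row in the two extracted strips lies at the same horizontal offset from the seam, which is exactly what the pyramid fusion needs.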
After extracting the images G_1 and G_2, the mask picture shown in fig. 9(c) is added, and weighted fusion is performed via the image pyramid to obtain the fused image G (covering the left and right 50 columns of pixels). Fig. 9(c) simply reuses the mask picture of the existing image-pyramid-based fusion method, placed between G_1 and G_2; there is no requirement on the positions of G_1 and G_2. The mask picture can be regarded as a blending weight: black corresponds to 0, white to 1, and the greater the weight, the more detail of the upper-layer image is retained. An image-pyramid-based fusion method is described in the document "Pandey A, Pati U. A novel technique for non-overlapping image encoding based on pyramid method [C] // 2013 Annual IEEE India Conference (INDICON). IEEE, 2013", which the present invention does not repeat. Finally, the image G is inserted back into the original texture images: G replaces G_1 in the right side bit image and G_2 in the positive image, giving the updated right side and positive images; projection texture mapping then maps the updated images onto the three-dimensional dental model, updating its texture information and yielding a realistic dental model without obvious seams. The above steps are repeated for each seam until all seams are processed.
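The pyramid fusion of G_1 and G_2 under a mask can be sketched as follows. Box-filter downsampling and nearest-neighbour upsampling stand in for the Gaussian reduce/expand of the cited method, so this illustrates the structure of the algorithm rather than reimplementing it faithfully:

```python
import numpy as np

def _down(img):
    # 2x2 box-filter downsample (stand-in for Gaussian pyramid reduce)
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def _up(img):
    # nearest-neighbour upsample (stand-in for pyramid expand)
    return img.repeat(2, axis=0).repeat(2, axis=1)

def pyramid_blend(g1, g2, mask, levels=3):
    """Blend two grayscale images across a seam with a Laplacian pyramid:
    the Laplacian residuals of g1/g2 are mixed per level using a Gaussian
    pyramid of the 0..1 mask, then the blended pyramid is collapsed."""
    l1, l2, m = g1.astype(float), g2.astype(float), mask.astype(float)
    lap1, lap2, masks = [], [], []
    for _ in range(levels):
        d1, d2 = _down(l1), _down(l2)
        lap1.append(l1 - _up(d1)); lap2.append(l2 - _up(d2)); masks.append(m)
        l1, l2, m = d1, d2, _down(m)
    out = m * l1 + (1 - m) * l2                      # blend the pyramid tops
    for a, b, w in zip(reversed(lap1), reversed(lap2), reversed(masks)):
        out = _up(out) + w * a + (1 - w) * b         # add blended residuals per level
    return out
```

With an all-white mask the output reproduces g1, and with an all-black mask it reproduces g2; intermediate masks give a smooth multi-scale transition.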
The specific process is shown in fig. 6, and includes:
inputting: a tooth triangular network model, a 2D/3D registration result and 3 intraoral color photographs;
t1, segmenting the texture of the dental model;
t2, initial projection texture mapping;
t3, searching for a seam, and setting T to be 1;
t4, determining that T < 2(T indicates that there are two seams in three intraoral photographs and two times of processing are required, if there are more intraoral photographs, setting a corresponding value for T, for example, 4 intraoral photographs, if there are 3 seams, then T < 3.) is true, if yes, going to T5, if no, going to T8;
t5, extracting texture images G of two sides of the seam respectively 1 、G 2
T6, merging (i.e. fusing) the two images extracted from the same joint based on the image pyramid to obtain a merged image (i.e. a fused image G);
t7, inserting the merged image back into the original texture image, respectively, where T is T +1, and then returning to T4;
t8, updating the texture on the dental cast;
t9, end.
The effectiveness and accuracy of the proposed method are evaluated experimentally below. The experimental equipment comprises an optical scanner, a single-lens reflex camera, and a computer (Intel(R) Core(TM) i7-6700HQ CPU @ 2.60 GHz, 16 GB memory, Nvidia GeForce GTX 960M graphics card). The software environment is the Windows 10 operating system, the VS 2017 compiler, OpenCV, and VTK.
In the experiments, 3 intraoral color photographs were taken of each of 2 male and 2 female volunteers, comprising the intraoral right side, positive, and left side images, and their dental plaster models were scanned with the optical scanner to obtain maxillary three-dimensional dentition data.
According to the method of the invention, a realistic dentition model was constructed for the teeth of each of the 2 male and 2 female volunteers. The experimental results are shown in fig. 12 (the original dentition images in fig. 12 are color photographs; colors are removed for the drawings of the specification).
As the mapping results in fig. 12 show, the realistic dentition models constructed for the 4 volunteers exhibit no significant distortion or dislocation of surface texture; the intraoral dental features, particularly the gum line, register perfectly with the three-dimensional model, and there are no obvious seams between the textures.
Figs. 10(a), 10(b), and 10(c) are the left, front, and right views of the first volunteer's initial texture mapping result; visually, there is no significant misalignment in the texture. Figs. 11(a), 11(b), and 11(c) are the left, front, and right views of the final texture mapping result obtained after updating the texture information on the first volunteer's three-dimensional dental model and re-mapping the texture (the originals of all six figures obtained by the method of the present invention are color images; colors are removed for the drawings of the specification). Comparing figs. 11(a)-(c) with figs. 10(a)-(c) shows more intuitively that the texture seams in the final result are eliminated and the transitions between adjacent textures are more natural.
Based on the above method, a color dentition model with realistic image texture can be constructed semi-automatically with a small amount of simple manual operation; the texture image on the dentition model shows no obvious distortion, dislocation, or seams, achieving a good simulation effect. The experiments demonstrate the effectiveness of the texture fusion method: transitions between adjacent textures are natural, with no significant texture misalignment.
The method of the invention has the following characteristics:
1) To address the problem that the projection image of the 2D dental model and the intraoral photograph share too few similar features (points, contours, shapes, and the like), a small amount of manual interaction with no special precision requirement is used to find the feature inflection points (i.e. the target point sets C_1, C_2) on the 2D dental model projection image and the intraoral photograph, and 2D image alignment is achieved from these inflection points;
2) in the 2D/3D registration process, according to the 2D image registration result, the similarity measure between the 2D dental model projection image and the intraoral photograph is computed from the Euclidean distance of the local tooth contours, and the optimal camera pose is obtained through a local spatial rotation search of the camera and single-objective optimization. During texture mapping this ensures that the texture aligns well with the dental model, with good accuracy and robustness;
3) for the images on the two sides of each texture seam, the corresponding Laplacian residual pyramids are constructed, each pyramid level is weighted and combined, and the fused texture image is reconstructed, eliminating the obvious seams on the dental model and producing a complete realistic color dental model.
The above-described embodiments are merely exemplary embodiments of the present invention, and it will be readily apparent to those skilled in the art that various modifications and variations can be made in the present invention without departing from the spirit and scope of the invention as defined in the following claims.

Claims (9)

1. A method of constructing a realistic dentition model, characterized by comprising the following steps:
(1) acquiring a three-dimensional dental model and at least three intraoral photographs; the intraoral photographs are color photographs of the oral cavity;
(2) obtaining the optimal camera posture corresponding to each intraoral photograph by utilizing the three-dimensional dental model and the intraoral photographs;
(3) mapping each intraoral photograph onto the three-dimensional dental model according to the optimal camera posture corresponding to each intraoral photograph, finding out texture seams on the three-dimensional dental model, and then eliminating the texture seams to obtain a realistic dentition model;
the operation of the step (2) comprises the following steps:
the following treatments were performed for each oral photograph:
marking points on the same teeth on the intraoral photograph and the three-dimensional dental model to respectively obtain a set of marking points of the intraoral photograph and the three-dimensional dental model; each marking point set comprises at least three marking points;
for the intraoral photograph, performing projection sampling on the three-dimensional dental model to obtain all projection images corresponding to the intraoral photograph, and transferring the marking points on the three-dimensional dental model onto each projection image by perspective projection as the marking points on the projection images;
acquiring the intraoral picture and a target point set on each projection image corresponding to the intraoral picture;
calculating the optimal camera pose corresponding to the intraoral photograph according to the target point sets;
for the intraoral photograph, the operation of performing perspective projection sampling on the three-dimensional dental model to obtain all projection images corresponding to the intraoral photograph comprises:
A1, rotating the camera from the original position around the X axis and the Y axis respectively to the initial position corresponding to the intraoral photograph; the original position is: the center point of the three-dimensional dental model coincides with the origin of the world coordinate system, and the Z axis of the world coordinate system points straight ahead of the three-dimensional dental model; the camera is located directly in front of the three-dimensional dental model, the focal point of the camera is at the center point of the three-dimensional dental model, the dentition on the three-dimensional dental model is symmetric about the Z axis, the plane of the camera is perpendicular to the Z axis, and the projection direction of the camera lies on the Z axis; a sphere is drawn with the origin as the center and the distance from the intersection of the camera and the Z axis at the original position to the origin as the radius, and this sphere serves as the rotation sphere; when the camera rotates around the X axis and the Y axis, the projection direction of the camera always points to the focal point of the camera, and the distance from the camera to the focal point remains unchanged; rotating around the X axis and the Y axis means moving the camera on the rotation sphere; the initial position comprises an initial angle of rotation about the X axis and an initial angle of rotation about the Y axis;
A2, rotating the camera around the X axis and the Y axis to obtain all projection images corresponding to the intraoral photograph: rotating the camera by θ_x about the X axis and θ_y about the Y axis yields the projection image of the three-dimensional dental model on the camera's view plane at rotation angles θ_x and θ_y; θ_x ∈ [α, β],
Figure FDA0003779307210000021
where θ is the rotation step of the camera and s is the rotation period of the camera.
2. The method of constructing a realistic dentition model according to claim 1 wherein: the operation of the step (1) comprises the following steps:
obtaining a tooth plaster model, and scanning the tooth plaster model to obtain a three-dimensional tooth model;
photographing the patient's oral cavity from the right side, directly in front, and the left side respectively to obtain 3 intraoral photographs, namely: the right side bit image P_R, the positive image P_F, and the left side image P_L.
3. The method of constructing a realistic dentition model according to claim 2 wherein: the operation of the step A2 comprises the following steps:
for the right side bit image P_R:
θ_x ∈ [α, β], wherein
Figure FDA0003779307210000022
Figure FDA0003779307210000023
wherein,
Figure FDA0003779307210000024
wherein α is the initial angle of the camera's rotation around the X axis when acquiring the projection images corresponding to the right side bit image;
Figure FDA0003779307210000025
is the initial angle of the camera's rotation around the Y axis when acquiring the projection images corresponding to the right side bit image;
when acquiring the projection images corresponding to the right side bit image, the rotation angles of the camera are:
Figure FDA0003779307210000026
wherein m_R ∈ (0, s];
Figure FDA0003779307210000027
wherein n_R ∈ (0, s];
wherein m_R represents the number of rotations of the camera around the X axis and n_R the number of rotations of the camera around the Y axis when acquiring the projection images corresponding to the right side bit image; finally, s × s projection images at different angles corresponding to the right side bit image are obtained;
for the normal image P_F:
θ_x ∈ [α, β], wherein
Figure FDA0003779307210000031
Figure FDA0003779307210000032
wherein,
Figure FDA0003779307210000033
wherein α is the initial angle of the camera's rotation around the X axis when acquiring the projection images corresponding to the normal image;
Figure FDA0003779307210000034
is the initial angle of the camera's rotation around the Y axis when acquiring the projection images corresponding to the normal image;
when acquiring the projection images corresponding to the normal image, the rotation angles of the camera are:
Figure FDA0003779307210000035
wherein m_F ∈ (0, s];
Figure FDA0003779307210000036
wherein n_F ∈ (0, s];
wherein m_F represents the number of rotations of the camera around the X axis and n_F the number of rotations of the camera around the Y axis when acquiring the projection images corresponding to the normal image; finally, s × s projection images at different angles corresponding to the normal image are obtained;
for the left side image P_L:
θ_x ∈ [α, β], wherein
Figure FDA0003779307210000037
Figure FDA0003779307210000038
wherein,
Figure FDA0003779307210000039
wherein α is the initial angle of the camera's rotation around the X axis when acquiring the projection images corresponding to the left side image;
Figure FDA00037793072100000310
is the initial angle of the camera's rotation around the Y axis when acquiring the projection images corresponding to the left side image;
when acquiring the projection images corresponding to the left side image, the rotation angles of the camera are:
Figure FDA00037793072100000311
wherein m_L ∈ (0, s],
Figure FDA00037793072100000312
wherein n_L ∈ (0, s],
wherein m_L represents the number of rotations of the camera around the X axis and n_L the number of rotations of the camera around the Y axis when acquiring the projection images corresponding to the left side image; finally, s × s projection images at different angles corresponding to the left side image are obtained;
rotating the camera by θ_x around the X axis and θ_y around the Y axis is realized as follows:
the camera at the original position is first moved along the intersection line of the rotation sphere and the YZ plane to the angle θ_x with the XZ plane, and then moved along the intersection line of the rotation sphere with the plane through the X axis at angle θ_x to the XZ plane, to the angle θ_y with the YZ plane;
or the camera at the original position is first moved along the intersection line of the rotation sphere and the XZ plane to the angle θ_y with the YZ plane, and then moved along the intersection line of the rotation sphere with the plane through the Y axis at angle θ_y to the YZ plane, to the angle θ_x with the XZ plane.
4. The method of constructing a realistic dentition model according to claim 3 wherein: the operation of acquiring the intraoral photograph and the target point set on each projection image corresponding to the intraoral photograph includes:
the following processing is respectively carried out on the intraoral picture and each projection image corresponding to the intraoral picture:
performing binarization processing and obtaining all the feature points on the binarized image, the feature points forming a point set B = {b_1, b_2, b_3, …, b_g};
grouping the point set B to obtain a plurality of sub-point sets B_i = {b_i, b_(i+1), b_(i+2), …, b_j};
finding in each sub-point set B_i the point with the minimum y value as the feature point b'_n of that sub-point set; the feature points b'_n of all sub-point sets B_i form a point set B';
finding in the point set B' the point located to the left of each marking point with the minimum distance to that marking point, i.e. the inflection point at the position where two teeth adjoin; the inflection points at all tooth adjacency positions form the target point set.
5. The method of constructing a realistic dentition model according to claim 4, wherein: the operation of grouping the point set B to obtain a plurality of sub-point sets B_i = {b_i, b_(i+1), b_(i+2), …, b_j} comprises:
searching backward from the first point of the point set B, putting the points whose Euclidean distance from the first point is less than D into one group, until a point whose Euclidean distance from the first point is greater than or equal to D is reached, completing the division of the first group; the points in this group form the first sub-point set;
taking the point whose Euclidean distance from the first point of the previous group is greater than or equal to D as the new starting point and searching backward, putting the points whose Euclidean distance from the new starting point is less than D into one group, until a point whose Euclidean distance from the new starting point is greater than or equal to D is reached, completing the division of this group; the points in this group form a sub-point set;
continuing in this way, a plurality of sub-point sets B_i = {b_i, b_(i+1), b_(i+2), …, b_j} are obtained;
wherein D represents the boundary length for grouping, D = L/p, where L denotes the horizontal length of the whole dentition in the binarized image, p denotes the number of segments into which the teeth are horizontally divided, and 30 < p < 40.
6. The method of constructing a realistic dentition model according to claim 5 wherein: the operation of finding out the point which is located at the left side of each marking point and has the minimum distance with the marking point in the point set B' comprises the following steps:
for each marking point, the following processing is carried out:
traversing the x coordinate values of all the points in the point set B', finding all the points of which the x coordinate values are smaller than the x coordinate value of the mark point, and then calculating the distances between the points and the mark point;
finding out the minimum value in the distances, wherein the point corresponding to the minimum value is the point which is positioned at the left side of the mark point and has the minimum distance with the mark point.
7. The method of constructing a realistic dentition model according to claim 6, wherein: the operation of calculating the optimal camera pose corresponding to the intraoral photograph according to the target point sets comprises the following steps:
according to the target point sets on the intraoral photograph and each projection image, calculating the affine transformation matrix N of the intraoral photograph, and then performing affine transformation on the intraoral photograph to obtain the affine-transformed image of the intraoral photograph;
extracting the contour contour(a) of the affine-transformed intraoral photograph, and simultaneously extracting the contour contour(b) of the binarized image of each projection image corresponding to the intraoral photograph;
calculating the maximum similarity measure from contour(a) and contour(b):
for the right side bit image P_R, the maximum similarity measure is calculated by the following formula:
Figure FDA0003779307210000051
for the positive image P_F, the maximum similarity measure is calculated by the following formula:
Figure FDA0003779307210000061
for the left side image P_L, the maximum similarity measure is calculated by the following formula:
Figure FDA0003779307210000062
wherein k represents a constant, and C_R, C_F, C_L respectively denote the maximum similarity measures corresponding to P_R, P_F, P_L; p_i is a point on the contour contour(a) with coordinates (x_i, y_i); p_i' is the point on the contour contour(b) closest to p_i, with coordinates (x_i', y_i');
the camera rotation angles θ_x, θ_y corresponding to C_R are the optimal camera pose for the right side bit image, those corresponding to C_F are the optimal camera pose for the positive image, and those corresponding to C_L are the optimal camera pose for the left side image.
8. The method of constructing a realistic dentition model according to claim 7 wherein: the operation of the step (3) comprises the following steps:
(31) initial projection texture mapping: dividing the three-dimensional dental model into 3 areas, namely a right area, a middle area and a left area; mapping the right side bit image to a right side area of the three-dimensional dental model by using the optimal camera posture of the right side bit image, mapping the orthostatic image to a middle area of the three-dimensional dental model by using the optimal camera posture of the orthostatic image, and mapping the left side bit image to a left side area of the three-dimensional dental model by using the optimal camera posture of the left side bit image to obtain initial projection texture mapping; in the initial projection texture mapping, texture seams are arranged at the joint of the right region and the middle region and the joint of the middle region and the left region;
(32) finding texture seams: finding respective texture seams from the initial projected texture map obtained in step (31);
(33) and fusing images on two sides of each texture joint to eliminate obvious joints on the three-dimensional dental model so as to obtain a realistic dentition model.
9. The method of constructing a realistic dentition model according to claim 8 wherein: the operation of step (33) comprises:
respectively perspective-projecting a texture seam L on the three-dimensional dental model onto the two bit images forming the texture seam to obtain the projections L' and L'';
taking the vertex of L' as the reference, horizontally shifting the other points on L' together with the q pixels to the left and right of each point, so that the other points on L' align vertically with the vertex of L', obtaining the image G_1;
taking the vertex of L'' as the reference, horizontally shifting the other points on L'' together with the q pixels to the left and right of each point, so that the other points on L'' align vertically with the vertex of L'', obtaining the image G_2;
fusing the image G_1 and the image G_2 to obtain the image G;
inserting the image G back into the two bit images forming the texture seam, at the positions of the image G_1 and the image G_2 respectively, to obtain the two updated bit images;
mapping the two updated bit images onto the corresponding regions of the three-dimensional dental model using the optimal camera poses respectively corresponding to the two bit images.
CN201910698286.8A 2019-07-31 2019-07-31 Method for constructing realistic dentition model Active CN112308895B (en)


Publications (2)

Publication Number Publication Date
CN112308895A CN112308895A (en) 2021-02-02
CN112308895B true CN112308895B (en) 2022-09-23

Family

ID=74485349

Country Status (1)

Country Link
CN (1) CN112308895B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112991273B (en) * 2021-02-18 2022-12-16 山东大学 Orthodontic feature automatic detection method and system of three-dimensional tooth model
CN113487667B (en) * 2021-06-03 2023-07-25 北京大学深圳医院 Method and system for measuring palate volume of upper jaw, electronic device and storage medium
CN114663637A (en) * 2022-04-24 2022-06-24 杭州雅智医疗技术有限公司 Filling method, device and application of three-dimensional tooth model inverted concave area
CN116342849B (en) * 2023-05-26 2023-09-08 南京铖联激光科技有限公司 Method for generating dental model undercut region on three-dimensional grid
CN116804865B (en) * 2023-08-28 2023-12-08 成都飞机工业(集团)有限责任公司 Triaxial automatic programming characteristic identification and tool path generation method
CN117315161B (en) * 2023-10-31 2024-03-29 广州穗华口腔门诊部有限公司 Image acquisition and processing system for digital tooth model

Citations (5)

Publication number Priority date Publication date Assignee Title
CN106647796A (en) * 2016-06-22 2017-05-10 中国人民解放军63863部队 Three-dimensional model mechanism equipment motion general control method
WO2018069094A1 (en) * 2016-10-11 2018-04-19 Shin-Etsu Silicones Europe B.V. - Zweigniederlassung Deutschland Optical scanner for dental impressions, digitization method and system for dental models
CN108062784A (en) * 2018-02-05 2018-05-22 深圳市易尚展示股份有限公司 Threedimensional model texture mapping conversion method and device
CN108470370A (en) * 2018-03-27 2018-08-31 北京建筑大学 The method that three-dimensional laser scanner external camera joint obtains three-dimensional colour point clouds
CN109410316A (en) * 2018-09-21 2019-03-01 深圳前海达闼云端智能科技有限公司 Method, tracking, relevant apparatus and the storage medium of the three-dimensional reconstruction of object

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US10483004B2 (en) * 2016-09-29 2019-11-19 Disney Enterprises, Inc. Model-based teeth reconstruction

Non-Patent Citations (2)

Title
Walter Y.H. Lam et al., "Mapping intraoral photographs on virtual teeth model", Journal of Dentistry, Vol. 79, pp. 107-110, 2018 *
Liu Xingming et al., "Research on 3D reconstruction technology based on computer vision" (in Chinese), Journal of Shenzhen Institute of Information Technology, Vol. 11, No. 3, pp. 13-19, September 2013 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant