CN109903368A - Three-dimensional facial reconstruction system and its three-dimensional facial reconstruction method based on depth information - Google Patents
- Publication number
- CN109903368A (application CN201711292737.5A / CN201711292737A)
- Authority
- CN
- China
- Prior art keywords
- module
- face
- dimensional
- model
- depth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Processing (AREA)
Abstract
The invention discloses a three-dimensional facial reconstruction system based on depth information and its three-dimensional facial reconstruction method, wherein the three-dimensional facial reconstruction system includes a depth information acquisition module and a three-dimensional face processing module that are communicatively connected to each other. The three-dimensional face processing module preprocesses the depth image information about a measured target face acquired by the depth information acquisition module to obtain depth point cloud preprocessed data, and its reconstruction module rebuilds the three-dimensional face model of the measured target face based on the depth point cloud preprocessed data.
Description
Technical field
The present invention relates to three-dimensional facial reconstruction technology, and more particularly to a three-dimensional facial reconstruction system based on depth information and its three-dimensional facial reconstruction method.
Background technique
With the development of society and the progress of science and technology, three-dimensional facial reconstruction has become a hot topic in the field of computer vision. The development of three-dimensional facial reconstruction technology not only meets the practical demand for automatic identity recognition, but is also of great significance for advancing face-recognition-based cognitive science and the related fields of physiology and psychology. Specifically, three-dimensional facial reconstruction technology reconstructs a three-dimensional face model from a single face image or multiple face images of a measured individual. Existing three-dimensional facial reconstruction technology generally reconstructs the three-dimensional model from one or more two-dimensional face images of the measured individual.
A human face has extremely complex geometry. When the two-dimensional image information of a measured face is acquired with an existing RGB camera module, much information about the face is lost: in particular, the absolute dimensions of the face, such as the height of the nose and the depth of the eye sockets, as well as parts that are self-occluded and therefore invisible in a two-dimensional image. This loss of information is unavoidable; the essential reason is that a face has a complex three-dimensional structure, while an RGB camera module can only capture the two-dimensional image information formed by projecting the measured face from three-dimensional space onto a two-dimensional plane. Admittedly, in existing technology the measured face information can be partially recovered by combining the parameters of the image capture device with the relevant parameters of the shooting environment. However, on the one hand, these parameters are not constant values: different shooting environments, different shooting times, different shooting angles and so on all cause the parameters to vary, so the lost information cannot be recovered, or can only be partially recovered. On the other hand, even when the device parameters and shooting-environment parameters can be obtained accurately, the algorithm for recovering the measured face information is complicated, which lengthens the three-dimensional face modeling time.
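The projection loss described above can be made concrete with a toy pinhole-camera computation: two 3D points at different depths along the same viewing ray land on the same pixel, so depth is unrecoverable from the 2D image alone. A minimal sketch (the focal length and point coordinates are invented for illustration, not taken from the patent):

```python
# Pinhole projection: (x, y, z) -> (f*x/z, f*y/z). Points on the same viewing
# ray map to the same pixel, illustrating why a single RGB image loses the
# absolute dimensions of a face (nose height, eye-socket depth, etc.).
f = 500.0  # focal length in pixels (illustrative value)

def project(x, y, z):
    return (f * x / z, f * y / z)

near = project(0.1, 0.2, 1.0)   # point 1 m from the camera
far = project(0.2, 0.4, 2.0)    # point twice as far, on the same ray
assert near == far              # identical pixel -> depth information lost
print(near)                     # (50.0, 100.0)
```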
To address these problems, the 3D Morphable Model (3DMM) technique was developed. The core of this technique is to use the face models of a database as basis vectors, fit the input two-dimensional face image against them, and reconstruct the three-dimensional face from the fitted parameters. Those skilled in the art will appreciate that the three-dimensional morphable model essentially builds a statistical model by principal component analysis, and that principal component analysis is essentially a form of low-pass filtering. Such methods therefore still perform poorly at recovering the fine details of the face. More specifically, presenting complex facial expressions depends on countless small folds and wrinkles and on small variations of color and texture, none of which can be ignored, yet the low-pass-filtering approach of the three-dimensional morphable model cannot accurately capture and restore these small details, so its ability to render facial expressions is relatively weak.
In addition, when reconstructing a face with the morphable model, the acquired two-dimensional face image must undergo an optimization solve to improve the precision of the three-dimensional facial reconstruction; this optimization step is computationally intensive and algorithmically complex, so the reconstruction process takes a long time. In particular, when performing three-dimensional reconstruction with the morphable-model technique, the face models of the database are first used as basis vectors, combined with the shape vectors Si and texture vectors Ti of the face database and the unknown parameters α and β. In the reconstruction computation, α and β are first randomly initialized to obtain a randomly generated 3D model; this 3D model is projected onto a two-dimensional plane to obtain a new two-dimensional face image; a loss function is then constructed from the new image and the input two-dimensional face image; and the cycle is iterated in this way until the final convergence meets a preset precision requirement. The three-dimensional facial reconstruction technique based on the morphable model is therefore computationally intensive, has poor real-time performance, and suffers a large time delay.
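The iterative fitting loop just described can be sketched in miniature. The shape basis, landmark count and parameter values below are synthetic stand-ins (a real 3DMM derives its basis from a scanned face database via principal component analysis), and an orthographic projection replaces the full camera model; only the loop structure — random initialization, projection, reprojection loss, iteration to convergence — mirrors the description above:

```python
import numpy as np

# Toy morphable model: shape S(alpha) = S_mean + B @ alpha.
# S_mean, B and the landmark count are invented placeholders.
rng = np.random.default_rng(0)
n_pts, n_basis = 20, 3
S_mean = rng.normal(size=(n_pts, 3))
B = rng.normal(size=(n_pts * 3, n_basis))

def shape(alpha):
    return S_mean + (B @ alpha).reshape(n_pts, 3)

def project(S):
    return S[:, :2]  # orthographic projection onto the image plane

alpha_true = np.array([0.5, -1.0, 0.25])
target_2d = project(shape(alpha_true))  # landmarks of the "input" face image

# Jacobian of the projected shape w.r.t. alpha (the model is linear in alpha).
J = np.stack([B[:, j].reshape(n_pts, 3)[:, :2].ravel()
              for j in range(n_basis)], axis=1)

alpha = rng.normal(size=n_basis)  # random initialization, as in the 3DMM loop
for _ in range(2000):
    residual = (project(shape(alpha)) - target_2d).ravel()
    alpha -= 0.005 * (2.0 * J.T @ residual)  # gradient step on the L2 loss

print(np.round(alpha, 3))  # converges back to alpha_true
```

Even in this linear toy, thousands of iterations are needed; with a nonlinear camera model, texture coefficients β, and dense images rather than sparse landmarks, the cost grows accordingly, which is the real-time weakness the patent attributes to 3DMM.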
Summary of the invention
The main purpose of the present invention is to provide a three-dimensional facial reconstruction system based on depth information and its three-dimensional facial reconstruction method, wherein the three-dimensional facial reconstruction system reconstructs the three-dimensional face model based on the depth information of the measured face, so that its modeling precision is relatively higher.
Another object of the present invention is to provide a three-dimensional facial reconstruction system based on depth information and its three-dimensional facial reconstruction method, wherein the three-dimensional facial reconstruction system includes a depth information acquisition module for acquiring an image of the measured face that carries depth information, so that the measured face information is kept comparatively complete.
Another object of the present invention is to provide a three-dimensional facial reconstruction system based on depth information and its three-dimensional facial reconstruction method, wherein the three-dimensional facial reconstruction system can obtain the information of the measured face relatively completely without resorting to complicated algorithms and without knowing the parameters of the image capture device and the shooting environment, so that the three-dimensional facial reconstruction system has relatively better real-time performance.
Another object of the invention is to provide a three-dimensional facial reconstruction system based on depth information and its three-dimensional facial reconstruction method, wherein the three-dimensional facial reconstruction system includes a preprocessing module, wherein the preprocessing module preprocesses the image with measured-face depth information collected by the depth information acquisition module, so as to extract initial depth point cloud data and denoise the initial point cloud data to remove background point clouds and outliers.
Another object of the invention is to provide a three-dimensional facial reconstruction system based on depth information and its three-dimensional facial reconstruction method, wherein the three-dimensional facial reconstruction system includes a reconstruction module, the reconstruction module being communicatively connected to the preprocessing module so as to build the three-dimensional face model based on the preprocessed depth point cloud data, with fast computation and relatively better real-time interaction.
Another object of the invention is to provide a three-dimensional facial reconstruction system based on depth information and its three-dimensional facial reconstruction method, wherein the reconstruction module performs parameterized processing on the 3D face model established based on the depth point cloud preprocessed data, so as to improve the flexibility and accuracy of subsequent texture-mapping operations.
Another object of the invention is to provide a three-dimensional facial reconstruction system based on depth information and its three-dimensional facial reconstruction method, wherein different parameters can be set corresponding to the face gray-scale 3D model, so as to adjust details and improve the precision of the three-dimensional face model reconstruction.
Another object of the invention is to provide a three-dimensional facial reconstruction system based on depth information and its three-dimensional facial reconstruction method, wherein positioned mapping of patterns of interest is carried out based on the parameterized point cloud data and the face gray-scale 3D model, so as to increase the interest of the three-dimensional face model reconstruction.
Another object of the invention is to provide a three-dimensional facial reconstruction system based on depth information and its three-dimensional facial reconstruction method, wherein the three-dimensional facial reconstruction system further includes a registration module, so that the reconstructed three-dimensional face model has a consistent geometric topology structure.
Another object of the invention is to provide a three-dimensional facial reconstruction system based on depth information and its three-dimensional facial reconstruction method, wherein the registration module calibrates and fits the parameters of the three-dimensional face model established by the reconstruction module, so as to register it to a standard three-dimensional face model with a consistent geometric topology structure.
Another object of the invention is to provide a three-dimensional facial reconstruction system based on depth information and its three-dimensional facial reconstruction method, wherein the three-dimensional face model constructed by the three-dimensional facial reconstruction system has the feature of a consistent geometric topology structure, which facilitates the development of subsequent 3D applications, such as 3D facial animation.
Another object of the invention is to provide a three-dimensional facial reconstruction system based on depth information and its three-dimensional facial reconstruction method, wherein the three-dimensional facial reconstruction system further includes an RGB camera module for acquiring the two-dimensional RGB image of the measured target face for texture-mapping operations, wherein the RGB camera module is disposed adjacent to the TOF depth information camera module, so that the RGB camera module has an acquisition position nearly coincident with that of the TOF depth information camera module, and the two-dimensional RGB image collected by the RGB camera module corresponds to the depth information image collected by the TOF depth information camera module.
Another object of the invention is to provide a three-dimensional facial reconstruction system based on depth information and its three-dimensional facial reconstruction method, wherein the RGB camera module is set to operate synchronously with the TOF depth information camera module, so that when acquiring the measured face image information, errors caused by involuntary shaking of the face or other instability of the measured face itself can be effectively cancelled, which improves the matching degree of subsequent texture maps and the precision of the modeling.
Another object of the invention is to provide a three-dimensional facial reconstruction system based on depth information and its three-dimensional facial reconstruction method, wherein the three-dimensional facial reconstruction system further includes an optimization module to optimize the visual effect of the reconstructed three-dimensional face model, making it more realistic as a whole.
Another object of the invention is to provide a three-dimensional facial reconstruction system based on depth information and its three-dimensional facial reconstruction method, wherein the optimization module includes a group of decoration modules, such as a hair-style module, a glasses module, a hairpin module and the like, wherein the decoration modules can be positioned relatively accurately based on the parameterized three-dimensional face model, so as to personalize and enliven the reconstructed three-dimensional face model.
Another object of the invention is to provide a three-dimensional facial reconstruction system based on depth information and its three-dimensional facial reconstruction method, wherein the depth information acquisition module is a TOF depth information camera module; compared with a high-precision three-dimensional laser scanner, it both guarantees that the depth information of the measured face has relatively high precision and has a lower cost.
Another object of the invention is to provide a three-dimensional facial reconstruction system based on depth information and its three-dimensional facial reconstruction method, wherein the TOF depth information camera module has a relatively small size, so that a three-dimensional facial reconstruction system and device based on the TOF depth camera module have relatively better portability.
Another object of the invention is to provide a three-dimensional facial reconstruction system based on depth information and its three-dimensional facial reconstruction method, wherein the TOF depth information camera module includes a light source module and a photosensitive control module, the photosensitive control module being communicatively connected with the light source module, wherein the light source module has a safety protection mechanism to prevent the laser emitted by the light source module from damaging eye safety.
Through the following description, other advantages and features of the present invention will become apparent, and can be accomplished by the means and combinations particularly pointed out in the claims.
According to one aspect of the present invention, the present invention provides a three-dimensional facial reconstruction system based on depth information for rebuilding a three-dimensional face model of a measured target face, wherein the three-dimensional facial reconstruction system includes:
a depth information acquisition module, wherein the depth information acquisition module is used for acquiring depth image information about the measured target face; and
a three-dimensional face processing module, wherein the three-dimensional face processing module includes a preprocessing module and a reconstruction module communicatively connected to the preprocessing module, the preprocessing module being communicatively connected to the depth information acquisition module, wherein the preprocessing module preprocesses the depth image information about the measured target face acquired by the depth information acquisition module, so as to obtain depth point cloud preprocessed data about the measured target face based on the depth image information, and wherein the reconstruction module rebuilds the three-dimensional face model of the measured target face based on the depth point cloud preprocessed data.
According to one embodiment of the present invention, the preprocessing module includes a parsing module and a noise reduction module communicatively connected to the parsing module; the parsing module is communicatively connected to the depth information acquisition module, and the reconstruction module is communicatively connected to the noise reduction module, wherein the parsing module parses the depth image information of the measured target face to obtain initial measured-face depth point cloud data, and the noise reduction module denoises the initial measured-face depth point cloud data to obtain the depth point cloud preprocessed data.
According to one embodiment of the present invention, the reconstruction module includes a regionalization module as well as a parameterization module and a texture-mapping module each communicatively connected to the regionalization module; the regionalization module is communicatively connected to the preprocessing module, wherein the regionalization module performs regionalization processing on the face gray-scale 3D model of the depth point cloud preprocessed data, the parameterization module performs parameterized processing on the regionalized face gray-scale 3D model, and the texture-mapping module synthesizes a two-dimensional RGB face image with the parameterized face gray-scale 3D model, so as to rebuild the three-dimensional face model of the measured target face.
According to one embodiment of the present invention, the reconstruction module includes a regionalization module as well as a parameterization module and a texture-mapping module each communicatively connected to the regionalization module; the regionalization module is communicatively connected to the noise reduction module, wherein the regionalization module performs regionalization processing on the face gray-scale 3D model of the depth point cloud preprocessed data, the parameterization module performs parameterized processing on the regionalized face gray-scale 3D model, and the texture-mapping module synthesizes a two-dimensional RGB face image with the parameterized face gray-scale 3D model, so as to rebuild the three-dimensional face model of the measured target face.
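The texture-mapping step in the embodiments above can be sketched as projecting each 3D vertex through the RGB camera's intrinsics and sampling the color found at that pixel. Everything here — the intrinsics, the tiny stand-in image, and the helper name `texture_vertices` — is a hypothetical illustration, not the patent's implementation; it assumes the RGB and depth cameras share nearly the same viewpoint, as the adjacent, synchronized mounting described later is meant to ensure:

```python
import numpy as np

# Assumed RGB camera intrinsics (placeholder values) and a tiny "image".
fx = fy = 100.0
cx = cy = 2.0
rgb_image = np.arange(5 * 5 * 3).reshape(5, 5, 3)  # 5x5 stand-in RGB image

def texture_vertices(vertices):
    """Project 3D vertices into the RGB image and sample one color per vertex."""
    colors = []
    for x, y, z in vertices:
        u = int(round(fx * x / z + cx))  # column in the RGB image
        v = int(round(fy * y / z + cy))  # row in the RGB image
        u = min(max(u, 0), rgb_image.shape[1] - 1)  # clamp to image bounds
        v = min(max(v, 0), rgb_image.shape[0] - 1)
        colors.append(rgb_image[v, u])
    return np.array(colors)

verts = np.array([[0.0, 0.0, 1.0],     # projects to pixel (u=2, v=2)
                  [0.01, -0.01, 1.0]]) # projects to pixel (u=3, v=1)
print(texture_vertices(verts))
```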
According to one embodiment of the present invention, the three-dimensional facial reconstruction system further comprises a registration module, wherein the registration module is communicatively connected to the three-dimensional face processing module, and wherein the registration module is used for calibrating and fitting the parameters of the three-dimensional face model, so that the registered three-dimensional face model has the feature of a consistent geometric topology structure.
According to one embodiment of the present invention, the three-dimensional facial reconstruction system further comprises an optimization module, wherein the optimization module is communicatively connected to the three-dimensional face processing module, and wherein the optimization module is used for optimizing the three-dimensional face model.
According to one embodiment of the present invention, the depth information acquisition module includes a camera module, wherein the camera module is communicatively connected to the texture-mapping module, and wherein the camera module obtains the two-dimensional RGB face image of the measured target face by photographing the measured target face.
According to one embodiment of the present invention, the depth information acquisition module is a TOF depth information camera module.
According to one embodiment of the present invention, the depth information acquisition module is communicatively connected to a TOF depth information camera module.
According to another aspect of the present invention, the present invention further provides a three-dimensional facial reconstruction method based on depth information, wherein the three-dimensional facial reconstruction method includes the following steps:
(a) obtaining depth image information of a measured target face;
(b) obtaining depth point cloud preprocessed data about the measured target face according to the depth image information; and
(c) rebuilding the three-dimensional face model of the measured target face based on the depth point cloud preprocessed data.
According to one embodiment of the present invention, the step (b) further comprises the steps of:
(b.1) parsing the depth image information to obtain initial measured-face depth point cloud data; and
(b.2) denoising the initial measured-face depth point cloud data to obtain the depth point cloud preprocessed data.
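Step (b.1), parsing the depth image into an initial point cloud, amounts to back-projecting each pixel's depth value through the depth camera's intrinsics. A minimal sketch with made-up intrinsics and a 2x2 depth image (real modules would use calibrated values and full-resolution frames):

```python
import numpy as np

# Assumed depth-camera intrinsics (placeholder values).
fx = fy = 2.0
cx = cy = 1.0

def depth_to_point_cloud(depth):
    """Back-project a depth image (meters per pixel) into an Nx3 point cloud."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]        # pixel row/column grids
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]        # drop pixels with no depth return

depth = np.array([[0.0, 1.0],
                  [1.0, 2.0]])
print(depth_to_point_cloud(depth))   # 3 valid points, row-major order
```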
According to one embodiment of the present invention, the step (c) further comprises the steps of:
(c.1) regionalizing the face gray-scale 3D model of the depth point cloud preprocessed data;
(c.2) performing parameterized processing on the regionalized face gray-scale 3D model; and
(c.3) synthesizing a two-dimensional RGB face image with the parameterized face gray-scale 3D model, so as to rebuild the three-dimensional face model.
According to one embodiment of the present invention, the step (b.2) further comprises the steps of:
(b.2.1) removing the background point cloud data from the depth point cloud data, so that only the effective region of the measured face is retained in the depth point cloud preprocessed data;
(b.2.2) identifying the flying-point data present in the data of the effective region of the measured face, wherein flying-point data are data points that differ greatly between adjacent laser samples; and
(b.2.3) re-assigning values to the flying-point data.
According to one embodiment of the present invention, in the step (b.2.1), the initial depth point cloud data is filtered, so as to remove the background point cloud data from the depth point cloud preprocessed data.
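Steps (b.2.1)–(b.2.3) can be sketched on a one-dimensional row of depth samples: a depth threshold filters out the background, samples whose depth jumps sharply relative to both neighbors are flagged as flying points, and the flagged samples are re-assigned from their neighborhood (a median here). The threshold values and the median re-assignment are illustrative choices, not taken from the patent:

```python
import numpy as np

def denoise_depth(z, bg_depth=1.5, fly_thresh=0.2):
    """Toy version of steps (b.2.1)-(b.2.3) on a 1-D row of depth samples."""
    z = np.asarray(z, dtype=float)
    # (b.2.1) background removal: keep only samples nearer than bg_depth.
    z = z[z < bg_depth]
    out = z.copy()
    # (b.2.2) flag flying points: large depth jump vs. both neighbors.
    for i in range(1, len(z) - 1):
        if abs(z[i] - z[i-1]) > fly_thresh and abs(z[i] - z[i+1]) > fly_thresh:
            # (b.2.3) re-assign the flying point from its neighborhood.
            out[i] = np.median([z[i-1], z[i], z[i+1]])
    return out

row = [0.50, 0.52, 0.95, 0.53, 2.0, 2.0, 0.54]  # face samples + background
print(denoise_depth(row))  # background dropped, the 0.95 spike re-assigned
```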
According to one embodiment of the present invention, in the step (c.1), the face gray-scale 3D model of the depth point cloud preprocessed data is regionalized by triangulating the depth point cloud preprocessed data into a mesh.
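Because a depth camera yields an organized point cloud (one point per pixel), the triangulation in step (c.1) can be done directly on the pixel grid, splitting each quad of four neighboring pixels into two triangles. This grid scheme is one simple way to realize the meshing the embodiment describes (a general Delaunay triangulation would be another); the grid size is arbitrary:

```python
def grid_triangulation(h, w):
    """Index triples meshing an organized h x w point cloud, row-major order."""
    tris = []
    for r in range(h - 1):
        for c in range(w - 1):
            i = r * w + c                           # top-left corner of quad
            tris.append((i, i + 1, i + w))          # upper-left triangle
            tris.append((i + 1, i + w + 1, i + w))  # lower-right triangle
    return tris

tris = grid_triangulation(3, 3)
print(len(tris))  # 4 quads, 2 triangles each -> 8 triangles
```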
According to one embodiment of the present invention, the following step is further included before the step (c.3): obtaining the two-dimensional RGB face image of the measured target face through a camera module.
According to one embodiment of the present invention, in the step (a), the depth information acquisition module is a TOF depth information camera module, and the depth image information of the measured target face is obtained by photographing with the TOF depth information camera module.
According to one embodiment of the present invention, in the step (a), the depth information acquisition module is communicatively connected to a TOF depth information camera module, and the depth image information of the measured target face is obtained by photographing with the TOF depth information camera module.
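The time-of-flight principle behind step (a) relates the round trip of the emitted laser to distance: d = c·Δt/2 for a directly timed pulse, or, for a continuous-wave TOF module that measures the delay as a phase shift φ at modulation frequency f, d = c·φ/(4πf). A sketch (the 20 MHz modulation frequency is a typical illustrative value, not from the patent):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth_from_phase(phase_rad, mod_freq_hz=20e6):
    """Depth from the measured phase shift of a continuous-wave TOF signal."""
    return C * phase_rad / (4 * math.pi * mod_freq_hz)

def tof_depth_from_delay(delay_s):
    """Depth from a directly measured pulse round-trip time."""
    return C * delay_s / 2

# A pi/2 phase shift at 20 MHz corresponds to roughly 1.87 m.
print(round(tof_depth_from_phase(math.pi / 2), 3))
```

Note that the phase measurement wraps at 2π, so a 20 MHz module can only measure depth unambiguously within c/(2f), about 7.5 m, which is ample for face capture at arm's length.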
According to one embodiment of the present invention, the three-dimensional facial reconstruction method further comprises the step of:
(d) registering the three-dimensional face model, so that the three-dimensional face model has the feature of a consistent geometric topology structure.
Through an understanding of the subsequent description and the accompanying drawings, the further objects and advantages of the present invention will be fully demonstrated.
These and other objects, features and advantages of the present invention are fully demonstrated by the following detailed description, drawings and claims.
Description of the drawings
Fig. 1 is a block diagram of a three-dimensional facial reconstruction system according to a preferred embodiment of the present invention.
Fig. 2 is another block diagram of the three-dimensional facial reconstruction system according to the above preferred embodiment of the present invention.
Fig. 3 is a further block diagram of the three-dimensional facial reconstruction system according to the above preferred embodiment of the present invention.
Fig. 4 is a block diagram of a TOF camera module according to the above preferred embodiment of the present invention.
Fig. 5 is a schematic diagram of the three-dimensional facial reconstruction device according to the above preferred embodiment of the present invention.
Fig. 6 is a perspective view of the TOF camera module according to the above preferred embodiment of the present invention.
Fig. 7 is an exploded perspective view of the TOF camera module according to the above preferred embodiment of the present invention.
Fig. 8 is a schematic cross-sectional view of the TOF camera module according to the above preferred embodiment of the present invention.
Fig. 9 is an imaging schematic diagram of the TOF camera module according to the above preferred embodiment of the present invention.
Fig. 10 is a block diagram of a three-dimensional facial reconstruction method according to the above preferred embodiment of the present invention.
Fig. 11 is another block diagram of the three-dimensional facial reconstruction method according to the above preferred embodiment.
Fig. 12 is a further block diagram of the three-dimensional facial reconstruction method according to the above preferred embodiment.
Specific embodiment
The following description serves to disclose the present invention so that those skilled in the art can realize it. The preferred embodiments in the following description are intended only as illustrations, and other obvious variations will occur to those skilled in the art. The basic principles of the invention defined in the following description can be applied to other embodiments, variants, improvements, equivalents and other technical schemes that do not depart from the spirit and scope of the present invention.
Those skilled in the art will understand that, in the disclosure of the present invention, the orientations or positional relationships indicated by terms such as "longitudinal", "transverse", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner" and "outer" are based on the orientations or positional relationships shown in the drawings, and are intended only to facilitate and simplify the description of the present invention, not to indicate or imply that the referenced device or element must have a particular orientation or be constructed and operated in a particular orientation; the above terms are therefore not to be understood as limiting the invention.
It is to be understood that the term "a" is to be interpreted as "at least one" or "one or more"; that is, in one embodiment the quantity of an element can be one, while in another embodiment the quantity of that element can be plural, so the term "a" cannot be understood as a limitation on quantity.
As shown in Figs. 1 to 3, a three-dimensional facial reconstruction system according to a first preferred embodiment of the present invention is explained, wherein the three-dimensional facial reconstruction system builds the three-dimensional face model based on a depth information image of the measured face. Compared with the existing morphable-model technique based on two-dimensional images of the measured face, it has relatively higher precision; at the same time, the algorithm corresponding to the three-dimensional facial reconstruction system is essentially improved over the algorithm of the morphable-model technique, so the real-time performance of the modeling is correspondingly enhanced.
As shown in Fig. 1, the three-dimensional facial reconstruction system includes a depth information acquisition module 10 and a three-dimensional face processing module 12, wherein the depth information acquisition module 10 acquires the face image of the measured individual, and the three-dimensional face processing module 12 is communicatively connected with the depth information acquisition module 10 for receiving the face image information and rebuilding the three-dimensional face model based on it. It is worth noting that, in the present invention, the depth information acquisition module 10 can acquire depth information; thus, at the stage of acquiring the measured face image information with the depth information acquisition module 10, comparatively complete face information is collected and saved, such as the absolute dimensions of the measured face: the height of the nose protrusion, the depth of the eye sockets, and so on. Accordingly, at the subsequent stage in which the three-dimensional face processing module 12 performs three-dimensional face modeling, the relatively complete measured face image information can be used directly, which correspondingly reduces the difficulty of the algorithm and optimizes the real-time interactive performance of the modeling.
Correspondingly, the measured face image information with depth information collected by the depth information acquisition module 10 is transferred to the three-dimensional face processing module 12, so that the three-dimensional face processing module 12 can process the measured face image information according to a preset algorithm routine and carry out 3D modeling of the measured face. More specifically, the three-dimensional face processing module 12 includes a preprocessing module 121 and a reconstruction module 122; the preprocessing module 121 is communicatively connected with the depth information acquisition module 10, so that the preprocessing module 121 can receive the image information with measured-face depth information from the depth information acquisition module 10 and preprocess it, so as to extract initial measured-face depth point cloud data. Further, the preprocessing module 121 denoises the initial depth point cloud data to generate depth point cloud preprocessed data. Further, the reconstruction module 122 is communicatively connected with the preprocessing module 121, and receives and processes the depth point cloud preprocessed data according to a preset program, so as to construct the 3D model of the measured face.
It should be noted that, in the present invention, the depth information acquisition module 10 may be implemented as, but is not limited to, a depth information camera module based on structured light, a depth information acquisition module based on laser radar, a depth information camera module based on a binocular camera module, or a depth information camera module based on time of flight (TOF). Those skilled in the art will appreciate that, in general, the depth information contained in the image information of the measured face collected by the depth information acquisition module 10 is only raw data and cannot be used directly as depth information for subsequent processing and analysis. For example, when the depth information acquisition module 10 is a depth information camera module based on a binocular camera module, the collected image information of the measured face must be further processed according to a particular algorithm, such as a triangulation algorithm, before the face depth information of the measured individual can be obtained. Therefore, the preprocessing module 121 must be loaded with an algorithm routine matching the type of the depth information acquisition module 10, so as to analyze the measured-face image information with depth information and extract the initial depth point cloud data.
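For the binocular case mentioned above, the triangulation step reduces to the classic rectified-stereo relation Z = f·B/d. The sketch below illustrates that relation only; the function name and the assumption of an already-rectified stereo pair are the editor's, not the patent's:

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth of a point from a rectified stereo pair: Z = f * B / d,
    where f is the focal length in pixels, B the camera baseline in
    meters, and d the disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A point seen with 35 px disparity by a pair with 700 px focal length
# and a 6 cm baseline lies at roughly 1.2 m.
distance_m = stereo_depth(35, 700, 0.06)
```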
Preferably, in this embodiment of the invention, the depth information acquisition module 10 is implemented as a TOF depth information camera module 10', or the depth information acquisition module 10 is communicatively connected to at least one TOF depth information camera module 10', wherein the TOF depth information camera module 10' obtains the raw data of the depth information of the measured face using the time-of-flight principle of laser light. In this case, the preprocessing module 121 connected to the depth information acquisition module 10 is loaded with a corresponding time-of-flight algorithm routine, so that the raw data collected by the depth information acquisition module 10 can be processed to obtain the initial depth point cloud data of the measured face. It is worth mentioning that the measured-face image information collected by the TOF depth information camera module 10' contains considerable noise, for example background information captured when photographing the measured face, or flying-point data in the raw data whose values differ greatly (too high or too low) from those of adjacent laser points. Therefore, after the raw data collected by the TOF depth information camera module 10' has been analyzed by the preprocessing module 121 to obtain the initial depth point cloud data, it must further undergo noise reduction to filter out the noise in the initial depth point cloud data.
That is, in this preferred embodiment of the invention, the preprocessing module 121 includes a parsing module 1211 and a noise reduction module 1212. The parsing module 1211 is communicatively connected to the depth information acquisition module 10 and is loaded with a corresponding algorithm routine, such as a time-of-flight parsing routine, so that after the raw depth information data of the measured face collected by the TOF depth information camera module 10' is transferred to the parsing module 1211, the parsing module 1211 processes the raw data according to that algorithm routine to obtain the initial depth point cloud data of the measured face. Further, the noise reduction module 1212 of the preprocessing module 121 is communicably connected to the parsing module 1211 and performs noise reduction on the initial depth point cloud data. In particular, in this preferred embodiment of the invention, after receiving the initial depth point cloud data, the noise reduction module 1212 applies bilateral filtering to the acquired data, so as to effectively remove the flying points present in the data (flying-point errors, that is, points whose values differ greatly, too high or too low, from those of adjacent laser points) as well as the background point cloud data captured when photographing the measured face, while retaining the effective region of the measured face. It should be noted that, in other embodiments of the invention, after detecting the flying-point data in the initial depth point cloud data, the noise reduction module 1212 may replace the flying-point data by interpolation, so as to retain relatively more complete depth information of the measured face.
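The bilateral filtering mentioned here can be sketched as below. The window radius and the two Gaussian widths (spatial sigma_s, range sigma_r) are illustrative values chosen by the editor, and zero is assumed to mark invalid background pixels; the patent does not fix these details:

```python
import numpy as np

def bilateral_filter_depth(depth, radius=2, sigma_s=2.0, sigma_r=50.0):
    """Edge-preserving smoothing of an H x W depth map (e.g. in mm).
    Pixels equal to zero are treated as removed background and skipped."""
    h, w = depth.shape
    out = depth.astype(np.float64).copy()
    for y in range(h):
        for x in range(w):
            if depth[y, x] == 0:          # background already removed
                continue
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and depth[ny, nx] > 0:
                        ds2 = dy * dy + dx * dx            # spatial distance
                        dr = float(depth[ny, nx]) - float(depth[y, x])
                        wgt = np.exp(-ds2 / (2 * sigma_s ** 2)
                                     - dr * dr / (2 * sigma_r ** 2))
                        num += wgt * depth[ny, nx]
                        den += wgt
            out[y, x] = num / den
    return out
```

Because the range kernel down-weights neighbours whose depth differs strongly, depth edges (e.g. the silhouette of the nose) are preserved while sensor noise within a surface is averaged out.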
Those skilled in the art will readily appreciate that, in this preferred implementation of the invention, the noise reduction module 1212 may process the initial depth point cloud data in stages. For example, in a first stage, the noise reduction module 1212 filters the acquired data to remove background point cloud data and retain the effective region of the measured face. In a second stage, the noise reduction module 1212 analyzes the initial depth point cloud data of the effective region of the measured face to identify the flying points present in the data, that is, data points whose values differ greatly (too high or too low) from those of adjacent laser points. Further, in a third stage, the flying-point data is replaced by a suitable algorithm; for example, an interpolation algorithm assigns each flying point a depth value derived from the depth information of its adjacent data points. In this way, the depth point cloud data of the measured face is optimized, which benefits the subsequent construction of the 3D face model.
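The three stages described above can be sketched as one routine. The thresholds and the 3x3-median test are illustrative choices by the editor, not values taken from the patent:

```python
import numpy as np

def denoise_staged(depth, bg_thresh=1500.0, fly_thresh=80.0):
    """Three-stage clean-up of a raw TOF depth map (mm).
    Stage 1: drop background farther than bg_thresh.
    Stage 2: flag flying points deviating more than fly_thresh from
             the median of their 3x3 neighbourhood.
    Stage 3: replace each flagged point with that neighbourhood median."""
    d = depth.astype(np.float64).copy()
    d[d > bg_thresh] = 0.0                       # stage 1: background -> invalid
    h, w = d.shape
    out = d.copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if d[y, x] == 0:
                continue
            win = d[y - 1:y + 2, x - 1:x + 2]
            valid = win[win > 0]
            med = np.median(valid)
            if abs(d[y, x] - med) > fly_thresh:  # stage 2: flying point
                out[y, x] = med                   # stage 3: interpolate
    return out
```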
It will be appreciated that this staged noise reduction not only simplifies the design of the algorithm routines of the noise reduction module 1212; at each stage the noise reduction module 1212 also handles only a relatively single task of correspondingly reduced difficulty, so that the processing speed of the noise reduction module 1212 is optimized and relatively fast.
Further, after the raw data collected by the depth information acquisition module 10 has undergone the parsing and noise reduction of the preprocessing module 121, it is converted into depth point cloud preprocessed data, which contains the depth point cloud information of the effective region of the measured face. In the present invention, the noise reduction module 1212 may be communicatively connected to the reconstruction module 122; the depth point cloud preprocessed data is then transmitted to the reconstruction module 122 of the three-dimensional face processing module 12, and three-dimensional face 3D model reconstruction is carried out according to the algorithm routines loaded in the reconstruction module 122.
More specifically, the reconstruction module 122 establishes a grayscale 3D model of the measured face from the depth point cloud preprocessed data. Those skilled in the art will appreciate that the depth information of the measured face collected by a TOF depth camera module carries only the grayscale and luminance information of the measured face, not its RGB color information. Correspondingly, the measured-face model established from the depth point cloud information of the measured face is a grayscale model, that is, one without the RGB color information of the measured face. Therefore, during 3D modeling of the measured face, texture mapping must further be applied to the bare 3D model of the measured face; that is, the two-dimensional RGB information of the measured face is synthesized with the grayscale 3D model, so as to complete the construction of the 3D model of the measured face.
In the present invention, before the texture mapping operation is performed, the reconstruction module 122 applies regionalization to the grayscale 3D model: in this stage the grayscale 3D model is divided into a series of regions according to a preset rule. Further, the regionalized grayscale 3D model undergoes parameterization: in this stage each region of the grayscale 3D model is individually assigned a corresponding parameter, so that when the two-dimensional RGB image of the measured face is subsequently synthesized onto the grayscale 3D model, the two-dimensional RGB image can be matched and positioned region by region according to the corresponding parameters, thereby improving the precision and flexibility of the three-dimensional face model reconstruction. In other words, before the texture mapping operation is performed, the grayscale 3D model must undergo two further processing steps, regionalization and parameterization: in the regionalization step the grayscale 3D model is divided into a series of regions, for example into a series of triangular mesh regions by a conventional point cloud triangulation algorithm, and in the subsequent parameterization step corresponding parameters are set and mapped to the respective triangular mesh regions, so as to provide a control reference frame for the texture mapping operation.
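A minimal stand-in for this triangulation step is to mesh the dense depth map directly on its pixel grid, two triangles per 2x2 cell of valid pixels; the face index array then serves as the per-region parameter the text describes. This is a sketch under that simplifying assumption, not the patent's exact point cloud triangulation algorithm:

```python
import numpy as np

def grid_mesh(depth):
    """Triangulate an H x W depth map into a mesh: each 2x2 cell of
    valid (non-zero) pixels is split into two triangles. Returns an
    (H*W, 3) vertex array [x, y, depth] and an (M, 3) face index array;
    the row index of a face can act as its region parameter."""
    h, w = depth.shape
    idx = lambda y, x: y * w + x
    verts = np.array([[x, y, depth[y, x]] for y in range(h) for x in range(w)],
                     dtype=np.float64)
    faces = []
    for y in range(h - 1):
        for x in range(w - 1):
            if np.all(depth[y:y + 2, x:x + 2] > 0):   # mesh only valid cells
                faces.append((idx(y, x), idx(y, x + 1), idx(y + 1, x)))
                faces.append((idx(y + 1, x), idx(y, x + 1), idx(y + 1, x + 1)))
    return verts, np.array(faces, dtype=np.int64)
```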
It should be noted that, in other embodiments of the invention, the grayscale 3D model may be divided into regions in different ways by other algorithms; for example, a gridding algorithm may divide the grayscale 3D model into a series of rectangular mesh regions, with corresponding parameters then set for the rectangular mesh regions. Similarly, in further embodiments of the invention, the grayscale 3D model may also be divided by feature-point region division, that is, the grayscale 3D model is divided according to preset facial feature points, with corresponding parameters then set for the corresponding regions. In other words, in the present invention the grayscale 3D model must undergo regionalization and parameterization before the texture mapping operation is performed, but the present invention does not limit the manner of regionalization. It will likewise be appreciated that different parameters may be set to fine-tune the subsequent texture mapping operation, thereby improving the texture mapping precision and the precision of the 3D face model.
It should also be appreciated that decorative patterns can be positioned and mapped onto the parameterized bare face 3D model, expanding the entertainment value and functionality of three-dimensional face model reconstruction. For example, according to the relevant parameters of the grayscale 3D model, retouching may be applied to a specific part of the face, or a decorative pattern may be added to a specific part of the face according to the relevant parameters of the grayscale 3D model, such as a beard added to the cheeks, to increase the fun of interaction.
Correspondingly, in this preferred embodiment of the invention, the reconstruction module 122 includes a regionalization module 1221, a parameterization module 1222 and a texture mapping module 1223. The regionalization module 1221 is communicatively coupled to the preprocessing module 121, so that after receiving the depth point cloud preprocessed data it applies regionalization to the grayscale face 3D model built from the depth point cloud preprocessed data. In particular, in the present invention, the regionalization module 1221 is loaded with a triangulation algorithm routine and applies triangulation to the grayscale 3D model, that is, it divides the grayscale 3D model into a series of triangular mesh regions. Of course, in additional embodiments of the invention, the regionalization module 1221 may also be loaded with other algorithm routines to divide the grayscale 3D model in different ways; for example, a gridding algorithm routine may be loaded into the regionalization module 1221 to divide the grayscale 3D model into a series of rectangular mesh regions, or the grayscale 3D model may be divided according to a feature-point region division algorithm routine, that is, according to preset facial feature points.
Further, the parameterization module 1222 is communicatively coupled to the regionalization module 1221 and parameterizes the regionalized grayscale 3D model, so as to provide a control reference frame for the subsequent texture mapping operation. The routine set in the parameterization module 1222 corresponds to the routine of the regionalization module 1221, so that the regionalized grayscale 3D model can be parameterized. In particular, in this preferred implementation of the invention, the parameterization module 1222 parameterizes the triangulated grayscale 3D model, setting a corresponding parameter for each triangular mesh region, so that in the subsequent texture mapping operation the textures can be positioned according to the set parameters, improving the precision and processing speed of the modeling. It is worth mentioning that the parameterization module 1222 may be given different parameterization algorithm routines to adjust the details of the subsequent texture mapping, thereby expanding the entertainment value and functionality of three-dimensional face model reconstruction.
In addition, the texture mapping module 1223 is communicatively coupled to the regionalization module 1221 and synthesizes a two-dimensional RGB face image with the parameterized grayscale 3D model to complete the construction of the face model. In particular, in this preferred embodiment of the invention, according to the parameter information of the grayscale 3D model, the regions of the two-dimensional RGB face image are mapped correspondingly onto the grayscale 3D model, so that the grayscale 3D model acquires RGB color information. It should be noted that, in the present invention, the two-dimensional RGB face image may be pre-stored in a background database, so that it can be retrieved and mapped onto the corresponding grayscale 3D model when texture mapping is required.
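At its simplest, the texture lookup can be sketched as a per-vertex colour assignment, under the assumption (made explicit for this sketch, and supported by the adjacent camera arrangement described elsewhere in this document) that the depth map and the two-dimensional RGB image are pixel-aligned; the function name is the editor's:

```python
import numpy as np

def texture_vertices(verts, rgb_image):
    """Assign each mesh vertex [x, y, z] an RGB colour by sampling the
    aligned 2D RGB image at the vertex's (x, y) pixel position."""
    h, w, _ = rgb_image.shape
    colors = np.zeros((len(verts), 3), dtype=np.uint8)
    for i, (x, y, _z) in enumerate(verts):
        px = min(max(int(round(x)), 0), w - 1)   # clamp to image bounds
        py = min(max(int(round(y)), 0), h - 1)
        colors[i] = rgb_image[py, px]
    return colors
```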
Of course, those skilled in the art will readily appreciate that the two-dimensional RGB face image may also be collected in real time; in that case, the three-dimensional facial reconstruction system provided by the present invention further includes an RGB camera module 11, which collects the two-dimensional RGB image of the measured target face. Preferably, in the present invention, the RGB camera module 11 is arranged adjacent to the TOF depth information camera module 10', so that the RGB camera module 11 has an acquisition position almost identical to that of the TOF depth information camera module 10'. The two-dimensional RGB image collected by the RGB camera module 11 thus corresponds to the depth information image collected by the TOF depth information camera module 10', which helps improve the matching degree and precision of the subsequent texture mapping.
More preferably, the RGB camera module 11 is set to operate synchronously with the TOF depth information camera module 10', so that while the TOF depth information camera module 10' collects the depth information of the measured face, the RGB camera module 11 synchronously collects the two-dimensional RGB information of the measured face. It should be noted that, because the RGB camera module 11 and the TOF depth information camera module 10' operate in a synchronized manner, involuntary shaking of the face and other errors caused by the instability of the measured face itself can be effectively eliminated when collecting the measured-face image information, which helps improve the matching degree of the subsequent texture mapping and the precision of the modeling.
Further, those skilled in the art will appreciate that, although a three-dimensional face model constructed by the above method has good real-time performance and precision, the above method cannot guarantee that every generated three-dimensional face model has a consistent geometric topology, so further development of face 3D applications, such as 3D face animation, is not possible on that basis alone. More specifically, it cannot be ensured that every three-dimensional face model generated by the above method has identical parameters, such as vertex count, face count and vertex index order, so that when the same face is reconstructed by the three-dimensional facial reconstruction system several times, varying degrees of deformation and distortion appear, which hampers the development of subsequent face 3D applications.
To solve the above problem, the three-dimensional face processing module 12 of the three-dimensional facial reconstruction system further includes a registration module 123, which calibrates and matches the parameters of the three-dimensional face model, so that the registered three-dimensional face models have consistent geometric topology features. More specifically, the registration module 123 may be communicatively coupled to the reconstruction module 122 to adjust the parameters of the three-dimensional face model, for example the vertex count, face count and vertex index order, so as to register it to a standard three-dimensional face model. The standard three-dimensional face model then has a consistent geometric topology, providing technical support for subsequent application development.
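The invariants the registration module 123 is meant to enforce, identical vertex count, face count, and vertex index order across reconstructions, can be expressed as a simple consistency check. This is a sketch of the acceptance criterion only; the patent does not specify the registration algorithm itself:

```python
def topology_consistent(mesh_a, mesh_b):
    """Check the invariants registration is meant to guarantee across
    reconstructions of the same face: identical vertex count, face count,
    and vertex index order. Each mesh is a (vertices, faces) pair, where
    faces is a sequence of index triples."""
    va, fa = mesh_a
    vb, fb = mesh_b
    return (len(va) == len(vb)
            and len(fa) == len(fb)
            and all(tuple(p) == tuple(q) for p, q in zip(fa, fb)))
```

Two models that pass this check share a geometric topology, so per-vertex data (blend shapes, animation weights) computed on one transfers directly to the other.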
It should be noted that, in order to reduce the difficulty of three-dimensional face modeling, the three-dimensional face models reconstructed by existing three-dimensional facial reconstruction technology are usually models without a hairstyle; that is, during reconstruction of the three-dimensional face model, the hair of the measured face is either occluded or artificially removed in later processing. In other words, existing three-dimensional face models are all bald models, so the reconstructed face model has poor visual appeal and little entertainment value. Therefore, in this preferred embodiment of the invention, the three-dimensional facial reconstruction system further includes an optimization module 124 to optimize the visual effect of the reconstructed three-dimensional face model so that it appears more realistic as a whole. More specifically, the optimization module 124 includes a set of decoration modules, such as a hairstyle module, a glasses module, a hair-clip module and the like, wherein the decoration modules can be positioned relatively accurately on the basis of the parameterized three-dimensional face model, so as to make the reconstructed three-dimensional face model more realistic, entertaining and appealing.
In the present invention, the three-dimensional facial reconstruction system constructs the three-dimensional face model on the basis of the depth information image of the measured face. Compared with existing three-dimensional face model reconstruction systems that work from two-dimensional measured-face image information, on the one hand the measured-face information can be used relatively more completely; on the other hand, since the collected measured-face image itself carries depth information, the algorithms required by the three-dimensional reconstruction system are relatively simpler, so the three-dimensional facial reconstruction system achieves better real-time performance. Those skilled in the art will readily understand that, in the present invention, the performance and cost of the depth information acquisition module 10 of the three-dimensional facial reconstruction system largely determine the value of the three-dimensional facial reconstruction system. In particular, in the present invention, the depth information acquisition module is implemented as the TOF depth information camera module 10', which, compared with a high-precision three-dimensional laser scanner, both guarantees relatively high precision of the depth information of the measured face and costs considerably less.
More specifically, as shown in Fig. 4 to Fig. 9, the TOF depth information camera module 10' of the present invention is now explained. The TOF depth information camera module 10' of the present invention includes at least one light source module 20 for providing laser light of a preset wavelength and at least one photosensitive control module 30, wherein the photosensitive control module 30 includes at least one TOF light intensity sensor 31 and a controller 32, wherein the controller 32 includes at least one data processing module 321, wherein the TOF light intensity sensor 31 is electrically connectable to the data processing module 321, wherein the light source module 20 can generate laser light of a preset wavelength toward the measured target, wherein the TOF light intensity sensor 31 is arranged to receive the laser light reflected by the measured target and generate an induction signal, wherein the data processing module 321 is arranged to receive the induction signal from the TOF light intensity sensor 31, and wherein the data processing module 321 is arranged to process the induction signal and generate raw image data. It will be understood that the TOF light intensity sensor 31 is arranged to receive and/or sense the laser light reflected by the measured face and to generate corresponding raw image data.
It is worth noting that the light source module 20 and the photosensitive control module 30 here form a depth detection system for detecting the surface depth of the measured face, thereby obtaining the raw depth information image data required for the depth imaging data of the measured face. It will be understood that the laser light emitted by the light source module 20 of the TOF depth information camera module 10' of the present invention is, after reflection by the measured target, further sensed and detected by the TOF light intensity sensor 31. Therefore, every laser point datum detected by the TOF light intensity sensor 31 carries depth (value) information. Those skilled in the art will appreciate that the laser light emitted by the light source module 20 of the TOF depth information camera module 10' of the present invention may be infrared light. Preferably, the laser light emitted by the light source module 20 is laser light of a preset wavelength.
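In the pulsed case, the time-of-flight principle invoked here reduces to halving the measured round-trip time of the laser pulse. This is a generic illustration of the principle, not the module's specific implementation:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(round_trip_s):
    """Pulsed time of flight: the laser travels to the target and back,
    so the target distance is half the round-trip time times the speed
    of light: d = c * t / 2."""
    return C * round_trip_s / 2.0

# A 10 ns round trip corresponds to roughly 1.5 m of target distance.
d = tof_distance(10e-9)
```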
As shown in Fig. 4 to Fig. 9 of the drawings, the controller 32 of the TOF depth information camera module 10' according to the preferred embodiment of the present invention includes a control module 322, wherein the control module 322 is arranged to control the operation of the TOF light intensity sensor 31 according to a control instruction, such as a control instruction from a host computer or processor. The control module 322 may also control the operation of the TOF light intensity sensor 31 according to a preset program. Further, the control module 322 is arranged to control the operation of the other functional modules of the controller 32, for example controlling the data processing module 321 of the controller 32 to process the induction signal sensed by the TOF light intensity sensor 31 so as to generate the corresponding raw image data. Further, the raw image data can be transferred to a host computer or processor, wherein the host computer can convert the raw image data by a depth information extraction method to obtain the depth information of the measured target. That is, the host computer is communicatively coupled to receive the raw image data of the measured target stored in the data processing module 321, so as to obtain the depth information of the measured target by further analysis and calculation. In this preferred implementation of the invention, the data processing module 321 of the TOF depth information camera module 10' is communicatively coupled to the parsing module 1211 of the preprocessing module 121 of the three-dimensional facial reconstruction system, so that the parsing module 1211 receives the raw image data from the data processing module 321 and obtains the initial depth point cloud data of the measured face by further analysis and calculation.
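One common way such raw image data is converted to depth is four-phase continuous-wave demodulation. The sketch below illustrates that scheme under the assumption of four intensity samples per pixel taken at 0°, 90°, 180° and 270° of the modulation period; the patent does not spell out its extraction algorithm, so this is an editor's example of the general technique:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def cw_tof_depth(a0, a1, a2, a3, mod_freq_hz):
    """Continuous-wave TOF demodulation: recover the phase shift of the
    reflected modulated light from four intensity samples and convert it
    to distance via d = c * phi / (4 * pi * f)."""
    phi = math.atan2(a3 - a1, a0 - a2)
    if phi < 0:
        phi += 2 * math.pi   # wrap into [0, 2*pi)
    return C * phi / (4 * math.pi * mod_freq_hz)
```

Note that the phase wraps every 2π, so a modulation frequency f bounds the unambiguous range at c / (2f); about 7.5 m at 20 MHz.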
Preferably, the control module 322 of the controller 32 is arranged to correct the raw image data generated by the TOF light intensity sensor 31 according to TOF calibration parameters. For example, in order to reduce the deviation and distortion of the image formed by the TOF depth information camera module 10' of the present invention, the laser points in the TOF detection data whose values differ greatly (too high or too low) from those of adjacent laser points must be removed; these light points can be regarded as the flying points of TOF imaging. That is, in the present invention, part of the work of the noise reduction module 1212 of the preprocessing module 121 can be handled by the TOF depth information camera module 10' itself, so as to reduce the processing load of the back-end system of the three-dimensional facial reconstruction system and improve the real-time performance of the modeling.
The controller 32 of the photosensitive control module 30 according to the preferred embodiment of the present invention further includes a data interface 323, so that the raw image data in the controller 32 can be transmitted to a host computer or processor; for example, the raw image data is transferred to a host computer or an embedded chip processor through a MIPI data interface 323. In particular, in this preferred embodiment of the invention, the data interface 323 of the controller 32 can be communicatively connected to the parsing module 1211 of the preprocessing module 121 to realize the transmission of the raw image data.
As shown in Fig. 4 to Fig. 9 of the drawings, the light source module 20 of the TOF depth information camera module 10' according to the preferred embodiment of the present invention includes a power supply 21 and a laser emitter 22 for emitting laser light, wherein the laser emitter 22, once supplied with electric energy, is excited to emit laser light. Preferably, in the preferred embodiment of the present invention, the light source module 20 is implemented as a vertical-cavity surface-emitting laser (VCSEL) device, comprising the power supply 21 and a laser emitter 22 of a vertical-cavity surface-emitting laser.
Those skilled in the art will appreciate that the vertical-cavity surface-emitting laser device can work normally only within a specific temperature range; that is, the heat dissipation of the TOF depth information camera module 10' must be taken into account so that the vertical-cavity surface-emitting laser device maintains a stable working performance. Correspondingly, in this preferred implementation of the invention, the power supply 21 and the laser emitter 22 are arranged at an interval from each other, which on the one hand increases the total heat dissipation area of the light source module 20 and on the other hand prevents the heat generated by the power supply 21 and the laser emitter 22 from affecting each other, which benefits the heat dissipation of the light source module 20.
As shown in Fig. 4 to Fig. 9 of the drawings, the TOF depth information camera module 10' according to the preferred embodiment of the present invention further includes a circuit board 40, wherein, preferably, the light source module 20 and the photosensitive control module 30 are both arranged on the circuit board 40. That is, in this preferred embodiment of the invention, the light source module 20 and the photosensitive control module 30 are integrally arranged on the circuit board 40, which on the one hand gives the TOF depth information camera module 10' a compact structure and on the other hand helps improve the depth measurement precision of the TOF depth information camera module 10'. More specifically, the laser emitter 22 of the light source module 20 is arranged adjacent to the circuit board 40, so that the emission light path formed between the laser emitter 22 and the measured target is as parallel and as close as possible to the receiving light path formed between the measured target and the TOF light intensity sensor 31, so as to reduce errors caused by the difference in path length between the emission light path and the receiving light path and improve the measurement precision of the TOF depth information camera module 10'.
The circuit board 40 includes, but is not limited to, a rigid circuit board, a flexible circuit board, a rigid-flex board, and a ceramic or PCB board. In this preferred embodiment of the invention, the circuit board 40 is a PCB board and has a light source module assembly area 41 and a photosensitive control module assembly area 42, wherein the light source module assembly area 41 and the photosensitive control module assembly area 42 are connected by a flexible connection board 43, so that the light source module 20 and the photosensitive control module 30 can move relatively freely with respect to each other, optimizing the overall structure of the TOF depth information camera module 10'. In particular, in the present invention, the TOF depth information camera module 10' adopts a stacked design, that is, the light source module 20 and the photosensitive control module 30 are located in spaces of different heights, so that the size of the TOF depth information camera module 10' is reduced while the positioning tolerances between the components are also relatively reduced.
It should be noted that, to facilitate the heat dissipation of the light source module 20 and even of the entire TOF depth information camera module 10', part of the back region of the circuit board 40 of the TOF depth information camera module 10' of the present invention (the side opposite the side on which the light source module 20 is located) is arranged exposed to the air in order to dissipate heat. It should also be noted that, in another embodiment of the invention, a metal conductive layer arranged on the back of the circuit board 40 is partly exposed, the exposed region corresponding to the light source module 20, so as to further strengthen the heat dissipation effect of the circuit board 40. In other embodiments of the invention, the circuit board 40 further includes a heat-conducting plate 44, wherein the heat-conducting plate 44 is arranged to overlap the back of the circuit board 40 (the side opposite the side on which the light source module 20 is located) and can be thermally conductively connected to the light source module 20 and the photosensitive control module 30, so as to strengthen the heat dissipation performance of the TOF depth information camera module 10' through the heat-conducting plate 44. In addition, in another embodiment of the invention, the light source module 20 further comprises at least one heat-conducting piece 23, wherein the heat-conducting piece 23 is arranged at the laser emitter 22 and passes through the circuit board 40 via a through-hole to extend to the back of the circuit board 40.
Those skilled in the art will appreciate that the TOF depth information camera module 10' of the present invention uses laser light as its measuring light; therefore, the module circuit design must satisfy laser eye-safety requirements and pass international certification standards. In order to ensure that, during the manufacture and use of the TOF camera module, the laser will not injure human eyes, the TOF depth information camera module 10' of the present invention further provides a safety protection structure to protect human eyes. More specifically, the light source module 20 of the TOF depth information camera module 10' of the present invention further comprises a protective cover 24, wherein the protective cover 24 is arranged on the outside of the laser emitter 22 and serves as part of a switching circuit. In other words, when the protective cover 24 falls off the outside of the laser emitter 22, the circuit for supplying power to the laser emitter 22 of the light source module 20 is disconnected, so that the light excitation or emission of the laser emitter 22 of the light source module 20 is terminated. In addition, the metal protective cover 24, arranged on the outside of the laser emitter 22 as the outer housing of the laser emitter 22, further provides a certain protective effect for the laser emitter 22.
As shown in Fig. 4 to Fig. 9 of the accompanying drawings, according to the preferred embodiment of the present invention, the light source module 20 of the TOF depth information camera module 10' further comprises a diffractive optical element (DOE) 25, wherein the diffractive optical element 25 changes the phase and spatial intensity of the light wave produced by the laser emitter 22, so as to obtain an ideal light energy density. Those skilled in the art will appreciate that the modulated emitted laser not only has a higher resistance to environmental interference, which is conducive to improving the measurement accuracy of the TOF depth information camera module 10', but the modulated emitted light wave also will not damage human eyes.
In particular, in the preferred embodiment of the present invention, the diffractive optical element 25 is arranged between the protective cover 24 and the laser emitter 22. Therefore, on the one hand, the protective cover 24 can prevent the diffractive optical element 25 from falling off and the laser beam emitted by the laser emitter 22 from hurting human eyes; on the other hand, when the protective cover 24 itself falls off, the circuit providing electric energy to the laser emitter 22 is disconnected, so as to terminate the light emission of the laser emitter 22. In particular, the protective cover 24 is mounted on the wiring board 40, and an isolation cavity 241 is formed between the wiring board 40 and the protective cover 24, wherein the laser emitter 22 and the diffractive optical element 25 are accommodated in the isolation cavity 241, and the exit direction of the laser is controlled by an optical window 242 provided at the top of the protective cover 24. The isolation cavity 241 cooperates with the optical window 242: on the one hand, the laser emitter 22 is isolated, preventing radiation pollution; on the other hand, the laser produced by the laser emitter 22 can only exit to the outside through the optical window 242, so that the exit direction of the laser is restricted.
The TOF depth information camera module 10' according to the preferred embodiment of the present invention further comprises a temperature sensor 50, wherein the temperature sensor 50 can sense the temperature of the laser emitter 22 of the light source module 20. When the operating temperature of the laser emitter 22 exceeds a preset temperature, the control module 322 of the controller 32 of the photosensitive control module 30 can reduce or even shut off the power supply to the laser emitter 22 of the light source module 20, so as to ensure that the laser emitter 22 of the light source module 20 works within a safe range and will not be damaged.
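The temperature-based protection above can be sketched as a simple derating policy. A minimal Python sketch, where the 60 °C preset limit, the 10 °C linear derating band, and the linear curve itself are illustrative assumptions, not values taken from the patent:

```python
def regulate_laser_power(temp_c, preset_limit_c=60.0, shutdown_margin_c=10.0):
    """Return the supply power fraction for the laser emitter given its temperature.

    Below the preset limit: full power. Above it: power is reduced linearly,
    and beyond the shutdown margin the supply is cut entirely. The limit and
    the linear derating curve are illustrative, not specified in the patent.
    """
    if temp_c <= preset_limit_c:
        return 1.0
    excess = temp_c - preset_limit_c
    if excess >= shutdown_margin_c:
        return 0.0  # shut off the supply to keep the emitter in a safe range
    return 1.0 - excess / shutdown_margin_c
```

In a real module this decision would run in the control module 322 and drive the supply through the driving circuit described below.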
Further, the light source module 20 of the TOF depth information camera module 10' according to the preferred embodiment of the present invention further includes a driving circuit 26, wherein the driving circuit 26 is arranged between the power supply and the laser emitter 22 to control the power supplied by the power supply to the laser emitter 22. Preferably, the driving circuit 26 and the control module 322 of the controller 32 are electrically connected, so that the circuit can control the power supplied by the power supply to the laser emitter 22 according to the control instruction of the control module 322.
As shown in Fig. 4 to Fig. 9 of the accompanying drawings, according to the preferred embodiment of the present invention, the photosensitive control module 30 of the TOF depth information camera module 10' further comprises a camera lens 33, wherein the camera lens 33 includes at least one lens 331, wherein the camera lens 33 is arranged on the outside of the TOF light intensity sensor 31 of the photosensitive control module 30 and corresponds to the photosensitive path of the TOF light intensity sensor 31, so as to collect, through the camera lens 33, the laser reflected by the surface of the measured target.
As shown in Fig. 4 to Fig. 9 of the accompanying drawings, according to the preferred embodiment of the present invention, the photosensitive control module 30 of the TOF depth information camera module 10' further comprises a retainer 34, wherein the retainer 34 is arranged to keep the camera lens 33 in an appropriate position. Preferably, the camera lens 33 is arranged in a position fixing hole 340 formed by the retainer 34, to ensure that the camera lens 33 is in a predetermined position.
As shown in Fig. 1 to Fig. 9 of the accompanying drawings, the photosensitive control module 30 of the TOF depth information camera module 10' according to the preferred embodiment of the present invention further includes a filter element 35, wherein the filter element 35 is arranged between the TOF light intensity sensor 31 and the camera lens 33, so as to filter stray light through the filter element 35 and improve the measurement accuracy of the TOF depth information camera module 10'. Preferably, the filter element 35 is configured to pass only the laser light produced by the laser emitter 22, which finally irradiates the TOF light intensity sensor 31 and undergoes a photoelectric reaction, converting the optical signal carrying the depth information of the measured target into an electric signal. It is noted that, in one embodiment of the present invention, the filter element 35 is arranged at the retainer 34, between the camera lens 33 and the TOF light intensity sensor 31. Optionally, in another embodiment of the present invention, the photosensitive control module 30 further includes a filter element bracket, wherein the filter element 35 is assembled in the filter element bracket and the filter element bracket is assembled in the retainer 34, so as to change the supporting manner of the filter element 35 through the filter element bracket.
As shown in Fig. 4 to Fig. 9 of the accompanying drawings, the TOF depth information camera module 10' according to the preferred embodiment of the present invention further includes a bracket 60, wherein the wiring board 40 is arranged at the bracket 60, so that the position of the wiring board 40 is fixed. Further, the position of each electronic component arranged at the wiring board 40 is also fixed, so as to realize the preset layout of the TOF depth information camera module 10'.
In summary, the overall structure of the TOF camera module provided by the present invention is optimized, so that the TOF depth information camera module 10' can acquire the tested face information relatively completely at close range, while ensuring that the depth information of the tested face has relatively high precision at relatively low cost. In addition, it is noted that the depth information camera module provided by the present invention also has at least the following advantages.
First, the overall structure of the TOF depth information camera module 10' is optimized, giving it a smaller volume. Further, since the overall dimensions of the TOF depth information camera module 10' are effectively reduced, the TOF depth information camera module 10' is easier to integrate into other equipment, so that the TOF depth information camera module 10' has a wider range of applications.
Second, the light source module 20 and the photosensitive control module 30 of the TOF depth information camera module 10' are jointly arranged on the same wiring board 40 in a stacked manner, so that the overall layout of the TOF depth information camera module 10' is optimized.
Third, the TOF depth information camera module 10' adopts a compact stacked structural design, so that the light source module 20 and the photosensitive control module 30 are arranged adjacently, reducing the error produced by the difference between the emitting light path and the receiving light path, so as to obtain depth measurement raw data of higher precision.
Fourth, the light source module 20 of the TOF depth information camera module 10' includes a laser emitter 22 and a power supply 21, the power supply being positioned apart from the laser emitter 22 in order to facilitate heat dissipation of the laser emitter 22. Further, the back side of the wiring board 40 of the TOF camera module (the side opposite the face on which the light source module 20 is located) is exposed to the air, in order to dissipate heat. In addition, the light source module 20 further includes at least one heat-conducting piece 23, wherein the heat-conducting piece 23 is arranged at the laser emitter 22, passes through the wiring board 40 via a through-hole and extends to the back side of the wiring board 40, so as to be further conducive to the heat dissipation of the laser emitter 22 and to keep the operating temperature and performance of the TOF imaging system stable. That is, the TOF depth information camera module 10' has good heat dissipation performance, ensuring that the TOF imaging system can operate normally and stably.
Fifth, the light source module 20 of the TOF depth information camera module 10' has a safety protection structure, to prevent the laser emitted by the laser emitter 22 from damaging human eyes. More specifically, the light source module 20 includes a protective cover 24 and a diffractive optical element 25, the diffractive optical element 25 being arranged between the laser emitter 22 and the protective cover 24, wherein the protective cover 24 serves as a part of a conducting circuit. Thus, on the one hand, the protective cover 24 can prevent the diffractive optical element 25 from falling off and the laser beam emitted by the laser emitter 22 from hurting human eyes; on the other hand, when the protective cover 24 itself falls off, the circuit providing electric energy to the laser emitter 22 is disconnected, so as to terminate the light emission of the laser emitter 22. In addition, the TOF camera module further includes a temperature sensor 50, wherein the temperature sensor 50 can sense the temperature of the laser emitter 22 of the light source module 20 and use the temperature signal as a control signal, thereby controlling the electric power supplied to the light source module 20, so as to ensure that the emitted laser is within the eye-safe range. That is, the TOF depth information camera module 10' has good safety protection performance, ensuring that the TOF depth information camera module 10' can work stably.
Sixth, the TOF depth information camera module 10' can intelligently adjust the working mode of the laser emitter 22, so that the laser generated by the laser emitter 22 always remains within a safe range.
It is noted that, in some embodiments of the invention, the three-dimensional facial reconstruction system further includes an RGB camera module 11, the RGB camera module 11 being used to acquire a two-dimensional RGB image of the measured target face. Preferably, the RGB camera module 11 is arranged adjacent to the TOF depth information camera module 10', so that the RGB camera module 11 has an acquisition position almost consistent with that of the TOF depth information camera module 10'; in this way, the two-dimensional RGB image collected by the RGB camera module 11 corresponds to the depth information image collected by the TOF depth information camera module 10', which is conducive to improving the matching degree and precision of the subsequent texture mapping. In particular, in the preferred implementation of the invention, the RGB camera module 11 can be arranged adjacent to the photosensitive control module 30 or the light source module 20 of the TOF depth information camera module 10', so that the RGB camera module 11 has a camera position almost consistent with that of the TOF depth information camera module 10'. It should be understood that, in this case, the RGB camera module 11 can share the wiring board 40 with the TOF depth information camera module 10', so that the TOF depth information camera module 10' and the RGB camera module 11 have an integrated compact structure, which is conducive to layout and overall design.
It is noted that, preferably, the RGB camera module 11 is configured to operate synchronously with the TOF depth information camera module 10', so that when the TOF depth information camera module 10' acquires the depth information of the tested face, the RGB camera module 11 can synchronously acquire the two-dimensional RGB information of the tested face. By configuring the RGB camera module 11 and the TOF depth information camera module 10' to operate in a synchronous manner, errors caused by involuntary shaking of the face or by other instability of the tested face itself can be effectively eliminated when acquiring the tested face image information, which is conducive to improving the matching degree of the subsequent texture mapping and the precision of the modeling.
According to another aspect of the present invention, a three-dimensional facial reconstruction method is elucidated, wherein the three-dimensional facial reconstruction method comprises the steps of:
acquiring, by a TOF depth information camera module 10', tested face image information having depth information;
preprocessing the tested face image information to extract depth point cloud preprocessed data of a tested face, wherein the depth point cloud preprocessed data includes the depth point cloud information of the effective region of the tested face; and
constructing a three-dimensional face model based on the depth point cloud preprocessed data.
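The three steps above can be sketched as a minimal pipeline, with placeholder callables standing in for the parsing, noise reduction and model-construction stages; the callables and their signatures are illustrative, not an interface defined by the patent:

```python
def reconstruct_face(raw_frame, parse, denoise, build_model):
    """Minimal sketch of the claimed three-step flow: parse the raw TOF frame
    into a depth point cloud, denoise it into preprocessed data, then build
    the 3D face model from the preprocessed data."""
    point_cloud = parse(raw_frame)          # step 1: raw data -> depth point cloud
    preprocessed = denoise(point_cloud)     # step 2: noise reduction / cropping
    return build_model(preprocessed)        # step 3: 3D face model construction
```

Any concrete parser, denoiser, or reconstructor matching these roles can be slotted in without changing the overall flow.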
It is noted that, when acquiring the tested face image information having depth information, the relative position of the depth information camera module and the tested face is changed, so that the captured depth point cloud data of the tested face can be relatively complete. More specifically, in actual use, the tested individual can rotate up, down, left and right relative to the TOF depth information camera module 10', so that the relative position between the TOF depth information camera module 10' and the face of the tested person changes. Those skilled in the art should readily understand that, when the position of the tested face relative to the TOF depth information camera module 10' changes, the shooting angle of the TOF depth information camera module 10' relative to the tested face changes, so that some positions that originally could not be seen are effectively observed, or some parts of the tested face that were not clearly visible become clearly visible; in this way, the depth point cloud data of the tested face becomes more complete.
Similarly, the position of the TOF depth information camera module 10' can be changed to correspondingly change the relative positional relationship between the TOF camera module and the tested face, so that the tested face can be photographed from different angles. It will be appreciated that the tested face images acquired from different angles can be mapped to and compared with one another so as to fill in holes, thereby obtaining relatively complete depth point cloud data.
In the preprocessing step, the preprocessing module 121 includes a parsing module 1211 and a noise reduction module 1212, wherein the parsing module 1211 is loaded with a corresponding algorithm formula, such as a time-of-flight parsing formula, so that after the depth information raw data of the tested face collected by the TOF depth information camera module 10' is transferred to the parsing module 1211, the parsing module 1211 processes the raw data according to the corresponding algorithm formula, to obtain the depth point cloud initial data of the tested face. Further, the noise reduction module 1212 of the preprocessing module 121 is communicatively coupled with the parsing module 1211 and carries out noise reduction processing on the depth point cloud initial data. In particular, in the preferred embodiment of the invention, after receiving the depth point cloud initial data, the noise reduction module 1212 applies bilateral filtering to the acquired data, so as to effectively remove the flying points in the data (flying-point errors where adjacent laser points in the data show a large difference, being too high or too low) and the background point cloud data captured when shooting the tested face, while the effective region of the tested face is retained. It is noted that, in other embodiments of the invention, after detecting the flying-point data in the depth point cloud initial data, the noise reduction module 1212 can substitute the flying-point data by interpolation, so as to retain relatively more, and more complete, depth information of the tested face.
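The flying-point removal and interpolation described above can be illustrated with a minimal sketch. Here a pixel is flagged when its depth differs from the mean of its 4-neighbors by more than a threshold, and flagged pixels are replaced by that neighborhood mean; the threshold, the neighborhood choice, and the mean-based substitution are illustrative assumptions, not the patent's actual bilateral filter:

```python
import numpy as np

def suppress_flying_points(depth, jump_threshold=0.05):
    """Detect flying points in a depth map and replace them by interpolation.

    A pixel is flagged as a flying point when its depth differs from the mean
    of its 4-neighbors by more than jump_threshold; flagged pixels are then
    replaced by that neighborhood mean. All parameters are illustrative.
    Returns the cleaned depth map and the boolean mask of flagged pixels.
    """
    d = depth.astype(float).copy()
    padded = np.pad(d, 1, mode='edge')           # replicate borders
    neigh_mean = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    flying = np.abs(d - neigh_mean) > jump_threshold
    d[flying] = neigh_mean[flying]               # interpolate over flying points
    return d, flying
```

A production denoiser would combine this with background removal so that only the effective face region survives, as the patent describes.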
Correspondingly, the preprocessing step includes a parsing step and a noise reduction step, wherein the parsing step includes:
receiving the depth information raw data of the tested face collected by the TOF depth information camera module 10'; and
parsing the raw data with the formula set by the parsing module 1211, to obtain the depth point cloud initial data of the tested face.
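The "time-of-flight parsing formula" loaded into the parsing module is not spelled out in the patent; a common choice for continuous-wave TOF sensors is the phase-shift relation d = c·φ / (4π·f), sketched here under that assumption:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def phase_to_depth(phase_rad, mod_freq_hz):
    """Convert a measured phase shift to depth for a continuous-wave TOF
    sensor: round-trip time t = phase / (2*pi*f), so d = c * t / 2
    = c * phase / (4 * pi * f). This CW phase-shift variant is one common
    time-of-flight formula, not necessarily the one used in the patent."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)
```

For example, a half-cycle phase shift (φ = π) at a 20 MHz modulation frequency corresponds to a depth of about 3.75 m.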
Further, the noise reduction step in the preprocessing step includes:
removing the background point cloud data and retaining the effective region of the tested face;
identifying the flying-point data present in the data, i.e. the data points where adjacent laser points show a large difference (too high or too low); and
replacing, by an interpolation algorithm, the depth information values corresponding to the flying-point data; in this way, the depth point cloud data of the tested face is optimized, in favor of the subsequent 3D face model construction.
It will be appreciated that this staged noise reduction approach not only simplifies the design difficulty of the algorithm formulas of the noise reduction module 1212; at the same time, the noise reduction module 1212 only needs to handle a relatively simple task in each stage, and the level of difficulty is correspondingly reduced, so the processing speed of the noise reduction module 1212 is optimized and improved, and the processing is relatively fast.
In addition, in the step of constructing the three-dimensional face model, the reconstruction module 122 includes a compartmentalization module 1221, a parameterized module 1222 and a textures module 1223. The compartmentalization module 1221 is communicatively coupled with the preprocessing module 121, so that after receiving the depth point cloud preprocessed data, it carries out compartmentalization processing on the face gray scale 3D model based on the depth point cloud preprocessed data. In particular, in the present invention, the compartmentalization module 1221 is implanted with a triangle gridding algorithm formula and carries out triangle gridding processing on the gray scale 3D model, that is, the gray scale 3D model is differentiated into a series of triangle grid regions. Of course, in additional embodiments of the invention, the compartmentalization module 1221 can also be implanted with other algorithm formulas so as to divide the gray scale 3D model in different ways; for example, a gridding algorithm formula is loaded into the compartmentalization module 1221 to differentiate the gray scale 3D model into a series of rectangular grid regions, or the gray scale 3D model is divided into regions according to a feature-point region partitioning algorithm formula, i.e. the gray scale 3D model is differentiated according to preset facial feature points.
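The triangle gridding above can be sketched for an organized depth grid, where each quad of adjacent pixels is split into two triangles. The assumption that the point cloud is organized as a regular camera grid (row-major vertex indices) is an illustrative simplification:

```python
def grid_triangulate(rows, cols):
    """Triangulate an organized rows x cols depth grid into triangles.

    Each grid quad (i, j)-(i+1, j+1) is split into two triangles; vertex
    indices are row-major. This mirrors the differentiation of the gray
    scale 3D model into a series of triangle grid regions, assuming an
    organized camera-grid point cloud (an illustrative simplification).
    """
    tris = []
    for i in range(rows - 1):
        for j in range(cols - 1):
            v00 = i * cols + j       # top-left vertex of the quad
            v01 = v00 + 1            # top-right
            v10 = v00 + cols         # bottom-left
            v11 = v10 + 1            # bottom-right
            tris.append((v00, v10, v01))  # upper-left triangle
            tris.append((v01, v10, v11))  # lower-right triangle
    return tris
```

Swapping this routine for a rectangular gridding or a feature-point partition, as the patent alternatives suggest, changes only how the regions are generated, not the rest of the pipeline.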
Further, the parameterized module 1222 is communicatively coupled with the compartmentalization module 1221, so as to parameterize the gray scale 3D model after compartmentalization, thereby providing a corresponding control reference system for the subsequent texture mapping operation. The formula set in the parameterized module 1222 corresponds to the formula possessed by the compartmentalization module 1221, so as to parameterize the compartmentalized gray scale 3D model. In particular, in the preferred implementation of the invention, the parameterized module 1222 parameterizes the triangle-gridded gray scale 3D model, so as to set a corresponding parameter in each triangle grid region; thus, in the subsequent texture mapping operation, textures can be positioned according to the set parameters, so as to improve the precision and processing speed of the modeling. It is worth mentioning that the parameterized module 1222 can set different parameterization algorithm formulas to adjust the subsequent texture details, so as to expand the interest and functionality of the three-dimensional face model reconstruction.
In addition, the textures module 1223 is communicatively coupled with the compartmentalization module 1221, so as to synthesize the two-dimensional RGB face image and the parameterized gray scale 3D model and realize the face model construction. In particular, in this preferred embodiment of the invention, according to the parameter information of the gray scale 3D model, the two-dimensional RGB face image is correspondingly pasted onto the gray scale 3D model, so that the gray scale 3D model carries RGB color information. It is noted that, in the present invention, the two-dimensional RGB face image can be pre-stored in a background database, so that when texture mapping needs to be carried out, the two-dimensional RGB face image can be called up and applied to the corresponding gray scale 3D model.
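The texture-mapping step can be illustrated by projecting each model vertex into the 2D RGB image and sampling its color. The pinhole intrinsics fx, fy, cx, cy are hypothetical parameters introduced for this sketch; the patent only states that the RGB image is pasted onto the parameterized gray scale 3D model according to its parameters:

```python
import numpy as np

def texture_vertices(vertices, rgb_image, fx, fy, cx, cy):
    """Assign an RGB color to each 3D vertex by projecting it into the
    2D RGB image with a pinhole camera model.

    fx, fy, cx, cy are assumed RGB-camera intrinsics (hypothetical); the
    pinhole projection is an illustrative stand-in for the patent's
    parameter-driven texture positioning.
    """
    h, w, _ = rgb_image.shape
    colors = np.zeros((len(vertices), 3), dtype=rgb_image.dtype)
    for k, (x, y, z) in enumerate(vertices):
        u = int(round(fx * x / z + cx))   # pixel column from projection
        v = int(round(fy * y / z + cy))   # pixel row from projection
        u = min(max(u, 0), w - 1)         # clamp to image bounds
        v = min(max(v, 0), h - 1)
        colors[k] = rgb_image[v, u]
    return colors
```

Because the TOF and RGB modules are placed adjacently with almost the same viewpoint, such a projection needs little or no extrinsic correction, which is precisely the layout advantage the description emphasizes.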
Correspondingly, the step of constructing the three-dimensional face model further comprises the steps of:
compartmentalizing the face gray scale 3D model established based on the depth point cloud preprocessed data;
parameterizing the compartmentalized gray scale 3D model, so as to set a corresponding parameter respectively for the gray scale 3D model after compartmentalization; and
synthesizing a corresponding two-dimensional RGB face image and the parameterized gray scale 3D model.
It is noted that, in the compartmentalization step, the gray scale 3D model is processed by triangle gridding, that is, the gray scale 3D model is differentiated into a series of triangle grid regions. That is, the compartmentalization step further comprises the step of:
triangle-gridding the face gray scale 3D model established based on the depth point cloud preprocessed data.
According to another aspect of the present invention, the present invention further provides a three-dimensional facial reconstruction method based on depth information, wherein the three-dimensional facial reconstruction method includes the following steps:
(a) obtaining depth image information of a measured target face;
(b) obtaining depth point cloud preprocessed data about the measured target face according to the depth image information; and
(c) rebuilding a human face three-dimensional model of the measured target face based on the depth point cloud preprocessed data.
Preferably, the three-dimensional facial reconstruction method of the invention further comprises the step of:
(d) registering the three-dimensional face model, so that the three-dimensional face model has consistent geometric topology structure features.
Further, the step (b) further comprises the steps of:
(b.1) parsing the depth image information, to obtain tested face depth point cloud initial data; and
(b.2) carrying out noise reduction on the tested face depth point cloud initial data, to obtain the depth point cloud preprocessed data.
Further, the step (c) further comprises the steps of:
(c.1) compartmentalizing the face gray scale 3D model of the depth point cloud preprocessed data;
(c.2) parameterizing the compartmentalized face gray scale 3D model; and
(c.3) synthesizing a two-dimensional RGB face image and the parameterized face gray scale 3D model, to rebuild the human face three-dimensional model.
Further, the step (b.2) further comprises the steps of:
(b.2.1) removing the background point cloud data in the depth point cloud preprocessed data, so that only the effective region of the tested face is retained in the depth point cloud preprocessed data;
(b.2.2) identifying the flying-point data present in the data of the effective region of the tested face, wherein the flying-point data are data points where adjacent laser points show a large difference; and
(b.2.3) reassigning values to the flying-point data.
It can thus be seen that the objects of the present invention can be fully and efficiently accomplished. The embodiments used to explain the functional and structural principles of the present invention have been fully illustrated and described, and the present invention is not limited by changes made on the basis of these embodiments. Therefore, the present invention includes all modifications covered within the scope and spirit of the appended claims.
Claims (20)
1. A three-dimensional facial reconstruction system based on depth information, used to rebuild a human face three-dimensional model of a measured target face, characterized by comprising:
a depth information acquisition module, wherein the depth information acquisition module is used for acquiring depth image information about the measured target face; and
a three-dimensional face processing module, wherein the three-dimensional face processing module includes a preprocessing module and a reconstruction module communicably connected to the preprocessing module, the preprocessing module being communicably connected to the depth information acquisition module, wherein the preprocessing module preprocesses the depth image information about the measured target face acquired by the depth information acquisition module, to obtain depth point cloud preprocessed data about the measured target face based on the depth image information, and wherein the reconstruction module rebuilds the human face three-dimensional model of the measured target face based on the depth point cloud preprocessed data.
2. The three-dimensional facial reconstruction system according to claim 1, wherein the preprocessing module includes a parsing module and a noise reduction module communicatively coupled with the parsing module, the parsing module being communicably connected to the depth information acquisition module and the reconstruction module being communicably connected to the noise reduction module, wherein the parsing module parses the depth image information of the measured target face to obtain tested face depth point cloud initial data, and the noise reduction module carries out noise reduction on the tested face depth point cloud initial data to obtain the depth point cloud preprocessed data.
3. The three-dimensional facial reconstruction system according to claim 1, wherein the reconstruction module includes a compartmentalization module and, respectively communicably connected to the compartmentalization module, a parameterized module and a textures module, the compartmentalization module being communicably connected to the preprocessing module, wherein the compartmentalization module carries out compartmentalization processing on a face gray scale 3D model of the depth point cloud preprocessed data, the parameterized module carries out parameterization processing on the compartmentalized face gray scale 3D model, and the textures module synthesizes a two-dimensional RGB face image and the parameterized face gray scale 3D model, to rebuild the human face three-dimensional model of the measured target face.
4. The three-dimensional facial reconstruction system according to claim 2, wherein the reconstruction module includes a compartmentalization module and, respectively communicably connected to the compartmentalization module, a parameterized module and a textures module, the compartmentalization module being communicably connected to the noise reduction module, wherein the compartmentalization module carries out compartmentalization processing on a face gray scale 3D model of the depth point cloud preprocessed data, the parameterized module carries out parameterization processing on the compartmentalized face gray scale 3D model, and the textures module synthesizes a two-dimensional RGB face image and the parameterized face gray scale 3D model, to rebuild the human face three-dimensional model of the measured target face.
5. The three-dimensional facial reconstruction system according to any one of claims 1 to 4, further comprising a registration module, wherein the registration module is communicably connected to the three-dimensional face processing module, and wherein the registration module is used to calibrate and match the parameters of the three-dimensional face model, so that the registered three-dimensional face model has consistent geometric topology structure features.
6. The three-dimensional facial reconstruction system according to any one of claims 1 to 4, further comprising an optimization module, wherein the optimization module is communicably connected to the three-dimensional face processing module, and wherein the optimization module is used to optimize the three-dimensional face model.
7. The three-dimensional facial reconstruction system according to claim 4, wherein the depth information acquisition module includes a camera module, wherein the camera module is communicably connected to the textures module, and wherein the camera module obtains the two-dimensional RGB face image about the measured target face by photographing the measured target face.
8. The three-dimensional facial reconstruction system according to any one of claims 1 to 4 and 7, wherein the depth information acquisition module is a TOF depth information camera module.
9. The three-dimensional facial reconstruction system according to any one of claims 1 to 4 and 7, wherein the depth information acquisition module is communicably connected to a TOF depth information camera module.
10. A three-dimensional facial reconstruction method based on depth information, characterized in that the three-dimensional facial reconstruction method includes the following steps:
(a) obtaining depth image information of a measured target face;
(b) obtaining depth point cloud preprocessed data about the measured target face according to the depth image information; and
(c) rebuilding a human face three-dimensional model of the measured target face based on the depth point cloud preprocessed data.
11. The three-dimensional facial reconstruction method according to claim 10, wherein the step (b) further comprises the steps of:
(b.1) parsing the depth image information to obtain initial face depth point cloud data; and
(b.2) denoising the initial face depth point cloud data to obtain the depth point cloud preprocessed data.
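The patent does not give an implementation of step (b.1); a common way to parse a depth image into a point cloud is pinhole back-projection. The sketch below assumes illustrative camera intrinsics (`fx`, `fy`, `cx`, `cy`), which the patent does not specify.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into an N x 3 point cloud
    using a pinhole camera model. Intrinsics are illustrative inputs,
    not values given by the patent."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no depth return

# Example: a flat 2x2 depth patch one metre from the sensor
depth = np.ones((2, 2))
cloud = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=0.5, cy=0.5)
```

The resulting N x 3 array is the kind of initial face depth point cloud data that the subsequent denoising step operates on.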
12. The three-dimensional facial reconstruction method according to claim 10, wherein the step (c) further comprises the steps of:
(c.1) regionalizing the depth point cloud preprocessed data into a face grayscale 3D model;
(c.2) parameterizing the regionalized face grayscale 3D model; and
(c.3) synthesizing a two-dimensional RGB face image with the parameterized face grayscale 3D model to reconstruct the three-dimensional face model.
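The synthesis of step (c.3) is not spelled out in the claims; one standard way to combine a 2D RGB face image with a 3D model is to project each model vertex into the image and sample its colour. The sketch below assumes a pinhole projection with illustrative intrinsics and nearest-pixel sampling, neither of which is specified by the patent.

```python
import numpy as np

def sample_vertex_colors(verts, rgb, fx, fy, cx, cy):
    """Project each 3D vertex (N x 3, camera frame, z > 0) into the 2D
    RGB face image through a pinhole model and sample its colour,
    giving a per-vertex texture for the reconstructed model."""
    u = np.round(verts[:, 0] * fx / verts[:, 2] + cx).astype(int)
    v = np.round(verts[:, 1] * fy / verts[:, 2] + cy).astype(int)
    u = np.clip(u, 0, rgb.shape[1] - 1)  # keep projections inside the image
    v = np.clip(v, 0, rgb.shape[0] - 1)
    return rgb[v, u]

# Example: two vertices landing on two distinctly coloured pixels
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[0, 0] = [255, 0, 0]
rgb[1, 1] = [0, 255, 0]
verts = np.array([[0., 0., 1.], [1., 1., 1.]])
colors = sample_vertex_colors(verts, rgb, fx=1.0, fy=1.0, cx=0.0, cy=0.0)
```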
13. The three-dimensional facial reconstruction method according to claim 11, wherein the step (c) further comprises the steps of:
(c.1) regionalizing the depth point cloud preprocessed data into a face grayscale 3D model;
(c.2) parameterizing the regionalized face grayscale 3D model; and
(c.3) synthesizing a two-dimensional RGB face image with the parameterized face grayscale 3D model to reconstruct the three-dimensional face model.
14. The three-dimensional facial reconstruction method according to claim 11, wherein the step (b.2) further comprises the steps of:
(b.2.1) removing background point cloud data from the depth point cloud data, so that only the effective region of the measured face is retained;
(b.2.2) identifying flying-point data present in the data of the effective region of the measured face, wherein the flying-point data are data points that differ significantly from adjacent laser points; and
(b.2.3) re-assigning values to the flying-point data.
15. The three-dimensional facial reconstruction method according to claim 14, wherein in the step (b.2.1), the initial depth point cloud data is filtered to remove the background point cloud data from the depth point cloud preprocessed data.
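Claims 14 and 15 describe a range filter for the background plus detection and re-assignment of flying points. The sketch below is one plausible reading of those steps on an organized depth map; the thresholds (`max_range`, `jump`) and the 3x3-median re-assignment are assumptions, not values from the patent.

```python
import numpy as np

def denoise_depth(depth, max_range=1.5, jump=0.05):
    """(b.2.1) a range filter discards background returns beyond the
    face; (b.2.2) flying points are pixels whose depth differs sharply
    from their neighbourhood; (b.2.3) those pixels are re-assigned from
    a local 3x3 median. Thresholds are illustrative only."""
    d = depth.astype(float).copy()
    d[d > max_range] = 0.0                      # (b.2.1) background removal
    pad = np.pad(d, 1, mode='edge')
    # 3x3 neighbourhood median for every pixel
    stack = np.stack([pad[i:i + d.shape[0], j:j + d.shape[1]]
                      for i in range(3) for j in range(3)])
    med = np.median(stack, axis=0)
    flying = np.abs(d - med) > jump             # (b.2.2) detect flying points
    d[flying] = med[flying]                     # (b.2.3) re-assign
    return d

# Example: one spurious flying point in an otherwise flat face patch
noisy = np.full((5, 5), 0.8)
noisy[2, 2] = 1.0
clean = denoise_depth(noisy)
```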
16. The three-dimensional facial reconstruction method according to claim 12 or 13, wherein in the step (c.1), the depth point cloud preprocessed data is regionalized into the face grayscale 3D model by triangular meshing of the depth point cloud preprocessed data.
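For an organized point cloud that keeps the sensor's pixel grid, the triangular meshing of claim 16 can be done directly on the grid: each 2x2 cell is split into two triangles. This is a minimal sketch of that idea, not the patent's specific meshing procedure.

```python
import numpy as np

def grid_triangulation(h, w):
    """Triangulate an organized (h x w) depth point cloud: every 2x2
    cell of the pixel grid becomes two triangles, with points indexed
    in row-major order."""
    tris = []
    for r in range(h - 1):
        for c in range(w - 1):
            i = r * w + c
            tris.append((i, i + 1, i + w))          # upper-left triangle
            tris.append((i + 1, i + w + 1, i + w))  # lower-right triangle
    return np.array(tris)

# Example: a 3x3 grid has 2x2 cells, hence 8 triangles
faces = grid_triangulation(3, 3)
```

In practice, cells containing invalid (zero-depth) points would be skipped before the mesh is parameterized.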
17. The three-dimensional facial reconstruction method according to claim 12 or 13, wherein before the step (c.3), the method further comprises the step of obtaining the two-dimensional RGB face image of the measured target face by a camera module.
18. The three-dimensional facial reconstruction method according to any one of claims 10 to 17, wherein in the step (a), a depth information acquisition module implemented as a TOF depth information camera module photographs the measured target face to obtain the depth image information.
19. The three-dimensional facial reconstruction method according to any one of claims 10 to 17, wherein in the step (a), a depth information acquisition module communicatively connected to a TOF depth information camera module obtains the depth image information of the measured target face photographed by the TOF depth information camera module.
20. The three-dimensional facial reconstruction method according to any one of claims 10 to 19, further comprising the step of:
(d) registering the three-dimensional face model, so that the three-dimensional face model has a consistent geometric topology structure feature.
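The claims do not name a registration algorithm for step (d). A common building block for bringing face models into a common frame before enforcing a shared topology is rigid point-set alignment; the sketch below uses the Kabsch algorithm on points in known correspondence, purely as an illustration of one standard choice.

```python
import numpy as np

def rigid_register(src, dst):
    """Rigidly align point set src onto dst (both N x 3, in known
    correspondence) with the Kabsch algorithm: optimal rotation from
    an SVD of the cross-covariance, then the matching translation."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)      # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    # guard against a reflection in the recovered transform
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return src @ R.T + t

# Example: recover a 90-degree rotation about z plus a translation
pts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
dst = pts @ Rz.T + np.array([1., 2., 3.])
aligned = rigid_register(pts, dst)
```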
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711292737.5A CN109903368A (en) | 2017-12-08 | 2017-12-08 | Three-dimensional facial reconstruction system and its three-dimensional facial reconstruction method based on depth information |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109903368A true CN109903368A (en) | 2019-06-18 |
Family
ID=66940205
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711292737.5A Pending CN109903368A (en) | 2017-12-08 | 2017-12-08 | Three-dimensional facial reconstruction system and its three-dimensional facial reconstruction method based on depth information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109903368A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110532750A (en) * | 2019-09-03 | 2019-12-03 | 南京信息职业技术学院 | System and method based on time-of-flight method 3D modeling technology prevention and control child myopia |
CN111063016A (en) * | 2019-12-31 | 2020-04-24 | 螳螂慧视科技有限公司 | Multi-depth lens face modeling method and system, storage medium and terminal |
WO2021063012A1 (en) * | 2019-09-30 | 2021-04-08 | 华为技术有限公司 | Method for presenting face in video call, video call apparatus and vehicle |
CN113140046A (en) * | 2021-04-21 | 2021-07-20 | 上海电机学院 | AR (augmented reality) cross-over control method and system based on three-dimensional reconstruction and computer readable medium |
CN113610971A (en) * | 2021-09-13 | 2021-11-05 | 杭州海康威视数字技术股份有限公司 | Fine-grained three-dimensional model construction method and device and electronic equipment |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101866497A (en) * | 2010-06-18 | 2010-10-20 | 北京交通大学 | Binocular stereo vision based intelligent three-dimensional human face rebuilding method and system |
CN104050712A (en) * | 2013-03-15 | 2014-09-17 | 索尼公司 | Method and apparatus for establishing three-dimensional model |
CN104077808A (en) * | 2014-07-20 | 2014-10-01 | 詹曙 | Real-time three-dimensional face modeling method used for computer graph and image processing and based on depth information |
CN107330976A (en) * | 2017-06-01 | 2017-11-07 | 北京大学第三医院 | A kind of human body head three-dimensional modeling apparatus and application method |
Non-Patent Citations (2)
Title |
---|
Yang Tiejun: "Robot Innovation Experiment Design for Information Engineering Majors", University of Electronic Science and Technology of China Press, pages: 150 - 151 *
Huang Junwei: "Three-Dimensional Head Reconstruction Based on Head Mesh Models for Different Genders", China Masters' Theses Full-text Database, no. 9, pages 150 - 151 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109903368A (en) | Three-dimensional facial reconstruction system and its three-dimensional facial reconstruction method based on depth information | |
JP7418340B2 (en) | Image augmented depth sensing using machine learning | |
JP7413321B2 (en) | Daily scene restoration engine | |
US20210011289A1 (en) | Head-mounted display for virtual and mixed reality with inside-out positional, user body and environment tracking | |
Johnson et al. | Shape estimation in natural illumination | |
US20120242800A1 (en) | Apparatus and system for interfacing with computers and other electronic devices through gestures by using depth sensing and methods of use | |
CN102760234B (en) | Depth image acquisition device, system and method | |
CN104335005B (en) | 3D is scanned and alignment system | |
EP2939214A1 (en) | Using photometric stereo for 3d environment modeling | |
CN106772431A (en) | A kind of Depth Information Acquistion devices and methods therefor of combination TOF technologies and binocular vision | |
CN109949371A (en) | A kind of scaling method for laser radar and camera data | |
WO2020023524A1 (en) | Method and system for resolving hemisphere ambiguity using a position vector | |
CN103491897A (en) | Motion blur compensation | |
EP3069100B1 (en) | 3d mapping device | |
CN208572263U (en) | Array camera module and its electronic equipment | |
CN109766876A (en) | Contactless fingerprint acquisition device and method | |
CN107343148B (en) | Image completion method, apparatus and terminal | |
US20190037133A1 (en) | Tracking image collection for digital capture of environments, and associated systems and methods | |
CN109903377A (en) | A kind of three-dimensional face modeling method and system without phase unwrapping | |
CN103247074A (en) | 3D (three dimensional) photographing method combining depth information and human face analyzing technology | |
CN108319939A (en) | A kind of 3D four-dimension head face data discrimination apparatus | |
CN109905691A (en) | Depth image acquisition device and depth image acquisition system and its image processing method | |
CN108550184A (en) | A kind of biological characteristic 3D 4 D datas recognition methods and system based on light-field camera | |
GB2350511A (en) | Stereogrammetry; personalizing computer game character | |
CN112489189A (en) | Neural network training method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||