Multi-depth lens face modeling method and system, storage medium and terminal

Info

Publication number
CN111063016A
CN111063016A
Authority
CN
China
Prior art keywords
depth, data, distribution data, face, distribution
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911407678.0A
Other languages
Chinese (zh)
Inventor
黄诗文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mantis Vision Ltd China
Original Assignee
Mantis Vision Ltd China
Application filed by Mantis Vision Ltd China
Priority to CN201911407678.0A
Publication of CN111063016A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/10 - Geometric effects
    • G06T15/08 - Volume rendering
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 - Constructional details
    • H04N23/55 - Optical parts specially adapted for electronic image sensors; Mounting thereof


Abstract

Embodiments of the invention disclose a multi-depth lens face modeling method and system, a storage medium, and a terminal. The multi-depth lens face modeling method comprises the following steps: S1, acquiring first distribution data, second distribution data and third distribution data of the structured light; S2, acquiring position data of the first depth lens, the second depth lens and the third depth lens; S3, generating target point cloud data according to the first distribution data, the second distribution data, the third distribution data and the position data; and S4, generating a three-dimensional face model according to the target point cloud data. The multi-depth lens face modeling method captures more comprehensive information with higher reliability, does not require the user to keep still, and offers high information dimensionality, good recognition performance, stability and high efficiency.

Description

Multi-depth lens face modeling method and system, storage medium and terminal
Technical Field
Embodiments of the invention relate to the technical field of computer vision algorithms, and in particular to a multi-depth lens face modeling method and system, a storage medium, and a terminal.
Background
In recent years, driven by mobile-phone hardware such as the iPhone, depth lenses based on TOF (time of flight) and structured light have matured and gradually entered everyday life. For example, identity authentication based on facial features has become increasingly popular under the impetus of various vendors. However, for face authentication at the financial level, a simple color image can no longer meet the requirements. 3D face recognition, as an important supplement, is inherently resistant to photo and video fraud, and has therefore become one of the current mainstream technologies.
To handle face identity authentication at the financial level, a single depth lens is currently most often used for face modeling. However, a single depth lens has the following problems: it can provide only one depth picture with depth-value information, cannot cover the whole face (including the left and right ears, the chin and other parts), and therefore cannot provide all-around 3D face data. To make up for this defect, the single depth lens is currently swept across the face to realize face modeling and, in turn, face identity authentication. This method, however, is time-consuming and unstable, and requires the photographed person to remain still to raise the probability of successful authentication.
Disclosure of Invention
Embodiments of the invention aim to provide a multi-depth lens face modeling method and system, a storage medium, and a terminal, offering a solution with more comprehensive information and higher reliability that does not require the user to keep still, and that has high information dimensionality, good recognition performance, stability and high efficiency.
An embodiment of the invention provides a multi-depth lens face modeling method, which comprises the following steps:
S1, acquiring first distribution data, second distribution data and third distribution data of the structured light; wherein:
the first distribution data is obtained by detecting, through a first depth lens, the distribution of the structured light on the left side of the face to be detected; the second distribution data is obtained by detecting, through a second depth lens, the distribution of the structured light on the right side of the face to be detected; and the third distribution data is obtained by detecting, through a third depth lens, the distribution of the structured light on the front of the face to be detected;
the first distribution data, the second distribution data, and the third distribution data include: three-dimensional distribution information of the structured light;
S2, acquiring position data of the first depth lens, the second depth lens and the third depth lens, the position data including: position information among the first depth lens, the second depth lens and the third depth lens;
S3, generating target point cloud data according to the first distribution data, the second distribution data, the third distribution data and the position data; the target point cloud data comprises three-dimensional space information of a face to be detected;
and S4, generating a three-dimensional face model according to the target point cloud data.
With this scheme, data can be acquired from the face in all directions, so the information is more comprehensive; the user does not need to keep still, and the data obtained from the three angles can verify one another; meanwhile, three-dimensional modeling does not require a prior conversion to a depth map, since the structured-light three-dimensional distribution information is fitted directly, which improves data-processing efficiency.
Optionally, step S3 specifically includes:
s301, generating structured light fusion data according to the first distribution data, the second distribution data and the third distribution data, wherein the structured light fusion data comprises: three-dimensional distribution information of the structured light on the face to be detected;
s302, generating the target point cloud data according to the structured light fusion data and the position data.
With this scheme, the first distribution data, the second distribution data and the third distribution data are fused; redundant, invalid and conflicting data are removed; and their mutual verification improves the efficiency of the subsequent modeling process.
Optionally, after step S302, the method further comprises:
S303, acquiring a further set of the first distribution data, the second distribution data or the third distribution data;
S304, generating a check fitting degree according to the further first distribution data, second distribution data or third distribution data and the target point cloud data;
S305, acquiring a standard fitting degree;
S306, if the check fitting degree reaches the standard fitting degree, selecting the target point cloud data that meets the fitting requirement.
The fitting degree is verified here to check whether the effect of fitting using the aforementioned data is satisfactory.
Optionally, step S3 is followed by:
S310, acquiring fourth distribution data of the structured light, the fourth distribution data being obtained by detecting, through a fourth depth lens, the distribution of the structured light at the chin of the face to be detected, and including: three-dimensional distribution information of the structured light near the chin of the face to be detected;
s320, generating the target point cloud data according to the first distribution data, the second distribution data, the third distribution data, the fourth distribution data and the position data.
With this scheme, measurement data of the chin of the face to be detected is obtained, improving the dimensionality and reliability of the data and thus making the modeling more accurate and reliable.
Optionally, step S4 is followed by:
S5, acquiring map data, wherein the map data comprises the color, the brightness and the two-dimensional distribution information of the face to be detected;
and S6, generating a preview three-dimensional face model on the three-dimensional face model according to the map data and the target point cloud data.
These steps apply a texture map to the face to be detected in order to generate a three-dimensional face model that the user can preview, which improves user comfort.
The embodiment of the invention also provides a multi-depth lens face modeling system, which comprises: the system comprises a bracket, a first depth lens, a second depth lens, a third depth lens, a hardware synchronization line and a controller;
the bracket includes: the first tentacle, the second tentacle, the third tentacle and the base frame; the first tentacle, the second tentacle and the third tentacle are respectively fixed on the base frame, and the end points of the first tentacle, the second tentacle and the third tentacle are respectively positioned at the left side, the right side and the middle part of the same arc-shaped surface;
the first depth lens, the second depth lens and the third depth lens are respectively fixed on the end points of the first tentacle, the second tentacle and the third tentacle;
the first depth lens, the second depth lens and the third depth lens are respectively electrically connected with the controller;
the hardware synchronization line is electrically connected with the first depth lens, the second depth lens, the third depth lens and the controller respectively.
With this scheme, each side of the face to be detected can be effectively detected, and the device is simple in structure and convenient to maintain.
Optionally, the system further comprises: a fourth depth lens;
the support further comprises: the fourth tentacle is fixed on the base frame and is positioned on the lower side of the arc-shaped surface;
the fourth depth lens is fixed on the fourth tentacle and is electrically connected with the hardware synchronization line and the controller respectively.
The purpose of adding the fourth depth lens and the fourth tentacle is to model the chin of the face to be detected, so as to improve the dimensionality of the data and the reliability of the modeling.
An embodiment of the present invention further provides a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements the multi-depth lens face modeling method according to any one of claims 1 to 5.
An embodiment of the present invention further provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the multi-depth lens face modeling method according to any one of claims 1 to 5 when executing the computer program.
Based on this scheme, depth lenses are arranged on the left, the right and the front of the human face, and the face to be detected is modeled from the structured-light three-dimensional distribution information obtained by each depth lens. Because the method fits the three-dimensional distribution information of the structured light on each side of the face, modeling can be achieved even while the user moves, with high efficiency. Compared with the existing single depth lens, three-dimensional face data can be obtained without any scanning movement by the user, the information dimensionality is higher, the reliability is higher, and mutual verification makes the method stable and efficient.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a flowchart of a multi-depth lens face modeling method according to a first embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a multi-depth lens face modeling system according to a second embodiment of the present invention;
fig. 3 is a schematic view of an installation structure of a depth lens and a tentacle according to a second embodiment of the present invention.
Reference numbers in the figures:
1. base frame; 2. first tentacle; 3. second tentacle; 4. third tentacle; 5. first depth lens; 6. second depth lens; 7. third depth lens; 8. fourth tentacle; 9. fourth depth lens.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it is to be understood that the terms "central," "longitudinal," "lateral," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," "axial," "radial," "circumferential," and the like are used in the indicated orientations and positional relationships based on the drawings for convenience in describing and simplifying the description, but do not indicate or imply that the referenced device or element must have a particular orientation, be constructed and operated in a particular orientation, and thus should not be construed as limiting the invention.
In the present invention, unless otherwise specifically stated or limited, the terms "mounted," "connected," "fixed," and the like are to be construed broadly and may, for example, be fixedly connected, detachably connected, or integrally formed; the connection can be mechanical connection, electrical connection or communication connection; either directly or indirectly through intervening media, either internally or in any other suitable relationship, unless expressly stated otherwise. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
The technical solution of the present invention will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 1 is a flowchart of the multi-depth lens face modeling method in the first embodiment of the present invention, fig. 2 is a schematic structural diagram of the multi-depth lens face modeling system in the second embodiment of the present invention, and fig. 3 is a schematic view of the installation structure of a depth lens and a tentacle in the second embodiment of the present invention.
Example one
As shown in fig. 1, the multi-depth lens face modeling method of this embodiment comprises the following steps:
S1, acquiring first distribution data, second distribution data and third distribution data of the structured light.
The first distribution data is obtained by detecting, through a first depth lens, the distribution of the structured light on the left side of the face to be detected; the second distribution data is obtained by detecting, through a second depth lens, the distribution of the structured light on the right side of the face to be detected; and the third distribution data is obtained by detecting, through a third depth lens, the distribution of the structured light on the front of the face to be detected.
The first distribution data, the second distribution data, and the third distribution data include: information on the three-dimensional distribution of the structured light.
The first depth lens, the second depth lens and the third depth lens are lenses used for three-dimensional modeling; specifically, they may be devices that perform modeling by means of a point cloud and acquire point cloud data. For example, one possible form of structured light is an infrared light-spot matrix, and one possible depth lens comprises an infrared light-spot matrix emitter and an infrared light-spot matrix camera.
The relative position between the infrared light-spot matrix emitter and the infrared light-spot matrix camera is fixed and known.
The infrared light-spot matrix emitter projects scanning matrix light spots onto the face to be detected, and the infrared light-spot matrix camera photographs the scanning matrix light spots on the face. Then, from the emission angle of the scanning matrix light spots, the captured pictures of the spots, and the scanning time and speed, the three-dimensional distribution information of the infrared spot matrix on the face to be detected is calculated. For example, the spot (a_i, b_j) in the infrared light-spot matrix has three-dimensional distribution information (x_ij(t), y_ij(t), z_ij(t)), where x_ij(t) denotes the x-axis coordinate of spot (a_i, b_j) in a Cartesian coordinate system at a given time, y_ij(t) denotes its y-axis coordinate at that time, and z_ij(t) denotes its z-axis coordinate at that time. It should be noted that, because the capture process of the infrared matrix camera may be intermittent, the corresponding (x_ij(t), y_ij(t), z_ij(t)) may also be a series of discrete value sets rather than a continuous functional expression.
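The patent gives no formulas for this calculation, so the following is only a minimal illustrative sketch of the kind of triangulation involved, not the patented method itself: assuming the emitter-to-camera baseline is known and the emission and observation directions of a spot have already been recovered, the spot's 3D position can be estimated as the closest point between the two rays. All names (triangulate_spot and its parameters) are hypothetical.

    import numpy as np

    def triangulate_spot(emit_dir, cam_dir, baseline):
        # A point on the emitter ray is s * emit_dir (emitter at the origin);
        # a point on the camera ray is baseline + t * cam_dir.
        # Solve s * emit_dir - t * cam_dir = baseline in the least-squares
        # sense, then return the midpoint of the two closest ray points.
        A = np.stack([emit_dir, -cam_dir], axis=1)            # 3x2 system matrix
        (s, t), *_ = np.linalg.lstsq(A, baseline, rcond=None)
        return (s * emit_dir + baseline + t * cam_dir) / 2.0

    # Example: emitter fires along +z; a camera 10 cm to the right sees the spot.
    cam_dir = np.array([-0.05, 0.0, 1.0])
    spot = triangulate_spot(np.array([0.0, 0.0, 1.0]),
                            cam_dir / np.linalg.norm(cam_dir),
                            np.array([0.1, 0.0, 0.0]))        # approx (0, 0, 2)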
The above description only illustrates possible concrete forms of the first distribution data, the second distribution data and the third distribution data. Since depth lenses, and the acquisition of three-dimensional distribution information of structured light by depth lenses, are known in the art, the structured light need not be infrared, and the acquired three-dimensional distribution information may take forms other than the above.
S2, acquiring position data of the first depth lens, the second depth lens and the third depth lens, the position data including: position information among the first depth lens, the second depth lens and the third depth lens.
Specifically, the position data is a prerequisite for computing the three-dimensional distribution information of the structured light from the first distribution data, the second distribution data and the third distribution data. The next calculation can be performed only once the position data and all three sets of distribution data have been obtained.
S3, generating target point cloud data according to the first distribution data, the second distribution data, the third distribution data and the position data; the target point cloud data comprises three-dimensional space information of the face to be detected.
Here, the three-dimensional spatial distribution information of the structured light is calculated from the first distribution data, the second distribution data, the third distribution data and the position data. Specifically, these data yield only part of the coordinates of the structured light's trajectory on the face, and the three views may overlap. The coordinates are therefore first used to remove coincident structured-light points, and the target point cloud data of the face to be detected, i.e., its three-dimensional spatial information, is then fitted from the three-dimensional coordinate information of the non-coincident structured light.
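As a sketch of this step (an assumption about one reasonable implementation, not code from the patent), the per-lens point clouds can be brought into a common frame using the known lens positions and then deduplicated on a voxel grid so that coincident structured-light points are removed; fuse_point_clouds and its parameters are hypothetical names.

    import numpy as np

    def fuse_point_clouds(clouds, extrinsics, voxel=1e-3):
        # Rigidly transform each lens's (N, 3) point cloud into the common
        # reference frame using that lens's rotation R and translation t.
        merged = [pts @ R.T + t for pts, (R, t) in zip(clouds, extrinsics)]
        merged = np.vstack(merged)
        # Keep one point per voxel to drop coincident points seen by two lenses.
        keys = np.round(merged / voxel).astype(np.int64)
        _, keep = np.unique(keys, axis=0, return_index=True)
        return merged[keep]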
S4, generating a three-dimensional face model according to the target point cloud data.
As described above, since the first distribution data, the second distribution data and the third distribution data of the structured light are acquired discontinuously, the structured-light distribution obtained from them is also discontinuous, which makes the reconstructed three-dimensional curved surface of the face to be detected discontinuous. Because a continuous three-dimensional model must be generated for a human face, the discontinuous data needs to be fitted, and the fitted data is then used for three-dimensional face modeling, yielding a three-dimensional spatial model that matches the face to be detected.
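One plausible way to fit the discontinuous samples into a continuous surface, sketched here under the simplifying assumption of a single front-facing patch where depth is a function z = f(x, y) (a full head would need several such charts or a mesh-reconstruction method), is smooth scattered-data interpolation; fit_face_surface is a hypothetical name.

    import numpy as np
    from scipy.interpolate import griddata

    def fit_face_surface(points, grid_res=128):
        # points: (N, 3) fused structured-light samples for a frontal patch.
        xy, z = points[:, :2], points[:, 2]
        gx, gy = np.meshgrid(
            np.linspace(xy[:, 0].min(), xy[:, 0].max(), grid_res),
            np.linspace(xy[:, 1].min(), xy[:, 1].max(), grid_res))
        # Cubic interpolation turns the discrete spots into a continuous surface.
        gz = griddata(xy, z, (gx, gy), method="cubic")
        return gx, gy, gz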
It should be noted that, in obtaining the three-dimensional face model, the acquisition intervals of the first distribution data, the second distribution data and the third distribution data should be as short as possible, to reduce the influence of face movement on the accuracy of the modeling result. Compared with the single depth lens of the prior art, this face modeling method with three depth lenses can acquire data of the face in all directions, so the information is more comprehensive; the user does not need to keep still, and the data obtained from the three angles can verify one another; meanwhile, three-dimensional modeling does not require a prior conversion to a depth map, since the structured-light three-dimensional distribution information is fitted directly, which improves data-processing efficiency. In addition, because the information is more comprehensive, its dimensionality is higher, the recognition effect during face identity verification is better, and the reliability is higher.
Optionally, in this embodiment, step S3 of the multi-depth lens face modeling method specifically includes:
s301, generating structured light fusion data according to the first distribution data, the second distribution data, and the third distribution data, the structured light fusion data including: the three-dimensional distribution information of the structured light on the face to be detected.
S302, generating the target point cloud data according to the structured light fusion data and the position data.
The purpose of steps S301 and S302 is to fuse the first distribution data, the second distribution data and the third distribution data, remove redundant, invalid and conflicting data, and improve the efficiency of the subsequent modeling process through their mutual verification.
Optionally, in this embodiment, after step S302 of the multi-depth lens face modeling method, the method further comprises:
S303, acquiring a further set of the first distribution data, the second distribution data or the third distribution data.
S304, generating a check fitting degree according to the further first distribution data, second distribution data or third distribution data and the target point cloud data.
S305, acquiring a standard fitting degree.
S306, if the check fitting degree reaches the standard fitting degree, selecting the target point cloud data that meets the fitting requirement.
The fitting degree is verified here to check whether the fit obtained from the above data is satisfactory. This is because, when a depth lens detects with structured light, the data obtained in each frame is not perfectly stable; the fitting-degree check is added to obtain more stable data and reduce the influence of accidental factors.
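A check fitting degree of this kind could, for instance, be the RMS distance from a freshly acquired frame's points to the fitted model, accepted only when it reaches the standard fitting degree; this is a sketch of one such criterion, where the brute-force nearest-neighbour search and all names are illustrative assumptions.

    import numpy as np

    def check_fit(model_points, check_points, standard_rmse):
        # Distance from every check point to its nearest fitted-model point.
        d = np.linalg.norm(check_points[:, None, :] - model_points[None, :, :],
                           axis=2).min(axis=1)
        rmse = float(np.sqrt((d ** 2).mean()))
        # The target point cloud is kept only if the check fit meets the standard.
        return rmse, rmse <= standard_rmse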
Optionally, in this embodiment, after step S3, the multi-depth lens face modeling method further includes:
s310, fourth distribution data of the structured light is obtained, the fourth distribution data are obtained by detecting the distribution of the structured light at the chin of the face to be detected through a fourth depth lens, and the fourth distribution data comprise: the three-dimensional distribution information of the structured light near the chin of the face to be detected.
S320, generating the target point cloud data according to the first distribution data, the second distribution data, the third distribution data, the fourth distribution data and the position data.
Steps S310 and S320 are provided after step S3 to obtain measurement data of the chin of the face to be detected, improving the dimensionality and reliability of the data and thus making the modeling more accurate and reliable.
Optionally, in this embodiment, after step S4, the multi-depth lens face modeling method further includes:
S5, acquiring map data, wherein the map data comprises the color, the brightness and the two-dimensional distribution information of the face to be detected.
S6, generating a preview three-dimensional face model on the three-dimensional face model according to the map data and the target point cloud data.
These steps apply a texture map to the face to be detected in order to generate a three-dimensional face model that the user can preview, which improves user comfort.
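As an illustrative sketch of applying such map data (assuming, for simplicity, a single frontal color image and pinhole intrinsics K; texture_vertices and its parameters are hypothetical), each model vertex can be colored by projecting it into the 2D image:

    import numpy as np

    def texture_vertices(vertices, image, K):
        # Project (N, 3) camera-frame vertices to pixels with intrinsics K.
        uvw = vertices @ K.T
        uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)
        h, w = image.shape[:2]
        uv[:, 0] = uv[:, 0].clip(0, w - 1)   # clamp to the image bounds
        uv[:, 1] = uv[:, 1].clip(0, h - 1)
        return image[uv[:, 1], uv[:, 0]]     # per-vertex color from the map data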
Example two
The second embodiment provides a multi-depth lens face modeling system, as shown in figs. 2 and 3, including: a bracket, a first depth lens 5, a second depth lens 6, a third depth lens 7, a hardware synchronization line, and a controller (not shown).
The bracket includes: a first tentacle 2, a second tentacle 3, a third tentacle 4 and a base frame 1. The first tentacle 2, the second tentacle 3 and the third tentacle 4 are respectively fixed on the base frame 1, and their end points are respectively positioned at the left side, the right side and the middle of the same arc-shaped surface. One possible arc-shaped surface is a spherical surface, as shown in fig. 2. For the structures of the first tentacle 2, the second tentacle 3 and the third tentacle 4, reference may be made to fig. 3, which shows the installation position of the third depth lens 7 on the third tentacle 4. Of course, fig. 3 does not illustrate the only possible structure and mounting of the first tentacle 2, the second tentacle 3, the third tentacle 4 and the corresponding depth lenses.
The first depth lens 5, the second depth lens 6 and the third depth lens 7 are fixed to end points of the first tentacle 2, the second tentacle 3 and the third tentacle 4, respectively.
The first depth lens 5, the second depth lens 6 and the third depth lens 7 are electrically connected to the controller (not shown), respectively.
The hardware synchronization line is electrically connected to the first depth lens 5, the second depth lens 6, the third depth lens 7 and the controller respectively.
It should be noted that the first depth lens 5, the second depth lens 6 and the third depth lens 7 are all existing devices; they use structured light to detect the topography of the face or object to be detected, thereby obtaining its spatial surface structure.
Meanwhile, the hardware synchronization line is pulse-triggered hardware and, as in the prior art, can work as follows: lens A and lens B are electrically connected through the hardware synchronization line; after receiving a shooting command, the current lens A forwards the command to the next lens B, and B shoots after a set delay upon receiving it (the delay skips lens A's exposure time to prevent interference). The hardware trigger delay can be very short, for example a few milliseconds, so the shooting interval between the lenses can be controlled with high precision, and the lenses can be exposed one by one in the shortest possible time while still not interfering with each other.
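The patent describes the synchronization line only at this behavioural level; the following software simulation of that behaviour (a sketch, with hypothetical names) makes the timing explicit: each lens is triggered only after the previous lens's exposure plus a guard delay has elapsed, so the structured-light patterns never overlap.

    import time

    def trigger_chain(lenses, exposure_s=0.004, guard_s=0.002):
        # Fire each lens in turn; the delay skips the previous exposure so
        # that no two structured-light patterns are projected at once.
        timestamps = []
        for trigger in lenses:          # each entry stands in for a hardware trigger
            timestamps.append(time.monotonic())
            trigger()
            time.sleep(exposure_s + guard_s)
        return timestamps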
With this structure, each side of the face to be detected can be effectively detected, and the structure is simple and convenient to maintain.
Optionally, as shown in fig. 2, the multi-depth lens face modeling system of the second embodiment further includes: a fourth depth lens 9.
The bracket further includes a fourth tentacle 8, which is fixed on the base frame 1 and positioned at the lower side of the arc-shaped surface.
The fourth depth lens 9 is fixed on the fourth tentacle 8, and the fourth depth lens 9 is electrically connected to the hardware synchronization line (not shown) and the controller (not shown), respectively.
The purpose of adding the fourth depth lens 9 and the fourth tentacle 8 is to model the chin of the face to be detected, so as to improve the dimensionality of the data and the reliability of the modeling.
Optionally, the multi-depth lens face modeling system of the second embodiment further includes: a polarizer (not shown).
The polarizers are disposed on the first depth lens 5, the second depth lens 6, the third depth lens 7, and the fourth depth lens 9, and are used to generate polarized light.
It should be noted that polarizers are existing components. Adding polarized light reduces noise pollution and improves modeling accuracy.
In addition, when the above-described processes in the embodiments are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments may be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, and the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals, in accordance with legislation and patent practice.
In the present invention, unless otherwise explicitly specified or limited, the first feature "on" or "under" the second feature may be directly contacting the first feature and the second feature or indirectly contacting the first feature and the second feature through an intermediate.
Also, a first feature "on," "above," and "over" a second feature may mean that the first feature is directly above or obliquely above the second feature, or that only the first feature is at a higher level than the second feature. A first feature being "under," "below," and "beneath" a second feature may be directly under or obliquely under the first feature, or may simply mean that the first feature is at a lower level than the second feature.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example" or "some examples," or the like, means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (9)

1. A multi-depth lens face modeling method, characterized by comprising the following steps:
S1, acquiring first distribution data, second distribution data and third distribution data of the structured light; wherein:
the first distribution data is obtained by detecting, through a first depth lens, the distribution of the structured light on the left side of the face to be detected; the second distribution data is obtained by detecting, through a second depth lens, the distribution of the structured light on the right side of the face to be detected; and the third distribution data is obtained by detecting, through a third depth lens, the distribution of the structured light on the front of the face to be detected;
the first distribution data, the second distribution data, and the third distribution data include: three-dimensional distribution information of the structured light;
S2, acquiring position data of the first depth lens, the second depth lens and the third depth lens, the position data including: position information among the first depth lens, the second depth lens and the third depth lens;
S3, generating target point cloud data according to the first distribution data, the second distribution data, the third distribution data and the position data; the target point cloud data comprises three-dimensional space information of a face to be detected;
and S4, generating a three-dimensional face model according to the target point cloud data.
2. The multi-depth lens face modeling method according to claim 1, wherein step S3 specifically comprises:
s301, generating structured light fusion data according to the first distribution data, the second distribution data and the third distribution data, wherein the structured light fusion data comprises: three-dimensional distribution information of the structured light on the face to be detected;
s302, generating the target point cloud data according to the structured light fusion data and the position data.
3. The multi-depth lens face modeling method according to claim 2, wherein after step S302 the method further comprises:
S303, acquiring a further set of the first distribution data, the second distribution data or the third distribution data;
S304, generating a check fitting degree according to the further first distribution data, second distribution data or third distribution data and the target point cloud data;
S305, acquiring a standard fitting degree;
S306, if the check fitting degree reaches the standard fitting degree, selecting the target point cloud data that meets the fitting requirement.
4. The multi-depth lens face modeling method according to claim 1, wherein step S3 is followed by:
S310, acquiring fourth distribution data of the structured light, the fourth distribution data being obtained by detecting, through a fourth depth lens, the distribution of the structured light at the chin of the face to be detected, and including: three-dimensional distribution information of the structured light near the chin of the face to be detected;
s320, generating the target point cloud data according to the first distribution data, the second distribution data, the third distribution data, the fourth distribution data and the position data.
5. The multi-depth lens face modeling method according to claim 1, further comprising, after step S4:
S5, acquiring map data, wherein the map data comprises the color, the brightness and the two-dimensional distribution information of the face to be detected;
and S6, generating a preview three-dimensional face model on the three-dimensional face model according to the map data and the target point cloud data.
6. A multi-depth lens face modeling system, comprising: a bracket, a first depth lens, a second depth lens, a third depth lens, a hardware synchronization line and a controller;
the bracket includes: the first tentacle, the second tentacle, the third tentacle and the base frame; the first tentacle, the second tentacle and the third tentacle are respectively fixed on the base frame, and the end points of the first tentacle, the second tentacle and the third tentacle are respectively positioned at the left side, the right side and the middle part of the same arc-shaped surface;
the first depth lens, the second depth lens and the third depth lens are respectively fixed on the end points of the first tentacle, the second tentacle and the third tentacle;
the first depth lens, the second depth lens and the third depth lens are respectively electrically connected with the controller;
the hardware synchronization line is electrically connected with the first depth lens, the second depth lens, the third depth lens and the controller respectively.
7. The system of claim 6, further comprising: a fourth depth lens;
the support further comprises: the fourth tentacle is fixed on the base frame and is positioned on the lower side of the arc-shaped surface;
the fourth depth lens is fixed on the fourth tentacle and is electrically connected with the hardware synchronization line and the controller respectively.
8. A computer-readable storage medium, in which a computer program is stored, which, when executed by a processor, implements the multi-depth lens face modeling method according to any one of claims 1 to 5.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the multi-depth lens face modeling method according to any one of claims 1 to 5 when executing the computer program.
CN201911407678.0A 2019-12-31 2019-12-31 Multi-depth lens face modeling method and system, storage medium and terminal Pending CN111063016A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911407678.0A CN111063016A (en) 2019-12-31 2019-12-31 Multi-depth lens face modeling method and system, storage medium and terminal

Publications (1)

Publication Number Publication Date
CN111063016A (en) 2020-04-24

Family

ID=70305338

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911407678.0A Pending CN111063016A (en) 2019-12-31 2019-12-31 Multi-depth lens face modeling method and system, storage medium and terminal

Country Status (1)

Country Link
CN (1) CN111063016A (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013074153A1 (en) * 2011-11-17 2013-05-23 University Of Southern California Generating three dimensional models from range sensor data
CN103281507A (en) * 2013-05-06 2013-09-04 上海大学 Videophone system and videophone method based on true three-dimensional display
CN104794722A (en) * 2015-04-30 2015-07-22 浙江大学 Dressed human body three-dimensional bare body model calculation method through single Kinect
CN104881630A (en) * 2015-03-31 2015-09-02 浙江工商大学 Vehicle identification method based on window segmentation and fuzzy characteristics
CN105180830A (en) * 2015-09-28 2015-12-23 浙江大学 Automatic three-dimensional point cloud registration method applicable to ToF (Time of Flight) camera and system
CN107170043A (en) * 2017-06-19 2017-09-15 电子科技大学 A kind of three-dimensional rebuilding method
CN107945268A (en) * 2017-12-15 2018-04-20 深圳大学 A kind of high-precision three-dimensional method for reconstructing and system based on binary area-structure light
CN109242961A (en) * 2018-09-26 2019-01-18 北京旷视科技有限公司 A kind of face modeling method, apparatus, electronic equipment and computer-readable medium
CN109903368A (en) * 2017-12-08 2019-06-18 浙江舜宇智能光学技术有限公司 Three-dimensional facial reconstruction system and its three-dimensional facial reconstruction method based on depth information
CN109978984A (en) * 2017-12-27 2019-07-05 Tcl集团股份有限公司 Face three-dimensional rebuilding method and terminal device
CN109979013A (en) * 2017-12-27 2019-07-05 Tcl集团股份有限公司 Three-dimensional face chart pasting method and terminal device
CN110363858A (en) * 2019-06-18 2019-10-22 新拓三维技术(深圳)有限公司 A kind of three-dimensional facial reconstruction method and system
CN110544233A (en) * 2019-07-30 2019-12-06 北京的卢深视科技有限公司 Depth image quality evaluation method based on face recognition application
CN110579180A (en) * 2019-08-07 2019-12-17 合肥学院 Light vision and conoscopic polarization group sum-based reflective curved surface part measurement method


Similar Documents

Publication Publication Date Title
WO2018153311A1 (en) Virtual reality scene-based business verification method and device
CN107563304B (en) Terminal equipment unlocking method and device and terminal equipment
CN109557669B (en) Method for determining image drift amount of head-mounted display equipment and head-mounted display equipment
CN107527046B (en) Unlocking control method and related product
CN110047100A (en) Depth information detection method, apparatus and system
CN113280752B (en) Groove depth measurement method, device and system and laser measurement equipment
KR102463172B1 (en) Method and apparatus for determining inter-pupilary distance
CN106991378B (en) Depth-based face orientation detection method and device and electronic device
CN107863678A (en) Laser safety control method and device based on range sensor
CN111290580B (en) Calibration method based on sight tracking and related device
CN111192329B (en) Sensor calibration result verification method and device and storage medium
CN109186942A (en) The test parallelism detection method, apparatus and readable storage medium storing program for executing of structure light video camera head
CN108537103B (en) Living body face detection method and device based on pupil axis measurement
CN115908720A (en) Three-dimensional reconstruction method, device, equipment and storage medium
CN106646876A (en) Head-mounted display system and safety prompting method thereof
CN106991376B (en) Depth information-combined side face verification method and device and electronic device
CN113711229B (en) Control method of electronic device, and computer-readable storage medium
CN113034684B (en) Three-dimensional reconstruction method, electronic device, and computer-readable storage medium
US20210327083A1 (en) Systems and methods of measuring an object in a scene of a captured image
CN110260823A (en) A kind of structured light control method, device and computer equipment
US20220202297A1 (en) Method and Apparatus for Processing Blood Pressure Measurement, and Electronic Device
CN112164099A (en) Self-checking and self-calibrating method and device based on monocular structured light
CN111160233B (en) Human face in-vivo detection method, medium and system based on three-dimensional imaging assistance
CN112504473A (en) Fire detection method, device, equipment and computer readable storage medium
CN111063016A (en) Multi-depth lens face modeling method and system, storage medium and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination