CN109003327A - Image processing method, device, computer equipment and storage medium - Google Patents


Info

Publication number
CN109003327A
Authority
CN
China
Prior art keywords
vector
transformation
face
depth
image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810698470.8A
Other languages
Chinese (zh)
Other versions
CN109003327B (en)
Inventor
周衍鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN201810698470.8A priority Critical patent/CN109003327B/en
Priority to PCT/CN2018/106427 priority patent/WO2020000696A1/en
Publication of CN109003327A publication Critical patent/CN109003327A/en
Application granted granted Critical
Publication of CN109003327B publication Critical patent/CN109003327B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/593 Depth or shape recovery from multiple images from stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2004 Aligning objects, relative positioning of parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Architecture (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Hardware Design (AREA)
  • Artificial Intelligence (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

This application discloses an image processing method, an image processing apparatus, a computer device and a storage medium, which effectively increase the amount of realistic three-dimensional face data obtained by augmentation. The method includes: obtaining first depth face image data and second depth face image data; performing coordinate conversion on the first and second depth face image data to obtain a first vector of a first face and a second vector of a second face; determining a first transformation matrix according to the first vector and a first transformation parameter, and a second transformation matrix according to the second vector and a second transformation parameter; performing vector transformation on the first vector according to the first transformation matrix to obtain a first difference vector, and on the second vector according to the second transformation matrix to obtain a second difference vector; determining a target difference vector according to the first difference vector and the second difference vector; and using the target difference vector as new depth face image data.

Description

Image processing method, device, computer equipment and storage medium
Technical field
This application relates to the field of image processing, and in particular to an image processing method, an image processing apparatus, a computer device and a storage medium.
Background art
With the rapid development of face recognition technology, requirements on the accuracy and speed of face recognition are becoming higher and higher, and the field has evolved from two-dimensional face recognition to three-dimensional face recognition. As a representation of the face in real space, three-dimensional face data carries more information about the face than two-dimensional face data.
However, public databases of depth face images are still very scarce, while training complex algorithms such as deep learning models on depth face image data usually requires a large amount of such data, so data augmentation is usually performed on existing depth face image data before training. One conventional data augmentation technique is based on a linear face model, in which the linear model is a linear average face produced by aligning and averaging multiple three-dimensional faces. Such a linear model, however, is suited to processing two-dimensional face data rather than three-dimensional face data: the three-dimensional face data produced by linear-model augmentation is merely linear average face data, part of the face data is lost, the augmented faces are distorted, and the amount of realistic three-dimensional face data obtained by augmentation is small.
Summary of the invention
In view of the above technical problems, it is necessary to provide an image processing method, an image processing apparatus, a computer device and a storage medium that can effectively increase the amount of realistic three-dimensional face data obtained by augmentation.
A kind of image processing method, comprising:
Obtaining first depth face image data and second depth face image data, wherein the first depth face image data and the second depth face image data are the depth face image data of a first face and of a second face, respectively;
Performing coordinate conversion on the first depth face image data to obtain a first vector of the first face, and performing coordinate conversion on the second depth face image data to obtain a second vector of the second face;
Determining a first transformation matrix according to the first vector and a first transformation parameter, and determining a second transformation matrix according to the second vector and a second transformation parameter;
Performing vector transformation on the first vector according to the first transformation matrix to obtain a first difference vector, and performing vector transformation on the second vector according to the second transformation matrix to obtain a second difference vector;
Determining a target difference vector according to the first difference vector and the second difference vector;
Using the target difference vector as new depth face image data.
An image processing apparatus, comprising:
An obtaining module, configured to obtain first depth face image data and second depth face image data, wherein the first depth face image data and the second depth face image data are the depth face image data of a first face and of a second face, respectively;
A conversion module, configured to perform coordinate conversion on the first depth face image data obtained by the obtaining module to obtain a first vector of the first face, and to perform coordinate conversion on the second depth face image data obtained by the obtaining module to obtain a second vector of the second face;
A first determining module, configured to determine a first transformation matrix according to the first vector converted by the conversion module and a first transformation parameter, and to determine a second transformation matrix according to the second vector converted by the conversion module and a second transformation parameter;
A transformation module, configured to perform vector transformation on the first vector according to the first transformation matrix determined by the first determining module to obtain a first difference vector, and to perform vector transformation on the second vector according to the second transformation matrix determined by the first determining module to obtain a second difference vector;
A second determining module, configured to determine a target difference vector according to the first difference vector and the second difference vector obtained by the transformation module;
A third determining module, configured to use the target difference vector as new depth face image data.
A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the steps of the above image processing method are implemented when the processor executes the computer program.
A computer readable storage medium storing a computer program, wherein the steps of the above image processing method are implemented when the computer program is executed by a processor.
The present application proposes an image processing method and a corresponding image processing apparatus, computer device and storage medium. In the image processing method, a pair of depth face image data is first obtained and converted into the vectors corresponding to the depth face image data; the vectors corresponding to this pair of depth face image data are then subjected to vector transformation through transformation matrices, and the pair of vectors obtained after the vector transformation yields new three-dimensional face image data. In other words, in this solution the transformation is performed on the three-dimensional coordinates of the depth face image data, and the processing converts the three-dimensional coordinate data of the two depth face images jointly, rather than simply applying linear operations such as alignment and averaging to a single depth face image. This significantly reduces the loss of face data, reduces facial distortion, keeps the face data realistic, extends the quantity of face data, and effectively increases the amount of realistic depth face image data obtained by augmentation.
Brief description of the drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the accompanying drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic diagram of an application environment of the image processing method in an embodiment of the present application;
Fig. 2 is a schematic flowchart of the image processing method in an embodiment of the present application;
Fig. 3 is a schematic structural diagram of the image processing apparatus in an embodiment of the present application;
Fig. 4 is a schematic structural diagram of the computer device in an embodiment of the present application.
Detailed description of the embodiments
The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the scope of protection of the present application.
The image processing method provided by the present application, and the corresponding image processing apparatus, computer device and storage medium, can be applied in the application environment shown in Fig. 1, in which the image processing apparatus is used to implement the above image processing method. In this image processing method, a large amount of depth face image data is obtained and processed pairwise, for example by pairwise transformation calculation, to obtain new three-dimensional face data, rather than a linear average face produced by simple operations such as alignment and averaging. The resulting three-dimensional face data shows little facial distortion, the face data is realistic, and the quantity of data can be extended, achieving the purpose of data augmentation. In addition, the server may be an independent server or a server cluster composed of multiple servers, which is not specifically limited here. The image processing method and the corresponding image processing apparatus, computer device and storage medium of the present application are described below through specific embodiments.
In an embodiment, referring to Fig. 2, which is a schematic flowchart of the image processing method in an embodiment of the present application, the method includes the following steps:
S10: obtain first depth face image data and second depth face image data, wherein the first depth face image data and the second depth face image data are the depth face image data of a first face and of a second face, respectively;
In this solution, the first depth face image data and the second depth face image data are a kind of three-dimensional image data, being the depth face image data of the first face and of the second face, respectively. In a practical application of this solution, a depth camera may be used to capture a large amount of depth face image data to build a preset depth face image database, and the first depth face image data and the second depth face image data are then obtained at random from this preset depth face image database; that is, they may be any two items of depth face image data in the database. A depth camera is an image sensor capable of observing the position of objects or persons in space. Specifically, the depth camera may be an active, passive, contact or non-contact depth camera, where an active camera emits an energy beam (such as laser, electromagnetic wave or ultrasonic wave) towards the target object, a passive depth camera mainly images the target object under the conditions of its ambient environment, a contact depth camera needs to be in contact with or very close to the target object, and a non-contact camera does not need to touch the target object. Illustratively, the depth camera may be a TOF (time-of-flight) depth camera, and may also be a Kinect depth camera, an XTion depth camera or a RealSense depth camera, which is not specifically limited here.
S20: perform coordinate conversion on the first depth face image data to obtain a first vector of the first face, and perform coordinate conversion on the second depth face image data to obtain a second vector of the second face;
It should be noted that, for points in a real scene, each frame of depth face image scanned by the depth camera includes not only the colour RGB image of each point of the depth face image, but also the distance from each point of the depth face image to the vertical plane in which the depth camera lies. This distance is called the depth value (depth), and these depth values together constitute the depth face image of the frame. In other words, the depth face image can be regarded as a grayscale image in which the gray value of each point represents the depth value of that point, i.e. the actual distance from the position of that point in reality to the vertical plane of the camera. For example, for a point M in the real scene, the depth camera can obtain its imaging point X_M in the RGB image and the distance from M to the vertical plane of the depth camera (the plane spanned by the X and Y axes); this distance is the depth value of M. Taking the position of the depth camera as the origin, the direction the depth camera faces as the Z axis, and the two axes of the vertical plane of the depth camera as the X and Y axes, a local three-dimensional coordinate system of the depth camera can be established. That is, the depth face image acquired with the depth camera is a kind of three-dimensional face image data; using this three-dimensional face image data and a preset geometric formula, for example a triangle geometric formula, the three-dimensional coordinates of M in the local coordinate system of the depth camera can be obtained. In short, each pixel of the depth face image data carries its coordinate values under the local coordinate system, and the depth face image data reflects the three-dimensional information of the face surface.
As can be seen from the above, each point in the RGB image corresponds to a three-dimensional point in the local coordinate system of the depth camera, and each frame of depth face image data of the depth camera is equivalent to a point cloud model in the local three-dimensional coordinate system of the depth camera. Therefore, in this solution, the first depth face image data may be three-dimensional points in the local coordinate system of the depth camera, with corresponding three-dimensional coordinates. It should further be noted that, since the point cloud model corresponding to each frame of depth face image data lies in the local three-dimensional coordinate system of the camera, different depth camera positions (i.e. different frames) correspond to different local three-dimensional coordinate systems, whereas in this solution the final depth three-dimensional face image data needs to lie in a single coordinate system, such as a world coordinate system or a global coordinate system. Therefore, in this solution, the obtained first depth face image data is converted into the first vector of the corresponding first face, and likewise the second depth face image data is converted into the second vector of the corresponding second face; that is, the first vector is the three-dimensional coordinate data corresponding to the first depth face image data, and the second vector is the three-dimensional coordinate data corresponding to the second depth face image data.
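As an informal illustration of this kind of coordinate conversion (not part of the claimed method itself), the following Python sketch back-projects a depth map into a point cloud under an assumed pinhole camera model; the focal lengths fx, fy and principal point cx, cy are hypothetical camera intrinsics, not values given in this application.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Convert a depth map (H x W, metres) into an N x 3 point cloud in the
    camera's local coordinate system, assuming a pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth
    x = (u - cx) * z / fx                            # back-project along X
    y = (v - cy) * z / fy                            # back-project along Y
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]                  # drop pixels with no depth

# Hypothetical usage: a 480 x 640 depth frame with made-up intrinsics.
depth_frame = np.random.uniform(0.4, 1.2, size=(480, 640))
cloud = depth_to_point_cloud(depth_frame, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)   # (307200, 3): one 3D point per valid pixel
```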
It should be explained here that there is no limitation on the chronological order in which the first vector and the second vector are determined; it suffices that coordinate conversion can be performed after the first depth face image data is obtained to obtain the first vector, and likewise that coordinate conversion can be performed after the second depth face image data is obtained to obtain the second vector.
S30: determine a first transformation matrix according to the first vector and a first transformation parameter, and determine a second transformation matrix according to the second vector and a second transformation parameter;
In this solution, after the first vector corresponding to the first depth face image data and the second vector corresponding to the second depth face image data are obtained, a first transformation parameter and a second transformation parameter are determined, and the corresponding transformation matrices are determined for the first vector and the second vector respectively; namely, the first transformation matrix is determined according to the first vector and the first transformation parameter, and the second transformation matrix is determined according to the second vector and the second transformation parameter.
It should be explained here that there is no limitation on the chronological order in which the first transformation matrix and the second transformation matrix are determined; it suffices that the first transformation matrix can be determined after the first transformation parameter and the first vector have been determined, and likewise that the second transformation matrix can be determined after the second transformation parameter and the second vector have been determined.
S40: perform vector transformation on the first vector according to the first transformation matrix to obtain a first difference vector, and perform vector transformation on the second vector according to the second transformation matrix to obtain a second difference vector;
In this solution, after the first transformation matrix and the second transformation matrix have been determined, the first vector is further transformed according to the first transformation matrix, and the second vector is further transformed according to the second transformation matrix, so as to obtain the corresponding first difference vector and second difference vector. Here the first difference vector is the quantity required to transform the first vector into the second vector, i.e. a representation of the distance from the first vector to the second vector, and the second difference vector is the quantity required to transform the second vector into the first vector, i.e. a representation of the distance from the second vector to the first vector.
It should be explained here that there is no limitation on the chronological order of the vector transformations of the first vector and the second vector; it suffices that vector transformation can be performed on the first vector according to the first transformation matrix to obtain the first difference vector after the first transformation matrix has been determined, and likewise that vector transformation can be performed on the second vector according to the second transformation matrix to obtain the second difference vector after the second transformation matrix has been determined.
S50: determine a target difference vector according to the first difference vector and the second difference vector;
In this solution, after the first difference vector and the second difference vector have been determined, the target difference vector is further determined according to the first difference vector and the second difference vector.
S60: use the target difference vector as new depth face image data.
In this solution, the obtained target difference vector can be used as new depth face image data, thereby achieving data augmentation of the depth face image data. Illustratively, 1000 items of depth face image data yield 499500 pairwise combinations; 100,000 of these can be selected and the data augmentation method provided by this solution applied to them, so as to augment the depth face images and achieve the purpose of image augmentation.
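As a quick illustration of the pair count mentioned above (a hedged sketch, not part of the claimed method), the following Python snippet enumerates and samples pairs of depth face images; the dataset size and sample size are simply the example figures from the preceding paragraph.

```python
import itertools
import random

num_images = 1000
pairs = list(itertools.combinations(range(num_images), 2))
print(len(pairs))                      # 499500 = 1000 * 999 / 2 pairwise combinations

random.seed(0)
sampled_pairs = random.sample(pairs, 100_000)   # pairs actually fed to augmentation
```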
To sum up, in this solution, first depth face image data and second depth face image data are obtained, where the first depth face image data and the second depth face image data are the depth face image data of a first face and of a second face, respectively; coordinate conversion is performed on the first depth face image data to obtain a first vector of the first face, and on the second depth face image data to obtain a second vector of the second face; a first transformation matrix is determined according to the first vector and a first transformation parameter, and a second transformation matrix is determined according to the second vector and a second transformation parameter; vector transformation is performed on the first vector according to the first transformation matrix to obtain a first difference vector, and on the second vector according to the second transformation matrix to obtain a second difference vector; a target difference vector is determined according to the first difference vector and the second difference vector; and the target difference vector is used as new three-dimensional face image data. That is, a pair of depth face image data is first obtained and converted into the corresponding vectors, the vectors are transformed through the transformation matrices, and the pair of vectors obtained after the vector transformation yields new three-dimensional face image data. In other words, in this solution the transformation is performed on the three-dimensional coordinates of the depth face image data, and the processing converts the three-dimensional coordinate data of the two depth face images jointly, rather than simply applying linear transformation operations such as alignment and averaging to a single depth face image. This significantly reduces the loss of face data, reduces facial distortion, keeps the face data realistic, extends the quantity of face data, and effectively increases the amount of realistic depth face image data obtained by augmentation.
In an embodiment, determining the first transformation matrix according to the first vector and the first transformation parameter in step S30 specifically includes:
Determining the first transformation matrix according to the following formula:
wherein A_1 = ||F_i^a - F_i^b||^2 log||F_i^a - F_i^b||, a, b ∈ [1, n], B_1 = [1, x_p, y_p, z_p], T_1 is the first transformation parameter, M_1 is the first transformation matrix, F_i is the first vector, F_i = [x_p, y_p, z_p], p ∈ [1, n], and n is a positive integer.
It should further be noted that in some embodiments the first transformation parameter T_1 is a random number, which is not specifically limited. It can be seen that this embodiment provides a specific way of determining the first transformation matrix according to the first vector and the first transformation parameter, which improves the practicability of the solution.
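The formula that combines A_1, B_1 and T_1 into M_1 appears as an image in the original publication and is not reproduced in this text, so the Python sketch below only builds the quantities that are explicitly defined in the embodiment: the kernel matrix A_1 with entries ||F_i^a - F_i^b||^2 log||F_i^a - F_i^b|| (the thin-plate-spline radial basis) and the matrix B_1 whose rows are [1, x_p, y_p, z_p]. How these are assembled into M_1 is left as an assumption.

```python
import numpy as np

def tps_kernel_terms(F):
    """Build the quantities named in the embodiment for one face vector F
    (an n x 3 array of 3D face points). Only A1 and B1 are constructed here;
    the formula combining them with T1 into the transformation matrix M1 is
    shown as an image in the original publication and is not reproduced."""
    n = F.shape[0]
    # Pairwise distances ||F_i^a - F_i^b|| between face points a and b.
    diffs = F[:, None, :] - F[None, :, :]
    r = np.linalg.norm(diffs, axis=-1)
    # A1 entries: r^2 * log(r), the thin-plate-spline radial basis
    # (set to 0 on the diagonal, where r = 0).
    with np.errstate(divide="ignore", invalid="ignore"):
        A1 = np.where(r > 0, r**2 * np.log(r), 0.0)
    # B1 rows: [1, x_p, y_p, z_p] for each point p.
    B1 = np.hstack([np.ones((n, 1)), F])
    return A1, B1

# Hypothetical usage on a toy "face" of 5 points.
F_i = np.random.rand(5, 3)
A1, B1 = tps_kernel_terms(F_i)
print(A1.shape, B1.shape)   # (5, 5) (5, 4)
```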
In an embodiment, performing vector transformation on the first vector according to the first transformation matrix to obtain the first difference vector in step S40 specifically includes:
Performing vector transformation on the first vector according to the following formula to obtain the first difference vector:
wherein M_1 is the first transformation matrix and α_ij is the first difference vector.
This provides a way of performing vector transformation on the first vector to obtain the first difference vector, which improves the practicability of the solution.
In an embodiment, determining the second transformation matrix according to the second vector and the second transformation parameter in step S30 specifically includes:
Determining the second transformation matrix according to the following formula:
wherein A_2 = ||F_j^a' - F_j^b'||^2 log||F_j^a' - F_j^b'||, a', b' ∈ [1, m], B_2 = [1, x_k, y_k, z_k], T_2 is the second transformation parameter, M_2 is the second transformation matrix, F_j is the second vector, F_j = [x_k, y_k, z_k], k ∈ [1, m], and m is a positive integer.
It should be noted that in some embodiments the second transformation parameter T_2 is a random number, which is not specifically limited. It can be seen that this embodiment provides a specific way of determining the second transformation matrix according to the second vector and the second transformation parameter, which improves the practicability of the solution.
In an embodiment, performing vector transformation on the second vector according to the second transformation matrix to obtain the second difference vector in step S40 specifically includes:
Performing vector transformation on the second vector according to the following formula to obtain the second difference vector:
wherein M_2 is the second transformation matrix and α_ji is the second difference vector.
This provides a way of performing vector transformation on the second vector to obtain the second difference vector, which improves the practicability of the solution. It can be seen that, in this solution, the second transformation matrix can be obtained in the same way as the first transformation matrix, and the vector transformation of the second vector to obtain the second difference vector mirrors the vector transformation of the first vector according to the first transformation matrix to obtain the first difference vector.
It should be noted that in some embodiments the above T_2 and T_1 are the same transformation parameter; for example, the first transformation parameter may be 0.5, 1, 1.5, etc., and the second transformation parameter may be 0.5, 1, 1.5, etc., which is also not specifically limited.
It should further be noted that, in addition to the above ways of determining the first transformation matrix and the second transformation matrix, there are other ways of determining them, for example:
Determining the first transformation matrix according to the following formula:
wherein A_1 = ||F_i^a - F_i^b||^2 log||F_i^a - F_i^b||, a, b ∈ [1, n], B_1 = [1, x_p, y_p, z_p], T_1 is the first transformation parameter, M_1' is the first transformation matrix, F_i is the first vector, F_i = [x_p, y_p, z_p], p ∈ [1, n], and n is a positive integer.
Determining the second transformation matrix according to the following formula:
wherein A_2 = ||F_j^a' - F_j^b'||^2 log||F_j^a' - F_j^b'||, a', b' ∈ [1, m], B_2 = [1, x_k, y_k, z_k], T_2 is the second transformation parameter, M_2' is the second transformation matrix, F_j is the second vector, F_j = [x_k, y_k, z_k], k ∈ [1, m], and m is a positive integer.
In an embodiment, step S50, i.e. determining the target difference vector according to the first difference vector and the second difference vector, specifically includes:
Determining the target difference vector according to the following formula:
wherein s(i, j) is the target difference vector, α_ij is the first difference vector, and α_ji is the second difference vector.
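The formula that combines α_ij and α_ji into s(i, j) is given as an image in the original publication and is not reproduced in this text. Purely as a hedged illustration of how the two difference vectors of an image pair could be combined into one new sample, the Python sketch below averages them; the averaging is an assumption, not the formula actually claimed.

```python
import numpy as np

def combine_difference_vectors(alpha_ij, alpha_ji):
    """Combine the two difference vectors of an image pair into a target
    difference vector. Averaging is only an assumed stand-in: the exact
    formula for s(i, j) appears as an image in the original publication."""
    return 0.5 * (alpha_ij + alpha_ji)

# Hypothetical usage with toy difference vectors over 5 face points.
alpha_ij = np.random.rand(5, 3)   # first difference vector
alpha_ji = np.random.rand(5, 3)   # second difference vector
s_ij = combine_difference_vectors(alpha_ij, alpha_ji)
print(s_ij.shape)                 # (5, 3): used as new depth face image data
```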
It should be noted that, besides determining the target difference vector from the first difference vector and the second difference vector in the above way, other ways are possible and are not specifically limited, for example:
Determining a parameter difference vector according to the following formula:
wherein s'(i, j) is the parameter difference vector, α_ij is the first difference vector, and α_ji is the second difference vector; the parameter difference vector is then subjected to an affine transformation to obtain the above target difference vector, where an affine transformation is the process of applying a linear transformation to a vector space followed by a translation, transforming it into another vector space.
In an embodiment, the first face and the second face are the faces of the same person. Applying the above image processing method to depth face image data of the same person makes the new depth face image data finally obtained more realistic, since the data belongs to a single individual.
In conclusion present applicant proposes a kind of image processing methods, acquire before this one group of depth face image data with The corresponding vector of depth face image data is obtained, then by transformation matrix to the corresponding vector of this group of depth face image data Vector transformation is carried out, new three-dimensional face images data will be obtained according to one group of vector after vector transformation, that is to say, that The three-dimensional coordinate for combining depth face image data is converted, and is no longer simply to be aligned, the operations production such as averaging Linear averaging face out, obtained new human face data face distortion is less, and human face data is truer, and can be real The extension for having showed quantity achievees the purpose that data augmentation.
It should be understood that the size of the sequence numbers of the steps in the above embodiments does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present application.
In an embodiment, an image processing apparatus is provided, and the image processing apparatus corresponds one-to-one to the image processing method in the above embodiment. As shown in Fig. 3, the image processing apparatus includes an obtaining module 301, a conversion module 302, a first determining module 303, a transformation module 304, a second determining module 305 and a third determining module 306. Each functional module is described in detail as follows:
An obtaining module 301, configured to obtain first depth face image data and second depth face image data, wherein the first depth face image data and the second depth face image data are the depth face image data of a first face and of a second face, respectively;
A conversion module 302, configured to perform coordinate conversion on the first depth face image data obtained by the obtaining module 301 to obtain a first vector of the first face, and to perform coordinate conversion on the second depth face image data obtained by the obtaining module 301 to obtain a second vector of the second face;
A first determining module 303, configured to determine a first transformation matrix according to the first vector converted by the conversion module 302 and a first transformation parameter, and to determine a second transformation matrix according to the second vector converted by the conversion module 302 and a second transformation parameter;
A transformation module 304, configured to perform vector transformation on the first vector according to the first transformation matrix determined by the first determining module 303 to obtain a first difference vector, and to perform vector transformation on the second vector according to the second transformation matrix determined by the first determining module 303 to obtain a second difference vector;
A second determining module 305, configured to determine a target difference vector according to the first difference vector and the second difference vector obtained by the transformation module 304;
A third determining module 306, configured to use the target difference vector determined by the second determining module 305 as new three-dimensional face image data.
In an embodiment, the first determining module 303 is specifically configured to:
Determine the first transformation matrix according to the following formula:
wherein B_1 = [1, x_p, y_p, z_p], T_1 is the first transformation parameter, M_1 is the first transformation matrix, F_i is the first vector, F_i = [x_p, y_p, z_p], p ∈ [1, n], and n is a positive integer.
In an embodiment, the transformation module 304 is specifically configured to:
Perform vector transformation on the first vector according to the following formula to obtain the first difference vector:
wherein M_1 is the first transformation matrix and α_ij is the first difference vector.
In an embodiment, the first determining module 303 is further specifically configured to:
Determine the above second transformation matrix according to the following formula:
wherein A_2 = ||F_j^a' - F_j^b'||^2 log||F_j^a' - F_j^b'||, a', b' ∈ [1, m], B_2 = [1, x_k, y_k, z_k], T_2 is the second transformation parameter, M_2 is the second transformation matrix, F_j is the second vector, F_j = [x_k, y_k, z_k], k ∈ [1, m], and m is a positive integer.
In an embodiment, the transformation module 304 is further specifically configured to:
Perform vector transformation on the second vector according to the following formula to obtain the second difference vector:
wherein M_2 is the second transformation matrix and α_ji is the second difference vector.
In an embodiment, the second determining module 305 is specifically configured to:
Determine the target difference vector according to the following formula:
wherein s(i, j) is the target difference vector, α_ij is the first difference vector, and α_ji is the second difference vector.
In an embodiment, the first face and the second face are the faces of the same person.
It can be seen that the present application proposes an image processing apparatus. The image processing apparatus first obtains a pair of depth face image data and the corresponding vectors, then performs vector transformation on the vectors corresponding to this pair of depth face image data through the transformation matrices, and obtains new three-dimensional face image data from the pair of vectors after the vector transformation. That is to say, in this solution the transformation is performed on the three-dimensional coordinates of the depth face image data, the processing converts the three-dimensional coordinate data of the two depth face images jointly, and it is not simply a linear transformation operation such as alignment and averaging applied to a single depth face image. This significantly reduces the loss of face data, reduces facial distortion, keeps the face data realistic, extends the quantity of face data, and effectively increases the amount of realistic depth face image data obtained by augmentation.
For a specific definition of the image processing apparatus, reference may be made to the above definition of the image processing method, which is not repeated here. Each module in the above image processing apparatus may be implemented in whole or in part by software, hardware or a combination thereof. The above modules may be embedded in or independent of a processor in the computer device in the form of hardware, or may be stored in a memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
In an embodiment, a computer device is provided; the computer device may be a server, and its internal structure may be as shown in Fig. 4. The computer device includes a processor, a memory, a network interface and a database connected through a system bus. The processor of the computer device is used to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store the obtained first and second depth face image data, as well as the calculated transformation matrices, transformation parameters and the like. The network interface of the computer device is used to communicate with an external computer device through a network connection. The computer program, when executed by the processor, implements an image processing method.
In an embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the following steps when executing the computer program:
Obtaining first depth face image data and second depth face image data, wherein the first depth face image data and the second depth face image data are the depth face image data of a first face and of a second face, respectively;
Performing coordinate conversion on the first depth face image data to obtain a first vector of the first face, and performing coordinate conversion on the second depth face image data to obtain a second vector of the second face;
Determining a first transformation matrix according to the first vector and a first transformation parameter, and determining a second transformation matrix according to the second vector and a second transformation parameter;
Performing vector transformation on the first vector according to the first transformation matrix to obtain a first difference vector, and performing vector transformation on the second vector according to the second transformation matrix to obtain a second difference vector;
Determining a target difference vector according to the first difference vector and the second difference vector;
Using the target difference vector as new depth face image data.
It should be noted that the processor in the computer device can implement other steps when executing the computer program; for those other steps, reference may be made to the description of the method embodiments in the foregoing image processing method, which is not repeated here.
It can be seen that the present application proposes a computer device that transforms the three-dimensional coordinates of depth face image data; during processing, the three-dimensional coordinate data of the two depth face images is converted jointly, rather than simply applying linear transformation operations such as alignment and averaging to a single depth face image, which significantly reduces the loss of face data, reduces facial distortion, keeps the face data realistic, extends the quantity of face data, and effectively increases the amount of realistic depth face image data obtained by augmentation.
In an embodiment, a computer readable storage medium is provided, on which a computer program is stored, and the computer program implements the following steps when executed by a processor:
Obtaining first depth face image data and second depth face image data, wherein the first depth face image data and the second depth face image data are the depth face image data of a first face and of a second face, respectively;
Performing coordinate conversion on the first depth face image data to obtain a first vector of the first face, and performing coordinate conversion on the second depth face image data to obtain a second vector of the second face;
Determining a first transformation matrix according to the first vector and a first transformation parameter, and determining a second transformation matrix according to the second vector and a second transformation parameter;
Performing vector transformation on the first vector according to the first transformation matrix to obtain a first difference vector, and performing vector transformation on the second vector according to the second transformation matrix to obtain a second difference vector;
Determining a target difference vector according to the first difference vector and the second difference vector;
Using the target difference vector as new depth face image data.
It should be noted that the computer program on the computer medium can implement other steps when executed by the processor; for those other steps, reference may be made to the description of the method embodiments in the foregoing image processing method, which is not repeated here.
It can be seen that this solution proposes a computer storage medium. When the computer program stored in the computer medium is executed by a processor, the transformation is performed on the three-dimensional coordinates of the depth face image data; during processing, the three-dimensional coordinate data of the two depth face images is converted jointly, rather than simply applying linear transformation operations such as alignment and averaging to a single depth face image, which significantly reduces the loss of face data, reduces facial distortion, keeps the face data realistic, extends the quantity of face data, and effectively increases the amount of realistic depth face image data obtained by augmentation.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be implemented by instructing the relevant hardware through a computer program, and the computer program can be stored in a non-volatile computer readable storage medium; when executed, the computer program may include the processes of the embodiments of the above methods. Any reference to memory, storage, database or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
Those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional units and modules is used as an example; in practical applications, the above functions may be allocated to different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above.
The above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of their technical features can be replaced with equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all be included within the scope of protection of the present application.

Claims (10)

1. An image processing method, characterized by comprising:
Obtaining first depth face image data and second depth face image data, wherein the first depth face image data and the second depth face image data are the depth face image data of a first face and of a second face, respectively;
Performing coordinate conversion on the first depth face image data to obtain a first vector of the first face, and performing coordinate conversion on the second depth face image data to obtain a second vector of the second face;
Determining a first transformation matrix according to the first vector and a first transformation parameter, and determining a second transformation matrix according to the second vector and a second transformation parameter;
Performing vector transformation on the first vector according to the first transformation matrix to obtain a first difference vector, and performing vector transformation on the second vector according to the second transformation matrix to obtain a second difference vector;
Determining a target difference vector according to the first difference vector and the second difference vector;
Using the target difference vector as new depth face image data.
2. The image processing method according to claim 1, characterized in that determining the first transformation matrix according to the first vector and the first transformation parameter comprises:
Determining the first transformation matrix according to the following formula:
wherein A_1 = ||F_i^a - F_i^b||^2 log||F_i^a - F_i^b||, a, b ∈ [1, n], B_1 = [1, x_p, y_p, z_p], T_1 is the first transformation parameter, M_1 is the first transformation matrix, F_i is the first vector, F_i = [x_p, y_p, z_p], p ∈ [1, n], and n is a positive integer.
3. The image processing method according to claim 2, characterized in that performing vector transformation on the first vector according to the first transformation matrix to obtain the first difference vector comprises:
Performing vector transformation on the first vector according to the following formula to obtain the first difference vector:
wherein M_1 is the first transformation matrix, and α_ij is the first difference vector.
4. The image processing method according to any one of claims 1 to 3, characterized in that determining the second transformation matrix according to the second vector and the second transformation parameter comprises:
Determining the above second transformation matrix according to the following formula:
wherein A_2 = ||F_j^a' - F_j^b'||^2 log||F_j^a' - F_j^b'||, a', b' ∈ [1, m], B_2 = [1, x_k, y_k, z_k], T_2 is the second transformation parameter, M_2 is the second transformation matrix, F_j is the second vector, F_j = [x_k, y_k, z_k], k ∈ [1, m], and m is a positive integer.
5. The image processing method according to claim 4, characterized in that performing vector transformation on the second vector according to the second transformation matrix to obtain the second difference vector comprises:
Performing vector transformation on the second vector according to the following formula to obtain the second difference vector:
wherein M_2 is the second transformation matrix, and α_ji is the second difference vector.
6. The image processing method according to claim 5, characterized in that determining the target difference vector according to the first difference vector and the second difference vector specifically comprises:
Determining the target difference vector according to the following formula:
wherein s(i, j) is the target difference vector, α_ij is the first difference vector, and α_ji is the second difference vector.
7. The image processing method according to claim 6, characterized in that the first face and the second face are the faces of the same person.
8. An image processing apparatus, characterized by comprising:
An obtaining module, configured to obtain first depth face image data and second depth face image data, wherein the first depth face image data and the second depth face image data are the depth face image data of a first face and of a second face, respectively;
A conversion module, configured to perform coordinate conversion on the first depth face image data obtained by the obtaining module to obtain a first vector of the first face, and to perform coordinate conversion on the second depth face image data obtained by the obtaining module to obtain a second vector of the second face;
A first determining module, configured to determine a first transformation matrix according to the first vector converted by the conversion module and a first transformation parameter, and to determine a second transformation matrix according to the second vector converted by the conversion module and a second transformation parameter;
A transformation module, configured to perform vector transformation on the first vector according to the first transformation matrix determined by the first determining module to obtain a first difference vector, and to perform vector transformation on the second vector according to the second transformation matrix determined by the first determining module to obtain a second difference vector;
A second determining module, configured to determine a target difference vector according to the first difference vector and the second difference vector obtained by the transformation module;
A third determining module, configured to use the target difference vector as new depth face image data.
9. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the image processing method according to any one of claims 1 to 7 when executing the computer program.
10. A computer readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 7.
CN201810698470.8A 2018-06-29 2018-06-29 Image processing method, image processing device, computer equipment and storage medium Active CN109003327B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810698470.8A CN109003327B (en) 2018-06-29 2018-06-29 Image processing method, image processing device, computer equipment and storage medium
PCT/CN2018/106427 WO2020000696A1 (en) 2018-06-29 2018-09-19 Image processing method and apparatus, computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810698470.8A CN109003327B (en) 2018-06-29 2018-06-29 Image processing method, image processing device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109003327A true CN109003327A (en) 2018-12-14
CN109003327B CN109003327B (en) 2022-09-30

Family

ID=64602138

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810698470.8A Active CN109003327B (en) 2018-06-29 2018-06-29 Image processing method, image processing device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN109003327B (en)
WO (1) WO2020000696A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114972008A (en) * 2021-11-04 2022-08-30 华为技术有限公司 Coordinate restoration method and device and related equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011049046A1 (en) * 2009-10-20 2011-04-28 楽天株式会社 Image processing device, image processing method, image processing program, and recording medium
JP2011188007A (en) * 2010-03-04 2011-09-22 Fujitsu Ltd Image processing device, image processing method, and image processing program
CN107066559A (en) * 2017-03-30 2017-08-18 天津大学 A kind of method for searching three-dimension model based on deep learning
CN107330904A (en) * 2017-06-30 2017-11-07 北京金山安全软件有限公司 Image processing method, image processing device, electronic equipment and storage medium
WO2017219391A1 (en) * 2016-06-24 2017-12-28 深圳市唯特视科技有限公司 Face recognition system based on three-dimensional data
CN107967677A (en) * 2017-12-15 2018-04-27 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and computer equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160147491A (en) * 2015-06-15 2016-12-23 한국전자통신연구원 Apparatus and method for 3D model generation
CN106651767A (en) * 2016-12-30 2017-05-10 北京星辰美豆文化传播有限公司 Panoramic image obtaining method and apparatus
CN107341841B (en) * 2017-07-26 2020-11-27 厦门美图之家科技有限公司 Generation method of gradual animation and computing device
CN108122280A (en) * 2017-12-20 2018-06-05 北京搜狐新媒体信息技术有限公司 The method for reconstructing and device of a kind of three-dimensional point cloud

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011049046A1 (en) * 2009-10-20 2011-04-28 楽天株式会社 Image processing device, image processing method, image processing program, and recording medium
JP2011188007A (en) * 2010-03-04 2011-09-22 Fujitsu Ltd Image processing device, image processing method, and image processing program
WO2017219391A1 (en) * 2016-06-24 2017-12-28 深圳市唯特视科技有限公司 Face recognition system based on three-dimensional data
CN107066559A (en) * 2017-03-30 2017-08-18 天津大学 A kind of method for searching three-dimension model based on deep learning
CN107330904A (en) * 2017-06-30 2017-11-07 北京金山安全软件有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN107967677A (en) * 2017-12-15 2018-04-27 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and computer equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JUN-YAN ZHU et al.: "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks", arXiv:1703.10593v4 [cs.CV], arxiv.org/abs/1703.10593v4 *

Also Published As

Publication number Publication date
WO2020000696A1 (en) 2020-01-02
CN109003327B (en) 2022-09-30

Similar Documents

Publication Publication Date Title
US11010924B2 (en) Method and device for determining external parameter of stereoscopic camera
CN110197109B (en) Neural network model training and face recognition method, device, equipment and medium
CN102369413B (en) A method for determining the relative position of a first and a second imaging device and devices therefore
CN112461230A (en) Robot repositioning method and device, robot and readable storage medium
CN109711419A (en) Image processing method, device, computer equipment and storage medium
WO2019049331A1 (en) Calibration device, calibration system, and calibration method
DE112018001050T5 (en) SYSTEM AND METHOD FOR VIRTUALLY ENHANCED VISUAL SIMULTANEOUS LOCALIZATION AND CARTOGRAPHY
CN105654547B (en) Three-dimensional rebuilding method
US11315313B2 (en) Methods, devices and computer program products for generating 3D models
CN111598993A (en) Three-dimensional data reconstruction method and device based on multi-view imaging technology
WO2019230813A1 (en) Three-dimensional reconstruction method and three-dimensional reconstruction device
CN109559349A (en) A kind of method and apparatus for calibration
CN111325828B (en) Three-dimensional face acquisition method and device based on three-dimensional camera
CN109102524B (en) Tracking method and tracking device for image feature points
US9317968B2 (en) System and method for multiple hypotheses testing for surface orientation during 3D point cloud extraction from 2D imagery
CN115345942A (en) Space calibration method and device, computer equipment and storage medium
CN109003327A (en) Image processing method, device, computer equipment and storage medium
CN111105489A (en) Data synthesis method and apparatus, storage medium, and electronic apparatus
RU2384882C1 (en) Method for automatic linking panoramic landscape images
CN116012227A (en) Image processing method, device, storage medium and processor
Morinaga et al. Underwater active oneshot scan with static wave pattern and bundle adjustment
KR20160049639A (en) Stereoscopic image registration method based on a partial linear method
CN114494612A (en) Method, device and equipment for constructing point cloud map
Peng et al. Projective reconstruction with occlusions
US20230237778A1 (en) Real time face swapping system and methods thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant