CN111797797B - Face image processing method, terminal and storage medium based on grid deformation optimization - Google Patents


Info

Publication number: CN111797797B
Authority: CN (China)
Prior art keywords: face image, grid, feature, optimization, lattice
Legal status: Active
Application number: CN202010668700.3A
Other languages: Chinese (zh)
Other versions: CN111797797A (en)
Inventor
解为成
沈琳琳
田怡
Current Assignee
Shenzhen University
Original Assignee
Shenzhen University
Priority date
Filing date
Publication date
Application filed by Shenzhen University
Priority to CN202010668700.3A
Publication of CN111797797A
Application granted
Publication of CN111797797B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems


Abstract

The application discloses a face image processing method, a terminal and a storage medium based on grid deformation optimization. The method comprises the following steps: acquiring a pose face image, and acquiring a first feature lattice of the pose face image and a second feature lattice of a predicted frontal face image corresponding to the pose face image, wherein the first feature lattice comprises first feature points and the second feature lattice comprises second feature points; respectively constructing a first grid network corresponding to the pose face image and a second grid network corresponding to the predicted frontal face image; and optimizing the positions of the first feature points and the first grid network according to the second grid network and the second feature lattice, thereby processing the pose face image into a frontal face image. By converting pose face images into frontal face images, the application enables face recognition technology to recognize pose face images and improves the performance of face recognition systems.

Description

Face image processing method, terminal and storage medium based on grid deformation optimization
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a face image processing method, a terminal, and a storage medium based on grid deformation optimization.
Background
Face recognition technology is widely applied in many fields, but existing systems are often limited to recognizing frontal faces (faces facing the camera directly), which lowers the recognition efficiency of a face recognition system on non-frontal, posed faces.
Accordingly, there is a need for improvement and advancement in the art.
Disclosure of Invention
To address the above defects in the prior art, the application provides a face image processing method, a terminal and a storage medium based on grid deformation optimization, aiming to solve the problem of low recognition efficiency of face recognition technology in the prior art.
In order to solve the technical problems, the technical scheme adopted by the application is as follows:
in a first aspect of the present application, a face image processing method based on mesh deformation optimization is provided, the method comprising:
acquiring a first feature lattice of a pose face image and a second feature lattice of a predicted frontal face image corresponding to the pose face image, wherein the first feature lattice comprises first feature points and the second feature lattice comprises second feature points;
respectively constructing a first grid network corresponding to the first feature lattice and a second grid network corresponding to the second feature lattice;
optimizing the first grid network according to the second grid network, the second feature lattice and the first feature lattice;
and converting the pose face image into a target frontal face image according to the optimized first grid network.
The face image processing method based on grid deformation optimization, wherein the obtaining the second feature lattice of the predicted frontal face image corresponding to the pose face image comprises the following steps:
constructing a face shape database, and acquiring feature vectors of the face shape database;
obtaining the second feature lattice according to a first preset formula;
the first preset formula is as follows:
Q_0 = Ō + Σ_{i=n_0}^{N} ((O − Ō)^T E_i) E_i
where Q_0 is the vector representation of the second feature lattice, E_i is the i-th feature vector of the face shape database, Ō is the average shape of the face shape database, O is the face shape of the pose face image, N is the total number of feature vectors, n_0 is a constant, and n_0 − 1 is the number of feature vectors removed.
The face image processing method based on grid deformation optimization, wherein before the first grid network corresponding to the first feature lattice and the second grid network corresponding to the second feature lattice are respectively constructed, comprises the following steps:
expanding the number of the first feature points in the first feature lattice and the number of the second feature points in the second feature lattice.
The face image processing method based on grid deformation optimization, wherein the respectively constructing a first grid network corresponding to the pose face image and a second grid network corresponding to the predicted frontal face image comprises the following steps:
respectively constructing the first grid network and the second grid network according to a second preset formula;
wherein, the second preset formula is:
P_{i+1,j} + P_{i-1,j} + P_{i,j+1} + P_{i,j-1} − 4P_{i,j} = 0,  i = 0, …, N_u;  j = 0, …, N_v
where P_{i,j} is the grid point in row i and column j of the grid, N_u + 1 is the number of rows of the grid, and N_v + 1 is the number of columns of the grid.
The method for processing the face image based on the grid deformation optimization, wherein the optimizing the positions of the first feature points and the first grid network according to the second grid network, the second feature lattice and the first feature lattice comprises the following steps:
performing primary optimization on the first grid network according to a third preset formula;
re-optimizing the first grid network subjected to the primary optimization according to a first optimization function, a second optimization function and a third optimization function;
wherein, the third preset formula is:
where P_{i,j} and P′_{i,j} are grid points of the first grid network and the second grid network, respectively, and Q_t and Q′_t are, respectively, the t-th first feature point in the mesh cell starting from grid point P_{i,j} of the first grid network and the t-th second feature point in the mesh cell starting from grid point P′_{i,j} of the second grid network;
the first optimization function, the second optimization function and the third optimization function are respectively constructed based on smoothness, translation invariance and face bilateral symmetry.
The face image processing method based on grid deformation optimization, wherein the first optimization function is as follows:
E_TPS(z(P_{i,j})) = (zx″_{uu})² + 2(zx″_{uv})² + (zx″_{vv})² + (zy″_{uu})² + 2(zy″_{uv})² + (zy″_{vv})²
where z(P_{i,j}) = (zx, zy) is the offset of grid point P_{i,j}, zx is its offset in the u direction, zy is its offset in the v direction, and zx″_{uv} denotes the second-order partial derivative of zx with respect to the u and v directions;
the second optimization function is:
where z(Q_t) = Q′_t − Q_t is the translation vector between Q_t and Q′_t;
the third optimization function is:
wherein the function involves the feature point columns with the same point order on the left and right sides of the pose face image, and the pixel colors of the grid points corresponding to the left and right sides of the pose face image in the first grid network.
The face image processing method based on grid deformation optimization, wherein re-optimizing the first grid network subjected to the primary optimization according to a first optimization function, a second optimization function and a third optimization function comprises the following steps:
and acquiring a first grid network which enables the function values of the first optimization function, the second optimization function and the third optimization function to be minimum as an optimization result.
The face image processing method based on grid deformation optimization, wherein the converting the pose face image into the target frontal face image according to the optimized first grid network comprises the following steps:
converting the pose face image into an intermediate frontal face image according to the optimized first grid network;
correcting the intermediate frontal face image according to a fourth preset formula to obtain the target frontal face image;
wherein, the fourth preset formula is:
where OL_p is the brightness of pixel point p in the occluded region of the intermediate frontal face image, OL_q is the brightness of pixel q in the neighborhood of pixel p, NL_p and NL_q are the brightnesses of the pixel points corresponding to p and q in the non-occluded region corresponding to the occluded region, N_p is the 8-neighborhood of pixel point p, and CE_OL and CE_NL are respectively the illumination intensity variation amplitudes of the pixel points in the boundary ring regions of the occluded region and of the corresponding non-occluded region.
In a second aspect of the present application, there is provided a terminal comprising a processor, a storage medium communicatively coupled to the processor, the storage medium being adapted to store a plurality of instructions, the processor being adapted to invoke the instructions in the storage medium to perform the steps of implementing the mesh deformation optimization-based face image processing method of any of the above.
In a third aspect of the present application, there is provided a storage medium storing one or more programs executable by one or more processors to implement the steps of the mesh deformation optimization-based face image processing method described in any one of the above.
Compared with the prior art, the application provides a face image processing method, a terminal and a storage medium based on grid deformation optimization. The method processes a pose face image as follows: a second feature lattice of a predicted frontal face image is acquired according to a first feature lattice of the pose face image; a first grid network of the pose face image and a second grid network of the predicted frontal face image are constructed; the first grid network is optimized according to the second grid network, the second feature lattice and the first feature lattice; and the pose face image is converted into a target frontal face image according to the optimized first grid network. By converting pose face images into frontal face images, the application enables face recognition technology to recognize pose face images and improves the performance of face recognition systems.
Drawings
FIG. 1 is a flowchart of an embodiment of the face image processing method based on grid deformation optimization provided by the application;
FIG. 2 is a schematic diagram of acquiring a first feature lattice of a pose face image in an embodiment of the face image processing method based on grid deformation optimization provided by the application;
FIG. 3 is a schematic diagram of a face shape database in an embodiment of the face image processing method based on grid deformation optimization provided by the application;
FIG. 4 is a schematic diagram of acquiring a second feature lattice in an embodiment of the face image processing method based on grid deformation optimization provided by the application;
FIG. 5 is a flowchart of the sub-steps of step S300 in an embodiment of the face image processing method based on grid deformation optimization provided by the application;
FIG. 6 is a schematic diagram of generating an intermediate frontal face image in an embodiment of the face image processing method based on grid deformation optimization provided by the application;
FIG. 7 is a schematic diagram of generating a target frontal face image in an embodiment of the face image processing method based on grid deformation optimization provided by the application;
FIG. 8 is a schematic diagram of an embodiment of the terminal provided by the application.
Detailed Description
In order to make the objects, technical solutions and effects of the present application clearer and more specific, the present application will be described in further detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
Example 1
The face image processing method based on the grid deformation optimization can be applied to a terminal, and the terminal can process the pose face image through the face image processing method based on the grid deformation optimization and convert the pose face image into a front face image. As shown in fig. 1, one embodiment of the face image processing method based on mesh deformation optimization includes the steps of:
s100, acquiring a first feature lattice of a pose face image and a second feature lattice of a predicted front face image corresponding to the pose face image.
In this embodiment, a frontal face image is a face image facing the front, and a pose face image is any face image other than a frontal face image. When a pose face image needs to be converted into a frontal face image, the first feature lattice of the pose face image is acquired first; the first feature lattice comprises the first feature points, that is, a plurality of feature points are extracted from the pose face image to obtain the first feature lattice (as shown in fig. 2). The second feature lattice comprises the second feature points, and the number of first feature points in the first feature lattice is equal to the number of second feature points in the second feature lattice; in a possible implementation manner, each lattice may contain 68 feature points, and the first feature points may be obtained by extracting feature points from the pose face image. Acquiring the second feature lattice of the predicted frontal face image corresponding to the pose face image comprises the following steps:
s110, constructing a face shape database, and obtaining feature vectors of the face shape database.
The predicted frontal face image is a frontal face image estimated from the pose face image; that is, the predicted frontal face image is a virtual object. In this embodiment, the second feature lattice of the predicted frontal face image corresponding to the pose face image is obtained by PCA (principal component analysis). Specifically, a face shape database is first constructed, as shown in fig. 3, in which a plurality of face images are stored; n feature points are extracted from each face image, and the feature lattice corresponding to each face image (which may also be referred to as the shape of the face) may be represented by a vector O = (x_1, y_1, …, x_n, y_n)^T, where (x_n, y_n) is the n-th feature point.
S120, acquiring the second feature lattice according to a first preset formula.
Specifically, the first preset formula is:
Q_0 = Ō + Σ_{i=n_0}^{N} ((O − Ō)^T E_i) E_i
where Q_0 is the vector representation of the second feature lattice, E_i is the i-th feature vector of the face shape database, Ō is the average shape of the face shape database, O is the face shape of the pose face image, N is the total number of feature vectors, and n_0 is a constant. n_0 − 1 is the number of feature vectors removed, that is, the first n_0 − 1 feature vectors of the face shape database are removed when the second feature lattice is acquired; n_0 may be set to 2, 3, etc. According to the above formula, the shape of the predicted frontal face image corresponding to the pose face image (the second feature lattice) may be obtained, as shown in fig. 4.
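As an illustrative sketch of this PCA prediction step (not the patent's exact implementation; the function name and the use of SVD to obtain the database eigenvectors are assumptions):

```python
import numpy as np

def predict_frontal_shape(O, shapes, n0=2):
    """Estimate the frontal-shape lattice Q0 from a pose shape O by projecting
    (O - mean) onto the face-shape database eigenvectors, discarding the first
    n0 - 1 components (assumed here to capture pose variation)."""
    mean = shapes.mean(axis=0)                  # average shape \bar{O}
    X = shapes - mean
    # rows of Vt are the orthonormal eigenvectors E_i, sorted by eigenvalue
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    Q0 = mean.copy()
    for i in range(n0 - 1, Vt.shape[0]):        # skip the first n0-1 vectors
        Q0 += np.dot(O - mean, Vt[i]) * Vt[i]
    return Q0
```

With `n0 = 1` no component is discarded, so a shape drawn from the database is reconstructed exactly; larger `n0` yields a smoothed, database-constrained estimate.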
Referring to fig. 1 again, the face image processing method based on the grid deformation optimization further includes the steps of:
and S200, respectively constructing a first grid network corresponding to the attitude face image and a second grid network corresponding to the predicted front face image.
In one possible implementation manner, in order to obtain more feature points and improve the accuracy of image processing, before the step of respectively constructing the first grid network corresponding to the pose face image and the second grid network corresponding to the predicted frontal face image, the method further includes the steps of:
expanding the number of the first feature points in the first feature lattice and the number of the second feature points in the second feature lattice.
Expanding the number of feature points in the first feature lattice and the second feature lattice is realized through transforms of the form RS = b·T·RS_0 + C and Q′ = b·T·Q_0 + C, where RS and Q′ are respectively the expanded first feature lattice and second feature lattice, RS_0 and Q_0 are the original lattices, and b, T and C are respectively a preset scale coefficient, a transformation matrix and a translation matrix. b, T and C can be obtained by pre-solving on sample images. Specifically, in this embodiment, after feature points on the forehead portion of the human face are added to the first feature lattice and the second feature lattice, the number of feature points in each lattice is 79.
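A minimal sketch of this expansion step, assuming the pre-solved transform b·T·q + C maps a template of added (e.g. forehead) landmark positions onto the lattice; the helper name and the template argument are hypothetical:

```python
import numpy as np

def expand_lattice(lattice, extra_template, b, T, C):
    """Append extra feature points to an (n, 2) lattice by mapping each
    template point q through the pre-solved transform b * (T @ q) + C."""
    extra = np.array([b * (T @ q) + C for q in extra_template])
    return np.vstack([lattice, extra])
```

For example, mapping 11 forehead template points onto a 68-point lattice yields the 79-point lattice used in this embodiment.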
The respectively constructing the first grid network corresponding to the pose face image and the second grid network corresponding to the predicted frontal face image includes:
respectively constructing the first grid network and the second grid network according to a second preset formula;
the second preset formula is:
P_{i+1,j} + P_{i-1,j} + P_{i,j+1} + P_{i,j-1} − 4P_{i,j} = 0,  i = 0, …, N_u;  j = 0, …, N_v
where P_{i,j} is the grid point in row i and column j of the grid, N_u + 1 is the number of rows of the grid, and N_v + 1 is the number of columns of the grid.
The boundary conditions of the grid are:
the left boundary line, the right boundary line, the upper boundary line and the lower boundary line of the grid are respectively: p (P) 0,j 、P 1,j 、P i,0 P i,1
The first grid network and the second grid network are respectively constructed according to the above formula; it is easy to see that their initial shapes are the same. In the following processing steps, the second grid network is kept unchanged, and only the first grid network is optimized and adjusted.
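The second preset formula above is the discrete Laplace equation; the following sketch fills the interior grid points given fixed boundary lines (the Jacobi solver and the array layout are assumptions, not from the patent):

```python
import numpy as np

def build_grid(P, iters=3000):
    """Solve P[i+1,j] + P[i-1,j] + P[i,j+1] + P[i,j-1] - 4*P[i,j] = 0 for the
    interior grid points by Jacobi iteration, keeping the boundary lines fixed.
    P has shape (Nu+1, Nv+1, 2): one 2-D position per grid point."""
    P = P.astype(float).copy()
    for _ in range(iters):
        # each interior point becomes the average of its four neighbours
        P[1:-1, 1:-1] = 0.25 * (P[2:, 1:-1] + P[:-2, 1:-1]
                                + P[1:-1, 2:] + P[1:-1, :-2])
    return P
```

Because harmonic interpolation of a uniformly spaced boundary is itself uniform, both networks start out as the same regular grid, consistent with the remark above.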
And S300, optimizing the first grid network according to the second grid network, the second feature lattice and the first feature lattice.
After the first grid network and the second grid network are constructed, the first grid network is optimized: the position of each grid point in the first grid network is adjusted so that the relative distance between each first feature point and the grid points of the first grid network is consistent with the relative distance between the corresponding second feature point and the grid points of the second grid network, which makes it possible to process the pose face image into a frontal face image according to the first grid network.
Specifically, as shown in fig. 5, the optimizing the positions of the first feature points and the first grid network according to the second grid network and the second feature lattice, and processing the pose face image into a frontal face image, includes:
s310, primarily optimizing the first grid network according to a third preset formula;
the third preset formula is:
where P_{i,j} and P′_{i,j} are grid points of the first grid network and the second grid network, respectively, and Q_t and Q′_t are, respectively, the t-th first feature point in the mesh cell starting from grid point P_{i,j} of the first grid network and the t-th second feature point in the mesh cell starting from grid point P′_{i,j} of the second grid network. The positions of the grid points in the first grid network are iteratively optimized through the third preset formula.
Illustratively, Q_t may initially not lie in the mesh cell starting from P_{i,j}; through the third preset formula, the corresponding point Q_t gradually approaches the grid point P_{i,j} over successive iterations.
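The third preset formula itself is not reproduced in this text, so the following is only one plausible reading, stated as an assumption: each grid point of the first grid network is moved so that the offsets of its feature points match the offsets observed in the second grid network.

```python
import numpy as np

def primary_optimize(P, P_target, cell_feats):
    """Hedged sketch: for each cell (i, j), set P[i, j] so that the offset
    Q_t - P[i, j] matches Q'_t - P'[i, j], averaged over that cell's feature
    point pairs. cell_feats maps (i, j) -> list of (Q_t, Qp_t) pairs."""
    P = P.copy()
    for (i, j), pairs in cell_feats.items():
        estimates = [Q - (Qp - P_target[i, j]) for Q, Qp in pairs]
        P[i, j] = np.mean(estimates, axis=0)
    return P
```

Repeating this update re-assigns feature points to cells after each move, which matches the iterative behaviour described above.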
S320, re-optimizing the first grid network subjected to the primary optimization according to a first optimization function, a second optimization function and a third optimization function;
after the step S310 is performed, the positions of the grid points in the first mesh network are initially optimized, and in the step S320, the first mesh network is further optimized.
In step S320, image deformation optimization is performed automatically using the similarity between structure and texture on the intermediate domain; specifically, the first optimization function, the second optimization function and the third optimization function are constructed based on smoothness, translation invariance and facial bilateral symmetry, respectively.
The first optimization function is used for constraining the smoothness of the frontal face image obtained by converting the pose face image according to the optimized first grid network; the smaller the value of the first optimization function, the better the smoothness of the frontal face image. The first optimization function is:
E_TPS(z(P_{i,j})) = (zx″_{uu})² + 2(zx″_{uv})² + (zx″_{vv})² + (zy″_{uu})² + 2(zy″_{uv})² + (zy″_{vv})²
where z(P_{i,j}) = (zx, zy) is the offset of grid point P_{i,j}, zx is its offset in the u direction, zy is its offset in the v direction, and zx″_{uv} denotes the second-order partial derivative of zx with respect to the u and v directions. That is, in this embodiment the smoothness is expressed as the sum of thin-plate spline (TPS) bending energies in the x and y directions of the face grid network.
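The TPS bending energy above can be approximated numerically with central finite differences over the offset fields; the discretisation step and the function name here are assumptions:

```python
import numpy as np

def tps_energy(zx, zy, h=1.0):
    """Approximate E_TPS = sum of (z_uu^2 + 2 z_uv^2 + z_vv^2) over the grid,
    applied to both offset fields zx and zy, via central finite differences."""
    def bending(z):
        z_uu = (z[2:, 1:-1] - 2 * z[1:-1, 1:-1] + z[:-2, 1:-1]) / h**2
        z_vv = (z[1:-1, 2:] - 2 * z[1:-1, 1:-1] + z[1:-1, :-2]) / h**2
        z_uv = (z[2:, 2:] - z[2:, :-2] - z[:-2, 2:] + z[:-2, :-2]) / (4 * h**2)
        return np.sum(z_uu**2 + 2 * z_uv**2 + z_vv**2)
    return bending(zx) + bending(zy)
```

An affine offset field has zero bending energy, so the term penalises only non-rigid warping, which is exactly the smoothness behaviour described above.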
The second optimization function is used for constraining the translation invariance of the frontal face image obtained by converting the pose face image according to the optimized first grid network; the smaller the value of the second optimization function, the better the translation invariance of the frontal face image. The second optimization function is:
where z(Q_t) = Q′_t − Q_t is the translation vector between Q_t and Q′_t.
The third optimization function is used for constraining the bilateral symmetry of the frontal face image obtained by converting the pose face image according to the first grid network; the smaller the value of the third optimization function, the better the bilateral symmetry of the frontal face image. Specifically, in this embodiment, the bilateral symmetry includes shape symmetry and texture symmetry, and the third optimization function is:
where L_SymShape is the function constraining shape symmetry and L_SymTex is the function constraining texture symmetry; the former involves the feature point columns with the same point order on the left and right sides of the pose face image, and the latter involves the pixel colors of the grid points corresponding to the left and right sides of the pose face image in the first grid network. By expressing each feature point in terms of its surrounding grid points, the feature points in the shape-symmetry function may be converted to grid points, so that the grid points become the variables of L_SymShape; the first grid network is thereby optimized through L_SymShape.
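The exact forms of L_SymShape and L_SymTex are not reproduced in this text; as a loudly-hedged illustration only, a symmetry penalty can mirror one side across a vertical face axis and compare it with the other side (the axis parameter and the quadratic form are assumptions):

```python
import numpy as np

def symmetry_energy(left_pts, right_pts, left_cols, right_cols, axis_x):
    """Hedged sketch: shape term mirrors the right-side feature points across
    the vertical axis x = axis_x and compares them with the left side;
    texture term compares pixel colors of corresponding grid points."""
    mirrored = np.column_stack([2.0 * axis_x - right_pts[:, 0], right_pts[:, 1]])
    l_shape = np.sum((left_pts - mirrored) ** 2)   # stand-in for L_SymShape
    l_tex = np.sum((left_cols - right_cols) ** 2)  # stand-in for L_SymTex
    return l_shape + l_tex
```

A perfectly symmetric configuration scores zero, matching the property that a smaller function value means better bilateral symmetry.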
The first grid network subjected to the primary optimization is re-optimized by solving the first optimization function, the second optimization function and the third optimization function. Specifically, re-optimizing the first grid network subjected to the primary optimization according to the first optimization function, the second optimization function and the third optimization function includes:
and acquiring a first grid network which enables the function values of the first optimization function, the second optimization function and the third optimization function to be minimum as an optimization result.
The relation between the first, second and third optimization functions and the smoothness, translation invariance and bilateral symmetry of the image has been described above. Therefore, the first grid network is optimized (the position and pixel value of each grid point in the first grid network are adjusted) under the constraint that the sum of the first, second and third optimization functions is minimized, so that the image generated from the optimized first grid network is of higher quality.
Referring to fig. 1 again, the face image processing method based on the grid deformation optimization further includes the steps of:
s400, converting the attitude face image into a target front face image according to the optimized first grid network.
Specifically, the converting the pose face image into the target face image according to the optimized first mesh network includes:
s410, converting the attitude face image into an intermediate front face image according to the optimized first grid network;
s420, correcting the middle front face image according to a fourth preset formula to obtain the target front face image.
The image corresponding to the first grid network may be generated from the optimized first grid network; this is prior art and will not be described here. In a possible implementation manner, the intermediate frontal face image generated in step S410 is directly used as the target frontal face image, that is, as the result of processing the pose face image. However, for a face image with a large oblique pose, the pose face may include a large occluded portion, and some regions of the intermediate frontal face image generated from it may have missing texture (as shown in fig. 6); a region with missing texture is called an occluded region.
The intermediate frontal face image is processed by a Poisson-based patching method. The conventional filling algorithm solves, for each occluded pixel p:
Σ_{q∈N_p} (OL_p − OL_q) = Σ_{q∈N_p} (NL_p − NL_q)
where OL_p is the brightness of pixel point p in the occluded region of the image, OL_q is the brightness of pixel q in the neighborhood of pixel p, NL_p and NL_q are the brightnesses of the pixel points corresponding to p and q in the non-occluded region corresponding to the occluded region, and N_p is the 4-neighborhood of pixel point p, i.e. |N_p| = 4.
In this embodiment, the above formula is improved, and the intermediate frontal face image is processed according to the fourth preset formula:
OL_p − OL_q = (CE_OL / CE_NL) · (NL_p − NL_q),  q ∈ N_p
where OL_p is the brightness of pixel point p in the occluded region of the intermediate frontal face image, OL_q is the brightness of pixel q in the neighborhood of pixel p, NL_p and NL_q are the brightnesses of the pixel points corresponding to p and q in the non-occluded region corresponding to the occluded region, N_p is the 8-neighborhood of pixel point p, and CE_OL and CE_NL are respectively the illumination intensity variation amplitudes of the pixel points in the boundary ring regions of the occluded region and of the corresponding non-occluded region.
As is apparent from the above description, in this embodiment the number of neighboring pixels is changed from 4 to 8, the number of equations corresponding to each pixel is changed from 1 to 8, and a ratio coefficient is applied to the luminance difference, so that more detail information is obtained during filling and the authenticity of the filled area is improved. The illumination intensity of the abnormal region in the intermediate frontal face image is solved through the fourth preset formula to obtain the target frontal face image, as shown in fig. 7.
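The conventional 4-neighborhood Poisson filling can be sketched as a small linear system; the function name is an assumption, and extending it to this embodiment's 8-neighborhood, ratio-weighted variant would enlarge N_p and scale the right-hand side by CE_OL/CE_NL:

```python
import numpy as np

def poisson_fill(OL, NL, mask):
    """Fill the masked (occluded) pixels of luminance image OL so that, for
    each masked p, the sum over 4-neighbors q of (OL_p - OL_q) equals the
    corresponding sum of (NL_p - NL_q) from the reference region NL."""
    H, W = OL.shape
    idx = {p: k for k, p in enumerate(zip(*np.nonzero(mask)))}
    n = len(idx)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for (i, j), k in idx.items():
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            qi, qj = i + di, j + dj
            if not (0 <= qi < H and 0 <= qj < W):
                continue
            A[k, k] += 1.0
            b[k] += NL[i, j] - NL[qi, qj]       # luminance-difference guidance
            if (qi, qj) in idx:
                A[k, idx[(qi, qj)]] -= 1.0      # unknown occluded neighbour
            else:
                b[k] += OL[qi, qj]              # known boundary pixel
    OL = OL.copy()
    sol = np.linalg.solve(A, b)
    for (i, j), k in idx.items():
        OL[i, j] = sol[k]
    return OL
```

When the reference region already agrees with the image outside the mask, the fill reproduces the reference exactly, since the reference satisfies the filling equations and the boundary values match.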
In summary, this embodiment provides a face image processing method based on grid deformation optimization. The method processes a pose face image as follows: a second feature lattice of a predicted frontal face image is acquired according to a first feature lattice of the pose face image; a first grid network of the pose face image and a second grid network of the predicted frontal face image are constructed; the first grid network is optimized according to the second grid network, the second feature lattice and the first feature lattice; and the pose face image is converted into a target frontal face image according to the optimized first grid network. By converting pose face images into frontal face images, the method enables face recognition technology to recognize pose face images and improves the performance of face recognition systems.
It should be understood that, although the steps in the flowcharts shown in the drawings of this specification are shown in order as indicated by the arrows, these steps are not necessarily performed in order as indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least a portion of the steps in the flowcharts may include a plurality of sub-steps or stages that are not necessarily performed at the same time, but may be performed at different times, the order in which the sub-steps or stages are performed is not necessarily sequential, and may be performed in turn or alternately with at least a portion of the sub-steps or stages of other steps or other steps.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by a computer program stored on a non-transitory computer-readable storage medium which, when executed, may include the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include Read-Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM), among others.
Example two
Based on the above embodiment, the present application correspondingly provides a terminal which, as shown in fig. 8, includes a processor 10 and a memory 20. Fig. 8 shows only some components of the terminal; not all illustrated components need be implemented, and more or fewer components may be implemented instead.
The memory 20 may in some embodiments be an internal storage unit of the terminal, such as a hard disk or internal memory of the terminal. In other embodiments, the memory 20 may be an external storage device of the terminal, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a flash memory card (Flash Card) provided on the terminal. Further, the memory 20 may include both an internal storage unit and an external storage device of the terminal. The memory 20 is used for storing the application software installed in the terminal and various kinds of data, and may also be used to temporarily store data that has been output or is to be output. In an embodiment, the memory 20 stores a face image processing program 30 based on grid deformation optimization, which can be executed by the processor 10 to implement the face image processing method based on grid deformation optimization in the present application.
The processor 10 may in some embodiments be a Central Processing Unit (CPU), microprocessor or other chip for executing the program code or processing the data stored in the memory 20, for example performing the face image processing method based on grid deformation optimization, etc.
In one embodiment, the following steps are implemented when the processor 10 executes the mesh deformation optimization based face image processing program 30 in the memory 20:
acquiring a first feature lattice of a pose face image and a second feature lattice of a predicted frontal face image corresponding to the pose face image, wherein the first feature lattice includes all first feature points and the second feature lattice includes all second feature points;
respectively constructing a first grid network corresponding to the first feature lattice and a second grid network corresponding to the second feature lattice;
optimizing the first grid network according to the second grid network, the second feature lattice and the first feature lattice;
and converting the pose face image into a target frontal face image according to the optimized first grid network.
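The four steps above can be sketched as a pipeline. Every helper passed in below (`predict_front`, `build_mesh`, `optimize_mesh`, `warp`) is a hypothetical stand-in for the corresponding step, not an interface defined by this application:

```python
import numpy as np

def frontalize(pose_landmarks, predict_front, build_mesh, optimize_mesh, warp):
    """Sketch of the four-step pipeline; all helpers are hypothetical stand-ins."""
    # Step 1: first feature lattice (pose) and predicted second lattice (frontal).
    q_pose = pose_landmarks                  # first feature lattice
    q_front = predict_front(q_pose)          # second feature lattice
    # Step 2: build a mesh network around each feature lattice.
    mesh_pose = build_mesh(q_pose)
    mesh_front = build_mesh(q_front)
    # Step 3: optimize the pose mesh against the frontal mesh and both lattices.
    mesh_opt = optimize_mesh(mesh_pose, mesh_front, q_pose, q_front)
    # Step 4: warp the pose image through the optimized mesh.
    return warp(mesh_opt)
```

With identity-like stand-ins this simply propagates the predicted frontal mesh through to the warp step.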
The obtaining the second feature lattice of the predicted frontal face image corresponding to the pose face image comprises:
constructing a face shape database, and acquiring feature vectors of the face shape database;
obtaining the second feature lattice according to a first preset formula;
the first preset formula is as follows:
wherein Q0 is the vector representation of the second feature lattice, Ei represents the i-th feature vector of the face shape database, Ō is the average shape of the face shape database, O is the face shape of the pose face image, n0 is a constant, and n0 − 1 represents the number of feature vectors removed.
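The first preset formula itself is not reproduced in this text. One common reading, assumed here, is a PCA-style reconstruction: the pose shape O is projected onto the eigenvectors of the face shape database with the first n0 − 1 (pose-sensitive) components removed, then reconstructed around the mean shape. A minimal sketch under that assumption:

```python
import numpy as np

def predict_frontal_lattice(O, mean_shape, E, n0):
    """Hedged sketch: reconstruct shape O using only eigenvectors E[:, n0-1:],
    i.e. with the first n0-1 feature vectors removed (assumed interpretation)."""
    d = O - mean_shape                  # deviation from the average shape
    kept = E[:, n0 - 1:]                # drop the first n0-1 eigenvectors
    coeffs = kept.T @ d                 # project onto the kept basis
    return mean_shape + kept @ coeffs   # vector representation of Q0
```

For example, with an identity eigenbasis and n0 = 2, the component along the first (removed) eigenvector is zeroed out while the rest of the shape is kept.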
Before respectively constructing the first grid network corresponding to the first feature lattice and the second grid network corresponding to the second feature lattice, the method includes:
expanding the number of the first feature points in the first feature lattice and the number of the second feature points in the second feature lattice.
The respectively constructing a first mesh network corresponding to the pose face image and a second mesh network corresponding to the predicted frontal face image includes:
respectively constructing the first grid network and the second grid network according to a second preset formula;
wherein, the second preset formula is:
P i+1,j +P i-1,j +P i,j+1 +P i,j-1 -4P i,j =0
i=0,…,N u ;j=0,…,N v
wherein Pi,j is the grid point in the i-th row and j-th column of the grid, Nu + 1 is the number of rows of the grid, and Nv + 1 is the number of columns of the grid.
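The second preset formula is the discrete Laplace equation on the grid. With the boundary grid points held fixed, the interior points can be obtained by relaxation, repeatedly replacing each interior point with the average of its four neighbors until Pi+1,j + Pi-1,j + Pi,j+1 + Pi,j-1 − 4Pi,j ≈ 0 everywhere; a sketch:

```python
import numpy as np

def relax_grid(P, iters=500):
    """Iteratively enforce P[i+1,j]+P[i-1,j]+P[i,j+1]+P[i,j-1]-4*P[i,j]=0
    for interior grid points, keeping the boundary fixed (Jacobi relaxation)."""
    P = P.astype(float).copy()
    for _ in range(iters):
        # Each interior point becomes the mean of its 4-neighborhood.
        inner = 0.25 * (P[2:, 1:-1] + P[:-2, 1:-1] + P[1:-1, 2:] + P[1:-1, :-2])
        P[1:-1, 1:-1] = inner
    return P
```

Because linear functions are harmonic, a boundary that varies linearly across the grid relaxes to the same linear field in the interior, which is a convenient sanity check.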
Wherein optimizing the positions of the first feature points and the first mesh network according to the second mesh network, the second feature lattice and the first feature lattice includes:
performing primary optimization on the first grid network according to a third preset formula;
re-optimizing the first grid network subjected to the primary optimization according to a first optimization function, a second optimization function and a third optimization function;
wherein, the third preset formula is:
wherein Pi,j and P′i,j are grid points of the first grid network and the second grid network, respectively, and Qt and Q′t are, respectively, the t-th first feature point in the mesh cell starting from grid point Pi,j in the first mesh network and the t-th second feature point in the mesh cell starting from grid point P′i,j in the second mesh network;
the first optimization function, the second optimization function and the third optimization function are respectively constructed based on smoothness, translation invariance and face bilateral symmetry.
Wherein the first optimization function is:
E TPS (z(P i,j ))=(zx″ u,u ) 2 +2(zx″ u,v ) 2 +(zx″ v,v ) 2 +(zy″ u,u ) 2 +2(zy″ u,v ) 2 +(zy″ v,v ) 2
wherein z(Pi,j) = (zx, zy) represents the offset of grid point Pi,j, zx is the offset of grid point Pi,j in the u direction, zy is the offset of grid point Pi,j in the v direction, and zx″u,v represents the second-order partial derivative of zx with respect to the u and v directions;
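On a discrete grid, the thin-plate energy E_TPS above can be evaluated by replacing the second-order partial derivatives with second differences of the offset field; a sketch (taking the grid spacing h as a parameter is an assumption):

```python
import numpy as np

def tps_bending_energy(zx, zy, h=1.0):
    """Discrete thin-plate-spline energy of an offset field (zx, zy):
    sum over interior grid points of z_uu^2 + 2*z_uv^2 + z_vv^2,
    matching E_TPS term by term for each of the two components."""
    def component_energy(z):
        z = np.asarray(z, float)
        z_uu = (z[2:, 1:-1] - 2 * z[1:-1, 1:-1] + z[:-2, 1:-1]) / h**2
        z_vv = (z[1:-1, 2:] - 2 * z[1:-1, 1:-1] + z[1:-1, :-2]) / h**2
        z_uv = (z[2:, 2:] - z[2:, :-2] - z[:-2, 2:] + z[:-2, :-2]) / (4 * h**2)
        return (z_uu**2 + 2 * z_uv**2 + z_vv**2).sum()
    return component_energy(zx) + component_energy(zy)
```

An affine offset field has zero bending energy, so only non-rigid (curved) deformations of the mesh are penalized.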
the second optimization function is:
wherein z(Qt) = Q′t − Qt represents the translation vector between Qt and Q′t;
the third optimization function is:
wherein the two feature point columns represent, respectively, the feature points with the same point order on the left and right sides of the pose face image, and the two color terms represent, respectively, the pixel colors of the grid points in the first grid network corresponding to the left and right sides of the pose face image.
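The third optimization function itself is not reproduced in this text. Based on the surrounding description, a hedged sketch of a bilateral-symmetry energy is given below: a shape term comparing left feature points against mirrored right feature points (mirroring about a midline at x = 0 is an assumption), plus a texture term comparing the pixel colors of corresponding left and right grid points:

```python
import numpy as np

def symmetry_energy(left_pts, right_pts, left_colors, right_colors,
                    w_shape=1.0, w_texture=1.0):
    """Assumed form of the bilateral-symmetry constraint:
    shape term  - left points vs. right points reflected about x = 0;
    texture term - colors of corresponding left/right grid points."""
    mirrored = np.asarray(right_pts, float) * np.array([-1.0, 1.0])
    e_shape = np.sum((np.asarray(left_pts, float) - mirrored) ** 2)
    e_tex = np.sum((np.asarray(left_colors, float)
                    - np.asarray(right_colors, float)) ** 2)
    return w_shape * e_shape + w_texture * e_tex
```

A perfectly symmetric face (mirrored points coincide, colors match) yields zero energy; any left/right discrepancy is penalized quadratically.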
Wherein re-optimizing the first mesh network with the initial optimization according to a first optimization function, a second optimization function, and a third optimization function comprises:
and acquiring a first grid network which enables the function values of the first optimization function, the second optimization function and the third optimization function to be minimum as an optimization result.
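Selecting the first grid network that minimizes the three function values can be sketched as picking, among candidate meshes, the one with the smallest combined energy (the weighted-sum combination below is an assumption, not stated in this text):

```python
def pick_optimized_mesh(candidates, e_smooth, e_trans, e_sym,
                        weights=(1.0, 1.0, 1.0)):
    """Return the candidate mesh minimizing a weighted sum of the three
    optimization functions; the weighting scheme is an assumption."""
    w1, w2, w3 = weights
    def total(mesh):
        return w1 * e_smooth(mesh) + w2 * e_trans(mesh) + w3 * e_sym(mesh)
    return min(candidates, key=total)
```

In practice the candidates would come from an iterative solver; here the selection rule alone is illustrated.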
The converting the pose face image into the target frontal face image according to the optimized first mesh network includes:
converting the pose face image into an intermediate frontal face image according to the optimized first grid network;
correcting the intermediate frontal face image according to a fourth preset formula to obtain the target frontal face image;
wherein, the fourth preset formula is:
wherein OLp is the brightness of pixel point p in the occluded region of the intermediate frontal face image, OLq is the brightness of pixel point q in the neighborhood of pixel point p, NLp and NLq are respectively the brightnesses of the pixel points corresponding to p and q in the non-occluded region corresponding to the occluded region, Np is the 8-neighborhood of pixel point p, and CEOL and CENL are respectively the illumination intensity variation amplitudes of the pixel points in the boundary ring area of the occluded region and of the corresponding non-occluded region.
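The fourth preset formula itself is not reproduced in this text. From the variable definitions and the 8-neighborhood / ratio-coefficient discussion above, a Poisson-style reading is assumed here: each occluded pixel is updated so that its brightness differences to its 8 neighbors track the corresponding non-occluded differences NLp − NLq, scaled by the ratio CEOL / CENL. A sketch under that assumption:

```python
import numpy as np

NEIGH = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def correct_occluded(OL, NL, mask, ratio, iters=500):
    """Assumed Poisson-style update: for each masked (occluded) pixel p, set
    OL_p to the 8-neighbor average plus ratio * (NL_p - NL_q) per neighbor."""
    OL = OL.astype(float).copy()
    H, W = OL.shape
    for _ in range(iters):
        new = OL.copy()
        for i in range(1, H - 1):
            for j in range(1, W - 1):
                if not mask[i, j]:
                    continue
                s = 0.0
                for di, dj in NEIGH:
                    q = (i + di, j + dj)
                    # Match OL_p - OL_q to ratio * (NL_p - NL_q).
                    s += OL[q] + ratio * (NL[i, j] - NL[q])
                new[i, j] = s / len(NEIGH)
        OL = new
    return OL
```

When the non-occluded reference NL is flat, the update degenerates to smoothly filling the occluded region from its boundary brightness.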
Example three
The present application also provides a storage medium in which one or more programs are stored, the one or more programs being executable by one or more processors to implement the steps of the mesh deformation optimization-based face image processing method as described above.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (8)

1. A face image processing method based on mesh deformation optimization, the method comprising:
acquiring a first feature lattice of a pose face image and a second feature lattice of a predicted frontal face image corresponding to the pose face image, wherein the first feature lattice includes all first feature points and the second feature lattice includes all second feature points;
respectively constructing a first grid network corresponding to the first feature lattice and a second grid network corresponding to the second feature lattice;
optimizing the first grid network according to the second grid network, the second feature lattice and the first feature lattice;
converting the pose face image into a target frontal face image according to the optimized first grid network;
the optimizing the positions of the first feature points and the first mesh network according to the second mesh network, the second feature lattice and the first feature lattice includes:
performing primary optimization on the first grid network according to a third preset formula;
re-optimizing the first grid network subjected to the primary optimization according to a first optimization function, a second optimization function and a third optimization function;
wherein, the third preset formula is:
wherein Pi,j and P′i,j are grid points of the first grid network and the second grid network, respectively, and Qt and Q′t are, respectively, the t-th first feature point in the mesh cell starting from grid point Pi,j in the first mesh network and the t-th second feature point in the mesh cell starting from grid point P′i,j in the second mesh network;
the first optimization function, the second optimization function and the third optimization function are respectively constructed based on smoothness, translation invariance and face bilateral symmetry;
the first optimization function is:
wherein z(Pi,j) = (zx, zy) represents the offset of grid point Pi,j, zx is the offset of grid point Pi,j in the u direction, zy is the offset of grid point Pi,j in the v direction, and zx″u,v represents the second-order partial derivative of zx with respect to the u and v directions;
the second optimization function is:
wherein z(Qt) = Q′t − Qt represents the translation vector between Qt and Q′t;
the third optimization function is:
wherein the two feature point columns represent, respectively, the feature points with the same point order on the left and right sides of the pose face image, the two color terms represent, respectively, the pixel colors of the grid points in the first grid network corresponding to the left and right sides of the pose face image, b represents a preset scale coefficient, T represents a transformation matrix, C represents a translation matrix, and the two component functions respectively constrain shape symmetry and texture symmetry.
2. The method for processing a face image based on grid deformation optimization according to claim 1, wherein obtaining a second feature lattice of a predicted frontal face image corresponding to the pose face image comprises:
constructing a face shape database, and acquiring feature vectors of the face shape database;
obtaining the second feature lattice according to a first preset formula;
the first preset formula is as follows:
wherein Q0 is the vector representation of the second feature lattice, Ei represents the i-th feature vector of the face shape database, Ō is the average shape of the face shape database, O is the face shape of the pose face image, n0 is a constant, and n0 − 1 represents the number of feature vectors removed.
3. The face image processing method based on grid deformation optimization according to claim 1, wherein before the first grid network corresponding to the first feature lattice and the second grid network corresponding to the second feature lattice are respectively constructed, the method comprises:
expanding the number of the first feature points in the first feature lattice and the number of the second feature points in the second feature lattice.
4. The method for processing a face image based on mesh deformation optimization according to claim 1, wherein the respectively constructing a first mesh network corresponding to the pose face image and a second mesh network corresponding to the predicted frontal face image comprises:
respectively constructing the first grid network and the second grid network according to a second preset formula;
wherein, the second preset formula is:
Pi+1,j + Pi-1,j + Pi,j+1 + Pi,j-1 − 4Pi,j = 0, i = 0, …, Nu; j = 0, …, Nv
wherein Pi,j is the grid point in the i-th row and j-th column of the mesh network, Nu + 1 is the number of rows of the mesh network, and Nv + 1 is the number of columns of the mesh network.
5. The mesh deformation optimization-based face image processing method of claim 1, wherein re-optimizing the first mesh network subjected to the initial optimization according to a first optimization function, a second optimization function, and a third optimization function comprises:
and acquiring a first grid network which enables the function values of the first optimization function, the second optimization function and the third optimization function to be minimum as an optimization result.
6. The method for processing a face image based on mesh deformation optimization according to claim 1, wherein the converting the pose face image into a target frontal face image according to the optimized first mesh network comprises:
converting the pose face image into an intermediate frontal face image according to the optimized first grid network;
correcting the intermediate frontal face image according to a fourth preset formula to obtain the target frontal face image;
wherein, the fourth preset formula is:
wherein OLp is the brightness of pixel point p in the occluded region of the intermediate frontal face image, OLq is the brightness of pixel point q in the neighborhood of pixel point p, NLp and NLq are respectively the brightnesses of the pixel points corresponding to p and q in the non-occluded region corresponding to the occluded region, Np is the 8-neighborhood of pixel point p, and CEOL and CENL are respectively the illumination intensity variation amplitudes of the pixel points in the boundary ring area of the occluded region and of the corresponding non-occluded region.
7. A terminal, the terminal comprising: a processor, and a storage medium communicatively coupled to the processor, the storage medium being adapted to store a plurality of instructions, the processor being adapted to invoke the instructions in the storage medium to perform the steps of the face image processing method based on mesh deformation optimization according to any one of claims 1 to 6.
8. A storage medium storing one or more programs executable by one or more processors to implement the steps of the mesh deformation optimization-based face image processing method of any one of claims 1-6.
CN202010668700.3A 2020-07-13 2020-07-13 Face image processing method, terminal and storage medium based on grid deformation optimization Active CN111797797B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010668700.3A CN111797797B (en) 2020-07-13 2020-07-13 Face image processing method, terminal and storage medium based on grid deformation optimization


Publications (2)

Publication Number Publication Date
CN111797797A CN111797797A (en) 2020-10-20
CN111797797B true CN111797797B (en) 2023-09-15

Family

ID=72808406

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010668700.3A Active CN111797797B (en) 2020-07-13 2020-07-13 Face image processing method, terminal and storage medium based on grid deformation optimization

Country Status (1)

Country Link
CN (1) CN111797797B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108304829A (en) * 2018-03-08 2018-07-20 北京旷视科技有限公司 Face identification method, apparatus and system
WO2019100608A1 (en) * 2017-11-21 2019-05-31 平安科技(深圳)有限公司 Video capturing device, face recognition method, system, and computer-readable storage medium
WO2019128508A1 (en) * 2017-12-28 2019-07-04 Oppo广东移动通信有限公司 Method and apparatus for processing image, storage medium, and electronic device
CN110363091A (en) * 2019-06-18 2019-10-22 广州杰赛科技股份有限公司 Face identification method, device, equipment and storage medium in the case of side face




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant