CN102354397B - Method for reconstructing human facial image super-resolution based on similarity of facial characteristic organs - Google Patents

Method for reconstructing human facial image super-resolution based on similarity of facial characteristic organs

Info

Publication number
CN102354397B
CN102354397B · CN201110278771 · CN201110278771A
Authority
CN
China
Prior art keywords
image
resolution
similarity
training
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN 201110278771
Other languages
Chinese (zh)
Other versions
CN102354397A (en)
Inventor
戚金清
梁维伟
马晓红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN 201110278771 priority Critical patent/CN102354397B/en
Publication of CN102354397A publication Critical patent/CN102354397A/en
Application granted granted Critical
Publication of CN102354397B publication Critical patent/CN102354397B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a method for face image super-resolution reconstruction based on the similarity of facial feature organs. The method comprises the following steps: 1, building a high-resolution frontal face image library and a high-resolution feature organ image library from given ideal high-resolution face images using the gray-level projection method; 2, extracting low-resolution feature organ images from the low-resolution target face image; 3, applying bicubic interpolation to the low-resolution target face image and the low-resolution feature organ images, and constructing a training image set for each of them; 4, building the feature space corresponding to each training image set and reconstructing the projection vectors of the corresponding high-resolution whole face image and high-resolution organ images; and 5, fusing the high-resolution whole face image and the high-resolution feature organ images into the final high-resolution target face image. The method requires little preprocessing time, retrieves training images with high accuracy, and produces face images with high fidelity.

Description

Face image super-resolution reconstruction method based on facial feature organ similarity
Technical field
The present invention relates to a face image super-resolution reconstruction method based on the similarity of facial feature organs.
Background technology
The purpose of image super-resolution reconstruction is to raise the resolution of a given low-resolution image effectively by means of digital image processing, thereby obtaining a high-quality high-resolution image. Face image super-resolution reconstruction (also called face hallucination) reconstructs, from a given low-resolution frontal face image, a high-resolution face image that contains enough useful information. The technique is widely used in security fields such as video surveillance and criminal investigation, and these fields place very high demands on the quality of the recovered high-resolution face image. The commonly used image super-resolution techniques fall into three classes: interpolation-based, reconstruction-based and learning-based. Interpolation-based methods estimate the value of an interpolated pixel from the original pixels in its K-neighborhood and a specific interpolation formula; their speed depends on the neighborhood radius K and the complexity of the formula. Compared with the other approaches they are the fastest, but their results are the least satisfactory, so they are not suitable for face images. Reconstruction-based methods build a cost function from the similarity between the degraded image and the original low-resolution image, add a regularization term built from prior information about the image, and then solve the regularized equation iteratively for the optimal high-resolution image; their performance depends on the prior information used, the reconstructed edges are good, but the computation is heavy and the detail is poor. Learning-based methods compensate for the detail missing from the low-resolution image with detail supplied by a set of high-resolution training images; the reconstructed edges are only average, but the detail is rich and the visual quality is good, so face image super-resolution reconstruction generally adopts the learning-based approach.
Researchers and engineers have proposed a variety of face image super-resolution reconstruction methods. Method 1: a face super-resolution algorithm based on multi-scale and orientation features, which learns the spatial distribution of low-level local features of face images with a steerable pyramid and predicts the best feature matching between high- and low-resolution images by combining the pyramid hierarchy with a locally best matching algorithm. Method 2: a vector-quantization method, which trains a grid relational model between high- and low-resolution face images and then estimates the high-resolution face image from this model and the training images. Method 3: a Markov random field method, which assumes a nonparametric prior relation between image blocks or pixels that can be modeled by a Markov random field; high-frequency information obtained from the training images through this model compensates for the high-frequency information missing from the low-resolution target face image, but the computational cost is very high. Method 4: a tensor method, which characterizes the face with hierarchical feature tensors, obtains a global feature tensor by training, and completes the super-resolution reconstruction jointly with local feature tensors. Method 5: a sparse-representation method, which assumes that a high-resolution image patch can be described by a sparse combination of a set of signal atoms; when the image degradation is not severe, the sparse representation of the corresponding high-resolution image can be recovered from the low-resolution image according to the principles of compressed sensing. Method 6: a feature-subspace method, which maps images into a feature subspace with multivariate statistical techniques such as principal component analysis (PCA), multilinear analysis and non-negative matrix factorization (NMF); the low-resolution target face image is expressed as a linear combination of low-resolution training images, and replacing the low-resolution training images with the corresponding high-resolution training images while keeping the combination coefficients yields the high-resolution target face image. This method is adaptable, but it requires the training images to be registered accurately to the target image.
Learning-based face image super-resolution reconstruction methods have two main defects. First, they need complicated preprocessing. All of the methods listed above share this defect: a learning-based reconstruction depends mainly on how well the training images work, that is, on the global and local similarity between the training images and the target image. Preprocessing here means building an effective training image set for a given target image; it generally requires several complex operations, including image retrieval, scaling, registration and brightness normalization. Simple preprocessing rarely achieves a good result, while high-precision preprocessing (such as optical flow) is very time-consuming. If the preprocessing is insufficient, the quality of the reconstructed high-resolution target image suffers badly, and the feature-subspace methods suffer most. Second, they are not suitable for super-resolution of very small target images. Except for method 6, the training images in the methods above serve only for high-frequency compensation: images or local patches similar to the target image are retrieved from the training set for their high-frequency information, while the low- and mid-frequency information of the training set only helps the retrieval and takes no part in the actual reconstruction, which wastes resources. When the target image is small, it contains essentially no useful high-frequency information of its own, so the high-frequency information retrieved by low-frequency similarity alone cannot compensate effectively and can even be counterproductive.
Summary of the invention
In view of the above problems, the present invention provides a face image super-resolution reconstruction method based on the similarity of facial feature organs. The concrete technical scheme is as follows:
A face image super-resolution reconstruction method based on facial feature organ similarity, characterized by comprising the following steps:
Step 1: from given ideal high-resolution face images, locate the pupil positions quickly with the gray-level projection method, then scale and crop all face images according to the inter-pupil distance so that the pupil positions and face contour sizes are essentially the same across images; the registered images form the high-resolution frontal face image library. At the same time, for every image in the high-resolution frontal face image library, locate the center coordinates of the facial feature organs with the gray-level projection method again, extract organ images of fixed size around those centers, and build the high-resolution feature organ image library;
Step 2: for a low-resolution target face image to be super-resolved, register the pupil positions of both eyes, compute the pupil coordinates of the low-resolution target face image and the size of the feature organ images from the super-resolution magnification factor, then locate the facial feature organs with the gray-level projection method and extract the low-resolution feature organ images;
Step 3: first apply bicubic interpolation to the low-resolution target face image and the low-resolution feature organ images to obtain the initial target face image and the initial feature organ images, and take all images in the high-resolution frontal face image library as candidate training images for the initial target face image. Then compute the similarity between the initial target face image and its candidate training images by several passes of principal component analysis, discarding a portion of the low-similarity candidate training images in each pass; the remaining high-similarity candidates form the training image set for the low-resolution target face image, and the training image sets for the low-resolution feature organ images are built in the same way;
Step 4: build a separate feature space for the training image set of the low-resolution target face image and for each training image set of the low-resolution feature organ images; from the projection vectors of the low-resolution target face image and the low-resolution feature organ images, reconstruct the projection vectors of the corresponding high-resolution whole face image and high-resolution organ images, and map the reconstructed projection vectors back from the feature space to pixel space to obtain the high-resolution whole face image and the high-resolution feature organ images;
Step 5: fuse the high-resolution whole face image and the high-resolution feature organ images obtained in step 4 into the final high-resolution target face image. The center coordinates of the facial feature organs were located during the preprocessing of the low-resolution target face image in step 2; at each organ position, the pixel value of the final high-resolution target face image is the weighted sum of the reconstructed feature organ image and the whole face image at that position, and the weights follow a Gaussian distribution from the center of the feature organ image to its border.
In step 3, the similarity between the initial target image and its candidate training images is computed by several passes of principal component analysis, and a portion of the low-similarity images is discarded, as follows: (1) the similarity measurements of the first half of the passes are carried out in the gradient domain, that is, the gradient images of all images are computed first and the similarity is then measured between the gradient images, which preserves the similarity of edge contours between the candidate training images and the initial target image; (2) the similarity measurements of the second half of the passes are carried out in the gray-level domain, that is, the gray-level similarity between the candidate training images and the initial target image is computed directly.
The image similarity measurement in step 3 proceeds as follows: (1) build the feature space of all candidate training images by principal component analysis, and compute the projection vectors of all candidate training images and of the initial target image in this feature space; (2) use the Euclidean distance between the projection vector of the initial target image and that of each candidate training image as the similarity criterion, discard the candidates with low similarity, rebuild the feature space from the remaining candidates, compute the similarity in the same way and discard the low-similarity candidates again, until the number of remaining candidate training images meets the requirement; the result is the training image set.
Compared with the prior art, the advantages of the present invention are evident:
1. Image preprocessing only requires simple scaling and cropping according to the pupil positions and face contour, which saves a great deal of preprocessing time and avoids the image distortion and quality degradation caused by multi-scale registration and brightness processing;
2. Similar images are retrieved with a multi-pass PCA method that combines the gradient domain and the gray-level domain to build the training image set; this removes the interference of illumination changes, guarantees the similarity of image contours, exploits the advantages of PCA-based image retrieval to the greatest extent, and further improves the retrieval precision;
3. The facial feature organ images undergo super-resolution reconstruction separately, so the finally reconstructed face image is more faithful and the detail at the key positions is richer;
4. The present invention is particularly suitable for small-image super-resolution reconstruction: feature-subspace methods perform well on small images, and the feature organ images themselves have simple structure and high mutual similarity, which further improves the reconstruction of small face images by the feature-subspace method.
Description of drawings
Fig. 1 is the flow chart of the method of the invention;
Fig. 2 is the flow chart for building the high-resolution frontal face image library and the feature organ image library;
Fig. 3 is the flow chart for building the training image set;
Fig. 4 is the flow chart of face image super-resolution reconstruction based on principal component analysis;
Fig. 5 compares the face image super-resolution reconstruction results.
Embodiment
As shown in Figures 1 to 4, the basic idea of the face image super-resolution reconstruction method based on facial feature organ similarity is to super-resolve the low-resolution whole face image and the facial feature organs separately with a feature-subspace method, and then fuse the two reconstruction results into the final high-resolution image. Compared with previous methods it improves, to different degrees, the algorithm complexity, the computational load, the image quality and the robustness: the reconstructed high-resolution whole face image keeps the edge contours consistent with the original target image, the reconstructed facial feature organ images improve the visual quality at the organ positions, and the strict demands on training image registration are relaxed, which greatly reduces the preprocessing time. Concretely, the high-resolution frontal face image library is built first: the pupil positions are located quickly with the gray-level projection method, and all face images are scaled and cropped according to the inter-pupil distance so that the pupil positions and face contour sizes are essentially consistent; the registered images form the high-resolution frontal face image library. Next, for every image in the library, the center coordinates of the six facial feature organs (left eye, right eye, left eyebrow, right eyebrow, nose and lips) are located with the gray-level projection method, organ images of fixed size are extracted around these centers, and the high-resolution feature organ image library is built. Given a low-resolution target face image to be super-resolved, it is first registered simply and its feature organ images are extracted; then the multi-pass PCA method is applied in the gradient domain and the gray-level domain to retrieve similar whole face images and feature organ images, and the retrieval results form the training image sets for the whole face and for each feature organ. On the basis of these training sets, the whole face image and the feature organ images of the target face are super-resolved separately with the feature-subspace method to obtain the high-resolution whole face image and the high-resolution feature organ images, which are then fused according to the previously located positions to obtain the final high-resolution face image.
The present invention is described further below with reference to the accompanying drawings:
Fig. 1 shows the overall flow of the proposed face image super-resolution reconstruction algorithm based on facial feature organ similarity.
Step 1: from given ideal high-resolution face images, locate the pupil positions quickly with the gray-level projection method, then scale and crop all face images according to the inter-pupil distance so that the pupil positions and face contour sizes are essentially the same across images; the registered images form the high-resolution frontal face image library. At the same time, for every image in the library, locate the center coordinates of the facial feature organs with the gray-level projection method again, extract organ images of fixed size around those centers, and build the high-resolution feature organ image library. The high-resolution frontal face image library is the source of the training images needed for learning-based face super-resolution; the face images in it have the desired high resolution, with consistent contours and identical sizes. The library is built as follows: given ideal high-resolution face images, first locate the pupil positions with the gray-level projection method, register the images according to the pupil positions, and then crop them according to the face contour; after this processing all images have the same size, the same pupil positions, and the desired high resolution. The invention locates and extracts the facial feature organs with the gray-level projection method, whose principle is that, for any frontal face image, the positions of the organs on the face satisfy certain statistical rules: dividing the face from the middle of the forehead to the bottom of the jaw into three parts from top to bottom, the eyes lie roughly at the upper third, the nose at the half-way line and the mouth at the lower third, while the centers of the nose and mouth lie on the vertical mid-line of the face. From these rules, the approximate location of each facial feature organ in the face image can be determined. The concrete procedure is: first set a target region, generally a rectangular window, according to the approximate location of the organ on the face; then compute the gray-level projection curves of the image inside the window in the horizontal and vertical directions; the troughs of the curves give the center points of the corresponding feature organs, such as the eyebrow position, the pupil position and the center of the lips. Organ images of fixed size are then segmented out around the obtained organ center coordinates. The facial feature organs are extracted from every image in the high-resolution frontal face image library to build the left-eye, right-eye, left-eyebrow, right-eyebrow, nose and lip organ libraries.
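The gray-level projection localization and organ extraction just described can be sketched as follows. This is a minimal illustration rather than the patent's implementation: it assumes a grayscale face image stored as a NumPy array, and the search window, the smoothing width and the "darkest trough" selection rule are illustrative choices, not values given in the patent.

```python
import numpy as np

def gray_projection_center(img, window):
    """Locate an organ center inside a rectangular search window by gray-level
    projection: project intensities onto the vertical and horizontal axes and
    take the trough (darkest row/column) of each smoothed projection curve."""
    top, bottom, left, right = window
    patch = img[top:bottom, left:right].astype(np.float64)

    # Projection curves: mean intensity of each row and of each column.
    row_proj = patch.mean(axis=1)
    col_proj = patch.mean(axis=0)

    # Light smoothing so isolated dark pixels do not create spurious troughs.
    kernel = np.ones(5) / 5.0
    row_proj = np.convolve(row_proj, kernel, mode="same")
    col_proj = np.convolve(col_proj, kernel, mode="same")

    # Dark organs (pupil, eyebrow, lips) appear as troughs of the curves.
    cy = top + int(np.argmin(row_proj))
    cx = left + int(np.argmin(col_proj))
    return cy, cx

def extract_organ(img, center, size):
    """Cut a fixed-size organ image around a located center."""
    cy, cx = center
    h, w = size
    return img[cy - h // 2: cy - h // 2 + h, cx - w // 2: cx - w // 2 + w].copy()
```

Using it, one would set a coarse window per organ from the statistical face layout (eyes near the upper third, nose near the middle, mouth near the lower third) and call gray_projection_center once per organ.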
Step 2: for a low-resolution target face image to be super-resolved ("low-resolution target face image" means the original input image that needs to be super-resolved by the described method), register the pupil positions of both eyes, compute the pupil coordinates of the low-resolution target face image and the size of the feature organ images from the super-resolution magnification factor, then locate the facial feature organs with the gray-level projection method and extract the low-resolution feature organ images; a "low-resolution feature organ image" is an organ image extracted from the "low-resolution target face image".
Step 3: building the training image set is the process of retrieving images similar to the target face and to its feature organs from the face image library and the organ image libraries. The invention retrieves similar images with a multi-pass PCA method that combines the gradient domain and the gray-level domain, and gradually improves the reliability of the similarity measurement by shrinking the number of images used to build the feature space. First, bicubic interpolation is applied to the low-resolution target face image and the low-resolution feature organ images to obtain the initial target face image and the initial feature organ images ("initial target face image" and "initial feature organ image" are obtained by interpolation; they have high resolution but poor sharpness), and all images in the high-resolution frontal face image library are taken as candidate training images for the initial target face image. The similarity between the initial target face image and its candidate training images is then computed by several passes (say K passes) of principal component analysis, discarding a portion of the low-similarity candidates in each pass; the remaining high-similarity candidates form the training image set for the low-resolution target face image, and the training image sets for the low-resolution feature organ images are built in the same way (the "low-resolution feature organ images" and the "low-resolution target face image" draw their high-similarity training images from the "high-resolution organ image libraries" and the "high-resolution frontal face image library", respectively). The multi-pass PCA similarity computation between the initial target image (the initial target face image or an initial feature organ image) and its candidate training images, with the low-similarity images discarded, proceeds as follows: (1) the similarity measurements of the first half of the passes (the first K/2 passes) are carried out in the gradient domain, that is, the gradient images of all images are computed first and the similarity is then measured between the gradient images, which preserves the similarity of edge contours between the candidate training images and the initial target image; (2) the similarity measurements of the second half of the passes (the last K/2 passes) are carried out in the gray-level domain, that is, the gray-level similarity between the candidate training images and the initial target image is computed directly. The similarity measurement itself proceeds as follows: (1) build the feature space of all candidate training images by principal component analysis, and compute the projection vectors of all candidate training images and of the initial target image (the initial target face image or the initial feature organ image of this step) in this feature space; (2) use the Euclidean distance between the projection vector of the initial target image and that of each candidate training image as the similarity criterion, discard the candidates with low similarity, rebuild the feature space from the remaining candidates, compute the similarity in the same way and discard the low-similarity candidates again, until the number of remaining candidate training images meets the requirement; the result is the training image set.
Step 4: among all learning-based super-resolution methods, the feature-subspace methods make the fullest use of the information in the training images, and they are therefore adopted by the present invention. Although there are several feature-subspace implementations, they are all built under the maximum a posteriori framework, so the invention is only briefly introduced here using the PCA-based method as an example. A feature space is built for the training image set of the low-resolution target face image and for each training image set of the low-resolution feature organ images; from the projection vectors of the low-resolution target face image and the low-resolution feature organ images, the projection vectors of the corresponding high-resolution whole face image and high-resolution organ images are reconstructed, and the reconstructed projection vectors are mapped back from the feature space to pixel space to obtain the high-resolution whole face image and the high-resolution feature organ images.
Step 5: fuse the high-resolution whole face image and the high-resolution feature organ images obtained in step 4 into the final high-resolution target face image. The center coordinates of the facial feature organs were located during the preprocessing of the low-resolution target face image in step 2; at each organ position, the pixel value of the final high-resolution target face image is the weighted sum of the reconstructed feature organ image and the whole face image at that position, and the weights follow a Gaussian distribution from the center of the feature organ image to its border.
The high-resolution frontal face image library and the organ image libraries are built mainly by an image preprocessing process based on the gray-level projection method. As shown in Fig. 2, for any given high-resolution face image the pupil positions of both eyes are first located with the gray-level projection method, the image is registered according to the pupil positions and then cropped to a fixed size according to the face contour, and the result is stored in the high-resolution frontal face image library. Next, the centers of the organs such as the left eyebrow, right eyebrow, nose and lips are located again with the gray-level projection method, all feature organ images, including the eyes, are extracted at a fixed size, and the results are stored in the corresponding organ image libraries.
The training image set for the low-resolution target face image and the training image sets for the low-resolution feature organ images are built on essentially the same principle; the only difference is that the latter first require the facial feature organs of the low-resolution target face image to be located and extracted. The construction of the training image set is shown in Fig. 3 and is described here for the whole face image. Suppose the low-resolution target face image is I_L and the high-resolution frontal face image library is {I_H^(i), i = 1, ..., N}, where i is the index of an image in the library (so I_H^(i) is the i-th image of the library) and N is the total number of images in the library. For the low-resolution target face image, m (m < N) images are chosen as the whole-face training image set, where m is the number of training images; its concrete value is generally determined by the actual situation. Let the initial candidate training image set be {I_H^(i), i = 1, ..., m'}, where m' is the current number of candidate training images and its initial value is the total number N of images in the high-resolution frontal face image library. The target image I_L is interpolated to the desired high resolution by bicubic interpolation to obtain the initial target image I_I, and the gradient images I_GI of I_I and I_GH^(i) of the candidates are computed. The similarity between the images is first measured in the gradient domain: the feature space Ω_l (l < m') corresponding to the gradient images of the current m' candidates (m ≤ m' ≤ N) is built by principal component analysis, where l is the dimension of the feature space and is generally taken smaller than m'; the vectorized I_GI and the candidate gradient images are all projected onto Ω_l, giving the projection vectors X_GI and X_GH^(i), and the Euclidean distance between X_GI and each candidate projection vector is computed as

$D_{GX}(i) = \sqrt{\sum_{k \in [1,\,l]} \bigl(X_{GI}(k) - X_{GH}^{(i)}(k)\bigr)^{2}}$

The candidate training images I_H^(i) with the larger D_GX(i) are deleted from the current candidate training image set (the number of deleted images can be adjusted according to how many times this process is repeated), giving a new candidate training set {I_H^(i), i = 1, ..., m'} in which the value of m' is reduced by the number of deleted images. This process is repeated, deleting the low-similarity images each time. Once m' falls below a predetermined threshold T (T is generally set to N/2), the similarity is computed in the gray-level domain instead: the feature space Ω_l' (l' < m') corresponding to the current candidate training image set is built by principal component analysis, where l' is the dimension of the feature space; the vectorized initial target image I_I and the images of the current candidate training image set are all projected onto Ω_l', giving the projection vectors X_I and X_G^(i), and the Euclidean distance between X_I and each candidate projection vector is computed as

$D_{X}(i) = \sqrt{\sum_{k \in [1,\,l']} \bigl(X_{I}(k) - X_{G}^{(i)}(k)\bigr)^{2}}$

Likewise the candidates I_H^(i) with the larger D_X(i) are deleted from the candidate training image set, the number of deleted images again being adjusted according to the number of repetitions, giving a new training set. The process is repeated until m' = m, which completes the construction of the whole-face training image set {I_H^(i), i = 1, ..., m}. The training image sets for the feature organ images are built in the same way and are not described again.
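A compact sketch of this multi-pass pruning is given below, for illustration only: the number of passes K, the target set size m, the fraction of candidates kept per pass and the feature-space dimension l are assumed values, the switch from gradient domain to gray-level domain is placed at the half-way pass as described, and the PCA is done with a plain SVD rather than the covariance eigendecomposition detailed later.

```python
import numpy as np

def pca_basis(X, l):
    """Top-l principal directions and mean of the columns of X (pixels x m')."""
    mu = X.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(X - mu, full_matrices=False)
    return U[:, :l], mu

def prune_candidates(target, candidates, m, passes=6, keep_frac=0.7, l=20):
    """Multi-pass PCA retrieval: the first half of the passes compares gradient
    images, the second half compares gray-level images, and each pass discards
    the candidates farthest from the target in the current feature space."""
    grad = lambda im: np.hypot(*np.gradient(im.astype(np.float64)))
    idx = np.arange(len(candidates))
    for p in range(passes):
        use_grad = p < passes // 2                     # gradient domain first
        imgs = [grad(candidates[i]) if use_grad else candidates[i] for i in idx]
        tgt = grad(target) if use_grad else target
        X = np.stack([np.asarray(im, dtype=np.float64).ravel() for im in imgs], axis=1)
        B, mu = pca_basis(X, min(l, X.shape[1] - 1))
        proj = B.T @ (X - mu)                          # candidate projection vectors
        t = B.T @ (np.asarray(tgt, dtype=np.float64).ravel()[:, None] - mu)
        d = np.linalg.norm(proj - t, axis=0)           # Euclidean distances D(i)
        keep = max(m, int(len(idx) * keep_frac))       # never drop below m images
        idx = idx[np.argsort(d)[:keep]]
        if len(idx) == m:
            break
    idx = idx[:m]                                      # m most similar candidates
    return [candidates[i] for i in idx]
```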
The image feature subspace is built with the principal component analysis technique as follows. All images in the training image set are expressed as vectors, so the training image set {I_H^(i), i = 1, ..., m} can be represented by a matrix with m columns, where m is the number of training images. The covariance matrix of this matrix is computed, and its eigenvalues {λ_i, i = 1, ..., m} are obtained, where i indexes the elements of the set (λ_i being the i-th element). The l (l < m) largest eigenvalues {λ_i, i = 1, ..., l} and their corresponding eigenvectors are retained, and these eigenvectors span the feature space Ω. All training images, and the initial target image obtained by bicubic interpolation, can then be expressed by their projection vectors in Ω: for any vectorized image I_H, its projection vector in the feature space Ω is X = B^T(I_H − μ), where B = [b_1, ..., b_l] is the matrix of the retained eigenvectors and μ is the pixel-wise mean of all images in the training image set {I_H^(i), i = 1, ..., m}.
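A small sketch of this subspace construction follows. It is illustrative only: the eigendecomposition is taken on the covariance matrix exactly as described, which is practical only for modest image sizes such as the 64 × 64 faces used here, and the number of retained components l is left as a parameter.

```python
import numpy as np

def build_subspace(training, l):
    """Build the PCA feature space Omega from a training image set.
    training: array of shape (m, h, w); returns (B, mu, lam) with B = [b_1 ... b_l]."""
    m = training.shape[0]
    X = training.reshape(m, -1).T.astype(np.float64)   # pixels x m, one image per column
    mu = X.mean(axis=1, keepdims=True)                 # pixel-wise mean of the set
    C = (X - mu) @ (X - mu).T / m                      # covariance matrix (pixels x pixels)
    w, V = np.linalg.eigh(C)                           # eigenvalues in ascending order
    order = np.argsort(w)[::-1][:l]                    # keep the l largest eigenvalues
    B = V[:, order]                                    # eigenvector matrix B
    lam = w[order]                                     # retained eigenvalues lambda_1..lambda_l
    return B, mu, lam

def project(img, B, mu):
    """Projection vector X = B^T (I_H - mu) of a vectorized image."""
    return B.T @ (img.reshape(-1, 1).astype(np.float64) - mu)

def back_project(X, B, mu, shape):
    """Return a projection vector to pixel space: I = B X + mu."""
    return (B @ X + mu).reshape(shape)
```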
Fig. 4 shows the basic flow of the PCA-based image super-resolution reconstruction method. According to the theory of maximum a posteriori (MAP) estimation, for the low-resolution target image I_L the high-resolution projection vector X* reconstructed in the feature space Ω is

$X^{*} = (B^{T} A^{T} A B + \lambda \Lambda^{-1})^{-1} B^{T} A^{T} (I_{L} - A\mu)$

where A is the down-sampling matrix, which may be understood as a generally accepted operator that down-samples the vectorized high-resolution image to the resolution of the low-resolution target image; λ is a scale factor with a value between 0.02 and 0.5; and Λ is the diagonal matrix formed by the l eigenvalues {λ_i, i = 1, ..., l} retained in the feature space construction described above. The reconstructed projection vector is returned to pixel space by

$I^{*} = B X^{*} + \mu$

which yields the final super-resolution reconstructed image I*.
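The MAP reconstruction above can be written directly in code as a sanity-check sketch. B, mu and lam are as produced by a subspace construction like the preceding one; the block-averaging form of the down-sampling matrix A and the value of lambda (placed inside the stated 0.02–0.5 range) are assumptions made for the example, not choices fixed by the patent.

```python
import numpy as np

def downsample_matrix(hi_shape, factor):
    """Block-averaging down-sampling operator A, mapping a vectorized
    high-resolution image to the low-resolution grid."""
    H, W = hi_shape
    h, w = H // factor, W // factor
    A = np.zeros((h * w, H * W))
    for r in range(h):
        for c in range(w):
            for dr in range(factor):
                for dc in range(factor):
                    A[r * w + c, (r * factor + dr) * W + (c * factor + dc)] = 1.0 / factor**2
    return A

def map_reconstruct(I_L, B, mu, lam, A, lmbda=0.1):
    """X* = (B^T A^T A B + lambda * Lambda^{-1})^{-1} B^T A^T (I_L - A mu),
    then I* = B X* + mu (returned as a vectorized high-resolution image)."""
    y = I_L.reshape(-1, 1).astype(np.float64)
    Lam_inv = np.diag(1.0 / lam)                 # Lambda^{-1}, retained eigenvalues on the diagonal
    AB = A @ B
    X_star = np.linalg.solve(AB.T @ AB + lmbda * Lam_inv, AB.T @ (y - A @ mu))
    return B @ X_star + mu
```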
The purpose of image fusion is to merge the reconstructed feature organ images into the reconstructed whole face image. The organs of the low-resolution target face image were already located during the construction of the organ training image sets; with these located coordinates and the super-resolution factor, the feature organ images can be fused accurately into the reconstructed whole face image. To keep the algorithm complexity low, the invention adopts intensity-weighted image fusion:
I*(x, y) = β(x, y)G(x, y) + (1 − β(x, y))I(x, y)
where G(x, y) is the reconstructed high-resolution feature organ image, I(x, y) is the reconstructed whole face image, I*(x, y) is the fused face image, and β(x, y) is the pixel-fusion weight with values between 0 and 1; to guarantee the visual quality of the fused image, β(x, y) follows a Gaussian distribution whose maximum lies at the center of the organ image.
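The weighted fusion can be sketched as below, a minimal illustration: the Gaussian width sigma, chosen here relative to the organ size, is an assumption, since the patent only states that the weights follow a Gaussian distribution from the organ center to its border.

```python
import numpy as np

def fuse_organ(face, organ, center, sigma_scale=0.35):
    """Blend a reconstructed organ image G into the reconstructed whole face I
    with weights beta(x, y) that follow a Gaussian centered on the organ."""
    h, w = organ.shape
    cy, cx = center                                    # organ center in the face image
    ys = np.arange(h) - (h - 1) / 2.0
    xs = np.arange(w) - (w - 1) / 2.0
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    sigma = sigma_scale * min(h, w)
    beta = np.exp(-(yy**2 + xx**2) / (2 * sigma**2))   # equals 1 at the organ center

    top, left = cy - h // 2, cx - w // 2
    region = face[top:top + h, left:left + w]
    face[top:top + h, left:left + w] = beta * organ + (1 - beta) * region
    return face
```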
To verify the effectiveness of the proposed face image super-resolution reconstruction method based on facial feature organs and to highlight its advantages, small frontal face images of size 16 × 16 are used as the low-resolution target images, with a magnification factor of 4. The experimental environment is a Windows 7 PC with an Intel Core 2 CPU at 3 GHz, and the simulations are carried out in Matlab (R2010b). All face images come from the Asian face database CAS-PEAL-R1 of the Chinese Academy of Sciences (document 1: Wen Gao, et al. The CAS-PEAL Large-Scale Chinese Face Database and Baseline Evaluations. IEEE Transactions on System Man, and Cybernetics (Part A), 2008, (38): 149-161), comprising 600 frontal face images of men and 400 frontal face images of women; all images are gray-level images, and after simple pupil registration the face regions are cropped and stored in the high-resolution frontal face image library at a resolution of 64 × 64.
Previous experimental results show that general reconstruction methods cannot handle the super-resolution of very small low-resolution face images, while the two-step approach based on principal component analysis (document 2: C. Liu, H. Shum, and C. Zhang, A Two-Step Approach to Hallucinating Faces: Global Parametric Model and Local Nonparametric Model, in Proc. of CVPR, 2001, 1:192-198) has long been widely accepted among learning-based face super-resolution techniques. However, that method requires very accurate registration of facial feature points between the target image and the training images, as well as very high similarity between them; in practice, sufficiently accurate registration and sufficiently good training images are usually unavailable, so the regions of the image that carry strong personal characteristics, such as the organs and the contour, show blurring and degradation. The method of the invention is compared with bicubic interpolation and with the PCA-based two-step approach, using the peak signal-to-noise ratio (PSNR) as the metric; Fig. 5 shows the PSNR comparison after super-resolution reconstruction of frontal face images. The box plot compares the PSNR values of the images reconstructed by the invention and by the two reference methods: each box summarizes the statistics of the ten PSNR values obtained by reconstructing five male and five female images with each method, showing the minimum, first quartile, median, third quartile and maximum.
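For reference, the PSNR used as the comparison metric can be computed as in the short sketch below (the standard definition for 8-bit gray-level images; not code from the patent):

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio, in dB, between a reference face image and
    its super-resolution reconstruction."""
    mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```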
Experiments also show that, when the PCA-based two-step approach is used for super-resolution, the computation of the local-feature face image in its second step is very time-consuming, and when the low-resolution target image is small this local-feature image does not contribute as expected to the quality of the final high-resolution target image, whereas the method designed by the invention reconstructs high-resolution face images with clearly better detail. The average reconstruction time of the method of the invention is about 30 seconds per image, while the method of document [2] needs about 400 seconds to reconstruct the same image.
The above is only a preferred embodiment of the present invention, but the scope of protection of the present invention is not limited to it. Any equivalent replacement or modification made, within the technical scope disclosed by the present invention and according to its technical scheme and inventive concept, by a person skilled in the art shall be covered by the scope of protection of the present invention.

Claims (3)

1. A face image super-resolution reconstruction method based on facial feature organ similarity, characterized by comprising the following steps:
Step 1: from given ideal high-resolution face images, locate the pupil positions quickly with the gray-level projection method, then scale and crop the acquired face images according to the inter-pupil distance so that the pupil positions and face contour sizes are essentially the same across images; the registered images form the high-resolution frontal face image library; at the same time, for every image in the high-resolution frontal face image library, locate the center coordinates of the facial feature organs with the gray-level projection method again, extract organ images of fixed size around those centers, and build the high-resolution feature organ image library;
Step 2: for a low-resolution target face image to be super-resolved, register the pupil positions of both eyes, compute the pupil coordinates of the low-resolution target face image and the size of the feature organ images from the super-resolution magnification factor, then locate the facial feature organs with the gray-level projection method and extract the low-resolution feature organ images;
Step 3: first apply bicubic interpolation to the low-resolution target face image and the low-resolution feature organ images to obtain the initial target face image and the initial feature organ images, and take all images in the high-resolution frontal face image library as candidate training images for the initial target face image; then compute the similarity between the initial target face image and its candidate training images by several passes of principal component analysis, discarding a portion of the low-similarity candidate training images in each pass; the remaining high-similarity candidates form the training image set for the low-resolution target face image, and the training image sets for the low-resolution feature organ images are built in the same way;
Step 4: build a separate feature space for the training image set of the low-resolution target face image and for each training image set of the low-resolution feature organ images; from the projection vectors of the low-resolution target face image and the low-resolution feature organ images, reconstruct the projection vectors of the corresponding high-resolution whole face image and high-resolution organ images, and map the reconstructed projection vectors back from the feature space to pixel space to obtain the high-resolution whole face image and the high-resolution feature organ images;
Step 5: fuse the high-resolution whole face image and the high-resolution feature organ images obtained in step 4 into the final high-resolution target face image; the center coordinates of the facial feature organs were located during the preprocessing of the low-resolution target face image in step 2, and at each organ position the pixel value of the final high-resolution target face image is the weighted sum of the reconstructed feature organ image and the whole face image at that position, the weights following a Gaussian distribution from the center of the feature organ image to its border.
2. The face image super-resolution reconstruction method based on facial feature organ similarity according to claim 1, characterized in that in step 3 the similarity between the initial target face image and its candidate training images is computed by several passes of principal component analysis, and a portion of the low-similarity candidate training images is discarded, as follows: (1) the similarity measurements of the first half of the passes are carried out in the gradient domain, that is, the gradient images of all images are computed first and the similarity is then measured between the gradient images, which preserves the similarity of edge contours between the candidate training images and the initial target face image; (2) the similarity measurements of the second half of the passes are carried out in the gray-level domain, that is, the gray-level similarity between the candidate training images and the initial target face image is computed directly.
3. The face image super-resolution reconstruction method based on facial feature organ similarity according to claim 1 or 2, characterized in that the similarity of step 3 is measured as follows: (1) build the feature space of all candidate training images by principal component analysis, and compute the projection vectors of all candidate training images and of the initial target face image in this feature space; (2) use the Euclidean distance between the projection vector of the initial target face image and that of each candidate training image as the similarity criterion, discard the candidates with low similarity, rebuild the feature space from the remaining candidates, compute the similarity in the same way and discard the low-similarity candidates again, until the number of remaining candidate training images meets the requirement; the result is the training image set.
CN 201110278771 2011-09-19 2011-09-19 Method for reconstructing human facial image super-resolution based on similarity of facial characteristic organs Expired - Fee Related CN102354397B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110278771 CN102354397B (en) 2011-09-19 2011-09-19 Method for reconstructing human facial image super-resolution based on similarity of facial characteristic organs

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110278771 CN102354397B (en) 2011-09-19 2011-09-19 Method for reconstructing human facial image super-resolution based on similarity of facial characteristic organs

Publications (2)

Publication Number Publication Date
CN102354397A CN102354397A (en) 2012-02-15
CN102354397B true CN102354397B (en) 2013-05-15

Family

ID=45577958

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110278771 Expired - Fee Related CN102354397B (en) 2011-09-19 2011-09-19 Method for reconstructing human facial image super-resolution based on similarity of facial characteristic organs

Country Status (1)

Country Link
CN (1) CN102354397B (en)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102629373B (en) * 2012-02-27 2014-05-28 天津大学 Super-resolution image acquisition method based on sparse representation theory
CN102800069A (en) * 2012-05-22 2012-11-28 湖南大学 Image super-resolution method for combining soft decision self-adaptation interpolation and bicubic interpolation
US8675999B1 (en) * 2012-09-28 2014-03-18 Hong Kong Applied Science And Technology Research Institute Co., Ltd. Apparatus, system, and method for multi-patch based super-resolution from an image
CN102902966A (en) * 2012-10-12 2013-01-30 大连理工大学 Super-resolution face recognition method based on deep belief networks
CN103914807B (en) * 2012-12-31 2017-05-03 北京大学 Non-locality image super-resolution method and system for zoom scale compensation
CN103454225B (en) * 2013-07-05 2016-04-06 中南大学 Based on the copper floatation foam image regional area area measurement method of MPCA
US9589178B2 (en) * 2014-09-12 2017-03-07 Htc Corporation Image processing with facial features
CN108133456A (en) * 2016-11-30 2018-06-08 京东方科技集团股份有限公司 Face super-resolution reconstruction method, reconstructing apparatus and computer system
CN107895345B (en) 2017-11-29 2020-05-26 浙江大华技术股份有限公司 Method and device for improving resolution of face image
CN108121957B (en) * 2017-12-19 2021-09-03 麒麟合盛网络技术股份有限公司 Method and device for pushing beauty material
CN110136055B (en) * 2018-02-02 2023-07-14 腾讯科技(深圳)有限公司 Super resolution method and device for image, storage medium and electronic device
CN108776983A (en) * 2018-05-31 2018-11-09 北京市商汤科技开发有限公司 Based on the facial reconstruction method and device, equipment, medium, product for rebuilding network
CN109284729B (en) * 2018-10-08 2020-03-03 北京影谱科技股份有限公司 Method, device and medium for acquiring face recognition model training data based on video
CN111353943B (en) * 2018-12-20 2023-12-26 杭州海康威视数字技术股份有限公司 Face image recovery method and device and readable storage medium
CN109948555B (en) * 2019-03-21 2020-11-06 于建岗 Face super-resolution identification method based on video stream
CN109961451A (en) * 2019-03-22 2019-07-02 西北工业大学 A kind of material grains tissue segmentation methods based on marginal information
CN110188598B (en) * 2019-04-13 2022-07-05 大连理工大学 Real-time hand posture estimation method based on MobileNet-v2
CN110503606B (en) * 2019-08-29 2023-06-20 广州大学 Method for improving face definition
CN110956599A (en) * 2019-11-20 2020-04-03 腾讯科技(深圳)有限公司 Picture processing method and device, storage medium and electronic device
CN110991310B (en) * 2019-11-27 2023-08-22 北京金山云网络技术有限公司 Portrait detection method, device, electronic equipment and computer readable medium
CN112991165B (en) * 2019-12-13 2023-07-14 深圳市中兴微电子技术有限公司 Image processing method and device
CN114549307B (en) * 2022-01-28 2023-05-30 电子科技大学 High-precision point cloud color reconstruction method based on low-resolution image
CN115311477B (en) * 2022-08-09 2024-01-16 北京惠朗时代科技有限公司 Super-resolution reconstruction-based simulated trademark accurate detection method and system
WO2024042970A1 (en) * 2022-08-26 2024-02-29 ソニーグループ株式会社 Information processing device, information processing method, and computer-readable non-transitory storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101299235A (en) * 2008-06-18 2008-11-05 中山大学 Method for reconstructing human face super resolution based on core principle component analysis
CN101710386A (en) * 2009-12-25 2010-05-19 西安交通大学 Super-resolution face recognition method based on relevant characteristic and non-liner mapping

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Feature-Based Super-Resolution for Face Recognition; Zhifei Wang et al; IEEE International Conference on Multimedia and Expo; 2008-06-26; full text *
Image Super-Resolution Via Sparse Representation; Jianchao Yang et al; IEEE Transactions on Image Processing; 2010-11-30; Vol. 19 (No. 11); full text *
Region-Based Super-Resolution Aided Facial Feature Extraction from Low-Resolution Video Sequences; T. Celik et al; IEEE International Conference on Acoustics, Speech and Signal Processing; 2005-03-23; full text *
Face super-resolution algorithm based on multi-scale and multi-direction features; Huang Li et al; Journal of Computer-Aided Design & Computer Graphics; 2004-07-31; Vol. 16 (No. 7); full text *

Also Published As

Publication number Publication date
CN102354397A (en) 2012-02-15

Similar Documents

Publication Publication Date Title
CN102354397B (en) Method for reconstructing human facial image super-resolution based on similarity of facial characteristic organs
Han et al. A deep learning method for bias correction of ECMWF 24–240 h forecasts
WO2020238902A1 (en) Image segmentation method, model training method, apparatuses, device and storage medium
CN110599528A (en) Unsupervised three-dimensional medical image registration method and system based on neural network
CN111798462A (en) Automatic delineation method for nasopharyngeal carcinoma radiotherapy target area based on CT image
Biasutti et al. Lu-net: An efficient network for 3d lidar point cloud semantic segmentation based on end-to-end-learned 3d features and u-net
CN109035172B (en) Non-local mean ultrasonic image denoising method based on deep learning
CN104504394A (en) Dese population estimation method and system based on multi-feature fusion
CN102651124B (en) Image fusion method based on redundant dictionary sparse representation and evaluation index
CN110826389A (en) Gait recognition method based on attention 3D frequency convolution neural network
CN106157249A (en) Based on the embedded single image super-resolution rebuilding algorithm of optical flow method and sparse neighborhood
CN114758288A (en) Power distribution network engineering safety control detection method and device
CN109977968B (en) SAR change detection method based on deep learning classification comparison
CN110648331B (en) Detection method for medical image segmentation, medical image segmentation method and device
CN109522831B (en) Real-time vehicle detection method based on micro-convolution neural network
CN113112416B (en) Semantic-guided face image restoration method
CN104657717A (en) Pedestrian detection method based on layered kernel sparse representation
CN110175529A (en) A kind of three-dimensional face features' independent positioning method based on noise reduction autoencoder network
CN104732546A (en) Non-rigid SAR image registration method based on region similarity and local spatial constraint
Rajeswari et al. Automatic road extraction based on level set, normalized cuts and mean shift methods
CN104599291A (en) Structural similarity and significance analysis based infrared motion target detection method
CN107392211A (en) The well-marked target detection method of the sparse cognition of view-based access control model
CN104732230A (en) Pathology image local-feature extracting method based on cell nucleus statistical information
Wang et al. 3D human pose and shape estimation with dense correspondence from a single depth image
Shit et al. An encoder‐decoder based CNN architecture using end to end dehaze and detection network for proper image visualization and detection

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20130515

Termination date: 20150919

EXPY Termination of patent right or utility model