CN107680037B - Improved face super-resolution reconstruction method based on nearest characteristic line manifold learning


Info

Publication number: CN107680037B
Application number: CN201710817616.1A
Authority: CN (China)
Legal status: Active (granted)
Prior art keywords: resolution, sample, image, low, point
Other languages: Chinese (zh)
Other versions: CN107680037A
Inventors: 渠慎明, 张东生, 苏靖, 王永强, 王青博
Current assignee: Henan University
Original assignee: Henan University
Application filed by Henan University, with priority to CN201710817616.1A; published as CN107680037A, granted as CN107680037B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053: Scaling based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T3/4076: Scaling based on super-resolution, using the original low-resolution images to iteratively correct the high-resolution images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30196: Human being; Person
    • G06T2207/30201: Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an improved face super-resolution reconstruction method based on nearest characteristic line manifold learning. On the basis of the existing face super-resolution reconstruction method based on nearest characteristic line manifold learning, the case in which a projection point falls on the extrapolation of the line connecting two sample points is treated separately: when the sum of the Euclidean distances from the projection point to the two sample points is greater than W times the Euclidean distance between the two sample points, the sample point closer to the projection point is selected from the two sample points to replace the projection point in the point set to be screened. Restricting the projection points in this way keeps them strongly correlated with the sample points, greatly improves the expression capability of the newly obtained sample data for the input low-resolution image block, avoids as far as possible the introduction of detail information that does not exist in the original image, and improves the reconstruction effect of the low-resolution image.

Description

Improved face super-resolution reconstruction method based on nearest characteristic line manifold learning
Technical Field
The invention relates to the technical field of image processing, in particular to an improved face super-resolution reconstruction method based on nearest characteristic line manifold learning.
Background
The 2016 government work report emphasizes: 'innovate the comprehensive social security governance mechanism, support the construction of the social security prevention and control system with informatization, punish illegal and criminal behavior according to law, strike hard at violent terrorist activities, and enhance the public's sense of security.' Among the many security measures in use today, video surveillance and image processing technologies play an increasingly important role in crime prevention. According to statistics, however, up to 60% of surveillance images captured in the daytime and up to 95% of those captured at night are of poor quality, so reconstructing a high-quality, recognizable face image from the original low-quality face image of a suspect has become an urgent need in video-based investigation.
At present, learning-based face super-resolution reconstruction is a main research direction in the field of image processing. Given an observed low-resolution face image, a learning-based method uses paired high-resolution and low-resolution face image training libraries to reconstruct the high-resolution face image most similar to the input; it can reproduce local facial details and thereby improve the accuracy of face recognition. Compared with traditional methods, learning-based face super-resolution reconstruction achieves better reconstruction quality and higher magnification by exploiting the prior information obtained from the training samples.
The basic premise of learning-based face image super-resolution is that sample image blocks of low-resolution face images and the corresponding blocks of high-resolution face images share similar local geometric structures. For this assumption to hold, two preconditions must be satisfied: first, the sample data are densely sampled in the underlying manifold space; second, the samples are not disturbed by noise. Regarding the first precondition, existing face image libraries contain at most about 2000 samples (not counting repeated individuals), and even when combined into one training set and embedded in the high-dimensional face manifold space they still form a sparse sample space. The existing face image library samples therefore cannot satisfy the precondition on which learning-based face image super-resolution rests; moreover, building a face image library is a very time-consuming and complex process, and the construction algorithms consume large amounts of computing resources. It is thus impractical to solve the problem of insufficiently dense sampling of the manifold space simply by adding face image samples to enlarge the library.
In 2014, Jiang et al. of Wuhan University introduced the concept of the nearest characteristic line into the field of image processing and proposed a face super-resolution reconstruction method based on nearest characteristic line manifold learning (application number 201110421817.2), which expands the expression capacity of the sample library by introducing nearest characteristic lines into super-resolution reconstruction. First, the sample images nearest to the query sample point are selected from the low-resolution training sample library. Second, the screened sample images, taken as sample points, are connected pairwise to obtain the corresponding characteristic lines, and the projection of the query sample point onto each characteristic line is computed; this expands the sample data and alleviates the problem that sampling in the manifold space is not dense enough. Then a subset of the projection points nearest to the query sample point is selected, and the linear reconstruction weights between the query sample point and these nearest projection points are solved. Finally, the nearest-neighbor low-resolution projection points are replaced by the corresponding high-resolution projection points, and the target high-resolution image is reconstructed.
Although this method greatly expands the expression capability of the sample data, it lacks the necessary constraint information when selecting the nearest-neighbor projection points and so introduces detail information that does not exist in the original image, leaving the image reconstruction effect unsatisfactory.
Disclosure of Invention
The invention aims to provide an improved face super-resolution reconstruction method based on nearest characteristic line manifold learning which, on the basis of the existing face super-resolution reconstruction method based on nearest characteristic line manifold learning, avoids as far as possible the introduction of detail information that does not exist in the original image and improves the reconstruction effect of the low-resolution image.
The technical scheme adopted by the invention is as follows: an improved face super-resolution reconstruction method based on nearest feature line manifold learning comprises the following steps:
Step 1: input a low-resolution face image, and divide the input low-resolution face image, the low-resolution face sample images in the low-resolution training set and the high-resolution face sample images in the high-resolution training set into mutually overlapping image blocks.
Step 2: for each image block in the input low-resolution face image, take the image block at the corresponding position of each low-resolution face sample image in the low-resolution training set as a sample point to establish a low-resolution face sample block space, and calculate the K nearest projection points in this space; when a projection point falls on the extrapolation of the line connecting two sample points, the projection points that do not accord with reality are identified according to a constraint parameter W and replaced by computed nearest points that do accord with reality.
Step 3: for each image block in the input low-resolution face image, perform linear reconstruction with the K nearest projection points in the low-resolution face sample block space obtained in step 2 to obtain the weight coefficients of the linear reconstruction.
Step 4: for each image block in the input low-resolution face image, take the image block at the corresponding position of each high-resolution face sample image in the high-resolution training set as a sample point to establish a high-resolution face sample block space, and calculate the K sample points in the high-resolution face sample block space that correspond one-to-one to the K nearest projection points obtained in step 2.
Step 5: replace the K nearest projection points in the low-resolution face sample block space obtained in step 2 with the K sample points in the high-resolution face sample block space obtained in step 4, and reconstruct a high-resolution image block by weighting with the weight coefficients obtained in step 3.
Step 6: superpose all the weighted, reconstructed high-resolution image blocks according to their positions, then divide each pixel by the number of times its position was superposed, reconstructing the high-resolution face image.
Further, in step 1, the input low-resolution face image, the high-resolution training set and the low-resolution training set are respectively converted into one-dimensional vectors, giving the low-resolution image x to be reconstructed, the high-resolution image training samples Y = {y^j, 1 ≤ j ≤ N} and the low-resolution image training samples X = {x^j, 1 ≤ j ≤ N}, where N represents the number of training sample patterns in the high-resolution and low-resolution image training samples.
After the low-resolution image x to be reconstructed, each training sample pattern in the high-resolution image training samples Y and each training sample pattern in the low-resolution image training samples X are divided into mutually overlapping image blocks of equal size, the low-resolution image block set to be reconstructed, the high-resolution training image block sets and the low-resolution training image block sets are formed as {x_t | 1 ≤ t ≤ M}, {y_t^j | 1 ≤ t ≤ M, 1 ≤ j ≤ N} and {x_t^j | 1 ≤ t ≤ M, 1 ≤ j ≤ N}, where M denotes the number of image blocks per image division.
In step 2, for each image block in the low-resolution face image, calculating K nearest projection points in the low-resolution face sample block space specifically includes steps 2.1-2.6:
Step 2.1: for the t-th image block x_t in the low-resolution image block set to be reconstructed, extract the t-th image block of every training sample pattern in the high-resolution image training samples and in the low-resolution image training samples, forming the high-resolution training image block set H_t = {y_t^j, 1 ≤ j ≤ N} and the low-resolution training image block set L_t = {x_t^j, 1 ≤ j ≤ N}.
Step 2.2: from the low-resolution training image block set L_t, select the Kpre sample image blocks nearest to the image block x_t in Euclidean distance, forming the screened low-resolution neighbor image block set L_t^Kpre = {x_t^j | j ∈ C_Kpre(t)}, where C_Kpre(t) denotes the neighborhood index set of x_t and |C_Kpre(t)| = Kpre is the number of image blocks in the neighborhood.
Step 2.3: connect every two sample points x_t^{j1} and x_t^{j2} in the screened low-resolution neighbor image block set L_t^Kpre, forming the Kpre(Kpre-1)/2 characteristic lines x_t^{j1}x_t^{j2}, where j1 and j2 are integers with 1 ≤ j1 < j2 ≤ N.
Step 2.4: calculate the projection point x_t^{j1,j2} of the input image block x_t on each characteristic line x_t^{j1}x_t^{j2}:
x_t^{j1,j2} = x_t^{j1} + μ · (x_t^{j2} - x_t^{j1}),
where the position parameter μ = (x_t - x_t^{j1}) · (x_t^{j2} - x_t^{j1}) / ||x_t^{j2} - x_t^{j1}||². The distance between the input image block x_t and the characteristic line x_t^{j1}x_t^{j2} can then be regarded as the distance between x_t and the projection point, i.e. d(x_t, x_t^{j1}x_t^{j2}) = ||x_t - x_t^{j1,j2}||, the Euclidean distance from the input image block x_t to the projection point x_t^{j1,j2}.
Step 2.5: treat the projection points x_t^{j1,j2} case by case according to the actual situation. When the projection point x_t^{j1,j2} does not fall on the extrapolation of the line through the sample points x_t^{j1} and x_t^{j2}, it falls on the segment connecting the two sample points and does not need to be replaced. When the projection point falls on the extrapolation, calculate the Euclidean distances from the projection point to the two sample points, d(x_t^{j1,j2}, x_t^{j1}) and d(x_t^{j1,j2}, x_t^{j2}). If the distance d(x_t^{j1,j2}, x_t^{j1}) to the sample point x_t^{j1} is the smaller one, multiply the Euclidean distance d(x_t^{j1}, x_t^{j2}) between the two sample points by the constraint parameter W: if
d(x_t^{j1,j2}, x_t^{j1}) + d(x_t^{j1,j2}, x_t^{j2}) > W · d(x_t^{j1}, x_t^{j2}),
then let the projection point x_t^{j1,j2} = x_t^{j1}, i.e. put x_t^{j1} into the sample set to be selected in place of the projection point. If instead the distance d(x_t^{j1,j2}, x_t^{j2}) to the sample point x_t^{j2} is the smaller one, the processing is the same with x_t^{j2}.
Step 2.6: according to the distances d(x_t, x_t^{j1}x_t^{j2}) obtained in steps 2.4-2.5, search out from the low-resolution neighbor image block set L_t^Kpre the K nearest-neighbor projection points of image block x_t, i.e. the K projection points on the characteristic lines nearest to x_t, forming the low-resolution nearest-neighbor sample projection point set L_t^K = {x_t^{c,d} | (c,d) ∈ C(t)}, where C(t) is the set of subscript pairs of the K nearest-neighbor sample projection points.
In step 3, the t-th image block x_t in the input low-resolution image block set to be reconstructed is linearly reconstructed with the low-resolution nearest-neighbor sample projection point set L_t^K formed in step 2.6 from the K nearest-neighbor projection points screened out of the low-resolution neighbor image block set L_t^Kpre, obtaining the target reconstruction weight W_t.
In step 4, for the t-th image block x_t in the input low-resolution image block set to be reconstructed, the K projection-point image blocks corresponding one-to-one to the projection points in the low-resolution nearest-neighbor sample projection point set L_t^K are calculated in the high-resolution training image block set H_t, forming the high-resolution nearest-neighbor sample projection point set H_t^K = {y_t^{c,d} | (c,d) ∈ C(t)}, where y_t^{c,d} = y_t^c + μ · (y_t^d - y_t^c) and μ is the value calculated in step 2.4 at j1 = c, j2 = d.
In step 5, for the t-th image block x_t in the input low-resolution image block set to be reconstructed, the high-resolution nearest-neighbor sample projection point set H_t^K obtained in step 4 is used to linearly synthesize the target high-resolution image block y_t with the synthesis coefficients W_t:
y_t = Σ_{(c,d)∈C(t)} w_t^{c,d} · y_t^{c,d},
where the w_t^{c,d} are the elements of W_t.
Further, in step 2.5, the value of the constraint parameter W is 1.25.
The main advantages of the invention are as follows: in the steps of the existing face super-resolution reconstruction method based on nearest characteristic line manifold learning, the case in which the projection point falls on the extrapolation of the line connecting two sample points is treated separately; if the sum of the Euclidean distances from the projection point to the two sample points is greater than W times the Euclidean distance between the two sample points, the sample point closer to the projection point is selected from the two sample points to replace the projection point in the point set to be screened. Restricting the projection points in this way keeps them strongly correlated with the sample points, greatly improves the expression capability of the newly obtained sample data for the input low-resolution image block, avoids as far as possible the introduction of detail information that does not exist in the original image, and improves the reconstruction effect of the low-resolution image.
Further, setting the constraint parameter W to 1.25 achieves a better reconstruction effect when reconstructing the low-resolution image.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of a projection point falling on the extrapolation of the line connecting two sample points in the present invention;
FIG. 3 is a schematic diagram of changes in an objective evaluation index PSNR under different constraints W according to the present invention;
fig. 4 is a schematic diagram of a change situation of the objective evaluation index SSIM under different constraint conditions W in the present invention.
Detailed Description
The technical scheme of the invention can be implemented in software as an automated pipeline. The technical scheme is further explained below with reference to an embodiment and the accompanying drawings. As shown in Fig. 1, the improved face super-resolution reconstruction method based on nearest characteristic line manifold learning specifically comprises the following steps:
Step 1: input a low-resolution face image, and divide the input low-resolution face image, the low-resolution face sample images in the low-resolution training set and the high-resolution face sample images in the high-resolution training set into mutually overlapping image blocks. In this step, the input low-resolution face image, the high-resolution training set and the low-resolution training set are respectively converted into one-dimensional vectors, giving the low-resolution image x to be reconstructed, the high-resolution image training samples Y = {y^j, 1 ≤ j ≤ N} and the low-resolution image training samples X = {x^j, 1 ≤ j ≤ N}, where N represents the number of training sample patterns in the high-resolution and low-resolution image training samples.
After the low-resolution image x to be reconstructed, each training sample pattern in the high-resolution image training samples Y and each training sample pattern in the low-resolution image training samples X are divided into mutually overlapping image blocks of equal size, the low-resolution image block set to be reconstructed, the high-resolution training image block sets and the low-resolution training image block sets are formed as {x_t | 1 ≤ t ≤ M}, {y_t^j | 1 ≤ t ≤ M, 1 ≤ j ≤ N} and {x_t^j | 1 ≤ t ≤ M, 1 ≤ j ≤ N}, where M denotes the number of image blocks per image division.
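As an illustration of this patch decomposition, the following is a minimal Python/NumPy sketch; the helper name `extract_patches` and its exact interface are hypothetical, and the 7 × 7 block size and 4-pixel overlap are the values used later in this embodiment:

```python
import numpy as np

def extract_patches(img, patch=7, overlap=4):
    """Divide a 2-D image into mutually overlapping blocks (step 1) and
    return them as one-dimensional vectors plus their top-left positions.
    Border blocks that do not fit the stride are ignored in this sketch."""
    step = patch - overlap                  # stride between adjacent blocks
    h, w = img.shape
    patches, positions = [], []
    for r in range(0, h - patch + 1, step):
        for c in range(0, w - patch + 1, step):
            patches.append(img[r:r + patch, c:c + patch].reshape(-1))
            positions.append((r, c))
    return np.asarray(patches), positions
```

Applied to the input image and to every training sample pattern, such a routine yields the sets {x_t}, {x_t^j} and {y_t^j} described above.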
The embodiment adopts the CAS-PEAL-R1 face library, which was collected in a dedicated experimental environment and covers 30871 facial images of 1040 individuals under different poses, illuminations and expressions. The 1040 face images with neutral expression and normal illumination are selected from the database; the face region of each individual image is first cropped to 112 × 100 pixels, the nose tip, the two mouth corners and the two eye centers are manually labeled as feature points, and the images are then aligned by affine transformation to obtain the high-resolution training set. The low-resolution training set is obtained by blurring and 4× down-sampling the high-resolution training set; 1000 of the images are used as training samples and 40 as test images.
The embodiment involves five parameters in total: the number Kpre of pre-screened image blocks, the number K of nearest-neighbor sample projection points, the size of the image blocks in the low-resolution image block set to be reconstructed and in the high-resolution and low-resolution training sets, the number of overlapped pixels between adjacent image blocks, and the constraint parameter W for the comparison between sample points and projection points. The image block size is set to 7 × 7 and the number of overlapped pixels between adjacent blocks to 4; to keep the experiments tractable, the remaining parameters are tested separately. W > 1, since only when W > 1 can a projection point on the extrapolation line be retained. 1 ≤ Kpre ≤ 1040, but too small a value gives unsatisfactory results for lack of sample data, while too large a value greatly increases the algorithm complexity and multiplies the experiment difficulty. 3 < K ≤ Kpre(Kpre-1)/2: after the nearest characteristic line processing the total number of projection points is Kpre(Kpre-1)/2, and requiring K greater than 3 obtains enough sample data to make the effect better.
Step 2: for each image block in the input low-resolution face image, take the image block at the corresponding position of each low-resolution face sample image in the low-resolution training set as a sample point to establish a low-resolution face sample block space, and calculate the K nearest projection points in this space; when a projection point falls on the extrapolation of the line connecting two sample points, the projection points that do not accord with reality are identified according to the constraint parameter W and replaced by computed nearest points that do accord with reality.
In this step, for each image block in the low-resolution face image, calculating K nearest projection points in the low-resolution face sample block space specifically includes steps 2.1-2.6:
Step 2.1: for the t-th image block x_t in the low-resolution image block set to be reconstructed, extract the t-th image block of every training sample pattern in the high-resolution image training samples and in the low-resolution image training samples, forming the high-resolution training image block set H_t = {y_t^j, 1 ≤ j ≤ N} and the low-resolution training image block set L_t = {x_t^j, 1 ≤ j ≤ N}. The low-resolution training image block set L_t represents the low-resolution face sample block space, and the high-resolution training image block set H_t represents the high-resolution face sample block space.
Step 2.2: from the low-resolution training image block set L_t, select the Kpre sample image blocks nearest to the image block x_t in Euclidean distance, forming the screened low-resolution neighbor image block set L_t^Kpre = {x_t^j | j ∈ C_Kpre(t)}, where C_Kpre(t) denotes the neighborhood index set of x_t and |C_Kpre(t)| = Kpre is the number of image blocks in the neighborhood.
Step 2.3: connect every two sample points x_t^{j1} and x_t^{j2} in the screened low-resolution neighbor image block set L_t^Kpre, forming the Kpre(Kpre-1)/2 characteristic lines x_t^{j1}x_t^{j2}, where j1 and j2 are integers with 1 ≤ j1 < j2 ≤ N.
Step 2.4: calculate the projection point x_t^{j1,j2} of the input image block x_t on each characteristic line x_t^{j1}x_t^{j2}:
x_t^{j1,j2} = x_t^{j1} + μ · (x_t^{j2} - x_t^{j1}),
where the position parameter μ = (x_t - x_t^{j1}) · (x_t^{j2} - x_t^{j1}) / ||x_t^{j2} - x_t^{j1}||². The distance between the input image block x_t and the characteristic line x_t^{j1}x_t^{j2} can then be regarded as the distance between x_t and the projection point, i.e. d(x_t, x_t^{j1}x_t^{j2}) = ||x_t - x_t^{j1,j2}||, the Euclidean distance from the input image block x_t to the projection point x_t^{j1,j2}.
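A minimal sketch of this projection, assuming NumPy vectors for the blocks; `nfl_project` is a hypothetical helper name, not part of the patent:

```python
import numpy as np

def nfl_project(x, s1, s2):
    """Project the query block x onto the characteristic line through the
    sample points s1 and s2 (step 2.4); return the projection point, the
    position parameter mu and the point-to-line distance."""
    d = s2 - s1
    mu = np.dot(x - s1, d) / np.dot(d, d)   # position parameter
    p = s1 + mu * d                          # projection point
    return p, mu, np.linalg.norm(x - p)      # Euclidean distance d(x, line)
```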
Step 2.5: treat the projection points x_t^{j1,j2} case by case according to the actual situation. When the projection point x_t^{j1,j2} does not fall on the extrapolation of the line through the sample points x_t^{j1} and x_t^{j2}, it falls on the segment connecting the two sample points and does not need to be replaced. When the projection point falls on the extrapolation, calculate the Euclidean distances from the projection point to the two sample points, d(x_t^{j1,j2}, x_t^{j1}) and d(x_t^{j1,j2}, x_t^{j2}). If the distance d(x_t^{j1,j2}, x_t^{j1}) to the sample point x_t^{j1} is the smaller one, multiply the Euclidean distance d(x_t^{j1}, x_t^{j2}) between the two sample points by the constraint parameter W: if
d(x_t^{j1,j2}, x_t^{j1}) + d(x_t^{j1,j2}, x_t^{j2}) > W · d(x_t^{j1}, x_t^{j2}),
then let the projection point x_t^{j1,j2} = x_t^{j1} and put it into the sample set to be selected (the case in which the sum of the two distances is smaller than d(x_t^{j1}, x_t^{j2}) cannot occur and is not considered in the present invention); otherwise, i.e. when the sum does not exceed W · d(x_t^{j1}, x_t^{j2}), the projection point is treated as falling on the segment connecting the two sample points and does not need to be replaced. If instead the distance d(x_t^{j1,j2}, x_t^{j2}) to the sample point x_t^{j2} is the smaller one, the processing is the same with x_t^{j2}.
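The case analysis of step 2.5 can be sketched as follows (a non-authoritative Python rendering; note that μ outside [0, 1] is exactly the extrapolation case, since μ in [0, 1] places the projection on the segment):

```python
import numpy as np

def constrain_projection(p, mu, s1, s2, W=1.25):
    """Step 2.5: if the projection point p lies on the extrapolation of the
    line through s1 and s2 and the sum of its distances to the two sample
    points exceeds W times the distance between them, replace p by the
    nearer sample point; otherwise keep p."""
    if 0.0 <= mu <= 1.0:                     # on the segment: no replacement
        return p
    d1 = np.linalg.norm(p - s1)
    d2 = np.linalg.norm(p - s2)
    d12 = np.linalg.norm(s1 - s2)
    if d1 + d2 > W * d12:                    # does not accord with reality
        return s1 if d1 < d2 else s2         # nearer sample point replaces it
    return p
```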
As shown in Fig. 2, the input query point x_i has a projection point on each of the two characteristic lines shown. x_i is closer to the projection point on the second characteristic line, so x_i and that projection point have more similar characteristics; but that projection point lies too far from the two sample points forming its characteristic line. The face super-resolution algorithm based on nearest characteristic line manifold learning before improvement would preferentially select this projection point, which does not accord with reality. Therefore, for the characteristic line formed by two sample points, if the projection point of the input query point x_i falls on the extrapolation of the line through the two sample points and the sum of its Euclidean distances to them is greater than W times the Euclidean distance between the two sample points, the sample point closer to the projection point is selected from the two sample points to replace the projection point in the point set to be screened. This restriction on the projection points makes them more strongly correlated with the sample points and greatly improves the expression capability of the newly obtained sample data for the input low-resolution image block.
To better determine the influence of different values of W on the reconstruction results of the improved face super-resolution algorithm based on nearest characteristic line manifold learning, the PSNR and SSIM values of the reconstructed faces are measured under different W to support the analysis of algorithm performance. PSNR, the peak signal-to-noise ratio, is an objective standard for evaluating images; the larger the PSNR value, the less the distortion. SSIM, the structural similarity, is an index measuring the similarity of two images; it ranges from -1 to 1 and equals 1 when the two images are identical.
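For reference, PSNR can be computed as below (a minimal sketch; the `psnr` helper is hypothetical, and SSIM is assumed to come from an off-the-shelf routine such as scikit-image's `structural_similarity`):

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB; larger means less distortion."""
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)  # undefined (infinite) if mse == 0

# SSIM, e.g.: from skimage.metrics import structural_similarity
#             score = structural_similarity(ref, rec, data_range=255)
```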
Referring to Figs. 3 and 4, as the constraint parameter W increases, the objective evaluation indexes PSNR and SSIM first rise and then fall, and with further increase of W they gradually approach the values of the face super-resolution algorithm based on nearest characteristic line manifold learning before improvement. The PSNR of the reconstructed image is best at W = 1.25 and the SSIM is best at W = 1.7; because the variation of the SSIM value is small, the constraint parameter W is set to 1.25 in this embodiment.
The image reconstruction effect changes with the constraint parameter W because the selective strength of the constraint on the projection points weakens as W grows. When W is very small, projection points that lie outside the sample points but only a very small Euclidean distance from the nearest sample point are also replaced, which temporarily lowers the reconstruction effect; as W increases, the constraint fails to replace in time the projection points that are far from the sample points and impair image reconstruction, so the reconstruction effect again becomes unsatisfactory. With the other parameters held fixed, the number of pre-selected points and the number of nearest-neighbor projection points were tested separately; the best experimental effect in this embodiment is obtained at Kpre = 60 and K = 30.
Step 2.6: according to the distances d(x_t, x_t^{j1}x_t^{j2}) obtained in steps 2.4-2.5, search out from the low-resolution neighbor image block set L_t^Kpre the K nearest-neighbor projection points of image block x_t, i.e. the K projection points on the characteristic lines nearest to x_t, forming the low-resolution nearest-neighbor sample projection point set L_t^K = {x_t^{c,d} | (c,d) ∈ C(t)}, where C(t) is the set of subscript pairs of the K nearest-neighbor sample projection points.
Step 3: for each image block in the input low-resolution face image, perform linear reconstruction with the K nearest projection points in the low-resolution face sample block space obtained in step 2 to obtain the weight coefficients of the linear reconstruction.
Step 3 is specifically: the t-th image block x_t in the input low-resolution image block set to be reconstructed is linearly reconstructed with the low-resolution nearest-neighbor sample projection point set L_t^K formed in step 2.6 from the K nearest-neighbor projection points screened out of the low-resolution neighbor image block set L_t^Kpre, obtaining the target reconstruction weight W_t; the calculation of the target reconstruction weight W_t belongs to the prior art and is not described in detail here.
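The patent defers the computation of W_t to the prior art. One standard choice in neighbor-embedding super-resolution is the least-squares weight vector under a sum-to-one constraint, sketched below under that assumption (the function name and the regularization are illustrative, not prescribed by the patent):

```python
import numpy as np

def reconstruction_weights(x, P, reg=1e-6):
    """Solve min ||x - w @ P||^2 subject to sum(w) = 1, where the rows of P
    are the K nearest projection points (one conventional way to obtain the
    target reconstruction weight W_t)."""
    diff = P - x                              # neighbors shifted to the query
    G = diff @ diff.T                         # local Gram matrix, shape (K, K)
    G += reg * np.trace(G) * np.eye(len(P))   # regularize in case G is singular
    w = np.linalg.solve(G, np.ones(len(P)))
    return w / w.sum()                        # enforce the sum-to-one constraint
```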
Step 4: for each image block in the input low-resolution face image, take the image block at the corresponding position of each high-resolution face sample image in the high-resolution training set as a sample point to establish a high-resolution face sample block space, and calculate the K sample points in the high-resolution face sample block space that correspond one-to-one to the K nearest projection points obtained in step 2.
Step 4 is specifically: for the t-th image block x_t in the input low-resolution image block set to be reconstructed, the K projection-point image blocks corresponding one-to-one to the projection points in the low-resolution nearest-neighbor sample projection point set L_t^K are calculated in the high-resolution training image block set H_t, forming the high-resolution nearest-neighbor sample projection point set H_t^K = {y_t^{c,d} | (c,d) ∈ C(t)}, where y_t^{c,d} = y_t^c + μ · (y_t^d - y_t^c) and μ is the value calculated in step 2.4 at j1 = c, j2 = d.
Step 5: replace the K nearest projection points in the low-resolution face sample block space obtained in step 2 with the K sample points in the high-resolution face sample block space obtained in step 4, and reconstruct a high-resolution image block by weighting with the weight coefficients obtained in step 3.
Step 5 is specifically: for the t-th image block x_t in the input low-resolution image block set to be reconstructed, the high-resolution nearest-neighbor sample projection point set H_t^K obtained in step 4 is used to linearly synthesize the target high-resolution image block y_t with the synthesis coefficients W_t:
y_t = Σ_{(c,d)∈C(t)} w_t^{c,d} · y_t^{c,d}.
In the actual calculation the synthesis coefficient W_t is a matrix or set, of which the w_t^{c,d} are the elements.
Step 6: superpose all the weighted, reconstructed high-resolution image blocks according to their positions, then divide each pixel by the number of times its position was superposed, reconstructing the high-resolution face image.
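Steps 5 and 6 can be sketched together as follows (Python/NumPy with hypothetical helper names; `weights @ H_K` performs the linear synthesis of one high-resolution block, and `assemble` performs the position-wise superposition and averaging):

```python
import numpy as np

def synthesize_block(weights, H_K):
    """Step 5: weighted linear synthesis y_t = sum_k w_k * y_k, where the
    rows of H_K are the K high-resolution projection-point blocks."""
    return weights @ H_K

def assemble(blocks, positions, out_shape, patch=7):
    """Step 6: superpose the reconstructed high-resolution blocks at their
    positions, then divide each pixel by how often it was covered."""
    out = np.zeros(out_shape)
    cnt = np.zeros(out_shape)
    for b, (r, c) in zip(blocks, positions):
        out[r:r + patch, c:c + patch] += b.reshape(patch, patch)
        cnt[r:r + patch, c:c + patch] += 1.0
    return out / np.maximum(cnt, 1.0)        # avoid dividing uncovered pixels by zero
```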
In summary, on the basis of the existing face super-resolution reconstruction method based on nearest characteristic line manifold learning, the invention treats separately the case in which the projection point falls on the extrapolation of the line connecting two sample points: when the sum of the Euclidean distances from the projection point to the two sample points is greater than W times the Euclidean distance between the two sample points, the sample point closer to the projection point is selected from the two sample points to replace the projection point in the point set to be screened. Restricting the projection points in this way keeps them strongly correlated with the sample points, greatly improves the expression capability of the newly obtained sample data for the input low-resolution image block, avoids as far as possible the introduction of detail information that does not exist in the original image, and improves the reconstruction effect of the low-resolution image. In addition, this work was funded by a National Natural Science Foundation of China project (No. U1404618) and a Science and Technology Development Plan project of Henan Province (No. 172102210186), and has good research value in the technical field of image processing.
Finally, it should be noted that the above embodiment is only intended to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiment, those skilled in the art will understand that the technical solutions described in the foregoing embodiment may still be modified, or some or all of the technical features may be equivalently replaced, and such modifications or substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (3)

1. An improved face super-resolution reconstruction method based on nearest feature line manifold learning is characterized by comprising the following steps:
step 1, inputting a low-resolution face image, and dividing the input low-resolution face image, a low-resolution face sample image in a low-resolution training set and a high-resolution face sample image in a high-resolution training set into mutually overlapped image blocks;
step 2, for each image block in the input low-resolution face image, taking the image block at the corresponding position of each low-resolution face sample image in the low-resolution training set as a sample point, establishing a low-resolution face sample block space, and calculating K nearest projection points on the low-resolution face sample block space, wherein for the condition that the projection points fall on an extrapolation line of a connecting line between the sample points, according to a constraint parameter W, the projection points which do not conform to the reality are found out, and the nearest points which conform to the reality are calculated as a substitute;
step 3, for each image block in the input low-resolution face image, performing linear reconstruction by using K nearest projection points in the low-resolution face sample block space obtained in the step 2 to obtain a weight coefficient of the linear reconstruction;
step 4, for each image block in the input low-resolution face image, taking the image block at the corresponding position of each high-resolution face sample image in the high-resolution training set as a sample point, establishing a high-resolution face sample block space, and calculating K sample points which respectively correspond to K nearest projection points on the low-resolution face sample block space obtained in the step 2 in the high-resolution face sample block space;
step 5, replacing K nearest projection points on the low-resolution face sample block space obtained in the step 2 with K sample points on the high-resolution face sample block space obtained in the step 4, and weighting and reconstructing a high-resolution image block by using the weighting coefficient obtained in the step 3;
and step 6, superposing all weighted and reconstructed high-resolution image blocks according to positions, and then dividing by the number of times the position of each pixel was superposed to reconstruct a high-resolution face image.
2. The improved face super-resolution reconstruction method based on nearest eigen-line manifold learning of claim 1, characterized in that:
in step 1, the input low-resolution face image, the input high-resolution training set and the input low-resolution training set are respectively converted into one-dimensional vectors, giving the low-resolution image x to be reconstructed, the high-resolution image training samples Y = {y^j, 1 ≤ j ≤ N} and the low-resolution image training samples X = {x^j, 1 ≤ j ≤ N}, wherein N represents the number of training sample patterns in the high-resolution image training samples and the low-resolution image training samples;
after the low-resolution image x to be reconstructed, each training sample pattern in the high-resolution image training samples Y and each training sample pattern in the low-resolution image training samples X are divided into mutually overlapping image blocks of equal size, the low-resolution image block set to be reconstructed, the high-resolution training image block sets and the low-resolution training image block sets are formed as {x_t | 1 ≤ t ≤ M}, {y_t^j | 1 ≤ t ≤ M, 1 ≤ j ≤ N} and {x_t^j | 1 ≤ t ≤ M, 1 ≤ j ≤ N}, wherein M represents the number of image blocks of each image division;
in step 2, for each image block in the low-resolution face image, calculating K nearest projection points in the low-resolution face sample block space specifically includes steps 2.1-2.6:
step 2.1, for the t-th image block x_t in the low-resolution image block set to be reconstructed, extracting the t-th image block of every training sample pattern in the high-resolution image training samples and in the low-resolution image training samples, forming the high-resolution training image block set H_t = {y_t^j, 1 ≤ j ≤ N} and the low-resolution training image block set L_t = {x_t^j, 1 ≤ j ≤ N};
step 2.2, from the low-resolution training image block set L_t, selecting the Kpre sample image blocks nearest to the image block x_t in Euclidean distance, forming the screened low-resolution neighbor image block set L_t^Kpre = {x_t^j | j ∈ C_Kpre(t)}, wherein C_Kpre(t) denotes the neighborhood index set of x_t and |C_Kpre(t)| = Kpre is the number of image blocks in the neighborhood;
step 2.3, the screened low-resolution neighbor image block set LKpre tAt any two sample points
Figure FDA0001405469760000028
And
Figure FDA0001405469760000029
are connected to form
Figure FDA00014054697600000210
Strip characteristic line
Figure FDA00014054697600000211
j1And j2Are all integers and j is not less than 11≤j2≤N;
step 2.4, calculating the projection point x_t^{j1,j2} of the input image block x_t on each characteristic line x_t^{j1}x_t^{j2}: x_t^{j1,j2} = x_t^{j1} + μ · (x_t^{j2} - x_t^{j1}), wherein the position parameter μ = (x_t - x_t^{j1}) · (x_t^{j2} - x_t^{j1}) / ||x_t^{j2} - x_t^{j1}||²; the distance between the input image block x_t and the characteristic line x_t^{j1}x_t^{j2} can then be regarded as the distance between x_t and the projection point, i.e. d(x_t, x_t^{j1}x_t^{j2}) = ||x_t - x_t^{j1,j2}||, the Euclidean distance from the input image block x_t to the projection point x_t^{j1,j2};
step 2.5, projecting points are aligned according to actual conditions
Figure FDA00014054697600000222
Carrying out distinguishing calculation; when projected point
Figure FDA00014054697600000223
Does not fall on the sample point
Figure FDA00014054697600000224
And
Figure FDA0001405469760000031
when extrapolated, illustrate the projected point
Figure FDA0001405469760000032
Fall on the sample point
Figure FDA0001405469760000033
And
Figure FDA0001405469760000034
between the line segments of the connecting line, the projection points are not required to be replaced; when projected point
Figure FDA0001405469760000035
Fall on the sample point
Figure FDA0001405469760000036
And
Figure FDA0001405469760000037
when extrapolated, computing the projected points
Figure FDA0001405469760000038
To the sampleDot
Figure FDA0001405469760000039
And
Figure FDA00014054697600000310
euclidean distance of (a):
Figure FDA00014054697600000311
and
Figure FDA00014054697600000312
if the projected point
Figure FDA00014054697600000313
To the sample point
Figure FDA00014054697600000314
Euclidean distance of
Figure FDA00014054697600000315
Smaller, two sample points
Figure FDA00014054697600000316
And
Figure FDA00014054697600000317
euclidean distance of
Figure FDA00014054697600000318
Multiplied by a constraint parameter W, if
Figure FDA00014054697600000319
Then let the projection point be
Figure FDA00014054697600000320
Order to
Figure FDA00014054697600000321
Is composed of
Figure FDA00014054697600000322
Putting the sample set to be selected into the sample set, if the projection point
Figure FDA00014054697600000323
To the sample point
Figure FDA00014054697600000324
Euclidean distance of
Figure FDA00014054697600000325
Smaller, the processing mode is the same as the mode;
step 2.6, obtained according to step 2.5
Figure FDA00014054697600000326
From a set L of low resolution neighboring image blocksKpre tIn search image block xtK nearest neighbor projection points, i.e. equivalent to finding image block xtAnd DRK projection points of the characteristic lines with the nearest distance form a low-resolution nearest neighbor sample projection point set
Figure FDA00014054697600000327
C (t) is a set of K nearest neighbor sample projection point subscripts;
in step 3, the t-th image block x in the input low-resolution image block set to be reconstructedtFrom the low resolution neighbor image block set L, using step 2.6Kpre tLow-resolution nearest neighbor sample projection point set formed by projection points of K nearest neighbor samples screened in the process
Figure FDA00014054697600000328
Performing linear reconstruction to obtain a target reconstruction weight Wt
in step 4, for the t-th image block x_t in the input low-resolution image block set to be reconstructed, the K projection-point image blocks corresponding one-to-one to the projection points in the low-resolution nearest-neighbor sample projection point set L_t^K are calculated in the high-resolution training image block set H_t, forming the high-resolution nearest-neighbor sample projection point set H_t^K = {y_t^{c,d} | (c,d) ∈ C(t)}, wherein y_t^{c,d} = y_t^c + μ · (y_t^d - y_t^c) and μ is the value calculated in step 2.4 at j1 = c, j2 = d;
in step 5, for the t-th image block x_t in the input low-resolution image block set to be reconstructed, the high-resolution nearest-neighbor sample projection point set H_t^K obtained in step 4 is used to linearly synthesize the target high-resolution image block y_t with the synthesis coefficients W_t: y_t = Σ_{(c,d)∈C(t)} w_t^{c,d} · y_t^{c,d}.
3. The improved face super-resolution reconstruction method based on nearest eigen-line manifold learning of claim 2, characterized in that: in step 2.5, the value of the constraint parameter W is 1.25.
Application CN201710817616.1A (priority date 2017-09-12, filing date 2017-09-12): Improved face super-resolution reconstruction method based on nearest characteristic line manifold learning. Status: Active. Granted publication: CN107680037B (en).

Priority Applications (1)

Application Number: CN201710817616.1A; Title: Improved face super-resolution reconstruction method based on nearest characteristic line manifold learning; granted as CN107680037B (en)

Publications (2)

Publication Number Publication Date
CN107680037A CN107680037A (en) 2018-02-09
CN107680037B true CN107680037B (en) 2020-09-29

Family

ID=61135193




Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004075093A2 (en) * 2003-02-14 2004-09-02 University Of Rochester Music feature extraction using wavelet coefficient histograms
CN102402784A (en) * 2011-12-16 2012-04-04 武汉大学 Human face image super-resolution method based on nearest feature line manifold learning
CN103336960A (en) * 2013-07-26 2013-10-02 电子科技大学 Human face identification method based on manifold learning
CN103824272A (en) * 2014-03-03 2014-05-28 武汉大学 Face super-resolution reconstruction method based on K-neighboring re-recognition
CN104112147A (en) * 2014-07-25 2014-10-22 哈尔滨工业大学深圳研究生院 Nearest feature line based facial feature extracting method and device
CN105488776A (en) * 2014-10-10 2016-04-13 北京大学 Super-resolution image reconstruction method and apparatus
CN104933692A (en) * 2015-07-02 2015-09-23 中国地质大学(武汉) Reconstruction method and apparatus for the super-resolution of a face
CN105023240A (en) * 2015-07-08 2015-11-04 北京大学深圳研究生院 Dictionary-type image super-resolution system and method based on iteration projection reconstruction
CN107133921A (en) * 2016-02-26 2017-09-05 北京大学 The image super-resolution rebuilding method and system being embedded in based on multi-level neighborhood

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
An improved classifier based on nearest feature line; Youfu Du, et al.; 2012 International Conference on Information Security and…; 2013-02-07; pp. 321-324 *
Research on feature extraction and classification algorithms for face images; Xu Zheng; China Master's Theses Full-text Database, Information Science and Technology; 2012-04-15; I138-1931 *
Application of an improved K-nearest feature line algorithm in text classification; Tan Guanqun, Ding Huafu; Journal of Harbin University of Science and Technology; 2008-12-31; Vol. 13, No. 6; pp. 19-22 *

Also Published As

Publication number Publication date
CN107680037A (en) 2018-02-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant